
Teach Mode Is Here: rabbit Gives Consumers Power to Create Custom AI Agents

Recent developments with LAM and teach mode demonstrate rabbit’s ability to deliver consumer-facing AI agent technology at scale

Today, AI startup rabbit inc. announced that it is opening access to a beta version of its teach mode agent system to all r1 users. With teach mode, a next-generation developer tool, users can create their own AI agents and ask them to automate actions across different digital interfaces, starting with websites – regardless of the user's coding or software development skills.

Teach mode: AI agents that learn from you

Teach mode is the latest milestone in rabbit’s pioneering work on LAM, a consumer-facing general agent system that can autonomously navigate websites, check information, and operate software user interfaces across a variety of operating systems.

Teach mode learns to perform a task by observing how the user performs it. After teaching the agent a lesson, the user can later ask the agent to recall it and carry out the task on their behalf. The agent can also handle subtle variations of a lesson, swapping certain details and automatically “filling in the blanks” to perform similar but slightly different tasks. Because this type of AI agent learns from user demonstrations, it builds a structured and rigorous understanding of each task, and it becomes more robust as it accumulates the lessons it has been taught.
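For illustration only, the sketch below shows one way a demonstration-based lesson could be represented and replayed with swapped-in details. rabbit has not published teach mode's internal format; every name here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    action: str                   # e.g. "open_url", "type", "click"
    target: str                   # the UI element or URL observed while teaching
    value: Optional[str] = None   # text entered; may contain "{placeholders}"

@dataclass
class Lesson:
    name: str
    steps: list = field(default_factory=list)

    def replay(self, **details):
        """Return the taught steps with any placeholders filled in with new details."""
        resolved = []
        for step in self.steps:
            value = step.value.format(**details) if step.value else None
            resolved.append(Step(step.action, step.target, value))
        return resolved

# Teach once...
lesson = Lesson("order_coffee", [
    Step("open_url", "https://example-cafe.test"),
    Step("type", "search_box", "{drink}"),
    Step("click", "order_button"),
])

# ...then ask the agent to recall the lesson with a slightly different detail.
for step in lesson.replay(drink="oat milk latte"):
    print(step)
```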

As of today, all r1 users have been granted full access to teach mode beta, with the ability to both teach and replay lessons. At this stage, teach mode is still experimental. Output can be unpredictable at times, and the teaching function may require trial and error to achieve the desired results. rabbit plans to collect feedback from users to rapidly improve both teaching and replaying functionality. The more users teach and replay lessons, the faster the teach mode experience will improve. Users can begin experimenting with teach mode in the rabbithole web portal.

AI-native operating systems and the future of apps

rabbit continues to work on building an AI-native operating system as an inevitable replacement for today's aging app-based ecosystem. As online activities have gradually taken center stage in people’s daily lives, users have been forced to navigate an ever-growing number of application interfaces in web browsers, on their mobile devices, and on their desktop computers, often wading through unnecessary layers of complexity to accomplish otherwise straightforward tasks. These applications and interfaces are designed only to present options to users, not to understand their needs.

With LAM, in contrast, rabbit aims to simplify human-computer interaction for people managing hundreds of apps and interfaces by letting users simply state their intentions to an agent that operates those interfaces on their behalf. Instead of retrofitting AI into legacy operating systems, rabbit's cross-platform approach, LAM, goes "over the top" of the existing software stack, essentially creating the next generation of AI-native operating systems. In this way, teach mode aims to do to apps what the graphical user interface did to the command line terminal – it makes apps invisible and irrelevant to users by providing a more convenient interaction layer. Jesse Lyu, Founder and CEO of rabbit, said, “All the best car manufacturers compete over their engines, but when electric cars came out, they didn't even need an engine to run. We shouldn’t carry the burden of previous operating systems into the current systems.

“A developer ecosystem is crucial to the success of an operating system, and teach mode is that missing link, giving people the power to create their own custom agents.”

A tight feedback loop with early adopters fuels rapid improvements

rabbit first announced the concept of teach mode alongside its first product, rabbit r1, at CES 2024. In September, rabbit launched a closed alpha testing program, resulting in more than 400 lessons created by a group of 20 testers. The success of the alpha program allowed rabbit to expedite the release of teach mode beta to all r1 users well in advance of the end-of-year target that rabbit publicly announced in early 2024.

“A major challenge with AI products is that companies need to directly work with customers to learn their behavior and create the experience from the ground up because AI hardware is new and there are no predecessors,” said Lyu. “We are fortunate to have one of the most engaged communities for emerging technologies. With their support, we are one of the first and only companies in the world to deliver a useful general agent to consumers at scale.”

Within two months of alpha testing, the team made dozens of performance improvements to teach mode and added new features that are available in the beta release. Recording logs, for example, give users better visibility into their interactions with the agent they are teaching. Other features include read mode, which improves the experience of triggering teach mode replays via r1 by letting users specify more precisely which parts of the results interest them, and annotations, which let users add sophisticated LLM-based “helpers” to filter and modify each step of a lesson. Alpha testing also resulted in support for more complex and dynamic websites.
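As a purely hypothetical sketch (neither the function names nor the API below come from rabbit), an annotation can be thought of as a small LLM-powered post-processing helper attached to a lesson step:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for whatever language model the helper would call;
    # assumed for illustration, not part of rabbit's published tooling.
    return f"[model output for: {prompt[:40]}...]"

def annotate_step(step_output: str, instruction: str) -> str:
    """Apply an LLM 'helper' to one step's raw output, e.g. keeping only what the user asked for."""
    prompt = f"{instruction}\n\nStep output:\n{step_output}"
    return call_llm(prompt)

# Example: filter a step's scraped page text down to the details that matter.
print(annotate_step("Latte $4.50\nMocha $5.00\nStore hours: 7am-7pm",
                    "Extract only the drink prices."))
```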

r1: same device, dramatically improved experience

Since rabbit began shipping its first product, r1, the company has remained focused on rapid iteration and substantive product updates, making r1 the fastest-improving AI device on the market. In only six months, rabbit has issued more than 20 over-the-air software updates that have brought more fun, creative, and useful features to the r1 experience. These include an advanced default search mode, originally called “beta rabbit,” which unites the best LLMs and allows for complex questions and more thoughtful answers with a single press of the push-to-talk button on r1, as well as magic camera, which creates AI-enhanced images in multiple vision styles from photos taken on r1. rabbit recently launched a new generative UI feature that lets users change the look of the r1 user interface to any style they want. rabbit also launched LAM playground, a vision language model (VLM)-driven agent, for all r1 users on October 1. With the latest developments in LAM and teach mode, rabbit continues to make tangible progress on the core agent technology that will drive the company's future innovation.

About rabbit

rabbit inc. is an AI startup developing custom, AI-native hardware and software, letting customers access the latest AI tools through intuitive, natural-language inputs. rabbit's operating system, rabbit OS, is capable of understanding complex user intentions, operating user interfaces, and performing actions on behalf of the user.

The company is headquartered in Santa Monica, California, and was founded by a group of researchers, engineers, and repeat entrepreneurs with extensive experience in shipping AI hardware products and operating high-performance computing (HPC) clusters to train large AI models. A two-time Y Combinator alumnus, rabbit’s Founder and CEO, Jesse Lyu, previously founded Raven Tech, a startup that pioneered conversational AI operating systems. rabbit has raised over $59 million in funding to date from investors including Khosla Ventures, Sound Ventures, and Synergis Capital.

