Rabbit’s AI Assistant Is Here. And Soon a Camera Wearable Will Be Too



Lyu demoed the R1’s Teach Mode as well, which lets you point the R1’s camera at a computer screen while you show it how to complete a task. Once it has learned the task, you can ask the R1 to perform it for you and save yourself the time and hassle. That feature isn’t available yet, though, and when it arrives, Rabbit says it will roll out first to a small group of beta testers.

But the goal for the R1 is more or less to replace your apps. Instead of hunting for an icon, just push the button and ask the R1 to handle something.

At CES, it seemed as though you’d be able to access multiple third-party apps through the R1 at launch—but, right now, there are just four services: Uber, DoorDash, Midjourney, and Spotify.

You connect to these via the Rabbit Hole web portal—which means yes, you are logging into these services through what seems to be a virtual machine hosted by Rabbit, handing over your credentials—and then you can ask the R1 to call an Uber, order McDonald’s, generate an image, or play a song. It’s using these services’ application programming interfaces (APIs) to tackle these tasks—and the R1 has been pre-trained to use them.
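Rabbit hasn’t detailed exactly how its agent drives those APIs, but to get a rough sense of what “pre-trained to use them” means in practice, here’s a minimal sketch of the kind of call involved, using Spotify’s public Web API. The token handling and track choice below are illustrative assumptions, not anything Rabbit has published.

```python
import requests

# Illustrative only: Spotify's public Web API, not Rabbit's actual integration.
# Assumes you already hold an OAuth access token with playback permissions.
ACCESS_TOKEN = "user-oauth-token"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def play_song(query: str) -> None:
    # Search for a track matching the spoken request...
    search = requests.get(
        "https://api.spotify.com/v1/search",
        headers=HEADERS,
        params={"q": query, "type": "track", "limit": 1},
    )
    track_uri = search.json()["tracks"]["items"][0]["uri"]

    # ...then start playback on the user's active device.
    requests.put(
        "https://api.spotify.com/v1/me/player/play",
        headers=HEADERS,
        json={"uris": [track_uri]},
    )

play_song("Here Comes the Sun")
```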

Lyu naturally promises there’s plenty more on the way. This summer, we’re told to expect an alarm clock, calendar, contacts, GPS, memory recall, travel planning, and other features. Amazon Music and Apple Music integrations are currently in development, and more third-party services, including Airbnb, Lyft, and OpenTable, should follow later.

You might be wondering, “Hang on a minute, that just sounds like a phone,” and you … wouldn’t be off the mark.

As we’ve seen with the clunky and limited Humane Ai Pin, a smartphone can perform all of these tasks better, faster, and with richer interactions. This is where you have to start looking carefully at Rabbit’s overall vision.

The idea is to speak and then compute. No need for apps—the computer will just understand. We’re a long way from that, but at the launch event, Rabbit teased a wearable device that would understand what you’re pointing at.

Lyu suggested this wearable could understand you pointing at a Nest thermostat and asking to lower the temperature, without having to say the words “Nest” or “thermostat.” The image of the supposedly all-seeing wearable was blurred, though, so we don’t have much information to go on.

Lyu mentioned generative user interfaces, where you’d get an interface of your own choosing—buttons on a screen placed where you want them, at whatever size suits you—and then claimed that Rabbit is working on an AI-native desktop operating system called Rabbit OS. Again, we don’t have many details, but my mind immediately went to Theo in Her installing OS1 on his PC.

An operating system that puts a personal voice assistant front and center. What could go wrong?
