Nearly a decade ago, Google showed off a feature called Now on Tap in Android Marshmallow: tap and hold the home button, and Google will surface helpful contextual information related to what's on the screen. Talking about a movie with a friend over text? Now on Tap could get you details about the title without having to leave the messaging app. Looking at a restaurant in Yelp? The phone could surface OpenTable recommendations with just a tap.
I was fresh out of college, and these improvements felt exciting and magical; the feature's ability to understand what was on the screen and predict the actions you might want to take felt future-facing. It was one of my favorite Android features. It slowly morphed into Google Assistant, which was great in its own right, but not quite the same.
Today, at Google's I/O developer conference in Mountain View, California, the new features Google is touting in its Android operating system feel like the Now on Tap of old, allowing you to harness contextual information around you to make using your phone a bit easier. Except this time, these features are powered by a decade's worth of advancements in large language models.
"I think what's exciting is we now have the technology to build really exciting assistants," Dave Burke, vice president of engineering on Android, tells me over a Google Meet video call. "We need to be able to have a computer system that understands what it sees, and I don't think we had the technology back then to do it well. Now we do."
I got a chance to speak with Burke and Sameer Samat, president of the Android ecosystem at Google, about what's new in the world of Android, the company's new AI assistant Gemini, and what it all holds for the future of the OS. Samat referred to these updates as a "once-in-a-generational opportunity to reimagine what the phone can do, and to rethink all of Android."
Circle to Search … Your Homework
It starts with Circle to Search, which is Google's new way of approaching Search on mobile. Much like the experience of Now on Tap, Circle to Search, which the company debuted a few months ago, is more interactive than just typing into a search box. (You literally circle what you want to search on the screen.) Burke says, "It's a very visceral, fun, and modern way to search … It skews younger as well because it's so fun to use."
Samat claims Google has received positive feedback from consumers, but Circle to Search's latest feature hails specifically from student feedback. Circle to Search can now be used on physics and math problems: when a user circles one, Google will spit out step-by-step instructions for completing it without the user leaving the syllabus app.
Samat made it clear Gemini wasn't just providing answers but was showing students how to solve the problems. Later this year, Circle to Search will be able to tackle more complex problems involving diagrams and graphs. This is all powered by Google's LearnLM models, which are fine-tuned for education.
Gemini Gets More Contextual on Android
Gemini is Google's AI assistant that is in many ways eclipsing Google Assistant. Really: when you fire up Google Assistant on most Android phones these days, there's an option to replace it with Gemini instead. So naturally, I asked Burke and Samat whether this meant Assistant was heading to the Google Graveyard.
"The way to look at it is that Gemini is an opt-in experience on the phone," Samat says. "I think obviously, over time, Gemini is becoming more advanced and is evolving. We don't have anything to announce today, but there is a choice for consumers if they want to opt into this new AI-powered assistant. They can try it out, and we are seeing that people are doing that, and we're getting a lot of great feedback."