Posted by Matthew McCullough, VP of Product Management, Android Developer

We’re in an important moment where AI changes everything, from how we work to the expectations users have for your apps, and our goal on Android is to turn this AI evolution into opportunities for you and your users. Today, in our Fall episode of The Android Show, we unpacked a wave of new updates aimed at delivering the highest return on investment when you build for Android. From new agentic experiences for Gemini in Android Studio to a brand-new on-device AI API to the first Android XR device, there’s so much to cover – let’s dive in!
Build your own custom Gen AI features with the new Prompt API
On Android, we offer AI models on-device or in the cloud. For flagship Android devices, Gemini Nano lets you build efficient on-device experiences where users’ data never leaves their device. At I/O this May, we launched our on-device GenAI APIs built on Gemini Nano, making common tasks like summarization, proofreading, and image description easier with simple APIs. Today, we’re excited to give you full flexibility to shape the output of the Gemini Nano model by passing in any prompt you can imagine with the new Prompt API, now in Alpha. Kakao used the Prompt API to transform their parcel delivery service: instead of a slow, manual process where users copied and pasted details into a form, users now send a simple message requesting a delivery, and the API automatically extracts all the necessary information. This single feature reduced order completion time by 24% and boosted new-user conversion by an incredible 45%.
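At its core, Kakao’s flow is prompting a model to pull structured fields out of a free-form message. Since the Prompt API itself is still in Alpha, the sketch below shows only the prompt-building and reply-parsing halves in plain Kotlin; `DeliveryRequest`, the field names, and the "field: value" reply format are hypothetical illustrations, not Kakao's implementation or the Prompt API surface.

```kotlin
// Hypothetical shape for an extracted parcel-delivery order.
data class DeliveryRequest(val recipient: String?, val address: String?)

// Build an extraction prompt from a free-form user message. The
// "field: value" reply convention is illustrative, not an API requirement.
fun buildDeliveryPrompt(message: String): String = """
    Extract the delivery details from the message below.
    Reply with exactly two lines:
    recipient: <name>
    address: <street address>

    Message: $message
""".trimIndent()

// Parse the "field: value" reply the model is asked to produce.
fun parseDeliveryReply(reply: String): DeliveryRequest {
    val fields = reply.lines().mapNotNull { line ->
        val (key, value) = line.split(":", limit = 2).takeIf { it.size == 2 }
            ?: return@mapNotNull null
        key.trim() to value.trim()
    }.toMap()
    return DeliveryRequest(fields["recipient"], fields["address"])
}

// In the app itself, the prompt would be sent to Gemini Nano on-device via
// the Prompt API, and the reply parsed back into a DeliveryRequest.
```

Because the prompt and parser agree on one simple reply format, a malformed model reply degrades gracefully to null fields rather than crashing the flow.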
Tap into Nano Banana and Imagen using the Firebase SDK
When you want to add cutting-edge capabilities across the entire fleet of Android devices, our cloud-based AI solutions with Firebase AI Logic are a great fit. The excitement for models like Gemini 2.5 Flash Image (a.k.a. Nano Banana) and Imagen has been incredible; your users can now generate and edit images using Nano Banana, and for finer control, like selecting and transforming specific parts of an image, they can use the new mask-based editing feature that leverages the Imagen model. See our blog post to learn more. And beyond image generation, you can also use Gemini's multimodal capabilities to process text, audio, and image input. RedBus, for example, revolutionized their user reviews using Gemini Flash via Firebase AI Logic to make giving feedback easier, more inclusive, and reliable. The old problem? Short, low-quality text reviews. The new solution? Users can now leave reviews using voice input in their native languages. Gemini Flash then generates a structured text response from the audio, enabling longer, richer, and more reliable user reviews. It’s a win for everyone: travelers, operators, and developers!
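The RedBus pattern is audio in, structured text out. As a hedged sketch of the app-side half, here is plain Kotlin that turns a structured text reply into review fields; `ReviewSummary`, the `rating`/`highlights` field names, and the reply layout are illustrative assumptions, not RedBus's schema or the Firebase AI Logic SDK.

```kotlin
// Hypothetical structure for a review distilled from a voice recording.
data class ReviewSummary(val rating: Int?, val highlights: List<String>)

// Parse a structured reply of the assumed form:
//   rating: 4
//   highlights: clean bus; on time; friendly staff
fun parseReviewSummary(reply: String): ReviewSummary {
    var rating: Int? = null
    var highlights = emptyList<String>()
    for (line in reply.lines()) {
        val parts = line.split(":", limit = 2)
        if (parts.size != 2) continue
        when (parts[0].trim().lowercase()) {
            "rating" -> rating = parts[1].trim().toIntOrNull()
            "highlights" -> highlights = parts[1].split(";")
                .map { it.trim() }
                .filter { it.isNotEmpty() }
        }
    }
    return ReviewSummary(rating, highlights)
}
```

In the real app, the audio bytes and an instruction to reply in this layout would go to Gemini Flash through Firebase AI Logic; constraining the model to a fixed layout is what makes the reply machine-readable.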
Helping you be more productive, with agentic experiences in Android Studio
Helping you be more productive is our goal with Gemini in Android Studio, and it’s why we’re infusing AI across our tooling. Developers like Pocket FM have seen impressive development time savings of 50%. With the recent launch of Agent Mode, you can describe a complex goal in natural language, and (with your permission) the agent plans and executes changes across multiple files in your project. The agent’s answers are now grounded in the most modern development practices, and it can even cross-reference our latest documentation in real time. We demoed new agentic experiences such as updates to Agent Mode, the ability to upgrade APIs on your behalf, and the new project assistant, and we announced that you’ll be able to bring any LLM of your choice to power the AI functionality inside Android Studio, giving you more flexibility and choice in how you incorporate AI into your workflow. And for the newest stable features, such as Back Up and Sync, make sure to download the latest stable version of Android Studio.
Elevating AI-assisted Android development, and improving LLMs with an Android benchmark
Our goal is to make it easier for Android developers to build great experiences. With more code being written by AI, developers have been asking for models that know more about Android development. We want to help developers be more productive, and that’s why we’re building a new task set for LLMs covering a range of common Android development areas. The goal is to provide LLM makers with a benchmark, a north star of high-quality Android development, so Android developers have a range of helpful models to choose from for AI assistance.
To reflect the challenges of Android development, the benchmark is composed of real-world problems sourced from public GitHub Android repositories. In each evaluation, an LLM attempts to recreate a pull request, and the result is then verified using human-authored tests. This allows us to measure a model’s ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day.
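The evaluation loop described above (one PR-recreation attempt, verified by human-authored tests) ultimately reduces to a pass-rate computation over tasks. A minimal sketch in Kotlin, where `Attempt` and the scoring functions are illustrative assumptions, not the benchmark’s actual harness:

```kotlin
// One PR-recreation attempt by a model on a benchmark task:
// did the human-authored verification tests pass?
data class Attempt(val taskId: String, val testsPassed: Boolean)

// Fraction of attempts where the model's change passed verification.
fun passRate(attempts: List<Attempt>): Double =
    if (attempts.isEmpty()) 0.0
    else attempts.count { it.testsPassed }.toDouble() / attempts.size

// Per-task outcome when a task allows several attempts: the task counts
// as solved if any attempt passed.
fun perTaskSolved(attempts: List<Attempt>): Map<String, Boolean> =
    attempts.groupBy { it.taskId }
        .mapValues { (_, tries) -> tries.any { it.testsPassed } }
```

Test-verified pass rates like this are attractive for coding benchmarks because they score what the change does, not how closely its diff matches the original PR text.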
We’re finalizing the task set we’ll be testing LLMs against, and we will share the results publicly in the coming months. We’re looking forward to seeing how this shapes AI-assisted Android development, and the additional flexibility and choice it gives you to build on Android.

The first Android XR device: Samsung Galaxy XR
Last week, in partnership with Samsung, we launched the first in a new wave of Android XR devices: the Galaxy XR. Android XR devices are built entirely in the Gemini era, creating a major new platform opportunity for your app. And because Android XR is built on top of familiar Android frameworks, when you build adaptively, you’re already building for XR. To unlock the full potential of Android XR features, you can use the Jetpack XR SDK. The Calm team provides a perfect example of this in action: they transformed their mobile app into an immersive spatial experience, building their first functional XR menus on day one and a core XR experience in just two weeks by leveraging their existing Android codebase and the Jetpack XR SDK. You can read more about Android XR from our Spotlight Week last week.
Jetpack updates: Navigation 3 and Compose
The new Jetpack Navigation 3 library is now in beta! Instead of embedding behavior into the library itself, we’re providing ‘how-to recipes’ with good defaults (see the nav3 recipes on GitHub). Out of the box, it’s fully customizable, has animation support, and is adaptive. Nav3 was built from the ground up with Compose State as a fundamental building block. This means it fully buys into the declarative programming model – you change the state you own, and Nav3 reacts to that new state. On the Compose front, we’ve been working on making it faster and easier for you to build UI, covering the features you told us you needed from Views, while at the same time ensuring that Compose stays performant.
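The “you change the state you own, and Nav3 reacts” model can be illustrated without Compose at all. Below is a hedged sketch in plain Kotlin, where a `MutableList` stands in for the Compose snapshot-state list the real library would observe; the `BackStack` class and its string route keys are hypothetical, not the Nav3 API.

```kotlin
// In this model the back stack is just state you own: a list of route keys.
// Mutating the list is what drives navigation; the UI layer simply renders
// whatever `current` is.
class BackStack(startRoute: String) {
    private val entries = mutableListOf(startRoute)

    val current: String get() = entries.last()

    fun push(route: String) {        // navigate forward
        entries += route
    }

    fun pop(): Boolean {             // navigate back; keep the root entry
        if (entries.size <= 1) return false
        entries.removeAt(entries.lastIndex)
        return true
    }

    fun snapshot(): List<String> = entries.toList()
}
```

With a snapshot-state list in place of the plain `MutableList`, every `push` or `pop` would automatically recompose the UI that reads `current` – the declarative loop the paragraph above describes.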
Accelerate your business success on Google Play
With AI speeding up app development, Google Play is streamlining your workflow in Play Console so that your business growth can keep up with your code. The reimagined, goal-oriented app dashboard puts actionable metrics front and center. Plus, new capabilities are making your day-to-day operations faster, smarter, and more efficient: from pre-release testing with deep link validation to AI-powered analytics summaries and app string localization. These updates are just the beginning. Check out the full list of announcements to get the latest from Play.
Watch the Fall episode of The Android Show
Thank you for tuning into our Fall episode of The Android Show. We’re excited to continue building great things together, and this show is an important part of our conversation with you. We’d love to hear your ideas for our next episode, so please reach out on X or LinkedIn. A special thanks to my co-hosts, Rebecca Gutteridge and Adetunji Dahunsi, for helping us share the latest updates.