Elevating AI-assisted Android development and improving LLMs with Android Bench (Thu, 05 Mar 2026)


Posted by Matthew McCullough, VP of Product Management, Android Developer

We want to make it faster and easier for you to build high-quality Android apps, and one way we’re helping you be more productive is by putting AI at your fingertips. We know you want AI that truly understands the nuances of the Android platform, which is why we’ve been measuring how LLMs perform Android development tasks. Today we released the first version of Android Bench, our official leaderboard of LLMs for Android development.

Our goal is to provide model creators with a benchmark to evaluate LLM capabilities for Android development. By establishing a clear, reliable baseline for what high-quality Android development looks like, we’re helping model creators identify gaps and accelerate improvements. That empowers developers to work more efficiently with a wider range of helpful models to choose from for AI assistance, and it will ultimately lead to higher quality apps across the Android ecosystem.

Designed with real-world Android development tasks

We created the benchmark by curating a task set against a range of common Android development areas. It is composed of real challenges of varying difficulty, sourced from public GitHub Android repositories. Scenarios include resolving breaking changes across Android releases, domain-specific tasks like networking on wearables, and migrating to the latest version of Jetpack Compose, to name a few.

Each evaluation attempts to have an LLM fix the issue reported in the task, which we then verify using unit or instrumentation tests. This model-agnostic approach allows us to measure a model’s ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day. 

We validated this methodology with several LLM makers, including JetBrains.


“Measuring AI’s impact on Android is a massive challenge, so it’s great to see a framework that’s this sound and realistic. While we’re active in benchmarking ourselves, Android Bench is a unique and welcome addition. This methodology is exactly the kind of rigorous evaluation Android developers need right now.”

– Kirill Smelov, Head of AI Integrations at JetBrains.

The first Android Bench results

For this initial release, we wanted to measure pure model performance rather than agentic or tool use. The models successfully completed between 16% and 72% of the tasks. This wide range demonstrates that some LLMs already have a strong baseline of Android knowledge, while others have more room for improvement. Regardless of where the models are now, we anticipate continued improvement as we encourage LLM makers to enhance their models for Android development.

The LLM with the highest average score for this first release is Gemini 3.1 Pro, followed closely by Claude Opus 4.6. You can try all of the models we evaluated for AI assistance for your Android projects by using API keys in the latest stable version of Android Studio.

Providing developers and LLM makers with transparency

We value an open and transparent approach, so we made our methodology, dataset, and test harness publicly available on GitHub.

One challenge for any public benchmark is the risk of data contamination, where models may have seen evaluation tasks during their training process. We have taken measures to ensure our results reflect genuine reasoning rather than memorization or guessing, including a thorough manual review of agent trajectories and the integration of a canary string to discourage training on the dataset.

Looking ahead, we will continue to evolve our methodology to preserve the integrity of the dataset, while also making improvements for future releases of the benchmark—for example, growing the quantity and complexity of tasks.

We’re looking forward to how Android Bench can improve AI assistance long-term. Our vision is to close the gap between concept and quality code. We’re building the foundation for a future where no matter what you imagine, you can build it on Android. 

The Intelligent OS: Making AI agents more helpful for Android apps (Wed, 25 Feb 2026)



Posted by Matthew McCullough, VP of Product Management, Android Development


User expectations for AI on their devices are fundamentally shifting how they interact with their apps. Instead of opening apps to do tasks step-by-step, they’re asking AI to do the heavy lifting for them. In this new interaction model, success is shifting from getting users to open your app, to successfully fulfilling their tasks and helping them get more done faster. 

To help you evolve your apps for this agentic future, we’re introducing early-stage developer capabilities that bridge the gap between your apps and agentic experiences, from agentic apps to personalized assistants such as Google Gemini. While we are in the early, beta stages of this journey, we’re designing these features with privacy and security at their core as our first step in exploring this paradigm shift as an app ecosystem.

Empowering apps with AppFunctions

Android AppFunctions allows apps to expose data and functionality directly to AI agents and assistants. With the AppFunctions Jetpack library and platform APIs, developers can create self-describing functions that agentic apps can discover and execute via natural language. Mirroring how backend capabilities are declared via MCP cloud servers, AppFunctions provides an on-device solution for Android apps. Much like WebMCP, it executes these functions locally on the device rather than on a server.
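As a rough illustration, exposing a function through the AppFunctions Jetpack library might look like the sketch below. The library is still early, so annotation and type names may differ from what ships; GalleryFunctions, findPhotos, and the Photo type are hypothetical examples, not any real app’s API.

// Hedged sketch based on the androidx.appfunctions alpha API; annotation and
// type names may differ in the shipping library. All names here are
// hypothetical examples, not a real app's code.
import androidx.appfunctions.AppFunction
import androidx.appfunctions.AppFunctionContext
import androidx.appfunctions.AppFunctionSerializable

@AppFunctionSerializable
data class Photo(
    val id: String,
    val uri: String,
    val description: String,
)

class GalleryFunctions {
    // A self-describing function an assistant can discover and invoke for a
    // query like "show me pictures of my cat"
    @AppFunction
    fun findPhotos(
        appFunctionContext: AppFunctionContext,
        searchTerm: String,
    ): List<Photo> {
        // A real app would query its own photo index here
        return emptyList()
    }
}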

The Samsung Gallery integration with Gemini on the Galaxy S26 series showcases AppFunctions in action. Instead of manually scrolling through photo albums, you can now simply ask Gemini to “Show me pictures of my cat from Samsung Gallery.” Gemini takes the user query, intelligently identifies and triggers the right function, and presents the returned photos from Samsung Gallery directly in the Gemini app, so users never need to leave. This experience is multimodal and can be done via voice or text. Users can even use the returned photos in follow-up conversations, like sending them to friends in a text message.


This integration is currently available on the Galaxy S26 series and will soon expand to Samsung devices running OneUI 8.5 and higher. Through AppFunctions, Gemini can already automate tasks across app categories like Calendar, Notes, and Tasks, on devices from multiple manufacturers. Whether it’s coordinating calendar events, organizing notes, or setting to-do reminders, users can streamline daily activities in one place.

Enabling agentic apps with intelligent UI automation

While AppFunctions provides a structured framework and more control for apps to communicate with AI agents and assistants, we know that not every interaction has a dedicated integration yet. We’re also developing a UI automation framework for AI agents and assistants to intelligently execute generic tasks on users’ installed apps, with user transparency and control built in. Here the platform does the heavy lifting, giving developers agentic reach with zero code and no major engineering lift.

To get feedback as we refine this framework, we’re starting with an early preview on the Galaxy S26 series and select Pixel 10 devices, where users will be able to delegate multi-step tasks to Gemini with just a long press of the power button. Launching as a beta feature in the Gemini app, this will support a curated selection of apps in the food delivery, grocery, and rideshare categories in the US and Korea to start. Whether users need to place a complex pizza order for their family members with particular tastes, coordinate a multi-stop rideshare with co-workers, or reorder their last grocery purchase, Gemini can help complete tasks using the context already available from your apps, without any developer work needed.


Users are in control while a task is being actioned in the background through UI automation. For any automation action, users have the option to monitor a task’s progress via notifications or “live view” and can switch to manual control at any point to take over the experience. Gemini is also designed to alert users before completing sensitive tasks, such as making a purchase.

Looking ahead

In Android 17, we’re looking to broaden these capabilities to reach even more users, developers, and device manufacturers.

We are currently building experiences with a small set of app developers, focusing on high-quality user experiences as the ecosystem evolves. We plan to share more details later this year on how you can use AppFunctions and UI automation to enable agentic integrations for your app. Stay tuned for updates.

Ultrahuman launches features 15% faster with Gemini in Android Studio (Thu, 08 Jan 2026)


Posted by Amrit Sanjeev, Developer Relations Engineer and Trevor Johns, Developer Relations Engineer




Ultrahuman is a consumer health-tech startup that provides daily well-being insights to users based on biometric data from the company’s wearables, like the RING Air and the M1 Live Continuous Glucose Monitor (CGM). The Ultrahuman team leaned on Gemini in Android Studio’s contextually aware tools to streamline and accelerate their development process.

Ultrahuman’s app is maintained by a lean team of just eight developers. They prioritize building features that their users love, but they also carry a backlog of bugs and performance improvements that take significant time. The team needed to scale up their output of features while still handling performance work, all without increasing headcount. One of their biggest opportunities was reducing the time and effort spent on that backlog: every hour saved on maintenance could be reinvested into building features for their users.



Solving technical hurdles and boosting performance with Gemini

The team integrated Gemini in Android Studio to see if the AI-enhanced tools could improve their workflow by handling many common Android tasks. First, the team turned to the Gemini chat inside Android Studio. The goal was to prototype a GATT Server implementation for their application’s Bluetooth Low Energy (BLE) connectivity.

As Ultrahuman’s Android Development Lead, Arka, noted, “Gemini helped us reach a working prototype in under an hour—something that would have otherwise taken us several hours.” The BLE implementation provided by Gemini worked perfectly for syncing large amounts of health sensor data while the app ran in the background, improving the data syncing process and saving battery life on both the user’s Android phone and Ultrahuman’s paired wearable device.
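For context, a GATT server scaffold of the kind Gemini can help prototype looks roughly like this. The service and characteristic UUIDs are placeholders rather than Ultrahuman’s actual profile, and a real app also needs the BLUETOOTH_CONNECT runtime permission.

// Illustrative GATT server scaffold using standard Android BLE APIs.
// UUIDs are placeholders; requires the BLUETOOTH_CONNECT permission.
import android.bluetooth.BluetoothDevice
import android.bluetooth.BluetoothGattCharacteristic
import android.bluetooth.BluetoothGattServer
import android.bluetooth.BluetoothGattServerCallback
import android.bluetooth.BluetoothGattService
import android.bluetooth.BluetoothManager
import android.content.Context
import java.util.UUID

val SERVICE_UUID: UUID = UUID.fromString("0000180d-0000-1000-8000-00805f9b34fb")
val CHAR_UUID: UUID = UUID.fromString("00002a37-0000-1000-8000-00805f9b34fb")

fun startGattServer(context: Context): BluetoothGattServer {
    val manager = context.getSystemService(Context.BLUETOOTH_SERVICE) as BluetoothManager
    val callback = object : BluetoothGattServerCallback() {
        override fun onConnectionStateChange(device: BluetoothDevice, status: Int, newState: Int) {
            // Track connected centrals so background sync can resume when they reconnect
        }
    }
    val server = manager.openGattServer(context, callback)
    val service = BluetoothGattService(SERVICE_UUID, BluetoothGattService.SERVICE_TYPE_PRIMARY)
    service.addCharacteristic(
        BluetoothGattCharacteristic(
            CHAR_UUID,
            BluetoothGattCharacteristic.PROPERTY_NOTIFY, // push sensor data to subscribers
            BluetoothGattCharacteristic.PERMISSION_READ,
        )
    )
    server.addService(service)
    return server
}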

Beyond this core challenge, Gemini also proved invaluable for finding algorithmic optimizations in a custom open-source library, pointing to helpful documentation, assisting with code commenting, and analyzing crash logs. The Ultrahuman team also used code completion to help them breeze through writing otherwise repetitive code, Jetpack Compose Preview Generation to enable rapid iteration during UI design, and Agent Mode for managing complex, project-wide changes, such as rendering a new stacked bar graph that mapped to backend data models and UI models.

Transforming productivity and accelerating feature delivery 

These improvements have saved the team dozens of hours each week. This reclaimed time is being used to deliver new features to Ultrahuman’s beta users 10-15% faster. For example, the team built a new in-app AI assistant for users, powered by Gemini 2.5 Flash. The UI design, architecture, and parts of the user experience for this new feature were initially suggested by Gemini in Android Studio—showcasing a full-circle AI-assisted development process. 

Accelerate your Android development with Gemini

Gemini’s expert Android advice, closely integrated throughout Android Studio, helps Android developers spend less time digging through documentation and writing boilerplate code—freeing up more time to innovate.

Learn how Gemini in Android Studio can help your team resolve complex issues, streamline workflows, and ship new features faster.

Build smarter apps with Gemini 3 Flash (Wed, 17 Dec 2025)


Posted by Thomas Ezan, Senior Developer Relations Engineer



Today, we’re expanding the Gemini 3 model family with the release of Gemini 3 Flash, frontier intelligence built for speed at a fraction of the cost. You can start building with it immediately, as we’re officially launching Gemini 3 Flash on Firebase AI Logic. Available globally, the Gemini 3 Flash preview model can be securely accessed directly from your app via the Gemini Developer API or the Vertex AI Gemini API using the Firebase AI Logic client SDKs. Gemini 3 Flash’s strong performance in reasoning, tool use, and multimodal capabilities makes it ideal for developers looking to do more complex video analysis, data extraction, and visual Q&A.

Gemini 3 optimized for low latency

Gemini 3 is our most intelligent model family to date. With the launch of Gemini 3 Flash, we are making that intelligence more accessible for low-latency and cost-effective use cases. While Gemini 3 Pro is designed for complex reasoning, Gemini 3 Flash is engineered to be significantly faster and more cost-effective for your production apps.

Seamless integration with Firebase AI Logic

Just like the Pro model, Gemini 3 Flash is available in preview directly through the Firebase AI Logic SDK. This means you can integrate it into your Android app without any complex server-side setup.

Here is how to add it to your Kotlin code:


val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-3-flash-preview")
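From there, generating content is a single suspend call. Here is a minimal usage sketch; the prompt, log tag, and coroutine scope are illustrative:

// Illustrative usage: generateContent is a suspend function, so call it
// from a coroutine scope such as viewModelScope
viewModelScope.launch {
    val response = model.generateContent("Write a one-line welcome message for a cooking app")
    Log.d("Gemini3Flash", response.text ?: "No text returned")
}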

Scale with Confidence

In addition, Firebase enables you to keep your growth secure and manageable with:

AI Monitoring

The Firebase AI monitoring dashboard gives you visibility into latency, success rates, and costs, allowing you to slice data by model name to see exactly how the model performs.

Server Prompt Templates

You can use server prompt templates to store your prompt and schema securely on Firebase servers instead of hardcoding them in your app binary. This capability ensures your sensitive prompts remain secure, prevents unauthorized prompt extraction, and allows for faster iteration without requiring app updates.

---
model: 'gemini-3-flash-preview'
input:
  schema:
    topic:
      type: 'string'
      minLength: 2
      maxLength: 40
    length:
      type: 'number'
      minimum: 1
      maximum: 200
    language:
      type: 'string'
---

{{role "system"}}
You're a storyteller that tells nice and joyful stories with happy endings.

{{role "user"}}
Create a story about {{topic}} with the length of {{length}} words in the {{language}} language.

Prompt template defined on the Firebase Console  

val generativeModel = Firebase.ai.templateGenerativeModel()
val response = generativeModel.generateContent("storyteller-v10",
    mapOf(
        "topic" to topic,
        "length" to length,
        "language" to language
    )
)
_output.value = response.text

Code snippet to access the prompt template

Gemini 3 Flash for AI development assistance in Android Studio

Gemini 3 Flash is also available for AI assistance in Android Studio. While Gemini 3 Pro Preview is our best model for coding and agentic experiences, Gemini 3 Flash is engineered for speed, and great for common development tasks and questions.

 
The new model is rolling out to developers using Gemini in Android Studio at no cost (as the default model) starting today. For higher usage rate limits and longer sessions with Agent Mode, you can use an AI Studio API key to leverage the full capabilities of either Gemini 3 Flash or Gemini 3 Pro. We’re also rolling out Gemini 3 model family access with higher usage rate limits to developers who have Gemini Code Assist Standard or Enterprise licenses. Your IT administrator will need to enable access to preview models through the Google Cloud console.

Get Started Today

You can start experimenting with Gemini 3 Flash via Firebase AI Logic today. Learn more about it in the Android and Firebase documentation. Try out any of the new Gemini 3 models in Android Studio for development assistance, and let us know what you think! As always you can follow us across LinkedIn, Blog, YouTube, and X.

Start building for glasses, new devices for Android XR and more in The Android Show | XR Edition (Wed, 10 Dec 2025)


Posted by Matthew McCullough – VP of Product Management, Android Developer



Today, during The Android Show | XR Edition, we shared a look at the expanding Android XR platform, which is fundamentally evolving to bring a unified developer experience to the entire XR ecosystem. The latest announcements, from Developer Preview 3 to exciting new form factors, are designed to give you the tools and platform you need to create the next generation of XR experiences. Let’s dive into the details!

A spectrum of new devices ready for your apps

The Android XR platform is quickly expanding, providing more users and more opportunities for your apps. This growth is anchored by several new form factors that expand the possibilities for XR experiences.


A major focus is on lightweight, all-day wearables. At I/O, we announced we are working with Samsung and our partners Gentle Monster and Warby Parker to design stylish, lightweight AI glasses and Display AI glasses that you can wear comfortably all day.  The integration of Gemini on glasses is set to unlock helpful, intelligent experiences like live translation and searching what you see.

And partners like Uber are already exploring how AI Glasses can streamline the rider experience by providing simple, contextual directions and trip status right in the user’s view.


The ecosystem is simultaneously broadening its scope to include wired XR glasses, exemplified by Project Aura from XREAL. This device blends the immersive experiences typically found in headsets with portability and real-world presence. Project Aura is scheduled for launch next year.

New tools unlock development for all form factors

If you are developing for Android, you are already developing for Android XR. The release of Android XR SDK Developer Preview 3 brings increased stability for headset APIs and, most significantly, opens up development for AI Glasses. 


You can now build augmented experiences for AI glasses using new libraries like Jetpack Compose Glimmer, a UI toolkit for transparent displays, and Jetpack Projected, which lets you extend your Android mobile app directly to glasses. Furthermore, the SDK now includes powerful ARCore for Jetpack XR updates, such as Geospatial capabilities for wayfinding.

For immersive experiences on headsets and wired XR glasses like Project Aura from XREAL, this release also provides new APIs for detecting a device’s field-of-view, helping your adaptive apps adjust their UI.

Check out our post on the Android XR Developer Preview 3 to learn more about all the latest updates. 

Expanding your reach with new engine ecosystems

The Android XR platform is built on the OpenXR standard, enabling integration with the tools you already use so you can build with your preferred engine.

Developers can use Unreal Engine’s native Android and OpenXR capabilities today to build for Android XR, leveraging the existing VR Template for immersive experiences. To provide additional, optimized extensions for the Android XR platform, a Google vendor plugin, including support for hand tracking, hand mesh, and more, will be released early next year.

Godot now includes Android XR support, leveraging its focus on OpenXR to enable development for devices like Samsung Galaxy XR. The new Godot OpenXR vendor plugin v4.2.2 stable allows developers to port their existing projects to the platform. 

Watch The Android Show | XR Edition

Thank you for tuning in to The Android Show | XR Edition. Start building differentiated experiences today using the Developer Preview 3 SDK and test your apps with the XR Emulator in Android Studio. Your feedback is crucial as we continue to build this platform together. Head over to developer.android.com/xr to learn more and share your feedback.


Boost user engagement with AI Image Generation (Mon, 13 Oct 2025)


Posted by Thomas Ezan, Senior Developer Relations Engineer and Mozart Louis, Developer Relations Engineer


  

Adding custom images to your app can significantly improve and personalize user experience and boost user engagement. This post explores two new capabilities for image generation with Firebase AI Logic: the specialized Imagen editing features, currently in preview, and the general availability of Gemini 2.5 Flash Image (a.k.a “Nano Banana”), designed for contextual or conversational image generation.

  

  

Boost user engagement with images generated via Firebase AI Logic

Image generation models can be used to create custom user profile avatars or to integrate personalized visual assets directly into key screen flows.

  

For example, Imagen offers new editing features (in developer preview). You can now draw a mask and utilize inpainting to generate pixels within the masked area. Additionally, outpainting is available to generate pixels outside the mask.
  

 

  

Imagen supports inpainting, letting you regenerate only a part of an image.

  

Alternatively, Gemini 2.5 Flash Image (a.k.a Nano Banana), can use extended world knowledge and the reasoning capabilities of the Gemini models to generate contextually relevant images, which is ideal for creating dynamic illustrations that align with a user’s current in-app experience.   

  

 Use Gemini 2.5 Flash Image to create dynamic illustrations contextually relevant to your app. 

  

Finally, the ability to conversationally and iteratively edit images allows users to edit a photo using natural language.

  

Use Gemini 2.5 Flash Image to edit a picture using natural language.

  

When starting to integrate AI into your application, it is important to learn about AI safety. In particular, assess your application’s security risks, make adjustments to mitigate safety risks, perform safety testing appropriate to your use case, solicit user feedback, and monitor content.

  

Imagen or Gemini: The choice is yours 

The difference between Gemini 2.5 Flash Image (“Nano Banana”) and Imagen lies in their primary focus and advanced capabilities. Gemini 2.5 Flash Image, as an image model within the larger Gemini family, excels in conversational image editing, maintaining context and subject consistency across multiple iterations, and leveraging “world knowledge and reasoning” to create contextually relevant visuals or embed accurate visuals within long text sequences. 

  

Imagen is Google’s specialized image generation model, designed for greater creative control, specializing in highly photorealistic outputs, artistic detail, specific styles, and providing explicit controls for specifying the aspect ratio or format of the generated image.

  

Gemini 2.5 Flash Image (Nano Banana 🍌):

  • 🌎 World knowledge and reasoning for more contextually relevant images
  • 💬 Edit images conversationally while maintaining context
  • 📖 Embed accurate visuals within long text sequences

Imagen:

  • 📐 Specify the aspect ratio or format of generated images
  • 🖌 Support for mask-based editing (inpainting and outpainting)
  • 🎚 Greater control over the details of the generated image (quality, artistic detail, and specific styles)
Let’s see how to use them in your app.

Inpainting with Imagen 

A few months ago, we released new editing features for Imagen. Although Imagen is now ready for production for image generation, editing features are still in developer preview.

  

Imagen’s editing features include inpainting and outpainting, both mask-based image editing capabilities. They allow users to modify specific areas of an image without regenerating the entire picture. This means you can preserve the best parts of your image and only alter what you wish to change.

 

Use Imagen editing features to make precise, targeted changes in an image while preserving the integrity of the rest of the image.

These changes maintain the core elements and overall integrity of the original image, modifying only the area inside the mask.

To implement inpainting with Imagen, first initialize imagen-3.0-capability-001, a specific Imagen model that supports editing features:

// Copyright 2025 Google LLC.
// SPDX-License-Identifier: Apache-2.0
val editingModel = Firebase.ai(backend = GenerativeBackend.vertexAI()).imagenModel(
    "imagen-3.0-capability-001",
    generationConfig = ImagenGenerationConfig(
        numberOfImages = 1,
        aspectRatio = ImagenAspectRatio.SQUARE_1x1,
        imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
    ),
)

From there, define the inpainting function:


// Copyright 2025 Google LLC.
// SPDX-License-Identifier: Apache-2.0

val prompt = "remove the pancakes and make it an omelet instead"

suspend fun inpaintImageWithMask(
    sourceImage: Bitmap,
    maskImage: Bitmap,
    prompt: String,
    editSteps: Int = 50,
): Bitmap {
    val imageResponse = editingModel.editImage(
        referenceImages = listOf(
            ImagenRawImage(sourceImage.toImagenInlineImage()),
            ImagenRawMask(maskImage.toImagenInlineImage()),
        ),
        prompt = prompt,
        config = ImagenEditingConfig(
            editMode = ImagenEditMode.INPAINT_INSERTION,
            editSteps = editSteps,
        ),
    )
    return imageResponse.images.first().asBitmap()
}

You provide a sourceImage, a maskImage, a prompt for the edit, and the number of edit steps to perform.

You can see it in action in the Imagen Editing Sample in the Android AI Sample catalog!

Imagen also supports outpainting, which lets the model generate the pixels outside of a mask. You can also use Imagen’s image customization capabilities to change the style of a picture or update a subject in a picture. Read more about it in the Android developer documentation.
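For reference, outpainting can reuse the same editing model and raw-image/mask pattern shown above. The sketch below assumes an ImagenEditMode.OUTPAINT constant analogous to INPAINT_INSERTION; verify the exact constant and mask conventions against the Firebase AI Logic reference before relying on it.

// Hedged sketch: assumes ImagenEditMode.OUTPAINT exists alongside
// INPAINT_INSERTION. The mask marks the region beyond the original
// image bounds that the model should fill in.
suspend fun outpaintImageWithMask(
    sourceImage: Bitmap,
    maskImage: Bitmap,
    prompt: String,
): Bitmap {
    val imageResponse = editingModel.editImage(
        referenceImages = listOf(
            ImagenRawImage(sourceImage.toImagenInlineImage()),
            ImagenRawMask(maskImage.toImagenInlineImage()),
        ),
        prompt = prompt,
        config = ImagenEditingConfig(editMode = ImagenEditMode.OUTPAINT),
    )
    return imageResponse.images.first().asBitmap()
}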

Conversational image generation with Gemini 2.5 Flash Image

One way to edit images with Gemini 2.5 Flash Image is to use the model’s multi-turn chat capabilities.

First, initialize the model:

// Copyright 2025 Google LLC.
// SPDX-License-Identifier: Apache-2.0

val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT,
        ResponseModality.IMAGE)
    }
)

To achieve a similar outcome to the mask-based Imagen method described above, we can utilize the chat API to initiate a conversation with Gemini 2.5 Flash Image.

// Copyright 2025 Google LLC.
// SPDX-License-Identifier: Apache-2.0

// Initialize the chat
val chat = model.startChat()


// Load a bitmap
val source = ImageDecoder.createSource(context.contentResolver, uri)
val bitmap = ImageDecoder.decodeBitmap(source)


// Create the initial prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("remove the pancakes and add an omelet")
}

// To generate an initial response, send a user message with the image and text prompt
var response = chat.sendMessage(prompt)

// Inspect the returned image
var generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

// Follow up requests do not need to specify the image again
response = chat.sendMessage("Now, center the omelet in the pan")
generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

You can see it in action in the Gemini Image Chat sample in the Android AI Sample catalog and read more about it in the Android documentation.

Conclusion

Both Imagen and Gemini 2.5 Flash Image offer powerful capabilities, allowing you to select the ideal image generation model to personalize your app and boost user engagement, depending on your specific use case.


Gratitude’s developers released 2X the amount of innovative experiments with the help of Gemini in Android Studio (Thu, 18 Sep 2025)



Posted by Sandhya Mohan, Product Manager


Gratitude is a mental wellness Android app that encourages self-care and positivity with techniques like in-app journaling, affirmations, and vision boards. These mindfulness exercises need to be free from performance bottlenecks, bugs, and errors for the app to be truly immersive and helpful—but researching solutions and debugging code took away valuable time from the team experimenting on new features. To find a better balance, Gratitude used Gemini in Android Studio to help improve the app’s code and streamline the development process, enabling the team to implement those exciting new features faster.


Unlocking new efficiencies with Gemini in Android Studio

Gratitude’s AI image generation feature, built in record time with the help of Gemini in Android Studio


The Gratitude team decided to try Gemini in Android Studio, an AI assistant that supports developers throughout all stages of development, helping them be more productive. Developers can ask Gemini questions and receive context-aware solutions based on their code. Divij Gupta, senior Android developer at Gratitude, shared that the Gratitude team needed to know if it was possible to inject any object into a Kotlin object class using Hilt. Gemini suggested using an EntryPoint to access dependencies in classes where standard injection isn’t possible, which helped solve their “tricky problem,” according to Divij.
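The pattern Divij describes looks roughly like the sketch below; AnalyticsLogger and JournalExporter are hypothetical names used only to illustrate the EntryPoint technique.

// Illustrative sketch of Hilt's EntryPoint pattern for a Kotlin `object`,
// where constructor injection isn't available. AnalyticsLogger and
// JournalExporter are hypothetical names.
import android.content.Context
import dagger.hilt.EntryPoint
import dagger.hilt.InstallIn
import dagger.hilt.android.EntryPointAccessors
import dagger.hilt.components.SingletonComponent

interface AnalyticsLogger { fun log(event: String) } // hypothetical dependency

@EntryPoint
@InstallIn(SingletonComponent::class)
interface AnalyticsEntryPoint {
    fun analyticsLogger(): AnalyticsLogger
}

object JournalExporter {
    fun export(context: Context) {
        // Reach into the Hilt graph from a plain Kotlin object
        val logger = EntryPointAccessors.fromApplication(
            context.applicationContext,
            AnalyticsEntryPoint::class.java,
        ).analyticsLogger()
        logger.log("journal_exported")
    }
}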

Gemini eliminated the need to search for Android documentation as well, enabling the Gratitude team to learn and apply their knowledge without having to leave Android Studio. “Gemini showed me how to use Android Studio’s CPU and memory profilers more effectively,” recalled Divij. “I also learned how to set up baseline profiles to speed up cold starts.”

Identifying performance bottlenecks became easier too. When analyzing the Gratitude team’s code, Gemini suggested using collectAsStateWithLifecycle instead of collectAsState to collect flows in composables, which helps the app handle lifecycle events more effectively and improves overall performance. Gemini also analyzes the app’s crash reports in the App Quality Insights panel and provides guidance on how to address each issue, which enabled the Gratitude team to “identify root causes faster, catch edge cases we might have missed, and improve overall app stability,” according to Divij.
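For illustration, the suggested swap is a one-line change in the composable; MoodViewModel and its state are hypothetical names, not Gratitude’s code.

// Illustrative: lifecycle-aware flow collection in Compose. Requires the
// androidx.lifecycle:lifecycle-runtime-compose artifact.
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.lifecycle.ViewModel
import androidx.lifecycle.compose.collectAsStateWithLifecycle
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow

// Hypothetical view model holding UI state as a StateFlow
class MoodViewModel : ViewModel() {
    private val _uiState = MutableStateFlow("calm")
    val uiState: StateFlow<String> = _uiState.asStateFlow()
}

@Composable
fun MoodScreen(viewModel: MoodViewModel) {
    // Lifecycle-aware: stops collecting when the UI drops below STARTED,
    // unlike collectAsState()
    val uiState by viewModel.uiState.collectAsStateWithLifecycle()
    Text(uiState)
}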

Experimenting with new features using Gemini in Android Studio


Gemini in Android Studio helped the Gratitude team significantly improve their development speed and morale. “This faster cycle has made the team feel more productive, motivated, and excited to keep innovating,” said Divij. Developers are able to spend more time ideating and experimenting on new features, leading to innovative new experiences.

One feature the developers built with their newfound time is an image generation function for the app’s vision boards feature. Users can now upload a photo with a prompt, and then receive an AI-generated image that they can instantly pin to their board. The team was able to build the UI using Gemini in Android Studio’s Compose Preview Generation, allowing them to quickly visualize their Jetpack Compose code and craft the pixel-perfect UI their designers intended.

Going forward, the Gratitude team is looking forward to using Gemini to implement more improvements to its code, including fixing glitches and memory leaks and improving performance based on more insights from Gemini, which will further improve the user experience.

Build with Gemini in Android Studio


Discover all of the features available as part of Gemini in Android Studio that can accelerate your development, such as code completion, code explanation, Agent Mode, document generation, and more.

Androidify: Building AI first Android Experiences with Gemini using Jetpack Compose and Firebase (Sat, 06 Sep 2025)


Posted by Rebecca Franks – Developer Relations Engineer, Tracy Agyemang – Product Marketer, and Avneet Singh – Product Manager

Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it’s available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt of what you’re looking for, add some accessories, and watch as AI builds your unique bot. Once you’ve had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let’s dive in!

Key technical integrations

The Androidify app combines powerful technologies to deliver a seamless and engaging user experience. Here’s a breakdown of the core components and their roles:

AI with Gemini and Firebase

Androidify leverages the Firebase AI Logic SDK to access Google’s powerful Gemini and Imagen* models. This is crucial for several key features:

  • Image validation: The app first uses Gemini 2.5 Flash to validate the user’s photo. This includes checking that the image contains a clear, focused person and meets safety standards before any further processing. This is a critical first step to ensure high-quality and safe outputs.
  • Image captioning: Once validated, the model generates a detailed caption of the user’s image. This is done using structured output, which means the model returns a specific JSON format, making it easier for the app to parse the information (see the sketch after this list). This detailed description helps create a more accurate and creative final result.
  • Android Bot Generation: The generated caption is then used to enrich the prompt for the final image generation. A specifically fine-tuned version of the Imagen 3 model is then called to generate the custom Android bot avatar based on the enriched prompt. This custom fine-tuning ensures the results are unique and align with the app’s playful and stylized aesthetic.
  • The Androidify app also has a “Help me write” feature which uses Gemini 2.5 Flash to create a random description for a bot’s clothing and hairstyle, adding a bit of a fun “I’m feeling lucky” element.
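To make the structured-output step concrete, here is a minimal sketch of requesting a JSON caption from Gemini via Firebase AI Logic. The schema fields (outfit, hairstyle, accessories) are illustrative guesses, not Androidify’s actual schema.

// Hedged sketch: structured JSON output with Firebase AI Logic. The schema
// and field names are illustrative, not Androidify's production schema.
val captionSchema = Schema.obj(
    mapOf(
        "outfit" to Schema.string(),
        "hairstyle" to Schema.string(),
        "accessories" to Schema.array(Schema.string()),
    )
)

val captioningModel = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash",
    generationConfig = generationConfig {
        responseMimeType = "application/json" // ask for JSON instead of prose
        responseSchema = captionSchema        // constrain output to the schema
    },
)

suspend fun describeSelfie(selfie: Bitmap): String? {
    val response = captioningModel.generateContent(
        content {
            image(selfie)
            text("Describe this person's outfit, hairstyle, and accessories.")
        }
    )
    return response.text // a JSON string matching captionSchema
}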


UI with Jetpack Compose and CameraX

The app’s user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.

For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera’s view, enabling the capture button and adding visual guides for the user. It also makes the app’s camera features responsive to different device types, including foldables in tabletop mode.

Androidify also makes extensive use of the latest Compose features, such as:

  • Adaptive layouts: It’s designed to look great on various screen sizes, from phones to foldables and tablets, by leveraging WindowSizeClass and reusable composables.
  • Shared element transitions: The app uses the new Jetpack Navigation 3 library to create smooth and delightful screen transitions, including morphing shape animations that add a polished feel to the user experience.
  • Auto-sizing text: With Compose 1.8, the app uses a new parameter that automatically adjusts font size to fit the container’s available size, which is used for the app’s main “Customize your own Android Bot” text (see the sketch after Figure 1).

Figure 1. Androidify Flow
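To illustrate the auto-sizing bullet above, here is a minimal sketch of Compose 1.8’s auto-sizing text. The composable name and the size values are illustrative, not Androidify’s actual code, and exact package locations may differ slightly across Compose versions.

// Hedged sketch of auto-sizing text in Compose 1.8; values are illustrative
import androidx.compose.foundation.text.BasicText
import androidx.compose.foundation.text.TextAutoSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.sp

@Composable
fun Headline() {
    BasicText(
        text = "Customize your own Android Bot",
        // The font size steps between min and max to fill the available container
        autoSize = TextAutoSize.StepBased(
            minFontSize = 24.sp,
            maxFontSize = 64.sp,
            stepSize = 2.sp,
        ),
    )
}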

Latest updates

In the latest version of Androidify, we’ve added some powerful new AI-driven features.

Background vibe generation with Gemini image editing

Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background “vibe” to bring the Android bots to life.


Figure 2. Combining the Android bot with a background vibe description to generate your new Android Bot in a scene

This is achieved by using Firebase AI Logic: passing a prompt for the background vibe and the input image bitmap of the bot, with instructions to Gemini on how to combine the two together.

suspend fun generateImageWithEdit(
    image: Bitmap,
    backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
): Bitmap {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        modelName = "gemini-2.5-flash-image-preview",
        generationConfig = generationConfig {
            responseModalities = listOf(
                ResponseModality.TEXT,
                ResponseModality.IMAGE,
            )
        },
    )
    // Combine the backgroundPrompt with the input image (the Android Bot)
    // to produce the new bot with a background
    val prompt = content {
        text(backgroundPrompt)
        image(image)
    }
    val response = model.generateContent(prompt)
    val resultImage = response.candidates.firstOrNull()
        ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
    return resultImage ?: throw IllegalStateException("Could not extract image from model response")
}

Sticker mode with ML Kit Subject Segmentation

The app also includes a “Sticker mode” option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot. You can use “Sticker mode” in apps that support stickers.


Figure 3. White background removal of the Android Bot to create a PNG that can be used with apps that support stickers

The code for the sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if it has not, it requests the download and waits for it to complete. Once the model is installed, the app passes the original Android Bot image into the segmenter and calls process on it to remove the background. The resulting foregroundBitmap object is then returned for exporting.

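A hedged sketch of that flow, using ML Kit’s Subject Segmentation client (com.google.mlkit:segmentation-subject) and the Play services ModuleInstall API, might look like this; the function name and error handling are illustrative, and the app’s real implementation lives in LocalSegmentationDataSource.

// Hedged sketch of the sticker flow: ensure the segmentation model is
// installed, then extract the subject on a transparent background.
// removeBackground() is an illustrative name, not Androidify's actual code.
import android.content.Context
import android.graphics.Bitmap
import com.google.android.gms.common.moduleinstall.ModuleInstall
import com.google.android.gms.common.moduleinstall.ModuleInstallRequest
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.segmentation.subject.SubjectSegmentation
import com.google.mlkit.vision.segmentation.subject.SubjectSegmenterOptions
import kotlinx.coroutines.tasks.await

suspend fun removeBackground(context: Context, botImage: Bitmap): Bitmap {
    val segmenter = SubjectSegmentation.getClient(
        SubjectSegmenterOptions.Builder()
            .enableForegroundBitmap() // return the subject with a transparent background
            .build()
    )

    // Check whether the on-demand segmentation module is installed,
    // requesting it and waiting for completion if not
    val moduleClient = ModuleInstall.getClient(context)
    if (!moduleClient.areModulesAvailable(segmenter).await().areModulesAvailable()) {
        moduleClient.installModules(
            ModuleInstallRequest.newBuilder().addApi(segmenter).build()
        ).await()
    }

    // Process the original bot image and return the foreground bitmap
    val result = segmenter.process(InputImage.fromBitmap(botImage, 0)).await()
    return result.foregroundBitmap
        ?: throw IllegalStateException("Segmentation returned no foreground")
}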

See the LocalSegmentationDataSource for the full source implementation.

Learn more

To learn more about Androidify behind the scenes, take a look at the new solutions walkthrough, inspect the code, or try out the experience for yourself at androidify.com or download the app on Google Play.


*Check responses. Compatibility and availability varies. 18+.

Entri cut UI development time by 40% with Gemini in Android Studio (Wed, 03 Sep 2025)


Posted by Paris Hsu – Product Manager

Entri delivers online learning experiences across local languages to over 15 million people in India, empowering them to secure jobs and advance in their careers. To seize on the latest advancements in AI, the Entri team explored a variety of tools to help their developers create better experiences for users.

Their latest experiment? Adopting Gemini in Android Studio to enable them to move faster. Not only did Gemini speed up the team’s work, trim tedious tasks, and foster ongoing learning, it streamlined collaboration between design and development and became an enjoyable, go-to resource that boosted the team’s productivity overall.

Turning screenshots to code—fast

To tighten build time, developers at Entri used Gemini in Android Studio to generate Compose UI code directly from mockups. When the team uploaded screenshots of Figma designs, Gemini produced the UI structures they needed to build entire screens in minutes. Gemini played a key role in revamping the platform’s Sign-Up flow, for example, fast-tracking a process that typically takes hours to just under 45 minutes.

By streamlining the creation of Compose UIs—often from just a screenshot and a few prompts—Gemini also made it significantly easier to quickly prototype new ideas and create MVPs. This allowed their team to test concepts and validate business needs without getting bogged down by repetitive UI tweaks up front.

Entri developers found that the ability to generate code by attaching images in Gemini in Android Studio drastically reduced boilerplate work and improved alignment between design and engineering. Over time, this approach became a standard part of their prototyping process, with the team reporting a 40% reduction in average UI build time per screen.


Faster experimentation to create a better app experience

The Entri team has a strong culture of experimentation, and often has multiple user-facing experiments running at once. The team found Gemini in Android Studio particularly valuable in speeding up their experimentation processes. The tool quickly produced code for A/B testing, including UI changes and feature toggles, allowing the team to conduct experiments faster and iterate in more informed ways. It also made it faster for them to get user feedback and apply it. By simplifying the early build phase and allowing for sharper testing, Gemini boosted their speed and confidence, freeing them up to create more, test faster, and refine smarter.

When it came to launching new AI learning features, Entri wanted to be first to market. With Gemini in Android Studio’s help, the Entri team rolled out their AI Teaching Assistant and Interview Coach to production much faster than they normally could. “What used to take weeks, now takes days,” said Jackson. “And what used to take hours, now takes minutes.”


Tool integration reduces context switching

Gemini in Android Studio has changed the game for Entri’s developers, removing the need to break focus to switch between tools or hunt through external documentation. Now the team receives instant answers to common questions about Android APIs and Kotlin syntax without leaving the application.

For debugging crashes, Gemini was especially useful when paired with App Quality Insights in Android Studio. By sharing stack traces directly with Gemini, developers received targeted suggestions for possible root causes and quick fixes directly in the IDE. This guidance allowed them to resolve crashes reported by Firebase and Google Play more efficiently and with less context switching. Gemini surfaced overlooked edge cases and offered alternative solutions to improve app stability, too.


Shifting focus from routine tasks to innovation

Entri developers also wanted to test the efficiency of Gemini in Android Studio on personal projects. They leaned on the tool to create a weather tracker, password manager, and POS billing system—all on top of their core project work at Entri. They enjoyed trying it out in their personal projects and experimenting with different use cases.

By offloading repetitive tasks and expediting initial UI and screen generation, Gemini has allowed developers to focus more on innovation, exploration, and creativity—things that often get sidelined when dealing with routine coding work. Now the team is able to spend their time refining final products, designing smarter UX, and strategizing, making their day-to-day work more efficient, collaborative, and motivating.

Get started

Ramp up your development processes with Gemini in Android Studio.

Tune in on September 3: recapping the latest from Made by Google and more in our summer episode of The Android Show (Tue, 02 Sep 2025)


Posted by Christopher Katsaros – Senior Product Marketing Manager

In just a few days, on Wednesday, September 3 at 10AM PT, we’ll be dropping our summer episode of #TheAndroidShow, on YouTube and on developer.android.com! In this quarterly show, we’ll be unpacking all of the goodies coming out of this month’s Made by Google event and what you as Android developers need to know!

With the new Pixel Watch 4 running Wear OS 6, we’ll show you how to get building for the wrist. And with the latest foldable from Google, the Pixel 10 Pro Fold, we’ll show how you can leverage out-of-the-box APIs and multi-window experiences to make your apps adaptive for this new form factor. Plus, we’ll be unpacking a set of new features for Gemini in Android Studio to help you be even more productive.

#TheAndroidShow is your conversation with the Android developer community, this time hosted by Annyce Davis and John Zoeller. You’ll hear the latest from the developers and engineers who build Android. Don’t forget to tune in live on September 3 at 10AM PT, on YouTube and on developer.android.com/events/show!
