Ultrahuman launches features 15% faster with Gemini in Android Studio


Posted by Amrit Sanjeev, Developer Relations Engineer and Trevor Johns, Developer Relations Engineer




Ultrahuman is a consumer health-tech startup that provides daily well-being insights to users based on biometric data from the company’s wearables, like the RING Air and the M1 Live Continuous Glucose Monitor (CGM). The Ultrahuman team leaned on Gemini in Android Studio’s contextually aware tools to streamline and accelerate their development process.

Ultrahuman’s app is maintained by a lean team of just eight developers. They prioritize building features their users love, but also carry a backlog of bugs and performance improvements that consumes a lot of time. The team needed to ship more feature improvements and work through that backlog without increasing headcount. Their biggest opportunity was reducing the time and effort the backlog demanded: every hour saved on maintenance could be reinvested into building features for their users.



Solving technical hurdles and boosting performance with Gemini

The team integrated Gemini in Android Studio to see if its AI-enhanced tools could improve their workflow by handling common Android tasks. First, the team turned to the Gemini chat inside Android Studio, with the goal of prototyping a GATT server implementation for their application’s Bluetooth Low Energy (BLE) connectivity.

As Ultrahuman’s Android Development Lead, Arka, noted, “Gemini helped us reach a working prototype in under an hour—something that would have otherwise taken us several hours.” The BLE implementation provided by Gemini worked perfectly for syncing large amounts of health sensor data while the app ran in the background, improving the data syncing process and saving battery life on both the user’s Android phone and Ultrahuman’s paired wearable device.
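On Android, a GATT server is built around BluetoothManager.openGattServer together with BluetoothGattService and BluetoothGattCharacteristic. The sketch below is a minimal illustration of that pattern, not Ultrahuman’s actual implementation; it assumes a Context available as context, the UUIDs are hypothetical placeholders, and the BLUETOOTH_CONNECT permission must already be granted.

// Minimal sketch of a GATT server exposing sensor data; SENSOR_SERVICE_UUID and
// SENSOR_DATA_UUID are hypothetical placeholders
val bluetoothManager = context.getSystemService(BluetoothManager::class.java)

val gattServerCallback = object : BluetoothGattServerCallback() {
    override fun onConnectionStateChange(device: BluetoothDevice, status: Int, newState: Int) {
        // Track which wearables are connected so background syncing can continue
    }
}

val gattServer = bluetoothManager.openGattServer(context, gattServerCallback)

// Expose a primary service with a readable, notifiable characteristic for sensor samples
val sensorService = BluetoothGattService(SENSOR_SERVICE_UUID, BluetoothGattService.SERVICE_TYPE_PRIMARY)
sensorService.addCharacteristic(
    BluetoothGattCharacteristic(
        SENSOR_DATA_UUID,
        BluetoothGattCharacteristic.PROPERTY_READ or BluetoothGattCharacteristic.PROPERTY_NOTIFY,
        BluetoothGattCharacteristic.PERMISSION_READ
    )
)
gattServer.addService(sensorService)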

Beyond this core challenge, Gemini also proved invaluable for finding algorithmic optimizations in a custom open-source library, pointing to helpful documentation, assisting with code commenting, and analyzing crash logs. The Ultrahuman team also used code completion to help them breeze through writing otherwise repetitive code, Jetpack Compose Preview Generation to enable rapid iteration during UI design, and Agent Mode for managing complex, project-wide changes, such as rendering a new stacked bar graph that mapped to backend data models and UI models.

Transforming productivity and accelerating feature delivery 

These improvements have saved the team dozens of hours each week. This reclaimed time is being used to deliver new features to Ultrahuman’s beta users 10-15% faster. For example, the team built a new in-app AI assistant for users, powered by Gemini 2.5 Flash. The UI design, architecture, and parts of the user experience for this new feature were initially suggested by Gemini in Android Studio—showcasing a full-circle AI-assisted development process. 

Accelerate your Android development with Gemini

Gemini’s expert Android advice, closely integrated throughout Android Studio, helps Android developers spend less time digging through documentation and writing boilerplate code—freeing up more time to innovate.

Learn how Gemini in Android Studio can help your team resolve complex issues, streamline workflows, and ship new features faster.

Build smarter apps with Gemini 3 Flash


Posted by Thomas Ezan, Senior Developer Relations Engineer



Today, we’re expanding the Gemini 3 model family with the release of Gemini 3 Flash, frontier intelligence built for speed at a fraction of the cost. You can start building with it immediately, as we’re officially launching Gemini 3 Flash on Firebase AI Logic. Available globally, the Gemini 3 Flash preview model can be securely accessed directly from your app via the Gemini Developer API or the Vertex AI Gemini API using the Firebase AI Logic client SDKs. Gemini 3 Flash’s strong performance in reasoning, tool use, and multimodal capabilities makes it ideal for developers looking to do more complex video analysis, data extraction, and visual Q&A.

Gemini 3, optimized for low latency

Gemini 3 is our most intelligent model family to date. With the launch of Gemini 3 Flash, we are making that intelligence more accessible for low-latency and cost-effective use cases. While Gemini 3 Pro is designed for complex reasoning, Gemini 3 Flash is engineered to be significantly faster and more cost-effective for your production apps.

Seamless integration with Firebase AI Logic

Just like the Pro model, Gemini 3 Flash is available in preview directly through the Firebase AI Logic SDK. This means you can integrate it into your Android app without needing to do any complex server-side setup.

Here is how to add it to your Kotlin code:


// Create a GenerativeModel backed by the Gemini Developer API and the Gemini 3 Flash preview model
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(modelName = "gemini-3-flash-preview")
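Once the model is initialized, generating a response is a single suspending call you make from a coroutine; the prompt below is just an example:

// Suspending call: invoke from a coroutine (for example, inside viewModelScope.launch { ... })
val response = model.generateContent("Summarize the benefits of a low-latency model in one sentence.")
val text = response.text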

Scale with Confidence

In addition, Firebase enables you to keep your growth secure and manageable with:

AI Monitoring

The Firebase AI monitoring dashboard gives you visibility into latency, success rates, and costs, allowing you to slice data by model name to see exactly how the model performs.

Server Prompt Templates

You can use server prompt templates to store your prompt and schema securely on Firebase servers instead of hardcoding them in your app binary. This capability ensures your sensitive prompts remain secure, prevents unauthorized prompt extraction, and allows for faster iteration without requiring app updates.

---
model: 'gemini-3-flash-preview'
input:
  schema:
    topic:
      type: 'string'
      minLength: 2
      maxLength: 40
    length:
      type: 'number'
      minimum: 1
      maximum: 200
    language:
      type: 'string'
---

{{role "system"}}
You're a storyteller that tells nice and joyful stories with happy endings.

{{role "user"}}
Create a story about {{topic}} with the length of {{length}} words in the {{language}} language.

Prompt template defined on the Firebase Console  

// Create a model instance backed by a server-side prompt template
val generativeModel = Firebase.ai.templateGenerativeModel()

// Reference the template by its ID and supply values for its input variables
val response = generativeModel.generateContent(
    "storyteller-v10",
    mapOf(
        "topic" to topic,
        "length" to length,
        "language" to language
    )
)
_output.value = response.text

Code snippet accessing the prompt template

Gemini 3 Flash for AI development assistance in Android Studio

Gemini 3 Flash is also available for AI assistance in Android Studio. While Gemini 3 Pro Preview is our best model for coding and agentic experiences, Gemini 3 Flash is engineered for speed, and great for common development tasks and questions.

 
The new model is rolling out to developers using Gemini in Android Studio at no cost (as the default model) starting today. For higher usage rate limits and longer sessions with Agent Mode, you can use an AI Studio API key to leverage the full capabilities of either Gemini 3 Flash or Gemini 3 Pro. We’re also rolling out Gemini 3 model family access with higher usage rate limits to developers who have Gemini Code Assist Standard or Enterprise licenses. Your IT administrator will need to enable access to preview models through the Google Cloud console.

Get Started Today

You can start experimenting with Gemini 3 Flash via Firebase AI Logic today. Learn more in the Android and Firebase documentation. Try out any of the new Gemini 3 models in Android Studio for development assistance, and let us know what you think! As always, you can follow us on LinkedIn, our blog, YouTube, and X.

Start building for glasses, new devices for Android XR and more in The Android Show | XR Edition


Posted by Matthew McCullough – VP of Product Management, Android Developer



Today, during The Android Show | XR Edition, we shared a look at the expanding Android XR platform, which is fundamentally evolving to bring a unified developer experience to the entire XR ecosystem. The latest announcements, from Developer Preview 3 to exciting new form factors, are designed to give you the tools and platform you need to create the next generation of XR experiences. Let’s dive into the details!

A spectrum of new devices ready for your apps

The Android XR platform is quickly expanding, providing more users and more opportunities for your apps. This growth is anchored by several new form factors that expand the possibilities for XR experiences.


A major focus is on lightweight, all-day wearables. At I/O, we announced we are working with Samsung and our partners Gentle Monster and Warby Parker to design stylish, lightweight AI glasses and Display AI glasses that you can wear comfortably all day.  The integration of Gemini on glasses is set to unlock helpful, intelligent experiences like live translation and searching what you see.

And partners like Uber are already exploring how AI Glasses can streamline the rider experience by providing simple, contextual directions and trip status right in the user’s view.


The ecosystem is simultaneously broadening its scope to include wired XR glasses, exemplified by Project Aura from XREAL. This device blends the immersive experiences typically found in headsets with portability and real-world presence. Project Aura is scheduled for launch next year.

New tools unlock development for all form factors

If you are developing for Android, you are already developing for Android XR. The release of Android XR SDK Developer Preview 3 brings increased stability for headset APIs and, most significantly, opens up development for AI Glasses. 


You can now build augmented experiences for AI glasses using new libraries like Jetpack Compose Glimmer, a UI toolkit for transparent displays, and Jetpack Projected, which lets you extend your Android mobile app directly to glasses. Furthermore, the SDK now includes powerful ARCore for Jetpack XR updates, such as Geospatial capabilities for wayfinding.

For immersive experiences on headsets and wired XR glasses like Project Aura from XREAL, this release also provides new APIs for detecting a device’s field-of-view, helping your adaptive apps adjust their UI.

Check out our post on the Android XR Developer Preview 3 to learn more about all the latest updates. 

Expanding your reach with new engine ecosystems

The Android XR platform is built on the OpenXR standard, enabling integration with the tools you already use so you can build with your preferred engine.

Developers can use Unreal Engine’s native Android and OpenXR capabilities today to build for Android XR, leveraging the existing VR Template for immersive experiences. To provide additional, optimized extensions for the Android XR platform, a Google vendor plugin, including support for hand tracking, hand mesh, and more, will be released early next year.

Godot now includes Android XR support, leveraging its focus on OpenXR to enable development for devices like Samsung Galaxy XR. The new Godot OpenXR vendor plugin v4.2.2 stable allows developers to port their existing projects to the platform. 

Watch The Android Show | XR Edition

Thank you for tuning into The Android Show | XR Edition. Start building differentiated experiences today using the Developer Preview 3 SDK and test your apps with the XR Emulator in Android Studio. Your feedback is crucial as we continue to build this platform together. Head over to developer.android.com/xr to learn more and share your feedback.


Boost user engagement with AI Image Generation


Posted by Thomas Ezan, Senior Developer Relations Engineer and Mozart Louis, Developer Relations Engineer


  

Adding custom images to your app can significantly improve and personalize user experience and boost user engagement. This post explores two new capabilities for image generation with Firebase AI Logic: the specialized Imagen editing features, currently in preview, and the general availability of Gemini 2.5 Flash Image (a.k.a “Nano Banana”), designed for contextual or conversational image generation.

  

  

Boost user engagement with images generated via Firebase AI Logic

Image generation models can be used to create custom user profile avatars or to integrate personalized visual assets directly into key screen flows.

  

For example, Imagen offers new editing features (in developer preview). You can now draw a mask and utilize inpainting to generate pixels within the masked area. Additionally, outpainting is available to generate pixels outside the mask.
  

 

  

Imagen supports inpainting, letting you regenerate only part of an image.

  

Alternatively, Gemini 2.5 Flash Image (a.k.a Nano Banana), can use extended world knowledge and the reasoning capabilities of the Gemini models to generate contextually relevant images, which is ideal for creating dynamic illustrations that align with a user’s current in-app experience.   

  

 Use Gemini 2.5 Flash Image to create dynamic illustrations contextually relevant to your app. 

  

Finally, the ability to conversationally and iteratively edit images allows users to edit a photo using natural language.

  

Use Gemini 2.5 Flash Image to edit a picture using natural language.

  

When starting to integrate AI into your application, it is important to learn about AI safety. It is particularly important to assess your application’s safety risks, consider adjustments to mitigate them, perform safety testing appropriate to your use case, solicit user feedback, and monitor content.

  

Imagen or Gemini: The choice is yours 

The difference between Gemini 2.5 Flash Image (“Nano Banana”) and Imagen lies in their primary focus and advanced capabilities. Gemini 2.5 Flash Image, as an image model within the larger Gemini family, excels in conversational image editing, maintaining context and subject consistency across multiple iterations, and leveraging “world knowledge and reasoning” to create contextually relevant visuals or embed accurate visuals within long text sequences. 

  

Imagen is Google’s specialized image generation model, designed for greater creative control, specializing in highly photorealistic outputs, artistic detail, specific styles, and providing explicit controls for specifying the aspect ratio or format of the generated image.

  

Gemini 2.5 Flash Image (Nano Banana 🍌):

  • 🌎 world knowledge and reasoning for more contextually relevant images
  • 💬 edit images conversationally while maintaining context
  • 📖 embed accurate visuals within long text sequences

Imagen:

  • 📐 specify the aspect ratio or format of generated images
  • 🖌 support for mask-based editing (inpainting and outpainting)
  • 🎚 greater control over details of the generated image (quality, artistic detail, and specific styles)

Let’s see how to use them in your app.

Inpainting with Imagen 

A few months ago, we released new editing features for Imagen. Although Imagen is now ready for production for image generation, editing features are still in developer preview.

  

Imagen editing features include inpainting and outpainting, both mask-based image editing capabilities. They allow users to modify specific areas of an image without regenerating the entire picture, meaning you can preserve the best parts of your image and only alter what you wish to change.

 

Use Imagen editing features to make precise, targeted changes in an image while preserving the integrity of the rest of the image.

These changes are made while maintaining the core elements and overall integrity of the original image and modifying only the area in the mask.

To implement inpainting with Imagen, first initialize imagen-3.0-capability-001, a specific Imagen model that supports editing features:

// Copyright 2025 Google LLC.
// SPDX-License-Identifier: Apache-2.0
val editingModel =
        Firebase.ai(backend = GenerativeBackend.vertexAI()).imagenModel(
            "imagen-3.0-capability-001",
            generationConfig = ImagenGenerationConfig(
                numberOfImages = 1,
                aspectRatio = ImagenAspectRatio.SQUARE_1x1,
                imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
            ),
        )

From there, define the inpainting function:


// Copyright 2025 Google LLC.
// SPDX-License-Identifier: Apache-2.0

val prompt = "remove the pancakes and make it an omelet instead"

suspend fun inpaintImageWithMask(sourceImage: Bitmap, maskImage: Bitmap, prompt: String, editSteps: Int = 50): Bitmap {
        val imageResponse = editingModel.editImage(
            referenceImages = listOf(
                ImagenRawImage(sourceImage.toImagenInlineImage()),
                ImagenRawMask(maskImage.toImagenInlineImage()),
            ),
            prompt = prompt,
            config = ImagenEditingConfig(
                editMode = ImagenEditMode.INPAINT_INSERTION,
                editSteps = editSteps,
            ),
        )
        return imageResponse.images.first().asBitmap()
    }

You provide a sourceImage, a maskImage, a prompt describing the edit, and the number of edit steps to perform.
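For example, calling the function above with a source photo and a user-drawn mask could look like this (photoBitmap and maskBitmap are hypothetical Bitmaps already held in memory):

// Hypothetical usage of the inpainting helper defined above
val editedBitmap = inpaintImageWithMask(
    sourceImage = photoBitmap,
    maskImage = maskBitmap,
    prompt = "remove the pancakes and make it an omelet instead",
)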

You can see it in action in the Imagen Editing Sample in the Android AI Sample catalog!

Imagen also supports outpainting, which lets the model generate the pixels outside of a mask. You can also use Imagen’s image customization capabilities to change the style of a picture or update a subject in a picture. Read more about it in the Android developer documentation.

Conversational image generation with Gemini 2.5 Flash Image

One way to edit images with Gemini 2.5 Flash Image is to use the model’s multi-turn chat capabilities.

First, initialize the model:

// Copyright 2025 Google LLC.
// SPDX-License-Identifier: Apache-2.0

val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

To achieve a similar outcome to the mask-based Imagen method described above, we can utilize the chat API to initiate a conversation with Gemini 2.5 Flash Image.

// Copyright 2025 Google LLC.
// SPDX-License-Identifier: Apache-2.0

// Initialize the chat
val chat = model.startChat()


// Load a bitmap
val source = ImageDecoder.createSource(context.contentResolver, uri)
val bitmap = ImageDecoder.decodeBitmap(source)


// Create the initial prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("remove the pancakes and add an omelet")
}

// To generate an initial response, send a user message with the image and text prompt
var response = chat.sendMessage(prompt)

// Inspect the returned image
var generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

// Follow up requests do not need to specify the image again
response = chat.sendMessage("Now, center the omelet in the pan")
generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

You can see it in action in the Gemini Image Chat sample in the Android AI Sample catalog and read more about it in the Android documentation.

Conclusion

Both Imagen and Gemini 2.5 Flash Image offer powerful capabilities, allowing you to select the ideal image generation model to personalize your app and boost user engagement, depending on your specific use case.


Gratitude’s developers released 2X the amount of innovative experiments with the help of Gemini in Android Studio



Posted by Sandhya Mohan, Product Manager


Gratitude is a mental wellness Android app that encourages self-care and positivity with techniques like in-app journaling, affirmations, and vision boards. These mindfulness exercises need to be free from performance bottlenecks, bugs, and errors for the app to be truly immersive and helpful—but researching solutions and debugging code took valuable time away from experimenting on new features. To find a better balance, Gratitude used Gemini in Android Studio to help improve the app’s code and streamline the development process, enabling the team to implement those exciting new features faster.


Unlocking new efficiencies with Gemini in Android Studio

Gratitude’s AI image generation feature, built in record time with the help of Gemini in Android Studio


The Gratitude team decided to try Gemini in Android Studio, an AI assistant that supports developers throughout all stages of development, helping them be more productive. Developers can ask Gemini questions and receive context-aware solutions based on their code. Divij Gupta, senior Android developer at Gratitude, shared that the Gratitude team needed to know if it was possible to inject any object into a Kotlin object class using Hilt. Gemini suggested using an EntryPoint to access dependencies in classes where standard injection isn’t possible, which helped solve their “tricky problem,” according to Divij.
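As a rough illustration of that suggestion (a generic sketch rather than Gratitude’s actual code; AnalyticsLogger and the other names are hypothetical), an @EntryPoint lets a Kotlin object reach Hilt-managed dependencies even though constructor injection isn’t available there:

// Hypothetical sketch: expose a Hilt-managed dependency to a Kotlin `object` via an EntryPoint
@EntryPoint
@InstallIn(SingletonComponent::class)
interface AnalyticsEntryPoint {
    fun analyticsLogger(): AnalyticsLogger
}

object AnalyticsHelper {
    fun logEvent(context: Context, event: String) {
        // Resolve the dependency from the application-level Hilt component
        val entryPoint = EntryPointAccessors.fromApplication(
            context.applicationContext,
            AnalyticsEntryPoint::class.java,
        )
        entryPoint.analyticsLogger().log(event)
    }
}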

Gemini eliminated the need to search for Android documentation as well, enabling the Gratitude team to learn and apply their knowledge without having to leave Android Studio. “Gemini showed me how to use Android Studio’s CPU and memory profilers more effectively,” recalled Divij. “I also learned how to set up baseline profiles to speed up cold starts.”

Identifying performance bottlenecks became easier too. When analyzing the Gratitude team’s code, Gemini suggested using collectAsStateWithLifecycle instead of collectAsState to collect flows in composables, which helps the app handle lifecycle events more effectively and improves overall performance. Gemini also analyzes the app’s crash reports in the App Quality Insights panel and provides guidance on how to address each issue, which enabled the Gratitude team to “identify root causes faster, catch edge cases we might have missed, and improve overall app stability,” according to Divij.
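For reference, the lifecycle-aware collection looks like this in a composable (a minimal sketch assuming a ViewModel that exposes a StateFlow and the lifecycle-runtime-compose dependency; JournalScreen and the other names are hypothetical):

@Composable
fun JournalScreen(viewModel: JournalViewModel) {
    // Lifecycle-aware: collection stops when the UI drops below STARTED,
    // unlike collectAsState(), which keeps collecting as long as the composition exists
    val uiState by viewModel.uiState.collectAsStateWithLifecycle()
    JournalContent(uiState)
}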

Experimenting with new features using Gemini in Android Studio


Gemini in Android Studio helped the Gratitude team significantly improve their development speed and morale. “This faster cycle has made the team feel more productive, motivated, and excited to keep innovating,” said Divij. Developers are able to spend more time ideating and experimenting on new features, leading to innovative new experiences.

One feature the developers built with their newfound time is an image generation function for the app’s vision boards feature. Users can now upload a photo with a prompt, and then receive an AI-generated image that they can instantly pin to their board. The team was able to build the UI using Gemini in Android Studio’s Compose Preview Generation — allowing them to quickly visualize their Jetpack Compose code and craft the pixel-perfect UI their designers intended.

Going forward, the Gratitude team plans to use Gemini to implement more improvements to its code, including fixing glitches and memory leaks and boosting performance based on further insights from Gemini, which will continue to improve the user experience.

Build with Gemini in Android Studio


Discover all of the features available as part of Gemini in Android Studio that can accelerate your development, such as code completion, code explanation, Agent Mode, document generation, and more.

Androidify: Building AI first Android Experiences with Gemini using Jetpack Compose and Firebase


Posted by Rebecca Franks – Developer Relations Engineer, Tracy Agyemang – Product Marketer, and Avneet Singh – Product Manager

Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it’s available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt of what you’re looking for, add some accessories, and watch as AI builds your unique bot. Once you’ve had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let’s dive in!

Key technical integrations

The Androidify app combines powerful technologies to deliver a seamless and engaging user experience. Here’s a breakdown of the core components and their roles:

AI with Gemini and Firebase

Androidify leverages the Firebase AI Logic SDK to access Google’s powerful Gemini and Imagen* models. This is crucial for several key features:

  • Image validation: The app first uses Gemini 2.5 Flash to validate the user’s photo. This includes checking that the image contains a clear, focused person and meets safety standards before any further processing. This is a critical first step to ensure high-quality and safe outputs.
  • Image captioning: Once validated, the model generates a detailed caption of the user’s image. This is done using structured output, which means the model returns a specific JSON format, making it easier for the app to parse the information. This detailed description helps create a more accurate and creative final result.
  • Android Bot Generation: The generated caption is then used to enrich the prompt for the final image generation. A specifically fine-tuned version of the Imagen 3 model is then called to generate the custom Android bot avatar based on the enriched prompt. This custom fine-tuning ensures the results are unique and align with the app’s playful and stylized aesthetic.
  • “Help me write”: The Androidify app also has a feature which uses Gemini 2.5 Flash to create a random description for a bot’s clothing and hairstyle, adding a fun “I’m feeling lucky” element.

    gif showcasing the help me write button
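    As a rough sketch of the structured output step described above (the model name and schema fields here are illustrative rather than Androidify’s actual ones, and this assumes the Schema builder in the Firebase AI Logic SDK), a JSON response schema can be declared in the generation config:

    // Illustrative only: constrain responses to JSON matching a simple caption schema
    val captionModel = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        modelName = "gemini-2.5-flash",
        generationConfig = generationConfig {
            responseMimeType = "application/json"
            responseSchema = Schema.obj(
                mapOf(
                    "description" to Schema.string(),
                    "accessories" to Schema.array(Schema.string()),
                )
            )
        },
    )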

    UI with Jetpack Compose and CameraX

    The app’s user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.

    For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera’s view, enabling the capture button and adding visual guides for the user. It also makes the app’s camera features responsive to different device types, including foldables in tabletop mode.

    Androidify also makes extensive use of the latest Compose features, such as:

  • Adaptive layouts: It’s designed to look great on various screen sizes, from phones to foldables and tablets, by leveraging WindowSizeClass and reusable composables.
  • Shared element transitions: The app uses the new Jetpack Navigation 3 library to create smooth and delightful screen transitions, including morphing shape animations that add a polished feel to the user experience.
  • Auto-sizing text: With Compose 1.8, the app uses a new parameter that automatically adjusts font size to fit the container’s available size, which is used for the app’s main “Customize your own Android Bot” text.
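    For instance, the auto-sizing behavior can be sketched roughly like this (assuming the autoSize parameter added to BasicText in Compose 1.8; the size values are illustrative):

    // Illustrative values: scale the headline between 24.sp and 64.sp to fit its container
    BasicText(
        text = "Customize your own Android Bot",
        autoSize = TextAutoSize.StepBased(
            minFontSize = 24.sp,
            maxFontSize = 64.sp,
            stepSize = 2.sp,
        ),
    )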

    Figure 1. Androidify Flow

    Latest updates

    In the latest version of Androidify, we’ve added some powerful new AI-driven features.

    Background vibe generation with Gemini Image editing

    Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background “vibe” to bring the Android bots to life.


    Figure 2. Combining the Android bot with a background vibe description to generate your new Android Bot in a scene

    This is achieved with Firebase AI Logic by passing a prompt for the background vibe and the input image bitmap of the bot, with instructions to Gemini on how to combine the two.

    override suspend fun generateImageWithEdit(
            image: Bitmap,
            backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
        ): Bitmap {
            val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
                modelName = "gemini-2.5-flash-image-preview",
                generationConfig = generationConfig {
                    responseModalities = listOf(
                        ResponseModality.TEXT,
                        ResponseModality.IMAGE,
                    )
                },
            )
    	  // We combine the backgroundPrompt with the input image which is the Android Bot, to produce the new bot with a background
            val prompt = content {
                text(backgroundPrompt)
                image(image)
            }
            val response = model.generateContent(prompt)
            val image = response.candidates.firstOrNull()
                ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
            return image ?: throw IllegalStateException("Could not extract image from model response")
        }

    Sticker mode with ML Kit Subject Segmentation

    The app also includes a “Sticker mode” option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot. You can use “Sticker mode” in apps that support stickers.


    Figure 3. White background removal of Android Bot to create a PNG that can be used with apps that support stickers

    The code for the sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if it has not, it requests the download and waits for it to complete. If the model is already installed, the app passes the original Android Bot image into the segmenter and calls process on it to remove the background. The foregroundBitmap object is then returned for exporting.

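    The snippet below is a rough sketch of that flow using the ML Kit Subject Segmentation API, not the app’s actual LocalSegmentationDataSource; it assumes the kotlinx-coroutines-play-services await() extension and a Context available as context.

    // Hypothetical sketch of background removal with ML Kit Subject Segmentation
    val segmenter = SubjectSegmentation.getClient(
        SubjectSegmenterOptions.Builder()
            .enableForegroundBitmap()
            .build()
    )

    suspend fun removeBackground(botImage: Bitmap): Bitmap? {
        // Ensure the segmentation model is downloaded and installed before processing
        ModuleInstall.getClient(context)
            .installModules(ModuleInstallRequest.newBuilder().addApi(segmenter).build())
            .await()

        // Segment the bot and keep only the foreground bitmap for export as a sticker
        val result = segmenter.process(InputImage.fromBitmap(botImage, 0)).await()
        return result.foregroundBitmap
    }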

    See the LocalSegmentationDataSource for the full source implementation.

    Learn more

    To learn more about Androidify behind the scenes, take a look at the new solutions walkthrough, inspect the code, or try out the experience for yourself at androidify.com and download the app on Google Play.

    Moving demo of the Androidify app

    *Check responses. Compatibility and availability varies. 18+.

    Entri cut UI development time by 40% with Gemini in Android Studio


    Posted by Paris Hsu – Product Manager

    Entri delivers online learning experiences across local languages to over 15 million people in India, empowering them to secure jobs and advance in their careers. To seize on the latest advancements in AI, the Entri team explored a variety of tools to help their developers create better experiences for users.

    Their latest experiment? Adopting Gemini in Android Studio to enable them to move faster. Not only did Gemini speed up the team’s work, trim tedious tasks, and foster ongoing learning; it also streamlined collaboration between design and development and became an enjoyable, go-to resource that boosted the team’s productivity overall.

    Turning screenshots to code—fast

    To tighten build time, developers at Entri used Gemini in Android Studio to generate Compose UI code directly from mockups. When they uploaded screenshots of Figma designs, Gemini produced the UI structures they needed to build entire screens in minutes. Gemini played a key role in revamping the platform’s Sign-Up flow, for example, fast-tracking a process that typically takes hours down to just under 45 minutes.

    By streamlining the creation of Compose UIs—often from just a screenshot and a few prompts—Gemini also made it significantly easier to quickly prototype new ideas and create MVPs. This allowed their team to test concepts and validate business needs without getting bogged down by repetitive UI tweaks up front.

    Entri developers found that the ability to generate code by attaching images in Gemini in Android Studio drastically reduced boilerplate work and improved alignment between design and engineering. Over time, this approach became a standard part of their prototyping process, with the team reporting a 40% reduction in average UI build time per screen.

    quote from Jackson E J, Technical Lead, Mobile @ Entri

    Faster experimentation to create a better app experience

    The Entri team has a strong culture of experimentation, and often has multiple user-facing experiments running at once. The team found Gemini in Android Studio particularly valuable in speeding up their experimentation processes. The tool quickly produced code for A/B testing, including UI changes and feature toggles, allowing the team to conduct experiments faster and iterate in more informed ways. It also made it faster for them to get user feedback and apply it. By simplifying the early build phase and allowing for sharper testing, Gemini boosted their speed and confidence, freeing them up to create more, test faster, and refine smarter.

    When it came to launching new AI learning features, Entri wanted to be first to market. With Gemini in Android Studio’s help, the Entri team rolled out their AI Teaching Assistant and Interview Coach to production much faster than they normally could. “What used to take weeks, now takes days,” said Jackson. “And what used to take hours, now takes minutes.”

    quote from Sanjay Krishna, Head of Product @ Entri

    Tool integration reduces context switching

    Gemini in Android Studio has changed the game for Entri’s developers, removing the need to break focus to switch between tools or hunt through external documentation. Now the team receives instant answers to common questions about Android APIs and Kotlin syntax without leaving the application.

    For debugging crashes, Gemini was especially useful when paired with App Quality Insights in Android Studio. By sharing stack traces directly with Gemini, developers received targeted suggestions for possible root causes and quick fixes directly in the IDE. This guidance allowed them to resolve crashes reported by Firebase and Google Play more efficiently and with less context switching. Gemini surfaced overlooked edge cases and offered alternative solutions to improve app stability, too.

    quote from Jackson E J, Technical Lead, Mobile @ Entri

    Shifting focus from routine tasks to innovation

    Entri developers also wanted to test the efficiency of Gemini in Android Studio on personal projects. They leaned on the tool to create a weather tracker, password manager, and POS billing system—all on top of their core project work at Entri. They enjoyed trying the tool out and experimenting with different use cases.

    By offloading repetitive tasks and expediting initial UI and screen generation, Gemini has allowed developers to focus more on innovation, exploration, and creativity—things that often get sidelined when dealing with routine coding work. Now the team is able to spend their time refining final products, designing smarter UX, and strategizing, making their day-to-day work more efficient, collaborative, and motivating.

    Get started

    Ramp up your development processes with Gemini in Android Studio.

    Tune in on September 3: recapping the latest from Made by Google and more in our summer episode of The Android Show


    Posted by Christopher Katsaros – Senior Product Marketing Manager

    In just a few days, on Wednesday, September 3 at 11AM PT, we’ll be dropping our summer episode of #TheAndroidShow on YouTube and on developer.android.com! In this quarterly show, we’ll be unpacking all of the goodies coming out of this month’s Made by Google event and what you as Android developers need to know!

    With the new Pixel Watch 4 running Wear OS 6, we’ll show you how to get building for the wrist. And with the latest foldable from Google, the Pixel 10 Pro Fold, we’ll show how you can leverage out-of-the-box APIs and multi-window experiences to make your apps adaptive for this new form factor. Plus, we’ll be unpacking a set of new features for Gemini in Android Studio to help you be even more productive.

    #TheAndroidShow is your conversation with the Android developer community, this time hosted by Annyce Davis and John Zoeller. You’ll hear the latest from the developers and engineers who build Android. Don’t forget to tune in live on September 3 at 10AM PT on YouTube and on developer.android.com/events/show!

    Top 3 Updates for Android Developer Productivity @ Google I/O ‘25


    Posted by Meghan Mehta – Android Developer Relations Engineer

    #1 Agentic AI is available for Gemini in Android Studio

    Gemini in Android Studio is the AI-powered coding companion that makes you more productive at every stage of the dev lifecycle. At Google I/O 2025 we previewed new agentic AI experiences: Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier for you to build and test code. We also announced Agent Mode, which was designed to handle complex, multi-stage development tasks that go beyond typical AI assistant capabilities, invoking multiple tools to accomplish tasks on your behalf. We’re excited to see how you leverage these agentic AI experiences which are now available in the latest preview version of Android Studio on the canary release channel.

    You can also use Gemini to automatically generate Jetpack Compose previews, as well as transform UI code using natural language, saving you time and effort. Give Gemini more context by attaching images and project files to your prompts, so you can get more relevant responses. And if you’re looking for enterprise-grade privacy and security features backed by Google Cloud, Gemini in Android Studio for businesses is now available. Developers and admins can unlock these features and benefits by subscribing to Gemini Code Assist Standard or Enterprise editions.

    #2 Build better apps faster with the latest stable release of Jetpack Compose

    Compose is our recommended UI toolkit for Android development, used by over 60% of the top 1K apps on Google Play. We released a new version of our Jetpack Navigation library: Navigation 3, which has been rebuilt from the ground up to give you more flexibility and control over your implementation. We unveiled the new Material 3 Expressive update, which provides tools to enhance your product’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for your users. The latest stable Bill of Materials (BOM) release for Compose adds new features such as autofill support, auto-sizing text, visibility tracking, an animate bounds modifier, accessibility checks in tests, and more! This release also includes significant rewrites and improvements to multiple sub-systems, including semantics, focus, and text optimizations.

    These optimizations are available to you with no code changes other than upgrading your Compose dependency. If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we’re working on including pausable composition, updates to LazyLayout prefetch, context menus, and others. Finally, we’ve added Compose support to CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.

    #3 The new Kotlin Multiplatform (KMP) shared module template helps you share business logic

    KMP enables teams to deliver quality Android and iOS apps with less development time. The KMP ecosystem continues to grow: last year alone, over 900 new KMP libraries were published. At Google I/O we released a new Android Studio KMP shared module template to help you craft and manage business logic, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help you get started with KMP. We also shared additional announcements at KotlinConf.

    Learn more about what we announced at Google I/O 2025 to help you build better apps, faster.

    Agentic AI takes Gemini in Android Studio to the next level


    Posted by Sandhya Mohan – Product Manager, and Jose Alcérreca – Developer Relations Engineer

    Software development is undergoing a significant evolution, moving beyond reactive assistants to intelligent agents. These agents don’t just offer suggestions; they can create execution plans, utilize external tools, and make complex, multi-file changes. This results in a more capable AI that can iteratively solve challenging problems, fundamentally changing how developers work.

    At Google I/O 2025, we offered a glimpse into our work on agentic AI in Android Studio, the integrated development environment (IDE) focused on Android development. We showcased that by combining agentic AI with the built-in portfolio of tools inside of Android Studio, the IDE is able to assist you in developing Android apps in ways that were never possible before. We are now incredibly excited to announce the next frontier in Android development with the availability of ‘Agent Mode’ for Gemini in Android Studio.

    These features are available in the latest Android Studio Narwhal Feature Drop Canary release, and will be rolled out to business tier subscribers in the coming days. As with all new Android Studio features, we invite developers to provide feedback to direct our development efforts and ensure we are creating the tools you need to build better apps, faster.

    Agent Mode

    Gemini in Android Studio’s Agent Mode is a new experimental capability designed to handle complex development tasks that go beyond what you can experience by just chatting with Gemini.

    With Agent Mode, you can describe a complex goal in natural language — from generating unit tests to complex refactors — and the agent formulates an execution plan that can span multiple files in your project and executes under your direction. Agent Mode uses a range of IDE tools for reading and modifying code, building the project, searching the codebase and more to help Gemini complete complex tasks from start to finish with minimal oversight from you.

    To use Agent Mode, click Gemini in the sidebar, then select the Agent tab, and describe a task you’d like the agent to perform. Some examples of tasks you can try in Agent Mode include:

      • Build my project and fix any errors
      • Extract any hardcoded strings used across my project and migrate to strings.xml
      • Add support for dark mode to my application
      • Given an attached screenshot, implement a new screen in my application using Material 3

    The agent then suggests edits and iteratively fixes bugs to complete tasks. You can review, accept, or reject the proposed changes along the way, and ask the agent to iterate on your feedback.

    moving image showing Gemini breaking tasks down into a plan with simple steps, and the list of IDE tools it needs to complete each step

    Gemini breaks tasks down into a plan with simple steps. It also shows the list of IDE tools it needs to complete each step.

    While the agent is powerful, you are firmly in control, with the ability to review, refine, and guide its output at every step. When the agent proposes code changes, you can choose to accept or reject them.

    screenshot of Gemini in Android Studio showing the Agent prompting the user to accept or reject a change

    The Agent waits for the developer to approve or reject a change.

    Additionally, you can enable “Auto-approve” if you are feeling lucky 😎 — especially useful when you want to iterate on ideas as rapidly as possible.

    You can delegate routine, time-consuming work to the agent, freeing up your time for more creative, high-value work. Try out Agent Mode in the latest preview version of Android Studio – we look forward to seeing what you build! We are investing in building more agentic experiences for Gemini in Android Studio to make your development even more intuitive, so you can expect to see more agentic functionality over the next several releases.

    moving image showing Gemini understanding the context of an app

    Gemini is capable of understanding the context of your app

    Supercharge Agent Mode with your Gemini API key

    screenshot of Gemini API key prompt in Android Studio

    The default Gemini model has a generous no-cost daily quota with a limited context window. However, you can now add your own Gemini API key to expand Agent Mode’s context window to a massive 1 million tokens with Gemini 2.5 Pro.

    A larger context window lets you send more instructions, code and attachments to Gemini, leading to even higher quality responses. This is especially useful when working with agents, as the larger context provides Gemini 2.5 Pro with the ability to reason about complex or long-running tasks.

    screenshot of how to add your API Key in the Gemini settings

    Add your API key in the Gemini settings

    To enable this feature, get a Gemini API key by navigating to Google AI Studio. Sign in and get a key by clicking on the “Get API key” button. Then, back in Android Studio, navigate to the settings by going to File (Android Studio on macOS) > Settings > Tools > Gemini to enter your Gemini API key. Relaunch Gemini in Android Studio and get even better responses from Agent Mode.

    Be sure to safeguard your Gemini API key, as additional charges apply for Gemini API usage associated with a personal API key. You can monitor your Gemini API key usage by navigating to AI Studio and selecting Get API key > Usage & Billing.

    Note that business tier subscribers already get access to Gemini 2.5 Pro and the expanded context window automatically with their Gemini Code Assist license, so these developers will not see an API key option.

    Model Context Protocol (MCP)

    Gemini in Android Studio’s Agent Mode can now interact with external tools via the Model Context Protocol (MCP). This feature provides a standardized way for Agent Mode to use tools and extend knowledge and capabilities with the external environment.

    There are many tools you can connect to the MCP Host in Android Studio. For example, you could integrate with the GitHub MCP Server to create pull requests directly from Android Studio. Here are some additional use cases to consider.

    In this initial release of MCP support in the IDE you will configure your MCP servers through a mcp.json file placed in the configuration directory of Studio, using the following format:

    {
      "mcpServers": {
        "memory": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-memory"
          ]
        },
        "sequential-thinking": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-sequential-thinking"
          ]
        },
        "github": {
          "command": "docker",
          "args": [
            "run",
            "-i",
            "--rm",
            "-e",
            "GITHUB_PERSONAL_ACCESS_TOKEN",
            "ghcr.io/github/github-mcp-server"
          ],
          "env": {
            "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
          }
        }
      }  
    }
    
    Example configuration with two MCP servers

    For this initial release, we support interacting with external tools via the stdio transport as defined in the MCP specification. We plan to support the full suite of MCP features in upcoming Android Studio releases, including the Streamable HTTP transport, external context resources, and prompt templates.

    For more information on how to use MCP in Studio, including the mcp.json configuration file format, please refer to the Android Studio MCP Host documentation.

    By delegating routine tasks to Gemini through Agent Mode, you’ll be able to focus on more innovative and enjoyable aspects of app development. Download the latest preview version of Android Studio on the canary release channel today to try it out, and let us know how much faster app development is for you!

    As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let’s build the future of Android apps together!
