Build smarter apps with Gemini 3 Flash


Posted by Thomas Ezan, Senior Developer Relations Engineer



Today, we’re expanding the Gemini 3 model family with the release of Gemini 3 Flash, frontier intelligence built for speed at a fraction of the cost. You can start building with it immediately, as we’re officially launching Gemini 3 Flash on Firebase AI Logic. Available globally, the Gemini 3 Flash preview model can be securely accessed directly from your app via the Gemini Developer API or the Vertex AI Gemini API using the Firebase AI Logic client SDKs. Gemini 3 Flash’s strong performance in reasoning, tool use, and multimodal understanding makes it ideal for developers looking to do more complex video analysis, data extraction, and visual Q&A.

Gemini 3 optimized for low latency

Gemini 3 is our most intelligent model family to date. With the launch of Gemini 3 Flash, we are making that intelligence more accessible for low-latency and cost-effective use cases. While Gemini 3 Pro is designed for complex reasoning, Gemini 3 Flash is engineered to be significantly faster and more cost-effective for your production apps.

Seamless integration with Firebase AI Logic

Just like the Pro model, Gemini 3 Flash is available in preview directly through the Firebase AI Logic SDK. This means you can integrate it into your Android app without any complex server-side setup.

Here is how to add it to your Kotlin code:


import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

// Create a model instance backed by the Gemini Developer API.
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(modelName = "gemini-3-flash-preview")
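
Once the model is created, generating content is a single call; here is a minimal sketch (the prompt is illustrative, and generateContent must be called from a coroutine since it is a suspend function):

// Continuing from the snippet above; call from a coroutine.
val response = model.generateContent("Extract the key events from this video transcript.")
println(response.text)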

Scale with Confidence

In addition, Firebase enables you to keep your growth secure and manageable with:

AI Monitoring

The Firebase AI monitoring dashboard gives you visibility into latency, success rates, and costs, allowing you to slice data by model name to see exactly how the model performs.

Server Prompt Templates

You can use server prompt templates to store your prompt and schema securely on Firebase servers instead of hardcoding them in your app binary. This capability ensures your sensitive prompts remain secure, prevents unauthorized prompt extraction, and allows for faster iteration without requiring app updates.

---
model: 'gemini-3-flash-preview'
input:
  schema:
    topic:
      type: 'string'
      minLength: 2
      maxLength: 40
    length:
      type: 'number'
      minimum: 1
      maximum: 200
    language:
      type: 'string'
---

{{role "system"}}
You're a storyteller that tells nice and joyful stories with happy endings.

{{role "user"}}
Create a story about {{topic}} with the length of {{length}} words in the {{language}} language.

Prompt template defined in the Firebase console

// Use a template-based model; the prompt lives on Firebase servers, not in the APK.
val generativeModel = Firebase.ai.templateGenerativeModel()
// Reference the server-side template by its ID and pass its declared inputs.
val response = generativeModel.generateContent(
    "storyteller-v10",
    mapOf(
        "topic" to topic,
        "length" to length,
        "language" to language,
    ),
)
_output.value = response.text // e.g. a MutableStateFlow<String?> exposed by a ViewModel

Code snippet accessing the prompt template

Gemini 3 Flash for AI development assistance in Android Studio

Gemini 3 Flash is also available for AI assistance in Android Studio. While Gemini 3 Pro Preview is our best model for coding and agentic experiences, Gemini 3 Flash is engineered for speed, and great for common development tasks and questions.

 
The new model is rolling out to developers using Gemini in Android Studio at no cost (as the default model) starting today. For higher usage rate limits and longer sessions with Agent Mode, you can use an AI Studio API key to leverage the full capabilities of either Gemini 3 Flash or Gemini 3 Pro. We’re also rolling out Gemini 3 model family access with higher usage rate limits to developers who have Gemini Code Assist Standard or Enterprise licenses. Your IT administrator will need to enable access to preview models through the Google Cloud console.

Get Started Today

You can start experimenting with Gemini 3 Flash via Firebase AI Logic today. Learn more in the Android and Firebase documentation. Try out any of the new Gemini 3 models in Android Studio for development assistance, and let us know what you think! As always, you can follow us on LinkedIn, the Blog, YouTube, and X.

Android Studio Narwhal 3 Feature Drop: Resizable Compose Preview, monthly releases and smarter AI


Posted by Paris Hsu – Product Manager, Android Studio

Welcome to the Android Studio Narwhal Feature Drop 3 release. This update delivers significant improvements across the board to enhance your productivity. While we continue to innovate with powerful, project-aware AI assistance in Gemini, this release also brings fundamental upgrades to core development workflows. Highlights include a resizable Compose Preview for faster UI iteration and robust app Backup & Restore tools to ensure smooth app transfers across devices for your users. These additions, alongside a more context-aware Gemini, aim to streamline every phase of your development process.

These features are delivered as part of our new monthly release cadence for Android Studio, which allows us to provide improvements more frequently. Learn more about this change and how we’re accelerating development with monthly releases for Android Studio.

What’s New in Android Studio Narwhal 3 Feature Drop

Develop with AI 🚀

Since launching Gemini in Android Studio, we’ve been working hard to introduce features and integrations across Studio with the needs of Android developers in mind. Developers have been telling us about the productivity benefits AI brings to their workflow — such as Entri, who reduced their UI development time per screen by 40%.

With this release, we’ve enhanced how you interact with Gemini: improved options for providing project context, file attachments, and support for image attachments.

AGENTS.md: providing project-level context to Gemini

AGENTS.md is a Markdown file that lets you provide project-specific instructions, coding style rules, and other guidance that Gemini automatically uses for context. The AGENTS.md file can be checked into your version control system (like Git), ensuring your entire team shares the same core instructions and receives consistent, context-aware AI assistance. AGENTS.md files are located right alongside your code; use multiple AGENTS.md files across different directories for more granular control over your codebase.

AGENTS.md automatically included in Context Drawer

Sample AGENTS.md file

We’re making it much easier to provide rich, on-the-fly context. That’s why we’re also excited to share that two powerful features, image attachment and @file context, are graduating from Studio Labs and are now stable:

Image attachment – Gemini in Android Studio

The ability to attach images to your queries with Gemini is now available in the stable channel! This feature accelerates UI development and improves architectural understanding. You can:

    • Generate UI from a mock-up: Provide a design image and ask Gemini to generate the Compose code.
    • Understand an existing screen: Upload a screenshot and ask Gemini to explain the UI’s component structure and data flow.
    • Debug UI bugs: Take a screenshot of a bug, circle the issue, and ask Gemini for solutions.

Image attachment in Gemini in Android Studio

@file attachment – Gemini in Android Studio

The File attachment and context drawer are also graduating from Studio Labs! Easily attach relevant project files to your prompts by typing @ in the chat window. Gemini can then use the full context of those files to provide more accurate and relevant answers. Gemini will also suggest files it thinks are relevant, which you can easily add or remove.

Invoke @file attachment

What’s next: Deeper integration with MCP support

Looking ahead, in our summer episode of #TheAndroidShow, we went behind the scenes with Android Studio’s new MCP (Model Context Protocol) support. This protocol enhances Gemini’s interoperability with the broader developer ecosystem, allowing it to connect to tools like GitHub. Learn how MCP support can make Gemini’s Agent Mode even more helpful for your workflow, and try it today in the Canary channel.

Optimize and refine ✨

This release includes several new features to help you optimize your app, improve project organization, and ensure compliance.

Test app backup and restore

With new Android hardware devices coming out, ensuring a smooth app transfer experience for your users switching to a new device is critical. Android Studio now provides tools to generate a backup of your app’s data and restore it to another device. This makes it much easier to test your app’s backup and restore functionality and protect users from data loss. Additionally, you can create and attach backups to your run configurations, making it easy to utilize Backup and Restore for your day-to-day development.

Backup and restore dialog

Play policy insights

Get early warnings about potential Play policy violations to help you build more compliant apps with Play Policy Insights, now in Android Studio. The IDE now shows lint warnings directly in your code when it relates to a Google Play policy requirement. You can also integrate these lint checks into your CI/CD pipelines, as shown in the sketch below. These insights provide an overview of the policy, dos and don’ts, and links to more resources, helping you address potential issues early in your development cycle.
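
If you want CI builds to enforce these checks, lint already runs as part of the standard Gradle build; below is a minimal sketch of tightening lint in a module’s build.gradle.kts (these are general AGP lint options, not settings specific to Play Policy Insights):

// Module-level build.gradle.kts: make lint findings fail the CI build.
android {
    lint {
        abortOnError = true     // fail the build on lint errors
        warningsAsErrors = true // optionally treat warnings as errors
        sarifReport = true      // machine-readable output for CI annotation tools
    }
}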

Play policy insights example

Proguard inspections for overly broad keep rules

Android Studio’s Proguard file editor now warns you about keep rules that are overly broad. These rules can limit R8’s ability to optimize your code, potentially impacting app size and performance. This inspection helps you write more precise rules for a more optimized app.

Proguard inspections example

Improved Android view for multi-module projects

For those working on large projects, the Android view has a new setting to display build files directly under their corresponding modules. This change makes it easier to navigate and manage build scripts in projects with many modules.

Option to display build files in module

More control over automatic project sync

For developers working on large projects, automatic Gradle syncs can sometimes interrupt your workflow. To give you more control, we’re introducing an option to switch to manual project sync with reminders. When enabled, Android Studio will inform you when a sync is needed, but lets you decide when to run it, so there aren’t unexpected interruptions. You can try this feature by navigating to Settings > Build, Execution, Deployment > Build Tools.

Enable / Disable auto project sync

Faster UI iteration 🎨

Resizable compose preview

Building responsive UIs just got easier: Compose Preview now supports dynamic resizing, giving you instant visual feedback on how your UI adapts to different screen sizes. Simply enter Focus mode in the Compose Preview and drag the edges to see your layout change in real-time. You can even save a specific size as a new @Preview annotation with a single click, streamlining your multi-device development process.
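
The saved result is just a regular preview annotation pinned to specific dimensions; a hypothetical example (HomeScreen is a placeholder composable):

import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

// A preview pinned to a specific window size; the values are illustrative.
@Preview(widthDp = 840, heightDp = 600)
@Composable
fun HomeScreenExpandedPreview() {
    HomeScreen()
}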

Resizable compose preview

Summary

To recap, Android Studio Narwhal Feature Drop 3 includes the following enhancements and features:

Develop with AI

    • AGENTS.md support: Provide project-specific context to Gemini for more tailored responses.
    • Image attachment (Stable): Easily attach image files for Gemini in Android Studio.
    • @File attachment (Stable): Easily attach project files as context for Gemini in Android Studio.

Optimize and refine

    • Backup and restore support: Easily test your app’s data backup and restoration flow.
    • Play policy insights: Get early warnings about potential Play Policy violations.
    • Proguard inspections: Identify and fix overly broad keep rules for better optimization.
    • Display build files under module: Improve project navigation in the Android view.
    • Manual project sync: Gain more control over when Gradle syncs occur in large projects.

Faster UI iteration

    • Resizable compose preview: Dynamically resize your previews to test responsive UIs instantly.

Get started

Ready to accelerate your development? Download Android Studio Narwhal 3 Feature Drop from the stable channel today!

Your feedback is essential. Please continue to share your thoughts by reporting bugs or suggesting features. For early access to the latest features, download Android Studio from the Canary channel.

Join our vibrant Android developer community on LinkedIn, Medium, YouTube, or X. We can’t wait to see what you build!

Android Studio Narwhal Feature Drop is stable – start using Agent Mode


Posted by Paris Hsu – Product Manager, Android Studio

The next wave of innovation is here with Android Studio Narwhal Feature Drop. We’re thrilled to announce that Gemini in Android Studio’s Agent Mode is now available in the stable release, ready to tackle your most complex coding challenges. This release also brings powerful new tools for XR development, continued quality improvements, and key updates to enhance your productivity and help you build high-quality apps.

Dive in to learn more about all the updates and new features designed to supercharge your workflow.

Gemini in Android Studio: Agent Mode

Develop with Gemini

Try out Agent Mode

Go beyond chat and assign tasks to Gemini. Gemini in Android Studio’s Agent Mode is a powerful AI feature designed to handle complex, multi-stage development tasks. To use Agent Mode, click Gemini in the sidebar and then select the Agent tab. You can describe a high-level goal, like adding a new feature, generating comprehensive unit tests, or fixing a nuanced bug.

The agent analyzes your request, breaks it down into smaller steps, and formulates an execution plan that uses IDE tools, such as reading and writing files and performing Gradle tasks, and can span multiple files in your project. It then iteratively suggests code changes, and you’re always in control—you can review, accept, or reject the proposed changes and ask the agent to iterate based on your feedback. Let the agent handle the heavy lifting while you focus on the bigger picture.

After releasing Agent Mode to Canary, we received positive feedback from the developers who tried it. We were so excited about the feature’s potential that we moved it to the stable channel faster than ever before, so you can get your hands on it. Try it out and let us know what you build.

Gemini in Android Studio: Agent Mode

Currently, the default model offered in the free tier in Android Studio has a shorter context length, which can limit the depth of response for some agent questions and tasks. To get the best performance from Agent Mode, you can bring your own key for the public Gemini API. Once you add a Gemini API key from a paid GCP project, you’ll be able to use the latest Gemini 2.5 Pro with a full 1M-token context window from Android Studio. Remember to pick “Gemini 2.5 Pro” from the model picker in the chat and agent input boxes.

Gemini in Android Studio: model selector

Rules in prompt library

Tailor the response from Gemini to fit your project’s specific needs with Rules in the prompt library. You can define preferred coding styles, tech stacks, languages, or output formats to help Gemini understand your project standards for more accurate and personalized code assistance. You can set these preferences once, and they’ll be automatically applied to all subsequent prompts sent to Gemini. For example, you can create a rule such as, “Always provide concise responses in Kotlin using Jetpack Compose.” You can also set rules at the IDE level for personal use across projects, or at the project level, which can be shared with teammates by adding the .idea folder to your version control system.

Rules in prompt library

Transform UI with Gemini [Studio Labs]

You can now transform UI code within the Compose Preview environment using natural language, directly in the preview. This experimental feature, available through Studio Labs, speeds up UI development by letting you iterate with simple text commands. To use it, right-click in the Compose Preview and select Transform UI With Gemini. Then enter your natural language requests, such as “Center align these buttons,” to guide Gemini in adjusting your layout or styling, or select specific UI elements in the preview for better context. Gemini will then edit your Compose UI code in place, which you can review and approve.

Accessing ‘Transform UI with Gemini’ (left) and applying a natural language transformation to a Compose preview (right)

Immersive development

XR Android Emulator and template

Kickstart your extended reality development! Android Studio now includes:

    • XR Android Emulator: The XR Android Emulator now launches embedded within the IDE by default. You can deploy your Jetpack app, navigate the 3D space, and use the Embedded Layout Inspector directly inside Android Studio.
    • XR template: Get a head start on your next project with a new template specifically designed for Jetpack XR. This provides a solid foundation with boilerplate code to begin your immersive experience development journey right away.

XR Android Emulator

XR Android template in new project template

Embedded Layout Inspector for XR

The embedded Layout Inspector now supports XR applications, which lets you inspect and optimize your UI layouts within the XR environment. Get detailed insights into your app’s component structure and identify potential layout issues to create more polished and performant experiences.

Embedded Layout Inspector for XR

Android Partner Device Labs available with Android Device Streaming

Android Partner Device Labs are device labs operated by Google OEM partners, such as Samsung, Xiaomi, OPPO, OnePlus, vivo, and others, and expand the selection of devices available in Android Device Streaming. To learn more, see Connect to Android Partner Device Labs.

Android Device Streaming supports Android Partner Device Labs

Optimize and refine

Jetpack Compose preview quality improvements

We’ve made several enhancements to Compose previews to make UI iteration faster and more intuitive:

    • Improved code navigation: You can now click on a preview’s name to instantly jump to its @Preview definition, or click an individual component within the preview to navigate directly to the function where it’s defined. Hover states and improved keyboard arrow navigation make moving through multiple previews a breeze.
    • Preview picker: The new Compose preview picker is now available. You can click any @Preview annotation in your Compose code to access the picker and easily manage your previews.

Compose preview: Improved code navigation

Compose preview picker

K2 mode by default

Android Studio now uses the K2 Kotlin compiler by default. This next-generation compiler brings significant performance improvements to the IDE and your builds. By enabling K2, we are paving the way for future Kotlin programming language features and an even faster, more robust development experience in Kotlin.

K2 mode setting

16 KB page size support

To help you prepare for the future of Android hardware, this release adds improved support for transitioning to 16 KB page sizes. Android Studio now offers proactive warnings when building apps that are incompatible with 16 KB devices. You can use the APK Analyzer to identify which specific libraries in your project are incompatible. Lint checks also highlight the native libraries which are not 16 KB aligned. To test your app in this new environment, a dedicated 16 KB emulator target is also available in the AVD Manager.

16 KB page size support: APK Analyzer indication

16 KB page size support: Lint checks

Services compatibility policy

Android Studio offers service integrations that help you and your team make faster progress as you develop, release, and maintain Android apps. Services are constantly evolving and may become incompatible with older versions of Android Studio. Therefore, we are introducing a policy where features that depend on a Google Cloud service are supported for approximately a year in each version of Android Studio. The IDE will notify you when the current version is within 30 days of becoming incompatible so you can update it.

Example notification for services compatibility policy

Summary

To recap, Android Studio Narwhal Feature Drop includes the following enhancements and features:

Develop with Gemini

    • Gemini in Android Studio Agent Mode: Use Gemini to tackle complex, multi-step coding tasks.
    • Rules in Prompt Library: Customize Gemini’s output for your project’s standards.
    • Transform preview with Gemini [Studio Labs]: Use natural language to iterate on Compose UI.

Immersive development

    • Embedded XR Android Emulator: Test and debug XR apps directly within the IDE.
    • XR template: A new project template to kickstart XR development.
    • Embedded Layout Inspector for XR: Debug and optimize your UI in an XR environment.
    • Android Partner Device Labs with Android Device Streaming: Access more Google OEM partner devices.

Optimize and refine

    • Compose preview improvements: Better navigation and a new picker for a smoother workflow.
    • K2 mode by default: Faster performance with the next-gen Kotlin compiler.
    • 16KB page size support: Lint warnings, analysis, and an emulator to prepare for new devices.
    • Services compatibility policy: Stay up-to-date for access to integrated Google services.

Get started

Ready to accelerate your development? Download Android Studio Narwhal Feature Drop and start exploring these powerful new features today! As always, your feedback is crucial to us.

Check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let’s build the future of Android apps together!

Androidify: Building delightful UIs with Compose


Posted by Rebecca Franks – Developer Relations Engineer

Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

Material 3 Expressive

Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.
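
Since the alpha artifact is named above, wiring it up is a single dependency in a module’s Gradle Kotlin DSL build file:

// Module-level build.gradle.kts
dependencies {
    // Material 3 Expressive ships in this alpha of the Material 3 artifact.
    implementation("androidx.compose.material3:material3:1.4.0-alpha10")
}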

Material Expressive Component updates

In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

In Androidify, we’ve used Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

@Composable
fun AndroidifyTheme(
   content: @Composable () -> Unit,
) {
   val colorScheme = LightColorScheme


   MaterialExpressiveTheme(
       colorScheme = colorScheme,
       typography = Typography,
       shapes = shapes,
       motionScheme = MotionScheme.expressive(),
       content = {
           SharedTransitionLayout {
               CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                   content()
               }
           }
       },
   )
}

Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

moving example of expressive button shapes in slow motion

The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:

Camera button with a MaterialShapes.Cookie9Sided shape

Animations

Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feel throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across the screen (such as x/y translation, rotation, or scale):

val interactionSource = remember { MutableInteractionSource() }
val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
Spacer(
   modifier
       .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
       .clip(MaterialShapes.Cookie9Sided.toShape())
       .size(size)
       .drawWithCache {
           //.. etc
       },
)

Camera button scale interaction

Shared element animations

The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

moving example of expressive button shapes in slow motion

To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

@Composable
fun Modifier.sharedBoundsRevealWithShapeMorph(
   sharedContentState: SharedTransitionScope.SharedContentState,
   sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
   animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
   boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
   resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
   restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
   targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
)

Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

val animatedProgress =
   animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)


val morph = remember {
   Morph(restingShape, targetShape)
}
val morphClip = MorphOverlayClip(morph, { animatedProgress.value })


return this@sharedBoundsRevealWithShapeMorph
   .sharedBounds(
       sharedContentState = sharedContentState,
       animatedVisibilityScope = animatedVisibilityScope,
       boundsTransform = boundsTransform,
       resizeMode = resizeMode,
       clipInOverlayDuringTransition = morphClip,
       renderInOverlayDuringTransition = renderInOverlayDuringTransition,
   )

View the full code snippet for this Modifier on GitHub.

Autosize text

With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

BasicText(
   text,
   style = MaterialTheme.typography.titleLarge,
   autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
)

This is used front and center for the “Customize your own Android Bot” text:

“Customize your own Android Bot” text with inline GIF

This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

@Composable
private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
   Box(modifier = modifier) {
       val animatedBot = "animatedBot"
       val text = buildAnnotatedString {
           append(stringResource(R.string.customize))
           // Attach "animatedBot" annotation on the placeholder
           appendInlineContent(animatedBot)
           append(stringResource(R.string.android_bot))
       }
       var placeHolderSize by remember {
           mutableStateOf(220.sp)
       }
       val inlineContent = mapOf(
           Pair(
               animatedBot,
               InlineTextContent(
                   Placeholder(
                       width = placeHolderSize,
                       height = placeHolderSize,
                       placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                   ),
               ) {
                   DancingBot(
                       modifier = Modifier
                           .padding(top = 32.dp)
                           .fillMaxSize(),
                   )
               },
           ),
       )
       BasicText(
           text,
           modifier = Modifier
               .align(Alignment.Center)
               .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
           style = MaterialTheme.typography.titleLarge,
           autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
           maxLines = 6,
           onTextLayout = { result ->
               placeHolderSize = result.layoutInput.style.fontSize * 3.5f
           },
           inlineContent = inlineContent,
       )
   }
}

Composable visibility with onLayoutRectChanged

With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling that keep it efficient inside lazy layouts.

In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

var buttonBounds by remember {
   mutableStateOf<RelativeLayoutBounds?>(null)
}
var showColorSplash by remember {
   mutableStateOf(false)
}
Box(modifier = Modifier.fillMaxSize()) {
   PrimaryButton(
       buttonText = "Let's Go",
       modifier = Modifier
           .align(Alignment.BottomCenter)
           .onLayoutRectChanged(
               callback = { bounds ->
                   buttonBounds = bounds
               },
           ),
       onClick = {
           showColorSplash = true
       },
   )
}

We use these bounds as an indication of where to start the color splash animation from.
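
As a rough sketch of that hand-off (continuing from the snippet above; the Offset conversion is our own illustration, not code from the app), the captured bounds can be reduced to a start point for the splash:

import androidx.compose.ui.geometry.Offset

// Illustrative only: derive the splash origin from the button's on-screen bounds.
val splashOrigin: Offset? = buttonBounds?.let { bounds ->
    val center = bounds.boundsInRoot.center
    Offset(center.x.toFloat(), center.y.toFloat())
}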

moving image of a blue color splash transition between Androidify demo screens

Learn more delightful details

From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

animated marquee example

animated gradient button for AI powered actions example

animated loading screen example

Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose, from Material 3 Expressive and the new modifiers to auto-sizing text and, of course, a couple of delightful interactions!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Androidify: Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX


Posted by Rebecca Franks – Developer Relations Engineer

The Android bot is a beloved mascot for Android users and developers, and previous versions of the bot builder were very popular – so we decided that this year we’d rebuild the bot maker from the ground up, using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

Androidify app demo

Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

Converting a photo of a woman in a pink dress holding an umbrella into a droid bot wearing a pink dress and holding an umbrella

Under the hood

The app combines a variety of different Google technologies, such as:

    • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
    • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
    • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
    • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting looking examples!

How does the Androidify app work?

The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.

Androidify app flow chart detailing how the app works with AI

AI in Androidify with Gemini and ML Kit

The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

    • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multi-modal capabilities of the Gemini API by giving it a prompt and an image at the same time:

val response = generativeModel.generateContent(
   content {
       text(prompt)
       image(image)
   },
)

    • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

    • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

    • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.

    • Image generation from the generated prompt: As the final step, Imagen generates the image from the prompt and the selected skin tone of the bot.

The app also uses the ML Kit pose detection to detect a person in the viewfinder and enable the capture button when a person is detected, as well as adding fun indicators around the content to indicate detection.
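
A minimal sketch of that gating with ML Kit’s pose detector in streaming mode (the enableCapture callback is our own placeholder, not the app’s actual API):

import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

val detector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build(),
)

fun analyze(image: InputImage, enableCapture: (Boolean) -> Unit) {
    detector.process(image)
        // Treat any detected landmarks as "person present" and enable capture.
        .addOnSuccessListener { pose -> enableCapture(pose.allPoseLandmarks.isNotEmpty()) }
        .addOnFailureListener { enableCapture(false) }
}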

Explore more detailed information about AI usage in Androidify.

Jetpack Compose

The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

Delightful details with the UI

The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out-of-the-box, like new shapes, componentry, and using the MotionScheme variables wherever a motion spec is needed.

MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

Camera button with a MaterialShapes.Cookie9Sided shape

Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

    • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

      moving example of expressive button shapes in slow motion

    • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

      animated marquee example

    • Fun color splash animation as a transition between screens.

      moving image of a blue color splash transition between Androidify demo screens

    • Animating gradient buttons for the AI-powered actions.

      animated gradient button for AI powered actions example

To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose.

Adapting to different devices

Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

Various adaptive layouts in the app

For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

    • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly (see the sketch after this list). On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.

    • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

    • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).
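
A minimal sketch of such a width check, assuming the material3-adaptive currentWindowAdaptiveInfo() API (isAtLeastMedium itself is the app’s own helper; this is one plausible shape for it):

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

// One plausible implementation of an isAtLeastMedium() helper.
@Composable
fun isAtLeastMedium(): Boolean {
    val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
    return widthClass != WindowWidthSizeClass.COMPACT
}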

Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.

CameraX and Media3 Compose

To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include— for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like the tabletop mode and rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.
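
As a minimal sketch of the viewfinder piece (assuming the camera-compose artifact’s CameraXViewfinder and a SurfaceRequest surfaced by the app’s camera ViewModel):

import androidx.camera.compose.CameraXViewfinder
import androidx.camera.core.SurfaceRequest
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

// Renders the camera feed once CameraX delivers a SurfaceRequest.
@Composable
fun CameraPreview(surfaceRequest: SurfaceRequest?, modifier: Modifier = Modifier) {
    surfaceRequest?.let { request ->
        CameraXViewfinder(surfaceRequest = request, modifier = modifier)
    }
}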

CameraLayout composable that takes care of different device configurations, such as table top mode

The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

@Composable
private fun VideoPlayer(modifier: Modifier = Modifier) {
    val context = LocalContext.current
    var player by remember { mutableStateOf<Player?>(null) }
    LifecycleStartEffect(Unit) {
        player = ExoPlayer.Builder(context).build().apply {
            setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
            repeatMode = Player.REPEAT_MODE_ONE
            prepare()
        }
        onStopOrDispose {
            player?.release()
            player = null
        }
    }
    Box(
        modifier
            .background(MaterialTheme.colorScheme.surfaceContainerLowest),
    ) {
        player?.let { currentPlayer ->
            PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
        }
    }
}

Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

var videoFullyOnScreen by remember { mutableStateOf(false) }     

LaunchedEffect(videoFullyOnScreen) {
     if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
} 

// We add this to the player composable to determine whether the video is visible,
// and mutate the videoFullyOnScreen variable, which then toggles the player state.
Modifier.onVisibilityChanged(
                containerWidth = LocalView.current.width,
                containerHeight = LocalView.current.height,
) { fullyVisible -> videoFullyOnScreen = fullyVisible }

// A simple version of visibility changed detection
fun Modifier.onVisibilityChanged(
    containerWidth: Int,
    containerHeight: Int,
    onChanged: (visible: Boolean) -> Unit,
) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
    onChanged(
        layoutBounds.boundsInRoot.top > 0 &&
            layoutBounds.boundsInRoot.bottom < containerHeight &&
            layoutBounds.boundsInRoot.left > 0 &&
            layoutBounds.boundsInRoot.right < containerWidth,
    )
}

Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)
OutlinedIconButton(
    onClick = playPauseButtonState::onClick,
    enabled = playPauseButtonState.isEnabled,
) {
    val icon =
        if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
    val contentDescription =
        if (playPauseButtonState.showPlay) R.string.play else R.string.pause
    Icon(
        painterResource(icon),
        stringResource(contentDescription),
    )
}

Check out the code for more details on how CameraX and Media3 were used in Androidify.

Navigation 3

Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

@Composable
fun MainNavigation() {
   val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
   NavDisplay(
       backStack = backStack,
       onBack = { backStack.removeLastOrNull() },
       entryProvider = entryProvider {
           entry<Home> { entry ->
               HomeScreen(
                   onAboutClicked = {
                       backStack.add(About)
                   },
               )
           }
           entry<Camera> {
               CameraPreviewScreen(
                   onImageCaptured = { uri ->
                       backStack.add(Create(uri.toString()))
                   },
               )
           }
           // etc
       },
   )
}

Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in the shared element transitions shown earlier.

Learn more about Jetpack Navigation 3, currently in alpha.

Learn more

By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

16 things to know for Android developers at Google I/O 2025

Posted by Matthew McCullough – VP of Product Management, Android Developer

Today at Google I/O, we announced the many ways we’re helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here’s a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!

Building AI into your Apps

1: Building intelligent apps with Generative AI

Generative AI enhances the app experience by making apps intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks like summarization, proofreading, rewriting, and image description. We also provided capabilities for developers to harness more powerful models such as Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic for more complex use cases like image generation and processing extensive data across modalities, including bringing AI to life in Android XR, and a new AI sample app, Androidify, that showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences by leveraging these new capabilities, explore the developer documentation and sample apps, and watch the overview session to choose the right solution for your app.

New experiences across devices

2: One app, every screen: think adaptive and unlock 500 million screens

Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we’re helping you bring them to cars and XR and expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices – a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including Compose Layouts library and Jetpack Navigation updates, help make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal’s streaming service (available in the US), is building adaptively to meet users where they are.

Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.

3: Material 3 Expressive: design for intuition and emotion

The new Material 3 Expressive update provides tools to enhance your product’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

moving image of Material 3 Expressive demo

4: Smarter widgets, engaging live updates

Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new Progress Style standardized template.

5: Enhanced Camera & Media: low light boost and battery savings

This year’s I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting and native PCM offload, allowing the DSP to handle more audio playback processing, thus conserving user battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.

6: Build next-gen app experiences for Cars

We’re launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we’ll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.

7: Build for Android XR’s expanding ecosystem with Developer Preview 2 of the SDK

We announced Android XR in December, and today at Google I/O we shared a bunch of updates coming to the platform including Developer Preview 2 of the Android XR SDK plus an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung’s Project Moohan, you’ll also see more devices including a new portable Android XR device from our partners at XREAL. There’s lots more to cover for Android XR: Watch the Compose and AI on Android XR session, and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.

XREAL’s Project Aura

8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6

This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS by utilizing new Jetpack libraries: Wear Compose Material 3, which provides components for apps and Wear ProtoLayout Material 3 which provides components and layouts for tiles. Get started with Material 3 libraries and other updates on Wear.

Some examples of Material 3 Expressive on Wear OS experiences

9: Engage users on Google TV with excellent TV apps

You can leverage more resources within Compose’s core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We’re also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.

Developer productivity

10: Build beautiful apps faster with Jetpack Compose

Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.

Compose Adaptive Layouts Updates in the Google Play app

11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily

Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We’ve released a new Android Studio KMP shared module template, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers who are looking to get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what’s new in Android’s Kotlin Multiplatform.

12: Gemini in Android Studio: AI Agents to help you work

Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What’s new in Android development tools.

13: Android Studio: smarter with Gemini

In this latest release, we’re empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What’s new in Android development tools.

moving image of Gemini in Android Studio Agentic Experiences including Journeys and Version Upgrade

And the latest on driving business growth

14: What’s new in Google Play

Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we’re continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What’s new in Google Play to learn more.

a moving image of three mobile devices displaying how content is displayed on the Play Store

15: Start migrating to Play Games Services v2 today

Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.

16: And of course, Android 16

We unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.

Check out all of the Android and Play content at Google I/O

This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What’s New in Android and the full Android track of sessions, and whether you’re joining in person or around the world, we can’t wait to engage with you!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
