Top 3 things to know for AI on Android at Google I/O ‘25 (June 16, 2025)

Posted by Kateryna Semenova – Sr. Developer Relations Engineer

AI is reshaping how users interact with their favorite apps, opening new avenues for developers to create intelligent experiences. At Google I/O, we showcased how Android is making it easier than ever for you to build smart, personalized and creative apps. And we’re committed to providing you with the tools needed to innovate across the full development stack in this evolving landscape.

This year, we focused on making AI accessible across the spectrum, from on-device processing to cloud-powered capabilities. Here are the top 3 announcements you need to know for building with AI on Android from Google I/O ‘25:

#1 Leverage the efficiency of Gemini Nano for on-device AI experiences

For on-device AI, we announced a new set of ML Kit GenAI APIs powered by Gemini Nano, our most efficient and compact model designed and optimized for running directly on mobile devices. These APIs provide high-level, easy integration for common tasks including text summarization, proofreading, rewriting content in different styles, and generating image descriptions. Building on-device offers significant benefits such as local data processing and offline availability, at no additional cost for inference. To start integrating these solutions, explore the ML Kit GenAI documentation and the sample on GitHub, and watch the “Gemini Nano on Android: Building with on-device GenAI” talk.
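
For a sense of what the integration looks like, here is a rough Kotlin sketch of on-device summarization with the ML Kit GenAI APIs. Treat it as a sketch only: the class and method names (SummarizerOptions, Summarization.getClient, runInference) and the articleText/summaryState variables are assumptions to verify against the ML Kit GenAI documentation and sample, since the API surface is still in beta.

// Sketch only: names below are assumptions based on the ML Kit GenAI docs and may differ
// in the version you use; check the official documentation and sample before relying on them.
val options = SummarizerOptions.builder(context)
    .setInputType(SummarizerOptions.InputType.ARTICLE)
    .setOutputType(SummarizerOptions.OutputType.THREE_BULLETS)
    .build()
val summarizer = Summarization.getClient(options)

// Inference runs locally on Gemini Nano: no network round trip and no per-request cost.
val request = SummarizationRequest.builder(articleText).build()
summarizer.runInference(request) { chunk ->
    // Output is streamed; append each chunk to your UI state as it arrives.
    summaryState.value += chunk
}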

#2 Seamlessly integrate on-device ML/AI with your own custom models

The Google AI Edge platform enables building and deploying a wide range of pretrained and custom models on edge devices and supports various frameworks like TensorFlow, PyTorch, Keras, and JAX, allowing for more customization in apps. The platform now also offers improved support for on-device hardware accelerators and a new AI Edge Portal service for broad coverage of on-device benchmarking and evals. If you are looking for GenAI language models on devices where Gemini Nano is not available, you can use other open models via the MediaPipe LLM Inference API.
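
As a rough illustration of the MediaPipe LLM Inference API mentioned above, the sketch below loads an open model from local storage and generates a response. The model path and option values are placeholders; check the MediaPipe documentation for supported models and the exact options available in your version of the library.

import com.google.mediapipe.tasks.genai.llminference.LlmInference

// The model file path is a placeholder; you need to bundle or download a supported model.
val options = LlmInference.LlmInferenceOptions.builder()
    .setModelPath("/data/local/tmp/llm/model.task")
    .setMaxTokens(512)
    .build()

val llmInference = LlmInference.createFromOptions(context, options)

// Single-shot generation; the API also offers asynchronous and streaming variants.
val answer = llmInference.generateResponse("Summarize this release note in two sentences: ...")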

Serving your own custom models on-device can pose challenges related to handling large model downloads and updates, impacting the user experience. To improve this, we’ve launched Play for On-Device AI in beta. This service is designed to help developers manage custom model downloads efficiently, ensuring the right model size and speed are delivered to each Android device precisely when needed.

For more information, watch the “Small language models with Google AI Edge” talk.

#3 Power your Android apps with Gemini Flash, Pro and Imagen using Firebase AI Logic

For more advanced generative AI use cases, such as complex reasoning tasks, analyzing large amounts of data, processing audio or video, or generating images, you can use larger models from the Gemini Flash and Gemini Pro families, and Imagen, running in the cloud. These models are well suited for scenarios requiring advanced capabilities or multimodal inputs and outputs. And since the AI inference runs in the cloud, any Android device with an internet connection is supported. They are easy to integrate into your Android app by using Firebase AI Logic, which provides a simplified, secure way to access these capabilities without managing your own backend. Its SDK also includes support for conversational AI experiences using the Gemini Live API or generating custom contextual visual assets with Imagen. To learn more, check out our sample on GitHub and watch the “Enhance your Android app with Gemini Pro and Flash, and Imagen” session.
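
To make that concrete, here is a minimal sketch of calling a cloud-hosted Gemini model through Firebase AI Logic from Kotlin. The model name and prompt are illustrative, and a fuller, structured-output version appears in the Androidify post later in this collection.

// Minimal sketch; setup and imports follow the Firebase AI Logic snippet shown later on.
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(modelName = "gemini-2.5-flash")

suspend fun describe(image: Bitmap): String? {
    val response = model.generateContent(
        content {
            image(image)
            text("Describe the key objects in this photo in one sentence.")
        },
    )
    // The SDK handles auth and transport, so no backend of your own is required.
    return response.text
}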

These powerful AI capabilities can also be brought to life in immersive Android XR experiences. You can find corresponding documentation, samples, and the technical session: “The future is now, with Compose and AI on Android XR”.

Figure 1: Firebase AI Logic integration architecture

Get inspired and start building with AI on Android today

We released a new open source app, Androidify, to help developers build AI-driven Android experiences using Gemini APIs, ML Kit, Jetpack Compose, CameraX, Navigation 3, and adaptive design. Users can create a personalized Android bot with Gemini and Imagen via the Firebase AI Logic SDK. Additionally, it incorporates ML Kit pose detection to detect a person in the camera viewfinder. The full code sample is available on GitHub for exploration and inspiration. Discover additional AI examples in our Android AI Sample Catalog.

The original image and the Androidified image

Choosing the right Gemini model depends on understanding your specific needs and the model’s capabilities, including modality, complexity, context window, offline capability, cost, and device reach. To explore these considerations further and see all our announcements in action, check out the AI on Android at I/O ‘25 playlist on YouTube and explore our documentation.

We are excited to see what you will build with the power of Gemini!

Upcoming changes to Wear OS watch faces (June 12, 2025)

Posted by François Deschênes – Product Manager, Wear OS

Today, we are announcing important changes to Wear OS watch face development that will affect how developers publish and update watch faces on Google Play. As part of our ongoing effort to enhance Wear OS app quality, we are moving towards supporting only the Watch Face Format and removing support for AndroidX / Wearable Support Library (WSL) watch faces.

We introduced Watch Face Format at Google I/O in 2023 to make it easier to create watch faces that are customizable and power-efficient. The Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face, and there is no code embedded in the watch face APK.

What’s changing?

Developers will need to migrate published watch faces to the Watch Face Format by January 14, 2026. Developers using Watch Face Studio to build watch faces will need to resubmit their watch faces to the Play Store using Watch Face Studio version 1.8.7 or above – see below for more details.

When are these changes coming?

Starting January 27, 2025 (already in effect):

Starting January 14, 2026:

    • Availability: Users will not be able to install legacy watch faces on any Wear OS devices from the Play Store. Legacy watch faces already installed on a Wear OS device will continue to work.
    • Updates: Developers will not be able to publish updates for legacy watch faces to the Play Store.
    • Monetization: The following won’t be possible for legacy watch faces: one-off watch face purchases, in-app purchases, and subscriptions. Existing purchases and subscriptions will continue to work, but they will not renew, including auto-renewals.

What should developers do next?

To prepare for these changes and to continue publishing watch faces to the Play Store, developers using AndroidX or WSL to build watch faces must migrate their watch faces to the Watch Face Format and resubmit to the Play Store by January 14, 2026.

Developers using Watch Face Studio to build watch faces will need to resubmit their watch faces to the Play Store using Watch Face Studio version 1.8.7 or above:

    • Be sure to republish for all Play tracks, including all testing tracks as well as production.
    • Remove any bundles from these tracks that were created using Watch Face Studio versions prior to 1.8.7.

Benefits of the Watch Face Format

Watch Face Format was developed to support developers in creating watch faces. This format provides numerous advantages to both developers and end users:

    • Simplified development: Streamlined workflows and visual design tools make building watch faces easier.
    • Enhanced performance: Optimized for battery efficiency and smooth interactions.
    • Increased security: Robust security features protect user data and privacy.
    • Forward-compatible: Access to the latest features and capabilities of Wear OS.

Resources to help with migration

To get started migrating your watch faces to the Watch Face Format, check out the developer guidance for the Watch Face Format.

We encourage developers to begin the migration process as soon as possible to ensure a seamless transition and continued availability of your watch faces on Google Play.

We understand that this change requires effort. If you have further questions, please refer to the Wear OS community announcement. Please report any issues using the issue tracker.

A product manager’s guide to adapting Android apps across devices (June 12, 2025)

Posted by Fahd Imtiaz, Product Manager, Android Developer Experience

Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

With new form factors emerging continually, the Android ecosystem is more dynamic than ever.

From phones and foldables to tablets, Chromebooks, TVs, cars, Wear and XR, Android users expect their apps to run seamlessly across an increasingly diverse range of form factors. Yet, many Android apps fall short of these expectations as they are built with UI constraints such as being locked to a single orientation or restricted in resizability.

With this in mind, Android 16 introduced API changes for apps targeting SDK level 36 to ignore orientation and resizability restrictions starting with large screen devices, shifting toward a unified model where adaptive apps are the norm. This is the moment to move ahead. Adaptive apps aren’t just the future of Android, they’re the expectation for your app to stand out across Android form factors.

Why you should prioritize adaptive now

500 million+ active devices, including foldables, tablets, Chromebooks, and mobile app-capable cars

Source: internal Google data

Prioritizing optimizations to make your app adaptive isn’t just about keeping up with the orientation and resizability API changes in Android 16 for apps targeting SDK 36. Adaptive apps unlock tangible benefits across user experience, development efficiency, and market reach.

    • Mobile apps can now reach users on over 500 million active large screen devices: Mobile apps run on foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 will introduce significant advancements in desktop windowing for a true desktop-like experience on large screens, including connected displays. And Android XR opens a new dimension, allowing your existing apps to be available in immersive environments. The user expectation is clear: a consistent, high-quality experience that intelligently adapts to any screen – be it a foldable, a tablet with a keyboard, or a movable, resizable window on a Chromebook.

    • “The new baseline” with orientation and resizability API changes in Android 16: We believe mobile apps are undergoing a shift to have UI adapt responsively to any screen size, just like websites. Android 16 will ignore app-defined restrictions like fixed orientation (portrait-only) and non-resizable windows, beginning with large screens (smallest width of the device is >= 600dp), including tablets and inner displays on foldables. For most apps, this change is key to helping them stretch to any screen size. In some cases, if your app isn’t adaptive, it could deliver a broken user experience on these screens. This moves adaptive design from a nice-to-have to a foundational requirement.

Side-by-side comparison of a non-adaptive app UI labeled “Goodbye mobile-only apps” and an adaptive app UI labeled “Hello adaptive apps”

    • Increase user reach and app discoverability in Play: Adaptive apps are better positioned to be ranked higher in Play, and featured in editorial articles across form factors, reaching a wider audience across Play search and homepages. Additionally, Google Play Store surfaces ratings and reviews across all form factors. If your app is not optimized, a potential user’s first impression might be tainted by a 1-star review complaining about a stretched UI on a device they don’t even own yet. Users are also more likely to engage with apps that provide a great experience across their devices.
    • Increased engagement on large screens: Users on large screen devices often have different interaction patterns. On large screens, users may engage for longer sessions, perform more complex tasks, and consume more content.
    • Concepts saw a 70% increase in user engagement on large screens after optimizing.

    • Usage for 6 major media streaming apps in the US was up to 3x more for users on both tablet and phone, compared to phone-only users.

    • More accessible app experiences: According to the World Bank, 15% of the world’s population has some type of disability. People with disabilities depend on apps and services that support accessibility to communicate, learn, and work. Matching the user’s preferred orientation improves the accessibility of applications, helping to create an inclusive experience for all.

Today, most apps are built for smartphones only

A display of varying Android form factors, including a tablet, a desktop monitor, a laptop, a large-screen handheld device, and an in-car app screen

“…looking at the number of users, the ROI does not justify the investment”.

That’s a frequent pushback from product managers and decision-makers, and if you’re just looking at top-line analytics comparing the number of tablet sessions to smartphone sessions, it might seem like a closed case.

While top-line analytics might show lower session numbers on tablets compared to smartphones, concluding that large screens aren’t worth the effort based solely on current volume can be a trap, causing you to miss out on valuable engagement and future opportunities.

Let’s take a deeper look into why:

      1. The user experience ‘chicken and egg’ loop: Is it possible that the low usage is a symptom rather than the root cause? Users are quick to abandon apps that feel clunky or broken. If your app on large screens is a stretched-out phone interface, it likely provides a negative user experience. The lack of users might reflect the lack of a good experience, not necessarily a lack of potential users.

      2. Beyond user volume, look at user engagement: Don’t just count users, analyze their worth. Users interact with apps on large screens differently. The large screen often leads to longer sessions and more immersive experiences. As mentioned above, usage data shows that engagement time increases significantly for users who interact with apps on both their phone and tablet, as compared to phone only users.

      3. Market evolution: The Android device ecosystem is continuing to evolve. With the rise of foldables, upcoming connected displays support in Android 16, and form factors like XR and Android Auto, adaptive design is now more critical than ever. Building for a specific screen size creates technical debt, and may slow your development velocity and compromise the product quality in the long run.

Okay, I am convinced. Where do I start?

A three-step workflow outlines how to optimize your Android app to be adaptive

For organizations ready to move forward, Android offers many resources and developer tools to optimize apps to be adaptive. See below for how to get started:

      1. Check how your app looks on large screens today: Begin by looking at your app’s current state on tablets, foldables (in different postures), Chromebooks, and environments like desktop windowing. Confirm if your app is available on these devices or if you are unintentionally leaving out these users by requiring unnecessary features within your app.

      2. Address common UI issues: Assess what feels awkward in your app UI today. We have a lot of guidance available on how you can easily translate your mobile app to other screens.

          a. Check the Large screens design gallery for inspiration and to understand how your app UI can evolve across devices using proven solutions to common UI challenges.

          b. Start with quick wins. For example, prevent buttons from stretching to the full screen width, or switch to a vertical navigation bar on large screens to improve ergonomics.

          c. Identify patterns where canonical layouts (e.g. list-detail) could solve any UI awkwardness you identified. Could a list-detail view improve your app’s navigation? Would a supporting pane on the side make better use of the extra space than a bottom sheet?

      3. Optimize your app incrementally, screen by screen: It may be helpful to prioritize how you approach optimization because not everything needs to be perfectly adaptive on day one. Incrementally improve your app based on what matters most – it’s not all or nothing.

          a. Start with the foundations. Check out the large screen app quality guidelines, which tier and prioritize the fixes that are most critical to users. Remove orientation restrictions to support both portrait and landscape, ensure support for resizability (for when users are in split screen), and prevent major stretching of buttons, text fields, and images. These foundational fixes are critical, especially with the API changes in Android 16 that will make these aspects even more important.

          b. Implement adaptive layout optimizations with a focus on core user journeys or screens first.

              i. Identify screens where optimizations (for example a two-pane layout) offer the biggest UX win

              ii. And then proceed to screens or parts of the app that are not as often used on large screens

          c. Support input methods beyond touch, including keyboard, mouse, trackpad, and stylus input. With new form factors and connected displays support, this sets users up to interact with your UI seamlessly.

          d. Add differentiating hero user experiences like support for tabletop mode or dual-screen mode on foldables. This can happen on a per-use-case basis – for example, tabletop mode is great for watching videos, and dual screen mode is great for video calls.

While there’s an upfront investment in adopting adaptive principles (using tools like Jetpack Compose and window size classes), the long-term payoff can be significant: by designing and building features once and letting them adapt across screen sizes, you avoid the cost of creating multiple bespoke layouts. Check out the adaptive apps developer guidance for more.
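
As a small illustration of the “build once, adapt everywhere” idea, the sketch below uses the Material 3 window size class API to switch between a single-pane and a two-pane layout. TwoPaneHome and SinglePaneHome are hypothetical composables standing in for your own screens.

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass

class MainActivity : ComponentActivity() {
    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // Recomputed on resize, fold or unfold, rotation, and split screen.
            val windowSizeClass = calculateWindowSizeClass(this)
            when (windowSizeClass.widthSizeClass) {
                WindowWidthSizeClass.Expanded -> TwoPaneHome()  // hypothetical composable
                else -> SinglePaneHome()                        // hypothetical composable
            }
        }
    }
}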

Unlock your app’s potential with adaptive app design

The message for my fellow product managers, decision-makers, and businesses is clear: adaptive design will uplevel your app for high-quality Android experiences in 2025 and beyond. An adaptive, responsive UI is the scalable way to support the many devices in Android without developing on a per-form factor basis. If you ignore the diverse device ecosystem of foldables, tablets, Chromebooks, and emerging form factors like XR and cars, your business is accepting hidden costs from negative user reviews, lower discovery in Play, increased technical debt, and missed opportunities for increased user engagement and user acquisition.

Maximize your apps’ impact and unlock new user experiences. Learn more about building adaptive apps today.

Top 3 updates for building excellent, adaptive apps at Google I/O ‘25 (June 10, 2025)

Posted by Mozart Louis – Developer Relations Engineer

Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.

If you missed any of the key #GoogleIO25 updates and just saw the release of Android 16, or you’re ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3’s editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.

Check out the Google I/O playlist for all the session details.

Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:

#1: Build adaptively to unlock 500 million devices

In today’s diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.

The talk emphasizes that you don’t need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app’s potential.

Here are some resources we encourage you to use in your apps:

New feature support in Jetpack Compose Adaptive Libraries

    • We’re continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By utilizing canonical layout patterns such as List Detail or Supporting Pane layouts in your app code, your application will automatically adjust and reflow when resized.
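
Below is a rough sketch of one of those canonical layouts, a list-detail scaffold built with the Compose Material 3 adaptive libraries. ContactList and ContactDetail are hypothetical composables, and in recent releases navigateTo is a suspend function, hence the coroutine scope; confirm the exact signatures against the version of the adaptive libraries you depend on.

import androidx.compose.material3.adaptive.ExperimentalMaterial3AdaptiveApi
import androidx.compose.material3.adaptive.layout.AnimatedPane
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffold
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffoldRole
import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
import androidx.compose.runtime.*
import kotlinx.coroutines.launch

@OptIn(ExperimentalMaterial3AdaptiveApi::class)
@Composable
fun ContactsScreen() {
    // The navigator decides whether one or two panes fit the current window.
    val navigator = rememberListDetailPaneScaffoldNavigator<String>()
    val scope = rememberCoroutineScope()
    var selectedId by remember { mutableStateOf<String?>(null) }

    ListDetailPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        listPane = {
            AnimatedPane {
                ContactList(onContactClick = { id ->  // hypothetical list composable
                    selectedId = id
                    scope.launch { navigator.navigateTo(ListDetailPaneScaffoldRole.Detail) }
                })
            }
        },
        detailPane = {
            AnimatedPane {
                ContactDetail(contactId = selectedId)  // hypothetical detail composable
            }
        },
    )
}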

Navigation 3

    • The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.

Updates to Window Manager Library

    • AndroidX.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as “extra large,” while widths between 1200dp and 1600dp are classified as “large.” These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes.
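
For orientation, those breakpoints can be summarized as a simple mapping from window width to a layout bucket. This plain-Kotlin sketch avoids depending on any particular library version; the bucket names are illustrative, the 600dp, 1200dp, and 1600dp values come from the text above, and 840dp is the standard lower bound of the existing expanded class.

// Illustrative width buckets based on the documented breakpoints.
enum class WidthBucket { COMPACT, MEDIUM, EXPANDED, LARGE, EXTRA_LARGE }

fun widthBucket(windowWidthDp: Int): WidthBucket = when {
    windowWidthDp >= 1600 -> WidthBucket.EXTRA_LARGE  // e.g. desktop or connected display
    windowWidthDp >= 1200 -> WidthBucket.LARGE        // e.g. large tablet in landscape
    windowWidthDp >= 840 -> WidthBucket.EXPANDED
    windowWidthDp >= 600 -> WidthBucket.MEDIUM
    else -> WidthBucket.COMPACT
}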

Support all orientations and be resizable

Extend to Android XR

Upgrade your Wear OS apps to Material 3 Design

You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.

#2: Enhance your app’s performance optimization

Get ready to take your app’s performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.

Redesigned UiAutomator API

    • To make benchmarking reliable and reproducible, there’s the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.

Macrobenchmarks

    • Once your tests are in place, it’s time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app’s health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app’s performance and where to focus your efforts.
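
For reference, a cold-startup macrobenchmark is only a few lines. This sketch assumes the standard Macrobenchmark setup in a separate benchmark module, with your own applicationId in place of the placeholder package name.

import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.app",         // placeholder: use your applicationId
        metrics = listOf(StartupTimingMetric()), // reports time to initial and full display
        iterations = 5,
        startupMode = StartupMode.COLD,
    ) {
        pressHome()
        startActivityAndWait() // launches the default activity and waits for the first frame
    }
}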

R8: More than code shrinking and obfuscation

    • You might know R8 as a code shrinking tool, but it’s capable of so much more! The talk dives into R8’s capabilities using the “Androidify” sample app. You’ll see how to apply R8, troubleshoot any issues (like crashes!), and configure it for optimal performance. It also shows how library developers can include consumer keep rules so that their important code is not removed when used in an application.
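
If R8 isn’t already enabled for your release build, the typical Gradle configuration looks like the sketch below (Kotlin DSL); adjust it to your existing setup. Library authors can additionally ship consumer keep rules via consumerProguardFiles so their critical code survives shrinking in consuming apps.

// app/build.gradle.kts (sketch; merge with your existing release configuration)
android {
    buildTypes {
        release {
            isMinifyEnabled = true    // enables R8 shrinking, obfuscation, and optimization
            isShrinkResources = true  // removes resources left unused after code shrinking
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro", // app-specific keep rules
            )
        }
    }
}

// In a library module, declare rules that consuming apps should apply:
// android { defaultConfig { consumerProguardFiles("consumer-rules.pro") } }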

#3: Build Richer Image and Video Experiences

In today’s digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.

Media3Effects in CameraX Preview

    • At Google I/O, we shared practical strategies for capturing high-quality video using CameraX while simultaneously leveraging Media3 effects on the preview.

Google Low-Light Boost

    • Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.

New Camera & Media Samples!

Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.

Learn how to build adaptive apps

Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.

The Android Show: I/O Edition – what Android devs need to know! (June 5, 2025)

Posted by Matthew McCullough – Vice President, Product Management, Android Developer

We just dropped an I/O Edition of The Android Show, where we unpacked exciting new experiences coming to the Android ecosystem: a fresh and dynamic look and feel, smarts across your devices, and enhanced safety and security features. Join Sameer Samat, President of Android Ecosystem, and the Android team to learn about these exciting new developments in the episode below, and read about all of the updates for users.

Tune into Google I/O next week – including the Developer Keynote as well as the full Android track of sessions – where we’re covering these topics in more detail and how you can get started.

Start building with Material 3 Expressive

The world of UX design is constantly evolving, and you deserve the tools to create truly engaging and impactful experiences. That’s why Material Design’s latest evolution, Material 3 Expressive, provides new ways to make your product more engaging, easy to use, and desirable. Learn more, and try out the new Material 3 Expressive: an expansion pack designed to enhance your app’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. It comes with new components, a motion-physics system, type styles, colors, shapes, and more.

Material 3 Expressive will be coming to Android 16 later this year; check out the Google I/O talk next week where we’ll dive into this in more detail.

A fluid design built for your watch’s round display

Wear OS 6, arriving later this year, brings Material 3 Expressive design to Google’s smartwatch platform. The new design language puts the round watch display at the heart of the experience and is embraced in every component and motion of the system, from buttons to notifications. You’ll be able to try the new visual design and upgrade existing app experiences to a new level. Next week, tune in to the What’s New in Android session to learn more.

Plus some goodies in Android 16…

We also unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, you can try out the latest Beta of Android 16.

A few new features in Android 16 that developers should pay attention to include Live Updates, professional media and camera features, desktop windowing for tablets, major accessibility enhancements, and much more.

Watch the What’s New in Android session and the Live updates talk to learn more.

Tune in next week to Google I/O

This was just a preview of some Android-related news, so remember to tune in next week to Google I/O, where we’ll be diving into a range of Android developer topics in a lot more detail. You can check out What’s New in Android and the full Android track of sessions to start planning your time.

We can’t wait to see you next week, whether you’re joining in person or virtually from anywhere around the world!

Android Design at Google I/O 2025 (June 4, 2025)

Posted by Ivy Knight – Senior Design Advocate

Here’s your guide to the essential Android Design sessions, resources, and announcements for I/O ‘25:

Check out the latest Android updates

The Android Show: I/O Edition

The Android Show had a special I/O edition this year with some exciting announcements like Material Expressive!

Learn more about the new Live Update notification templates in the Android Notifications & Live Updates session for an in-depth look at what they are, when to use them, and why. You can also get the Live Update design template in the Android UI Kit, read more in the updated Notification guidance, and get hands-on with the Jetsnack Live Updates and Widget case study.

Make your apps more expressive

Get a jump on the future of Google’s UX design: Material 3 Expressive. Learn how to use new emotional design patterns to boost engagement, usability, and desire for your product in the Build Next-Level UX with Material 3 Expressive session and check out the expressive update on Material.io.

Stay up to date with Android Accessibility Updates, highlighting accessibility features launching with Android 16: enhanced dark themes, options for those with motion sickness, a new way to increase text contrast, and more.

Catch the Mastering text input in Compose session to learn more about how engaging, robust text experiences are built with Jetpack Compose. It covers Autofill integration, dynamic text resizing, and custom input transformations. This is a great session to watch to see what’s possible when designing text inputs.

Thinking across form factors

These design resources and sessions can help you design across more Android form factors or update your existing experiences.

Preview Gemini in-car, imagining seamless navigation and personalized entertainment in the New In-Car App Experiences session. Then explore the new Car UI Design Kit to bring your app to Android Car platforms and speed up your process with the latest Android form factor kit.

The Engaging with users on Google TV with excellent TV apps session discusses new ways the Google TV experience is making it easier for users to find and engage with content, including improvements to out-of-box solutions and updates to Android TV OS.

Want a peek at how to bring immersive content, like 3D models, to Android XR? Check out the Building differentiated apps for Android XR with 3D Content session.

Plus, Wear OS is releasing an updated design kit on the @AndroidDesign Figma and a learning pathway.

Tip top apps

We’ve also released the following new Android design guidance to help you design the best Android experiences:

In-app Settings

Read up on the latest suggested patterns to build out your app’s settings.

Help and Feedback

Along with settings, learn about adding help and feedback to your app.

Widget Configuration

Does your app’s widget need setup? New guidance helps you add configuration to your app’s widgets.

Edge-to-edge design

Allow your apps to take full advantage of the entire screen with the latest guidance on designing for edge-to-edge.
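
On the implementation side, opting into edge-to-edge from an activity is a one-line call with the AndroidX Activity library. MyApp is a placeholder for your root composable, and your layouts should then consume window insets so content isn’t hidden behind the system bars.

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.activity.enableEdgeToEdge

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        enableEdgeToEdge() // draw behind the status and navigation bars
        super.onCreate(savedInstanceState)
        setContent {
            MyApp() // placeholder: handle WindowInsets inside your own layouts
        }
    }
}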

Check out figma.com/@androiddesign for even more new and updated resources.

Visit the I/O 2025 website, build your schedule, and engage with the community. If you are at the Shoreline, come say hello to us in the Android tent at our booths.

We can’t wait to see what you create with these new tools and insights. Happy I/O!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Google I/O 2025: Build adaptive Android apps that shine across form factors (June 4, 2025)

Posted by Fahd Imtiaz – Product Manager, Android Developer

If your app isn’t built to adapt, you’re missing out on the opportunity to reach a giant swath of users across 500 million devices! At Google I/O this year, we are exploring how adaptive development isn’t just a good idea, but essential to building apps that shine across the expanding Android device ecosystem. This is your guide to meeting users wherever they are, with experiences that are perfectly tailored to their needs.

The advantage of building adaptive

In today’s multi-device world, users expect their favorite applications to work flawlessly and intuitively, whether they’re on a smartphone, tablet, or Chromebook. This expectation for seamless experiences isn’t just about convenience; it’s an important factor for user engagement and retention.

For example, for entertainment apps (including Prime Video, Netflix, and Hulu), users on both phone and tablet spend almost 200% more time in-app (nearly 3x engagement) than phone-only users in the US*.

Peacock, NBCUniversal’s streaming service, has seen a trend of users moving between mobile and large screens, and building adaptively enables a single build to work across the different form factors.

“This allows Peacock to have more time to innovate faster and deliver more value to its customers.”

– Diego Valente, Head of Mobile, Peacock and Global Streaming

Adaptive Android development offers the strategic solution, enabling apps to perform effectively across an expanding array of devices and contexts through intelligent design choices that emphasize code reuse and scalability. With Android’s continuous growth into new form factors and upcoming enhancements such as desktop windowing and connected displays in Android 16, an app’s ability to seamlessly adapt to different screen sizes is becoming increasingly crucial for retaining users and staying competitive.

Beyond direct user benefits, designing adaptively also translates to increased visibility. The Google Play Store actively helps promote developers whose apps excel on different form factors. If your application delivers a great experience on tablets or is excellent on ChromeOS, users on those devices will have an easier time discovering your app. This creates a win-win situation: better quality apps for users and a broader audience for you.

Examples of form factors across small phones, tablets, laptops, and auto

Latest in adaptive Android development from Google I/O

To help you more effectively build compelling adaptive experiences, we shared several key updates at I/O this year.

Build for the expanding Android device ecosystem

Your mobile apps can now reach users beyond phones on over 500 million active devices, including foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 introduces significant advancements in desktop windowing for a true desktop-like experience on large screens and when devices are connected to external displays. And, Android XR is opening a new dimension, allowing your existing mobile apps to be available in immersive virtual environments.

The mindset shift to Adaptive

With the expanding Android device ecosystem, adaptive app development is a fundamental strategy. It’s about how the same mobile app runs well across phones, foldables, tablets, Chromebooks, connected displays, XR, and cars, laying a strong foundation for future devices and differentiating for specific form factors. You don’t need to rebuild your app for each form factor; but rather make small, iterative changes, as needed, when needed. Embracing this adaptive mindset today isn’t just about keeping pace; it’s about leading the charge in delivering exceptional user experiences across the entire Android ecosystem.

examples of form factors including vr headset

Leverage powerful tools and libraries to build adaptive apps:

    • Compose Adaptive Layouts library: This library makes adaptive development easier by allowing your app code to fit into canonical layout patterns like list-detail and supporting pane that automatically reflow as your app is resized, flipped, or folded. In the 1.1 release, we introduced pane expansion, allowing users to resize panes. The Socialite demo app showcased how one codebase using this library can adapt across six form factors. New adaptation strategies like “Levitate” (elevating a pane, e.g., into a dialog or bottom sheet) and “Reflow” (reorganizing panes on the same level) were also announced in 1.2 (alpha). For XR, component overrides can automatically spatialize UI elements.

    • Jetpack Navigation 3 (Alpha): This new navigation library simplifies defining user journeys across screens with less boilerplate code, especially for multi-pane layouts in Compose. It helps handle scenarios where list and detail panes might be separate destinations on smaller screens but shown together on larger ones. Check out the new Jetpack Navigation library in alpha.

    • Jetpack Compose input enhancements: Compose’s layered architecture, strong input support, and single location for layout logic simplify creating adaptive UIs. Upcoming in Compose 1.9 are right-click context menus and enhanced trackpad/mouse functionality.

    • Window Size Classes: Use window size classes for top-level layout decisions. AndroidX.window 1.5 introduces two new width size classes – “large” (1200dp to 1600dp) and “extra-large” (1600dp and larger) – providing more granular breakpoints for large screens. This helps in deciding when to expand navigation rails or show three panes of content. Support for these new breakpoints was also announced in the Compose adaptive layouts library 1.2 alpha, along with design guidance.

    • Compose previews: Get quick feedback by visualizing your layouts across a wide variety of screen sizes and aspect ratios. You can also specify different devices by name to preview your UI on their respective sizes and with their inset values. A minimal multi-size preview sketch follows this list.

    • Testing adaptive layouts: Validating your adaptive layouts is crucial and Android Studio offers various tools for testing – including previews for different sizes and aspect ratios, a resizable emulator to test across different screen sizes with a single AVD, screenshot tests, and instrumental behavior tests. And with Journeys with Gemini in Android Studio, you can define tests using natural language for even more robust testing across different window sizes.
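
As mentioned in the Compose previews item above, here is a minimal sketch of previewing one composable across several window sizes. MyScreen is a placeholder for your own composable, and the device strings use the documented spec: format.

import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

@Preview(name = "Phone", device = "spec:width=411dp,height=891dp")
@Preview(name = "Unfolded foldable", device = "spec:width=673dp,height=841dp")
@Preview(name = "Tablet", device = "spec:width=1280dp,height=800dp,dpi=240")
@Preview(name = "Desktop window", device = "spec:width=1920dp,height=1080dp,dpi=160")
@Composable
fun MyScreenPreviews() {
    MyScreen() // placeholder for the screen you want to check at each size
}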

Ensuring app availability across devices

Avoid unnecessarily declaring required features (like specific cameras or GPS) in your manifest, as this can prevent your app from appearing in the Play Store on devices that lack those specific hardware components but could otherwise run your app perfectly.

Handling different input methods

Remember to handle various input methods like touch, keyboard, and mouse, especially with Chromebook detachables and connected displays.

Prepare for orientation and resizability API changes in Android 16

Beginning in Android 16, for apps targeting SDK 36, manifest and runtime restrictions on orientation, resizability, and aspect ratio will be ignored on displays that are at least 600dp in both dimensions. To meet user expectations, your apps will need layouts that work for both portrait and landscape windows, and support resizing at runtime. There’s a temporary opt-out manifest flag at both the application and activity level to delay these changes until targetSdk 37, and these changes currently do not apply to apps categorized as “Games”. Learn more about these API changes.

Adaptive considerations for games

Games need to be adaptive too, and Unity 6 will add enhanced support for configuration handling, including APIs for screenshots, aspect ratio, and density. Success stories like Asphalt Legends Unite show significant user retention increases on foldables after implementing adaptive features.

examples of form factors including vr headset

Start building adaptive today

Now is the time to elevate your Android apps, making them intuitively responsive across form factors. With the latest tools and updates we’re introducing, you have the power to build experiences that seamlessly flow across all devices, from foldables to cars and beyond. Implementing these strategies will allow you to expand your reach and delight users across the Android ecosystem.

Get inspired by the “Adaptive Android development makes your app shine across devices” talk, and explore all the resources you’ll need to start your journey at developer.android.com/adaptive-apps!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

*Source: internal Google data

Androidify: Building delightful UIs with Compose (June 3, 2025)

Posted by Rebecca Franks – Developer Relations Engineer

Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

Material 3 Expressive

Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.

Material Expressive Component updates

In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

@Composable
fun AndroidifyTheme(
   content: @Composable () -> Unit,
) {
   val colorScheme = LightColorScheme


   MaterialExpressiveTheme(
       colorScheme = colorScheme,
       typography = Typography,
       shapes = shapes,
       motionScheme = MotionScheme.expressive(),
       content = {
           SharedTransitionLayout {
               CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                   content()
               }
           }
       },
   )
}

Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

moving example of expressive button shapes in slow motion

The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:

Camera button with a MaterialShapes.Cookie9Sided shape

Animations

Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across a screen (such as x/y position, rotation, or scale animations):

val interactionSource = remember { MutableInteractionSource() }
val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
Spacer(
   modifier
       .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
       .clip(MaterialShapes.Cookie9Sided.toShape())
       .size(size)
       .drawWithCache {
           //.. etc
       },
)

Camera button scale interaction

Shared element animations

The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

moving example of expressive button shapes in slow motion

To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

@Composable
fun Modifier.sharedBoundsRevealWithShapeMorph(
    sharedContentState: SharedTransitionScope.SharedContentState,
    sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
    animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
    boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
    resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
    restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
    targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
)

Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

val animatedProgress =
   animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)


val morph = remember {
   Morph(restingShape, targetShape)
}
val morphClip = MorphOverlayClip(morph, { animatedProgress.value })


return this@sharedBoundsRevealWithShapeMorph
   .sharedBounds(
       sharedContentState = sharedContentState,
       animatedVisibilityScope = animatedVisibilityScope,
       boundsTransform = boundsTransform,
       resizeMode = resizeMode,
       clipInOverlayDuringTransition = morphClip,
       renderInOverlayDuringTransition = renderInOverlayDuringTransition,
   )

View the full code snippet for this Modifier on GitHub.

Autosize text

With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

BasicText(
    text,
    style = MaterialTheme.typography.titleLarge,
    autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
)

This is used front and center for the “Customize your own Android Bot” text:

“Customize your own Android Bot” text with inline GIF

This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

@Composable
private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
   Box(modifier = modifier) {
       val animatedBot = "animatedBot"
       val text = buildAnnotatedString {
           append(stringResource(R.string.customize))
           // Attach "animatedBot" annotation on the placeholder
           appendInlineContent(animatedBot)
           append(stringResource(R.string.android_bot))
       }
       var placeHolderSize by remember {
           mutableStateOf(220.sp)
       }
       val inlineContent = mapOf(
           Pair(
               animatedBot,
               InlineTextContent(
                   Placeholder(
                       width = placeHolderSize,
                       height = placeHolderSize,
                       placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                   ),
               ) {
                   DancingBot(
                       modifier = Modifier
                           .padding(top = 32.dp)
                           .fillMaxSize(),
                   )
               },
           ),
       )
       BasicText(
           text,
           modifier = Modifier
               .align(Alignment.Center)
               .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
           style = MaterialTheme.typography.titleLarge,
           autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
           maxLines = 6,
           onTextLayout = { result ->
               placeHolderSize = result.layoutInput.style.fontSize * 3.5f
           },
           inlineContent = inlineContent,
       )
   }
}

Composable visibility with onLayoutRectChanged

With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

var buttonBounds by remember {
   mutableStateOf<RelativeLayoutBounds?>(null)
}
var showColorSplash by remember {
   mutableStateOf(false)
}
Box(modifier = Modifier.fillMaxSize()) {
   PrimaryButton(
       buttonText = "Let's Go",
       modifier = Modifier
           .align(Alignment.BottomCenter)
           .onLayoutRectChanged(
               callback = { bounds ->
                   buttonBounds = bounds
               },
           ),
       onClick = {
           showColorSplash = true
       },
   )
}

We use these bounds as an indication of where to start the color splash animation from.

moving image of a blue color splash transition between Androidify demo screens

Learn more delightful details

From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

animated marquee example

animated gradient button for AI powered actions example

animated loading screen example

Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose from using Material 3 Expressive, the new modifiers, auto-sizing text and of course a couple of delightful interactions!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Androidify: How Androidify leverages Gemini, Firebase and ML Kit (June 3, 2025)

Posted by Thomas Ezan – Developer Relations Engineer, Rebecca Franks – Developer Relations Engineer, and Avneet Singh – Product Manager

We’re bringing back Androidify later this year, this time powered by Google AI, so you can customize your very own Android bot and share your creativity with the world. Today, we’re releasing a new open source demo app for Androidify as a great example of how Google is using its Gemini AI models to enhance app experiences.

In this post, we’ll dive into how the Androidify app uses Gemini models and Imagen via the Firebase AI Logic SDK, and we’ll provide some insights learned along the way to help you incorporate Gemini and AI into your own projects. Read more about the Androidify demo app.

App flow

The overall app functions as follows, with various parts of it using Gemini and Firebase along the way:

flow chart demonstrating Androidify app flow

Gemini and image validation

To get started with Androidify, take a photo or choose an image on your device. The app needs to make sure that the image you upload is suitable for creating an avatar.

Gemini 2.5 Flash via Firebase helps with this by verifying that the image contains a person and that the person is in focus, and by assessing image safety, including whether the image contains abusive content.

val jsonSchema = Schema.obj(
    properties = mapOf("success" to Schema.boolean(), "error" to Schema.string()),
    optionalProperties = listOf("error"),
)

val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-2.5-flash-preview-04-17",
        generationConfig = generationConfig {
            responseMimeType = "application/json"
            responseSchema = jsonSchema
        },
        safetySettings = listOf(
            SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.CIVIC_INTEGRITY, HarmBlockThreshold.LOW_AND_ABOVE),
        ),
    )

val response = generativeModel.generateContent(
    content {
        text("You are to analyze the provided image and determine if it is acceptable and appropriate based on specific criteria... (see the full sample for more details)")
        image(image)
    },
)

// response.text is nullable, so assert that the model actually returned text before parsing.
val jsonResponse = Json.parseToJsonElement(response.text!!)
val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
val error = jsonResponse.jsonObject["error"]?.jsonPrimitive?.content

In the snippet above, we’re leveraging the model’s structured output capabilities by defining the schema of the response. We pass a Schema object via the responseSchema param in the generationConfig.

We want to validate that the image has enough information to generate a nice Android avatar, so we ask the model to return a JSON object with success = true/false and an optional error message explaining why the image doesn’t have enough information.

Structured output is a powerful feature that enables a smoother integration of LLMs into your app by controlling the format of their output, similar to an API response.
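
As an illustration of how that structured response can be consumed, here is a small sketch (not the app’s actual code; the data class name is ours) that maps it onto a typed Kotlin class with kotlinx.serialization instead of walking the JsonElement manually:

import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

@Serializable
data class ImageValidationResult(
    val success: Boolean,
    val error: String? = null, // only populated when the image is rejected
)

// response.text can be null, so fail fast if the model returned no text.
val validation = Json { ignoreUnknownKeys = true }
    .decodeFromString<ImageValidationResult>(requireNotNull(response.text))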

Image captioning with Gemini Flash

Once it’s established that the image contains sufficient information to generate an Android avatar, it is captioned using Gemini 2.5 Flash with structured output.

val jsonSchema = Schema.obj(
    properties = mapOf(
        "success" to Schema.boolean(),
        "user_description" to Schema.string(),
    ),
    optionalProperties = listOf("user_description"),
)

val generativeModel = createGenerativeTextModel(jsonSchema)

val prompt = "You are to create a VERY detailed description of the main person in the given image. This description will be translated into a prompt for a generative image model..."

val response = generativeModel.generateContent(
    content {
        text(prompt)
        image(image)
    },
)

val jsonResponse = Json.parseToJsonElement(response.text!!)
val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
val userDescription = jsonResponse.jsonObject["user_description"]?.jsonPrimitive?.content

The other option in the app is to start with a text prompt. You can enter details about your accessories, hairstyle, and clothing, and let Imagen be a bit more creative.

Android generation via Imagen

We’ll use this detailed description of your image to enrich the prompt used for image generation. We’ll add extra details about what we would like to generate and include the user’s bot color selection, including the chosen skin tone, as part of the prompt.

val imagenPrompt = "A 3D rendered cartoonish Android mascot in a photorealistic style, the pose is relaxed and straightforward, facing directly forward [...] The bot looks as follows $userDescription [...]"

We then call the Imagen model to create the bot. Using this new prompt, we create a model and call generateImages:

// We supply our own fine-tuned model here, but you can use "imagen-3.0-generate-002".
val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
    "imagen-3.0-generate-002",
    safetySettings = ImagenSafetySettings(
        ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.ALLOW_ALL,
    ),
)

val response = generativeModel.generateImages(imagenPrompt)

val image = response.images.first().asBitmap()

And that’s it! The Imagen model generates a bitmap that we can display on the user’s screen.
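
Displaying the result is then ordinary Compose. Here is a minimal sketch, with names of our own choosing rather than the app’s actual UI code:

import android.graphics.Bitmap
import androidx.compose.foundation.Image
import androidx.compose.runtime.Composable
import androidx.compose.ui.graphics.asImageBitmap

@Composable
fun GeneratedBot(bitmap: Bitmap) {
    // Convert the Android Bitmap returned by Imagen into an ImageBitmap that Compose can draw.
    Image(
        bitmap = bitmap.asImageBitmap(),
        contentDescription = "Generated Android bot",
    )
}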

Finetuning the Imagen model

The Imagen 3 model was fine-tuned using Low-Rank Adaptation (LoRA). LoRA is a fine-tuning technique designed to reduce the computational burden of training large models. Instead of updating the entire model, LoRA adds smaller, trainable “adapters” that make small adjustments to the model’s behavior. We ran a fine-tuning pipeline on the generally available Imagen 3 model using Android bot assets in different color combinations, plus additional assets for enhanced cuteness and fun. We generated text captions for the training images, and these image-text pairs were used to fine-tune the model effectively.
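
In the standard LoRA formulation (stated here for context; this is not a description of the internal training setup), a frozen weight matrix W is augmented with a low-rank update, and only the low-rank factors are trained:

W' = W + \Delta W = W + BA, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)

Because only A and B are updated, the number of trainable parameters drops from d·k to r·(d + k), which is what keeps the fine-tuning cost low.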

The current sample app uses a standard Imagen model, so the results may look a bit different from the visuals in this post. However, the version of the app that uses the fine-tuned model and a custom build of the Firebase AI Logic SDK was demoed at Google I/O. That app will be released later this year, and we are also planning to add support for fine-tuned models to the Firebase AI Logic SDK later in the year.

moving image of Androidify app demo turning a selfie image of a bearded man wearing a black tshirt and sunglasses, with a blue back pack into a green 3D bearded droid wearing a black tshirt and sunglasses with a blue backpack

The original image… and Androidifi-ed image

ML Kit

The app also uses the ML Kit Pose Detection SDK to detect a person in the camera view, which triggers the capture button and adds visual indicators.

To do this, we add the SDK to the app and use PoseDetection.getClient(). Then, using the poseDetector, we look at the detectedLandmarks in the streaming images coming from the camera, and we set _uiState.detectedPose to true if a nose and both shoulders are visible:

private suspend fun runPoseDetection() {
    PoseDetection.getClient(
        PoseDetectorOptions.Builder()
            .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
            .build(),
    ).use { poseDetector ->
        // Since image analysis is processed by ML Kit asynchronously in its own thread pool,
        // we can run this directly from the calling coroutine scope instead of pushing this
        // work to a background dispatcher.
        cameraImageAnalysisUseCase.analyze { imageProxy ->
            imageProxy.image?.let { image ->
                val poseDetected = poseDetector.detectPersonInFrame(image, imageProxy.imageInfo)
                _uiState.update { it.copy(detectedPose = poseDetected) }
            }
        }
    }
}

private suspend fun PoseDetector.detectPersonInFrame(
    image: Image,
    imageInfo: ImageInfo,
): Boolean {
    val results = process(InputImage.fromMediaImage(image, imageInfo.rotationDegrees)).await()
    val landmarkResults = results.allPoseLandmarks
    val detectedLandmarks = mutableListOf<Int>()
    for (landmark in landmarkResults) {
        if (landmark.inFrameLikelihood > 0.7) {
            detectedLandmarks.add(landmark.landmarkType)
        }
    }

    return detectedLandmarks.containsAll(
        listOf(PoseLandmark.NOSE, PoseLandmark.LEFT_SHOULDER, PoseLandmark.RIGHT_SHOULDER),
    )
}

moving image showing the camera shutter button activating when an orange droid figurine is held in the camera frame

The camera shutter button is activated when a person (or a bot!) enters the frame.

Get started with AI on Android

The Androidify app makes extensive use of Gemini 2.5 Flash to validate the image and to generate the detailed description used for image generation. It also leverages the specifically fine-tuned Imagen 3 model to generate images of Android bots. Gemini and Imagen models are easily integrated into the app via the Firebase AI Logic SDK. In addition, the ML Kit Pose Detection SDK controls the capture button, enabling it only when a person is present in front of the camera.

To get started with AI on Android, go to the Gemini and Imagen documentation for Android.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Android’s Kotlin Multiplatform announcements at Google I/O and KotlinConf 25 https://theinshotproapk.com/androids-kotlin-multiplatform-announcements-at-google-i-o-and-kotlinconf-25/ Mon, 02 Jun 2025 12:08:33 +0000

Posted by Ben Trengrove – Developer Relations Engineer, Matt Dyor – Product Manager

Google I/O and KotlinConf 2025 bring a series of announcements on Android’s Kotlin and Kotlin Multiplatform efforts. Here’s what to watch out for:

Announcements from Google I/O 2025

Jetpack libraries

Our focus for Jetpack libraries and KMP is on sharing business logic across Android and iOS, but we have begun experimenting with web/WASM support.

We are adding KMP support to Jetpack libraries. Last year we started with Room, DataStore and Collection, which are now available in stable releases, and we have recently added ViewModel, SavedState and Paging. The level of support our Jetpack libraries guarantee for each platform is categorised into three tiers, with the top tier covering Android, iOS and the JVM.
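
To make that concrete, here is a simplified sketch of what shared Room code can look like in a KMP module’s commonMain source set (entity and DAO names are ours; the full setup, including the platform-specific database builders, is covered in the Room KMP documentation):

// commonMain — shared across Android and iOS (androidx.room annotations)
@Entity
data class Bot(
    @PrimaryKey val id: Long,
    val name: String,
)

@Dao
interface BotDao {
    @Query("SELECT * FROM Bot")
    suspend fun getAll(): List<Bot>

    @Insert
    suspend fun insert(bot: Bot)
}

@Database(entities = [Bot::class], version = 1)
abstract class BotDatabase : RoomDatabase() {
    abstract fun botDao(): BotDao
}

Each platform then provides its own RoomDatabase.Builder, while the entities, DAOs and queries stay in common code.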

Tool improvements

We’re developing new tools to help you start using KMP in your app. With the new KMP module template in Android Studio Meerkat, you can add a new module to an existing app and share code with iOS and the other supported KMP platforms.

In addition to the KMP enhancements, Android Studio now supports Kotlin K2 mode for Android-specific features that require language support, such as Live Edit, Compose Preview and more.

How Google is using KMP

Last year, Google Workspace began experimenting with KMP, and this is now running in production in the Google Docs app on iOS. The app’s runtime performance is on par with or better than it was before.[1]

It’s been helpful to have an app at this scale exercising KMP, because it lets us identify and fix issues that benefit the whole KMP developer community.

For example, we’ve upgraded the Kotlin Native compiler to LLVM 16 and contributed a more efficient garbage collector and string implementation. We’re also bringing the static analysis power of Android Lint to Kotlin targets and ensuring a unified Gradle DSL for both AGP and KGP to improve the plugin management experience.

New guidance

We’re providing comprehensive guidance in the form of two new codelabs: Getting started with Kotlin Multiplatform and Migrating your Room database to KMP, to help you get from standalone Android and iOS apps to shared business logic.

Kotlin Improvements

Kotlin Symbol Processing 2 (KSP2) is now stable, with better support for new Kotlin language features and improved performance. It is easier to integrate with build systems, is thread-safe, and has better support for debugging annotation processors. In contrast to KSP1, KSP2 has much better compatibility across different Kotlin versions. The rewritten command-line interface is also significantly easier to use now that it is a standalone program instead of a compiler plugin.
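
If you want to try it in a project, the Gradle side looks roughly like the sketch below. The plugin ID is the standard KSP one; the version string is a placeholder, and whether you need the opt-in property depends on the KSP release you are on (recent releases ship KSP2 by default), so confirm both against the KSP release notes:

// build.gradle.kts — apply the KSP plugin as usual (version is a placeholder)
plugins {
    id("com.google.devtools.ksp") version "<ksp-version>"
}

// gradle.properties — opt in to KSP2 on releases where it is not yet the default
// (property name per the KSP docs; verify for your version)
ksp.useKSP2=true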

KotlinConf 2025

Google team members are presenting a number of talks at KotlinConf spanning multiple topics:

Talks

    • Deploying KMP at Google Workspace by Jason Parachoniak, Troels Lund, and Johan Bay from the Workspace team discusses the challenges and solutions, including bugs and performance optimizations, encountered when launching Kotlin Multiplatform at Google Workspace, offering comparisons to ObjectiveC and a Q&A. (Technical Session)

    • The Life and Death of a Kotlin/Native Object by Troels Lund offers a high-level explanation of the Kotlin/Native runtime’s inner workings concerning object instantiation, memory management, and disposal. (Technical Session)

    • APIs: How Hard Can They Be? presented by Aurimas Liutikas and Alan Viverette from the Jetpack team delves into the lifecycle of API design, review processes, and evolution within AndroidX libraries, particularly considering KMP and related tools. (Technical Session)

    • Project Sparkles: How Compose for Desktop is changing Android Studio and IntelliJ with Chris Sinco and Sebastiano Poggi from the Android Studio team introduces the initiative (‘Project Sparkles’) aiming to modernize Android Studio and IntelliJ UIs using Compose for Desktop, covering goals, examples, and collaborations. (Technical Session)

    • JSpecify: Java Nullness Annotations and Kotlin presented by David Baker explains the significance and workings of JSpecify’s standard Java nullness annotations for enhancing Kotlin’s interoperability with Java libraries. (Lightning Session)

    • Lessons learned decoupling Architecture Components from platform specific code features Jeremy Woods and Marcello Galhardo from the Jetpack team sharing insights from the Android team on decoupling core components like SavedState and System Back from platform specifics to create common APIs. (Technical Session)

    • KotlinConf’s Closing Panel, a regular staple of the conference, returns, featuring Jeffrey van Gogh as Google’s representative on the panel. (Panel)

Live Workshops

If you are at KotlinConf in person, we will have guided live workshops based on the new codelabs above.

    • The codelab Migrating Room to Room KMP, led by Matt Dyor, Dustin Lam, and Tomáš Mlynarič, demonstrates the process of migrating an existing Room database implementation to Room KMP within a shared module.

We love engaging with the Kotlin community. If you are attending KotlinConf, we hope you get a chance to check out our booth, with opportunities to chat with our engineers, get your questions answered, and learn more about how you can leverage Kotlin and KMP.

Learn more about Kotlin Multiplatform

To learn more about KMP and start sharing your business logic across platforms, check out our documentation and the sample.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

[1] Google Internal Data, March 2025
