#TheAndroidShow

New agentic experiences for Android Studio, new AI APIs, the first Android XR device and more, in our Fall episode of The Android Show
Sat, 01 Nov 2025


The post New agentic experiences for Android Studio, new AI APIs, the first Android XR device and more, in our Fall episode of The Android Show appeared first on InShot Pro.


Posted by Matthew McCullough, VP of Product Management, Android Developer

We’re in an important moment where AI changes everything, from how we work to the expectations that users have for your apps, and our goal on Android is to transform this AI evolution into opportunities for you and your users. Today in our Fall episode of The Android Show, we unpacked a bunch of new updates towards delivering the highest return on investment in building for the Android platform. From new agentic experiences for Gemini in Android Studio to a brand new on-device AI API to the first Android XR device, there’s so much to cover – let’s dive in! 


Build your own custom Gen AI features with the new Prompt API

On Android, we offer AI models on-device or in the cloud. At I/O this May, we launched our on-device GenAI APIs built on the Gemini Nano model, making common tasks like summarization, proofreading, and image description easier with simple APIs. Today, we’re excited to give you full flexibility to shape the output of the Gemini Nano model by passing in any prompt you can imagine with the new Prompt API, now in Alpha. On flagship Android devices, Gemini Nano lets you build efficient on-device features where users’ data never leaves their device. Kakao used the Prompt API to transform its parcel delivery service: instead of a slow, manual process where users copied and pasted details into a form, users now send a simple message requesting a delivery, and the API automatically extracts all the necessary information. This single feature reduced order completion time by 24% and boosted new user conversion by an incredible 45%.
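As a rough illustration of the Kakao-style flow, free-form message in, structured fields out, a sketch like the following captures the idea. The names here (`promptModel`, `generateContent`) are illustrative assumptions, not the Prompt API’s confirmed surface; consult the official documentation for the real signatures.

```kotlin
// Hypothetical sketch only: promptModel and generateContent are illustrative
// stand-ins for the Prompt API's actual surface, which may differ.
suspend fun extractDeliveryRequest(userMessage: String): String {
    val prompt = """
        Extract the recipient name, address, and package description from the
        message below, and return them as JSON with keys "name", "address",
        and "description".

        Message: $userMessage
    """.trimIndent()
    // On supported flagship devices this runs fully on-device via Gemini Nano,
    // so the user's message never leaves the device.
    return promptModel.generateContent(prompt)
}
```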

Tap into Nano Banana and Imagen using the Firebase SDK 

When you want to add cutting-edge capabilities across the entire fleet of Android devices, our cloud-based AI solutions with Firebase AI Logic are a great fit. The excitement for models like Gemini 2.5 Flash Image (a.k.a. Nano Banana) and Imagen has been incredible; your users can now generate and edit images using Nano Banana, and for finer control, like selecting and transforming specific parts of an image, they can use the new mask-based editing feature that leverages the Imagen model. See our blog post to learn more. Beyond image generation, you can also use Gemini’s multimodal capabilities to process text, audio, and image input. RedBus, for example, revolutionized its user reviews by using Gemini Flash via Firebase AI Logic to make giving feedback easier, more inclusive, and more reliable. The old problem? Short, low-quality text reviews. The new solution? Users can now leave reviews using voice input in their native languages, and Gemini Flash generates a structured text response from the audio, enabling longer, richer, and more reliable user reviews. It’s a win for everyone: travelers, operators, and developers!
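As a minimal sketch of what calling Nano Banana through Firebase AI Logic can look like in Kotlin (the function name and prompt text are placeholders):

```kotlin
// Sketch: generate an image with Gemini 2.5 Flash Image via Firebase AI Logic.
suspend fun generateImage(prompt: String): Bitmap {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        modelName = "gemini-2.5-flash-image-preview",
        generationConfig = generationConfig {
            // Ask the model to return an image alongside any text.
            responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
        },
    )
    val response = model.generateContent(content { text(prompt) })
    // Pull the first image part out of the response candidates.
    return response.candidates.firstOrNull()
        ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
        ?: error("No image returned")
}
```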



Helping you be more productive, with agentic experiences in Android Studio

Helping you be more productive is our goal with Gemini in Android Studio, and it’s why we’re infusing AI across our tooling. Developers like Pocket FM have seen impressive development time savings of 50%. With the recent launch of Agent Mode, you can describe a complex goal in natural language and, with your permission, the agent plans and executes changes across multiple files in your project. The agent’s answers are now grounded in the most modern development practices, and it can even cross-reference our latest documentation in real time. We demoed new agentic experiences, including updates to Agent Mode, the ability to upgrade APIs on your behalf, and the new project assistant, and we announced that you’ll be able to bring any LLM of your choice to power the AI functionality inside Android Studio, giving you more flexibility and choice in how you incorporate AI into your workflow. And for the newest stable features, such as Back Up and Sync, make sure to download the latest stable version of Android Studio.



Elevating AI-assisted Android development, and improving LLMs with an Android benchmark

Our goal is to make it easier for Android developers to build great experiences. With more code being written by AI, developers have been asking for models that know more about Android development. We want to help developers be more productive, and that’s why we’re building a new task set for LLMs covering a range of common Android development areas. The goal is to provide LLM makers with a benchmark, a north star of high-quality Android development, so Android developers have a range of helpful models to choose from for AI assistance.


To reflect the challenges of Android development, the benchmark is composed of real-world problems sourced from public GitHub Android repositories. Each evaluation attempts to have an LLM recreate a pull request, which is then verified using human-authored tests. This allows us to measure a model’s ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day.

We’re finalizing the task set we’ll be testing against LLMs, and will be sharing the results publicly in the coming months. We’re looking forward to seeing how this shapes AI assisted Android development, and the additional flexibility and choice it gives you to build on Android.

The first Android XR device: Samsung Galaxy XR

Last week marked the launch of the first in a new wave of Android XR devices: the Galaxy XR, built in partnership with Samsung. Android XR devices are built entirely in the Gemini era, creating a major new platform opportunity for your app. And because Android XR is built on top of familiar Android frameworks, when you build adaptively, you’re already building for XR. To unlock the full potential of Android XR features, you can use the Jetpack XR SDK. The Calm team provides a perfect example of this in action: by leveraging their existing Android codebase and the Jetpack XR SDK, they transformed their mobile app into an immersive spatial experience, building their first functional XR menus on day one and a core XR experience in just two weeks. You can read more about Android XR from our Spotlight Week last week.


Jetpack Navigation 3 is in Beta

The new Jetpack Navigation 3 library is now in beta! Instead of embedding behavior into the library itself, we’re providing “how-to recipes” with good defaults (see the Nav3 recipes on GitHub). Out of the box, it’s fully customizable, supports animation, and is adaptive. Nav3 was built from the ground up with Compose State as a fundamental building block. This means it fully embraces the declarative programming model: you change the state you own, and Nav3 reacts to that new state. On the Compose front, we’ve been working on making it faster and easier for you to build UI, covering the features you told us you needed from Views while ensuring that Compose stays performant.
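A hedged sketch of that state-driven model, assuming the current Nav3 API shape (HomeScreen and DetailScreen are placeholder composables; check the recipes repo for exact signatures):

```kotlin
// Keys identifying destinations are plain classes you own, not library types.
data object Home : NavKey
data class Detail(val id: String) : NavKey

@Composable
fun App() {
    // The back stack is just Compose state: mutate it, and NavDisplay reacts.
    val backStack = rememberNavBackStack(Home)
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            entry<Home> { HomeScreen(onItemClick = { id -> backStack.add(Detail(id)) }) }
            entry<Detail> { key -> DetailScreen(id = key.id) }
        },
    )
}
```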

Accelerate your business success on Google Play

With AI speeding up app development, Google Play is streamlining your workflow in Play Console so that your business growth can keep up with your code. The reimagined, goal-oriented app dashboard puts actionable metrics front and center. Plus, new capabilities are making your day-to-day operations faster, smarter, and more efficient: from pre-release deep link validation to AI-powered analytics summaries and app string localization. These updates are just the beginning; check out the full list of announcements to get the latest from Play.



Watch the Fall episode of The Android Show

Thank you for tuning into our Fall episode of The Android Show. We’re excited to continue building great things together, and this show is an important part of our conversation with you. We’d love to hear your ideas for our next episode, so please reach out on X or LinkedIn. A special thanks to my co-hosts, Rebecca Gutteridge and Adetunji Dahunsi, for helping us share the latest updates.


Androidify: Building AI first Android Experiences with Gemini using Jetpack Compose and Firebase
Sat, 06 Sep 2025

Posted by Rebecca Franks – Developer Relations Engineer, Tracy Agyemang – Product Marketer, and Avneet Singh – Product Manager

Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it’s available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt of what you’re looking for, add some accessories, and watch as AI builds your unique bot. Once you’ve had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let’s dive in!

Key technical integrations

The Androidify app combines powerful technologies to deliver a seamless and engaging user experience. Here’s a breakdown of the core components and their roles:

AI with Gemini and Firebase

Androidify leverages the Firebase AI Logic SDK to access Google’s powerful Gemini and Imagen* models. This is crucial for several key features:

  • Image validation: The app first uses Gemini 2.5 Flash to validate the user’s photo. This includes checking that the image contains a clear, focused person and meets safety standards before any further processing. This is a critical first step to ensure high-quality and safe outputs.
  • Image captioning: Once validated, the model generates a detailed caption of the user’s image. This is done using structured output, which means the model returns a specific JSON format, making it easier for the app to parse the information. This detailed description helps create a more accurate and creative final result.
  • Android Bot Generation: The generated caption is then used to enrich the prompt for the final image generation. A specifically fine-tuned version of the Imagen 3 model is then called to generate the custom Android bot avatar based on the enriched prompt. This custom fine-tuning ensures the results are unique and align with the app’s playful and stylized aesthetic.
  • “Help me write”: This feature uses Gemini 2.5 Flash to create a random description for a bot’s clothing and hairstyle, adding a fun “I’m feeling lucky” element.
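The structured-output step described above can be sketched with Firebase AI Logic’s JSON response support; the schema fields below are invented for illustration and are not the fields Androidify actually uses:

```kotlin
// Sketch: request a JSON caption for the selfie. Schema fields are illustrative.
val captionModel = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash",
    generationConfig = generationConfig {
        responseMimeType = "application/json"
        responseSchema = Schema.obj(
            mapOf(
                "clothing" to Schema.string(),
                "hairstyle" to Schema.string(),
                "accessories" to Schema.array(Schema.string()),
            ),
        )
    },
)

suspend fun captionSelfie(photo: Bitmap): String {
    val response = captionModel.generateContent(
        content {
            text("Describe this person's clothing, hairstyle and accessories.")
            image(photo)
        },
    )
    // The JSON string can then be parsed into the app's own data class.
    return response.text ?: error("No caption returned")
}
```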

    gif showcasing the help me write button

    UI with Jetpack Compose and CameraX

    The app’s user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.

    For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera’s view, enabling the capture button and adding visual guides for the user. It also makes the app’s camera features responsive to different device types, including foldables in tabletop mode.
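A sketch of how that capture gating can be wired, pairing a CameraX ImageAnalysis.Analyzer with the ML Kit pose detector (the onPersonDetected callback is an illustrative name):

```kotlin
// Sketch: run ML Kit pose detection on CameraX frames so the shutter is only
// enabled when a person is in view.
val poseDetector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build(),
)

@OptIn(ExperimentalGetImage::class)
class PoseAnalyzer(private val onPersonDetected: (Boolean) -> Unit) : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image ?: return imageProxy.close()
        val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        poseDetector.process(input)
            // A pose with landmarks means someone is in frame.
            .addOnSuccessListener { pose -> onPersonDetected(pose.allPoseLandmarks.isNotEmpty()) }
            // Always close the frame so CameraX can deliver the next one.
            .addOnCompleteListener { imageProxy.close() }
    }
}
```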

    Androidify also makes extensive use of the latest Compose features, such as:

  • Adaptive layouts: It’s designed to look great on various screen sizes, from phones to foldables and tablets, by leveraging WindowSizeClass and reusable composables.
  • Shared element transitions: The app uses the new Jetpack Navigation 3 library to create smooth and delightful screen transitions, including morphing shape animations that add a polished feel to the user experience.
  • Auto-sizing text: With Compose 1.8, the app uses a new parameter that automatically adjusts font size to fit the container’s available size, which is used for the app’s main “Customize your own Android Bot” text.

    Figure 1. Androidify Flow
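The adaptive-layout bullet above comes down to branching on the window size class; a minimal sketch, with placeholder composables for the two layouts:

```kotlin
@Composable
fun AndroidifyLayout() {
    // currentWindowAdaptiveInfo() comes from the Compose adaptive library.
    val windowSizeClass = currentWindowAdaptiveInfo().windowSizeClass
    if (windowSizeClass.windowWidthSizeClass == WindowWidthSizeClass.COMPACT) {
        SingleColumnContent()   // phones: one pane
    } else {
        TwoPaneContent()        // tablets and unfolded foldables: two panes
    }
}
```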

    Latest updates

    In the latest version of Androidify, we’ve added some new powerful AI driven features.

    Background vibe generation with Gemini Image editing

    Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background “vibe” to bring the Android bots to life.

A three-part image: an Android bot on the left; a text prompt in the middle reading “A vibrant 3D illustration of a vibrant outdoor garden with fun plants…”; and, on the right, the same Android bot standing in a toy-like garden scene surrounded by brightly colored flowers, with a white picket fence in the background and a red watering can on the ground.

    Figure 2. Combining the Android bot with a background vibe description to generate your new Android Bot in a scene

    This is achieved by using Firebase AI Logic – passing a prompt for the background vibe, and the input image bitmap of the bot, with instructions to Gemini on how to combine the two together.

    override suspend fun generateImageWithEdit(
        image: Bitmap,
        backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
    ): Bitmap {
        val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
            modelName = "gemini-2.5-flash-image-preview",
            generationConfig = generationConfig {
                responseModalities = listOf(
                    ResponseModality.TEXT,
                    ResponseModality.IMAGE,
                )
            },
        )
        // Combine the background prompt with the input image (the Android bot)
        // to produce a new bot composited onto the generated background.
        val prompt = content {
            text(backgroundPrompt)
            image(image)
        }
        val response = model.generateContent(prompt)
        val resultImage = response.candidates.firstOrNull()
            ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
        return resultImage ?: throw IllegalStateException("Could not extract image from model response")
    }

    Sticker mode with ML Kit Subject Segmentation

    The app also includes a “Sticker mode” option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot. You can use “Sticker mode” in apps that support stickers.


    Figure 3. White background removal of Android Bot to create a PNG that can be used with apps that support stickers

The sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if it hasn’t, it requests the download and waits for completion. Once the model is installed, the app passes the original Android bot image to the segmenter and calls process on it to remove the background. The resulting foregroundBitmap object is then returned for exporting.

    // A minimal sketch of the segmentation call described above, assuming the
    // ML Kit Subject Segmentation library and the Play services module-install
    // client; see LocalSegmentationDataSource for the real implementation.
    val segmenter = SubjectSegmentation.getClient(
        SubjectSegmenterOptions.Builder()
            .enableForegroundBitmap()
            .build(),
    )

    suspend fun removeBackground(context: Context, bot: Bitmap): Bitmap {
        // Check that the segmentation model is installed; request it if not.
        val moduleInstall = ModuleInstall.getClient(context)
        val availability = moduleInstall.areModulesAvailable(segmenter).await()
        if (!availability.areModulesAvailable()) {
            val request = ModuleInstallRequest.newBuilder().addApi(segmenter).build()
            moduleInstall.installModules(request).await()
        }
        // Run segmentation on the original bot image and return the cut-out foreground.
        val result = segmenter.process(InputImage.fromBitmap(bot, 0)).await()
        return result.foregroundBitmap
            ?: throw IllegalStateException("Could not extract foreground from segmentation result")
    }

    See the LocalSegmentationDataSource for the full source implementation.

    Learn more

    To learn more about Androidify behind the scenes, take a look at the new solutions walkthrough, inspect the code or try out the experience for yourself at androidify.com or download the app on Google Play.

    moving demo of the Androidify app

    *Check responses. Compatibility and availability varies. 18+.

The latest for devs from Made by Google, updates to Gemini in Android Studio, plus a new Androidify: our summer episode of The Android Show
Wed, 03 Sep 2025

    Posted by Matthew McCullough – VP of Product Management, Android Developer

    In this dynamic and complex ecosystem, our commitment is to your success. That’s why in our summer episode of The Android Show, we’re making it easier for you to build amazing apps by unpacking the latest tools and opportunities. In this episode, we’ll cover how you can get building for Wear OS 6, boost your productivity with the latest Gemini in Android Studio updates, create for the new Pixel 10 Pro Fold, and even have some fun with the new AI-powered Androidify. (And for Android users, we also just dropped a bunch of new feature updates today; you can read more about those here). Let’s dive in!

    Get the most out of Agent Mode in Android Studio with MCP

We’re focused on making you more productive by integrating AI directly into your workflow. Gemini in Android Studio is at the center of this, helping teams like Entri, who reduced UI development time by 40%. You can now connect Model Context Protocol (MCP) servers to Android Studio, expanding the tools, knowledge, and capabilities of the AI agent. We also just launched the Android Studio Narwhal 3 feature drop, which brings more productivity boosters like Resizable Compose Preview and Play Policy Insights.

    Build for every screen with Compose Adaptive Layouts 1.2 beta

The new Pixel 10 Pro Fold creates an incredible canvas for your app, and we’re simplifying development so you can take full advantage of it. The Compose Adaptive Layouts 1.2 library, now officially in beta, makes it easier than ever to build for large screens and embrace adaptive app development. This foundational library is packed with powerful tools to help you create sophisticated, adaptive UIs with less code. Build dynamic, multi-pane experiences using new layout strategies like Reflow and Levitate, and use the new Large and Extra-Large window size classes to make your app more intuitive and engaging than ever. Read more about these new tools here.

    Bring your most expressive apps to the wrist with Wear OS 6

    We want to help you build amazing experiences for the wrist, and the new Pixel Watch 4 with Wear OS 6 provides a powerful new stage for your apps. We’re giving you the tools to make your apps more expressive and personal, with Material 3 Expressive to create stunning UIs. You can also engage users in new ways by building your own marketplace with the Watch Face Push API. All of this is built on a more reliable foundation, with watches updating to Wear OS 6 seeing up to 10% better battery life and faster app launches.

    Androidify yourself, with a selfie + AI!

    Our journey to reimagine Android with Gemini at its center extends to everything we do—including our mascot. That’s why we rebuilt Androidify with AI at its core. With the new Androidify, available on the web or on Google Play, you can use a selfie or a prompt to create your own unique Android bot, powered by Gemini 2.5 Flash and Imagen. This is a fun example of how we’re building better user experiences powered by AI… Try it out for yourself—we can’t wait to see what you build.

Under the hood, we’re using Gemini 2.5 Flash to validate the prompt and Imagen to create your Android bot. And on Fridays this month, you’ll be able to animate your Android bot into an 8-second video; this feature is powered by Veo and available for a limited number of creations. You can read more about the technical building of the Androidify app here. Try it out for yourself – we can’t wait to see your inner Android!

    Watch the Summer episode of The Android Show

    Thank you for tuning into this quarter’s episode. We’re excited to continue building great things together, and this show is an important part of our conversation with you. We’d love to hear your ideas for our next episode, so please reach out on X or LinkedIn. A special thanks to my co-hosts, Annyce Davis and John Zoeller, for helping us share the latest updates.

Unfold new possibilities with Compose Adaptive Layouts 1.2 beta
Wed, 03 Sep 2025

    Posted by Fahd Imtiaz – Senior Product Manager and Miguel Montemayor – Developer Relations Engineer

    With new form factors like the Pixel 10 Pro Fold joining the Android ecosystem, adaptive app development is essential for creating high-quality user experiences across phones, tablets, and foldables. Users expect your app’s UI to seamlessly adapt to these different sizes and postures.

    To help you build these dynamic experiences more efficiently, we are announcing that the Compose Adaptive Layouts Library 1.2 is officially entering beta. This release provides powerful new tools to create polished, responsive UIs for this expanding device ecosystem.

    Powerful new tools for a bigger canvas

    The Compose Adaptive Layouts library is our foundational toolkit for building UIs that adapt across different window sizes. This new beta release is packed with powerful features to help you create sophisticated layouts with less code. Key additions include:

      • New Window Size Classes: The release adds built-in support for the new Large and Extra-Large window size classes. These new breakpoints are essential for designing and triggering rich, multi-pane UI changes on expansive screens like tablets and large foldables.


    Two new pane adaptation strategies: reflow (left) and levitate (right)
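Checking for the new breakpoints can be sketched as follows; the WIDTH_DP_*_LOWER_BOUND constant names and the placeholder pane composables should be verified against the release docs:

```kotlin
@Composable
fun PaneLayout() {
    val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
    when {
        // Extra-Large windows: room for three panes.
        sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_EXTRA_LARGE_LOWER_BOUND) ->
            ThreePaneContent()
        // Large windows: two panes plus a navigation rail.
        sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND) ->
            TwoPaneWithRailContent()
        else -> SinglePaneContent()
    }
}
```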

    For a full list of changes, check out the official release documentation. Explore our guides on canonical layouts and building a supporting pane layout.

    Engage more users on every screen

    Embracing an adaptive mindset is more than a best practice; it’s a strategy for growth. The goal isn’t just to make your app work on a larger screen, but to make it shine by becoming more intuitive for users. Instead of simply stretching a single-column layout, think about how you can use the extra space to create more efficient and immersive experiences.

    This is the core principle behind dynamic layout strategies like reflow, a powerful new feature in the Compose Adaptive Layouts 1.2 beta designed to help you build these UIs. For example, a great starting point is adopting a multi-pane layout. By showing a list and its corresponding detail view side-by-side, you reduce taps and allow users to accomplish tasks more quickly.
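A list-detail pair like the one described can be sketched with the library’s ListDetailPaneScaffold; ItemList and ItemDetail are placeholder composables, and navigateTo is a suspend function in recent adaptive releases, so treat this as the shape rather than exact code:

```kotlin
@Composable
fun ListDetail() {
    val scope = rememberCoroutineScope()
    val navigator = rememberListDetailPaneScaffoldNavigator<String>()
    ListDetailPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        listPane = {
            AnimatedPane {
                // On wide windows selecting an item reveals the detail pane
                // alongside the list; on compact windows it navigates forward.
                ItemList(onItemClick = { id ->
                    scope.launch { navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, id) }
                })
            }
        },
        detailPane = {
            AnimatedPane { ItemDetail(id = navigator.currentDestination?.contentKey) }
        },
    )
}
```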

    This kind of thoughtful adaptive development is what truly boosts engagement. And, as we highlighted during the latest episode of #TheAndroidShow, this is why we see that users who use an app on both their phone and a larger screen are almost three times more engaged. Building adaptively doesn’t just make your current users happier; it creates a more valuable and compelling experience that builds lasting loyalty and helps you reach new users.

    The expanding Android ecosystem, from foldables to desktops

    This shift toward adaptive design extends across the entire Android ecosystem. From the new Pixel 10 Pro Fold to the latest Samsung Galaxy foldables, developers have the opportunity to engage a large and growing user base on over 500 million large-screen devices.

    This is also why we’re continuing to invest in forward-looking experiences like Connected Displays, currently available to try in developer preview. This feature opens up new surfaces and interaction models for apps to run on, enabling true desktop-class features and multi-instance workflows. We’ve previously shared details on how you can get started with the Connected Displays developer preview and see how it’s shaping the future of multi-device experiences.

    Putting adaptive principles into practice

    For developers who want to get their apps ready for this adaptive future, here are a few key best practices to keep in mind:

      • Take inventory: The first step is to see where you are today. Test your app on a large screen device or with the resizable emulator in Android Studio to identify areas for improvement, like stretched UIs or usability issues.
      • Think beyond touch: A great adaptive experience means supporting all input methods. This goes beyond basic functionality to include thoughtful details that users expect, like hover states for mouse cursors, context menus on right-click, and support for keyboard shortcuts.

    Your app’s potential is no longer confined to a single screen. Explore the large screen design gallery and app quality guidelines today to envision where your app can go. Get inspired and find design patterns, official guidance, and sample apps you need to build for every fold, flip, and screen at developer.android.com/adaptive-apps.

Tune in on September 3: recapping the latest from Made by Google and more in our summer episode of The Android Show
Tue, 02 Sep 2025

    Posted by Christopher Katsaros – Senior Product Marketing Manager

    In just a few days, on Wednesday September 3 at 11AM PT, we’ll be dropping our summer episode of #TheAndroidShow, on YouTube and on developer.android.com! In this quarterly show, we’ll be unpacking all of the goodies coming out of this month’s Made by Google event and what you as Android developers need to know!

    With the new Pixel Watch 4 running Wear OS 6, we’ll show you how to get building for the wrist. And with the latest foldable from Google, the Pixel 10 Pro Fold, we’ll show how you can leverage out of the box APIs and multi-window experiences to make your apps adaptive for this new form factor. Plus, we’ll be unpacking a set of new features for Gemini in Android Studio to help you be even more productive.

    #TheAndroidShow is your conversation with the Android developer community, this time hosted by Annyce Davis and John Zoeller. You’ll hear the latest from the developers and engineers who build Android. Don’t forget to tune in live on September 3, on YouTube and on developer.android.com/events/show!

The Android Show: I/O Edition – what Android devs need to know!
Thu, 05 Jun 2025

    Posted by Matthew McCullough – Vice President, Product Management, Android Developer

    We just dropped an I/O Edition of The Android Show, where we unpacked exciting new experiences coming to the Android ecosystem: a fresh and dynamic look and feel, smarts across your devices, and enhanced safety and security features. Join Sameer Samat, President of Android Ecosystem, and the Android team to learn about these exciting new developments in the episode below, and read about all of the updates for users.

    Tune into Google I/O next week – including the Developer Keynote as well as the full Android track of sessions – where we’re covering these topics in more detail and how you can get started.

    Start building with Material 3 Expressive

    The world of UX design is constantly evolving, and you deserve the tools to create truly engaging and impactful experiences. That’s why Material Design’s latest evolution, Material 3 Expressive, provides new ways to make your product more engaging, easy to use, and desirable. Learn more and try out Material 3 Expressive: an expansion pack designed to enhance your app’s appeal by harnessing emotional UX. It comes with new components, a motion-physics system, type styles, colors, shapes, and more.

    Material 3 Expressive will be coming to Android 16 later this year; check out the Google I/O talk next week where we’ll dive into this in more detail.

    A fluid design built for your watch’s round display

    Wear OS 6, arriving later this year, brings Material 3 Expressive design to Google’s smartwatch platform. The new design language puts the round watch display at the heart of the experience and is embraced in every component and motion of the system, from buttons to notifications. You’ll be able to try the new visual design and upgrade existing app experiences to a new level. Next week, tune in to the What’s New in Android session to learn more.

    Plus some goodies in Android 16…

    We also unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, you can try out the latest Beta of Android 16.

    A few of the new features in Android 16 that developers should pay attention to: Live Updates, professional media and camera features, desktop windowing for tablets, major accessibility enhancements, and much more.

    Watch the What’s New in Android session and the Live updates talk to learn more.

    Tune in next week to Google I/O

    This was just a preview of some Android-related news, so remember to tune in next week to Google I/O, where we’ll be diving into a range of Android developer topics in a lot more detail. You can check out What’s New in Android and the full Android track of sessions to start planning your time.

    We can’t wait to see you next week, whether you’re joining in person or virtually from anywhere around the world!
