What’s new in the Jetpack Compose April ’26 release

Posted by Meghan Mehta, Android Developer Relations Engineer

Today, the Jetpack Compose April ‘26 release is stable. This release contains version 1.11 of core Compose modules (see the full BOM mapping), shared element debug tools, trackpad events, and more. We also have a few experimental APIs that we’d love you to try out and give us feedback on.

To use today’s release, upgrade your Compose BOM version to:

implementation(platform("androidx.compose:compose-bom:2026.04.01"))

Changes in Compose 1.11.0

Coroutine execution in tests

We’re introducing a major update to how Compose handles test timing. Following the opt-in period announced in Compose 1.10, the v2 testing APIs are now the default, and the v1 APIs have been deprecated. The key change is a shift in the default test dispatcher. While the v1 APIs relied on UnconfinedTestDispatcher, which executed coroutines immediately, the v2 APIs use the StandardTestDispatcher. This means that when a coroutine is launched in your tests, it is now queued and does not execute until the virtual clock is advanced.

This better mimics production conditions, effectively flushing out race conditions and making your test suite significantly more robust and less flaky.

To ensure your tests align with standard coroutine behavior and to avoid future compatibility issues, we strongly recommend migrating your test suite. Check out our comprehensive migration guide for API mappings and common fixes.
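The behavioral shift can be illustrated with plain kotlinx-coroutines-test (this is not the Compose test API itself, just the underlying dispatcher behavior, assuming kotlinx-coroutines-test and kotlin-test on the classpath): under StandardTestDispatcher, a launched coroutine stays queued until the virtual clock advances.

```kotlin
import kotlinx.coroutines.launch
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertFalse
import kotlin.test.assertTrue

class DispatcherBehaviorTest {
    @Test
    fun launchedWork_runsOnlyAfterClockAdvances() = runTest {
        var loaded = false
        launch { loaded = true }

        // With StandardTestDispatcher (the v2 default), the coroutine is
        // queued, not executed eagerly as UnconfinedTestDispatcher would.
        assertFalse(loaded)

        // Advance the virtual clock; queued coroutines now run.
        testScheduler.advanceUntilIdle()
        assertTrue(loaded)
    }
}
```

Tests that implicitly relied on eager execution will surface as failures like the first assertion here, which is exactly the kind of hidden race the v2 APIs are designed to expose.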

Shared element improvements and animation tooling

We’ve also added some handy visual debugging tools for shared elements and Modifier.animatedBounds. You can now see exactly what’s happening under the hood—like target bounds, animation trajectories, and how many matches are found—making it much easier to spot why a transition might not be behaving as expected. To use the new tooling, simply surround your SharedTransitionLayout with the LookaheadAnimationVisualDebugging composable.

LookaheadAnimationVisualDebugging(
    overlayColor = Color(0x4AE91E63),
    isEnabled = true,
    multipleMatchesColor = Color.Green,
    isShowKeylabelEnabled = false,
    unmatchedElementColor = Color.Red,
) {
    SharedTransitionLayout {
        CompositionLocalProvider(
            LocalSharedTransitionScope provides this,
        ) {
            // your content
        }
    }
}

Trackpad events

We’ve revamped Compose support for trackpads, whether built-in laptop trackpads, attachable trackpads for tablets, or external/virtual trackpads. Basic trackpad events are now generally reported as PointerType.Mouse events, aligning mouse and trackpad behavior to better match user expectations. Previously, these trackpad events were interpreted as fake touchscreen fingers of PointerType.Touch, which led to confusing user experiences. For example, clicking and dragging with a trackpad would scroll instead of selecting. With the pointer type corrected in the latest release of Compose, clicking and dragging with a trackpad no longer scrolls.

We also added support for more complicated trackpad gestures as recognized by the platform since API 34, including two finger swipes and pinches. These gestures are automatically recognized by components like Modifier.scrollable and Modifier.transformable to have better behavior with trackpads.
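As a minimal sketch using the standard foundation gesture APIs, a Modifier.transformable surface picks up these platform-recognized pinch gestures with no trackpad-specific code:

```kotlin
import androidx.compose.foundation.gestures.rememberTransformableState
import androidx.compose.foundation.gestures.transformable
import androidx.compose.foundation.layout.Box
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableFloatStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.graphicsLayer

@Composable
fun PinchZoomBox(content: @Composable () -> Unit) {
    var scale by remember { mutableFloatStateOf(1f) }
    // Receives zoom/pan/rotation from touchscreen pinches and, on API 34+,
    // from two-finger trackpad pinches recognized by the platform.
    val state = rememberTransformableState { zoomChange, _, _ ->
        scale *= zoomChange
    }
    Box(
        Modifier
            .graphicsLayer {
                scaleX = scale
                scaleY = scale
            }
            .transformable(state)
    ) {
        content()
    }
}
```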

These changes improve behavior for trackpads across built-in components, with redundant touch slop removed, a more intuitive drag-and-drop starting gesture, double-click and triple-click selection in text fields, and desktop-styled context menus in text fields.

To test trackpad behavior, new testing APIs built around performTrackpadInput let you validate how your app behaves when driven by a trackpad. If you have custom gesture detectors, validate their behavior across input types (touchscreens, mice, trackpads, and styluses), and ensure support for mouse scroll wheels and trackpad gestures.

Before and after: trackpad click-and-drag behavior

Composition host defaults (Compose runtime)

We introduced HostDefaultProvider, LocalHostDefaultProvider, HostDefaultKey, and ViewTreeHostDefaultKey to supply host-level services directly through compose-runtime. This removes the need for libraries to depend on compose-ui for lookups, better supporting Kotlin Multiplatform. To link these values to the composition tree, library authors can use compositionLocalWithHostDefaultOf to create a CompositionLocal that resolves defaults from the host.
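As a rough sketch only (the API names come from the release notes above, but the exact signatures are assumptions), a library could expose a host-backed CompositionLocal along these lines:

```kotlin
// Hypothetical service interface and key; only the API names
// (HostDefaultKey, compositionLocalWithHostDefaultOf) come from the
// release notes, and the shapes shown here are assumptions.
interface Analytics {
    fun log(event: String)
}

object AnalyticsHostKey : HostDefaultKey<Analytics>

// A CompositionLocal that resolves its default from the host when no
// value has been provided explicitly in the composition.
val LocalAnalytics = compositionLocalWithHostDefaultOf(AnalyticsHostKey)
```

The point of the design is that this file needs only compose-runtime, not compose-ui, which is what makes it viable for Kotlin Multiplatform libraries.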

Preview wrappers

Custom previews are a new Android Studio feature that allows you to define exactly how the contents of a Compose preview are displayed.

By implementing the PreviewWrapper interface and applying the new @PreviewWrapperProvider annotation, you can easily inject custom logic, such as applying a specific theme. The annotation can be applied to a function annotated with @Composable and @Preview or @MultiPreview, offering a generic, easy-to-use solution that works across preview features and significantly reduces repetitive code.

class ThemeWrapper : PreviewWrapper {
    @Composable
    override fun Wrap(content: @Composable () -> Unit) {
        JetsnackTheme {
            content()
        }
    }
}

@PreviewWrapperProvider(ThemeWrapper::class)
@Preview
@Composable
private fun ButtonPreview() {
    // JetsnackTheme in effect
    Button(onClick = {}) {
        Text(text = "Demo")
    }
}

Deprecations and removals

  • As announced in the Compose 1.10 blog post, we’re deprecating Modifier.onFirstVisible(). Its name often led to misconceptions, particularly within lazy layouts, where it would trigger multiple times during scrolling. We recommend migrating to Modifier.onVisibilityChanged(), which allows for more precise manual tracking of visibility states tailored to your specific use case requirements.
  • The ComposeFoundationFlags.isTextFieldDpadNavigationEnabled flag was removed because D-pad navigation for TextFields is now always enabled by default. The new behavior ensures that the D-pad events from a gamepad or a TV remote first move the cursor in the given direction. The focus can move to another element only when the cursor reaches the end of the text.
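A minimal migration sketch for the first deprecation above (the exact onVisibilityChanged parameters, such as visibility thresholds, are assumptions; check the API reference, and logImpression is a hypothetical helper):

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier

@Composable
fun ImpressionTracker(modifier: Modifier = Modifier) {
    var impressionLogged by remember { mutableStateOf(false) }
    Box(
        modifier.onVisibilityChanged { visible ->
            // Decide explicitly what counts as the "first" appearance,
            // instead of relying on onFirstVisible's implicit behavior.
            if (visible && !impressionLogged) {
                impressionLogged = true
                logImpression() // hypothetical analytics helper
            }
        }
    )
}
```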

Upcoming APIs

In the upcoming Compose 1.12.0 release, compileSdk will be upgraded to 37 (with AGP 9), and all apps and libraries that depend on Compose will inherit this requirement. We recommend keeping up to date with the latest released versions, as Compose aims to promptly adopt new compileSdk levels to provide access to the latest Android features. Be sure to check out the documentation here for more information on which version of AGP is supported for different API levels.

In Compose 1.11.0, the following APIs are introduced as @Experimental, and we look forward to hearing your feedback as you explore them in your apps. Note that @Experimental APIs are provided for early evaluation and feedback and may undergo significant changes or removal in future releases.

Styles (Experimental)

We are introducing a new experimental foundation API for styling. The Style API is a new paradigm for customizing visual elements of components, which has traditionally been performed with modifiers. It is designed to unlock deeper, easier customization by exposing a standard set of styleable properties with simple state-based styling and animated transitions. With this new API, we’re already seeing promising performance benefits. We plan to adopt Styles in Material components once the Style API stabilizes.

A basic example of overriding a pressed state style background:

@Composable
fun LoginButton(modifier: Modifier = Modifier) {
    Button(
        onClick = {
            // Login logic
        },
        modifier = modifier,
        style = {
            background(
                Brush.linearGradient(
                    listOf(lightPurple, lightBlue)
                )
            )
            width(75.dp)
            height(50.dp)
            textAlign(TextAlign.Center)
            externalPadding(16.dp)

            pressed {
                background(
                    Brush.linearGradient(
                        listOf(Color.Magenta, Color.Red)
                    )
                )
            }
        }
    ) {
        Text(
            text = "Login",
        )
    }
}

Check out the documentation and file any bugs here.

MediaQuery (Experimental)

The new mediaQuery API provides a declarative and performant way to adapt your UI to its environment. It abstracts complex information retrieval into simple conditions within a UiMediaScope, ensuring recomposition only happens when needed.

With support for a wide range of environmental signals—from device capabilities like keyboard types and pointer precision, to contextual states like window size and posture—you can build deeply responsive experiences. Performance is baked in with derivedMediaQuery to handle high-frequency updates, while the ability to override scopes makes testing and previews seamless across hardware configurations.

Previously, to get access to certain device properties — like whether a device was in tabletop mode — you’d need to write a lot of boilerplate:

@Composable
fun isTabletopPosture(
    context: Context = LocalContext.current
): Boolean {
    // The state starts as null until the first WindowLayoutInfo arrives,
    // so the read below must be null-safe.
    val windowLayoutInfo by
        WindowInfoTracker
            .getOrCreate(context)
            .windowLayoutInfo(context)
            .collectAsStateWithLifecycle(initialValue = null)

    return windowLayoutInfo?.displayFeatures.orEmpty().any { displayFeature ->
        displayFeature is FoldingFeature &&
            displayFeature.state == FoldingFeature.State.HALF_OPENED &&
            displayFeature.orientation == FoldingFeature.Orientation.HORIZONTAL
    }
}

@Composable
fun VideoPlayer() {
    if (isTabletopPosture()) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}

Now, with the mediaQuery API, you can query device properties declaratively, such as whether a device is in tabletop mode:

@OptIn(ExperimentalMediaQueryApi::class)
@Composable
fun VideoPlayer() {
    if (mediaQuery { windowPosture == UiMediaScope.Posture.Tabletop }) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}

Check out the documentation and file any bugs here.

Grid (Experimental)

Grid is a powerful new API for building complex, two-dimensional layouts in Jetpack Compose. While Row and Column are great for linear designs, Grid gives you the structural control needed for screen-level architecture and intricate components without the overhead of a scrollable list.

Grid allows you to define your layout using tracks, gaps, and cells, offering familiar sizing options like Dp, percentages, intrinsic content sizes, and flexible “Fr” units.

@OptIn(ExperimentalGridApi::class)
@Composable
fun GridExample() {
    Grid(
        config = {
            repeat(4) { column(0.25f) }
            repeat(2) { row(0.5f) }
            gap(16.dp)
        }
    ) {
        Card1(modifier = Modifier.gridItem(rowSpan = 2))
        Card2(modifier = Modifier.gridItem(columnSpan = 3))
        Card3(modifier = Modifier.gridItem(columnSpan = 2))
        Card4()
    }
}


You can place items automatically or explicitly span them across multiple rows and columns for precision. Best of all, it’s highly adaptive—you can dynamically reconfigure your grid tracks and spans to respond to device states like tabletop mode or orientation changes, ensuring your UI looks great across form factors.

Check out the documentation and file any bugs here.

FlexBox (Experimental)

FlexBox is a layout container designed for high performance, adaptive UIs. It manages item sizing and space distribution based on available container dimensions. It handles complex tasks like wrapping (wrap) and multi-axis alignment of items (justifyContent, alignItems, alignContent). It allows items to grow (grow) or shrink (shrink) to fill the container.

@OptIn(ExperimentalFlexBoxApi::class)
@Composable
fun FlexBoxWrapping() {
    FlexBox(
        config = {
            wrap(FlexWrap.Wrap)
            gap(8.dp)
        }
    ) {
        RedRoundedBox()
        BlueRoundedBox()
        GreenRoundedBox(modifier = Modifier.width(350.dp).flex { grow(1.0f) })
        OrangeRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.7f) })
        PinkRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.3f) })
    }
}

Check out the documentation and file any bugs here.

New SlotTable implementation (Experimental)

We’ve introduced a new implementation of the SlotTable, which is disabled by default in this release. The SlotTable is the internal data structure the Compose runtime uses to track the state of your composition hierarchy, invalidations and recompositions, remembered values, and other composition metadata at runtime. This new implementation is designed to improve performance, primarily around random edits.

To try the new SlotTable, enable ComposeRuntimeFlags.isLinkBufferComposerEnabled.
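Assuming the flag behaves like other global Compose runtime flags (a mutable top-level property, which is an assumption here), you would flip it before any composition is created, for example in your Application class:

```kotlin
import android.app.Application

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Opt in to the experimental SlotTable implementation before the
        // first composition is created (flag name from the release notes).
        ComposeRuntimeFlags.isLinkBufferComposerEnabled = true
    }
}
```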

Start coding today!

With so many exciting new APIs in Jetpack Compose, and many more on the way, there’s never been a better time to migrate to Jetpack Compose. As always, we value your feedback and feature requests (especially on @Experimental features that are still baking) — please file them here. Happy composing!

Streamline User Journeys with Verified Email via Credential Manager

Posted by Niharika Arora, Senior Developer Relations Engineer and Jean-Pierre Pralle, Product Manager, Credential Manager


In the modern digital landscape, the first encounter a user has with an app is often the most critical. Yet, for decades, this initial interaction has been hindered by the friction of traditional verification methods. Today, we’re excited to announce a new verified email credential issued by Google, which developers can now retrieve directly from Android’s Credential Manager Digital Credential API.

The Problem: Authentication Friction in the Modern Era

The “current era” of authentication is defined by a trade-off between security and convenience. To ensure that a user owns the email address they provide, you typically rely on One-Time Passwords (OTPs) or “magic links” sent by email or SMS.

While effective, these traditional steps introduce significant hurdles:

  • Context switching: Users must leave the app, open their inbox or messaging app, find the code, and return, a process where many potential users simply drop off.
  • Delivery issues: While emails are free, they can be delayed or diverted to spam folders.
  • Onboarding friction: Every extra second spent in the “verification loop” is a second where a user might lose interest, directly impacting conversion rates.

The Solution: Seamless, Verified Email

Google now issues a cryptographically verified email credential directly to Android devices. This verified email credential is delivered through the Credential Manager API, which is Android’s implementation of the W3C’s Digital Credential API standard.

For users, this completely removes the need to manually verify their email through external channels. For developers, the API securely delivers these verified user claims for any scenario, whether you are building an account creation flow, a recovery process, or a high-risk step-up authentication flow.

While this specific verified email address is sourced securely from the user’s Google Account on their device, the underlying Digital Credentials API is issuer-agnostic. This fosters an open ecosystem, allowing any holder of a digital credential with an email claim to offer that verification to your app.

User Experience

The beauty of this API lies in its simplicity for the end user. Instead of hunting for OTP codes, the experience is integrated directly into the Android OS:

  • Initiation: The process begins when a user focuses on an email input field or taps a “Sign up” or “Recover account” button. You can also initiate the process on page load.
  • Transparency: A native Android bottom sheet appears, clearly detailing exactly what data is being requested (for example, the user’s verified email address).
  • One-tap consent: The user simply taps “Agree and continue” to share the data.
  • Immediate progress: Once consent is given, the app receives the data instantly. For sign-up or account recovery flows, you can then seamlessly transition the user into passkey creation, ensuring:
    • Users do not have to enter any user information manually, as compared to the traditional username/password registration.
    • Their next login is even faster and more secure.

Use case 1. Sign up

Accelerate onboarding by fetching a verified email the moment the user taps “Sign up”. We strongly recommend pairing the verified email retrieval with passkey creation, also part of the Credential Manager API.

Note: You can also fetch other unverified fields such as a user’s given name, family name, full name, profile picture, and the hosted domain connected with the verified email.

Use case 2. Account recovery

Eliminate the frustration of users hunting for recovery codes in their spam folders by allowing them to recover their account using the verified email securely stored on their device.

Use case 3. Re-authentication for sensitive actions

Protect sensitive user actions, such as changing settings or updating profile details, by requiring a quick re-authentication step. Instead of an OTP, you can provide a low-friction verification using the device’s verified email.
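As a hedged sketch of what retrieval can look like with Jetpack’s Credential Manager (the exact request JSON for the verified email credential is an assumption; follow the Integration Guide for the real protocol payload):

```kotlin
import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.DigitalCredential
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetDigitalCredentialOption

suspend fun requestVerifiedEmail(context: Context, requestJson: String): String? {
    val credentialManager = CredentialManager.create(context)
    val request = GetCredentialRequest(
        listOf(GetDigitalCredentialOption(requestJson = requestJson))
    )
    // Shows the native consent bottom sheet; throws GetCredentialException
    // if the user cancels or no matching credential is available.
    val result = credentialManager.getCredential(context, request)
    // The response payload is JSON containing the verified claims.
    return (result.credential as? DigitalCredential)?.credentialJson
}
```

The same call works for all three use cases above; only the request payload and where you trigger it in your flow differ.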

Important Considerations

As you design your authentication architecture around the Digital Credentials API, keep the following details in mind:

  • Account support: For the specific email credential issued by Google, only regular consumer Google Accounts are supported (Workspace and supervised accounts are currently not supported). Keep in mind that the Credential Manager API itself is issuer-agnostic, meaning other identity providers can issue credentials with their own account support policies.
  • Other user data: Beyond email, you can request the user’s given name, family name, full name, and profile picture. However, note that only the email is verified by Google.
  • Auto verify your @gmail accounts: The API provides verified emails for all consumer Google Accounts. We recommend auto-verifying @gmail.com users and routing custom domains to your existing verification flow – for example, an OTP flow. This ensures you maintain long-term access for external domains not directly managed by Google.
  • Complementary to Sign in with Google: While both the new verified email credential and the Sign in with Google API provide a verified email, the choice depends on the intended user experience:
    • Use Sign in with Google when your users want to create a federated login session.
    • Use Verified Email when your users want to sign in traditionally with a username/password or passkey but want to auto-verify the email address without the manual chore of an OTP.
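The auto-verification routing recommended above can be sketched in plain Kotlin (the enum and function names are hypothetical):

```kotlin
enum class VerificationRoute { AUTO_VERIFIED, OTP_FLOW }

// Trust the Google-issued claim for @gmail.com addresses; route custom
// domains to your existing verification flow (for example, OTP).
fun routeForVerifiedEmail(email: String): VerificationRoute =
    if (email.endsWith("@gmail.com", ignoreCase = true)) {
        VerificationRoute.AUTO_VERIFIED
    } else {
        VerificationRoute.OTP_FLOW
    }
```

Keeping custom domains on your own flow ensures you retain long-term verification coverage for addresses not managed by Google.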

Conclusion and Next steps

By integrating the new verified email via Credential Manager API, you can drastically reduce onboarding friction and provide users with a more streamlined, secure authentication journey. This represents a shift toward a future where “verification” is no longer a manual chore for the user, but a seamless, integrated part of the native mobile experience.

Ready to see how this fits into your own app? To get started, update your project to the latest Credential Manager API and explore our Integration Guide. We encourage you to explore how this streamlined verification can simplify your critical user journeys, from optimizing account creation to enhancing re-authentication flows.

Level up your development with Planning Mode and Next Edit Prediction in Android Studio Panda 4

Posted by Matt Dyor, Senior Product Manager


Android Studio Panda 4 is now stable and ready for you to use in production. This release brings Planning Mode, Next Edit Prediction, and more, making it easier than ever to build high-quality Android apps.

Here’s a deep dive into what’s new:

Planning Mode

Before the Agent starts working on complex tasks for you, it would be helpful if it could come up with a detailed plan. Jumping straight into a large coding project without a design often leads to technical debt or logic errors; the same is true for AI. That’s why we’re adding Planning Mode.

In this mode, the Agent comes up with a detailed project plan before executing tasks. Instead of a single pass where the model directly predicts the next token of code, Planning Mode facilitates a multi-stage reasoning process — giving the agent additional space to evaluate its own proposed logic for potential issues before presenting it to you. This is especially useful for complex and long-running tasks which demand a high degree of architectural precision.

To use Planning Mode, switch your conversation mode to “Planning” in the agent input box and enter your prompt.

Switch to Planning Mode

In Planning Mode, the agent examines your request and may generate an implementation plan for large or complex tasks. You have the opportunity to fix mistakes or clarify which approaches to use—all before the agent has spent any time or tokens heading in the wrong direction.

Open Implementation Plan

Add Comments to Implementation Plan

After adding comments, simply hit “Submit Comments” and the agent will use your feedback to revise the implementation plan. To stay on track during execution—which is particularly important with larger changes—the agent organizes its work and generates a “Task List” artifact. You can sit back and watch as the agent methodically completes all of the tasks.

Task List Artifact

After the work is done, the agent produces a “Walkthrough” artifact, giving you a clear summary of exactly what has been changed—making it easy to review the agent’s changes. Build with more confidence and control using Planning Mode in the latest release of Android Studio.


Next Edit Prediction

Classic autocomplete is great for finishing your sentences, but coding is rarely a linear path. Often, a change in one place requires a secondary change elsewhere—like adding a new parameter to a function and then needing to update its invocations, or a UI preview update when a Composable is changed. Traditionally, this meant breaking your focus to hunt down the related lines of code that need attention.

Next Edit Prediction (NEP) evolves code completion by anticipating your next move, even when it’s not at your current cursor position. By analyzing your recent edits, Android Studio recognizes the logical pattern of your workflow. If you modify a data class or update a constructor, NEP can suggest the next relevant edit—perhaps in a distant function—allowing you to jump straight to the fix.

Instead of manually navigating back and forth, you can accept these multi-location suggestions with a single keystroke. This keeps you deep in the “flow state,” reducing the cognitive load of routine updates and letting you focus on the complex logic that truly matters to your application. Experience a more intuitive, non-linear way to code in the latest version of Android Studio.

 

NEP Updating Function Name

NEP Adding New Line

Gemini API Starter Template

Adding powerful AI features to your app just got easier with the new Gemini API Starter template for Android Studio!

Integrating generative AI into your Android application used to mean managing complex backend plumbing and worrying about API key security. With the new Gemini API Starter template in Android Studio, developers can now jump straight into building features rather than spending time configuring infrastructure.

Key benefits include:

  • Zero API key management: Stop worrying about provisioning or rotating keys. By leveraging Firebase AI Logic, the template eliminates the need to embed sensitive credentials in your client-side code.
  • Automated Firebase integration: The backend plumbing is handled for you. The template automatically connects your project to Firebase services, ensuring a secure bridge between your app and Google’s Gemini models.
  • Built to scale: This isn’t just for prototypes. The production-ready architecture allows you to scale from a local test to a global user base without redesigning your foundation.
  • Multimodal processing: Supports text, image, video, and audio inputs. You can build features like real-time image analysis, video summarization, and audio transcription.

Get Started

  1. Open Android Studio.
  2. Navigate to File > New > New Project.
  3. Select the Gemini API Starter template from the gallery.
Gemini API Starter new project template

Agent Web Search

When you’re deep in development, the right answer is often just a search away—but leaving your IDE to find it can snap you out of your flow. Whether you need the exact version number for a dependency or the latest API changes for a third-party library, the Agent Web Search tool is here to help without you ever having to leave Android Studio.

While Android Studio’s agent already leverages the Android Knowledge Base for official documentation, modern Android development relies on a vast ecosystem of external libraries. Agent Web Search expands Gemini’s reach, allowing it to query Google directly to fetch current reference material from across the web. From checking the latest setup guides for Coil to finding advanced configuration tips for Koin or Moshi, the agent can now pull in the most up-to-date information in real time.

The Agent Web Search tool is designed to be helpful but unobtrusive; it will automatically trigger a web search when it identifies a gap in its local knowledge. You can also take the wheel by asking it to find something specific—simply include “search the web for…” in your prompt. By integrating live web results directly into your workspace, Agent Web Search ensures you’re always building with the most current data available, speeding up your workflow and keeping your project on the cutting edge.

 

Agent Web Search Tool Invocation

Android Studio Panda Releases

Panda 4 continues Android Studio’s focus on accelerating developer productivity with AI.
Check out Go from prompt to working prototype with Android Studio Panda 2
and Increase Guidance and Control over Agent Mode with Android Studio Panda 3.


Android Studio Panda 2

  • AI-powered New Project flow: Allows you to build a working app prototype with a single prompt. The agent manages initial setup, navigation configuration, and proper dependencies, and features an autonomous generation loop to handle build errors and deploy to an emulator.
  • Version Upgrade Assistant: Automates dependency management and updates, iteratively attempting builds and resolving conflicts until a stable configuration is found.


Android Studio Panda 3

  • Agent skills: Specialized, user-defined instructions (stored in a .skills directory) that teach the AI agent project-specific capabilities, coding standards, or library usage.
  • Agent permissions: Provides fine-grained control over what agents can do, with features like “Always Allow” rules for trusted operations. For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.
  • Empty Car App Library App template: Simplifies building driving-optimized apps for Android Auto and Android Automotive OS by handling required boilerplate code.

Get started

Dive in and accelerate your development. Download Android Studio Panda 4 and start exploring these powerful new agentic features today.
As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!

Experimental hybrid inference and new Gemini models for Android

Posted by Thomas Ezan, Senior Developer Relations Engineer



If you are an Android developer looking to implement innovative AI features in your app, we recently launched powerful new updates: hybrid inference, a new API for Firebase AI Logic that leverages both on-device and cloud inference, and support for new Gemini models, including the latest Nano Banana models for image generation.

Let’s jump in!

Experiment with hybrid inference

With the new Firebase API for hybrid inference, we implemented a simple rule-based routing approach as an initial solution to let you use both on-device and cloud inference via a unified API. We are planning on providing more sophisticated routing capabilities in the future.

It allows your app to dynamically switch between Gemini Nano running locally on the device and cloud-hosted Gemini models. The on-device execution uses ML Kit’s Prompt API. The cloud inference supports all the Gemini models from Firebase AI Logic in both Vertex AI and the Developer API.

To use it, add the firebase-ai-ondevice dependencies to your app along with Firebase AI Logic:

dependencies {
 [...] 
 implementation("com.google.firebase:firebase-ai:17.11.0")
 implementation("com.google.firebase:firebase-ai-ondevice:16.0.0-beta01")
}

During initialization, you create a GenerativeModel instance and configure it with specific inference modes, such as PREFER_ON_DEVICE (falls back to cloud if Gemini Nano is not available on the device) or PREFER_IN_CLOUD (falls back to on-device inference if offline):

val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-3.1-flash-lite",
        onDeviceConfig = OnDeviceConfig(
           mode = InferenceMode.PREFER_ON_DEVICE
        )
    )

val response = model.generateContent(prompt)

The Firebase API for hybrid inference for Android is still experimental, and we encourage you to try it in your app, especially if you are already using Firebase AI Logic. Currently, on-device models are specialized for single-turn text generation based on text or single Bitmap image inputs. Review the limitations for more details.
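The routing rule behind the two modes is simple to reason about. The sketch below models it in plain Java as an illustration, not the Firebase SDK: only the `PREFER_ON_DEVICE` and `PREFER_IN_CLOUD` names mirror the documented API; the `Target` enum and `route` method are hypothetical.

```java
public class HybridRouting {
    // Mirrors the two documented InferenceMode values; illustrative only.
    public enum InferenceMode { PREFER_ON_DEVICE, PREFER_IN_CLOUD }
    public enum Target { ON_DEVICE, CLOUD }

    // deviceModelAvailable: Gemini Nano is present on the device.
    // online: the cloud backend is reachable.
    public static Target route(InferenceMode mode,
                               boolean deviceModelAvailable,
                               boolean online) {
        switch (mode) {
            case PREFER_ON_DEVICE:
                // Fall back to the cloud only if Gemini Nano is unavailable.
                return deviceModelAvailable ? Target.ON_DEVICE : Target.CLOUD;
            case PREFER_IN_CLOUD:
                // Fall back to on-device inference only when offline.
                return online ? Target.CLOUD : Target.ON_DEVICE;
            default:
                throw new IllegalArgumentException("unknown mode");
        }
    }
}
```

The actual SDK applies this rule for you behind the unified `generateContent` call; the sketch just makes the documented fallback semantics explicit.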

We just published a new sample in the AI Sample Catalog leveraging the Firebase API for hybrid inference. It demonstrates how to generate a review based on a few selected topics and then translate it into various languages. Check out the code to see it in action!

The new hybrid inference sample in action

Try our new models

As part of the new Gemini models, we’ve released two models that are particularly helpful to Android developers and easy to integrate into your application via the Firebase AI Logic SDK.

Nano Banana

Last year we released Nano Banana, a state-of-the-art image generation model. And a few weeks ago, we released a couple of new Nano Banana models.

Nano Banana Pro (Gemini 3 Pro Image) is designed for professional asset production and can render high-fidelity text, even in a specific font or simulating different types of handwriting.

Nano Banana 2 (Gemini 3.1 Flash Image) is the high-efficiency counterpart to Nano Banana Pro. It’s optimized for speed and high-volume use cases. It can be used for a broad range of use cases (infographics, virtual stickers, contextual illustrations, etc.).

The new Nano Banana models leverage real-world knowledge and deep reasoning capabilities to generate precise and detailed images.

We updated our Magic Selfie sample (use image generation to change the background of your selfie!) to use Nano Banana 2. The background segmentation is now handled directly by the image generation model, which makes the implementation easier and lets Nano Banana 2’s improved image generation capabilities shine. See it in action here.

The updated Magic Selfie sample uses Nano Banana 2 to update a selfie background

You can use it via Firebase AI Logic SDK. Read more about it in the Android documentation.

Gemini 3.1 Flash-Lite

We also released Gemini 3.1 Flash-Lite, a new version of the Gemini Flash-Lite family. The Gemini Flash-Lite models have been particularly favored by Android developers for their good quality/latency ratio and low inference cost. They have been used by Android developers for various use cases, such as in-app messaging translation or generating a recipe from a picture of a dish.

Gemini 3.1 Flash-Lite, currently in preview, will enable more advanced use cases with latency comparable to Gemini 2.5 Flash-Lite. To learn more about this model, review the Firebase documentation.

Conclusion

It’s a great time to explore the new Hybrid sample in our catalog to see these capabilities in action and understand the benefits of routing between on-device and cloud inference. We also encourage you to check out our documentation to test the new Gemini models.

The Fourth Beta of Android 17 https://theinshotproapk.com/the-fourth-beta-of-android-17/ Thu, 16 Apr 2026 20:00:00 +0000 https://theinshotproapk.com/the-fourth-beta-of-android-17/ Posted by Dan Galpin, Developer Relations Engineer Android 17 has reached beta 4, the last scheduled beta of this release ...

Posted by Dan Galpin, Developer Relations Engineer


Android 17 has reached beta 4, the last scheduled beta of this release cycle, a critical milestone for app compatibility and platform stability. Whether you’re fine-tuning your app’s user experience, ensuring smooth edge-to-edge rendering, or leveraging the newest APIs, Beta 4 provides the near-final environment you need to be testing with.

Get your apps, libraries, tools, and game engines ready!

If you develop an Android SDK, library, tool, or game engine, it’s critical to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17.

Testing involves installing your production app, or a test app that uses your library or engine, onto a device or emulator running Android 17 Beta 4 via Google Play or other means. Work through all your app’s flows and look for functional or UI issues. Each release of Android contains platform changes that improve privacy, security, and the overall user experience; review the app-impacting behavior changes for apps running on and targeting Android 17 to focus your testing, including the following:

  • Resizability on large screens: Once you target Android 17, you can no longer opt out of maintaining orientation, resizability and aspect ratio constraints on large screens.
  • Dynamic code loading: If your app targets Android 17 or higher, the Safer Dynamic Code Loading (DCL) protection introduced in Android 14 for DEX and JAR files now extends to native libraries. All native files loaded using System.load() must be marked as read-only. Otherwise, the system throws UnsatisfiedLinkError.
  • Enable CT by default: Certificate transparency (CT) is enabled by default. (On Android 16, CT is available but apps had to opt in.)
  • Local network protections: Apps targeting Android 17 or higher have local network access blocked by default. Switch to using privacy preserving pickers if possible, and use the new ACCESS_LOCAL_NETWORK permission for broad, persistent access.
  • Background audio hardening: Starting in Android 17, the audio framework enforces restrictions on background audio interactions, including audio playback, audio focus requests, and volume change APIs. Based on your feedback, we’ve made some changes since Beta 2, including gating while-in-use FGS enforcement on target SDK and exempting alarm audio. Full details are available in the updated guidance.
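The dynamic code loading change above is straightforward to satisfy in code: mark the native file read-only before handing it to System.load(). A minimal sketch, using only java.io (the class and method names here are hypothetical, and the actual System.load() call is omitted so the snippet stays self-contained):

```java
import java.io.File;
import java.io.IOException;

public class SaferDcl {
    // Marks a dynamically delivered native library read-only, as required
    // before System.load() for apps targeting Android 17+. Returns the file
    // so the call can be chained into System.load(lib.getAbsolutePath()).
    public static File prepareNativeLib(File lib) throws IOException {
        if (!lib.isFile()) {
            throw new IOException("native library not found: " + lib);
        }
        // Clear the write bits; a writable file would make System.load()
        // throw UnsatisfiedLinkError on Android 17.
        if (!lib.setReadOnly()) {
            throw new IOException("could not mark read-only: " + lib);
        }
        return lib;
    }
}
```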

App memory limits

Android is introducing app memory limits based on the device’s total RAM to create a more stable and deterministic environment for your applications and Android users. In Android 17, limits are set conservatively to establish system baselines, targeting extreme memory leaks and other outliers before they trigger system-wide instability resulting in UI stuttering, higher battery drain, and apps being killed. While we anticipate minimal impact on the vast majority of app sessions, we recommend the following memory best practices, including establishing a baseline for memory.

In the current implementation, getDescription in ApplicationExitInfo will contain the string “MemoryLimiter” if your app was impacted. You can also use trigger-based profiling with TRIGGER_TYPE_ANOMALY to get heap dumps that are collected when the memory limit is hit.
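To audit whether past sessions were affected, you can scan the exit descriptions for that marker. A sketch of the string check in plain Java (the class and helper names are hypothetical; fetching the real ApplicationExitInfo records is Android-only and omitted here):

```java
import java.util.ArrayList;
import java.util.List;

public class MemoryLimitAudit {
    // The marker Android places in ApplicationExitInfo.getDescription()
    // when the app was terminated by the memory limiter.
    static final String MARKER = "MemoryLimiter";

    // Returns only the exit descriptions attributable to the memory limiter.
    public static List<String> memoryLimiterExits(List<String> descriptions) {
        List<String> hits = new ArrayList<>();
        for (String d : descriptions) {
            if (d != null && d.contains(MARKER)) {
                hits.add(d);
            }
        }
        return hits;
    }
}
```

On-device, you would feed this the descriptions from ActivityManager's historical process exit reasons and, for any hits, trigger an upload of the matching heap dump.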

The LeakCanary task in the Android Studio Profiler

To help you find memory leaks, Android Studio Panda adds LeakCanary integration directly in the Android Studio Profiler as a dedicated task, contextualized within the IDE and fully integrated with your source code.

A lighter memory footprint translates directly to smoother performance, longer battery life, and a premium experience across all form factors. Let’s build a faster, more reliable future for the Android ecosystem together!

Profiling triggers for app anomalies

Android introduces an on-device anomaly detection service that monitors for resource-intensive behaviors and potential compatibility regressions. Integrated with ProfilingManager, this service allows your app to receive profiling artifacts triggered by specific system-detected events.

Use the TRIGGER_TYPE_ANOMALY trigger to detect system performance issues such as excessive binder calls and excessive memory usage. When an app breaches OS-defined memory limits, the anomaly trigger allows developers to receive app-specific heap dumps to help identify and fix memory issues. Additionally, for excessive binder spam, the anomaly trigger provides a stack sampling profile on binder transactions.

This API callback occurs prior to any system-imposed enforcement. For example, it can help developers collect debug data before the app is terminated by the system due to exceeding memory limits. To understand how to use the trigger, check out our documentation on trigger-based profiling.

val profilingManager = applicationContext.getSystemService(ProfilingManager::class.java)
val triggers = ArrayList<ProfilingTrigger>()
triggers.add(
    ProfilingTrigger.Builder(ProfilingTrigger.TRIGGER_TYPE_ANOMALY)
        .build()
)
val mainExecutor: Executor = Executors.newSingleThreadExecutor()
val resultCallback = Consumer<ProfilingResult> { profilingResult ->
    if (profilingResult.errorCode == ProfilingResult.ERROR_NONE) {
        // Upload the profile result to a server for further analysis.
        setupProfileUploadWorker(profilingResult.resultFilePath)
    }
}
profilingManager.registerForAllProfilingResults(mainExecutor, resultCallback)
profilingManager.addProfilingTriggers(triggers)

Post-Quantum Cryptography (PQC) in Android Keystore

Android Keystore added support for the NIST-standardized ML-DSA (Module-Lattice-Based Digital Signature Algorithm). On supported devices, you can generate ML-DSA keys and use them to produce quantum-safe signatures, entirely in the device’s secure hardware. Android Keystore exposes the ML-DSA-65 and ML-DSA-87 algorithm variants through the standard Java Cryptographic Architecture APIs: KeyPairGenerator, KeyFactory, and Signature. For further details, see our developer documentation.

KeyPairGenerator generator = KeyPairGenerator.getInstance(
        "ML-DSA-65", "AndroidKeyStore");
generator.initialize(
        new KeyGenParameterSpec.Builder(
                "my-key-alias",
                KeyProperties.PURPOSE_SIGN | KeyProperties.PURPOSE_VERIFY)
        .build());
KeyPair keyPair = generator.generateKeyPair();
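The generated key then plugs into the standard Signature flow. The sketch below shows the sign/verify round trip with ordinary EC keys and the default JCA provider so it runs anywhere; on a supported Android 17 device you would keep the "ML-DSA-65" algorithm and "AndroidKeyStore" provider from the snippet above, and the flow is otherwise identical. The class and method names are illustrative.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    // Sign and verify a payload via the standard JCA Signature API.
    // On Android 17 hardware, swap "EC"/"SHA256withECDSA" for the
    // "ML-DSA-65" Keystore key generated above; the call sequence
    // (initSign/update/sign, initVerify/update/verify) is the same.
    public static boolean signAndVerify(byte[] payload) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("EC");
        KeyPair keyPair = generator.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(payload);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(payload);
        return verifier.verify(sig);
    }
}
```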

Get started with Android 17

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 4. Continue to report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you’re set up, here are some of the things you should do:

  • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
  • Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it.

We’ll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas. For complete information, visit the Android 17 developer site.

Join the conversation

Your feedback remains our most valuable asset. Whether you’re an early adopter on the Canary channel or an app developer testing on Beta 4, consider joining our communities and filing feedback. We’re listening.

Android CLI: Build Android apps 3x faster using any agent https://theinshotproapk.com/android-cli-build-android-apps-3x-faster-using-any-agent/ Thu, 16 Apr 2026 17:00:00 +0000 https://theinshotproapk.com/android-cli-build-android-apps-3x-faster-using-any-agent/ Posted by Adarsh Fernando, Group Product Manager and Esteban de la Canal, Senior Staff Software Engineer As Android developers, you ...

Posted by Adarsh Fernando, Group Product Manager and Esteban de la Canal, Senior Staff Software Engineer



As Android developers, you have many choices when it comes to the agents, tools, and LLMs you use for app development. Whether you are using Gemini in Android Studio, Gemini CLI, Antigravity, or third-party agents like Claude Code or Codex, our mission is to ensure that high-quality Android development is possible everywhere.

Today, we are introducing a new suite of Android tools and resources for agentic workflows: Android CLI with Android skills, and the Android Knowledge Base. This collection of tools is designed to eliminate the guesswork from core Android development workflows when you direct an agent’s work outside of Android Studio, making your agents more efficient, effective, and capable of following the latest recommended patterns and best practices.

Whether you are just starting your development journey on Android, are a seasoned Android developer, or are managing apps across mobile and web platforms, building your apps with the latest guidance, tools, and AI assistance is easier than ever. No matter which environment you begin in, you can always transition your development experience to Android Studio—where the state-of-the-art tools and agents for Android development are available to help your app experience truly shine.

(Re)Introducing the Android CLI

Your agents perform best when they have a lightweight, programmatic interface to interact with the Android SDK and development environment. So, at the heart of this new workflow is a revitalized Android CLI. The new Android CLI serves as the primary interface for Android development from the terminal, featuring commands for environment setup, project creation, and device management—with more modern capabilities and easy updatability in mind.

The create command makes an Android app project in seconds.

In our internal experiments, Android CLI improved project and environment setup by reducing LLM token usage by more than 70%, and tasks were completed 3X faster than when agents attempted to navigate these tasks using only the standard toolsets.

Key capabilities available to you include:

  • SDK management: Use android sdk install to download only the specific components needed, ensuring a lean development environment.
  • Snappy project creation: The android create command generates new projects from official templates, ensuring the recommended architecture and best practices are applied from the very first line of code.
  • Rapid device creation and deployment: Create and manage virtual devices with android emulator and deploy apps using android run, eliminating the guesswork involved in manual build and deploy cycles.
  • Updatability: Run android update to ensure that you have the latest capabilities available.

Android CLI can create a device, run your app on it, and make it easier for agents to navigate UI.

While Android CLI will empower your agentic development flows, it’s also been designed to streamline CI, maintenance, and any other scripted automation for the increasingly distributed nature of Android development. Download and try out the Android CLI today!

Grounding LLMs with official Android Skills

Traditional documentation can be descriptive, conceptual, and high-level. While perfect for learning, LLMs often require precise, actionable instructions to execute complex workflows without using outdated patterns and libraries.

To bridge this gap, we are launching the Android skills GitHub repository. Skills are modular, markdown-based (SKILL.md) instruction sets that provide a technical specification for a task and are designed to trigger automatically when your prompt matches the skill’s metadata, saving you the hassle of manually attaching documentation to every prompt.

Android skills cover some of the most common workflows that Android developers and LLMs alike may struggle with—they help models better understand and execute specific patterns that follow our best practices and guidance on Android development.

In our initial release, the repository includes skills like:

  • Navigation 3 setup and migration.
  • Implementing edge-to-edge support.
  • AGP 9 and XML-to-Compose migrations.
  • R8 config analysis, and more!

If you’re using Android CLI, you can browse and set up your agent workflow with our growing collection of skills using the android skills command. These skills can also live alongside any other skills you create, or third-party skills created by the Android developer community. Learn more about getting started with Android skills.

Install Android skills via the Android CLI to make your agent more effective and efficient.

The latest guidance via the Android Knowledge Base

The third component we are launching today is the Android Knowledge Base. Accessible through the android docs command and already available in the latest version of Android Studio, this specialized data source enables agents to search and fetch the latest authoritative developer guidelines to use as relevant context.

The Android Knowledge Base ensures agents have the latest context, guidance, and best practices for Android.

By accessing the frequently updated knowledge base, agents can ground their responses in the most recent information from Android developer docs, Firebase, Google Developers, and Kotlin docs. This ensures that even if an LLM’s training cutoff is a year old, it can still provide guidance on the latest frameworks and patterns we recommend today.

Android Studio: The ultimate destination for premium apps

In addition to empowering developers and agents to handle project setup and boilerplate code, we’ve also designed these new tools and resources to make it easier to transition to Android Studio. That means you can start a prototype quickly with an agent using Android CLI and then open the project in Android Studio to fine-tune your UI with visual tools for code editing, UI design, deep debugging, and advanced profiling that scale with the growing capabilities of your app.

And when it is time to build a high-quality app for large-scale publication across various device types, our agent in Android Studio is here to help, while leveraging the latest development best practices and libraries. Beyond the powerful Agent and Planning Modes for active development, we have introduced an AI-powered New Project flow, which provides an entry point to rapidly prototyping your next great idea for Android.

These built-in agents make it simple to extend your app ideas across phones, foldables, tablets, Wear OS, Android Auto, and Android TV. Equipped with full context of your project’s source code and a comprehensive suite of debugging, profiling, and emulation tools, you have an end-to-end, AI-accelerated toolkit at your disposal.

Get started today

Android CLI is available in preview today, along with a growing set of Android skills and knowledge for agents. To get started, head over to d.android.com/tools/agents to download Android CLI.

Boosting user privacy and business protection with updated Play policies https://theinshotproapk.com/boosting-user-privacy-and-business-protection-with-updated-play-policies/ Wed, 15 Apr 2026 17:00:00 +0000 https://theinshotproapk.com/boosting-user-privacy-and-business-protection-with-updated-play-policies/ Posted by Bennet Manuel, Group Product Manager, App & Ecosystem Trust We strive to make Google Play the safest and ...


Posted by Bennet Manuel, Group Product Manager, App & Ecosystem Trust

We strive to make Google Play the safest and most trusted experience possible. Today, we’re announcing a new set of policy updates and an account transfer feature to boost user privacy and protect your business from fraud. By providing better features for users and easy-to-integrate tools for you, we’re making it simpler to build safer apps so you can focus on creating great experiences.

We’re also expanding our features to help you manage new contact and location policy changes, so you have a smoother, more predictable app review experience. By October, Play policy insights in Android Studio can help you proactively identify if your app should use these new features and guide you on the exact steps to take. Additionally, new pre-review checks in the Play Console will be available starting October 27 to flag potential contacts or location permissions policy issues so you can fix them before you submit your app for review.

Here is what is new and how you can prepare.

Contact Picker: A privacy-friendly way to access contacts

Android is introducing the Android Contact Picker as the new standard for accessing contact information (e.g., for invites, sharing, or one-time lookups). This picker lets users share only the specific contacts they want to, helping build trust and protect privacy. Alongside this tool, we are updating our policy to require that all applicable apps use the picker, or other privacy-focused alternatives like Sharesheet, as the primary way to access users’ contacts. READ_CONTACTS will be reserved for apps that can’t function without it.

What you’ll need to do

  • If your app asks for access to contacts for features like sharing or inviting, you should update your code to use the picker and remove the READ_CONTACTS permission entirely (if targeting Android 17 and above).
  • If your app requires full, ongoing access to a user’s contact list to function, you must justify this need by submitting a Play Developer Declaration in the Play Console. This form will be available before October.

Location button: More privacy-friendly way to access location

Android is introducing a new, streamlined location button to make requesting precise data easier for one-time actions, like finding a store or tagging a photo. This feature replaces complex permission dialogs with a single tap, helping users make clearer choices about how much information they share and for how long. We’re updating our policy to require apps to use this button for one-time precise location access unless they require persistent, always-on location access. This creates a faster, more predictable experience for your users and reduces the friction of traditional permission requests.

What you’ll need to do

  • Review your app’s location usage to ensure you are requesting the minimum amount of location data needed for your app to work.
  • If your app targets Android 17 and above and uses precise location for discrete, temporary actions, implement the location button by adding the onlyForLocationButton flag in your manifest.
  • If your app requires persistent precise location to function, you will need to submit a Play Developer Declaration in Play Console to show why the new button or coarse location isn’t sufficient for your app’s core features. This form will be available before October.

Account Transfer: Protecting your business

You asked for a secure way to transfer app ownership during business changes, and we listened. We’re launching an official account transfer feature directly in Play Console that’s designed to help you easily transfer ownership during sales and mergers while also protecting your business from fraud. Starting May 27, account ownership changes must use this official feature. That means that unofficial transfers (like sharing login credentials or buying and selling accounts on third-party marketplaces) which leave your business vulnerable are not permitted.

What you’ll need to do

  • Initiate any future account owner changes through the “Users and permissions” page in Play Console.
  • Every transfer will include a mandatory 7-day security cool-down period. This gives your team time to spot and cancel any unauthorized attempts to take over your account. See Transferring ownership of a Play Console developer account for more guidance.

What’s next

We want to give you plenty of time to review these changes and update your apps. For more information, deadlines, and the full list of Google Play policy updates we’re announcing today, please visit the Policy Announcements page.

Thank you for your partnership in keeping Play safe for everyone.

Get ready for Google I/O: Livestream schedule revealed https://theinshotproapk.com/get-ready-for-google-i-o-livestream-schedule-revealed/ Tue, 14 Apr 2026 12:30:00 +0000 https://theinshotproapk.com/get-ready-for-google-i-o-livestream-schedule-revealed/ Google I/O 2026: Livestream Schedule Revealed Posted by The Google I/O team The Google I/O schedule is here! Tune in ...

Google I/O 2026: Livestream Schedule Revealed

Posted by The Google I/O team

The Google I/O schedule is here! Tune in May 19–20 as we unveil Google’s biggest updates across AI, Android, Chrome, and Cloud. Discover new tools and features designed to unlock the future of development with agentic coding.

We’re kicking things off with the Google keynote at 10:00 am PT on May 19, followed by the Developer keynote at 1:30 pm PT. Block your calendars for two days of live sessions, straight from Mountain View, full of announcements, live demos, and new professional development sessions.

Here’s a sneak peek at what we’ll cover:

  • The agentic era of development: Discover how the next evolution of our developer tools is transforming the way you write software. Learn how to seamlessly transition from rapid ideation to orchestrating powerful, autonomous workflows, allowing AI to handle the heavy lifting while you focus on the big picture.
  • Enabling Android development anywhere: Learn how we are making AI even more helpful for your app workflows. From initial prototyping to final native polish, explore the latest ways we’re making it easier and faster to build high quality Android experiences.
  • Building powerful, agentic web applications: The web is accelerating faster than ever, and we are equipping you for what’s next. Discover new tools to build agent-ready web applications, automate complex debugging workflows, and ship highly interactive UI directly in the browser.

Join us online May 19–20, followed by a fresh drop of on-demand sessions and codelabs on May 21. Register today to explore the full program and catch all the latest developer updates.

Test Multi-Device Interactions with the Android Emulator https://theinshotproapk.com/test-multi-device-interactions-with-the-android-emulator/ Mon, 13 Apr 2026 13:00:00 +0000 https://theinshotproapk.com/test-multi-device-interactions-with-the-android-emulator/ Posted by Steven Jenkins, Product Manager, Android Studio Testing multi-device interactions is now easier than ever with the Android Emulator. ...


Posted by Steven Jenkins, Product Manager, Android Studio


Testing multi-device interactions is now easier than ever with the Android Emulator. Whether you are building a multiplayer game, extending your mobile application across form factors, or launching virtual devices that require a device connection, the Android Emulator now natively supports these developer experiences.

Previously, interconnecting multiple Android Virtual Devices (AVDs) caused significant friction. It required manually managing complex port forwarding rules just to get two emulators to connect.

Now you can take advantage of a new networking stack for the Android Emulator which brings zero-configuration peer-to-peer connectivity across all your AVDs.

Interconnecting emulator instances

The new networking stack for the Android Emulator transforms how emulators communicate. Previously, each virtual device operated on its own local area network (LAN), effectively isolating it from other AVDs. The new Wi-Fi network stack changes this by creating a shared virtual network backplane that bridges all running instances on the same host machine.

Key Benefits:

  • Zero-configuration: No more manual port forwarding or scripting adb commands. AVDs on the same host appear on the same virtual network.
  • Peer-to-peer connectivity: Critical protocols like Wi-Fi Direct and Network Service Discovery (NSD) work out of the box between emulators.
  • Improved stability: Resolves long-standing stability issues, such as data loss and connection drops found in the legacy stack.
  • Cross-platform consistency: Works the same across Windows, macOS and Linux.

Use Cases

The enhanced emulator networking supports a wide range of multi-device development scenarios:

  • Multi-device apps: Test file sharing, local multiplayer gaming, or control flows between a phone and another Android device.
  • Continuous Integration: Create robust, automated multi-device test pipelines without flaky network scripts.
  • Android XR & AI glasses: Easily test companion app pairing and data streaming between a phone and glasses within Android Studio.
  • Automotive & Wear OS: Validate connectivity flows between a mobile device and a vehicle head unit or smartwatch.
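Because the AVDs now sit on one shared virtual network, plain sockets work between them with no port-forwarding setup. A minimal sketch of such a peer-to-peer exchange, using only java.net (the class name is hypothetical, and the loopback address stands in here for a peer emulator's IP on the shared network):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class PeerEcho {
    // One AVD listens; a peer AVD connects, sends a line, and gets it
    // echoed back. Returns the echoed message as seen by the client.
    public static String exchange(String message) throws Exception {
        // Port 0 asks for any free port, much like a service discovered
        // via NSD would advertise its own port.
        try (ServerSocket server =
                new ServerSocket(0, 1, InetAddress.getLoopbackAddress())) {
            Thread host = new Thread(() -> {
                try (Socket peer = server.accept()) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(peer.getInputStream()));
                    PrintWriter out =
                            new PrintWriter(peer.getOutputStream(), true);
                    out.println(in.readLine()); // echo the peer's message
                } catch (Exception ignored) { }
            });
            host.start();
            // On a real pair of AVDs, this address would be the other
            // emulator's IP on the shared virtual network.
            try (Socket client = new Socket(InetAddress.getLoopbackAddress(),
                    server.getLocalPort())) {
                PrintWriter out =
                        new PrintWriter(client.getOutputStream(), true);
                out.println(message);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                return in.readLine();
            } finally {
                host.join();
            }
        }
    }
}
```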

The new emulator networking stack allows multiple AVDs to share a virtual network, enabling direct peer-to-peer communication with zero configuration.

Get Started

The new networking capability is enabled by default in the latest Android Emulator release (36.5), which is available via the Android Studio SDK Manager. Just update your emulator and launch multiple devices!

If you need to disable this feature or want to learn more, please refer to our documentation.

As always, we appreciate any feedback. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, Medium, YouTube, or X.

Gemma 4: The new standard for local agentic intelligence on Android https://theinshotproapk.com/gemma-4-the-new-standard-for-local-agentic-intelligence-on-android/ Sat, 04 Apr 2026 12:09:57 +0000 https://theinshotproapk.com/gemma-4-the-new-standard-for-local-agentic-intelligence-on-android/ Posted by Matthew McCullough, VP of Product Management Android Development Today, we are enhancing Android development with Gemma 4, our ...


Posted by Matthew McCullough, VP of Product Management Android Development

Today, we are enhancing Android development with Gemma 4, our latest state-of-the-art open model, designed for complex reasoning and autonomous tool calling.

Our vision is to enable local agentic AI on Android across the entire software lifecycle, from development to production. Android supports a range of Gemma 4 models, from the most efficient ones running directly on-device in your apps to more powerful ones running on your development machine to help you build apps. We are bringing Gemma 4 to Android developers through two pillars:

  • Local-first agentic coding: Experience powerful, local AI code assistance with Gemma 4 in Android Studio on your development machine.
  • On-device intelligence: Build intelligent experiences using the ML Kit GenAI Prompt API to run Gemma 4 directly on Android device hardware.
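The "autonomous tool-calling" the post describes follows a common agentic loop: the model either requests a tool invocation or returns a final answer, and the host feeds tool results back in. Here is a minimal, JVM-only Kotlin sketch of that loop. Nothing in it is a real Gemma 4 or Android Studio API; the model and the tool are stubs so only the control flow is illustrated.

```kotlin
// Sketch of a generic agentic tool-calling loop (all names illustrative).
sealed interface ModelTurn
data class ToolCall(val name: String, val arg: String) : ModelTurn
data class FinalAnswer(val text: String) : ModelTurn

// Stand-in for the model: first requests a tool, then answers using the result.
class StubAgentModel {
    private var step = 0
    fun next(observation: String?): ModelTurn =
        if (step++ == 0) ToolCall("readFile", "build.gradle")
        else FinalAnswer("The project targets SDK $observation")
}

// Tools the host exposes to the model, keyed by name.
val tools: Map<String, (String) -> String> = mapOf(
    "readFile" to { path -> "35 (from $path)" } // stubbed file read
)

// Drive the loop: execute requested tools until the model produces an answer.
fun runAgent(model: StubAgentModel): String {
    var observation: String? = null
    while (true) {
        when (val turn = model.next(observation)) {
            is ToolCall -> observation = tools.getValue(turn.name)(turn.arg)
            is FinalAnswer -> return turn.text
        }
    }
}

fun main() {
    println(runAgent(StubAgentModel()))
}
```

The same shape applies whether the "tools" are file reads in an IDE agent or app-defined functions exposed to an on-device model.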

Coding with Gemma 4 in Android Studio

When you build Android apps, Android Studio can use Gemma 4, leveraging its state-of-the-art reasoning and native support for tool use while keeping the model and inference entirely on your local machine.

Gemma 4 was trained on Android development and designed with Agent Mode in mind. This means that when you select Gemma 4 as your local model, you can leverage the full suite of Agent Mode capabilities for a variety of Android development use cases, including refactoring legacy code, building an entire app or new features, and applying fixes iteratively.

Learn more about the possibilities Gemma 4 brings to your app development flow and how to get started.

Prototyping with Gemma 4 on-device

Since the introduction of Gemini Nano as the foundation model on Android, it has become available on over 140 million devices. Gemma 4 is the base model for the next generation of Gemini Nano (Gemini Nano 4) that is optimized for performance and quality on Android devices. This model is up to 4x faster than the previous version and uses up to 60% less battery.

To make it as easy as possible to preview and prototype with Gemma 4 E2B and E4B models directly on AICore-supported devices, we’re launching the AICore Developer Preview. While we continue to expand the ML Kit GenAI Prompt API surface to unlock additional advanced capabilities of the model, you can already start exploring new use cases with Gemma 4 using the Prompt API.
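Since the Prompt API surface is still expanding and the backing model will change (Gemma 4 in the Developer Preview today, Gemini Nano 4 later), prototyping code benefits from a thin seam in front of the on-device client. This JVM-only Kotlin sketch shows that pattern with a fake client; none of the names below are real ML Kit APIs, and real usage would go through the ML Kit GenAI Prompt API on an AICore-supported device.

```kotlin
// Illustrative abstraction over an on-device prompt API (all names hypothetical),
// so feature code is unchanged when the backing model is swapped.
interface PromptClient {
    fun generate(prompt: String): String
}

// Fake client for prototyping off-device; a real app would delegate to ML Kit.
class FakeOnDeviceClient : PromptClient {
    override fun generate(prompt: String): String =
        "summary of: " + prompt.take(20)
}

// Feature code depends only on the interface, not on a specific model.
class ReviewSummarizer(private val client: PromptClient) {
    fun summarize(review: String): String =
        client.generate("Summarize this review: $review")
}

fun main() {
    val summarizer = ReviewSummarizer(FakeOnDeviceClient())
    println(summarizer.summarize("Great battery life"))
}
```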

Prepare your apps for the launch of Gemini Nano 4 on new flagship Android devices later this year by prototyping with Gemma 4 today. Read about the upcoming features and deep dive into the AICore Developer Preview and its Gemma 4 support here.

Local agentic intelligence with Gemma 4

Running Gemma 4 locally, you can leverage its advanced reasoning and tool-calling capabilities across your entire workflow, from developing with the AI coding assistant in Android Studio to shipping intelligent features in your app with the ML Kit GenAI Prompt API. This local-first approach, available under Gemma's open Apache license, gives developers a privacy-centric and cost-effective way to innovate. In a future release, we will update Android Bench to include Gemma 4 and other open models, providing the quantified data you need to navigate performance trade-offs and select the best model for your use case.

We can’t wait to see what you build!

The post Gemma 4: The new standard for local agentic intelligence on Android appeared first on InShot Pro.
