Test Multi-Device Interactions with the Android Emulator
https://theinshotproapk.com/test-multi-device-interactions-with-the-android-emulator/
Mon, 13 Apr 2026


Posted by Steven Jenkins, Product Manager, Android Studio


Testing multi-device interactions is now easier than ever with the Android Emulator. Whether you are building a multiplayer game, extending your mobile application across form factors, or launching virtual devices that require a device connection, the Android Emulator now natively supports these developer experiences.

Previously, interconnecting multiple Android Virtual Devices (AVDs) caused significant friction. It required manually managing complex port forwarding rules just to get two emulators to connect.

Now you can take advantage of a new networking stack for the Android Emulator which brings zero-configuration peer-to-peer connectivity across all your AVDs.

Interconnecting emulator instances

The new networking stack for the Android Emulator transforms how emulators communicate. Previously, each virtual device operated on its own local area network (LAN), effectively isolating it from other AVDs. The new Wi-Fi network stack changes this by creating a shared virtual network backplane that bridges all running instances on the same host machine.

Key Benefits:

  • Zero-configuration: No more manual port forwarding or scripting adb commands. AVDs on the same host appear on the same virtual network.
  • Peer-to-peer connectivity: Critical protocols like Wi-Fi Direct and Network Service Discovery (NSD) work out of the box between emulators.
  • Improved stability: Resolves long-standing stability issues, such as data loss and connection drops found in the legacy stack.
  • Cross-platform consistency: Works the same across Windows, macOS and Linux.
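
Because AVDs now share one virtual network, one emulator can reach another directly by IP address, with no port-forwarding rules in between. The sketch below illustrates that kind of direct peer-to-peer connection with plain TCP sockets; the function names are invented for this example, and localhost stands in for a peer AVD's address on the shared network.

```kotlin
import java.net.ServerSocket
import java.net.Socket

// One device acts as a host: it listens on a port that peers on the
// shared virtual network can reach directly.
fun startHost(port: Int): Thread {
    val server = ServerSocket(port)
    val t = Thread {
        server.accept().use { peer ->
            // Send a greeting to the first peer that connects.
            peer.getOutputStream().write("hello from host\n".toByteArray())
        }
        server.close()
    }
    t.start()
    return t
}

// A second device connects by IP; on the new stack this would be the
// peer AVD's address on the shared network (localhost here for illustration).
fun connectToPeer(host: String, port: Int): String {
    Socket(host, port).use { socket ->
        return socket.getInputStream().bufferedReader().readLine()
    }
}

val hostThread = startHost(5600)
val message = connectToPeer("127.0.0.1", 5600)
hostThread.join()
println(message)  // hello from host
```

On the legacy stack, a connection like this between two AVDs required manual `adb forward` rules on the host; on the new stack the peers see each other directly.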

Use Cases

The enhanced emulator networking supports a wide range of multi-device development scenarios:

  • Multi-device apps: Test file sharing, local multiplayer gaming, or control flows between a phone and another Android device.
  • Continuous Integration: Create robust, automated multi-device test pipelines without flaky network scripts.
  • Android XR & AI glasses: Easily test companion app pairing and data streaming between a phone and glasses within Android Studio.
  • Automotive & Wear OS: Validate connectivity flows between a mobile device and a vehicle head unit or smartwatch.

The new emulator networking stack allows multiple AVDs to share a virtual network, enabling direct peer-to-peer communication with zero configuration.

Get Started

The new networking capability is enabled by default in the latest Android Emulator release (36.5), which is available via the Android Studio SDK Manager. Just update your emulator and launch multiple devices!

If you need to disable this feature or want to learn more, please refer to our documentation.

As always, we appreciate any feedback. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, Medium, YouTube, or X.

Gemma 4: The new standard for local agentic intelligence on Android
https://theinshotproapk.com/gemma-4-the-new-standard-for-local-agentic-intelligence-on-android/
Sat, 04 Apr 2026


Posted by Matthew McCullough, VP of Product Management Android Development

Today, we are enhancing Android development with Gemma 4, our latest state-of-the-art open model designed with complex reasoning and autonomous tool-calling capabilities.

Our vision is to enable local agentic AI on Android across the entire software lifecycle, from development to production. Android supports a range of Gemma 4 models, from the most efficient ones running directly on-device in your apps to more powerful ones running on your development machine to help you build apps. We are bringing Gemma 4 to Android developers through two pillars:

  • Local-first agentic coding: Experience powerful, local AI code assistance with Gemma 4 in Android Studio on your development machine.
  • On-device intelligence: Build intelligent experiences using the ML Kit GenAI Prompt API to run Gemma 4 directly on Android device hardware.

Coding with Gemma 4 in Android Studio

When building Android apps, Android Studio can use Gemma 4 to leverage its state-of-the-art reasoning power and native support for tool use, while keeping the model and inference contained entirely on your local machine.

Gemma 4 was trained on Android development and designed with Agent Mode in mind. This means that when you select Gemma 4 as your local model, you can leverage the full suite of Agent Mode capabilities for a variety of Android development use cases, including refactoring legacy code, building an entire app or new features, and applying fixes iteratively.

Learn more about the possibilities Gemma 4 brings to your app development flow and how to get started.

Prototyping with Gemma 4 on-device

Since the introduction of Gemini Nano as the foundation model on Android, it has become available on over 140 million devices. Gemma 4 is the base model for the next generation of Gemini Nano (Gemini Nano 4) that is optimized for performance and quality on Android devices. This model is up to 4x faster than the previous version and uses up to 60% less battery.

To make it as easy as possible to preview and prototype with Gemma 4 E2B and E4B models directly on AICore-supported devices, we’re launching the AICore Developer Preview. While we continue to expand the ML Kit GenAI Prompt API surface to unlock additional advanced capabilities of the model, you can already start exploring new use cases with Gemma 4 using the Prompt API.

Prepare your apps for the launch of the Gemini Nano 4 on the new flagship Android devices later this year by prototyping with Gemma 4 today. Read about the upcoming features and deep dive into AICore Developer Preview and its Gemma 4 support here.

Local agentic intelligence with Gemma 4

Running Gemma 4 locally, you can leverage its advanced reasoning and tool-calling capabilities across your entire workflow, from developing with the AI coding assistant in Android Studio to shipping intelligent features in your app with the ML Kit GenAI Prompt API. This local-first approach, available under Gemma’s open Apache license, provides an alternative for developers to innovate in a privacy-centric and cost-effective manner. In a future release, we will update Android Bench to include Gemma 4 and other open models, providing the quantified data you need to navigate performance trade-offs and select the best model for your use case.

We can’t wait to see what you build!

Increase Guidance and Control over Agent Mode with Android Studio Panda 3
https://theinshotproapk.com/increase-guidance-and-control-over-agent-mode-with-android-studio-panda-3/
Sat, 04 Apr 2026



Posted by Matt Dyor, Senior Product Manager



Android Studio Panda 3 is now stable and ready for you to use in production. This release gives you even more control and customization over your AI-powered workflows, making it easier than ever to build high-quality Android apps.

Whether you’re bringing new capabilities to an existing app or standing up a brand new app, these updates elevate your development experience by allowing your AI Agent in Android Studio to learn your specific practices and giving you granular control over its permissions.

Lastly, in addition to agent skills and Agent Mode enhancements, Android Studio Panda 3 also includes updated support for building Android apps for cars.

Here’s a deep dive into what’s new:

Agent skills

Create a more helpful AI agent by using agent skills in Android Studio. Agent skills are specialized instructions that teach the agent new capabilities and best practices for a specific workflow, which the agent can then leverage as needed. This significantly reduces the level of detail required for your day-to-day prompts. Agent skills work with Gemini in Android Studio or with other remote third-party LLMs you integrate into the agent framework in Android Studio.

You and members of your team can create skills that tell the agent exactly how you want to handle specific tasks in your codebase. For example, you could create a custom “code review” skill tailored to your organization’s coding standards, or a custom skill that gives the agent more information about using an in-house library.

Once you have created a skill, the agent will be able to use it automatically, or you can manually trigger it by typing @ followed by the skill name. Check out the documentation to learn more about how to create skills for your codebase, or better yet—ask your agent to help you build a new skill and it will guide you through the details!

Manually Trigger Agent Skill in Android Studio

Getting Started

To build a skill for your project, do the following:

  1. Create a .skills directory inside your project’s root folder.
  2. Place a SKILL.md file inside this new directory.
  3. Add a name and description to the file to define your custom workflow, and your skill is ready.
  4. Optionally, include scripts, assets, and references to provide even more guidance to your agent.
Agent skills in Android Studio
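
Putting those steps together, a minimal SKILL.md might look like the sketch below. The frontmatter layout follows common agent-skill conventions and the review rules are invented for illustration; check the documentation for the exact schema Android Studio expects.

```markdown
---
name: code-review
description: Review changed Kotlin files against our team's coding standards.
---

# Code review skill

When asked to review code:

1. Check that new user-facing strings live in strings.xml, not hardcoded in Kotlin.
2. Verify composables are stateless where possible.
3. Flag any blocking calls on the main thread.
```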

Manage permissions for Agent Mode

You control your codebase, and you can now be more deliberate with which data and capabilities you choose to share with AI agents. The new granular agent permissions in Android Studio let you decide exactly what agents can do for you.

When Agent Mode needs to read files, run shell commands, or access the web, it explicitly asks for your permission. We know that ‘approval fatigue’ is a real risk in AI workflows—when a tool asks for permission too often, it’s easy to start clicking ‘Allow’ without fully reviewing the action. By offering granular ‘Always Allow’ rules for trusted operations and an optional sandbox for experimental ones, Android Studio helps you stay focused on the high-stakes decisions that actually require your manual sign-off.

Agent Permissions

Agent permissions are intuitive to set up and use. For example, granting high-level permissions automatically authorizes related sub-tools, while commands you have previously approved will run automatically without interrupting your flow. Rest assured, accessing sensitive files like SSH keys will always require your explicit sign-off.

For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.

Agent Shell Sandbox

Empty Car App Library App template

We’re making it easier to build Android apps for cars. Building apps for the car used to mean wrestling with complex configurations just to get the project to build successfully.

Now, you can accelerate your development with the new “Empty Car App Library App” template in Android Studio. This template takes care of the required boilerplate code for a driving-optimized app on both Android Auto and Android Automotive OS, saving you significant time and effort. Instead of getting bogged down in setup, you can focus on creating the best experience for your users on the road.

Getting Started

To use the new template:

  1. Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project).
  2. Search for or select the Empty Car App Library App template.
  3. Name your app and click Finish to generate your driving-optimized app.
Empty Car App Library App template

Android Studio Panda releases 

Panda 3 builds on last month’s AI-focused Panda 2 release. Check out the Go from prompt to working prototype with Android Studio Panda 2 post to learn more about new Android Studio features, including the AI-powered New Project Flow that takes you from prompt to prototype and the Version Upgrade Assistant that takes the toil out of updating your dependencies.

Get started

Dive in and accelerate your development. Download Android Studio Panda 3 and start exploring these powerful new agentic features today.

As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!

Announcing Gemma 4 in the AICore Developer Preview
https://theinshotproapk.com/announcing-gemma-4-in-the-aicore-developer-preview/
Thu, 02 Apr 2026

Posted by David Chou, Product Manager and Caren Chang, Developer Relations Engineer



At Google, we’re committed to bringing the most capable AI models directly to the Android devices in your pocket. Today, we’re thrilled to announce the release of our latest state-of-the-art open model: Gemma 4.

These models are the foundation for the next generation of Gemini Nano, so code you write today for Gemma 4 will automatically work on Gemini Nano 4-enabled devices that will be available later this year. With Gemini Nano 4, you’ll benefit from our additional performance optimizations so you can ship to production across the Android ecosystem with the most efficient on-device inference.

You can get early access to this model today through the AICore Developer Preview.

Select the Gemini Nano 4 Fast model in the Developer Preview UI to see its blazing fast inference speed in action before you write any code.

Because Gemma 4 natively supports over 140 languages, you can expect improved localized, multilingual experiences for your global audience. Furthermore, Gemma 4 offers industry-leading performance with multimodal understanding, allowing your apps to understand and process text, images, and audio. To give you the best balance of performance and efficiency, Gemma 4 on Android comes in two sizes:

  • E4B: Designed for higher reasoning power and complex tasks.
  • E2B: Optimized for maximum speed (3x faster than the E4B model!) and lower latency.

The new model is up to 4x faster than previous versions and uses up to 60% less battery. Starting today, you can experiment with improved capabilities including:

  • Reasoning: Chain-of-thought commands and conditional statements can now be expected to return higher quality results. For example: “Determine if the following comment for a discussion thread passes the community guidelines. The comment does not pass the community guidelines if it contains one or more of these reason_for_flag: profanity, derogatory language, hate speech. If the comment passes the community guidelines, return {true}. Otherwise, return {false, reason_for_flag}.”
  • Math: With better math skills, the model can now more accurately answer questions. For example: “If I get 26 paychecks per year, how much should I contribute each paycheck to reach my savings goal of $10,000 over the course of a year?”
  • Time understanding: The model is now more capable when reasoning about time, making it more accurate for use cases that involve calendars, reminders, and alarms. For example: “The event is at 6PM on August 18th, and a reminder should be sent out 10 hours before the event. Return the time and date the reminder should be sent.”
  • Image understanding: Use cases that involve OCR (Optical Character Recognition) – such as chart understanding, visual data extraction, and handwriting recognition – will now return more accurate results.
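
The math and time examples above have exact answers, which makes them handy for sanity-checking model output. The snippet below computes the ground truth in plain Kotlin (no ML Kit involved); the function names are invented for this sketch, and the event year is assumed for illustration.

```kotlin
import java.time.LocalDateTime

// Savings goal: $10,000 spread evenly across 26 paychecks, rounded to cents.
fun perPaycheck(goal: Double, paychecks: Int): Double =
    Math.round(goal / paychecks * 100) / 100.0

// Reminder time: a fixed number of hours before the event.
fun reminderTime(event: LocalDateTime, hoursBefore: Long): LocalDateTime =
    event.minusHours(hoursBefore)

println(perPaycheck(10_000.0, 26))  // 384.62

// 6 PM on August 18th, reminder 10 hours earlier (year assumed).
val event = LocalDateTime.of(2026, 8, 18, 18, 0)
println(reminderTime(event, 10))    // 2026-08-18T08:00
```

A model answering the prompts above should land on these values: about $384.62 per paycheck, and a reminder at 8 AM on August 18th.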

Join the Developer Preview today to download these preview models and start building next-generation features right away.

Start building with Gemma 4

Start testing the model

You can try out the model without code by following the Developer Preview guide. If you want to jump straight into integrating these models with your existing workflow, we’ve made that seamless. Head over to Android Studio to refine your prompt and build with the familiar ML Kit Prompt API. We’ve introduced a new ability to specify a model, allowing you to target the E2B (fast) or E4B (full) variants for testing.

// Define the configuration with a specific track and preference
val previewFullConfig = generationConfig {
    modelConfig = ModelConfig {
        releaseTrack = ModelReleaseTrack.PREVIEW
        preference = ModelPreference.FULL
    }
}

// Initialize the GenerativeModel with the configuration
val previewModel = GenerativeModel.getClient(previewFullConfig)

// Verify that the specific preview model is available
val previewModelStatus = previewModel.checkStatus()
if (previewModelStatus == FeatureStatus.AVAILABLE) {
    // Proceed with inference
    val response = previewModel.generateContent("If I get 26 paychecks per year, how much should I contribute each paycheck to reach my savings goal of $10k over the course of a year? Return only the amount.")

} else {
    // Handle the case where the preview model is not available
    // (e.g., print out log statements)
}

What to expect during the Developer Preview

The goal of this Developer Preview is to give you a head start on refining prompt accuracy and exploring new use cases for your specific apps. 

We will be making several updates throughout the preview period, including support for tool calling, structured output, system prompts, and thinking mode in Prompt API, making it easier to take full advantage of the new capabilities and significant performance optimizations in Gemma 4.

The preview models are available for testing on AICore-enabled devices. These models will run on the latest generation of specialized AI accelerators from Google, MediaTek, and Qualcomm Technologies. On other devices, the models will initially run on a CPU implementation that is not representative of final production performance. If your device is not AICore-enabled, you can also test these models via the AI Edge Gallery app. We’ll provide support for more devices in the future.

How to get started

Ready to see what Gemma 4 can do for your users?

  1. Opt-in: Sign up for the AICore Developer Preview.
  2. Download: Once opted in, you can trigger the download of the latest Gemma 4 models directly to your supported test device.
  3. Build: Update your ML Kit implementation to target the new models and start building in Android Studio.

Android Studio supports Gemma 4: our most capable local model for agentic coding
https://theinshotproapk.com/android-studio-supports-gemma-4-our-most-capable-local-model-for-agentic-coding/
Thu, 02 Apr 2026






Posted by Matthew Warner, Google Product Manager


Every developer’s AI workflow and needs are unique, and it’s important to be able to choose how AI helps your development. In January, we introduced the ability to choose any local or remote AI model to power AI functionality in Android Studio, and today, we’re announcing the availability of Gemma 4 for AI coding assistance in Android Studio. This new local model trained on Android development provides the best of both worlds: the privacy and cost-efficiency of on-device processing alongside state-of-the-art reasoning and tool-calling capabilities.

AI assistance, locally delivered

By running locally on your machine, Gemma 4 gives you AI code assistance that doesn’t require an internet connection or an API key for its core operations. Key benefits include:

  • Privacy and security: Your code stays on your machine. Gemma 4 processes all Agent Mode requests locally, making it an ideal choice for developers working with data privacy requirements or in secure corporate environments.
  • Cost efficiency: Run complex agentic workflows without worrying about hitting quotas. Gemma 4 is optimized to run efficiently on modern development hardware, utilizing local GPU and RAM to provide snappy, responsive assistance.
  • Offline availability: Use the agent to write code even when you don’t have an internet connection.
  • State-of-the-art reasoning: Gemma 4 delivers best-in-class reasoning, capable of complex multi-step coding tasks in Agent Mode.

Powerful agentic coding

Gemma 4 was trained for Android development with agentic tool calling capabilities. When you select Gemma 4 as your local model, you can leverage Agent Mode for a variety of development use cases, such as:

  • Designing new features: Developers can ask the agent to build a new feature or an entire app with commands like “build a calculator app,” and the agent will not only generate the UI code but will also follow Android best practices, like writing in Kotlin and using Jetpack Compose.
  • Refactoring: You can give high-level commands such as “Extract all hardcoded strings and migrate them to strings.xml.” The agent will scan your codebase, identify instances requiring changes, and apply the edits across multiple files simultaneously.
  • Bug fixing and build resolution: If a project fails to build or has persistent lint errors, you can prompt the agent to “Build my project and fix any errors.” The agent will navigate to the offending code and iteratively apply fixes until the build is successful.

Recommended hardware requirements

The 26B MoE model is recommended for Android app developers whose machines meet its minimum hardware requirements below. Total RAM needed includes both Android Studio and Gemma.

Model           Total RAM needed   Storage needed
Gemma E2B       8 GB               2 GB
Gemma E4B       12 GB              4 GB
Gemma 26B MoE   24 GB              17 GB
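
A setup script or tool could use the table above to pick the most capable variant a machine can run. The helper below is an illustrative sketch of that selection logic; the thresholds come straight from the table, while the type and function names are made up for this example.

```kotlin
// Total RAM and storage requirements from the table above
// (RAM covers Android Studio plus Gemma).
data class GemmaVariant(val name: String, val minRamGb: Int, val storageGb: Int)

val variants = listOf(
    GemmaVariant("Gemma 26B MoE", 24, 17),
    GemmaVariant("Gemma E4B", 12, 4),
    GemmaVariant("Gemma E2B", 8, 2),
)

// Pick the most capable variant whose RAM requirement the machine meets,
// or null if even the smallest model won't fit.
fun pickVariant(totalRamGb: Int): GemmaVariant? =
    variants.firstOrNull { totalRamGb >= it.minRamGb }

println(pickVariant(16)?.name)  // Gemma E4B
println(pickVariant(6)?.name)   // null
```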

Get started

To get started, ensure you have the latest version of Android Studio installed.
  1. Install an LLM provider, such as LM Studio or Ollama, on your local computer.
  2. In Settings > Tools > AI > Model Providers, add your LM Studio or Ollama instance.
  3. Download the Gemma 4 model from Ollama or LM Studio. Refer to the hardware requirements above for model size selection.
  4. In Agent Mode, select Gemma 4 as your active model.


For a detailed walkthrough on configuration, check out the official documentation on how to use a local model.

We are excited to see how Gemma 4 enables more private, secure, and powerful development workflows. As always, your feedback is essential as we continue to refine the AI experience in Android Studio. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, YouTube, or X. Happy coding!

Get your Wear OS apps ready for the 64-bit requirement
https://theinshotproapk.com/get-your-wear-os-apps-ready-for-the-64-bit-requirement/
Wed, 01 Apr 2026


Posted by Michael Stillwell, Developer Relations Engineer and Dimitris Kosmidis, Product Manager, Wear OS

64-bit architectures provide performance improvements and a foundation for future innovation, delivering faster and richer experiences for your users. We’ve supported 64-bit CPUs since Android 5. Extending the 64-bit requirement to Wear OS aligns it with recent updates for Google TV and other form factors, building on the requirement first introduced for mobile in 2019.

Today, we are extending this 64-bit requirement to Wear OS. This blog provides guidance to help you prepare your apps to meet these new requirements.

The 64-bit requirement: timeline for Wear OS developers

Starting September 15, 2026:

  • All new apps and app updates that include native code will be required to provide 64-bit versions in addition to 32-bit versions when publishing to Google Play.
  • Google Play will start blocking the upload of non-compliant apps to the Play Console.

We are not making changes to our policy on 32-bit support, and Google Play will continue to deliver apps to existing 32-bit devices.

The vast majority of Wear OS developers have already made this shift, with 64-bit compliant apps already available. For the remaining apps, we expect the effort to be small.

Preparing for the 64-bit requirement

Many apps are written entirely in non-native code (i.e. Kotlin or Java) and do not need any code changes. However, it is important to note that even if you do not write native code yourself, a dependency or SDK could be introducing it into your app, so you still need to check whether your app includes native code.

Assess your app

  • Inspect your APK or app bundle for native code using the APK Analyzer in Android Studio.
  • Look for .so files within the lib folder. For ARM devices, 32-bit libraries are located in lib/armeabi-v7a, while the 64-bit equivalent is lib/arm64-v8a.
  • Ensure parity: The goal is to ensure that your app runs correctly in a 64-bit-only environment. While specific configurations may vary, for most apps this means that for each native 32-bit architecture you support, you should include the corresponding 64-bit architecture by providing the relevant .so files for both ABIs.
  • Upgrade SDKs: If you only have 32-bit versions of a third-party library or SDK, reach out to the provider for a 64-bit compliant version.
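
The parity check in the steps above (every 32-bit `.so` has a 64-bit counterpart) is easy to automate. The sketch below scans a list of `lib/` entry paths, such as you would get by enumerating an APK with `java.util.zip.ZipFile`, and reports 32-bit ARM libraries missing an `arm64-v8a` equivalent; the function name is invented for this example.

```kotlin
// Given lib/ entries from an APK (e.g. ZipFile(apk).entries() names),
// report 32-bit ARM libraries that have no 64-bit counterpart.
fun missing64BitLibs(libEntries: List<String>): List<String> {
    val arm32 = libEntries
        .filter { it.startsWith("lib/armeabi-v7a/") }
        .map { it.substringAfterLast('/') }
    val arm64 = libEntries
        .filter { it.startsWith("lib/arm64-v8a/") }
        .map { it.substringAfterLast('/') }
        .toSet()
    return arm32.filter { it !in arm64 }
}

val entries = listOf(
    "lib/armeabi-v7a/libfoo.so",
    "lib/armeabi-v7a/libbar.so",
    "lib/arm64-v8a/libfoo.so",
)
println(missing64BitLibs(entries))  // [libbar.so]
```

An empty result means every 32-bit ARM library ships with its 64-bit equivalent; any names returned point to libraries you need to rebuild or obtain as 64-bit versions.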

How to test 64-bit compatibility

The 64-bit version of your app should offer the same quality and feature set as the 32-bit version. The Wear OS Android Emulator can be used to verify that your app behaves and performs as expected in a 64-bit environment.

Note: Since Wear OS apps are required to target Wear OS 4 or higher to be submitted to Google Play, you are likely already testing on these newer, 64-bit only images.

When testing, pay attention to native code loaders such as SoLoader or older versions of OpenSSL, which may require updates to function correctly on 64-bit only hardware.

Next steps

We are announcing this requirement now to give developers a six-month window to bring their apps into compliance before enforcement begins in September 2026. For more detailed guidance on the transition, please refer to our in-depth documentation on supporting 64-bit architectures.

This transition marks an exciting step for the future of Wear OS and the benefits that 64-bit compatibility will bring to the ecosystem.

Android developer verification: Rolling out to all developers on Play Console and Android Developer Console
https://theinshotproapk.com/android-developer-verification-rolling-out-to-all-developers-on-play-console-and-android-developer-console/
Wed, 01 Apr 2026


Posted by Matthew Forsythe, Director Product Management, Android App Safety

Android is for everyone. It’s built on a commitment to an open and safe platform. Users should feel confident installing apps, no matter where they get them from. However, our recent analysis found over 90 times more malware from sideloaded sources than on Google Play. So as an extra layer of security, we are rolling out Android developer verification to help prevent malicious actors from hiding behind anonymity to repeatedly spread harm. Over the past several months, we’ve worked closely with the community to improve the design so it accounts for the many ways people use Android, balancing openness with safety.

Start your verification today

Today, we’re starting to roll out Android developer verification to all developers in both the new Android Developer Console and Play Console. This allows you to complete your verification and register your apps before user-facing changes begin later this year.

  • If you only distribute apps outside of Google Play, you can create an account in Android Developer Console today.
  • If you’re on Google Play, check your Play Console account for updates over the next few weeks. If you’ve already verified your identity here, then you’re likely already set.

Most of your users’ download experience will not change at all

While verification tools are rolling out now, the experience for users downloading your apps will not change until later this year. The user-side protections will first go live in Brazil, Indonesia, Singapore, and Thailand this September, before expanding globally in 2027. We’ve shared this timeline early to ensure you have ample time to complete your verification.

Following this deadline, for the vast majority of users, the experience of installing apps will stay exactly the same. It’s only when a user tries to install an unregistered app that they’ll need to use ADB or the advanced flow, helping us keep the broader community safe while preserving flexibility for our power users.

Developers can still choose where to distribute their apps. Most users’ download experience will not change

Tailoring the verification experience to your feedback

To balance the need for safety with our commitment to openness, we’ve improved the verification experience based on your feedback. We’ve streamlined the developer experience to be more integrated with existing workflows and maintained choice for power users.

  • For Android Studio developers: In the next two months, you’ll see your app’s registration status right in Android Studio when you generate a signed App Bundle or APK.

  • For Play developers: If you’ve completed Play Console’s developer verification requirements, your identity is already verified and we’ll automatically register eligible Play apps for you. In the rare case that we are unable to register your apps for you, you will need to follow the manual app claim process. Over the next couple of weeks, more details will be provided in the Play Console and through email. Also, you’ll be able to register apps you distribute outside of Play in the Play Console too.

The Android developer verification page in your Play Console will show the registration status for each of your apps.

  • For students and hobbyists: To keep Android accessible to everyone, we’re building a free, limited-distribution account type that requires no government ID, so you can share your work with up to 20 devices. You only need an email account to get started. Sign up for early access. We’ll send invites in June.
  • For power users: We are maintaining the choice to install apps from any source. You can use the new advanced flow for sideloading unregistered apps or continue using ADB. This maintains choice while protecting vulnerable users.

What’s next?

We’re rolling this out carefully and working closely with developers, users, and our partners. In April, we’ll introduce Android Developer Verifier, a new Google system service that will be used to check if an app is registered to a verified developer.

  • April 2026: Users will start to see Android Developer Verifier in their Google Systems services settings.
  • June 2026: Early access: Limited distribution accounts for students and hobbyists.
  • August 2026: 
  • September 30, 2026: Apps must be registered by verified developers in order to be installed and updated on certified Android devices in Brazil, Indonesia, Singapore, and Thailand. Unregistered apps can be sideloaded with ADB or advanced flow.
  • 2027 and beyond: We will roll out this requirement globally.

We’re committed to an Android that is both open and safe. Check out our developer guides to get started today.

The post Android developer verification: Rolling out to all developers on Play Console and Android Developer Console appeared first on InShot Pro.

]]>
Media3 1.10 is out https://theinshotproapk.com/media3-1-10-is-out/ Mon, 30 Mar 2026 23:00:00 +0000 https://theinshotproapk.com/media3-1-10-is-out/ Posted by Andrew Lewis, Software Engineer Media3 1.10 is out! Media3 1.10 includes new features, bug fixes and feature improvements, ...

Read more

The post Media3 1.10 is out appeared first on InShot Pro.

]]>

Posted by Andrew Lewis, Software Engineer

Media3 1.10 is out!

Media3 1.10 includes new features, bug fixes and feature improvements, including Material3-based playback widgets, expanded format support in ExoPlayer and improved speed adjustment when exporting media with Transformer. Read on to find out more, and check out the full release notes for a comprehensive list of changes.

Playback UI and Compose

We are continuing to expand the media3-ui-compose-material3 module to help you build Compose UIs for playback.

We’ve added a new Player Composable that combines a ContentFrame with customizable playback controls, giving you an out-of-the-box player widget with a modern UI.

This release also adds a ProgressSlider Composable for displaying player progress and performing seeks using dragging and tapping gestures. For playback speed management, a new PlaybackSpeedControl is available in the base media3-ui-compose module, alongside a styled PlaybackSpeedToggleButton in the Material 3 module.

We’ll continue working on new additions like track selection utils, subtitle support and more customization options in the upcoming Media3 releases. We’re eager to hear your feedback so please share your thoughts on the project issue tracker.


 Player Composable in the Media3 Compose demo app

Playback feature enhancements

Media3 1.10 includes a variety of additions and improvements across the playback modules:

  • Format support: ExoPlayer now supports extracting Dolby Vision Profile 10 and Versatile Video Coding (VVC) tracks in MP4 containers, and we’ve introduced MPEG-H UI manager support in the decoder_mpegh extension. The IAMF extension now seamlessly supports binaural output, either through the decoder via iamf_tools or through the Android OS Spatializer, with new logic to match the output layout of the speakers.

  • Ad playback: Reliability improvements, improved HLS interstitial support for X-PLAYOUT-LIMIT and X-SNAP, and, with the latest IMA SDK dependency, the ability to control whether ad click-through URLs open in custom tabs with setEnableCustomTabs.

  • HLS: ExoPlayer now allows location fallback upon encountering load errors if redundant streams from different locations are available.

  • Session: MediaSessionService now extends LifecycleService, allowing apps to access the lifecycle scoping of the service.

One of our key focus areas this year is playback efficiency and performance. Media3 1.10 includes experimental support for scheduling the core playback loop in a more efficient way. You can try this out by enabling experimentalSetDynamicSchedulingEnabled() via the ExoPlayer.Builder. We plan to make further improvements in future releases, so stay tuned!

Media editing and Transformer

For developers building media editing experiences, we’ve made speed adjustments more robust. EditedMediaItem.Builder.setFrameRate() can now set a maximum output frame rate for video. This is particularly helpful for controlling output size and maintaining performance when increasing media speed with setSpeed().
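To see why a frame-rate cap helps, here is a back-of-the-envelope sketch in plain Kotlin (this is not the Media3 API; the numbers are illustrative): speeding media up multiplies the effective frame rate, and the cap bounds how many frames the exporter has to encode.

```kotlin
// Plain-Kotlin sketch (not the Media3 API): doubling playback speed doubles
// the effective frame rate, so a cap tells the exporter how many frames it
// may drop while preserving the shortened output duration.
fun effectiveFrameRate(inputFps: Double, speed: Double): Double = inputFps * speed

fun framesKept(inputFrames: Int, inputFps: Double, speed: Double, maxOutputFps: Double): Int {
    val outputDurationSec = inputFrames / inputFps / speed
    val outputFps = minOf(effectiveFrameRate(inputFps, speed), maxOutputFps)
    return (outputDurationSec * outputFps).toInt()
}

fun main() {
    // 10 s of 30 fps video at 2x speed: 60 fps effective, capped back to 30 fps
    println(effectiveFrameRate(30.0, 2.0))    // 60.0
    println(framesKept(300, 30.0, 2.0, 30.0)) // 150
}
```

With the cap in place, half the input frames can be dropped instead of encoding a 60 fps output, which is exactly the size/performance trade-off described above.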

New modules for frame extraction and applying Lottie effects

In this release we’ve split some functionality into new modules to reduce the scope of some dependencies:

  • FrameExtractor has been removed from the main media3-inspector module, so please migrate your code to use the new media3-inspector-frame module and update your imports to androidx.media3.inspector.frame.FrameExtractor.

  • We have also moved the LottieOverlay effect to a separate media3-effect-lottie module. As a reminder, this gives you a straightforward way to apply vector-based Lottie animations directly to video frames.

Please get in touch via the issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!

The post Media3 1.10 is out appeared first on InShot Pro.

]]>
Monzo boosts performance metrics by up to 35% with a simple R8 update https://theinshotproapk.com/monzo-boosts-performance-metrics-by-up-to-35-with-a-simple-r8-update/ Mon, 30 Mar 2026 22:00:00 +0000 https://theinshotproapk.com/monzo-boosts-performance-metrics-by-up-to-35-with-a-simple-r8-update/ Posted by Ben Weiss, Senior Developer Relations Engineer Monzo is a UK digital bank with 15 million customers and growing. ...

Read more

The post Monzo boosts performance metrics by up to 35% with a simple R8 update appeared first on InShot Pro.

]]>

Posted by Ben Weiss, Senior Developer Relations Engineer

Monzo is a UK digital bank with 15 million customers and growing. As the app scaled, the engineering team identified app startup time as a critical area for improvement but worried it would require significant changes to their codebase.

By fully enabling R8 optimizations, Monzo achieved a massive 35% reduction in their Application Not Responding (ANR) rate. This simple change proved that impactful optimizations don’t always require complex engineering efforts.

Unlocking broad performance wins with R8 full mode

Monzo identified R8 full mode as an easy fix worth trying, and it worked, improving performance across the board:

  • Startup Reliability: Cold starts improved by 30%, Warm starts by 24%, and Hot starts by 14%.
  • Launch Speed: P50 launch times improved by 11% and P90 launch times by 12%.
  • Efficiency: Overall app size was reduced by 9%.
  • Stability: ANR reduction of 35%.
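For readers less familiar with the metric names: P50 and P90 are percentiles of the launch-time distribution. A minimal sketch of the idea in plain Kotlin (nearest-rank method; the sample values are made up):

```kotlin
import kotlin.math.ceil

// Minimal percentile sketch (nearest-rank method) to make the P50/P90
// launch-time figures concrete; the sample values below are made up.
fun percentile(samples: List<Double>, p: Double): Double {
    require(samples.isNotEmpty() && p in 0.0..100.0)
    val sorted = samples.sorted()
    val rank = ceil(p / 100.0 * sorted.size).toInt().coerceIn(1, sorted.size)
    return sorted[rank - 1]
}

fun main() {
    val launchMs = listOf(180.0, 200.0, 210.0, 250.0, 300.0, 320.0, 400.0, 450.0, 600.0, 900.0)
    println(percentile(launchMs, 50.0)) // P50: the typical launch
    println(percentile(launchMs, 90.0)) // P90: the slow-tail launch
}
```

An 11% improvement at P50 speeds up the typical launch, while a P90 improvement shrinks the slow tail that users notice most.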

Enabling optimizations with a single change

Many Android apps use an outdated default configuration file which disables most functionality of the R8 optimizer. The main change Monzo made to unlock these performance improvements was to replace the proguard-android.txt default file with proguard-android-optimize.txt. This change removes the -dontoptimize instruction and allows R8 to properly do its job.

buildTypes {
  release {
    // Shrink, optimize and obfuscate the release build with R8
    isMinifyEnabled = true
    // Strip resources left unused after code shrinking
    isShrinkResources = true
    proguardFiles(
      // The -optimize variant of the default rules enables R8 optimizations
      getDefaultProguardFile("proguard-android-optimize.txt"),
    )
  }
}

After making this change, it’s worth looking at your Keep configuration files. These files tell R8 which parts of your code to leave alone (usually because they’re called dynamically or by external libraries). Tidying up unnecessary Keep rules means R8 can do more.
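As a hypothetical illustration of a well-documented Keep rule (the class and scenario are invented), an entry in proguard-rules.pro might look like:

```
# Kept because our JSON library instantiates this DTO reflectively,
# so R8 must not rename or remove its fields.
# Safe to remove if we stop deserializing UserDto with that library.
-keep class com.example.model.UserDto { <fields>; }
```

Recording the reason next to the rule makes it clear when the rule can be deleted, letting R8 optimize more of the codebase over time.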

Improving scroll performance with Baseline Profiles

To further enhance the user experience, Monzo implemented Baseline Profiles, specifically targeting scroll and rendering performance on their main feed. This strategy ensured that the most common user journeys—opening the app and scrolling the feed—were fully optimized. The impact on rendering was substantial: P90 scroll performance became 71% faster, and P95 scroll performance improved by 87%. Now scrolling the app is smoother than before.

Monzo built this into their release process to maintain these improvements over time. “We trigger the baseline profile generation every weekday (before running our nightly builds) and commit the latest changes once completed,” Neumayer explains.
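As a sketch of how such automation can be wired up (using the androidx.baselineprofile Gradle plugin; the module name and version below are illustrative, not Monzo’s actual setup):

```kotlin
// App-module build.gradle.kts sketch: the androidx.baselineprofile plugin
// adds a profile-generation task that CI can run on a schedule.
// The ":baselineprofile" test module name and version are illustrative.
plugins {
    id("com.android.application")
    id("androidx.baselineprofile")
}

dependencies {
    // Installs generated profiles on devices that lack cloud profiles
    implementation("androidx.profileinstaller:profileinstaller:1.4.1")
    // Wires the profile-generator test module into this app's build
    baselineProfile(project(":baselineprofile"))
}
```

A scheduled CI job can then run the plugin’s generation task and commit the refreshed profile, mirroring the nightly workflow described above.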

Keeping up with modern Android development

Monzo’s experience shows what’s possible when you stay up to date with Android build-tooling recommendations. While legacy apps often struggle with complex reflection usage, Monzo found the transition straightforward by documenting their Keep rules properly. “We always add a comment explaining why Keep Rules are in place, so we know when it’s safe to remove the rules,” Neumayer notes.

Neumayer’s advice for other teams? Regularly check your practices against current standards: “Take a look at the latest recommendations from Google around app performance and check if you’re following all the latest advice.”

To get started and learn more about R8, visit https://d.android.com/r8

The post Monzo boosts performance metrics by up to 35% with a simple R8 update appeared first on InShot Pro.

]]>
Redefining Location Privacy: New Tools and Improvements for Android 17 https://theinshotproapk.com/redefining-location-privacy-new-tools-and-improvements-for-android-17/ Thu, 26 Mar 2026 23:00:00 +0000 https://theinshotproapk.com/redefining-location-privacy-new-tools-and-improvements-for-android-17/ Posted by Robert Clifford, Developer Relations Engineer and Manjeet Rulhania, Software Engineer A pillar of the Android ecosystem is our ...

Read more

The post Redefining Location Privacy: New Tools and Improvements for Android 17 appeared first on InShot Pro.

]]>

Posted by Robert Clifford, Developer Relations Engineer and Manjeet Rulhania, Software Engineer

A pillar of the Android ecosystem is our shared commitment to user trust. As the mobile landscape has evolved, so has our approach to protecting sensitive information. In Android 17, we’re introducing a suite of new location privacy features designed to give users more control and provide developers with elegant solutions for data minimization and product safety. Our strategy focuses on introducing new tools that balance high-quality experiences with robust privacy protections, and on improving transparency to help users manage their data.

Introducing the location button: simplified access for one time use

For many common tasks, like finding a nearby shop or tagging a social post, your app doesn’t need permanent or background access to a user’s precise location. With Android 17, we are introducing the location button, a new UI element designed to provide a well-lit path for responsible one-time precise location access. Industry partners have requested this feature as a way to bring a simpler, more private location flow to their users.


Users get better privacy protection

Moving the decision-making for location sharing to the point where a user takes action helps the user make a clearer choice about how much information they want to share and for how long. This empowers users to limit data sharing to only what apps need in that session. Once consent is provided, this session-based access eliminates repeated prompts for location-dependent features. This benefits developers by creating a smoother experience for their users and providing high confidence in user intent, as access is explicitly requested at the moment of action.

Full UI customization to match your app’s aesthetic

The location button provides extensive customization options to ensure it integrates with your app’s aesthetic while maintaining system-wide recognizability. You can modify the button’s visual style, including:

  • Background and icon color scheme
  • Outline style
  • Size and shape

Additionally, you can select the appropriate text label from a predefined list of options. To ensure security and trust, the location icon itself remains mandatory and non-customizable, and the font size is system-managed to respect user accessibility settings.

Simplified Integration with Jetpack and automatic backwards compatibility

The location button will be provided as a Jetpack library, ensuring easy integration into your existing app layouts, similar to any other Jetpack view implementation, and simplifying how you request permission to access precise location. When you implement the location button with the Jetpack library, it automatically handles backwards compatibility by falling back to the existing location prompt when a user taps it on a device running Android 16 or below.

The Android location button is available for testing as of Android 17 Beta 3. 

Location access transparency

Users often struggle to understand the tools they can use to monitor and control access to their location data. In Android 17, we are aligning location permission transparency with the high standards already set for the Microphone and Camera.



  • Updated Location Indicator: A persistent indicator will now appear to inform a user whenever a non-system app accesses their location.
  • Attribution & Control: Users can tap the indicator to see exactly which apps have recently accessed their location and manage those permissions immediately through a “Recent app use” dialog.

Strengthening user privacy with density-based Coarse Location

Android 17 also improves the algorithm for approximate (coarse) location to be aware of population density. Previously, coarse locations used a static 2 km-wide grid, which in low-population areas may not be sufficiently private, since a 2 km square can contain only a handful of users. The new approach replaces this fixed grid with a dynamically sized area based on local population density. By enlarging the grid cells in areas with lower population density, Android ensures a more consistent privacy guarantee across environments, from dense urban centers to remote regions.
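The idea can be illustrated with a simplified sketch in plain Kotlin (this is not Android’s actual algorithm; the density thresholds and cell sizes are invented for illustration): a coordinate is snapped to the centre of a grid cell, and sparser regions get larger cells so each cell still covers many users.

```kotlin
import kotlin.math.floor

// Simplified illustration (not Android's real algorithm): coarsen a
// coordinate by snapping it to the centre of a grid cell, using larger
// cells where population density is lower. Thresholds are invented.
fun cellSizeDeg(peoplePerKm2: Double): Double = when {
    peoplePerKm2 >= 1000 -> 0.02 // ~2 km cells in dense urban areas
    peoplePerKm2 >= 10   -> 0.05 // ~5 km cells in suburban areas
    else                 -> 0.20 // ~20 km cells in remote regions
}

fun coarsen(coordDeg: Double, cellDeg: Double): Double =
    (floor(coordDeg / cellDeg) + 0.5) * cellDeg

fun main() {
    val lat = 51.5074
    println(coarsen(lat, cellSizeDeg(5000.0))) // urban: small cell, finer result
    println(coarsen(lat, cellSizeDeg(1.0)))    // remote: much larger cell
}
```

Because every user inside a cell reports the same centre point, growing the cell in sparse areas keeps the anonymity set comparable to that of a dense city block.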

Improved runtime permission dialog

The runtime permission dialog for location is one of the more complex flows for users to navigate: users must decide on the granularity and duration of permission access they are willing to grant to each app. To help users make the most informed privacy decisions with less friction, we’ve redesigned the dialog to make the “Precise” and “Approximate” choices more visually distinct, encouraging users to select the level of access that best suits their needs.
 

Start building for Android 17

The new location privacy tools are available now in Beta 3. We’re looking for your feedback to help refine these features before the general release.

Build a smoother, more private experience today.

The post Redefining Location Privacy: New Tools and Improvements for Android 17 appeared first on InShot Pro.

]]>