Accelerating your insights with faster, smarter monetization data and recommendations


Posted by Phalene Gowling, Product Manager, Google Play

To build a thriving business on Google Play, you need more than just data – you need a clear path to action. Today, we’re announcing a suite of upgrades to the Google Play Console and beyond, giving you greater visibility into your financial performance and specific, data-backed steps to improve it.

From new, actionable recommendations to more granular sales reporting, here’s how we’re helping you maximize your ROI.

New: Monetization insights and recommendations
Launch Status: Rolling out today

The Monetize with Play overview page is designed to be your ultimate command center. Today, we’re upgrading it with a new dynamic insights section that gives you a clearer view of your revenue drivers.


This new insights carousel highlights the visible and invisible value Google Play delivers to your bottom line – including recovered revenue. Alongside these insights, you can now track the following critical signals next to your core performance metrics:

  • Optimize conversion: Track your new Cart Conversion Rate.
  • Reduce churn: Track cancelled subscriptions over time.

  • Optimize pricing: Monitor your Average Revenue Per Paying User (ARPPU).

  • Increase buyer reach: Analyze how much of your engaged audience converts to buyers.

But we aren’t just showing you the data – we’re helping you act on it. Starting today, Play Console will surface customized, actionable recommendations. If there are relevant opportunities – for example, a high churn rate – we will suggest specific, high-impact steps to help you reach your next monetization goal. Recommendations include effort levels and estimated ROI (where available), helping you prioritize your roadmap based on actual business value. Learn more.



Granular visibility: Sales Channel reporting
Launch Status: Recently launched

We recently rolled out new Sales Channel data in your financial reporting. This allows you to attribute revenue to specific surfaces – including your app, the Play Store, and platforms like Google Play Games on PC. 

For native-PC game developers and media & entertainment subscription businesses alike, this granularity allows you to calculate the precise ROI of your cross-platform investments and understand exactly which channels are driving your growth. Learn more.



Operational efficiency: The Orders API
Launch Status: Available now

The Orders API provides programmatic access to one-time and recurring order transaction details. If you haven’t integrated it yet, this API allows you to ingest real-time data directly into your internal dashboards for faster reconciliation and improved customer support.
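
If you’re evaluating the integration, the sketch below shows the general shape of an ingestion call in Kotlin. The endpoint path, query shape, and the OrdersPayload type are illustrative placeholders rather than the actual API surface – see the official Orders API reference for the real request format and authentication details.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical payload wrapper for an internal reconciliation dashboard.
data class OrdersPayload(val rawJson: String)

fun fetchOrdersPayload(packageName: String, accessToken: String): OrdersPayload {
    // Placeholder endpoint – consult the Orders API reference for the real path and parameters.
    val endpoint =
        "https://androidpublisher.googleapis.com/androidpublisher/v3/applications/$packageName/orders"
    val request = HttpRequest.newBuilder()
        .uri(URI.create(endpoint))
        .header("Authorization", "Bearer $accessToken") // OAuth 2.0 token for your service account
        .GET()
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    // Feed the raw payload into your own dashboards or customer-support tooling.
    return OrdersPayload(response.body())
}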

Feedback so far has been overwhelmingly positive:

Level Infinite (Tencent) says the API “works so well that we want every app to use it.”

Continuous improvements towards objective-led reporting 

You’ve told us that the biggest challenge isn’t just accessing data, but connecting the dots across different metrics to see the full picture. We’re enhancing our reporting to go beyond data dumps and provide straightforward, actionable insights that help you reach business objectives faster.

Our goal is to create a more cohesive product experience centered around your objectives. By shifting from static reporting to dynamic, goal-oriented tools, we’re making it easier to track and optimize for revenue, conversion rates, and churn. These updates are just the beginning of a transformation designed to help you turn data into measurable growth.









How Automated Prompt Optimization Unlocks Quality Gains for ML Kit’s GenAI Prompt API


Posted by Chetan Tekur, PM at AI Innovation and Research, Chao Zhao, SWE at AI Innovation and Research, Paul Zhou, Prompt Quality Lead at GCP Cloud AI and Industry Solutions, and Caren Chang, Developer Relations Engineer at Android


To further help bring your ML Kit Prompt API use cases to production, we are excited to announce Automated Prompt Optimization (APO) targeting On-Device models on Vertex AI. Automated Prompt Optimization is a tool that helps you automatically find the optimal prompt for your use cases.

The era of On-Device AI is no longer a promise—it is a production reality. With the release of Gemini Nano v3, we are placing unprecedented language understanding and multimodal capabilities directly into the palms of users. Through the Gemini Nano family of models, we have wide coverage of supported devices across the Android Ecosystem. But for developers building the next generation of intelligent apps, access to a powerful model is only step one. The real challenge lies in customization: How do you tailor a foundation model to expert-level performance for your specific use case without breaking the constraints of mobile hardware?

In the server-side world, larger LLMs tend to be highly capable and require less domain adaptation. Even when adaptation is needed, more advanced options such as LoRA (Low-Rank Adaptation) fine-tuning are feasible. However, the unique architecture of Android AICore prioritizes a shared, memory-efficient system model. This means that deploying custom LoRA adapters for every individual app comes with challenges on these shared system services.

But there is an alternate path that can be equally impactful. By leveraging Automated Prompt Optimization (APO) on Vertex AI, developers can achieve quality approaching fine-tuning, all while working seamlessly within the native Android execution environment. By focusing on superior system instruction, APO enables developers to tailor model behavior with greater robustness and scalability than traditional fine-tuning solutions.

Note: Gemini Nano V3 is a quality-optimized version of the highly acclaimed Gemma 3N model. Any prompt optimizations made on the open-source Gemma 3N model will apply to Gemini Nano V3 as well. On supported devices, ML Kit GenAI APIs leverage the nano-v3 model to maximize quality for Android developers.


Automated Prompt Optimization (APO)


APO treats the prompt not as static text, but as a programmable surface that can be optimized. It leverages server-side models (like Gemini Pro and Flash) to propose prompts, evaluate variations, and find the optimal one for your specific task. This process employs three specific technical mechanisms to maximize performance (a conceptual sketch follows the list):

  1. Automated Error Analysis: APO analyzes error patterns from training data to automatically identify specific weaknesses in the initial prompt.

  2. Semantic Instruction Distillation: It analyzes large sets of training examples to distill the “true intent” of a task, creating instructions that more accurately reflect the real data distribution.

  3. Parallel Candidate Testing: Instead of testing one idea at a time, APO generates and tests numerous prompt candidates in parallel to identify the global maximum for quality.
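
To make the loop concrete, here is a purely conceptual Kotlin sketch of how those three mechanisms could fit together. The Example type and the analyzeErrors, proposeCandidates, and evaluate helpers are hypothetical stand-ins for the server-side models APO runs on Vertex AI; none of them are real SDK calls.

// Hypothetical training example: an input and the output we expect for it.
data class Example(val input: String, val expected: String)

// Placeholder hooks for the server-side models (e.g. Gemini Pro/Flash) that APO
// uses internally; they are not part of any public SDK.
fun analyzeErrors(prompt: String, failures: List<Example>): String = TODO()
fun proposeCandidates(prompt: String, errorSummary: String, count: Int): List<String> = TODO()
fun evaluate(prompt: String, data: List<Example>): Double = TODO()

fun optimizePrompt(initialPrompt: String, trainingData: List<Example>, rounds: Int = 5): String {
    var best = initialPrompt
    var bestScore = evaluate(best, trainingData)
    repeat(rounds) {
        // 1. Automated error analysis: find where the current prompt fails.
        val failures = trainingData.filter { evaluate(best, listOf(it)) == 0.0 }
        val errorSummary = analyzeErrors(best, failures)
        // 2-3. Distill intent into new instructions and score many candidates
        // (scored sequentially here; APO evaluates candidates in parallel).
        val candidates = proposeCandidates(best, errorSummary, count = 16)
        val scored = candidates.map { it to evaluate(it, trainingData) }
        val (winner, score) = scored.maxByOrNull { it.second } ?: return@repeat
        if (score > bestScore) {
            best = winner
            bestScore = score
        }
    }
    return best
}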


Why APO Can Approach Fine Tuning Quality

It is a common misconception that fine-tuning always yields better quality than prompting. For modern foundation models like Gemini Nano v3, prompt engineering can be impactful by itself:

  • Preserving general capabilities: Fine-tuning (PEFT/LoRA) forces a model’s weights to over-index on a specific distribution of data. This often leads to “catastrophic forgetting,” where the model gets better at your specific syntax but worse at general logic and safety. APO leaves the weights untouched, preserving the capabilities of the base model.

  • Instruction Following & Strategy Discovery: Gemini Nano v3 has been rigorously trained to follow complex system instructions. APO exploits this by finding the exact instruction structure that unlocks the model’s latent capabilities, often discovering strategies that might be hard for human engineers to find. 

To validate this approach, we evaluated APO across diverse production workloads. Our validation has shown consistent 5-8% accuracy gains across various use cases. Across multiple deployed on-device features, APO provided significant quality lifts:

Use Case              | Task Type           | Task Description                                                   | Metric   | APO Improvement
Topic classification  | Text classification | Classify a news article into topics such as finance, sports, etc. | Accuracy | +5%
Intent classification | Text classification | Classify a customer service query into intents                     | Accuracy | +8.0%
Webpage translation   | Text translation    | Translate a webpage from English to a local language               | BLEU     | +8.57%

A Seamless, End-to-End Developer Workflow


Conclusion

The release of Automated Prompt Optimization (APO) marks a turning point for on-device generative AI. By bridging the gap between foundation models and expert-level performance, we are giving developers the tools to build more robust mobile applications. Whether you are just starting with Zero-Shot Optimization or scaling to production with Data-Driven refinement, the path to high-quality on-device intelligence is now clearer. Launch your on-device use cases to production today with ML Kit’s Prompt API and Vertex AI’s Automated Prompt Optimization. 

















Ready to review some changes but not others? Try using Play Console’s new Save for later feature



Posted by Georgia Doyle, Senior UX Writer and Content Designer, and Kanu Tibrewal, Software Engineer



We’ve launched a new Save for later feature on Google Play Console’s Publishing overview to give you more control over when you send changes for review. 


In the past, changes to your app were bundled together before being sent for review. This presented challenges if you needed to reprioritize changes, or if some changes were no longer relevant – for example, when updates to your test tracks were grouped with marketing changes that needed to be rescheduled. This lack of flexibility meant that if some changes were ready for review but not others, you could end up delaying urgent fixes, or publishing changes that you weren’t quite ready to make.

Now, you have the ability to hold back the changes you’re not ready to have reviewed.

How it works

In the ‘Changes not yet sent for review’ section of the Publishing overview page, select ‘Save for later’ on the groups of changes that you don’t want to include in your next review. You can view and edit the list of saved changes, and return them to the Publishing overview if you change your mind. Once the review has started, your saved changes will be added back to ‘Changes not yet sent for review’.

Integration with our pre-review checks


Save for later also works with our pre-review checks. Pre-review checks look for issues in your changes that may prevent your app from being published, so that you can fix them before you send changes for review. If checks find issues with your app, there are two ways you can proceed:

  • If issues are isolated to an individual track, we’ll show you an error beside that change, so you know what to save for later in order to proceed to review with your other changes.
  • If you have issues that affect your whole app, for example, App content issues, Save for later will be unavailable and you will need to fix them before you can send any changes for review.

Greater flexibility in your workflows

Our goal for Save for later is to give you greater flexibility over your release schedule. With this feature you can manage what changes you send for review, and address issues affecting individual tracks without holding up ready-to-release changes, so you can iterate faster and minimize the impact of rejections on your release timeline.

So, what’s next?

We’re committed to continuously improving your publishing experience. Save for later is a significant step towards providing you with more granular control over this all-important stage in the journey to publishing your app. We’ll continue to gather your feedback and look at ways we can provide greater flexibility to the review and publishing process.

We’re excited to see how Save for later helps you to streamline your release process and bring your app innovations to users even faster.

LLM flexibility, Agent Mode improvements, and new agentic experiences in Android Studio Otter 3 Feature Drop


Posted by Sandhya Mohan, Senior Product Manager and Trevor Johns, Developer Relations Engineer


We are excited to announce that Android Studio Otter 3 Feature Drop is now stable! This feature-packed release brings a huge update to your agentic workflows in Android Studio, and offers you more flexibility and control for how you use AI to help you build Android apps. 

  • Bring Your Own Model: You can now use any LLM to power the AI functionality in Android Studio.
  • Agent Mode Enhancements: You can now more easily have Agent Mode interact with your app on devices, review and accept suggested changes, and have multiple conversation threads.
  • Run user journey tests using natural language: with Journeys in Android Studio.
  • Enable Agent Mode to connect to more tools: including the ability to connect to remote servers via MCP.
  • Build, iterate and test your UI: with UI agentic experiences in Android Studio. 
  • Build deep links using natural language: with the new app links assistant. 
  • Debug R8 optimized code: with Automatic Logcat retracing.
  • Simplify Android library modules: with the Fused library plugin.


Here’s a deep dive into what’s new:

Bring Your Own Model (BYOM)

Every developer has a unique workflow when using AI, and different companies have different policies on AI model usage. With this release, Android Studio now brings you more flexibility by allowing you to choose the LLM that powers the AI functionality in Android Studio, giving you more control over performance, privacy, and cost.

Use a remote model

You can now integrate remote models—such as OpenAI’s GPT, Anthropic’s Claude, or a similar model—directly into Android Studio. This allows you to leverage your preferred model provider without changing your IDE. To get started, configure a remote model provider in Settings by adding your API endpoint and key. Once configured, you can select your custom model directly from the picker in the AI chat window.

Enter the remote model provider information.


Use a local model

If you have limited internet connectivity, strict data privacy requirements, or a desire to experiment with open-source research, Android Studio now supports local models via providers like LM Studio or Ollama. While Gemini in Android Studio remains the default recommendation—tuned specifically for Android development with full context awareness—if you have a specific model preference, Android Studio supports it.
Model picker in Android Studio.

A local model offers an alternative to the LLM support built into Android Studio, and typically requires significant local system RAM and hard drive space to run well. However, Gemini in Android Studio provides the best Android development experience because Gemini is tuned for Android and supports all features of Android Studio. With Gemini, you can choose from a variety of models for your Android development tasks, including the no-cost default model or models accessed with a paid Gemini API key.

Use your Gemini API key


While Android Studio includes access to a default Gemini model with generous quotas at no cost, some developers need more. By adding your Gemini API key, Android Studio can directly access all the latest Gemini models available from Google.


For example, this allows you to use the most recent Gemini 3 Pro and Gemini 3 Flash models (among others) with expanded context windows and quota. This is especially useful for developers who are using Agent Mode for extended coding sessions, where this additional processing power can provide higher fidelity responses.


You can also read more about how we’re rolling out Gemini 3 to all Android Studio users, including Gemini Code Assist subscribers and developers accessing the default Gemini in Android Studio model at no-cost.

Agent Mode enhancements

Agent Mode is the semi-autonomous AI assistant in Android Studio that aids in your software development and is used by many developers, including the Ultrahuman team. Get more out of Agent Mode with these new updates.

Run your app and interact with it on devices

Agent Mode can now deploy an application to the connected device, inspect what is currently shown on the screen, take screenshots, check Logcat for errors, and interact with the running application. This lets the agent help you with changes or fixes that involve re-running the application, checking for errors, and verifying that a particular update was made successfully (for example, by taking and reviewing screenshots).


Agent mode uses device actions to deploy and verify changes.

Find and review changes using the changes drawer

You can now see and manage all changes made by the AI agent using the changes drawer. When the agent makes changes to your codebase, you can see the files that were edited in Files to review. From there, you can keep or revert the changes individually or all together. Click an individual file in the drawer to see the code diff in the editor and make refinements if needed. With the changes drawer, you can keep track of edits made by the agent during your chat and revisit specific changes without scrolling back through your conversation history.


See all the files that the agent has proposed edits to in the changes drawer.

Note: If the Don’t ask to edit files setting is disabled in Agent Options, Agent Mode will request permission for every individual change. Each change must be accepted before it appears in the changes drawer. To allow multiple file edits to appear in the drawer simultaneously, enable the Don’t ask to edit files option.


Accept a change to add it to the changes drawer.

Manage multiple conversation threads


You can now organize your conversations with Gemini in Android Studio into multiple threads. This lets you create a new chat or agent thread when you need to start with a clean slate, and you can go back to older conversations in the history tab. Using separate threads for each distinct task can improve response quality by limiting the scope of the AI’s context to only the topic at hand.



To start a new thread, click New Conversation (the plus icon). To see your conversation history, click Recent Chats (the speech bubble icon).


See prior conversations in the “Recent Chats” tab.



Your conversation history is saved to your account, so if you have to sign out or switch accounts you can resume right where you left off when you come back.

Journeys for Android Studio

Running end-to-end UI tests can improve confidence that you’re shipping a high-quality app to production, but writing and maintaining those tests can be difficult, brittle, and limited in what you’re able to test. Journeys for Android Studio leverages the reasoning and vision capabilities of Gemini to enable you to write and maintain end-to-end UI tests using natural language instructions—and it’s now available in the latest stable release of Android Studio when you enable it from Studio Labs in your Android Studio Settings.

Journeys for Android Studio.

These natural language instructions are converted into interactions that Gemini performs directly on your app. This not only makes your tests easier to write and understand, but also enables you to define complex assertions that Gemini evaluates based on what it “sees” on the device screen. Because Gemini reasons about how to achieve your goals, these tests are more resilient to subtle changes in your app’s layout, significantly reducing flaky tests when running against different app versions or device configurations.

Journeys for Android Studio.


You can write and run journeys directly from Android Studio against any local or remote device. The IDE provides a new editor experience for crafting your test steps in an XML file, using either a code view or a dedicated design view. When you run a journey, Android Studio provides rich, detailed results that help you follow Gemini’s execution. The test panel breaks down the entire journey into its discrete steps, showing you screenshots for each action, what action was taken, and Gemini’s reasoning for why it took that action, making debugging and validation clearer than ever. And because journeys are run as Gradle tasks, you can run them from the command line after you authenticate with a Google Cloud Project.

Support for remote MCP servers

Android Studio now lets you connect directly to remote Model Context Protocol (MCP) servers such as Figma, Notion, Canva, Linear, and more. This significantly reduces your context switching since it enables the AI agent in Android Studio to leverage external tools, helping you stay in your flow. For example, you can connect to Figma’s remote MCP server to access files and provide this information to Agent Mode, generating more accurate code from your designs. To learn more about how to add an MCP server, see Add an MCP server.


Connect to the Figma remote MCP server in Android Studio Settings.


Quickly add a screen to your app using the Figma remote MCP server.

Supercharge your UI development with Agent Mode

Gemini in Android Studio is now integrated into the UI development workflow directly from within the Compose Preview panel, helping you go from design to a high-quality implementation faster. These new agentic capabilities are designed to assist you at every stage of development, from initial code generation to iteration, refinement, and debugging, with entry points in the context of your work.

Create new UI from a design mock


Accelerate your initial UI implementation by generating Compose code directly from a design mock. Simply click Generate Code From Screenshot in an empty Preview panel, and Gemini will use the image to generate a starting implementation, saving you from writing boilerplate from scratch.

Generate code from a screenshot in an empty Preview panel.


Example turning design into Compose code.

Match your UI with a target image


Once you have an initial implementation, you can iteratively refine it to be pixel-perfect. Right-click your Compose Preview and select AI Actions > Match UI to Target Image. Upload a reference design, and the agent will suggest code changes to make your UI match the design as closely as possible.


Example of using “Match UI to Target Image”


Iterate on your UI with natural language

For more specific or creative changes, right-click on your preview and use AI Actions > Change UI. This capability now leverages Agent Mode to validate the results, making it more powerful and accurate. You can use natural language prompts like “change the button color to blue” or “add padding around this text,” and Gemini will apply the code modifications instantly.

Example of using “Change UI”


Find and fix UI quality issues


Verifying your UI is high-quality and more accessible is a critical final step. The AI Actions > Fix all UI check tool audits your UI for common problems, such as accessibility issues. The agent will then propose and apply fixes to resolve the detected issues.

Entry point to trigger “Fix all UI check issues”


You can also find the same functionality by using the Fix with AI button in Compose UI check mode:

“Fix with AI” in UI Check mode



The features mentioned above are also accessible by the toolbar icon in the Preview panel:

Second entry point to UI development AI features

Beyond iterating on your UI, Gemini also helps streamline your development environment.

To accelerate your setup, you can:

  • Generate Compose Previews: This feature is now enhanced by Agent Mode to provide more accurate results. When working in a file that has Composable functions but no @Preview annotations, you can right-click on the Composable and select Gemini > Generate [Composable name] Preview. The agent will now better analyze your Composable to generate the necessary boilerplate with correct parameters, to help verify that a successfully rendered preview is added.

Entry point to generate Compose Preview



  • Fix Preview rendering errors: When a Compose Preview fails to render, Gemini can now analyze the error message and your code to find the root cause and apply a fix.

Using “Fix with AI” on Preview render error

App Links Assistant

The App Links Assistant now integrates with Agent Mode to automate the creation of deep link logic, simplifying one of the most time-consuming steps of implementation. Instead of manually writing code to parse incoming intents and navigate users to the correct screen, you can now let Gemini generate the necessary code and tests. Gemini presents a diff view of the suggested code changes for your review and approval, streamlining the process of handling deep links and ensuring users are seamlessly directed to the right content in your app.


To get started, open the App Links Assistant through the tools menu, then choose Create Applink. In the second step, Add logic to handle the intent, select Generate code with AI assistance. If a sample URL is available, enter it, and then click Insert Code.


App Links Assistant

Automatic Logcat Retracing

Debugging R8-optimized code just became seamless. Previously, when R8 was enabled (minifyEnabled = true in your build.gradle.kts file), it would obfuscate stack traces, changing class names, methods, and line numbers. To find the source of a crash, developers had to manually use the R8 retrace command line tool.
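
For reference, R8 is typically switched on per build type in the Kotlin DSL build script, roughly as below. Note that the Kotlin DSL property is isMinifyEnabled, and the ProGuard file names shown are the usual defaults, which may differ in your project.

// build.gradle.kts (app module): enabling R8 for release builds
android {
    buildTypes {
        release {
            // R8 shrinking and obfuscation – this is what produces the obfuscated
            // stack traces that Logcat can now retrace automatically.
            isMinifyEnabled = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}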

Starting with Android Studio Otter 3 Feature Drop and AGP versions 8.12 and above, this extra step is no longer necessary. Logcat now automatically detects and retraces R8-processed stack traces, so you can see the original, human-readable stack trace directly in the IDE. This provides a much-improved debugging experience with no extra work required.


Logcat now automatically detects and retraces R8-processed stack traces


Fused Library Plugin: Publish multiple Android libraries as one


The new Fused Library plugin bundled with Android Gradle Plugin 9.0 allows you to package multiple Android library modules into a single, publishable Android Library (AAR). This was one of the most requested features for Android Gradle Plugin, and we are making it available for you today. This plugin enables you to modularize your code and resources internally while simplifying the integration process for your users by exposing only a single dependency. In addition to streamlining project setup and version management, distributing a fused library can help reduce library size through improved code shrinking and offer better control over your internal implementation details. To learn more about the Fused Library plugin see Publish multiple Android libraries as one with Fused Library.



Get started

Ready to dive in and accelerate your development? Download Android Studio Otter 3 Feature Drop and start exploring these powerful new features today! 


As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let’s build the future of Android apps together!



Ultrahuman launches features 15% faster with Gemini in Android Studio


Posted by Amrit Sanjeev, Developer Relations Engineer and Trevor Johns, Developer Relations Engineer




Ultrahuman is a consumer health-tech startup that provides daily well-being insights to users based on biometric data from the company’s wearables, like the RING Air and the M1 Live Continuous Glucose Monitor (CGM). The Ultrahuman team leaned on Gemini in Android Studio’s contextually aware tools to streamline and accelerate their development process.

Ultrahuman’s app is maintained by a lean team of just eight developers. They prioritize building features that their users love, but they also carry a backlog of bugs and performance improvements that take significant time to address. The team needed to scale up their output of feature improvements and handle performance work without increasing headcount. One of their biggest opportunities was reducing the time and effort spent on that backlog: every hour saved on maintenance could be reinvested into working on features for their users.



Solving technical hurdles and boosting performance with Gemini

The team integrated Gemini in Android Studio to see if the AI-enhanced tools could improve their workflow by handling many Android tasks. First, the team turned to the Gemini chat inside Android Studio. The goal was to prototype a GATT Server implementation for their application’s Bluetooth Low Energy (BLE) connectivity.

As Ultrahuman’s Android Development Lead, Arka, noted, “Gemini helped us reach a working prototype in under an hour—something that would have otherwise taken us several hours.” The BLE implementation provided by Gemini worked perfectly for syncing large amounts of health sensor data while the app ran in the background, improving the data syncing process and saving battery life on both the user’s Android phone and Ultrahuman’s paired wearable device.
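
For context, a minimal GATT server on Android is built around the framework’s BluetoothGattServer APIs, roughly as sketched below. The UUIDs are made-up placeholders and this is not Ultrahuman’s implementation; a production server would also handle connection callbacks, characteristic notifications, and the BLUETOOTH_CONNECT runtime permission.

import android.bluetooth.BluetoothGattCharacteristic
import android.bluetooth.BluetoothGattServer
import android.bluetooth.BluetoothGattServerCallback
import android.bluetooth.BluetoothGattService
import android.bluetooth.BluetoothManager
import android.content.Context
import java.util.UUID

// Placeholder UUIDs for illustration only.
private val SENSOR_SERVICE_UUID = UUID.fromString("0000aaaa-0000-1000-8000-00805f9b34fb")
private val SENSOR_DATA_UUID = UUID.fromString("0000bbbb-0000-1000-8000-00805f9b34fb")

// Requires the BLUETOOTH_CONNECT permission on Android 12+.
fun openSensorGattServer(context: Context): BluetoothGattServer? {
    val bluetoothManager = context.getSystemService(BluetoothManager::class.java)
    val callback = object : BluetoothGattServerCallback() {
        // Override onConnectionStateChange, onCharacteristicReadRequest, etc. as needed.
    }
    val gattServer = bluetoothManager.openGattServer(context, callback) ?: return null

    val service = BluetoothGattService(SENSOR_SERVICE_UUID, BluetoothGattService.SERVICE_TYPE_PRIMARY)
    val dataCharacteristic = BluetoothGattCharacteristic(
        SENSOR_DATA_UUID,
        BluetoothGattCharacteristic.PROPERTY_READ or BluetoothGattCharacteristic.PROPERTY_NOTIFY,
        BluetoothGattCharacteristic.PERMISSION_READ
    )
    service.addCharacteristic(dataCharacteristic)
    gattServer.addService(service)
    return gattServer
}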

Beyond this core challenge, Gemini also proved invaluable for finding algorithmic optimizations in a custom open-source library, pointing to helpful documentation, assisting with code commenting, and analyzing crash logs. The Ultrahuman team also used code completion to help them breeze through writing otherwise repetitive code, Jetpack Compose Preview Generation to enable rapid iteration during UI design, and Agent Mode for managing complex, project-wide changes, such as rendering a new stacked bar graph that mapped to backend data models and UI models.

Transforming productivity and accelerating feature delivery 

These improvements have saved the team dozens of hours each week. This reclaimed time is being used to deliver new features to Ultrahuman’s beta users 10-15% faster. For example, the team built a new in-app AI assistant for users, powered by Gemini 2.5 Flash. The UI design, architecture, and parts of the user experience for this new feature were initially suggested by Gemini in Android Studio—showcasing a full-circle AI-assisted development process. 

Accelerate your Android development with Gemini

Gemini’s expert Android advice, closely integrated throughout Android Studio, helps Android developers spend less time digging through documentation and writing boilerplate code—freeing up more time to innovate.

Learn how Gemini in Android Studio can help your team resolve complex issues, streamline workflows, and ship new features faster.



Posted by Kristina Simakova, Engineering Manager



Media3 1.9.0 – What’s new?

Media3 1.9.0 is out! Besides the usual bug fixes and performance improvements, the latest release also contains four new or largely rewritten modules:

  • media3-inspector – Extract metadata and frames outside of playback

  • media3-ui-compose-material3 – Build a basic Material3 Compose Media UI in just a few steps

  • media3-cast – Automatically handle transitions between Cast and local playbacks

  • media3-decoder-av1 – Consistent AV1 playback with the rewritten extension decoder based on the dav1d library

We also added caching and memory management improvements to PreloadManager, and provided several new ExoPlayer, Transformer and MediaSession simplifications. 

This release also gives you the first experimental access to CompositionPlayer to preview media edits.  


Read on to find out more, and as always please check out the full release notes for a comprehensive overview of changes in this release.

Extract metadata and frames outside of playback

There are many cases where you want to inspect media without starting playback. For example, you might want to detect which formats it contains, check its duration, or retrieve thumbnails.

The new media3-inspector module combines all utilities to inspect media without playback in one place:

  • MetadataRetriever to read duration, format and static metadata from a MediaItem.

  • FrameExtractor to get frames or thumbnails from an item. 

  • MediaExtractorCompat as a direct replacement for the Android platform MediaExtractor class, to get detailed information about samples in the file.

MetadataRetriever and FrameExtractor follow a simple AutoCloseable pattern. Have a look at our new guide pages for more details.

suspend fun extractThumbnail(mediaItem: MediaItem) {
  FrameExtractor.Builder(context, mediaItem).build().use { frameExtractor ->
    val thumbnail = frameExtractor.getThumbnail().await()
  }
}
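
MetadataRetriever follows the same AutoCloseable pattern. A sketch might look like the following, although the accessor names used here (retrieveDurationUs in particular) are assumptions based on that pattern – check the inspector guide pages for the exact API surface.

// Sketch only: method names are assumed from the AutoCloseable pattern described above.
suspend fun readDurationMs(context: Context, mediaItem: MediaItem): Long =
  MetadataRetriever.Builder(context, mediaItem).build().use { retriever ->
    retriever.retrieveDurationUs().await() / 1_000
  }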

Build a basic Material3 Compose Media UI in just a few steps

In previous releases we started providing connector code between Compose UI elements and your Player instance. With Media3 1.9.0, we added a new module media3-ui-compose-material3 with fully-styled Material3 buttons and content elements. They allow you to build a media UI in just a few steps, while providing all the flexibility to customize style. If you prefer to build your own UI style, you can use the building blocks that take care of all the update and connection logic, so you only need to concentrate on designing the UI element. Please check out our extended guide pages for the Compose UI modules.


We are also still working on even more Compose components, like a prebuilt seek bar, a complete out-of-the-box replacement for PlayerView, as well as subtitle and ad integration.

@Composable
fun SimplePlayerUI(player: Player, modifier: Modifier = Modifier) {
  Column(modifier) {
    ContentFrame(player)  // Video surface and shutter logic
    Row(Modifier.align(Alignment.CenterHorizontally)) {
      SeekBackButton(player)   // Simple controls
      PlayPauseButton(player)
      SeekForwardButton(player)
    }
  }
}

Simple Compose player UI with out-of-the-box elements

Automatically handle transitions between Cast and local playbacks

The CastPlayer in the media3-cast module has been rewritten to automatically handle transitions between local playback (for example with ExoPlayer) and remote Cast playback.

When you set up your MediaSession, simply build a CastPlayer around your ExoPlayer and add a MediaRouteButton to your UI and you’re done!

// MediaSession setup with CastPlayer 
val exoPlayer = ExoPlayer.Builder(context).build()
val castPlayer = CastPlayer.Builder(context).setLocalPlayer(exoPlayer).build()
val session = MediaSession.Builder(context, castPlayer).build()
// MediaRouteButton in UI 
@Composable fun UIWithMediaRouteButton() {
  MediaRouteButton()
}

New CastPlayer integration in Media3 session demo app

Consistent AV1 playback with the rewritten extension based on dav1d

The 1.9.0 release contains a completely rewritten AV1 extension module based on the popular dav1d library.

As with all extension decoder modules, please note that it requires building from source to bundle the relevant native code correctly. Bundling a decoder provides consistency and format support across all devices, but because it runs the decoding in your process, it’s best suited for content you can trust. 

Integrate caching and memory management into PreloadManager

We made our PreloadManager even better as well. It already enabled you to preload media into memory outside of playback and then seamlessly hand it over to a player when needed. Although preloading is quite performant, you still had to be careful not to exceed memory limits by accidentally preloading too much. So with Media3 1.9.0, we added two features that make this a lot easier and more stable (a configuration sketch follows the list):


  1. Caching support – When defining how far to preload, you can now choose PreloadStatus.specifiedRangeCached(0, 5000) as a target state for preloaded items. This will add the specified range to your cache on disk instead of loading the data to memory. With this, you can provide a much larger range of items for preloading as the ones further away from the current item no longer need to occupy memory. Note that this requires setting a Cache in DefaultPreloadManager.Builder.

  2. Automatic memory management – We also updated our LoadControl interface to better handle the preload case so you are now able to set an explicit upper memory limit for all preloaded items in memory. It’s 144 MB by default, and you can configure the limit in DefaultLoadControl.Builder. The DefaultPreloadManager will automatically stop preloading once the limit is reached, and automatically releases memory of lower priority items if required.
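
As a rough configuration sketch, the cached preload target from point 1 could be wired up like this. The cache directory and size are arbitrary, and the way the target status and cache are handed to DefaultPreloadManager.Builder is described only in comments because the exact builder surface should be taken from the reference docs.

// The disk cache that preloaded ranges can be written to (directory and size are arbitrary).
val preloadCache = SimpleCache(
    File(context.cacheDir, "preload"),
    LeastRecentlyUsedCacheEvictor(256L * 1024 * 1024), // 256 MB disk budget, chosen arbitrarily
    StandaloneDatabaseProvider(context)
)

// Target state from the notes above: cache the first 5 seconds of each item on disk
// instead of holding it in memory. Return this from your TargetPreloadStatusControl and
// set both the control and the cache on DefaultPreloadManager.Builder (see its reference
// for the exact method names).
val cachedTarget = PreloadStatus.specifiedRangeCached(0, 5_000)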

Rely on new simplified default behaviors in ExoPlayer

As always, we added lots of incremental improvements to ExoPlayer as well. To name just a few:

  • Mute and unmute – We already had a setVolume method, but have now added the convenience mute and unmute methods to easily restore the previous volume without keeping track of it yourself.

  • Stuck player detection – In some rare cases the player can get stuck in a buffering or playing state without making any progress, for example, due to codec issues or misconfigurations. Your users will be annoyed, but you never see these issues in your analytics! To make this more obvious, the player now reports a StuckPlayerException when it detects a stuck state.

  • Wakelock by default – The wake lock management was previously opt-in, resulting in hard to find edge cases where playback progress can be delayed a lot when running in the background. Now this feature is opt-out, so you don’t have to worry about it and can also remove all manual wake lock handling around playback.

  • Simplified setting for CC button logic – Changing TrackSelectionParameters to say “turn subtitles on/off” was surprisingly hard to get right, so we added a simple boolean selectTextByDefault option for this use case (sketched below).
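
For example, the convenience calls look roughly like this. mute() and unmute() are named above; the setSelectTextByDefault setter name is an assumption about how the boolean option is exposed on TrackSelectionParameters, so verify it against the release notes.

// Mute and restore the previous volume without tracking it yourself.
player.mute()
player.unmute()

// Turn subtitles on by default. The setter name is assumed – confirm it in the
// TrackSelectionParameters documentation for the selectTextByDefault option.
player.trackSelectionParameters = player.trackSelectionParameters
    .buildUpon()
    .setSelectTextByDefault(true)
    .build()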

Simplify your media button preferences in MediaSession

Until now, defining which buttons should show up in the media notification drawer on Android Auto or Wear OS required creating custom commands and buttons, even if you simply wanted to trigger a standard player method.

Media3 1.9.0 has new functionality to make this a lot simpler – you can now define your media button preferences with a standard player command, requiring no custom command handling at all.

session.setMediaButtonPreferences(listOf(
    CommandButton.Builder(CommandButton.ICON_FAST_FORWARD) // choose an icon
      .setDisplayName(R.string.skip_forward)
      .setPlayerCommand(Player.COMMAND_SEEK_FORWARD) // choose an action 
      .build()
))

Media button preferences with fast forward button

CompositionPlayer for real-time preview

The 1.9.0 release introduces CompositionPlayer under a new @ExperimentalApi annotation. The annotation indicates that it is available for experimentation, but is still under development. 

CompositionPlayer is a new component in the Media3 editing APIs designed for real-time preview of media edits. Built upon the familiar Media3 Player interface, CompositionPlayer allows users to see their changes in action before committing to the export process. It uses the same Composition object that you would pass to Transformer for exporting, streamlining the editing workflow by unifying the data model for preview and export.
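
In practice, previewing a Composition could look something like this sketch. Because CompositionPlayer is experimental, the Builder and setComposition calls shown here are assumptions based on the description above and on CompositionPlayer implementing the standard Player interface.

// Sketch: CompositionPlayer is experimental, so names may change between releases.
fun previewComposition(context: Context, composition: Composition, surfaceView: SurfaceView) {
    val player = CompositionPlayer.Builder(context).build()
    player.setVideoSurfaceView(surfaceView) // standard Player surface hookup
    player.setComposition(composition)      // the same Composition you would pass to Transformer
    player.prepare()
    player.play()
}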

We encourage you to start using CompositionPlayer and share your feedback, and keep an eye out for forthcoming posts and updates to the documentation for more details.

InAppMuxer as a default muxer in Transformer

Transformer now uses InAppMp4Muxer as the default muxer for writing media container files. Internally, InAppMp4Muxer depends on the Media3 Muxer module, providing consistent behavior across all API versions.

Note that while Transformer no longer uses the Android platform’s MediaMuxer by default, you can still provide FrameworkMuxer.Factory via setMuxerFactory if your use case requires it.

New speed adjustment APIs

The 1.9.0 release simplifies speed adjustments APIs for media editing. We’ve introduced new methods directly on EditedMediaItem.Builder to control speed, making the API more intuitive. You can now change the speed of a clip by calling setSpeed(SpeedProvider provider) on the EditedMediaItem.Builder:

val speedProvider = object : SpeedProvider {
    override fun getSpeed(presentationTimeUs: Long): Float {
        return speed // e.g. 2f to play the whole clip at double speed
    }

    override fun getNextSpeedChangeTimeUs(timeUs: Long): Long {
        return C.TIME_UNSET // constant speed: no further speed changes
    }
}

val speedEffectItem = EditedMediaItem.Builder(mediaItem)
    .setSpeed(speedProvider)
    .build()


This new approach replaces the previous method of using Effects#createExperimentalSpeedChangingEffects(), which we’ve deprecated and will remove in a future release.

Introducing track types for EditedMediaItemSequence

In the 1.9.0 release, EditedMediaItemSequence requires specifying desired output track types during sequence creation. This change ensures track handling is more explicit and robust across the entire Composition.

This is done via a new EditedMediaItemSequence.Builder constructor that accepts a set of track types (e.g., C.TRACK_TYPE_AUDIO, C.TRACK_TYPE_VIDEO). 

To simplify creation, we’ve added new static convenience methods:

  • EditedMediaItemSequence.withAudioFrom(List<EditedMediaItem>)

  • EditedMediaItemSequence.withVideoFrom(List<EditedMediaItem>)

  • EditedMediaItemSequence.withAudioAndVideoFrom(List<EditedMediaItem>)

We encourage you to migrate to the new constructor or the convenience methods for clearer and more reliable sequence definitions.

Example of creating a video-only sequence:

val videoOnlySequence =
    EditedMediaItemSequence.Builder(setOf(C.TRACK_TYPE_VIDEO))
        .addItem(editedMediaItem)
        .build()


Please get in touch via the Media3 issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!



Posted by Fahd Imtiaz – Product Manager, Android Developer




Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps


In 2025 the Android ecosystem has grown far beyond the phone. Today, developers have the opportunity to reach over 500 million active devices, including foldables, tablets, XR, Chromebooks, and compatible cars.


These aren’t just additional screens; they represent a higher-value audience. We’ve seen that users who own both a phone and a tablet spend 9x more on apps and in-app purchases than those with just a phone. For foldable users, that average spend jumps to roughly 14x more*.


This engagement signals a necessary shift in development: goodbye mobile apps, hello adaptive apps.



To help you build for that future, we spent this year releasing tools that make adaptive the default way to build. Here are three key updates from 2025 designed to help you build these experiences.


Standardizing adaptive behavior with Android 16


To support this shift, Android 16 introduced significant changes to how apps can restrict orientation and resizability. On displays of at least 600dp, manifest and runtime restrictions are ignored, meaning apps can no longer lock themselves to a specific orientation or size. Instead, they fill the entire display window, ensuring your UI scales seamlessly across portrait and landscape modes. 


Because this means your app context will change more frequently, it’s important to verify that you are preserving UI state during configuration changes. While Android 16 offers a temporary opt-out to help you manage this transition, Android 17 (SDK 37) will make this behavior mandatory. To ensure your app behaves as expected under these new conditions, use the resizable emulator in Android Studio to test your adaptive layouts today.
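
A quick way to audit this in Compose is to make sure transient UI state is held in rememberSaveable rather than plain remember, so it survives the more frequent configuration changes; a minimal example:

@Composable
fun SearchBar() {
    // rememberSaveable survives rotation, resizing, and other configuration changes,
    // unlike a plain remember { mutableStateOf(...) }.
    var query by rememberSaveable { mutableStateOf("") }
    TextField(value = query, onValueChange = { query = it })
}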

Supporting screens beyond the tablet with Jetpack WindowManager 1.5.0

As devices evolve, our existing definitions of “large” need to evolve with them. In October, we released Jetpack WindowManager 1.5.0 to better support the growing number of very large screens and desktop environments.


On these surfaces, the standard “Expanded” layout, which usually fits two panes comfortably, often isn’t enough. On a 27-inch monitor, two panes can look stretched and sparse, leaving valuable screen real estate unused. To solve this, WindowManager 1.5.0 introduced two new width window size classes: Large (1200dp to 1600dp) and Extra-large (1600dp+).



These new breakpoints signal when to switch to high-density interfaces. Instead of stretching a typical list-detail view, you can take advantage of the width to show three or even four panes simultaneously. Imagine an email client that comfortably displays your folders, the inbox list, the open message, and a calendar sidebar, all in a single view. Support for these window size classes was added to Compose Material 3 adaptive in the 1.2 release.
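
As a rough illustration of how those breakpoints might drive layout, the helper below maps a window width in dp to a pane count. It is a conceptual sketch using the raw dp values from above rather than the actual WindowSizeClass API.

// Conceptual sketch: pane count from window width, using the breakpoints described above.
// 840dp is the pre-existing Expanded threshold; Large and Extra-large are new in 1.5.0.
fun paneCountFor(windowWidthDp: Int): Int = when {
    windowWidthDp >= 1600 -> 4 // Extra-large: e.g. folders + inbox + open message + calendar
    windowWidthDp >= 1200 -> 3 // Large: three panes
    windowWidthDp >= 840 -> 2  // Expanded: classic list-detail
    else -> 1                  // Compact/Medium: single pane
}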


Rethinking user journeys with Jetpack Navigation 3


Building a UI that morphs from a single phone screen to a multi-pane tablet layout used to require complex state management.  This often meant forcing a navigation graph designed for single destinations to handle simultaneous views. First announced at I/O 2025, Jetpack Navigation 3 is now stable, introducing a new approach to handling user journeys in adaptive apps.


Built for Compose, Nav3 moves away from the monolithic graph structure. Instead, it provides decoupled building blocks that give you full control over your back stack and state. This solves the single source of truth challenge common in split-pane layouts. Because Nav3 uses the Scenes API, you can display multiple panes simultaneously without managing conflicting back stacks, simplifying the transition between compact and expanded views.


A foundation for an adaptive future



This year delivered the tools you need, from optimizing for expansive layouts to the granular controls of WindowManager and Navigation 3. And Android 16 began the shift toward truly flexible UI, with updates coming next year to deliver excellent adaptive experiences across all form factors. To learn more about adaptive development principles and get started, head over to d.android.com/adaptive-apps.


The tools are ready, and the users are waiting. We can’t wait to see what you build!


*Source: internal Google data


Bringing Androidify to Wear OS with Watch Face Push


Posted by Garan Jenkin – Developer Relations Engineer





A few months ago we relaunched Androidify as an app for generating personalized Android bots. Androidify transforms your selfie photo into a playful Android bot using Gemini and Imagen.

However, given that Android spans multiple form factors, including our most recent addition, XR, we thought, how could we bring the fun of Androidify to Wear OS?

An Androidify watch face

As Androidify bots are highly personalized, the natural place to showcase them is the watch face. Not only is it the most frequently visible surface, it is also the most personal one, allowing you to represent who you are.


Personalized Androidify watch face, generated from selfie image

Androidify now has the ability to generate a watch face dynamically within the phone app and then send it to your watch, where it will automatically be set as your watch face. All of this happens within seconds!

High-level design

End-to-end flow for watch face creation and installation

In order to achieve the end-to-end experience, a number of technologies need to be combined, as shown in this high-level design diagram.

First of all, the user’s avatar is combined with a pre-existing Watch Face Format template, which is then packaged into an APK. This is validated – for reasons which will be explained! – and sent to the watch.

On being received by the watch, the new Watch Face Push API – part of Wear OS 6 – is used to install and activate the watch face.

Let’s explore the details:

Creating the watch face templates

The watch face is created from a template, itself designed in Watch Face Designer. This is our new Figma plugin that allows you to create Watch Face Format watch faces directly within Figma.


An Androidify watch face template in Watch Face Designer


The plugin allows the watch face to be exported in a range of different ways, including as Watch Face Format (WFF) resources. These can then be easily incorporated as assets within the Androidify app, for dynamically building the finalized watch face.

Packaging and validation

Once the template and avatar have been combined, the Portable Asset Compiler Kit (Pack) is used to assemble an APK.

In Androidify, Pack is used as a native library on the phone. For more details on how Androidify interfaces with the Pack library, see the GitHub repository.

As a final step before transmission, the APK is checked by the Watch Face Push validator.

This validator checks that the APK is suitable for installation. This includes checking the contents of the APK to ensure it is a valid watch face, as well as some performance checks. If it is valid, then the validator produces a token.

This token is required by the watch for installation.

Sending the watch face

The Androidify app on Wear OS uses WearableListenerService to listen for events on the Wearable Data Layer.
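As a hypothetical sketch of that watch-side shape (the class name and stream handling are ours, and manifest registration is omitted), the listener can be as small as:

import com.google.android.gms.wearable.ChannelClient
import com.google.android.gms.wearable.WearableListenerService

// Reacts to the channel the phone opens and receives the incoming APK.
class WatchFaceReceiverService : WearableListenerService() {
    override fun onChannelOpened(channel: ChannelClient.Channel) {
        // Read the APK from the channel's input stream, persist it locally, and
        // hand it, together with its validation token, to the Watch Face Push
        // manager as shown in the next section.
    }
}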

The phone app transfers the watch face by using a combination of MessageClient to set up the process, then ChannelClient to stream the APK.
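The phone-side transfer can be sketched roughly as follows; the channel path and node selection are simplified assumptions for illustration, not Androidify's exact implementation:

import android.content.Context
import com.google.android.gms.wearable.Wearable
import kotlinx.coroutines.tasks.await
import java.io.File

// Open a channel to a connected watch and stream the generated APK to it.
suspend fun sendWatchFaceApk(context: Context, apkFile: File) {
    val nodeClient = Wearable.getNodeClient(context)
    val channelClient = Wearable.getChannelClient(context)

    // Pick a connected node; a production app would match on a capability instead.
    val watchNode = nodeClient.connectedNodes.await().firstOrNull() ?: return

    // "/androidify/watchface-apk" is a made-up path used only for this example.
    val channel = channelClient.openChannel(watchNode.id, "/androidify/watchface-apk").await()
    channelClient.getOutputStream(channel).await().use { output ->
        apkFile.inputStream().use { input -> input.copyTo(output) }
    }
    channelClient.close(channel)
}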

Installing the watch face on the watch

Once the watch face is received on the Wear OS device, the Androidify app uses the new Watch Face Push API to install the watch face:

val wfpManager =
    WatchFacePushManagerFactory.createWatchFacePushManager(context)
val response = wfpManager.listWatchFaces()

try {
    if (response.remainingSlotCount > 0) {
        wfpManager.addWatchFace(apkFd, token)
    } else {
        val slotId = response.installedWatchFaceDetails.first().slotId
        wfpManager.updateWatchFace(slotId, apkFd, token)
    }
} catch (a: WatchFacePushManager.AddWatchFaceException) {
    return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
} catch (u: WatchFacePushManager.UpdateWatchFaceException) {
    return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
}

Androidify uses either the addWatchFace or updateWatchFace method, depending on the scenario. Watch Face Push defines a concept of “slots”, which determines how many watch faces a given app can have installed at any time; for Wear OS 6, this value is 1.

Androidify’s approach is to install the watch face if there is a free slot and, if not, to swap out the existing watch face for the new one.

Setting the active watch face

Installing the watch face programmatically is a great step, but Androidify seeks to ensure the watch face is also the active watch face. 

Watch Face Push introduces a new runtime permission which must be granted in order for apps to be able to achieve this:

com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE

Once this permission has been acquired, the wfpManager.setWatchFaceAsActive() method can be called to set an installed watch face as the active watch face.
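As an illustrative sketch (not Androidify's exact logic), the permission gate is an ordinary runtime permission check before attempting activation:

import android.content.Context
import android.content.pm.PackageManager
import androidx.core.content.ContextCompat

// Returns true if the app is currently allowed to call setWatchFaceAsActive().
// Remember that this permission cannot be re-requested once the user denies it,
// which is one of the considerations listed below.
fun canSetPushedWatchFaceActive(context: Context): Boolean =
    ContextCompat.checkSelfPermission(
        context,
        "com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE"
    ) == PackageManager.PERMISSION_GRANTED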

However, there are a number of considerations that Androidify has to navigate:

  • setWatchFaceAsActive can only be used once.

  • SET_PUSHED_WATCH_FACE_AS_ACTIVE cannot be re-requested after being denied by the user.

  • Androidify might already be in control of the active watch face.

For more details see how Androidify implements the set active logic.

Get started with Watch Face Push for Wear OS

Watch Face Push is a versatile API, equally suited to enhancing Androidify as it is to building fully-featured watch face marketplaces.

Perhaps you have an existing phone app and are looking for opportunities to further engage and delight your users?

Or perhaps you’re an existing watch face developer looking to create your own community and gallery through releasing a marketplace app?

Take a look at these resources:

And also check out the accompanying video for a more in-depth look at how we brought Androidify to Wear OS!


We’re looking forward to what you’ll create with Watch Face Push!

The post Bringing Androidify to Wear OS with Watch Face Push appeared first on InShot Pro.

]]>
Brighten Your Real-Time Camera Feeds with Low Light Boost https://theinshotproapk.com/brighten-your-real-time-camera-feeds-with-low-light-boost/ Wed, 17 Dec 2025 17:00:00 +0000 https://theinshotproapk.com/brighten-your-real-time-camera-feeds-with-low-light-boost/ Posted by Donovan McMurray, Developer Relations Engineer We recently shared how Instagram enabled users to take stunning low light photos ...

Read more

The post Brighten Your Real-Time Camera Feeds with Low Light Boost appeared first on InShot Pro.

]]>

Posted by Donovan McMurray, Developer Relations Engineer

We recently shared how Instagram enabled users to take stunning low light photos using Night Mode. That feature is perfect for still images, where there’s time to combine multiple exposures to create a high-quality static shot. But what about the moments that happen between the photos? Users need to interact with the camera beyond the moment the shutter button is pressed; they also use the preview to compose their scene or scan QR codes.

Today, we’re diving into Low Light Boost (LLB), a powerful feature designed to brighten real-time camera streams. Unlike Night Mode, which requires a hold-still capture duration, Low Light Boost works instantaneously on your live preview and video recordings. LLB automatically adjusts how much brightening is needed based on available light, so it’s optimized for every environment.

With a recent update, LLB lets Instagram users line up the perfect shot, and Instagram’s existing Night Mode implementation then delivers the same high-quality low-light photos users have been enjoying for over a year.

Why Real-time Brightness Matters

While Night Mode aims to improve final image quality, Low Light Boost is intended for usability and interactivity in dark environments. Another important factor to consider is that – even though they work together very well – you can use LLB and Night Mode independently; as some of the use cases below show, LLB has value on its own when Night Mode photos aren’t needed. Here is how LLB improves the user experience:

  • Better Framing & Capture: In dimly lit scenes, a standard camera preview can be pitch black. LLB brightens the viewfinder, allowing users to actually see what they are framing before they hit the shutter button. For this experience, you can use Night Mode for the best quality low-light photo result, or you can let LLB give the user a “what you see is what you get” photo result.

  • Reliable Scanning: QR codes are ubiquitous, but scanning them in a dark restaurant or parking garage is often frustrating. With a significantly brighter camera feed, scanning algorithms can reliably detect and decode QR codes even in very dim environments.

  • Enhanced Interactions: For apps involving live video interactions (like AI assistants or video calls), LLB increases the amount of perceivable information, ensuring the computer vision models have enough data to work with.

The Difference in Instagram


The engineering team behind the Android Instagram app is always hard at work to provide a state-of-the-art camera experience for their users. You can see in the above example just what a difference LLB makes on a Pixel 10 Pro. 

It’s easy to imagine the difference this makes in the user experience. If users aren’t able to see what they’re capturing, then there’s a higher chance they’ll abandon the capture. 

Choosing Your Implementation

There are two ways to implement Low Light Boost to provide the best experience across the widest range of devices:

  1. Low Light Boost AE Mode: This is a hardware-layer auto-exposure mode. It offers the highest quality and performance because it fine-tunes the Image Signal Processor (ISP) pipeline directly. Always check for this first.

  2. Google Low Light Boost: If the device doesn’t support the AE mode, you can fall back to this software-based solution provided by Google Play services. It applies post-processing to the camera stream to brighten it. As an all-software solution, it is available on a much wider range of devices, helping you bring LLB to more of your users (see the selection sketch after this list).
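Putting the two options together, the selection order looks roughly like the sketch below. It is pseudocode glue around names that appear later in this post (isLowLightBoostSupported and isCameraSupported); imports and client setup are omitted, so treat it as a decision outline rather than a drop-in API.

// Prefer the hardware AE mode, then fall back to the Play services implementation.
enum class LlbPath { HARDWARE_AE_MODE, GOOGLE_PLAY_SERVICES, NONE }

suspend fun chooseLlbPath(
    cameraInfo: CameraInfo,          // CameraX CameraInfo for the selected camera
    llbClient: LowLightBoostClient,  // obtained from LowLightBoost.getClient(context)
    cameraId: String
): LlbPath = when {
    cameraInfo.isLowLightBoostSupported -> LlbPath.HARDWARE_AE_MODE
    llbClient.isCameraSupported(cameraId).await() -> LlbPath.GOOGLE_PLAY_SERVICES
    else -> LlbPath.NONE
}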

Low Light Boost AE Mode (Hardware)

Mechanism:
This mode is supported on devices running Android 15 and newer and requires the OEM to have implemented support in the HAL (currently available on Pixel 10 devices). It integrates directly with the camera’s Image Signal Processor (ISP). If you set CaptureRequest.CONTROL_AE_MODE to CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY, the camera system takes control.
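For Camera2 apps, this is a small check-and-set. The helper below is a sketch that assumes you already hold the camera's CameraCharacteristics and a CaptureRequest.Builder for your repeating request, and it requires compiling against API 35 for the new constant:

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.CaptureRequest

// Enable the low light boost AE mode only if the camera advertises support for it.
fun enableLlbAeModeIfSupported(
    characteristics: CameraCharacteristics,
    requestBuilder: CaptureRequest.Builder
) {
    val aeModes =
        characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_MODES) ?: intArrayOf()
    if (CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY in aeModes) {
        requestBuilder.set(
            CaptureRequest.CONTROL_AE_MODE,
            CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY
        )
    }
}

The CameraX path shown later in this post exposes the same capability through CameraControl, so most apps will not need to touch the AE mode constant directly.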

Behavior:
The HAL/ISP analyzes the scene and adjusts sensor and processing parameters, often including increasing exposure time, to brighten the image. This can yield frames with a significantly improved signal-to-noise ratio (SNR) because the extended exposure time, rather than an increase in digital sensor gain (ISO), allows the sensor to capture more light information.

Advantage:
Potentially better image quality and power efficiency as it leverages dedicated hardware pathways.

Trade off:
May result in a lower frame rate in very dark conditions, as the sensor needs more time to capture light; the frame rate can drop to as low as 10 FPS.

Google Low Light Boost (Software via Google Play Services)

Mechanism:
This solution, distributed as an optional module via Google Play services, applies post-processing to the camera stream. It uses a sophisticated realtime image enhancement technology called HDRNet.

Google HDRNet:
This deep learning model analyzes the image at a lower resolution to predict a compact set of parameters (a bilateral grid). This grid then guides the efficient, spatially-varying enhancement of the full-resolution image on the GPU. The model is trained to brighten and improve image quality in low-light conditions, with a focus on face visibility.

Process Orchestration:
The HDRNet model and its accompanying logic are orchestrated by the Low Light Boost processor. This includes:

  1. Scene Analysis:
    A custom calculator that estimates the true scene brightness using camera metadata (sensor sensitivity, exposure time, etc.) and image content. This analysis determines the boost level.

  2. HDRNet Processing:
    Applies the HDRNet model to brighten the frame. The model used is tuned for low light scenes and optimized for realtime performance.

  3. Blending:
    The original and HDRNet processed frames are blended. The amount of blending applied is dynamically controlled by the scene brightness calculator, ensuring a smooth transition between boosted and unboosted states.

Advantage:
Works on a broader range of devices (currently supports Samsung S22 Ultra, S23 Ultra, S24 Ultra, S25 Ultra, and Pixel 6 through Pixel 9) without requiring specific HAL support. Maintains the camera’s frame rate as it’s a post-processing effect.

Trade-off:
As a post-processing method, the quality is limited by the information present in the frames delivered by the sensor. It cannot recover details lost due to extreme darkness at the sensor level.

By offering both hardware and software pathways, Low Light Boost provides a scalable solution to enhance low-light camera performance across the Android ecosystem. Developers should prioritize the AE mode where available and use the Google Low Light Boost as a robust fallback.

Implementing Low Light Boost in Your App

Now let’s look at how to implement both LLB offerings. You can implement the following whether you use CameraX or Camera2 in your app. For the best results, we recommend implementing both Step 1 and Step 2.

Step 1: Low Light Boost AE Mode

Available on select devices running Android 15 and higher, LLB AE Mode functions as a specific Auto-Exposure (AE) mode.

1. Check for Availability

First, check if the camera device supports LLB AE Mode.

val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val isLlbSupported = cameraInfo.isLowLightBoostSupported

2. Enable the Mode

If supported, you can enable LLB AE Mode using CameraX’s CameraControl object.

// After setting up your camera, use the CameraControl object to enable LLB AE Mode.
camera = cameraProvider.bindToLifecycle(...)

if (isLlbSupported) {
  try {
    // The .await() extension suspends the coroutine until the
    // ListenableFuture completes. If the operation fails, it throws
    // an exception which we catch below.
    camera?.cameraControl?.enableLowLightBoostAsync(true)?.await()
  } catch (e: IllegalStateException) {
    Log.e(TAG, "Failed to enable low light boost: not available on this device or with the current camera configuration", e)
  } catch (e: CameraControl.OperationCanceledException) {
    Log.e(TAG, "Failed to enable low light boost: camera is closed or value has changed", e)
  }
}

3. Monitor the State

Just because you requested the mode doesn’t mean it’s currently “boosting.” The system only activates the boost when the scene is actually dark. You can set up an Observer to update your UI (like showing a moon icon) or convert to a Flow using the extension function asFlow().

if (isLlbSupported) {
  camera?.cameraInfo?.lowLightBoostState?.asFlow()?.collectLatest { state ->
    // Update UI accordingly
    updateMoonIcon(state == LowLightBoostState.ACTIVE)
  }
}

You can read the full guide on Low Light Boost AE Mode here.

Step 2: Google Low Light Boost

For devices that don’t support the hardware AE mode, Google Low Light Boost acts as a powerful fallback. It uses a LowLightBoostSession to intercept and brighten the stream.

1. Add Dependencies

This feature is delivered via Google Play services.

implementation("com.google.android.gms:play-services-camera-low-light-boost:16.0.1-beta06")
// Add coroutines-play-services to simplify Task APIs
implementation("org.jetbrains.kotlinx:kotlinx-coroutines-play-services:1.10.2")

2. Initialize the Client

Before starting your camera, use the LowLightBoostClient to ensure the module is installed and the device is supported.

val llbClient = LowLightBoost.getClient(context)

// Check support and install if necessary
val isSupported = llbClient.isCameraSupported(cameraId).await()
val isInstalled = llbClient.isModuleInstalled().await()

if (isSupported && !isInstalled) {
    // Trigger installation
    llbClient.installModule(installCallback).await()
}

3. Create a LLB Session

Google LLB processes each frame, so you must give your display Surface to the LowLightBoostSession, and it gives you back a Surface that has the brightening applied. For Camera2 apps, you can add the resulting Surface with CaptureRequest.Builder.addTarget(). For CameraX, this processing pipeline aligns best with the CameraEffect class, where you can apply the effect with a SurfaceProcessor and provide it back to your Preview with a SurfaceProvider, as seen in this code.

// With a SurfaceOutput from SurfaceProcessor.onSurfaceOutput() and a
// SurfaceRequest from Preview.SurfaceProvider.onSurfaceRequested(),
// create a LLB Session.
suspend fun createLlbSession(surfaceRequest: SurfaceRequest, outputSurfaceForLlb: Surface) {
  // 1. Create the LLB Session configuration
  val options = LowLightBoostOptions(
    outputSurfaceForLlb,
    cameraId,
    surfaceRequest.resolution.width,
    surfaceRequest.resolution.height,
    true // Start enabled
  )

  // 2. Create the session.
  val llbSession = llbClient.createSession(options, callback).await()

  // 3. Get the surface to use.
  val llbInputSurface = llbSession.getCameraSurface()

  // 4. Provide the surface to the CameraX Preview UseCase.
  surfaceRequest.provideSurface(llbInputSurface, executor, resultListener)

  // 5. Set the scene detector callback to monitor how much boost is being applied.
  val onSceneBrightnessChanged = object : SceneDetectorCallback {
    override fun onSceneBrightnessChanged(
      session: LowLightBoostSession,
      boostStrength: Float
    ) {
      // Monitor the boostStrength from 0 (no boosting) to 1 (maximum boosting)
    }
  }
  llbSession.setSceneDetectorCallback(onSceneBrightnessChanged, null)
}

4. Pass in the Metadata

For the algorithm to work, it needs to analyze the camera’s auto-exposure state. You must pass capture results back to the LLB session. In CameraX, this can be done by extending your Preview.Builder with Camera2Interop.Extender.setSessionCaptureCallback().

Camera2Interop.Extender(previewBuilder).setSessionCaptureCallback(
  object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
      session: CameraCaptureSession,
      request: CaptureRequest,
      result: TotalCaptureResult
    ) {
      super.onCaptureCompleted(session, request, result)
      llbSession?.processCaptureResult(result)
    }
  }
)

Detailed implementation steps for the client and session can be found in the Google Low Light Boost guide.

Next Steps

By implementing these two options, you ensure that your users can see clearly, scan reliably, and interact effectively, regardless of the lighting conditions.

To see these features in action within a complete, production-ready codebase, check out the Jetpack Camera App on GitHub. It implements both LLB AE Mode and Google LLB, giving you a reference for your own integration.

The post Brighten Your Real-Time Camera Feeds with Low Light Boost appeared first on InShot Pro.

]]>
Build smarter apps with Gemini 3 Flash https://theinshotproapk.com/build-smarter-apps-with-gemini-3-flash/ Wed, 17 Dec 2025 16:13:00 +0000 https://theinshotproapk.com/build-smarter-apps-with-gemini-3-flash/ Posted by Thomas Ezan, Senior Developer Relations Engineer Today, we’re expanding the Gemini 3 model family with the release of ...

Read more

The post Build smarter apps with Gemini 3 Flash appeared first on InShot Pro.

]]>

Posted by Thomas Ezan, Senior Developer Relations Engineer



Today, we’re expanding the Gemini 3 model family with the release of Gemini 3 Flash, frontier intelligence built for speed at a fraction of the cost. You can start building with it immediately, as we’re officially launching Gemini 3 Flash on Firebase AI Logic. The model is available globally, and you can securely access the Gemini 3 Flash preview directly from your app via the Gemini Developer API or the Vertex AI Gemini API using the Firebase AI Logic client SDKs. Gemini 3 Flash’s strong performance in reasoning, tool use, and multimodal capabilities makes it ideal for developers looking to do more complex video analysis, data extraction, and visual Q&A.

Gemini 3 optimized for low-latency

Gemini 3 is our most intelligent model family to date. With the launch of Gemini 3 Flash, we are making that intelligence more accessible for low-latency and cost-effective use cases. While Gemini 3 Pro is designed for complex reasoning, Gemini 3 Flash is engineered to be significantly faster and more cost-effective for your production apps.

Seamless integration with Firebase AI Logic

Just like the Pro model, Gemini 3 Flash is available in preview directly through the Firebase AI Logic SDK. This means you can integrate it into your Android app without needing to do any complex server side setup.

Here is how to add it to your Kotlin code:


val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(modelName = "gemini-3-flash-preview")
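From there, generating content is a single suspend call. Here is a minimal, hypothetical usage example (the prompt text is a placeholder, and the call must run inside a coroutine):

// Somewhere inside a coroutine, e.g. viewModelScope.launch { ... }
val response = model.generateContent("Write a one-sentence welcome message for my app.")
val greeting = response.text ?: "Sorry, no response was generated."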

Scale with Confidence

In addition, Firebase enables you to keep your growth secure and manageable with:

AI Monitoring

The Firebase AI monitoring dashboard gives you visibility into latency, success rates, and costs, allowing you to slice data by model name to see exactly how the model performs.

Server Prompt Templates

You can use server prompt templates to store your prompt and schema securely on Firebase servers instead of hardcoding them in your app binary. This capability ensures your sensitive prompts remain secure, prevents unauthorized prompt extraction, and allows for faster iteration without requiring app updates.

---
model: 'gemini-3-flash-preview'
input:
  schema:
    topic:
      type: 'string'
      minLength: 2
      maxLength: 40
    length:
      type: 'number'
      minimum: 1
      maximum: 200
    language:
      type: 'string'
---

{{role "system"}}
You're a storyteller that tells nice and joyful stories with happy endings.

{{role "user"}}
Create a story about {{topic}} with the length of {{length}} words in the {{language}} language.

Prompt template defined on the Firebase Console  

val generativeModel = Firebase.ai.templateGenerativeModel()
val response = generativeModel.generateContent("storyteller-v10",
    mapOf(
        "topic" to topic,
        "length" to length,
        "language" to language
    )
)
_output.value = response.text

Code snippet to access to the prompt template

Gemini 3 Flash for AI development assistance in Android Studio

Gemini 3 Flash is also available for AI assistance in Android Studio. While Gemini 3 Pro Preview is our best model for coding and agentic experiences, Gemini 3 Flash is engineered for speed and is great for common development tasks and questions.

 
The new model is rolling out starting today to developers using Gemini in Android Studio at no cost, as the default model. For higher usage rate limits and longer sessions with Agent Mode, you can use an AI Studio API key to leverage the full capabilities of either Gemini 3 Flash or Gemini 3 Pro. We’re also rolling out Gemini 3 model family access with higher usage rate limits to developers who have Gemini Code Assist Standard or Enterprise licenses. Your IT administrator will need to enable access to preview models through the Google Cloud console.

Get Started Today

You can start experimenting with Gemini 3 Flash via Firebase AI Logic today. Learn more about it in the Android and Firebase documentation. Try out any of the new Gemini 3 models in Android Studio for development assistance, and let us know what you think! As always, you can follow us across LinkedIn, Blog, YouTube, and X.

The post Build smarter apps with Gemini 3 Flash appeared first on InShot Pro.

]]>