Room 3.0 – Modernizing the Room
Fri, 13 Mar 2026 17:00:00 +0000


Posted by Daniel Santiago Rivera, Software Engineer

The first alpha of Room 3.0 has been released! Room 3.0 is a major breaking version of the library that focuses on Kotlin Multiplatform (KMP) and adds support for JavaScript and WebAssembly (WASM) on top of the existing Android, iOS and JVM desktop support.

In this post we outline the breaking changes, the reasoning behind Room 3.0, and the steps you can take to migrate from Room 2.x.

Breaking changes

Room 3.0 includes the following breaking API changes:

  • Dropping SupportSQLite APIs: Room 3.0 is fully backed by the androidx.sqlite driver APIs. The SQLiteDriver APIs are KMP-compatible and removing Room’s dependency on Android’s API simplifies the API surface for Android since it avoids having two possible backends.

  • No more Java code generation: Room 3.0 exclusively generates Kotlin code. This aligns with the evolving Kotlin-first paradigm but also simplifies the codebase and development process, enabling faster iterations.

  • Focus on KSP: We are also dropping support for Java Annotation Processing (AP) and KAPT. Room 3.0 is solely a KSP (Kotlin Symbol Processing) processor, allowing for better processing of Kotlin codebases without being limited by the Java language.

  • Coroutines first: Room 3.0 embraces Kotlin coroutines, making its APIs coroutine-first. Coroutines are the KMP-compatible asynchronous framework, and making Room asynchronous by nature is a critical requirement for supporting web platforms.

A new package

To prevent compatibility issues with existing Room 2.x implementations and with libraries that transitively depend on Room (for example, WorkManager), Room 3.0 resides in a new package, which means it also has a new Maven group and new artifact IDs. For example, androidx.room:room-runtime has become androidx.room3:room3-runtime, and classes such as androidx.room.RoomDatabase are now located at androidx.room3.RoomDatabase.
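As a sketch of what the coordinate change looks like in a Gradle build, assuming the 3.0 artifacts follow the renaming scheme above (the version strings here are placeholders, not confirmed releases):

```kotlin
// build.gradle.kts — hypothetical coordinates following the room → room3 rename
dependencies {
    // Room 2.x:
    // implementation("androidx.room:room-runtime:2.8.0")
    // ksp("androidx.room:room-compiler:2.8.0")

    // Room 3.0 (new Maven group and artifact IDs):
    implementation("androidx.room3:room3-runtime:3.0.0-alpha01")
    ksp("androidx.room3:room3-compiler:3.0.0-alpha01")
}
```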

Kotlin and Coroutines First

With no more Java code generation, Room 3.0 also requires KSP and the Kotlin compiler even if the codebase interacting with Room is in Java. It is recommended to have a multi-module project where Room usage is concentrated and the Kotlin Gradle Plugin and KSP can be applied without affecting the rest of the codebase.

Room 3.0 also requires coroutines; more specifically, DAO functions have to be suspending unless they return a reactive type, such as a Flow. Room 3.0 disallows blocking DAO functions. See the Coroutines on Android documentation to get started integrating coroutines into your application.
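A minimal sketch of what a coroutine-first DAO looks like, assuming a hypothetical User entity and the new androidx.room3 package names:

```kotlin
import androidx.room3.Dao
import androidx.room3.Query
import kotlinx.coroutines.flow.Flow

@Dao
interface UserDao {
    // One-shot reads and writes must be suspending; blocking variants
    // of these functions are rejected by the Room 3.0 processor.
    @Query("SELECT * FROM user WHERE id = :id")
    suspend fun getUser(id: Long): User?

    // Observable queries may instead return a reactive type such as Flow.
    @Query("SELECT * FROM user")
    fun observeUsers(): Flow<List<User>>
}
```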

Migration to SQLiteDriver APIs

With the shift away from SupportSQLite, apps will need to migrate to the SQLiteDriver APIs. This migration is essential to leveraging the full benefits of Room 3.0, including the use of the bundled SQLite library via the BundledSQLiteDriver. You can start migrating to the driver APIs today with Room 2.7.0+, and we strongly encourage you to avoid any further usage of SupportSQLite. If you migrate your Room integrations to the SQLiteDriver APIs, the transition to Room 3.0 becomes easier, since the package change mostly involves updating symbol references (imports) and requires only minimal changes to call sites.

For a brief overview of the SQLiteDriver APIs, check out the SQLiteDriver APIs documentation.

For more details on how to migrate Room to use SQLiteDriver APIs, check out the official documentation to migrate from SupportSQLite.
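As a sketch, installing a driver on a Room 2.7.0+ builder looks roughly like this (AppDatabase and the database file name are assumptions for illustration):

```kotlin
import androidx.room.Room
import androidx.sqlite.driver.bundled.BundledSQLiteDriver

// With a driver installed, Room stops using the Android SupportSQLite
// backend and runs queries through the KMP-compatible driver instead.
val db = Room.databaseBuilder(context, AppDatabase::class.java, "app.db")
    .setDriver(BundledSQLiteDriver())
    .build()
```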

Room SupportSQLite wrapper

We understand that completely removing SupportSQLite might not be immediately feasible for all projects. To ease this transition, Room 2.8.0, the latest version of the Room 2.x series, introduced a new artifact called androidx.room:room-sqlite-wrapper. This artifact offers a compatibility API that allows you to convert a RoomDatabase into a SupportSQLiteDatabase, even if the SupportSQLite APIs in the database have been disabled due to a SQLiteDriver being installed. This provides a temporary bridge for developers who need more time to fully migrate their codebase. The artifact continues to exist in Room 3.0 as androidx.room3:room3-sqlite-wrapper, enabling the migration to Room 3.0 while still supporting critical SupportSQLite usage.

For example, invocations of RoomDatabase.openHelper.writableDatabase can be replaced by roomDatabase.getSupportWrapper(), and a wrapper is provided even if setDriver() is called on Room’s builder.
For more details check out the room-sqlite-wrapper documentation.
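A minimal sketch of that substitution (getSupportWrapper() is the function named above; the surrounding imports are assumed and omitted):

```kotlin
// Before, via SupportSQLite (unavailable once a SQLiteDriver is set):
// val supportDb = roomDatabase.openHelper.writableDatabase

// After, via the room-sqlite-wrapper artifact; a wrapper is provided
// even when setDriver() has been called on the builder.
val supportDb: SupportSQLiteDatabase = roomDatabase.getSupportWrapper()
```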

Room and SQLite Web Support

Support for the Kotlin Multiplatform targets JS and WasmJS brings some of the most significant API changes. Specifically, many APIs in Room 3.0 are suspend functions, since proper support for web storage is asynchronous. The SQLiteDriver APIs have also been updated to support the Web, and a new asynchronous web driver is available in androidx.sqlite:sqlite-web. It is a Web Worker-based driver that enables persisting the database in the origin private file system (OPFS).

For more details on how to set up Room for the Web check out the Room 3.0 release notes.

Custom DAO Return Types

Room 3.0 introduces the ability to add custom integrations to Room, similar to RxJava and Paging. Through a new annotation API called @DaoReturnTypeConverter, you can create your own integration that hooks into Room’s generated code at runtime. This enables @Dao functions to have custom return types without waiting for the Room team to add support. Existing integrations have been migrated to this functionality, so those who rely on them will now need to add the converters to their @Database or @Dao definitions.

For example, the Paging converter is located in the androidx.room3:room3-paging artifact and is called PagingSourceDaoReturnTypeConverter, while the LiveData converter is in androidx.room3:room3-livedata and is called LiveDataReturnTypeConverter.

For more details check out the DAO Return Type Converters section in the Room 3.0 release notes.
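As a rough sketch of how such a converter might be registered, assuming the annotation accepts converter classes on the database definition much like @TypeConverters does (this is an alpha API, so the exact shape may differ):

```kotlin
@Database(entities = [User::class], version = 1)
@DaoReturnTypeConverter(PagingSourceDaoReturnTypeConverter::class)
abstract class AppDatabase : RoomDatabase() {
    // With the converter registered, DAO functions in this database may
    // return PagingSource without built-in support from Room itself.
    abstract fun userDao(): UserDao
}
```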

Maintenance mode of Room 2.x

Since the development of Room will be focused on Room 3, the current Room 2.x version enters maintenance mode. This means that no major features will be developed but patch releases (2.8.1, 2.8.2, etc.) will still occur with bug fixes and dependency updates. The team is committed to this work until Room 3 becomes stable.

Final thoughts

We are incredibly excited about the potential of Room 3.0 and the opportunities it unlocks for the Kotlin ecosystem. Stay tuned for more updates as we continue this journey!

TikTok reduces code size by 58% and improves app performance for new features with Jetpack Compose
Fri, 13 Mar 2026 13:00:00 +0000


Posted by Ajesh R Pai, Developer Relations Engineer & Ben Trengrove, Developer Relations Engineer

TikTok is a global short-video platform known for its massive user base and innovative features. The team is constantly releasing updates, experiments, and new features for their users. Faced with the challenge of maintaining velocity while managing technical debt, the TikTok Android team turned to Jetpack Compose.

The team wanted to enable faster, higher-quality iteration of product requirements. By leveraging Compose, the team sought to improve engineering efficiency by writing less code and reducing cognitive load, while also achieving better performance and stability.

Streamlining complex UI to accelerate developer productivity

TikTok pages are often more complex than they appear, containing numerous layered conditional requirements. This complexity often resulted in difficult-to-maintain, sub-optimally structured View hierarchies and excessive View nesting, which caused performance degradation due to an increased number of measure passes.

Compose offered a direct solution to this structural problem.

Furthermore, Compose’s measurement strategy helps reduce double taxation, making measure performance easier to optimize. 

To improve developer productivity, TikTok’s central Design System team provides a component library for teams working on different app features. The team observed that development in Compose is simple: leveraging small composables is highly effective, and incorporating large UI blocks with conditional logic is straightforward with minimal overhead.


Building a path forward through strategic migration

By strategically adopting Jetpack Compose, TikTok was able to stay on top of technical debt while continuing to focus on creating great experiences for their users. The ability of Compose to handle conditional logic cleanly and streamline composition allowed the team to achieve up to a 78% reduction in page loading time on new or fully rewritten pages. This improvement was 20–30% in smaller cases and 70–80% for full rewrites and new features. They were also able to reduce their code size by 58% compared to the same feature built in Views.

The TikTok team’s overall strategy was to incrementally migrate specific user journeys. This gave them an opportunity to migrate, confirm measurable benefits, and then scale to more screens. They started by using Compose to simplify the overall structure of the QR code feature and saw the improvements. The team later expanded the migration to the Login and Sign-up experiences.

The team shared some additional learnings:  

While checking performance during migration, the TikTok team found that using many small ComposeViews to replace elements inside a single ViewHolder caused composition overhead. They achieved better results by expanding the migration to use one single ComposeView for the entire ViewHolder.
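A sketch of the pattern that worked better, with hypothetical FeedItem and FeedRow names standing in for TikTok’s actual types:

```kotlin
// One ComposeView hosts the whole row; replacing individual elements
// inside the ViewHolder with many small ComposeViews added composition
// overhead in TikTok's measurements.
class FeedViewHolder(private val composeView: ComposeView) :
    RecyclerView.ViewHolder(composeView) {

    fun bind(item: FeedItem) {
        composeView.setContent {
            FeedRow(item)
        }
    }
}
```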

When migrating a Fragment inside a ViewPager, which had custom height logic and conditional logic to hide and show UI based on experiments, performance wasn’t impacted. In this case, migrating the ViewPager to a composable performed better than migrating only the Fragment.

Jun Shen of the TikTok team really likes that Compose “reduces the amount of code required for feature development, improves testability, and accelerates delivery”. The team plans to steadily increase Compose adoption, making it their preferred framework in the long term. Jetpack Compose proved to be a powerful solution for improving both their developer experience and production metrics at scale.

Get Started with Jetpack Compose

Learn more about how Jetpack Compose can help your team.

Level Up: Test Sidekick and prepare for upcoming program milestones
Wed, 11 Mar 2026 20:02:00 +0000


Posted by Maru Ahues Bouza, PM Director, Games on Google Play

Last September, we shared our vision for the future of Google Play Games grounded in a core belief: the best way to drive your game’s success is to deliver a world-class player experience. We launched the Google Play Games Level Up program to recognize and reward great gaming experiences, while providing you with a powerful toolkit and new promotional opportunities to grow your games.  

The momentum since our announcement has been incredibly positive, with more than 600 million gamers now using Play Games Services every month. Developers are also finding success, with one-third of all game installs on the Play Store now coming from editorially-driven organic discovery. In fact, in 2025, Level Up features have driven over 2.5 billion incremental acquisitions for featured games, in addition to an average uplift of 25% in installs during the featuring windows.   

Today, we’re inviting you to start testing Play Games Sidekick to keep your players in the action, sharing new Play Console updates to optimize your reach, and helping you prepare for our upcoming program milestones.


Boost retention and immersion with Play Games Sidekick

Play Games Sidekick is a helpful in-game overlay that gives players instant access to relevant gaming information—like rewards, offers, achievements, and quest progress—keeping them immersed while driving higher engagement for developers. It serves as a seamless bridge to the highly visible “You” tab, connecting your game to 160 million monthly active users already engaging there, and doubles as an active gaming companion that enhances the player experience with helpful, AI-generated Game Tips.

Deep Rock Galactic: Survivor keeps players in the action with Play Games Sidekick


Today, Sidekick officially debuts in over 90 games, with the experience expanding to all Level Up titles later this year. But you don’t need to wait for the broader rollout to get your game ready. You can now enable Sidekick through Play Console to preview and test how your players will interact with features like Achievements, Streaks, Play Points Coupons, and Game Tips. Upon completing your testing, be sure to push Sidekick to production to ensure your game meets the Level Up user experience guidelines.

Enable Play Games Sidekick in Play Console to begin testing

Optimize reach and operations with new Play Console updates

We are also rolling out two new Play Console updates to help you optimize your reach and streamline operations:

  • Pre-reg device breakdowns: To aid launch decisions, you can now analyze the device distribution of your pre-registered audience by key device attributes including Android version, RAM and SoC. This enables you to optimize game performance, minimum specs, and marketing spend for the players already waiting for your game.


Identify launch-day risks and optimize performance for your players with new pre-registration device breakdowns

  • Real-time feedback: With Level Up+, our tier for high-performing games, qualifying titles can unlock promotional content featuring and tools like deep links and audience targeting. While submissions must meet Play’s quality guidelines, you no longer have to wait 24 hours to learn about issues. You can now get immediate feedback on quality whenever possible.

Your 2026 checklist: Securing your Level Up benefits

Today, all games on Google Play qualify for Google Play Games Level Up. However, to maintain access to Level Up benefits like Play Points offers, expanded APK size limits, consideration for Play Store collections and campaigns, or access to high-visibility surfaces like the You tab and Sidekick, you’ll need to ensure your game meets the user experience guidelines by the upcoming milestones:

By July 2026: 

    • Integrate Play Games Sidekick to offer a quick and easy entry point to access rewards, offers, and achievements through an in-game overlay.
    • Implement achievements with Play Games Services to support authentication with the modern Gamer Profile and to keep players engaged across the lifespan of your game.
By November 2026:

    • Implement cloud save to enable progress sync across devices.

Last week, we announced that we’re working on an expanded Level Up program that builds on our successful foundation to further improve gaming experiences. The update will introduce new requirements that will unlock additional benefits like lower service fees. Engaging with the program now ensures your work is strategically aligned with these future updates. We’ll share more details in the coming months.

In the meantime, the path to your first program milestone begins today. By prioritizing these user experience guidelines now, you’re investing in the long-term value of your game and ensuring it’s built to thrive for every player. Head over to Play Console to start testing Sidekick and take the next step in your Level Up journey.

Expanding our stage for PC and paid titles
Wed, 11 Mar 2026 20:02:00 +0000


Posted by Aurash Mahbod, VP and GM, Games on Google Play

Google Play is proud to be the home of over 200,000 games—many of which defined the mobile-first era. But as cross-platform becomes the standard for players, we are evolving our ecosystem to match the scale of your ambitions. In recent years, we focused on elevating Android gaming quality while significantly deepening our support for native PC titles.

We know that maximizing your game’s reach across different platforms is complex. The Level Up program serves as your strategic roadmap, helping you prioritize optimizations that drive great experiences on Android. Building on this foundation, we’re doubling down on our investment to make Play the most accessible home for every category of play. We’re adding new tools for paid games and making the journey from PC game discovery to purchase seamless. Keep reading to learn more about how we’re creating a bigger stage for your games.


Scale your discovery across mobile and PC platforms

Building a bigger stage starts with making your games easier to find—and easier to buy—no matter which device your players prefer. We’re expanding your reach by bringing cross-platform discovery directly to the mobile storefront. 

  • With the new PC section in the Games tab, your PC titles gain high visibility placement among our most active mobile players. 

  • The PC badge ensures your cross-platform investment is recognized. This creates more opportunities to acquire players on mobile and transition them seamlessly to your high-fidelity PC experience.

PC in the Games tab and PC badging expands your game’s reach

  • With ‘buy once play anywhere’ pricing, we’re making it easier to sell your games across different devices. If you choose to opt in your mobile game for Google Play Games on PC, you can now offer a single price that covers both the mobile and PC versions. We’re rolling out this feature in EAP with select games, including Brotato: Premium.

  • For PC-only games, players can now complete the full purchase journey on Google Play Games on PC with the same trusted security and privacy standards they expect from Google Play. 


‘Buy once play anywhere’ pricing to sell your games across devices


Lower the purchase barrier with Game Trials 

To help you convert high-intent buyers with less friction, we’re introducing Game Trials, a feature that enables players to experience your game for a limited time before making a purchase on mobile. Accessible directly from your game’s store listing, Game Trials provides a fast track for players to start exploring your world with a single tap. Game Trials are now in testing with select titles, and we’ll roll them out to more titles soon.

  • To ensure this is low maintenance for you, Game Trials is added directly into your Android App Bundle. This enables you to offer a high quality trial without the burden of a separate codebase or a demo version of your app. 

  • Play ensures trials are secure and seamless. Game Trials are limited to one per user, and Play protects your game while the trial is active. When it ends, players can purchase your game and keep their progress.

  • We’re also working on tools that will give you more control—such as specifying a custom time limit or an in-game event to conclude the trial. 



Game Trial for DREDGE to help convert high-intent buyers

Diversify your revenue with a dedicated player community on Play Pass

Play Pass is another way to diversify revenue and grow your player audience. It has been a strong launchpad for indie hits such as Isle of Arrows, Slay the Spire, and Dead Cells. With Play Pass, you can reach highly dedicated players seeking a more curated gaming experience, free of ads and in-app purchases. To help you deepen engagement, paid titles on Play Pass can now opt in to Google Play Games on PC — making it easy for players to find and play your games on a larger screen. Later this year, you can nominate your game through a streamlined opt-in process directly in Play Console.   


Drive long term sales with Wishlists and Discounts

Wishlists and Discounts are among the most effective ways to capture player intent and drive long-term sales. To support players at every stage of their purchase journey, we’re integrating them directly into Play. Players can save titles to their wishlist and manage them from library settings. To keep your game top of mind, players will receive automated notifications for your latest discounts — starting with mobile and expanding soon to PC games.

Wishlist and discount notifications drive long-term sales, rolling out today

How leading studios are finding a new path to success on Play

We’re thrilled to welcome Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs to Play [1]. It marks an exciting expansion of our catalog and a step forward in our mission to build a bigger gaming ecosystem for all developers. This growth is fueled by our developer community, whose feedback continues to shape our roadmap and help us better support your success.

Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs are coming to Play.

That mission brings us to GDC and the Independent Games Festival (IGF) Awards [2], where the next generation of games awaits! This year, we’re inviting you to come along for the ride as we go backstage to chat with the finalists and winners, sharing the moments of triumph and the creative stories behind their development. Not joining us at GDC? You can take the next step in your journey to launch your game on Google Play today.

1. Sledding Game, 9 Kings, Potion Craft, and Moonlight Peaks are coming to Google Play in 2026. Low Budget Repairs is scheduled for release in 2027. [Back]

2. Independent Games Festival (IGF) Awards is hosted by Game Developers Conference (GDC) and requires a valid GDC pass for entry. [Back]

Boosting Android Performance: Introducing AutoFDO for the Kernel
Tue, 10 Mar 2026 23:00:00 +0000


Posted by Yabin Cui, Software Engineer

We are the Android LLVM toolchain team. One of our top priorities is to improve Android performance through optimization techniques in the LLVM ecosystem. We are constantly searching for ways to make Android faster, smoother, and more efficient. While much of our optimization work happens in userspace, the kernel remains the heart of the system. Today, we’re excited to share how we are bringing Automatic Feedback-Directed Optimization (AutoFDO) to the Android kernel to deliver significant performance wins for users.


What is AutoFDO?

During a standard software build, the compiler makes thousands of small decisions, such as whether to inline a function and which branch of a conditional is likely to be taken, based on static code hints. While these heuristics are useful, they don’t always accurately predict code execution during real-world phone usage.

AutoFDO changes this by using real-world execution patterns to guide the compiler. These patterns represent the most common instruction execution paths the code takes during actual use, captured by recording the CPU’s branching history. While this data can be collected from fleet devices, for the kernel we synthesize it in a lab environment using representative workloads, such as running the top 100 most popular apps. We use a sampling profiler to capture this data, identifying which parts of the code are ‘hot’ (frequently used) and which are ‘cold’. When we rebuild the kernel with these profiles, the compiler can make much smarter optimization decisions tailored to actual Android workloads.

To understand the impact of this optimization, consider these key facts:

  • On Android, the kernel accounts for about 40% of CPU time.
  • We are already using AutoFDO to optimize native executables and libraries in the userspace, achieving about 4% cold app launch improvement and a 1% boot time reduction.

Real-World Performance Wins

We have seen impressive improvements across key Android metrics by leveraging profiles from controlled lab environments. These profiles were collected using app crawling and launching, and measured on Pixel devices across the 6.1, 6.6, and 6.12 kernels.

The most noticeable improvements are listed below. Details on the AutoFDO profiles for these kernel versions can be found in the respective Android kernel repositories for android16-6.12 and android15-6.6 kernels.

These aren’t just theoretical numbers. They translate to a snappier interface, faster app switching, extended battery life, and an overall more responsive device for the end user.

How It Works: The Pipeline

Our deployment strategy involves a sophisticated pipeline to ensure profiles stay relevant and performance remains stable.


Step 1: Profile Collection

While we rely on our internal test fleet to profile userspace binaries, we shifted to a controlled lab environment for the Generic Kernel Image (GKI). Decoupling profiling from the device release cycle allows for flexible, immediate updates independent of deployed kernel versions. Crucially, tests confirm that this lab-based data delivers performance gains comparable to those from real-world fleets.

  • Tools & Environment: We flash test devices with the latest kernel image and use simpleperf to capture instruction execution streams. This process relies on hardware capabilities to record branching history, specifically utilizing ARM Embedded Trace Extension (ETE) and ARM Trace Buffer Extension (TRBE) on Pixel devices.
  • Workloads: We construct a representative workload using the top 100 most popular apps from the Android App Compatibility Test Suite (C-Suite). To capture the most accurate data, we focus on:
    • App Launching: Optimizing for the most visible user delays
    • AI-Driven App Crawling: Simulating continuous, evolving user interactions
    • System-Wide Monitoring: Capturing not only foreground app activities, but also critical background workloads and inter-process communications
  • Validation: This synthesized workload shows an 85% similarity to execution patterns collected from our internal fleet.
  • Targeted Data: By repeating these tests sufficiently, we capture high-fidelity execution patterns that accurately represent real-world user interaction with the most popular applications. Furthermore, this extensible framework allows us to seamlessly integrate additional workloads and benchmarks to broaden our coverage.
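The capture step above can be sketched as a simpleperf invocation on a device whose CPUs expose ETE/TRBE; the flags, duration, and paths here are illustrative, not the team’s exact invocation:

```shell
# Record the ETM/ETE instruction trace system-wide for 10 seconds.
adb shell simpleperf record -e cs-etm -a --duration 10 -o /data/local/tmp/perf.data
adb pull /data/local/tmp/perf.data
```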

Step 2: Profile Processing

We post-process the raw trace data to ensure it is clean, effective, and ready for the compiler.

  • Aggregation: We consolidate data from multiple test runs and devices into a single system view.
  • Conversion: We convert raw traces into the AutoFDO profile format, filtering out unwanted symbols as needed.
  • Profile Trimming: We trim profiles to remove data for “cold” functions, allowing them to use standard optimization. This prevents regressions in rarely used code and avoids unnecessary increases in binary size.
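Under the same assumptions, the conversion step can be sketched with the public simpleperf and AutoFDO tooling (names follow the documented ETM-to-AutoFDO workflow; exact flags may differ from the team’s internal pipeline):

```shell
# Convert the raw ETM trace into a branch list, then into an AutoFDO
# profile the compiler can consume when rebuilding the kernel.
simpleperf inject -i perf.data -o branch_list.data --output branch-list
create_llvm_prof --binary=vmlinux --profile=branch_list.data \
    --out=kernel.afdo --format=extbinary
```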

Step 3: Profile Testing

Before deployment, profiles undergo rigorous verification to ensure they deliver consistent performance wins without stability risks.

  • Profile & Binary Analysis: We strictly compare the new profile’s content (including hot functions, sample counts, and profile size) against previous versions. We also use the profile to build a new kernel image, analyzing binaries to ensure that changes to the text section are consistent with expectations.
  • Performance Verification: We run targeted benchmarks on the new kernel image. This confirms that it maintains the performance improvements established by previous baselines.

Continuous Updates

Code naturally “drifts” over time, so a static profile would eventually lose its effectiveness. To maintain peak performance, we run the pipeline continuously to drive regular updates:

  • Regular Refresh: We refresh profiles in Android kernel LTS branches ahead of each GKI release, ensuring every build includes the latest profile data.
  • Future Expansion: We are currently delivering these updates to the android16-6.12 and android15-6.6 branches and will expand support to newer GKI versions, such as the upcoming android17-6.18.

Ensuring Stability

A common question with profile-guided optimization is whether it introduces stability risks. Because AutoFDO primarily influences compiler heuristics, such as function inlining and code layout, rather than altering the source code’s logic, it preserves the functional integrity of the kernel. This technology has already been proven at scale, serving as a standard optimization for Android platform libraries, ChromeOS, and Google’s own server infrastructure for years.

To further guarantee consistent behavior, we apply a “conservative by default” strategy. Functions not captured in our high-fidelity profiles are optimized using standard compiler methods. This ensures that the “cold” or rarely executed parts of the kernel behave exactly as they would in a standard build, preventing performance regressions or unexpected behaviors in corner cases.

Looking Ahead

We are currently deploying AutoFDO across the android16-6.12 and android15-6.6 branches. Beyond this initial rollout, we see several promising avenues to further enhance the technology:

  • Expanded Reach: We look forward to deploying AutoFDO profiles to newer GKI kernel versions and additional build targets beyond the current aarch64 support.

  • GKI Module Optimization: Currently, our optimization is focused on the main kernel binary (vmlinux). Expanding AutoFDO to GKI modules could bring performance benefits to a larger portion of the kernel subsystem.

  • Vendor Module Support: We are also interested in supporting AutoFDO for vendor modules built using the Driver Development Kit (DDK). With support already available in our build system (Kleaf) and profiling tools (simpleperf), this allows vendors to apply these same optimization techniques to their specific hardware drivers.

  • Broader Profile Coverage: There is potential to collect profiles from a wider range of Critical User Journeys (CUJs) to optimize them.

By bringing AutoFDO to the Android kernel, we’re ensuring that the very foundation of the OS is optimized for the way you use your device every day.


The post Boosting Android Performance: Introducing AutoFDO for the Kernel appeared first on InShot Pro.

A new era for choice and openness https://theinshotproapk.com/a-new-era-for-choice-and-openness/ Sat, 07 Mar 2026 12:01:53 +0000 https://theinshotproapk.com/a-new-era-for-choice-and-openness/ Posted by Sameer Samat, President of Android Ecosystem Android has always driven innovation in the industry through its unique flexibility and ...


Posted by Sameer Samat, President of Android Ecosystem


Android has always driven innovation in the industry through its unique flexibility and openness. At this important moment, we want to continue leading the way in how developers distribute their apps and games to people on billions of devices across many form factors. A modern platform must be flexible, providing developers and users with choice and openness as well as a safe experience.


Today we are announcing substantial updates that evolve our business model and build on our long history of openness globally.  We’re doing that in three ways: more billing options, a program for registered app stores, and lower fees and new programs for developers.

Expanded billing choice on Google Play for users and developers

Google Play is giving developers even more billing choice and freedom in how they handle transactions. Mobile developers will have the option to use their own billing systems in their app alongside Google Play’s billing, or they can guide users outside of their app to their own websites for purchases. Our goal is to offer this flexibility in a way that maximizes choice and safety for users. 

Leading the way in store choice 

We’re introducing a program that makes sideloading qualified app stores even easier. Our new Registered App Stores program will provide a more streamlined installation flow for Android app stores that meet certain quality and safety benchmarks. 

Once this change rolls out, app stores that choose to participate in this optional program will be registered with us, and users who sideload them will get a more streamlined installation flow (see graphic below). If a store chooses not to participate, nothing changes: it retains the same experience as any other sideloaded app on Android.

This gives app stores more ways to reach users and gives users more ways to easily and safely access the apps and games they love. 

This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.

Lower pricing and new programs to support developers  

Google Play’s fees are already the lowest among major app stores, and today we are taking this even further by introducing a new business model that decouples fees for using our billing system and introduces new, lower service fees. Once this rolls out:

  1. Billing: For those developers who choose to use Google Play’s billing system, they will be charged a market-specific rate separate from the service fee. In the European Economic Area (EEA), UK, and US that rate will be 5%.

  2. Service Fees:  

    1. For new installs (first time installs from users after the new fees are launched in a region), we are reducing the in-app purchase (IAP) service fee to 20%.  

    2. We are launching an Apps Experience Program and revamping our Google Play Games Level Up program to incentivize building great software experiences across Android form factors associated with clear quality benchmarks and enhanced user benefits.  Those developers who choose to participate in these programs will have even lower rates. Participating IAP developers will have a 20% service fee for transactions from existing installs and a 15% fee on transactions from new app installs.

    3. Our service fee for recurring subscriptions will be 10%.


Rollout timelines 

This is a significant evolution, and we plan to share additional details in the coming months. To make sure we have enough time to build the necessary technical infrastructure, enable a seamless transition for developers, and ensure alignment with local regulations, these updated fees will roll out on the following staggered schedule:

  • By June 30: EEA, the United Kingdom and the US.

  • By September 30: Australia  

  • By December 31:  Korea and Japan

  • By September 30, 2027: The updates will reach the rest of the world.

We will also launch the updated Google Play Games Level Up program and the new Apps Experience Program by September 30 for the EEA, UK, US, and Australia, and then roll them out in line with the rest of the schedule above.

We plan to launch Registered App Stores with a version of a major Android release by the end of the year.

Resolving disputes with Epic Games 

With these updates, we have also resolved our disputes worldwide with Epic Games. 

We believe these changes will make for a stronger Android ecosystem with even more successful developers and higher-quality apps and games available across more form factors for everyone. We look forward to our continued work with the developer community to build the next generation of digital experiences.

Instagram and Facebook deliver instant playback and boost user engagement with Media3 PreloadManager https://theinshotproapk.com/instagram-and-facebook-deliver-instant-playback-and-boost-user-engagement-with-media3-preloadmanager/ Thu, 05 Mar 2026 18:03:00 +0000 https://theinshotproapk.com/instagram-and-facebook-deliver-instant-playback-and-boost-user-engagement-with-media3-preloadmanager/ Posted by Mayuri Khinvasara Khabya, Developer Relations Engineer (LinkedIn and X) In the dynamic world of social media, user attention is won or ...


Posted by Mayuri Khinvasara Khabya, Developer Relations Engineer (LinkedIn and X)





In the dynamic world of social media, user attention is won or lost quickly. Meta apps (Facebook and Instagram) are among the world’s largest social platforms and serve billions of users globally. For Meta, delivering videos seamlessly isn’t just a feature, it’s the core of their user experience. Short-form videos, particularly Facebook Newsfeed and Instagram Reels, have become a primary driver of engagement. They enable creative expression and rapid content consumption, connecting and entertaining people around the world.

This blog post takes you through the journey of how Meta transformed video playback for billions by delivering true instant playback.

The latency gap in short form videos


Short-form video leads to fast-paced interactions as users quickly scroll through their feeds. Delivering a seamless transition between videos in an ever-changing feed introduces unique hurdles for instantaneous playback, so solutions need to go beyond traditional disk caching and standard reactive playback strategies.


The path forward with Media3 PreloadManager


To address the shift in consumption habits driven by the rise of short-form content, and the limitations of traditional long-form playback architectures, Jetpack Media3 introduced PreloadManager. This component allows developers to move beyond disk caching, offering granular control and customization to keep media ready in memory before the user hits play. Read this blog series to understand the technical details of media playback with PreloadManager.


How Meta achieved true instant playback

Existing Complexities


Previously, Meta used a combination of warmup (to get players ready) and prefetch (to cache content on disk) for video delivery. While these methods helped improve network efficiency, they introduced significant challenges. Warmup required instantiating multiple player instances sequentially, which consumed significant memory and limited preloading to only a few videos. This high resource demand meant that a more scalable, robust solution was needed to deliver the instant playback expected on modern, fast-scrolling social feeds.


Integrating Media3 PreloadManager


To achieve truly instant playback, Meta’s Media Foundation Client team integrated the Jetpack Media3 PreloadManager into Facebook and Instagram. They chose the DefaultPreloadManager to unify their preloading and playback systems. This integration required refactoring Meta’s existing architecture to enable efficient resource sharing between the PreloadManager and ExoPlayer instances. This strategic shift provided a key architectural advantage: the ability to parallelize preloading tasks and manage many videos using a single player instance. This dramatically increased preloading capacity while eliminating the high memory complexities of their previous approach.
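As a rough illustration of the single-player pattern described above (this is not Meta’s code; it assumes the DefaultPreloadManager.Builder API introduced in Media3 1.6, and `targetStatusControl` is a hypothetical TargetPreloadStatusControl implementation):

```kotlin
// Sketch only: one Builder yields both the preload manager and a single
// ExoPlayer that share renderers, track selector, and bandwidth meter.
val builder = DefaultPreloadManager.Builder(context, targetStatusControl)
val preloadManager = builder.build()
val player = builder.buildExoPlayer()

// Rank items by feed position so the manager knows what to prioritize.
mediaItems.forEachIndexed { index, item ->
    preloadManager.add(item, /* rankingData = */ index)
}
preloadManager.invalidate() // (re)evaluate priorities and start preloading

// When the user reaches an item, hand its preloaded source to the player.
preloadManager.getMediaSource(currentItem)?.let { source ->
    player.setMediaSource(source)
    player.prepare()
}
```

The key point is that preloaded media lives in the manager, not in extra player instances, which is what removes the memory cost of the old warmup approach.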





Optimization and Performance Tuning

The team then performed extensive testing and iterations to optimize performance across Meta’s diverse global device ecosystem. Initial aggressive preloading sometimes caused issues, including increased memory usage and scroll performance slowdowns. To solve this, they fine-tuned the implementation by using careful memory measurements, considering device fragmentation, and tailoring the system to specific UI patterns.


Fine tuning implementation to specific UI patterns

Meta applied different preloading strategies and tailored the behavior to match the specific UI patterns of each app:

  • Facebook Newsfeed: The UI prioritizes the video currently coming into view. The manager preloads only the current video to ensure it starts the moment the user pauses their scroll. This “current-only” focus minimizes data and memory footprints in an environment where users may see many static posts between videos. While the system is presently designed to preload just the video in view, it can be adjusted to also preload upcoming (future) videos. 

  • Instagram Reels: This is a pure video environment where users swipe vertically. For this UI, the team implemented an “adjacent preload” strategy: the PreloadManager keeps the videos immediately adjacent to the current Reel ready in memory. This bi-directional approach ensures that whether a user swipes up or down, the transition remains instant and smooth. The result was a dramatic improvement in Quality of Experience (QoE), including improvements in Playback Start and Time to First Frame for the user.
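An “adjacent preload” policy like the one described for Reels could be sketched with a TargetPreloadStatusControl. The class name, distances, and durations below are illustrative, and the Status constants follow the Media3 1.4-era API (later releases rename them), so treat this as an assumption-laden sketch rather than Meta’s implementation:

```kotlin
import androidx.media3.common.C
import androidx.media3.exoplayer.preload.DefaultPreloadManager
import androidx.media3.exoplayer.preload.TargetPreloadStatusControl
import kotlin.math.abs

// Sketch: items next to the current index get buffered media; items a bit
// further out only get a prepared source; everything else is skipped.
class AdjacentPreloadStatusControl(var currentIndex: Int = C.INDEX_UNSET) :
    TargetPreloadStatusControl<Int> {

    override fun getTargetPreloadStatus(rankingData: Int): DefaultPreloadManager.Status? {
        return when (abs(rankingData - currentIndex)) {
            1 -> DefaultPreloadManager.Status( // neighbors: instant swipe start
                DefaultPreloadManager.Status.STAGE_LOADED_TO_POSITION_MS, 500L)
            2, 3 -> DefaultPreloadManager.Status( // near: just prepare the source
                DefaultPreloadManager.Status.STAGE_SOURCE_PREPARED)
            else -> null // don't preload anything else
        }
    }
}
```

Updating `currentIndex` as the user swipes and calling `invalidate()` on the manager keeps the preloaded window centered on the visible Reel.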


Scaling for a diverse global device ecosystem

Scaling a high-performance video stack across billions of devices requires more than just aggressive preloading; it requires intelligence. Meta faced initial challenges with memory pressure and scroll lag, particularly on mid-to-low-end hardware. To solve this, they built a Device Stress Detection system around the Media3 implementation. The apps now monitor I/O and CPU signals in real-time. If a device is under heavy load, preloading is paused to prioritize UI responsiveness.


This device-aware optimization ensures that the benefit of instant playback doesn’t come at the cost of system stability, allowing even users on older hardware to experience a smoother, uninterrupted feed.



Architectural wins and code health

Beyond the user-facing metrics, the migration to Media3 PreloadManager offered long-term architectural benefits. While the integration and tuning process took multiple iterations to balance performance, the resulting codebase is more maintainable. The team found that the PreloadManager API integrated cleanly with the existing Media3 ecosystem, allowing for better resource sharing. For Meta, adopting Media3 PreloadManager was a strategic investment in the future of video consumption.


By adopting preloading and adding device-intelligent gates, they successfully increased total watch time on their apps and improved the overall engagement of their global community. 


Resulting impact on Instagram and Facebook


The proactive architecture delivered immediate and measurable improvements across both platforms. 


  • Facebook experienced faster playback starts, decreased playback stall rates, and a reduction in bad sessions (such as rebuffering, delayed start times, and lower quality), which overall resulted in higher watch time.


  • Instagram saw faster playback starts and an increase in total watch time. Eliminating join latency (the interval from the user’s action to the first frame display) directly increased engagement metrics, and fewer interruptions from reduced buffering meant users watched more content.


Key engineering learnings at scale


As media consumption habits evolve, the demand for instant experiences will continue to grow. Implementing proactive memory management and optimizing for scale and device diversity ensures your application can meet these expectations efficiently.


  • Prioritize intelligent preloading

Focus on delivering a reliable experience by minimizing stutters and loading times through preloading. Rather than simple disk caching, leveraging memory-level preloading ensures that content is ready the moment a user interacts with it.


  • Align your implementation with UI patterns

Customize preloading behavior to match your app’s UI. For example, use a “current-only” focus for mixed feeds like Facebook to save memory, and an “adjacent preload” strategy for vertical environments like Instagram Reels.

  • Leverage Media3 for long-term code health

Integrating with Media3 APIs rather than a custom caching solution allows for better resource sharing between the player and the PreloadManager, enabling you to manage multiple videos with a single player instance. This results in a future-proof codebase that is easier for engineering teams to not only maintain and optimize over time but also benefit from the latest feature updates.

  • Implement device aware optimizations

Broaden your market reach by testing on various devices, including mid-to-low-end models. Use real-time signals like CPU, memory, and I/O to adapt features and resource usage dynamically.

Learn More


To get started and learn more, visit 

Now you know the secrets for instant playback. Go try them out!


Elevating AI-assisted Android development and improving LLMs with Android Bench https://theinshotproapk.com/elevating-ai-assisted-android-development-and-improving-llms-with-android-bench/ Thu, 05 Mar 2026 14:03:00 +0000 https://theinshotproapk.com/elevating-ai-assisted-android-development-and-improving-llms-with-android-bench/ Posted by Matthew McCullough, VP of Product Management, Android Developer We want to make it faster and easier for you ...


Posted by Matthew McCullough, VP of Product Management, Android Developer

We want to make it faster and easier for you to build high-quality Android apps, and one way we’re helping you be more productive is by putting AI at your fingertips. We know you want AI that truly understands the nuances of the Android platform, which is why we’ve been measuring how LLMs perform Android development tasks. Today we released the first version of Android Bench, our official leaderboard of LLMs for Android development.

Our goal is to provide model creators with a benchmark to evaluate LLM capabilities for Android development. By establishing a clear, reliable baseline for what high quality Android development looks like, we’re helping model creators identify gaps and accelerate improvements—which empowers developers to work more efficiently with a wider range of helpful models to choose for AI assistance—which ultimately will lead to higher quality apps across the Android ecosystem.

Designed with real-world Android development tasks

We created the benchmark by curating a task set against a range of common Android development areas. It is composed of real challenges of varying difficulty, sourced from public GitHub Android repositories. Scenarios include resolving breaking changes across Android releases, domain-specific tasks like networking on wearables, and migrating to the latest version of Jetpack Compose, to name a few.

Each evaluation attempts to have an LLM fix the issue reported in the task, which we then verify using unit or instrumentation tests. This model-agnostic approach allows us to measure a model’s ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day. 

We validated this methodology with several LLM makers, including JetBrains.


“Measuring AI’s impact on Android is a massive challenge, so it’s great to see a framework that’s this sound and realistic. While we’re active in benchmarking ourselves, Android Bench is a unique and welcome addition. This methodology is exactly the kind of rigorous evaluation Android developers need right now.”

– Kirill Smelov, Head of AI Integrations at JetBrains.

The first Android Bench results

For this initial release, we wanted to purely measure model performance and not focus on agentic or tool use. The models were able to successfully complete 16-72% of the tasks. This is a wide range that demonstrates some LLMs already have a strong baseline for Android knowledge, while others have more room for improvement. Regardless of where the models are at now, we’re anticipating continued improvement as we encourage LLM makers to enhance their models for Android development. 

The LLM with the highest average score for this first release is Gemini 3.1 Pro, followed closely by Claude Opus 4.6. You can try all of the models we evaluated for AI assistance for your Android projects by using API keys in the latest stable version of Android Studio.

Providing developers and LLM makers with transparency

We value an open and transparent approach, so we made our methodology, dataset, and test harness publicly available on GitHub.

One challenge for any public benchmark is the risk of data contamination, where models may have seen evaluation tasks during their training process. We have taken measures to ensure our results reflect genuine reasoning rather than memorization or guessing, including a thorough manual review of agent trajectories and the integration of a canary string to discourage training on the dataset.

Looking ahead, we will continue to evolve our methodology to preserve the integrity of the dataset, while also making improvements for future releases of the benchmark—for example, growing the quantity and complexity of tasks.

We’re looking forward to how Android Bench can improve AI assistance long-term. Our vision is to close the gap between concept and quality code. We’re building the foundation for a future where no matter what you imagine, you can build it on Android. 

Battery Technical Quality Enforcement is Here: How to Optimize Common Wake Lock Use Cases https://theinshotproapk.com/battery-technical-quality-enforcement-is-here-how-to-optimize-common-wake-lock-use-cases/ Thu, 05 Mar 2026 00:00:00 +0000 https://theinshotproapk.com/battery-technical-quality-enforcement-is-here-how-to-optimize-common-wake-lock-use-cases/ Posted by Alice Yuan, Senior Developer Relations Engineer In recognition that excessive battery drain is top of mind for Android ...


Posted by Alice Yuan, Senior Developer Relations Engineer

In recognition that excessive battery drain is top of mind for Android users, Google has been taking significant steps to help developers build more power-efficient apps. On March 1st, 2026, Google Play Store began rolling out the wake lock technical quality treatments to improve battery drain. This treatment will roll out gradually to impacted apps over the following weeks. Apps that consistently exceed the “Excessive Partial Wake Lock” threshold in Android vitals may see tangible impacts on their store presence, including warnings on their store listing and exclusion from discovery surfaces such as recommendations.

Users may see a warning on your store listing if your app exceeds the bad behavior threshold.

This initiative elevates battery efficiency to a core vitals metric alongside stability metrics like crashes and ANRs. The “bad behavior threshold” is defined as holding a non-exempted partial wake lock for at least two hours on average while the screen is off, in more than 5% of user sessions over the past 28 days. A wake lock is exempted if it is a system-held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback, location access, or user-initiated data transfer. You can view the full definition of excessive wake locks in our Android vitals documentation.

As part of our ongoing initiative to improve battery life across the Android ecosystem, we have analyzed thousands of apps and how they use partial wake locks. While wake locks are sometimes necessary, we often see apps holding them inefficiently or unnecessarily, when more efficient solutions exist. This blog will go over the most common scenarios where excessive wake locks occur and our recommendations for optimizing wake locks.  We have already seen measurable success from partners like WHOOP, who leveraged these recommendations to optimize their background behavior.

Using a foreground service vs partial wake locks

We’ve often seen developers struggle to understand the difference between two concepts when doing background execution: foreground service and partial wake locks.

A foreground service is a lifecycle API that signals to the system that an app is performing user-perceptible work and should not be killed to reclaim memory, but it does not automatically prevent the CPU from sleeping when the screen turns off. In contrast, a partial wake lock is a mechanism specifically designed to keep the CPU running even while the screen is off. 

While a foreground service is often necessary to continue a user action, a manual acquisition of a partial wake lock is only necessary in conjunction with a foreground service for the duration of the CPU activity. In addition, you don’t need to use a wake lock if you’re already utilizing an API that keeps the device awake. 

Refer to the flow chart in Choose the right API to keep the device awake to ensure you have a strong understanding of what tool to use to avoid acquiring a wake lock in scenarios where it’s not necessary.
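For the cases where a manual partial wake lock is genuinely required alongside a foreground service, a minimal pattern is to scope the lock tightly to the CPU work and always pass a timeout. This sketch uses the standard PowerManager API; the tag string and `doCpuBoundWork()` are illustrative placeholders:

```kotlin
val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
// "myapp:sync" is an illustrative tag; use a descriptive one so the lock is
// identifiable in Android vitals and system traces.
val wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:sync")
try {
    // Always acquire with a timeout as a safety net against leaked locks.
    wakeLock.acquire(10 * 60 * 1000L /* 10 minutes */)
    doCpuBoundWork() // hypothetical work that must finish while the CPU is awake
} finally {
    if (wakeLock.isHeld) wakeLock.release() // release as soon as the work is done
}
```

Holding the lock only for the duration of the CPU activity, rather than for the lifetime of the foreground service, is what keeps sessions under the excessive wake lock threshold.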

Third party libraries acquiring wake locks

It is common for an app to discover that it is flagged for excessive wake locks held by a third-party SDK or system API acting on its behalf. To identify and resolve these wake locks, we recommend the following steps:

  • Check Android vitals: Find the exact name of the offending wake lock in the excessive partial wake locks dashboard. Cross-reference this name with the Identify wake locks created by other APIs guidance to see if it was created by a known system API or Jetpack library. If it is, you may need to optimize your usage of the API and can refer to the recommended guidance.

  • Capture a System Trace: If the wake lock cannot be easily identified, reproduce the wake lock issue locally using a system trace and inspect it with the Perfetto UI. You can learn more about how to do this in the Debugging other types of excessive wake locks section of this blog post.

  • Evaluate Alternatives: If an inefficient third-party library is responsible and cannot be configured to respect battery life, consider communicating the issue with the SDK’s owners, finding an alternative SDK or building the functionality in-house.

Common wake lock scenarios

Below is a breakdown of some of the specific use cases we have reviewed, along with the recommended path to optimize your wake lock implementation.

User-Initiated Upload or Download

Example use cases: 

  • Video streaming apps where the user triggers a download of a large file for offline access.

  • Media backup apps where the user triggers uploading their recent photos via a notification prompt.

How to reduce wake locks: 

  • Do not acquire a manual wake lock. Instead, use the User-Initiated Data Transfer (UIDT) API. This is the designated path for long running data transfer tasks initiated by the user, and it is exempted from excessive wake lock calculations.
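User-initiated data transfers are scheduled through JobScheduler on Android 14 (API level 34) and above. A sketch of the setup, where `DownloadJobService` is a hypothetical JobService and the app is assumed to declare the RUN_USER_INITIATED_JOBS permission:

```kotlin
// Mark the job as user-initiated instead of holding a manual wake lock;
// UIDT jobs are exempt from the excessive wake lock calculation.
val jobInfo = JobInfo.Builder(
        DOWNLOAD_JOB_ID, // illustrative constant
        ComponentName(context, DownloadJobService::class.java))
    .setUserInitiated(true)
    .setRequiredNetworkType(JobInfo.NETWORK_TYPE_ANY) // UIDT requires a network constraint
    .build()

val scheduler = context.getSystemService(JobScheduler::class.java)
scheduler.schedule(jobInfo)
```

The transfer itself then runs in the JobService, with the system managing device wakefulness for the duration of the job.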

One-Time or Periodic Background Syncs

Example use cases: 

  • An app performs periodic background syncs to fetch data for offline access. 

  • Pedometer apps that fetch step count periodically.

How to reduce wake locks: 

  • Do not acquire a manual wake lock. Use WorkManager configured for one-time or periodic work.  WorkManager respects system health by batching tasks and has a minimum periodic interval (15 minutes), which is generally sufficient for background updates. 

  • If you identify wake locks created by WorkManager or JobScheduler with high wake lock usage, it may be because you’ve misconfigured your worker to not complete in certain scenarios. Consider analyzing the worker stop reasons, particularly if you’re seeing high occurrences of STOP_REASON_TIMEOUT.

workManager.getWorkInfoByIdFlow(syncWorker.id)
    .collect { workInfo ->
        if (workInfo != null) {
            // stopReason requires WorkManager 2.9+ and is populated on Android 12+
            logStopReason(syncWorker.id, workInfo.stopReason)
        }
    }
  • In addition to logging worker stop reasons, refer to our documentation on debugging your workers. Also, consider collecting and analyzing system traces to understand when wake locks are acquired and released.

  • Finally, check out our case study with WHOOP, where they were able to discover an issue with configuration of their workers and reduce their wake lock impact significantly.
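A periodic sync along the lines described above might look like the following. The worker class (`SyncWorker`) and the unique-work name are illustrative, not prescribed:

```kotlin
// Batched, wake-lock-free background sync at WorkManager's 15-minute minimum.
val syncRequest = PeriodicWorkRequestBuilder<SyncWorker>(15, TimeUnit.MINUTES)
    .setConstraints(
        // Defer work until a network is available instead of polling.
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    )
    .build()

WorkManager.getInstance(context).enqueueUniquePeriodicWork(
    "periodic-sync",                  // illustrative unique-work name
    ExistingPeriodicWorkPolicy.KEEP,  // avoid re-enqueueing duplicate chains
    syncRequest
)
```

WorkManager handles device wakefulness for the duration of the worker, so no manual wake lock is needed around the sync itself.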

Bluetooth Communication

Example use cases: 

  • Companion device app prompts the user to pair their Bluetooth external device.

  • Companion device app listens for hardware events on an external device and user visible change in notification.

  • Companion device app’s user initiates a file transfer between the mobile and bluetooth device.

  • Companion device app performs occasional firmware updates to an external device via Bluetooth.

How to reduce wake locks: 

  • Use companion device pairing to pair Bluetooth devices to avoid acquiring a manual wake lock during Bluetooth pairing. 

  • Consult the Communicate in the background guidance to understand how to do background Bluetooth communication. 

  • Using WorkManager is often sufficient if there is no user impact to a delayed communication. If a manual wake lock is deemed necessary, only hold the wake lock for the duration of Bluetooth activity or processing of the activity data.

Location Tracking

Example use cases: 

  • Fitness apps that cache location data for later upload such as plotting running routes

  • Food delivery apps that pull location data at a high frequency to update progress of delivery in a notification or widget UI.

How to reduce wake locks: 

  • Consult our guidance to Optimize location usage. Consider implementing timeouts, leveraging location request batching, or utilizing passive location updates to ensure battery efficiency.

  • When requesting location updates using the FusedLocationProvider or LocationManager APIs, the system automatically triggers a device wake-up during the location event callback. This brief, system-managed wake lock is exempted from excessive partial wake lock calculations.

  • Avoid acquiring a separate, continuous wake lock for caching location data, as this is redundant. Instead, persist location events in memory or local storage and leverage WorkManager to process them at periodic intervals.

override fun onCreate(savedInstanceState: Bundle?) {
    locationCallback = object : LocationCallback() {
        override fun onLocationResult(locationResult: LocationResult?) {
            locationResult ?: return
            // System wakes up CPU for short duration
            for (location in locationResult.locations){
                // Store data in memory to process at another time
            }
        }
    }
}

High Frequency Sensor Monitoring

Example use cases: 

  • Pedometer apps that passively collect steps, or distance traveled. 

  • Safety apps that monitor the device sensors for rapid changes in real time, to provide features such as crash detection or fall detection.

How to reduce wake locks: 

  • If using SensorManager, reduce usage to periodic intervals and only when the user has explicitly granted access through a UI interaction. High frequency sensor monitoring can drain the battery heavily due to the number of CPU wake-ups and processing that occurs.

  • If you’re tracking step counts or distance traveled, rather than using SensorManager, leverage Recording API or consider utilizing Health Connect to access historical and aggregated device step counts to capture data in a battery-efficient manner.

  • If you’re registering a sensor with SensorManager, specify a maxReportLatencyUs of 30 seconds or more to leverage sensor batching to minimize the frequency of CPU interrupts. When the device is subsequently woken by another trigger such as a user interaction, location retrieval, or a scheduled job, the system will immediately dispatch the cached sensor data.

val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)

sensorManager.registerListener(this,
                 accelerometer,
                 samplingPeriodUs, // How often to sample data
                 maxReportLatencyUs // Key for sensor batching 
              )

  • If your app requires both location and sensor data, synchronize their event retrieval and processing. By piggybacking sensor readings onto the brief wake lock the system holds for location updates, you avoid needing a wake lock to keep the CPU awake. Use a worker or a short-duration wake lock to handle the upload and processing of this combined data.

Remote Messaging

Example use cases: 

  • Video or sound monitoring companion apps that need to monitor events that occur on an external device connected using a local network.

  • Messaging apps that maintain a network socket connection with the desktop variant.

How to reduce wake locks: 

  • If the network events can be processed on the server side, use FCM to receive information on the client. You may choose to schedule an expedited worker if additional processing of FCM data is required. 

  • If events must be processed on the client side via a socket connection, a wake lock is not needed to listen for event interrupts. When data packets arrive at the Wi-Fi or Cellular radio, the radio hardware triggers a hardware interrupt in the form of a kernel wake lock. You may then choose to schedule a worker or acquire a wake lock to process the data.

  • For example, if you’re using ktor-network to listen for data packets on a network socket, you should only acquire a wake lock when packets have been delivered to the client and need to be processed.


val readChannel = socket.openReadChannel()
while (!readChannel.isClosedForRead) {
    // CPU can safely sleep here while waiting for the next packet
    val packet = readChannel.readRemaining(1024) 
    if (!packet.isEmpty) {
         // Data arrived: the system woke the CPU. Keep it awake via a manual
         // wake lock (urgent) or by scheduling a worker (non-urgent).
         performWorkWithWakeLock { 
              val data = packet.readBytes()
              // Additional logic to process data packets
         }
    }
}
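For the server-side path described above (letting FCM deliver the event and scheduling an expedited worker for any additional processing), a minimal sketch might look like the following; MessageHandlerWorker is a hypothetical worker class, not a library API:

```kotlin
class EventMessagingService : FirebaseMessagingService() {
    override fun onMessageReceived(message: RemoteMessage) {
        // No wake lock needed here: FCM delivers the event, and WorkManager
        // manages CPU wakefulness while the worker runs.
        val request = OneTimeWorkRequestBuilder<MessageHandlerWorker>()
            // Run as soon as possible; degrade gracefully when out of quota.
            .setExpedited(OutOfQuotaPolicy.RUN_AS_NON_EXPEDITED_WORK_REQUEST)
            .setInputData(workDataOf("payload" to message.data.toString()))
            .build()
        WorkManager.getInstance(this).enqueue(request)
    }
}
```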

Summary

By adopting these recommended solutions for common use cases like background syncs, location tracking, sensor monitoring, and network communication, developers can reduce unnecessary wake lock usage. To continue learning, read our other technical blog post or watch our technical video on how to discover and debug wake locks: Optimize your app battery using Android vitals wake lock metric. Also consult our updated wake lock documentation. To help us continue improving our technical resources, please share any additional feedback on our guidance in our documentation feedback survey.

The post Battery Technical Quality Enforcement is Here: How to Optimize Common Wake Lock Use Cases appeared first on InShot Pro.

How WHOOP decreased excessive partial wake lock sessions by over 90% https://theinshotproapk.com/how-whoop-decreased-excessive-partial-wake-lock-sessions-by-over-90/ Wed, 04 Mar 2026 18:00:00 +0000


Posted by Breana Tate, Developer Relations Engineer, Mayank Saini, Senior Android Engineer, Sarthak Jagetia, Senior Android Engineer and Manmeet Tuteja, Android Engineer II

Building an Android app for a wearable means the real work starts when the screen turns off. WHOOP helps members understand how their body responds to training, recovery, sleep, and stress, and for the many WHOOP members on Android, reliable background syncing and connectivity are what make those insights possible.


Earlier this year, Google Play released a new metric in Android vitals: Excessive partial wake locks. This metric measures the percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours in a 24-hour period. The aim of this metric is to help you identify and address possible sources of battery drain, which is crucial for delivering a great user experience.


Beginning March 1, 2026, apps that do not meet the quality threshold may be excluded from Google Play discovery surfaces. A warning may also be placed on the Google Play Store listing, indicating that the app might use more battery than expected.


According to Mayank Saini, Senior Android Engineer at WHOOP, this “presented the team with an opportunity to raise the bar on Android efficiency” after Android vitals flagged the app’s excessive partial wake lock rate at 15%, well above the recommended 5% threshold.



The team viewed the Android vitals metric as a clear signal that their background work was holding the CPU awake longer than necessary. Resolving this would allow them to continue to deliver a great user experience while simultaneously decreasing wasted background time and maintaining reliable and timely Bluetooth connectivity and syncing.


Identifying the issue


To figure out where to start, the team first turned to Android vitals for insight into which wake locks were affecting the metric. The Android vitals excessive partial wake locks dashboard pointed to one of their WorkManager workers (shown in the dashboard as androidx.work.impl.background.systemjob.SystemJobService) as the biggest contributor. To support the WHOOP “always-on experience”, the app uses WorkManager for background tasks like periodic syncing and delivering recurring updates to the wearable.


While the team was aware that WorkManager acquires a wake lock while executing tasks in the background, they had no visibility into how all of their background work (beyond just WorkManager) was distributed until the excessive partial wake locks metric arrived in Android vitals.


With the dashboard identifying WorkManager as the main contributor, the team was then able to focus their efforts on identifying which of their workers was contributing the most and work towards resolving the issue.


Making use of internal metrics and data to better narrow down the cause


WHOOP already had internal infrastructure set up to monitor WorkManager metrics. They periodically monitor:

  1. Average Runtime: How long does the worker run?

  2. Timeouts: How often is the worker timing out instead of completing?

  3. Retries: How often does the worker retry if the work timed out or failed?

  4. Cancellations: How often was the work cancelled?


Tracking more than just worker successes and failures gives the team visibility into their work’s efficiency.
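A minimal sketch of this kind of instrumentation (hypothetical, not WHOOP’s actual implementation; report() stands in for whatever metrics pipeline an app uses) wraps doWork() so every subclass records its runtime and attempt count:

```kotlin
abstract class InstrumentedWorker(appContext: Context, params: WorkerParameters) :
    CoroutineWorker(appContext, params) {

    /** Subclasses put their real work here. */
    protected abstract suspend fun doInstrumentedWork(): Result

    /** Hypothetical hook into an internal metrics pipeline. */
    protected abstract fun report(worker: String, runtimeMs: Long, attempt: Int)

    override suspend fun doWork(): Result {
        val start = SystemClock.elapsedRealtime()
        try {
            return doInstrumentedWork()
        } finally {
            // runtimeMs feeds average-runtime tracking; runAttemptCount
            // reveals retries after timeouts or failures.
            report(
                javaClass.simpleName,
                SystemClock.elapsedRealtime() - start,
                runAttemptCount
            )
        }
    }
}
```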


The internal metrics flagged high average runtime for a select few workers, enabling them to narrow the investigation down even further. 


In addition to their internal metrics, the team also used Android Studio’s Background Task Inspector to inspect and debug the workers of interest, with a specific focus on associated wake locks, to align with the metric flagged in Android vitals.


Investigation: Distinguishing between worker variants


WHOOP uses both one-time and periodic scheduling for some workers. This allows the app to reuse the same Worker logic for identical tasks with the same success criteria, differing only in timing.


Using their internal metrics made it possible to narrow their search to a specific worker, but they couldn’t tell whether the bug occurred in the one-time variant, the periodic variant, or both. So they rolled out an update using WorkManager’s setTraceTag method to distinguish between the one-time and periodic variants of the same Worker.
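A sketch of that tagging (SyncWorker and the tag strings are illustrative), using WorkManager’s setTraceTag, available in WorkManager 2.10 and later:

```kotlin
// One-time variant of the shared Worker logic
val oneTimeSync = OneTimeWorkRequestBuilder<SyncWorker>()
    .setTraceTag("SyncWorker-OneTime") // surfaces in tracing/vitals tooling
    .build()

// Periodic variant of the same Worker, distinguished only by its tag
val periodicSync = PeriodicWorkRequestBuilder<SyncWorker>(15, TimeUnit.MINUTES)
    .setTraceTag("SyncWorker-Periodic")
    .build()
```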


This extra detail would allow them to definitively identify which Worker variant (periodic or one-time) was contributing the most to sessions with excessive partial wake locks. However, the team was surprised when the data revealed that neither variant appeared to be contributing more than the other.


Manmeet Tuteja, Android Engineer II at WHOOP said “that split also helped us confirm the issue was happening in both variants, which pointed away from scheduling configuration and toward a shared business logic problem inside the worker implementation.”


Diving deeper on worker behavior and fixing the root cause


Knowing that the issue lay in logic within the worker, the team re-examined the behavior of the workers flagged during their investigation. Specifically, they looked for instances in which work might get stuck and never complete.


All of this culminated in finding the root cause of the excessive wake locks:


A CoroutineWorker that was designed to wait for a connection to the WHOOP sensor before proceeding. 


If the work started with no sensor connected, whoopSensorFlow, which indicates whether the sensor is connected, held no cached value. The SensorWorker didn’t treat this as an early-exit condition and kept running, effectively waiting indefinitely for a connection. As a result, WorkManager held a partial wake lock until the work timed out, leading to high background wake lock usage and frequent, unwanted rescheduling of the SensorWorker.


To address this, the WHOOP team updated the worker logic to check the connection status before attempting to execute the core business logic.


If the sensor isn’t available, the worker exits, avoiding a timeout scenario and releasing the wake lock. The following code snippet shows the solution:

class SensorWorker(appContext: Context, params: WorkerParameters) : CoroutineWorker(appContext, params) {
    override suspend fun doWork(): Result {
        ...
        // Check the sensor state: process the cached reading if the sensor is
        // connected, otherwise fail fast so the wake lock is released.
        return whoopSensorFlow.replayCache
            .firstOrNull()
            ?.let { cachedData ->
                processSensorData(cachedData)
                Result.success()
            } ?: Result.failure()
    }
}


Achieving a 90% decrease in sessions with excessive partial wake locks


After rolling out the fix, the team continued to monitor the Android vitals dashboard to confirm the impact of the changes. 


Ultimately, WHOOP saw their excessive partial wake lock percentage drop from 15% to less than 1% just 30 days after implementing the changes to their Worker.


As a result of the changes, the team has seen fewer instances of work timing out without completing, resulting in lower average runtimes. 


The WHOOP team’s advice to other developers who want to improve their background work’s efficiency:


Get Started

If you’re looking to reduce your app’s excessive partial wake locks or improve worker efficiency, view your app’s excessive partial wake locks metric in Android vitals, and review the wake locks documentation for more best practices and debugging strategies.

The post How WHOOP decreased excessive partial wake lock sessions by over 90% appeared first on InShot Pro.
