The Third Beta of Android 17

Thu, 26 Mar 2026


Posted by Matthew McCullough, VP of Product Management, Android Developer

Android 17 has officially reached platform stability today with Beta 3. That means that the API surface is locked; you can perform final compatibility testing and push your Android 17-targeted apps to the Play Store. In addition, Beta 3 brings a host of new capabilities to help you build better, more secure, and highly integrated applications.

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it’s even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17.

To test, install your production app (or a test app that uses your library or engine) onto a device or emulator running Android 17 Beta 3, via Google Play or other means. Work through all of your app’s flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each Android release contains platform changes that improve privacy, security, and the overall user experience, and these changes can affect your apps. Here are some changes to focus on:

  • Resizability on large screens: Once you target Android 17, you can no longer opt out of having your app’s orientation, resizability, and aspect ratio restrictions ignored on large screens.
  • Dynamic code loading: If your app targets Android 17 or higher, the Safer Dynamic Code Loading (DCL) protection introduced in Android 14 for DEX and JAR files now extends to native libraries. All native files loaded using System.load() must be marked as read-only. Otherwise, the system throws UnsatisfiedLinkError.
  • Enable CT by default: Certificate transparency (CT) is enabled by default. (On Android 16, CT is available but apps had to opt in.)
  • Local network protections: Apps targeting Android 17 or higher have local network access blocked by default. Switch to privacy-preserving pickers where possible, and use the new ACCESS_LOCAL_NETWORK permission for broad, persistent access.
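If your app does genuinely need broad, persistent local network access, the manifest declaration would look something like the sketch below. The permission name is taken from the change note above; verify the exact string against the final Android 17 documentation.

```xml
<!-- Assumed manifest declaration for the new local network permission -->
<uses-permission android:name="android.permission.ACCESS_LOCAL_NETWORK" />
```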

Media and camera enhancements

Photo Picker customization options

Android now allows you to tailor the visual presentation of the photo picker to better complement your app’s user interface. By leveraging the new PhotoPickerUiCustomizationParams API, you can modify the grid view aspect ratio from the standard 1:1 square to a 9:16 portrait display. This flexibility extends to both the ACTION_PICK_IMAGES intent and the embedded photo picker, enabling you to maintain a cohesive aesthetic when users interact with media.

This is all part of our effort to help make the privacy-preserving Android photo picker fit seamlessly with your app experience. Learn more about how you can embed the photo picker directly into your app for the most native experience.

val params = PhotoPickerUiCustomizationParams.Builder()
    .setAspectRatio(PhotoPickerUiCustomizationParams.ASPECT_RATIO_PORTRAIT_9_16)
    .build()

val intent = Intent(MediaStore.ACTION_PICK_IMAGES).apply {
    putExtra(MediaStore.EXTRA_PICK_IMAGES_UI_CUSTOMIZATION_PARAMS, params)
}

startActivityForResult(intent, REQUEST_CODE)

Support for the RAW14 image format: Android 17 introduces support for the RAW14 image format — the de facto industry standard for high-end digital photography — via the new ImageFormat.RAW14 constant. RAW14 is a single-channel, 14-bit-per-pixel format that uses a densely packed layout where every four consecutive pixels are packed into seven bytes.
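As a rough illustration of that packing math (4 pixels × 14 bits = 56 bits = 7 bytes), here is a sketch that unpacks one 7-byte group into four pixel values. The bit order within the group (most significant pixel first) is an assumption, not taken from the RAW14 documentation:

```kotlin
// Unpack one 7-byte RAW14 group into four 14-bit pixel values.
// Assumes MSB-first packing; check the ImageFormat.RAW14 docs for the real layout.
fun unpackRaw14Group(bytes: ByteArray): IntArray {
    require(bytes.size == 7) { "RAW14 packs 4 pixels into exactly 7 bytes" }
    // Assemble the 7 bytes into a single 56-bit value.
    var packed = 0L
    for (b in bytes) packed = (packed shl 8) or (b.toLong() and 0xFFL)
    // Slice out four 14-bit fields, most significant pixel first.
    return IntArray(4) { i -> ((packed shr (14 * (3 - i))) and 0x3FFFL).toInt() }
}
```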

Vendor-defined camera extensions: Android 17 adds vendor-defined extensions, enabling hardware partners to define and implement custom camera extension modes that give you access to the best and latest camera features, such as ‘Super Resolution’ or cutting-edge AI-driven enhancements. You can query for these modes using the isExtensionSupported(int) API.

Camera device type APIs: New Android 17 APIs allow you to query the underlying device type to identify if a camera is built-in hardware, an external USB webcam, or a virtual camera.

Bluetooth LE Audio hearing aid support

Android now includes a specific device category for Bluetooth Low Energy (BLE) Audio hearing aids. With the addition of the AudioDeviceInfo.TYPE_BLE_HEARING_AID constant, your app can now distinguish hearing aids from regular headsets.

val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
val devices = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
val isHearingAidConnected = devices.any { it.type == AudioDeviceInfo.TYPE_BLE_HEARING_AID }

Granular audio routing for hearing aids

Android 17 allows users to independently manage where specific system sounds are played. They can choose to route notifications, ringtones, and alarms to connected hearing aids or the device’s built-in speaker.

Extended HE-AAC software encoder

Android 17 introduces a system-provided Extended HE-AAC software encoder. This encoder supports both low and high bitrates using unified speech and audio coding. You can access this encoder via the MediaCodec API using the name c2.android.xheaac.encoder or by querying for the audio/mp4a-latm MIME type.

val encoder = MediaCodec.createByCodecName("c2.android.xheaac.encoder")
val format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1)
format.setInteger(MediaFormat.KEY_BIT_RATE, 24000)
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectXHE)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)

Performance and Battery Enhancements

Reduce wakelocks with listener support for allow-while-idle alarms

Android 17 introduces a new variant of AlarmManager.setExactAndAllowWhileIdle that accepts an OnAlarmListener instead of a PendingIntent. This new callback-based mechanism is ideal for apps that currently rely on continuous wakelocks to perform periodic tasks, such as messaging apps maintaining socket connections.

val alarmManager = getSystemService(AlarmManager::class.java)
val listener = AlarmManager.OnAlarmListener {
    // Do work here
}

alarmManager.setExactAndAllowWhileIdle(
    AlarmManager.ELAPSED_REALTIME_WAKEUP,
    SystemClock.elapsedRealtime() + 60000,
    listener,
    null
)

Privacy updates

System-provided Location Button

Android is introducing a system-rendered location button that you will be able to embed directly into your app’s layout using an Android Jetpack library. When a user taps this system button, your app is granted precise location access for the current session only. To implement this, you need to declare the USE_LOCATION_BUTTON permission.

Discrete password visibility settings for touch and physical keyboards

This feature splits the existing “Show passwords” system setting into two distinct user preferences: one for touch-based inputs and another for physical (hardware) keyboard inputs. Characters entered via physical keyboards are now hidden immediately by default.

val isPhysical = event.source and InputDevice.SOURCE_KEYBOARD == InputDevice.SOURCE_KEYBOARD
val shouldShow = android.text.ShowSecretsSetting.shouldShowPassword(context, isPhysical)

Security

Enforced read-only dynamic code loading

To improve security against code injection attacks, Android now enforces that dynamically loaded native libraries must be read-only. If your app targets Android 17 or higher, all native files loaded using System.load() must be marked as read-only beforehand.

val libraryFile = File(context.filesDir, "my_native_lib.so")
// Mark the file as read-only before loading to comply with Android 17+ security requirements
libraryFile.setReadOnly()

System.load(libraryFile.absolutePath)

Post-Quantum Cryptography (PQC) Hybrid APK Signing

To prepare for future advancements in quantum computing, Android is introducing support for Post-Quantum Cryptography (PQC) through the new v3.2 APK Signature Scheme. This scheme utilizes a hybrid approach, combining a classical signature with an ML-DSA signature.

User experience and system UI

Better support for widgets on external displays

This feature improves the visual consistency of app widgets that are shown on connected or external displays with different pixel densities by honoring sizes specified in DP or SP units.

val options = appWidgetManager.getAppWidgetOptions(appWidgetId)
val displayId = options.getInt(AppWidgetManager.OPTION_APPWIDGET_DISPLAY_ID)

val remoteViews = RemoteViews(context.packageName, R.layout.widget_layout)
remoteViews.setViewPadding(
    R.id.container,
    16f, 8f, 16f, 8f,
    TypedValue.COMPLEX_UNIT_DIP
)

Hidden app labels on the home screen


Android now provides a user setting to hide app names (labels) on the home screen workspace. Ensure your app icon is distinct and recognizable.

Desktop Interactive Picture-in-Picture

Unlike traditional Picture-in-Picture, these pinned windows remain interactive while staying always-on-top of other application windows in desktop mode.

val appTask: ActivityManager.AppTask = activity.getSystemService(ActivityManager::class.java).appTasks[0]
appTask.requestWindowingLayer(
    ActivityManager.AppTask.WINDOWING_LAYER_PINNED,
    context.mainExecutor,
    object : OutcomeReceiver<Int, Exception> {
        override fun onResult(result: Int) {
            if (result == ActivityManager.AppTask.WINDOWING_LAYER_REQUEST_GRANTED) {
                // Task successfully moved to pinned layer
            }
        }
        override fun onError(error: Exception) {}
    }
)

Redesigned screen recording toolbar

Core functionality

VPN app exclusion settings

By using the new ACTION_VPN_APP_EXCLUSION_SETTINGS Intent, your app can launch a system-managed Settings screen where users can select applications to bypass the VPN tunnel.

val intent = Intent(Settings.ACTION_VPN_APP_EXCLUSION_SETTINGS)
if (intent.resolveActivity(packageManager) != null) {
    startActivity(intent)
}

OpenJDK 25 and 21 API updates

This update brings extensive features and refinements from OpenJDK 21 and OpenJDK 25, including the latest Unicode support and enhanced SSL support for named groups in TLS.

Get started with Android 17

You can enroll any supported Pixel device or use the 64-bit system images with the Android Emulator.

  • Compile against the new SDK and report issues on the feedback page.
  • Test your current app for compatibility and learn whether your app is affected by changes in Android 17.

For complete information, visit the Android 17 developer site.

Instagram and Facebook deliver instant playback and boost user engagement with Media3 PreloadManager

Thu, 05 Mar 2026


Posted by Mayuri Khinvasara Khabya, Developer Relations Engineer (LinkedIn and X)





In the dynamic world of social media, user attention is won or lost quickly. Meta apps (Facebook and Instagram) are among the world’s largest social platforms and serve billions of users globally. For Meta, delivering videos seamlessly isn’t just a feature, it’s the core of their user experience. Short-form videos, particularly Facebook Newsfeed and Instagram Reels, have become a primary driver of engagement. They enable creative expression and rapid content consumption, connecting and entertaining people around the world.

This blog post takes you through the journey of how Meta transformed video playback for billions by delivering true instant playback.

The latency gap in short form videos


Short-form video leads to fast-paced interactions as users quickly scroll through their feeds. Delivering a seamless transition between videos in an ever-changing feed introduces unique hurdles for instantaneous playback. Hence, Meta needed solutions that go beyond traditional disk caching and standard reactive playback strategies.


The path forward with Media3 PreloadManager


To address the shift in consumption habits driven by the rise of short-form content, and the limitations of traditional long-form playback architecture, Jetpack Media3 introduced PreloadManager. This component allows developers to move beyond disk caching, offering granular control and customization to keep media ready in memory before the user hits play. Read this blog series to understand the technical details of media playback with PreloadManager.


How Meta achieved true instant playback

Existing Complexities


Previously, Meta used a combination of warmup (to get players ready) and prefetch (to cache content on disk) for video delivery. While these methods helped improve network efficiency, they introduced significant challenges. Warmup required instantiating multiple player instances sequentially, which consumed significant memory and limited preloading to only a few videos. This high resource demand meant a more scalable, robust solution was needed to deliver the instant playback expected on modern, fast-scrolling social feeds.


Integrating Media3 PreloadManager


To achieve truly instant playback, Meta’s Media Foundation Client team integrated the Jetpack Media3 PreloadManager into Facebook and Instagram. They chose the DefaultPreloadManager to unify their preloading and playback systems. This integration required refactoring Meta’s existing architecture to enable efficient resource sharing between the PreloadManager and ExoPlayer instances. This strategic shift provided a key architectural advantage: the ability to parallelize preloading tasks and manage many videos using a single player instance. This dramatically increased preloading capacity while eliminating the high memory complexities of their previous approach.
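A minimal sketch of that integration, based on the public Media3 DefaultPreloadManager API (version 1.6 and later), might look like the following. The `targetPreloadStatusControl` (which decides how much of each item to buffer based on its ranking data), `feedItems`, and `currentItem` are app-specific placeholders; this is not Meta’s actual code.

```kotlin
// Build the preload manager and a player that shares its components,
// enabling the single-player-instance resource sharing described above.
val builder = DefaultPreloadManager.Builder(context, targetPreloadStatusControl)
val preloadManager = builder.build()
val player = builder.buildExoPlayer()

// Register upcoming feed items; rankingData (here, feed position) drives priority.
feedItems.forEachIndexed { index, mediaItem ->
    preloadManager.add(mediaItem, /* rankingData= */ index)
}
preloadManager.invalidate() // start preloading in priority order

// When the user lands on an item, hand the (possibly preloaded) source to the player.
preloadManager.getMediaSource(currentItem)?.let { source ->
    player.setMediaSource(source)
    player.prepare()
    player.play()
}
```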





Optimization and Performance Tuning

The team then performed extensive testing and iterations to optimize performance across Meta’s diverse global device ecosystem. Initial aggressive preloading sometimes caused issues, including increased memory usage and scroll performance slowdowns. To solve this, they fine-tuned the implementation by using careful memory measurements, considering device fragmentation, and tailoring the system to specific UI patterns.


Fine tuning implementation to specific UI patterns

Meta applied different preloading strategies and tailored the behavior to match the specific UI patterns of each app:

  • Facebook Newsfeed: The UI prioritizes the video currently coming into view. The manager preloads only the current video to ensure it starts the moment the user pauses their scroll. This “current-only” focus minimizes data and memory footprints in an environment where users may see many static posts between videos. While the system is presently designed to preload just the video in view, it can be adjusted to also preload upcoming (future) videos. 

  • Instagram Reels: This is a pure video environment where users swipe vertically. For this UI, the team implemented an “adjacent preload” strategy. The PreloadManager keeps the videos immediately after the current Reel ready in memory. This bi-directional approach ensures that whether a user swipes up or down, the transition remains instant and smooth. The result was a dramatic improvement in the Quality of Experience (QoE) including improvements in Playback Start and Time to First Frame for the user.
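The windowing logic behind the “adjacent preload” strategy can be sketched in plain Kotlin. The function name and default window size are illustrative, not Meta’s implementation:

```kotlin
// Given the item the user is currently watching, pick the neighboring
// feed indices to keep preloaded so both swipe directions start instantly.
fun adjacentPreloadIndices(currentIndex: Int, itemCount: Int, window: Int = 1): List<Int> {
    return ((currentIndex - window)..(currentIndex + window))
        .filter { it in 0 until itemCount && it != currentIndex }
}
```

With window = 0 the helper returns no additional indices, which corresponds to the “current-only” focus used for the Newsfeed, where only the in-view video is prepared.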


Scaling for a diverse global device ecosystem

Scaling a high-performance video stack across billions of devices requires more than just aggressive preloading; it requires intelligence. Meta faced initial challenges with memory pressure and scroll lag, particularly on mid-to-low-end hardware. To solve this, they built a Device Stress Detection system around the Media3 implementation. The apps now monitor I/O and CPU signals in real-time. If a device is under heavy load, preloading is paused to prioritize UI responsiveness.


This device-aware optimization ensures that the benefit of instant playback doesn’t come at the cost of system stability, allowing even users on older hardware to experience a smoother, uninterrupted feed.



Architectural wins and code health

Beyond the user-facing metrics, the migration to Media3 PreloadManager offered long-term architectural benefits. While the integration and tuning process needed multiple iterations to balance performance, the resulting codebase is more maintainable. The team found that the PreloadManager API integrated cleanly with the existing Media3 ecosystem, allowing for better resource sharing. For Meta, the adoption of Media3 PreloadManager was a strategic investment in the future of video consumption.


By adopting preloading and adding device-intelligent gates, they successfully increased total watch time on their apps and improved the overall engagement of their global community. 


Resulting impact on Instagram and Facebook


The proactive architecture delivered immediate and measurable improvements across both platforms. 


  • Facebook experienced faster playback starts, decreased playback stall rates, and a reduction in bad sessions (such as rebuffering, delayed start times, or lower quality), which overall resulted in higher watch time.


  • Instagram saw faster playback starts and an increase in total watch time. Reducing join latency (the interval from the user’s action to the display of the first frame) directly increased engagement metrics, and with fewer interruptions from buffering, users watched more content.


Key engineering learnings at scale


As media consumption habits evolve, the demand for instant experiences will continue to grow. Implementing proactive memory management and optimizing for scale and device diversity ensures your application can meet these expectations efficiently.


  • Prioritize intelligent preloading

Focus on delivering a reliable experience by minimizing stutters and loading times through preloading. Rather than simple disk caching, leveraging memory-level preloading ensures that content is ready the moment a user interacts with it.


  • Align your implementation with UI patterns

Customize preloading behavior to fit your app’s UI. For example, use a “current-only” focus for mixed feeds like Facebook to save memory, and an “adjacent preload” strategy for vertical environments like Instagram Reels.

  • Leverage Media3 for long-term code health

Integrating with Media3 APIs rather than a custom caching solution allows for better resource sharing between the player and the PreloadManager, enabling you to manage multiple videos with a single player instance. The result is a future-proof codebase that engineering teams can maintain and optimize over time while benefiting from the latest feature updates.

  • Implement device aware optimizations

Broaden your market reach by testing on various devices, including mid-to-low-end models. Use real-time signals like CPU, memory, and I/O to adapt features and resource usage dynamically.
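As a concrete (and purely illustrative) version of that device-aware gating, the sketch below pauses preloading when load signals cross thresholds. The signal names and threshold values are hypothetical placeholders, not Meta’s:

```kotlin
// Simple device-stress gate: back off preloading under load so that
// UI responsiveness wins. Thresholds are illustrative placeholders.
data class DeviceSignals(val cpuLoad: Double, val memoryPressure: Double, val ioWait: Double)

fun shouldPreload(signals: DeviceSignals): Boolean =
    signals.cpuLoad < 0.8 && signals.memoryPressure < 0.75 && signals.ioWait < 0.5
```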

Learn More


To get started and learn more, visit 

Now you know the secrets for instant playback. Go try them out!


Supercharge your Android development with 6 expert tips for Gemini in Android Studio

Mon, 02 Mar 2026


Posted by Trevor Johns, Developer Relations Engineer








In January we announced Android Studio Otter 3 Feature Drop in stable, including Agent Mode enhancements and many other updates to provide more control and flexibility over using AI to help you build high quality Android apps. To help you get the most out of Gemini in Android Studio and all the new capabilities, we sat down with Google engineers and Google Developer Experts to gather their best practices for working with the latest features—including Agent mode and the New Project Assistant. Here are some useful insights to help you get the best out of your development:


  1. Build apps from scratch with the New Project Assistant


The New Project Assistant—now available in the latest Canary builds—integrates Gemini with Android Studio’s New Project wizard. By simply providing prompts and (optionally) design mockups, you can generate entire applications from scratch, including scaffolding, architecture, and Jetpack Compose layouts.


Integrated with the Android Emulator, it can deploy your build and “walk through” the app, making sure it’s functioning correctly and that the rendered screens actually match your vision. Additionally, you can use Agent Mode to then continue to work on the app and iterate, leveraging Gemini to refine your app to fit your vision.


Also, while this feature works with the default (no-cost) model, we highly recommend using this feature with an AI Studio API Key to access the latest models — like Gemini 3.1 Pro or 3.0 Flash — which excel at agentic workflows. Additionally, adding your API Key allows the New Project Assistant to use Nano Banana behind the scenes to help with ideating on UI design, improving the visual fidelity of the generated application! – Trevor Johns, Developer Relations Engineer.


Dialog for setting up a new project.


2. Ask the Agent to refine your code by providing it with ‘intentional’ contexts


When using Gemini Agents, the quality of the output is directly tied to the boundaries you set. Don’t just ask it to “fix this code”— be very intentional with the context that you provide it and be specific about what you want (and what you don’t). Improve the output by providing recent blogs or docs so the model can make accurate suggestions based on these.


Ask the Agent to simplify complex logic, ask whether it sees any fundamental problems with it, or even ask it to scan for security risks in areas where you feel uncertain. Being firm with your instructions—even telling the model “please do not invent things” in instances where you are using very new or experimental APIs—helps keep the AI focused on the outputs you are trying to achieve. – Alejandra Stamato, Android Google Developer Expert and Android Engineer at HubSpot.


3. Use documentation with Agent mode to provide context for new libraries


To prevent the model from hallucinating code for niche or brand-new libraries, leverage Android Studio’s Agent tools for accessing documentation: Search Android Docs and Fetch Android Docs. You can direct Gemini to search the Android Knowledge Base or specific documentation articles. The model can choose to use these tools if it thinks it’s missing some information, which is especially helpful when you use niche APIs, or ones that aren’t as common.


If you are certain you want the model to consult the documentation and want to make sure those tools are triggered, a good trick is to add something like ‘search the official documentation’ or ‘check the docs’ to your prompts. And for documentation on libraries which aren’t Android specific, install an MCP server that provides access to documentation, like Context7 (or something similar). – Jose Alcérreca, Android Developer Relations Engineer, Google.


4. Use AI to help build Agents.md files for using custom frameworks, libraries and design systems


To make sure the Agent uses your custom frameworks, libraries, and design systems, you have two options: 1) in settings, Android Studio allows you to specify rules to be followed when Gemini performs these actions for you; or 2) create AGENTS.md files in your application that specify how things should be done and act as guidance when the AI performs a task, covering specific frameworks, design systems, or specific ways of doing things (such as the exact architecture, and things to do or not do), written as standard bullet points to give the AI clear instructions.


Manage AGENTS.md files as context.


You can place an AGENTS.md file at the root of the project, and you can have them in different modules (or even subdirectories) of your project as well! The more context or guidance you make available while you work, the more the AI has to draw on. If you get stuck creating these AGENTS.md files, you can use AI to help build them, or to give you foundations based on the projects you have which you then edit, so you don’t have to start from scratch. – Joe Birch, Android Google Developer Expert and Staff Engineer at Buffer.
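A minimal AGENTS.md along those lines might look like this; the rules and module conventions are illustrative placeholders for your own:

```markdown
# Guidance for AI agents in this project

## Architecture
- Use MVVM with a single-activity, Jetpack Compose UI.
- ViewModels must not access data sources directly; go through use cases.

## Design system
- Prefer components from our design system module over raw Material components.

## Do / Don't
- Do write a unit test for every new use case.
- Don't add new third-party dependencies without flagging them for review.
```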


5. Offload the tedious tasks to Agent and save yourself time


You can get the Gemini in Android Studio agent to make tasks such as writing and reviewing faster. For example, it can help write commit messages, giving you a good summary which you can then review, saving you time. Additionally, get it to write tests; under your direction, the Agent can look at the other tests in your project and, just by looking at them, write a good test for you to run that follows best practices. Another good example of a tedious task is writing a new parser for a certain JSON format. Just give Gemini a few examples and it will get you started very quickly. – Diego Perez, Android Software Engineer, Google


6. Control what you are sharing with AI using simple opt-outs or commands, alongside paid models.


If you want to control what is shared with AI while on the no-cost plans, you can exclude some or all of your code from model training by adding an AI exclusions file (‘.aiexclude’) to your project. This file uses glob pattern matching similar to a .gitignore file, specifying sensitive directories or files that should be hidden from the AI. You can place .aiexclude files anywhere within the project and its VCS roots to control which files AI features are allowed to access.
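For instance, a hypothetical `.aiexclude` using gitignore-style globs (the paths are placeholders for your own sensitive files):

```
*.pem
secrets/
internal/licensing/**
```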


An example of an `.aiexclude` file in Android Studio.


Alternatively, in Android Studio settings, you can also opt out of context sharing either on a per project or per user basis (although this method limits the functionality of a number of features because the AI won’t see your code).


Remember, paid plans never use your code for model training. This includes both users using an AI Studio API Key, and businesses who are subscribed to Gemini Code Assist. – Trevor Johns, Developer Relations Engineer.


Hear more from the Android team and Google Developer Experts about Gemini in Android Studio in our recent fireside chat and download Android Studio to get started.


The Second Beta of Android 17

Thu, 26 Feb 2026


Posted by Matthew McCullough, VP Product Management, Android Developer

Today we’re releasing the second beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This update delivers a range of new capabilities, including the EyeDropper API and a privacy-preserving Contacts Picker. We’re also adding advanced ranging, cross-device handoff APIs, and more.

This release continues the shift in our release cadence, following this annual major SDK release in Q2 with a minor SDK update.

User Experience & System UI

Bubbles

Bubbles is a windowing mode feature that offers a new floating UI experience separate from the messaging bubbles API. Users can create an app bubble on their phone, foldable, or tablet by long-pressing an app icon on the launcher. On large screens, there is a bubble bar as part of the taskbar where users can organize, move between, and move bubbles to and from anchored points on the screen.


You should follow the guidelines for supporting multi-window mode to ensure your apps work correctly as bubbles.
EyeDropper API

A new system-level EyeDropper API allows your app to request a color from any pixel on the display without requiring sensitive screen capture permissions.


val eyeDropperLauncher = registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
  if (result.resultCode == Activity.RESULT_OK) {
    val color = result.data?.getIntExtra(Intent.EXTRA_COLOR, Color.BLACK)
    // Use the picked color in your app
  }
}

fun launchColorPicker() {
  val intent = Intent(Intent.ACTION_OPEN_EYE_DROPPER)
  eyeDropperLauncher.launch(intent)
}

Contacts Picker

A new system-level contacts picker via ACTION_PICK_CONTACTS grants temporary, session-based read access to only the specific data fields requested by the user, reducing the need for the broad READ_CONTACTS permission. It also allows for selections from the device’s personal or work profiles.




val contactPicker = rememberLauncherForActivityResult(StartActivityForResult()) {
    if (it.resultCode == RESULT_OK) {
        val uri = it.data?.data ?: return@rememberLauncherForActivityResult
        // Handle result logic
        processContactPickerResults(uri)
    }
}

val dataFields = arrayListOf(Email.CONTENT_ITEM_TYPE, Phone.CONTENT_ITEM_TYPE)
val intent = Intent(ACTION_PICK_CONTACTS).apply {
    putStringArrayListExtra(EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS, dataFields)
    putExtra(EXTRA_ALLOW_MULTIPLE, true)
    putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5)
}

contactPicker.launch(intent)

Easier pointer capture compatibility with touchpads

Previously, touchpads reported events in a very different way from mice when an app had captured the pointer, reporting the locations of fingers on the pad rather than the relative movements that would be reported by a mouse. This made it quite difficult to support touchpads properly in first-person games. Now, by default the system will recognize pointer movement and scrolling gestures when the touchpad is captured, and report them just like mouse events. You can still request the old, detailed finger location data by explicitly requesting capture in the new “absolute” mode.


// To request the new default relative mode (mouse-like events)
// This is the same as requesting with View.POINTER_CAPTURE_MODE_RELATIVE
view.requestPointerCapture()

// To request the legacy absolute mode (raw touch coordinates)
view.requestPointerCapture(View.POINTER_CAPTURE_MODE_ABSOLUTE)

Interactive Chooser resting bounds

By calling getInitialRestingBounds on Android’s ChooserSession, your app can identify the target position the Chooser occupies after animations and data loading are complete, enabling better UI adjustments.
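A brief sketch of the idea. Only ChooserSession and getInitialRestingBounds are named here; the Rect return type and the surrounding wiring are assumptions for illustration:

```kotlin
// Sketch only: assumes you already hold a ChooserSession from the
// interactive Chooser APIs, and that getInitialRestingBounds() returns
// a Rect describing the sheet's settled position.
fun onChooserSettled(session: ChooserSession, content: View) {
    val resting: Rect = session.getInitialRestingBounds()
    // Keep our own content visible above the settled Chooser sheet
    content.setPadding(0, 0, 0, resting.height())
}
```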

Connectivity & Cross-Device

Cross-device app handoff

A new Handoff API allows you to specify application state to be resumed on another device, such as an Android tablet. When opted in, the system synchronizes state via CompanionDeviceManager and displays a handoff suggestion in the launcher of the user’s nearby devices. This feature is designed to offer seamless task continuity, enabling users to pick up exactly where they left off in their workflow across their Android ecosystem. Critically, Handoff supports both native app-to-app transitions and app-to-web fallback, providing maximum flexibility and ensuring a complete experience even if the native app is not installed on the receiving device.

Advanced ranging APIs

We are adding support for two new ranging technologies:

  1. UWB DL-TDOA, which enables apps to use UWB for indoor navigation. This API surface is compliant with the FiRa (Fine Ranging) Consortium 4.0 DL-TDOA specification and enables privacy-preserving indoor navigation (avoiding tracking of the device by the anchor).

  2. Proximity Detection, which enables apps to use the new ranging specification being adopted by the Wi-Fi Alliance (WFA). This technology provides improved reliability and accuracy compared to the existing Wi-Fi Aware based ranging specification.

Data plan enhancements

To optimize media quality, your app can now retrieve carrier-allocated maximum data rates for streaming applications using getStreamingAppMaxDownlinkKbps and getStreamingAppMaxUplinkKbps.
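As a hedged sketch of how this might be consumed: the post names the two getters but not their host class, so here they are assumed to hang off TelephonyManager, and pickBitrate/player are hypothetical pieces of your streaming stack:

```kotlin
// Sketch under stated assumptions: getStreamingAppMaxDownlinkKbps and
// getStreamingAppMaxUplinkKbps are the APIs named in the release notes;
// their host class is assumed here.
fun configureStreamingQuality(telephony: TelephonyManager) {
    // Carrier-allocated caps for streaming apps, in kbps
    val maxDownlinkKbps = telephony.getStreamingAppMaxDownlinkKbps()
    val maxUplinkKbps = telephony.getStreamingAppMaxUplinkKbps()

    // Hypothetical helper: pick the highest rendition under the downlink cap
    player.setPreferredBitrateKbps(pickBitrate(maxDownlinkKbps))
}
```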

Core Functionality, Privacy & Performance

Local Network Access

Android 17 introduces the ACCESS_LOCAL_NETWORK runtime permission to protect users from unauthorized local network access. Because this falls under the existing NEARBY_DEVICES permission group, users who have already granted other NEARBY_DEVICES permissions will not be prompted again. By declaring and requesting this permission, your app can discover and connect to devices on the local area network (LAN), such as smart home devices or casting receivers. This prevents malicious apps from exploiting unrestricted local network access for covert user tracking and fingerprinting. Apps targeting Android 17 or higher will now have two paths to maintain communication with LAN devices: adopt system-mediated, privacy-preserving device pickers to skip the permission prompt, or explicitly request this new permission at runtime to maintain local network communication.
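A minimal sketch of the runtime-request path, using the standard ActivityResultContracts flow. The permission string is assumed to follow the usual android.permission naming; connectToLanDevice and showLocalNetworkRationale are hypothetical app functions:

```kotlin
// In an Activity or Fragment. ACCESS_LOCAL_NETWORK must also be declared
// in the manifest with <uses-permission>.
val localNetworkLauncher = registerForActivityResult(
    ActivityResultContracts.RequestPermission()
) { granted ->
    if (granted) connectToLanDevice() else showLocalNetworkRationale()
}

fun ensureLocalNetworkAccess() {
    // Assumed literal for Manifest.permission.ACCESS_LOCAL_NETWORK
    localNetworkLauncher.launch("android.permission.ACCESS_LOCAL_NETWORK")
}
```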

Time zone offset change broadcast

Android now provides a reliable broadcast intent, ACTION_TIMEZONE_OFFSET_CHANGED, triggered when the system’s time zone offset changes, such as during Daylight Saving Time transitions. This complements the existing broadcast intents ACTION_TIME_CHANGED and ACTION_TIMEZONE_CHANGED, which are triggered when the Unix timestamp changes and when the time zone ID changes, respectively.
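To illustrate the distinction with plain JVM Kotlin (no Android APIs involved): a zone's UTC offset can change across a DST transition while its zone ID stays the same, which is exactly the case the new broadcast covers.

```kotlin
import java.time.ZoneId
import java.time.ZonedDateTime

fun main() {
    val zone = ZoneId.of("America/New_York")
    // Noon the day before and the day after the 2026 spring-forward
    // transition (second Sunday of March)
    val beforeDst = ZonedDateTime.of(2026, 3, 7, 12, 0, 0, 0, zone)
    val afterDst = ZonedDateTime.of(2026, 3, 9, 12, 0, 0, 0, zone)
    println(beforeDst.offset) // -05:00 (EST)
    println(afterDst.offset)  // -04:00 (EDT)
    // Same zone ID, so ACTION_TIMEZONE_CHANGED would not fire here,
    // but ACTION_TIMEZONE_OFFSET_CHANGED would.
}
```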


NPU Management and Prioritization

Apps targeting Android 17 that need to directly access the NPU must declare FEATURE_NEURAL_PROCESSING_UNIT in their manifest to avoid being blocked from accessing the NPU. This includes apps that use the LiteRT NPU delegate, vendor-specific SDKs, as well as the deprecated NNAPI.
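For example, the manifest declaration might look like the following. The constant is PackageManager.FEATURE_NEURAL_PROCESSING_UNIT; the literal string below is an assumption and may differ in the final SDK:

```xml
<!-- AndroidManifest.xml: declare NPU usage so access is not blocked.
     Set required="true" only if your app cannot function without an NPU. -->
<uses-feature
    android:name="android.hardware.neural_processing_unit"
    android:required="false" />
```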


ICU 78 and Unicode 17 support

Core internationalization libraries have been updated to ICU 78, expanding support for new scripts, characters, and emoji blocks, and enabling direct formatting of time objects.

SMS OTP protection

Android is expanding its SMS OTP protection by automatically delaying access to SMS messages containing an OTP. Previously, the protection focused primarily on the SMS Retriever format: delivery of messages containing an SMS Retriever hash is delayed for most apps by three hours, with certain apps, such as the default SMS app and the app corresponding to the hash, exempt from the delay. This update extends the protection to all SMS messages containing an OTP. For most apps, such messages will only be accessible after a three-hour delay to help prevent OTP hijacking: the SMS_RECEIVED_ACTION broadcast will be withheld and SMS provider database queries will be filtered. The messages become available to these apps after the delay.


Delayed access to WebOTP format SMS messages

If the app has the permission to read SMS messages but is not the intended recipient of the OTP (as determined by domain verification), the WebOTP format SMS message will only be accessible after three hours have elapsed. This change is designed to improve user security by ensuring that only apps associated with the domain mentioned in the message can programmatically read the verification code. This change applies to all apps regardless of their target API level.

Delayed access to standard SMS messages with OTP

For SMS messages containing an OTP that do not use the WebOTP or SMS Retriever formats, the OTP SMS will only be accessible after three hours for most apps. This change only applies to apps that target Android 17 (API level 37) or higher.

Certain apps, such as the default SMS app, the assistant app, and connected device companion apps, will be exempt from this delay.

All apps that rely on reading SMS messages for OTP extraction should transition to using SMS Retriever or SMS User Consent APIs to ensure continued functionality.
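As a sketch of the recommended migration path, the Google Play services SMS Retriever flow looks roughly like this (receiver registration and OTP parsing are elided):

```kotlin
import android.content.Context
import com.google.android.gms.auth.api.phone.SmsRetriever

// Start listening for an incoming SMS Retriever-format message. The message
// itself is later delivered via a SmsRetriever.SMS_RETRIEVED_ACTION broadcast,
// where you extract the OTP from EXTRA_SMS_MESSAGE.
fun startOtpListener(context: Context) {
    SmsRetriever.getClient(context)
        .startSmsRetriever()
        .addOnSuccessListener {
            // Listening started: register a BroadcastReceiver for
            // SmsRetriever.SMS_RETRIEVED_ACTION before the OTP arrives.
        }
        .addOnFailureListener {
            // Could not start listening: fall back to manual OTP entry.
        }
}
```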

The Android 17 schedule

We’re going to be moving quickly from this Beta to our Platform Stability milestone, targeted for March. At this milestone, we’ll deliver final SDK/NDK APIs. From that time forward, your app can target SDK 37 and publish to Google Play to help you complete your testing and collect user feedback in the several months before the general availability of Android 17.


A year of releases

We plan for Android 17 to continue to get updates in a series of quarterly releases. The upcoming release in Q2 is the only one where we introduce planned app breaking behavior changes. We plan to have a minor SDK release in Q4 with additional APIs and features.





Get started with Android 17

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 3.

If you have Android 26Q1 Beta and would like to take the final stable release of 26Q1 and exit Beta, you need to ignore the over-the-air update to 26Q2 Beta 2 and wait for the release of 26Q1.

We’re looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you’re set up, here are some of the things you should do:

  • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.

  • Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it.

We’ll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information, visit the Android 17 developer site.

Join the conversation

As we move toward the general availability of Android 17 later this year, your feedback remains our most valuable asset. Whether you’re an early adopter on the Canary channel or an app developer testing on Beta 3, consider joining our communities and filing feedback. We’re listening.

The post The Third Beta of Android 17 appeared first on InShot Pro.

]]>
The Intelligent OS: Making AI agents more helpful for Android apps https://theinshotproapk.com/the-intelligent-os-making-ai-agents-more-helpful-for-android-apps/ Wed, 25 Feb 2026 23:47:00 +0000 https://theinshotproapk.com/the-intelligent-os-making-ai-agents-more-helpful-for-android-apps/ Posted by Matthew McCullough, VP of Product Management, Android Development User expectations for AI on their devices are fundamentally shifting ...


The post The Intelligent OS: Making AI agents more helpful for Android apps appeared first on InShot Pro.

]]>


Posted by Matthew McCullough, VP of Product Management, Android Development


User expectations for AI on their devices are fundamentally shifting how they interact with their apps. Instead of opening apps to do tasks step-by-step, they’re asking AI to do the heavy lifting for them. In this new interaction model, success is shifting from getting users to open your app, to successfully fulfilling their tasks and helping them get more done faster. 

To help you evolve your apps for this agentic future, we’re introducing early stage developer capabilities that bridge the gap between your apps and agentic apps and personalized assistants, such as Google Gemini. While we are in the early, beta stages of this journey, we’re designing these features with privacy and security at their core as our first step in exploring this paradigm shift as an app ecosystem.

Empowering apps with AppFunctions

Android AppFunctions allows apps to expose data and functionality directly to AI agents and assistants. With the AppFunctions Jetpack library and platform APIs, developers can create self-describing functions that agentic apps can discover and execute via natural language. Mirroring how backend capabilities are declared via MCP cloud servers, AppFunctions provides an on-device solution for Android apps. Much like WebMCP, it executes these functions locally on the device rather than on a server.

The Samsung Gallery integration with Gemini on the Galaxy S26 series showcases AppFunctions in action. Instead of manually scrolling through photo albums, you can now simply ask Gemini to “Show me pictures of my cat from Samsung Gallery.” Gemini takes the user query, intelligently identifies and triggers the right function, and presents the returned photos from Samsung Gallery directly in the Gemini app, so users never need to leave. This experience is multimodal and can be done via voice or text. Users can even use the returned photos in follow-up conversations, like sending them to friends in a text message.


This integration is currently available on the Galaxy S26 series and will soon expand to Samsung devices running OneUI 8.5 and higher. Through AppFunctions, Gemini can already automate tasks across app categories like Calendar, Notes, and Tasks, on devices from multiple manufacturers. Whether it’s coordinating calendar events, organizing notes, or setting to-do reminders, users can streamline daily activities in one place.

Enabling agentic apps with intelligent UI automation

While AppFunctions provides a structured framework and more control for apps to communicate with AI agents and assistants, we know that not every interaction has a dedicated integration yet. We’re also developing a UI automation framework for AI agents and assistants to intelligently execute generic tasks on users’ installed apps, with user transparency and control built in. This is the platform doing the heavy lifting, so developers can get agentic reach with zero code. It’s a low-effort way to extend their reach without a major engineering lift right now. 

To get feedback as we refine this framework, we’re starting with an early preview on the Galaxy S26 series and select Pixel 10 devices, where users will be able to delegate multi-step tasks to Gemini with just a long press of the power button. Launching as a beta feature in the Gemini app, this will support a curated selection of apps in the food delivery, grocery, and rideshare categories in the US and Korea to start. Whether users need to place a complex pizza order for their family members with particular tastes, coordinate a multi-stop rideshare with co-workers, or reorder their last grocery purchase, Gemini can help complete tasks using the context already available from your apps, without any developer work needed.


Users are in control while a task is being actioned in the background through UI automation. For any automation action, users have the option to monitor a task’s progress via notifications or “live view” and can switch to manual control at any point to take over the experience. Gemini is also designed to alert users before completing sensitive tasks, such as making a purchase.

Looking ahead

In Android 17, we’re looking to broaden these capabilities to reach even more users, developers, and device manufacturers.

We are currently building experiences with a small set of app developers, focusing on high-quality user experiences as the ecosystem evolves. We plan to share more details later this year on how you can use AppFunctions and UI automation to enable agentic integrations for your app. Stay tuned for updates.

The post The Intelligent OS: Making AI agents more helpful for Android apps appeared first on InShot Pro.

]]>
The First Beta of Android 17 https://theinshotproapk.com/the-first-beta-of-android-17/ Fri, 13 Feb 2026 19:23:00 +0000 https://theinshotproapk.com/the-first-beta-of-android-17/ Posted by Matthew McCullough, VP of Product Management, Android Developer Today we’re releasing the first beta of Android 17, continuing our ...


The post The First Beta of Android 17 appeared first on InShot Pro.

]]>

Posted by Matthew McCullough, VP of Product Management, Android Developer

Today we’re releasing the first beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This build continues our work toward more adaptable Android apps, and introduces significant enhancements to camera and media capabilities, new tools for optimizing connectivity, and expanded profiles for companion devices. This release also highlights a fundamental shift in the way we’re bringing new releases to the developer community: from the traditional Developer Preview model to the Android Canary program.

Beyond the Developer Preview

Android has replaced the traditional “Developer Preview” with a continuous Canary channel. This new “always-on” model offers three main benefits:

  • Faster Access: Features and APIs land in Canary as soon as they pass internal testing, rather than waiting for a quarterly release.
  • Better Stability: Early “battle-testing” in Canary results in a more polished Beta experience with new APIs and behavior changes that are closer to being final.
  • Easier Testing: Canary supports OTA updates (no more manual flashing) and, as a separate update channel, more easily integrates with CI workflows and gives you the earliest window to give immediate feedback on upcoming potential changes.

The Android 17 schedule

We’re going to be moving quickly from this Beta to our Platform Stability milestone, targeted for March. At this milestone, we’ll deliver final SDK/NDK APIs and largely final app-facing behaviors. From that time you’ll have several months before the final release to complete your testing.



A year of releases


We plan for Android 17 to continue to get updates in a series of quarterly releases. The upcoming release in Q2 is the only one where we introduce planned app breaking behavior changes. We plan to have a minor SDK release in Q4 with additional APIs and features.

Orientation and resizability restrictions


With the release of the Android 17 Beta, we’re moving to the next phase of our adaptive roadmap: Android 17 (API level 37) removes the developer opt-out for orientation and resizability restrictions on large screen devices (sw ≥ 600dp).

When your app targets SDK 37, it must be ready to adapt. Users expect their apps to work everywhere—whether multitasking on a tablet, unfolding a device, or using a desktop windowing environment—and they expect the UI to fill the space and respect their device posture.

Key Changes for SDK 37

Apps targeting Android 17 must ensure compatibility with the phase-out of manifest attributes and runtime APIs introduced in Android 16. When running on a large screen (smaller dimension ≥ 600dp), the following attributes and APIs will be ignored:

  • screenOrientation: portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape
  • setRequestedOrientation(): portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape
  • resizeableActivity: all values
  • minAspectRatio: all values
  • maxAspectRatio: all values


Exemptions and User Control

These changes are specific to large screens; they do not apply to screens smaller than sw600dp (including traditional slate form factor phones). Additionally, apps categorized as games (based on the android:appCategory flag) are exempt from these restrictions.

It is also important to note that users remain in control. They can explicitly opt in or out of an app’s default behavior via the system’s aspect ratio settings.

Updates to configuration changes

To improve app compatibility and help minimize interrupted video playback, dropped input, and other types of disruptive state loss, we are updating the default behavior for Activity recreation. Starting with Android 17, the system will no longer restart activities by default for specific configuration changes that typically do not require a UI recreation: CONFIG_KEYBOARD, CONFIG_KEYBOARD_HIDDEN, CONFIG_NAVIGATION, CONFIG_UI_MODE (when only UI_MODE_TYPE_DESK changes), CONFIG_TOUCHSCREEN, and CONFIG_COLOR_MODE. Instead, running activities will simply receive these updates via onConfigurationChanged. If your application relies on a full restart to reload resources for these changes, you must now explicitly opt in using the new android:recreateOnConfigChanges manifest attribute, which lets you specify which configuration changes should trigger a complete activity lifecycle (from stop, to destroy and creation again). The attribute accepts the related constants mcc and mnc, plus the new values keyboard, keyboardHidden, navigation, touchscreen, and colorMode.
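For instance, an activity that must fully reload resources on keyboard or color-mode changes could opt back in like this (the activity name is illustrative):

```xml
<!-- AndroidManifest.xml: restore the pre-Android 17 restart behavior
     for the listed configuration changes only. -->
<activity
    android:name=".VideoPlayerActivity"
    android:recreateOnConfigChanges="keyboard|keyboardHidden|colorMode" />
```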

Prepare Your App

We’ve released tools and documentation to make it easy for you. Our focused blog post has more guidance, with strategies to address common issues. Apps will need to support landscape and portrait layouts for window sizes across the full range of aspect ratios, as restricting orientation or aspect ratio will no longer be an option. We recommend testing your app using the Android 17 Beta 1 with Pixel Tablet or Pixel Fold emulators (configured to targetSdkPreview = “CinnamonBun”) or by using the app compatibility framework to enable UNIVERSAL_RESIZABLE_BY_DEFAULT on Android 16 devices.


Performance

Lock-free MessageQueue


In Android 17, apps targeting SDK 37 or higher will receive a new, lock-free implementation of android.os.MessageQueue. The new implementation improves performance and reduces missed frames, but may break clients that reflect on MessageQueue private fields and methods.

Generational garbage collection


Android 17 introduces generational garbage collection to ART's Concurrent Mark-Compact collector. This optimization introduces more frequent, less resource-intensive young-generation collections alongside full-heap collections, aiming to reduce overall garbage collection CPU cost and duration. ART improvements are also available to over a billion devices running Android 12 (API level 31) and higher through Google Play System updates.

Static final fields now truly final

Starting with Android 17, apps targeting Android 17 or later won’t be able to modify static final fields, allowing the runtime to apply performance optimizations more aggressively. An attempt to do so via reflection (including deep reflection) will always throw IllegalAccessException. Modifying them via JNI’s SetStatic<Type>Field family of methods will immediately crash the application.
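A sketch of the failure mode; FeatureFlags and tryToFlip are illustrative names, and the behavior described applies when running on Android 17 with a target of API 37:

```kotlin
class FeatureFlags {
    companion object {
        @JvmField val ENABLED: Boolean = false // compiles to a static final field
    }
}

fun tryToFlip() {
    try {
        val field = FeatureFlags::class.java.getDeclaredField("ENABLED")
        field.isAccessible = true // "deep reflection"
        field.set(null, true)     // throws when targeting Android 17+
    } catch (e: IllegalAccessException) {
        // Expected on Android 17: static final fields are truly final now
    }
}
```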

Custom Notification View Restrictions

To reduce memory usage, we are restricting the size of custom notification views. This update closes a loophole that allowed apps to bypass existing limits using URIs. This behavior is gated by target SDK version and takes effect for apps targeting API level 37 and higher.

New performance debugging ProfilingManager triggers

We’ve introduced several new system triggers to ProfilingManager to help you collect in-depth data to debug performance issues. These triggers are TRIGGER_TYPE_COLD_START, TRIGGER_TYPE_OOM, and TRIGGER_TYPE_KILL_EXCESSIVE_CPU_USAGE.

To understand how to set up the new system triggers, check out the trigger-based profiling and retrieve and analyze profiling data documentation.

Media and Camera

Android 17 brings professional-grade tools to media and camera apps, with features like seamless transitions and standardized loudness.

Dynamic Camera Session Updates


We have introduced updateOutputConfigurations() to CameraCaptureSession. This allows you to dynamically attach and detach output surfaces without the need to reconfigure the entire camera capture session. This change enables seamless transitions between camera use cases and modes (such as shooting still images vs shooting videos) without the memory cost and code complexity of configuring and holding onto all camera output surfaces that your app might need during camera start up. This helps to eliminate user-visible glitches or freezes during operation.


fun updateCameraSession(session: CameraCaptureSession, newOutputConfigs: List<OutputConfiguration>) {
    // Dynamically update the session without closing and reopening
    try {
        // Update the output configurations
        session.updateOutputConfigurations(newOutputConfigs)
    } catch (e: CameraAccessException) {
        // Handle error
    }
}

Logical multi-camera device metadata

When working with logical cameras that combine multiple physical camera sensors, you can now request additional metadata from all active physical cameras involved in a capture, not just the primary one. Previously, you had to implement workarounds, sometimes allocating unnecessary physical streams, to obtain metadata from secondary active cameras (e.g., during a lens switch for zoom where a follower camera is active). This feature introduces a new key, LOGICAL_MULTI_CAMERA_ADDITIONAL_RESULTS, in CaptureRequest and CaptureResult. By setting this key to ON in your CaptureRequest, the TotalCaptureResult will include metadata from these additional active physical cameras. You can access this comprehensive metadata using TotalCaptureResult.getPhysicalCameraTotalResults() to get more detailed information that may enable you to optimize resource usage in your camera applications.
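A hedged Camera2 sketch using the keys named above; the exact ON constant name and the map's value type are assumptions based on the description:

```kotlin
// When building the request, opt in to per-physical-camera metadata.
fun enableAdditionalResults(builder: CaptureRequest.Builder) {
    builder.set(
        CaptureRequest.LOGICAL_MULTI_CAMERA_ADDITIONAL_RESULTS,
        CameraMetadata.LOGICAL_MULTI_CAMERA_ADDITIONAL_RESULTS_ON // assumed constant name
    )
}

// In your CaptureCallback, read results from each active physical camera.
// getPhysicalCameraTotalResults() is assumed to return a map keyed by
// physical camera ID.
fun onCaptureCompleted(result: TotalCaptureResult) {
    for ((physicalId, physicalResult) in result.physicalCameraTotalResults) {
        val focal = physicalResult.get(CaptureResult.LENS_FOCAL_LENGTH)
        // e.g. adapt processing per physical sensor
    }
}
```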

Versatile Video Coding (VVC) Support

Android 17 adds support for the Versatile Video Coding (VVC) standard. This includes defining the video/vvc MIME type in MediaFormat, adding new VVC profiles in MediaCodecInfo, and integrating support into MediaExtractor. This feature will be coming to devices with hardware decode support and capable drivers.

Constant Quality for Video Recording

We have added setVideoEncodingQuality() to MediaRecorder. This allows you to configure a constant quality (CQ) mode for video encoders, giving you finer control over video quality beyond simple bitrate settings.

Background Audio Hardening

Starting in Android 17, the audio framework will enforce restrictions on background audio interactions, including audio playback, audio focus requests, and volume change APIs, to ensure that these changes are initiated intentionally by the user.

If an app calls audio APIs while it is not in a valid lifecycle state, the audio playback and volume change APIs will fail silently, without an exception thrown or failure message provided. The audio focus API will fail with the result code AUDIOFOCUS_REQUEST_FAILED.
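Since playback and volume calls fail silently, the focus request's result code becomes your main signal; a minimal sketch (beginPlayback is a hypothetical app function):

```kotlin
fun startPlaybackWithFocus(audioManager: AudioManager, focusRequest: AudioFocusRequest) {
    when (audioManager.requestAudioFocus(focusRequest)) {
        AudioManager.AUDIOFOCUS_REQUEST_GRANTED -> beginPlayback()
        AudioManager.AUDIOFOCUS_REQUEST_DELAYED -> { /* wait for the focus callback */ }
        // On Android 17 this is also what you get when the request is made
        // from an invalid lifecycle state (e.g. in the background)
        AudioManager.AUDIOFOCUS_REQUEST_FAILED -> { /* defer or skip playback */ }
    }
}
```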

Privacy and Security

Deprecation of Cleartext Traffic Attribute

The android:usesCleartextTraffic attribute is now deprecated. If your app targets Android 17 (API level 37) or higher and relies on usesCleartextTraffic="true" without a corresponding Network Security Configuration, it will default to disallowing cleartext traffic. You are encouraged to migrate to Network Security Configuration files for granular control.
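A typical migration keeps cleartext disabled globally and allows it only where still needed, for example a local development host (the domain below is illustrative):

```xml
<!-- res/xml/network_security_config.xml, referenced from the manifest via
     android:networkSecurityConfig="@xml/network_security_config" -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="false">10.0.2.2</domain>
    </domain-config>
</network-security-config>
```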

HPKE Hybrid Cryptography

We are introducing a public Service Provider Interface (SPI) for an implementation of HPKE hybrid cryptography, enabling secure communication using a combination of public key and symmetric encryption (AEAD).

Connectivity and Telecom

Enhanced VoIP Call History

We are introducing user preference management for app VoIP call history integration. This includes support for caller and participant avatar URIs in the system dialer, enabling granular user control over call log privacy and enriching the visual display of integrated VoIP call logs.

Wi-Fi Ranging and Proximity

Wi-Fi Ranging has been enhanced with new Proximity Detection capabilities, supporting continuous ranging and secure peer-to-peer discovery. Updates to Wi-Fi Aware ranging include new APIs for peer handles and PMKID caching for 11az secure ranging.

Developer Productivity and Tools

Updates for companion device apps

We have introduced two new profiles to the CompanionDeviceManager to improve device distinction and permission handling:

  • Medical Devices: This profile allows medical device mobile applications to request all necessary permissions with a single tap, simplifying the setup process.

  • Fitness Trackers: The DEVICE_PROFILE_FITNESS_TRACKER profile allows companion apps to explicitly indicate they are managing a fitness tracker. This ensures accurate user experiences with distinct icons while reusing existing watch role permissions.

Also, the CompanionDeviceManager now offers a unified dialog for device association and Nearby permission requests. You can leverage the new setExtraPermissions method in AssociationRequest.Builder to bundle nearby permission prompts within the existing association flow, reducing the number of dialogs presented to the user.

Get started with Android 17


You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 1.

If you have Android 26Q1 Beta and would like to take the final stable release of 26Q1 and exit Beta, you need to ignore the over-the-air update to 26Q2 Beta 1 and wait for the release of 26Q1.

We’re looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you’re set up, here are some of the things you should do:

  • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.

  • Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it.

We’ll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information, visit the Android 17 developer site.


Join the conversation

As we move toward Platform Stability and the final stable release of Android 17 later this year, your feedback remains our most valuable asset. Whether you’re an early adopter on the Canary channel or an app developer testing on Beta 1, consider joining our communities and filing feedback. We’re listening.

The post The First Beta of Android 17 appeared first on InShot Pro.

]]>
The Embedded Photo Picker https://theinshotproapk.com/the-embedded-photo-picker/ Mon, 02 Feb 2026 12:04:21 +0000 https://theinshotproapk.com/the-embedded-photo-picker/ Posted by Roxanna Aliabadi Walker, Product Manager and Yacine Rezgui, Developer Relations Engineer The Embedded Photo Picker: A more seamless ...


The post The Embedded Photo Picker appeared first on InShot Pro.

]]>

Posted by Roxanna Aliabadi Walker, Product Manager and Yacine Rezgui, Developer Relations Engineer

The Embedded Photo Picker: A more seamless way to privately request photos and videos in your app



Get ready to enhance your app’s user experience with an exciting new way to use the Android photo picker! The new embedded photo picker offers a seamless and privacy-focused way for users to select photos and videos, right within your app’s interface. Now your app can get all the same benefits available with the photo picker, including access to cloud content, integrated directly into your app’s experience.

Why embedded?

We understand that many apps want to provide a highly integrated and seamless experience for users when selecting photos or videos. The embedded photo picker is designed to do just that, allowing users to quickly access their recent photos without ever leaving your app. They can also explore their full library in their preferred cloud media provider (e.g., Google Photos), including favorites, albums and search functionality. This eliminates the need for users to switch between apps or worry about whether the photo they want is stored locally or in the cloud.

Seamless integration, enhanced privacy


With the embedded photo picker, your app doesn’t need access to the user’s photos or videos until they actually select something. This means greater privacy for your users and a more streamlined experience. Plus, the embedded photo picker provides users with access to their entire cloud-based media library, whereas the standard photo permission is restricted to local files only.

The embedded photo picker in Google Messages


Google Messages showcases the power of the embedded photo picker. Here’s how they’ve integrated it:
  • Intuitive placement: The photo picker sits right below the camera button, giving users a clear choice between capturing a new photo or selecting an existing one.
  • Dynamic preview: Immediately after a user taps a photo, they see a large preview, making it easy to confirm their selection. If they deselect the photo, the preview disappears, keeping the experience clean and uncluttered.
  • Expand for more content: The initial view is simplified, offering easy access to recent photos. However, users can easily expand the photo picker to browse and choose from all photos and videos in their library, including cloud content from Google Photos.
  • Respecting user choices: The embedded photo picker only grants access to the specific photos or videos the user selects, meaning the app can stop requesting the photo and video permissions altogether. This also saves Google Messages from needing to handle situations where users grant only limited access to photos and videos.

Implementation

Integrating the embedded photo picker is made easy with the Photo Picker Jetpack library.  

Jetpack Compose

First, include the Jetpack Photo Picker library as a dependency.


implementation("androidx.photopicker:photopicker-compose:1.0.0-alpha01")


The EmbeddedPhotoPicker composable function provides a mechanism to include the embedded photo picker UI directly within your Compose screen. This composable creates a SurfaceView which hosts the embedded photo picker UI. It manages the connection to the EmbeddedPhotoPicker service, handles user interactions, and communicates selected media URIs to the calling application. 

@Composable
fun EmbeddedPhotoPickerDemo() {
    // We keep track of the list of selected attachments
    var attachments by remember { mutableStateOf(emptyList<Uri>()) }

    val coroutineScope = rememberCoroutineScope()
    // We hide the bottom sheet by default but we show it when the user clicks on the button
    val scaffoldState = rememberBottomSheetScaffoldState(
        bottomSheetState = rememberStandardBottomSheetState(
            initialValue = SheetValue.Hidden,
            skipHiddenState = false
        )
    )

    // Customize the embedded photo picker
    val photoPickerInfo = EmbeddedPhotoPickerFeatureInfo
        .Builder()
        // Limit the selection to 5 items
        .setMaxSelectionLimit(5)
        // Make the selection ordered (each item will have an index visible in the photo picker)
        .setOrderedSelection(true)
        // Set the accent color (red in this case, otherwise it follows the device's accent color)
        .setAccentColor(0xFF0000)
        .build()

    // The embedded photo picker state will be stored in this variable
    val photoPickerState = rememberEmbeddedPhotoPickerState(
        onSelectionComplete = {
            coroutineScope.launch {
                // Hide the bottom sheet once the user has clicked on the done button inside the picker
                scaffoldState.bottomSheetState.hide()
            }
        },
        onUriPermissionGranted = {
            // We update our list of attachments with the new Uris granted
            attachments += it
        },
        onUriPermissionRevoked = {
            // We update our list of attachments with the Uris revoked
            attachments -= it
        }
    )

    SideEffect {
        val isExpanded = scaffoldState.bottomSheetState.targetValue == SheetValue.Expanded

        // We show/hide the embedded photo picker to match the bottom sheet state
        photoPickerState.setCurrentExpanded(isExpanded)
    }

    BottomSheetScaffold(
        topBar = {
            TopAppBar(title = { Text("Embedded Photo Picker demo") })
        },
        scaffoldState = scaffoldState,
        sheetPeekHeight = if (scaffoldState.bottomSheetState.isVisible) 400.dp else 0.dp,
        sheetContent = {
            Column(Modifier.fillMaxWidth()) {
                // We render the embedded photo picker inside the bottom sheet
                EmbeddedPhotoPicker(
                    state = photoPickerState,
                    embeddedPhotoPickerFeatureInfo = photoPickerInfo
                )
            }
        }
    ) { innerPadding ->
        Column(Modifier.padding(innerPadding).fillMaxSize().padding(horizontal = 16.dp)) {
            Button(onClick = {
                coroutineScope.launch {
                    // We expand the bottom sheet, which will trigger the embedded picker to be shown
                    scaffoldState.bottomSheetState.partialExpand()
                }
            }) {
                Text("Open photo picker")
            }
            LazyVerticalGrid(columns = GridCells.Adaptive(minSize = 64.dp)) {
                // We render the image using the Coil library
                itemsIndexed(attachments) { index, uri ->
                    AsyncImage(
                        model = uri,
                        contentDescription = "Image ${index + 1}",
                        contentScale = ContentScale.Crop,
                        modifier = Modifier.clickable {
                            coroutineScope.launch {
                                // When the user clicks on the media from the app's UI, we deselect it
                                // from the embedded photo picker by calling the method deselectUri
                                photoPickerState.deselectUri(uri)
                            }
                        }
                    )
                }
            }
        }
    }
}

Views

First, include the Jetpack Photo Picker library as a dependency.

implementation("androidx.photopicker:photopicker:1.0.0-alpha01")

To add the embedded photo picker, you need to add an entry to your layout file. 

<view class="androidx.photopicker.EmbeddedPhotoPickerView"
    android:id="@+id/photopicker"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

And initialize it in your activity/fragment.
// We keep track of the list of selected attachments
private val _attachments = MutableStateFlow(emptyList<Uri>())
val attachments = _attachments.asStateFlow()

private lateinit var picker: EmbeddedPhotoPickerView
private var openSession: EmbeddedPhotoPickerSession? = null

val pickerListener = object : EmbeddedPhotoPickerStateChangeListener {
    override fun onSessionOpened(newSession: EmbeddedPhotoPickerSession) {
        openSession = newSession
    }

    override fun onSessionError(throwable: Throwable) {}

    override fun onUriPermissionGranted(uris: List<Uri>) {
        // Requires kotlinx.coroutines.flow.update
        _attachments.update { it + uris }
    }

    override fun onUriPermissionRevoked(uris: List<Uri>) {
        _attachments.update { it - uris }
    }

    override fun onSelectionComplete() {
        // Hide the embedded photo picker as the user is done with the photo/video selection
    }
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.main_view)
    
    //
    // Add the embedded photo picker to a bottom sheet to allow the dragging to display the full photo library
    //

    picker = findViewById(R.id.photopicker)
    picker.addEmbeddedPhotoPickerStateChangeListener(pickerListener)
    picker.setEmbeddedPhotoPickerFeatureInfo(
        // Set a custom accent color
        EmbeddedPhotoPickerFeatureInfo.Builder().setAccentColor(0xFF0000).build()
    )
}


You can call different methods of EmbeddedPhotoPickerSession to interact with the embedded picker.
// Notify the embedded picker of a configuration change
openSession?.notifyConfigurationChanged(newConfig)

// Update the embedded picker to expand following a user interaction
openSession?.notifyPhotoPickerExpanded(/* expanded= */ true)

// Resize the embedded picker
openSession?.notifyResized(/* width= */ 512, /* height= */ 256)

// Show/hide the embedded picker (after a form has been submitted)
openSession?.notifyVisibilityChanged(/* visible= */ false)

// Remove unselected media from the embedded picker after they have been
// unselected from the host app's UI
openSession?.requestRevokeUriPermission(removedUris)

It’s important to note that the embedded photo picker experience is available for users running Android 14 (API level 34) or higher with SDK Extensions 15+. Read more about photo picker device availability.
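As a sketch of that availability rule, the gate can be modeled as a plain function. On a device you would read the real values from Build.VERSION.SDK_INT and the SDK Extensions APIs; the helper below is a hypothetical illustration of the stated bounds (API level 34+, SDK Extensions 15+), not a platform API.

```kotlin
// Hypothetical helper modeling the stated availability rule:
// Android 14 (API level 34) or higher with SDK Extensions 15+.
fun isEmbeddedPickerLikelyAvailable(sdkInt: Int, photoPickerExtensionVersion: Int): Boolean =
    sdkInt >= 34 && photoPickerExtensionVersion >= 15
```

Both conditions must hold: an Android 14 device without the required extension version still falls back to the non-embedded picker experience.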

For enhanced user privacy and security, the system renders the embedded photo picker in a way that prevents any drawing or overlaying. This intentional design choice means that your UX should consider the photo picker’s display area as a distinct and dedicated element, much like you would plan for an advertising banner.

If you have any feedback or suggestions, submit tickets to our issue tracker.



The post The Embedded Photo Picker appeared first on InShot Pro.

Beyond the smartphone: How JioHotstar optimized its UX for foldables and tablets https://theinshotproapk.com/beyond-the-smartphone-how-jiohotstar-optimized-its-ux-for-foldables-and-tablets/ Mon, 02 Feb 2026 12:04:16 +0000 https://theinshotproapk.com/beyond-the-smartphone-how-jiohotstar-optimized-its-ux-for-foldables-and-tablets/ Posted by Prateek Batra, Developer Relations Engineer, Android Adaptive Apps Beyond Phones: How JioHotstar Built an Adaptive UX JioHotstar is ...

Posted by Prateek Batra, Developer Relations Engineer, Android Adaptive Apps

Beyond Phones: How JioHotstar Built an Adaptive UX
JioHotstar is a leading streaming platform in India, serving a user base exceeding 400 million. With a vast content library encompassing over 330,000 hours of video on demand (VOD) and real-time delivery of major sporting events, the platform operates at a massive scale.

To help ensure a premium experience for its vast audience, JioHotstar elevated the viewing experience by optimizing its app for foldables and tablets. The team accomplished this by following Google’s adaptive app guidance and utilizing resources like samples, codelabs, cookbooks, and documentation to help create a consistently seamless and engaging experience across all display sizes.


JioHotstar’s large screen challenge


JioHotstar offered an excellent user experience on standard phones and the team wanted to take advantage of new form factors. To start, the team evaluated their app against the large screen app quality guidelines to understand the optimizations required to extend their user experience to foldables and tablets. To achieve Tier 1 large screen app status, the team implemented two strategic updates to adapt the app across various form factors and differentiate on foldables. By addressing the unique challenges posed by foldable and tablet devices, JioHotstar aims to deliver a high-quality and immersive experience across all display sizes and aspect ratios.


What they needed to do


JioHotstar’s user interface, designed primarily for standard phone displays, encountered challenges in adapting hero image aspect ratios, menus, and show screens to the diverse screen sizes and resolutions of other form factors. This often led to image cropping, letterboxing, low resolution, and unutilized space, particularly in landscape mode. To help fully leverage the capabilities of tablets and foldables and deliver an optimized user experience across these device types, JioHotstar focused on refining the UI to ensure optimal layout flexibility, image rendering, and navigation across a wider range of devices.


What they did


For a better viewing experience on large screens, JioHotstar enhanced its app by incorporating WindowSizeClass and creating optimized layouts for compact, medium, and expanded widths. This allowed the app to adapt its user interface to various screen dimensions and aspect ratios, ensuring a consistent and visually appealing UI across different devices.

JioHotstar followed this pattern using the Material 3 Adaptive library to determine how much space the app has available: first invoke the currentWindowAdaptiveInfo() function, then choose a layout for each of the three window size classes:


val sizeClass = currentWindowAdaptiveInfo().windowSizeClass

if (sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND)) {
    showExpandedLayout()
} else if (sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND)) {
    showMediumLayout()
} else {
    showCompactLayout()
}

The breakpoints are checked in order, from the largest to the smallest, because the API performs a greater-than-or-equal comparison internally: any width that meets the EXPANDED lower bound also meets the MEDIUM lower bound.
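That ordering can be sketched in plain Kotlin. The values 840 and 600 below are assumptions based on the standard Material 3 window size class lower bounds for expanded and medium widths; check the library’s WIDTH_DP_EXPANDED_LOWER_BOUND and WIDTH_DP_MEDIUM_LOWER_BOUND constants for the authoritative values.

```kotlin
// Plain-Kotlin model of the ordered breakpoint check.
// 840dp and 600dp are the assumed expanded/medium lower bounds.
fun layoutFor(widthDp: Int): String = when {
    widthDp >= 840 -> "expanded"
    widthDp >= 600 -> "medium"
    else -> "compact"
}
```

Because each branch is a greater-than-or-equal check, testing the medium bound before the expanded one would shadow the expanded branch for wide windows.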


JioHotstar is able to provide a premium experience unique to foldable devices: tabletop mode. This feature relocates the video player to the top half of the screen and the video controls to the bottom half when a foldable device is partially folded, enabling a hands-free experience.



To accomplish this, the same currentWindowAdaptiveInfo() function from the Material 3 Adaptive library can be used to query the device posture. When the device is held in tabletop mode, the layout switches to a column that places the player in the top half and the controls in the bottom half:

val isTabletopMode = currentWindowAdaptiveInfo().windowPosture.isTabletop
if (isTabletopMode) {
    Column {
        Player(Modifier.weight(1f))
        Controls(Modifier.weight(1f))
    }
} else {
    usualPlayerLayout()
}

JioHotstar is now meeting the Large Screen app quality guidelines for Tier 1. The team leveraged adaptive app guidance, utilizing samples, codelabs, cookbooks, and documentation to incorporate these recommendations.


To further improve the user experience, JioHotstar increased touch target sizes on video discovery pages to the recommended 48dp, ensuring accessibility across large screen devices. Their video details page is now adaptive, adjusting to screen sizes and orientations. They moved beyond simple image scaling, instead leveraging window size classes to detect window size and density in real time and load the most appropriate hero image for each form factor, enhancing visual fidelity. Navigation was also improved, with layouts adapting to suit different screen sizes.


Now users can view their favorite content from JioHotstar on large screen devices with an improved and highly optimized viewing experience.

Achieving Tier 1 large screen app status with Google is a milestone that reflects the strength of our shared vision. At JioHotstar, we have always believed that optimizing for large screen devices goes beyond adaptability, it’s about elevating the viewing experience for audiences who are rapidly embracing foldables, tablets, and connected TVs.

Leveraging Google’s Jetpack libraries and guides allowed us to combine our insights on content consumption with their expertise in platform innovation. This collaboration allowed both teams to push boundaries, address gaps, and co-create a seamless, immersive experience across every screen size.

Together, we’re proud to bring this enhanced experience to millions of users and to set new benchmarks in how India and the world experience streaming.

Sonu Sanjeev
Senior Software Development Engineer

The post Beyond the smartphone: How JioHotstar optimized its UX for foldables and tablets appeared first on InShot Pro.

Android 16 QPR2 is Released https://theinshotproapk.com/android-16-qpr2-is-released/ Tue, 02 Dec 2025 19:00:00 +0000 https://theinshotproapk.com/android-16-qpr2-is-released/ Posted by Matthew McCullough, VP of Product Management, Android Developer Faster Innovation with Android’s first Minor SDK Release Today we’re ...


Posted by Matthew McCullough, VP of Product Management, Android Developer




Faster Innovation with Android’s first Minor SDK Release

Today we’re releasing Android 16 QPR2, bringing a host of enhancements to user experience, developer productivity, and media capabilities. It marks a significant milestone in the evolution of the Android platform as the first release to utilize a minor SDK version.

A Milestone for Platform Evolution: The Minor SDK Release

Minor SDK releases allow us to deliver APIs and features more rapidly outside of the major yearly platform release cadence, ensuring that the platform and your apps can innovate faster with new functionality. Unlike major releases that may include behavior changes impacting app compatibility, the changes in QPR2 are largely additive, minimizing the need for regression testing. Behavior changes in QPR2 are largely focused on security or accessibility, such as SMS OTP protection and support for the expanded dark theme.

To support this, we have introduced new fields to the Build class as of Android 16, allowing your app to check for these new APIs using SDK_INT_FULL and VERSION_CODES_FULL.

if ((Build.VERSION.SDK_INT >= Build.VERSION_CODES.BAKLAVA) && (Build.VERSION.SDK_INT_FULL >= Build.VERSION_CODES_FULL.BAKLAVA_1)) {
    // Call new APIs from the Android 16 QPR2 release
}

Enhanced User Experience and Customization

QPR2 improves Android’s personalization and accessibility, giving users more control over how their devices look and feel.

Expanded Dark Theme

To create a more consistent user experience for users who have low vision, photosensitivity, or simply those who prefer a dark system-wide appearance, QPR2 introduced an expanded option under dark theme.

The old Fitbit app showing the impact of expanded dark theme; the new Fitbit app directly supports a dark theme

When the expanded dark theme setting is enabled by a user, the system uses your app’s isLightTheme theme attribute to determine whether to apply inversion. If your app inherits from one of the standard DayNight themes, this is done automatically for you. If it does not, make sure to declare isLightTheme=”false” in your dark theme to ensure your app is not inadvertently inverted. Standard Android Views, Composables, and WebViews will be inverted, while custom rendering engines like Flutter will not.
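If your dark theme does not inherit from a DayNight theme, the declaration might look like the following sketch; the theme name and parent here are placeholders, not names from the release notes.

```xml
<!-- res/values-night/themes.xml (illustrative names) -->
<style name="Theme.MyApp" parent="Theme.Material3.Dark.NoActionBar">
    <!-- Declares this theme as already dark, so the expanded dark
         theme setting does not invert the app's UI. -->
    <item name="android:isLightTheme">false</item>
</style>
```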

This is largely intended as an accessibility feature. We strongly recommend implementing a native dark theme, which gives you full control over your app’s appearance; you can protect your brand’s identity, ensure text is readable, and prevent visual glitches from happening when your UI is automatically inverted, guaranteeing a polished, reliable experience for your users.

Custom Icon Shapes & Auto-Theming

In QPR2, users can select specific shapes for their app icons, which apply to all icons and folder previews. Additionally, if your app does not provide a dedicated themed icon, the system can now automatically generate one by applying a color filtering algorithm to your existing launcher icon.

Custom Icon Shapes

Test Icon Shape & Color in Android Studio

Automatic system icon color filtering

Interactive Chooser Sessions

The sharing experience is now more dynamic. Apps can keep the UI interactive even when the system sharesheet is open, allowing for real-time content updates within the Chooser.

Boosting Your Productivity and App Performance

We are introducing tools and updates designed to streamline your workflow and improve app performance.

Linux Development Environment with GUI Applications

The Linux development environment feature has been expanded to support running Linux GUI applications directly within the terminal environment.

Wilber, the GIMP mascot, designed by Aryeom Han, is licensed under CC BY-SA 4.0. The screenshot of the GIMP interface is used with courtesy.

Generational Garbage Collection

The Android Runtime (ART) now includes a Generational Concurrent Mark-Compact (CMC) Garbage Collector. This focuses collection on newly allocated objects, resulting in reduced CPU usage and improved battery efficiency.

Widget Engagement Metrics

You can now query user interaction events—such as clicks, scrolls, and impressions—to better understand how users engage with your widgets.

16KB Page Size Readiness

To help prepare for future architecture requirements, we have added early warning dialogs for debuggable apps that are not 16KB page-aligned.

Media, Connectivity, and Health

QPR2 brings robust updates to media standards and device connectivity.

IAMF and Audio Sharing

We have added software decoding support for Immersive Audio Model and Formats (IAMF), an open-source spatial audio format. Additionally, Personal Audio Sharing for Bluetooth LE Audio is now integrated directly into the system Output Switcher.

Health Connect Updates

Health Connect now automatically tracks steps using the device’s sensors. If your app has the READ_STEPS permission, this data will be available from the “android” package. Not only does this simplify the code needed for step tracking, it’s also more power efficient. Health Connect can also now track weight, set index, and Rate of Perceived Exertion (RPE) in exercise segments.

Smoother Migrations

A new 3rd-party Data Transfer API enables more reliable data migration between Android and iOS devices.

Strengthening Privacy and Security

Security remains a top priority with new features designed to protect user data and device integrity.

Developer Verification

We introduced APIs to support developer verification during app installation along with new ADB commands to simulate verification outcomes. As a developer, you are free to install apps without verification by using ADB, so you can continue to test apps that are not intended or not yet ready to distribute to the wider consumer population.

SMS OTP Protection

The delivery of messages containing an SMS retriever hash will be delayed for most apps by three hours to help prevent OTP hijacking. The RECEIVE_SMS broadcast will be withheld and SMS provider database queries will be filtered. The SMS will be available to these apps after the three-hour delay.

Secure Lock Device

A new system-level security state, Secure Lock Device, is being introduced. When enabled (e.g., remotely via “Find My Device”), the device locks immediately and requires the primary PIN, pattern, or password to unlock, heightening security. When active, notifications and quick affordances on the lock screen will be hidden, and biometric unlock may be temporarily disabled.

Get Started

If you’re not in the Beta or Canary programs, your Pixel device should get the Android 16 QPR2 release shortly. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on the Android 16 QPR2 Beta and have not yet installed the Android 16 QPR3 beta, you can opt out of the program and you will then be offered the release version of Android 16 QPR2 over the air.
For the best development experience with Android 16 QPR2, we recommend that you use the latest Canary build of Android Studio Otter.
Thank you again to everyone who participated in our Android beta program. We’re looking forward to seeing how your apps take advantage of the updates in Android 16 QPR2.

For complete information on Android 16 QPR2 please visit the Android 16 developer site.

The post Android 16 QPR2 is Released appeared first on InShot Pro.

Optimize your app battery using Android vitals wake lock metric https://theinshotproapk.com/optimize-your-app-battery-using-android-vitals-wake-lock-metric/ Thu, 02 Oct 2025 16:00:00 +0000 https://theinshotproapk.com/optimize-your-app-battery-using-android-vitals-wake-lock-metric/ Written by Alice Yuan, Senior Developer Relations Engineer Most of the content of this post is also available in video ...


Written by Alice Yuan, Senior Developer Relations Engineer


Most of the content of this post is also available in video format, go give it a watch!

Battery life is a crucial aspect of user experience, and wake locks play a major role. Are you using them excessively? In this blog post we’ll explore what wake locks are, what the best practices are for using them, and how you can better understand your own app’s behavior with the Play Console metric.

Excessive partial wake lock usage in Android Vitals

The Play Console now monitors battery drain, with a focus on excessive partial wake lock usage, as a key performance indicator.

This feature elevates the importance of battery efficiency alongside the existing core stability metrics: excessive user-perceived crashes and ANRs. Currently, exceeding the threshold does not make an app less discoverable on Google Play.

The excessive wake lock warning in the Android vitals overview.

For mobile devices, the Android vitals metric applies to non-exempted wake locks acquired while the screen is off and the app is in the background or running a foreground service. Android vitals considers partial wake lock usage excessive if:

  • Wake locks are held for at least two hours within a 24-hour period.

  • It affects more than 5% of your app’s sessions, averaged over 28 days.

Wake locks created by audio, location, and user-initiated JobScheduler APIs are exempt from the wake lock calculation.
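The two thresholds above can be sketched as plain functions. These are hypothetical helpers illustrating the stated criteria, not a Play Console API.

```kotlin
// Models the Android vitals criteria: a session is flagged when
// non-exempt wake locks are held for at least 2 hours within a
// 24-hour period; the app is considered excessive when more than
// 5% of sessions (averaged over 28 days) are affected.
fun isSessionFlagged(wakeLockHoursIn24h: Double): Boolean =
    wakeLockHoursIn24h >= 2.0

fun isAppExcessive(flaggedSessionRatio28DayAvg: Double): Boolean =
    flaggedSessionRatio28DayAvg > 0.05
```

Note the second check is strictly greater than 5%, matching the “more than 5% of your app’s sessions” wording.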

Understanding wake locks

A wake lock is a mechanism that allows an app to keep a device’s CPU running even when the user isn’t actively interacting with it. 

A partial wake lock keeps the CPU running even if the screen is off, preventing the CPU from entering a low-power “suspend” state. A full wake lock keeps both the screen and the CPU running.

There are two ways a partial wake lock can be acquired:

  • The app manually acquires and releases the wake lock using PowerManager APIs for a specific use case; often this is done in conjunction with a foreground service, a platform lifecycle API intended for user-perceptible operations.

  • Alternatively, the wake lock is acquired by another API and attributed to the app due to usage of that API; more on this in the best practices section.

While wake locks are necessary for tasks like completing a user-initiated download of a large file, their excessive or improper use can lead to significant battery drain. We’ve seen cases where apps hold wake locks for hours or fail to release them properly, leading to user complaints about significant battery drain even when they’re not interacting with the app.

Best Practices for Wake Lock Usage

Before we go over how to debug excessive wake lock usage, ensure you’re following wake lock best practices. 

Consider these four critical questions.

1. Have you considered alternative wake lock options?

Before considering acquiring a manual partial wake lock, follow this decision-making flowchart:

Flowchart to decide when to manually acquire a wake lock

  • Does the screen need to stay on?

  • Is the application running a foreground service?

    • No: You don’t need to manually acquire a wake lock.

  • Is it detrimental to the user experience if the device suspends?

    • No: For instance, updating a notification after the device wakes up doesn’t require a wake lock.

    • Yes: If it’s critical to prevent the device from suspending, like ongoing communication with an external device, proceed.

  • Is there already an API keeping the device awake on your behalf?

    • Leverage the documentation Identify wake locks created by other APIs to identify scenarios where wake locks are created by other APIs, such as LocationManager.

    • If no such API exists, proceed to the final question.

  • If you’ve answered all these questions and determined no alternative exists, you should proceed with manually acquiring a wake lock.

2. Are you naming the wake lock correctly?

When manually acquiring wake locks, proper naming is important for debugging:

  • Leave out any Personally Identifiable Information (PII), such as email addresses, from the name. If PII is detected, the wake lock is logged as _UNKNOWN, hindering debugging.

  • Don’t name your wake lock programmatically using class or method names, as these can be obfuscated by tools like Proguard. Instead, use a hard-coded string.

  • Do not add counters or unique identifiers to wake lock tags. The same tag should be used every time the wake lock runs so the system can aggregate usage by name, making abnormal behavior easier to detect.
3. Is the acquired wake lock always released?

If you’re acquiring a wake lock manually, ensure the wake lock release always executes. Failing to release a wake lock can cause significant battery drain.

For example, if an uncaught exception is thrown during processingWork(), the release() call might never happen. Instead, you can use a try-finally block to guarantee the wake lock is released, even if an exception occurs.

Additionally, you can add a timeout to the wake lock to ensure it releases after a specific period, preventing it from being held indefinitely.

fun processingWork() {
    wakeLock.apply {
        try {
            acquire(60 * 10 * 1000) // timeout after 10 minutes
            doTheWork()
        } finally {
            release()
        }
    }
}


4. Can you reduce the wake-up frequency?

For periodic data requests, reducing how often your app wakes up the device is key to battery optimization. You can view more details, including examples of how to reduce wake-up frequency, in the wake lock best practices documentation.
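One way to picture reducing wake-up frequency is coalescing: instead of waking for every event, enforce a minimum spacing between wake-ups. The helper below is a hypothetical illustration of that idea, not a platform scheduling API.

```kotlin
// Given desired wake-up times (ms) and a minimum spacing, drop any
// wake-up that would fire sooner than minIntervalMs after the
// previously kept one -- fewer wake-ups means less battery drain.
fun coalesceWakeUps(requestedMs: List<Long>, minIntervalMs: Long): List<Long> {
    val kept = mutableListOf<Long>()
    for (t in requestedMs.sorted()) {
        if (kept.isEmpty() || t - kept.last() >= minIntervalMs) kept.add(t)
    }
    return kept
}
```

In practice, batching APIs such as WorkManager’s periodic work apply this kind of policy for you.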

Debugging excessive wake lock usage

Even with the best intentions, excessive wake lock usage can occur. If your app is flagged in the Play Console, here’s how to debug it:

Initial identification with Play Console

The Android vitals excessive partial wake lock dashboard provides breakdowns of non-exempted wake lock names associated with your app, showing affected sessions and durations. Remember to use the documentation to help you identify whether a wake lock name is app-held or held by another API.

The Android vitals excessive partial wake lock dashboard scrolled down to the breakdowns section to view excessive wake lock tags.

Debugging excessive wake locks held by workers/jobs

You can identify worker-held wake locks with this wake lock name:

*job*/<package_name>/androidx.work.impl.background.systemjob.SystemJobService

The full list of variations of worker-held wake lock names is available in the documentation. To debug these wake locks, you can use the Background Task Inspector locally, or leverage getStopReason to debug issues in the field.


Android Studio Background Task Inspector

Screen capture of the Background Task Inspector, where it has been able to identify a worker “WeatherSyncWorker” that has frequently retried and failed.

For local debugging of WorkManager issues, use this tool on an emulator or connected device (API level 26+). It shows a list of workers and their statuses (finished, executing, enqueued), allowing you to inspect details and understand worker chains.

For instance, it can reveal if a worker is frequently failing or retrying due to hitting system limitations.

See the Background Task Inspector documentation for more details.

WorkManager getStopReason

For in-field debugging of workers with excessive wake locks, use WorkInfo.getStopReason() on WorkManager 2.9.0+ or, for JobScheduler, JobParameters.getStopReason(), available on SDK 31+.

This API helps log the reason why a worker stopped (e.g., STOP_REASON_TIMEOUT, STOP_REASON_QUOTA), pinpointing issues like frequent timeouts due to exhausting runtime duration.

backgroundScope.launch {
    WorkManager.getInstance(context)
        .getWorkInfoByIdFlow(workRequest.id)
        .collect { workInfo ->
            logStopReason(workRequest.id, workInfo?.stopReason)
        }
}
    

Debugging other types of excessive wake locks

For more complex scenarios involving manually held wake locks or APIs holding the wake lock, we recommend using system trace collection to debug.

System trace collection

A system trace is a powerful debugging tool that captures a detailed record of system activity over a period, providing insights into CPU state, thread activity, network activity, and battery-related metrics like job duration and wake lock usage.

You can capture a system trace using several methods:

  • Enable the “power:PowerManagement” Atrace category in the Perfetto UI under the Android apps & svcs tab.

Regardless of the chosen method, it’s crucial to ensure that you are collecting the “power:PowerManagement” Atrace category to enable viewing of device state tracks.

    Perfetto UI inspection and SQL analysis

    System traces can be opened and inspected in the Perfetto UI. When you open the trace, you will see a visualization of various processes on a timeline. The tracks we will be focused on in this guide are the ones under “Device State”.

    Pin the tracks under “Device State” such as “Top app”, “Screen state”, “Long Wake locks”, and “Jobs” tracks to visually identify long-running wake lock slices.

    Each block lists the name of the event, when the event started, and when it ended. In Perfetto, this is called a slice.

    For scalable analysis of multiple traces, you can use Perfetto’s SQL analysis. A SQL query can find all wake locks sorted by duration, helping identify the top contributors to excessive usage.

    Here’s an example query summing all the wake lock tags that occurred in the system trace, ordered by total duration:

    SELECT slice.name AS name, track.name AS track_name,
           -- slice durations are in nanoseconds; divide by 1e6 for milliseconds
           SUM(dur) / 1000000 AS total_dur_ms
    FROM slice
    JOIN track ON slice.track_id = track.id
    WHERE track.name = 'WakeLocks'
    GROUP BY slice.name, track.name
    ORDER BY total_dur_ms DESC
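    The same aggregation can be mirrored in plain Kotlin over slices exported from a trace, which is handy for quick offline checks across many traces (the Slice type here is a hypothetical export of Perfetto’s slice table, with durations in nanoseconds to match its schema):

    ```kotlin
    // Hypothetical export of one row of Perfetto's slice table.
    data class Slice(val name: String, val durNs: Long)

    // Sum durations per wake lock tag, in milliseconds, sorted largest first --
    // the same shape of result as the SQL query in the text.
    fun totalWakeLockMs(slices: List<Slice>): List<Pair<String, Long>> =
        slices.groupBy { it.name }
            .map { (name, group) -> name to group.sumOf { it.durNs } / 1_000_000 }
            .sortedByDescending { it.second }

    fun main() {
        val slices = listOf(
            Slice("MyApp:SyncWakeLock", 4_000_000_000),
            Slice("MyApp:SyncWakeLock", 2_000_000_000),
            Slice("AudioMix", 1_500_000_000),
        )
        // Print per-tag totals in milliseconds, top contributor first.
        totalWakeLockMs(slices).forEach { (name, ms) -> println("$name: $ms ms") }
    }
    ```

    Sorting by total duration rather than slice count surfaces the tags that actually keep the device awake longest, which is what the excessive wake lock metric measures.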
    


    Use ProfilingManager for in-field trace collection

    For hard-to-reproduce issues, ProfilingManager (added in SDK 35) is a programmatic API that allows developers to collect system traces in the field with start and end triggers. It offers more control over the start and end trigger points for profile collection and enforces system-level rate limiting to prevent device performance impact. 

    Check out the ProfilingManager documentation for further steps on implementing in-field system trace collection, including how to programmatically capture a trace, analyze profiling data, and use local debug commands.

    The system traces collected using ProfilingManager will look similar to the ones collected manually, but system processes and other app processes are redacted from the trace.

    Conclusion

    The excessive partial wake lock metric in Android vitals is only a small part of our ongoing commitment to supporting developers in reducing battery drain and improving app quality. 

    By understanding and properly implementing wake locks, you can significantly optimize your app’s battery performance. Leveraging alternative APIs, adhering to wake lock best practices, and using powerful debugging tools such as the Background Task Inspector, system traces, and ProfilingManager are key to ensuring your app’s success on Google Play.

