Get ready for Google I/O May 19-20
Posted by The Google I/O Team


Google I/O returns May 19–20

Google I/O is back! Join us online as we share our latest AI breakthroughs and updates in products across the company, from Gemini to Android, Chrome, Cloud, and more.

Tune in to learn about agentic coding and the latest Gemini model updates. The event will feature keynote addresses from Google leaders, forward-looking panel discussions, and product demos designed to showcase the next frontier of technology.

Register now and tune in live

Visit io.google and register to receive updates about Google I/O. Kicking off May 19 at 10am PT, this year we’ll be livestreaming keynotes, demos, and more sessions across two days. We’ll also be bringing back the popular Dialogues sessions featuring big thinkers and bold leaders discussing how AI is shaping our future.


Under the hood: Android 17’s lock-free MessageQueue

Posted by Shai Barack, Android Platform Performance Lead and Charles Munger, Principal Software Engineer


In Android 17, apps targeting SDK 37 or higher will receive a new, lock-free implementation of MessageQueue. The new implementation improves performance and reduces missed frames, but may break clients that use reflection on MessageQueue’s private fields and methods. To learn more about the behavior change and how to mitigate its impact, check out the MessageQueue behavior change documentation. This technical blog post provides an overview of the MessageQueue rearchitecture and shows how you can analyze lock contention issues using Perfetto.

The Looper drives the UI thread of every Android application. It pulls work from a MessageQueue, dispatches it to a Handler, and repeats. For two decades, MessageQueue used a single monitor lock (i.e. a synchronized code block) to protect its state.
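The loop itself is conceptually simple. As a rough plain-Java analogue (not the Android implementation; the queue and class names here are only illustrative), the consumer repeatedly pulls units of work from a queue and executes them:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class LooperSketch {
    // A plain-Java analogue of the Looper loop: pull work, dispatch, repeat.
    // The real Android Looper blocks when idle and dispatches via a Handler.
    static int drain(Queue<Runnable> queue) {
        int dispatched = 0;
        Runnable task;
        while ((task = queue.poll()) != null) {
            task.run();          // "dispatch to a Handler" in the real Looper
            dispatched++;
        }
        return dispatched;
    }

    public static void main(String[] args) {
        Queue<Runnable> queue = new ArrayDeque<>();
        StringBuilder log = new StringBuilder();
        queue.add(() -> log.append("a"));
        queue.add(() -> log.append("b"));
        drain(queue);
        System.out.println(log);
    }
}
```

In the real platform, the queue is the MessageQueue discussed below, and it is shared between the Looper thread and every producer thread that posts to it, which is why its synchronization strategy matters so much.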

Android 17 introduces a significant update to this component: a lock-free implementation named DeliQueue.

This post explains how locks affect UI performance, how to analyze these issues with Perfetto, and the specific algorithms and optimizations used to improve the Android main thread.

The problem: lock contention and priority inversion

The legacy MessageQueue functioned as a priority queue protected by a single lock. If a background thread posts a message while the main thread performs queue maintenance, the background thread blocks the main thread.

When two or more threads compete for exclusive use of the same lock, this is called lock contention. Contention can cause priority inversion, leading to UI jank and other performance problems.

Priority inversion can happen when a high-priority thread (like the UI thread) is made to wait for a low-priority thread. Consider this sequence:

  1. A low priority background thread acquires the MessageQueue lock to post the result of work that it did.

  2. A medium priority thread becomes runnable and the Kernel’s scheduler allocates it CPU time, preempting the low priority thread.

  3. The high priority UI thread finishes its current task and attempts to read from the queue, but is blocked because the low priority thread holds the lock.

The low-priority thread blocks the UI thread, and the medium-priority work delays it further.

A visual representation of priority inversion. It shows 'Task L' (Low) holding a lock, blocking 'Task H' (High). 'Task M' (Medium) then preempts 'Task L', effectively delaying 'Task H' for the duration of 'Task M's' execution.

Analyzing contention with Perfetto

You can diagnose these issues using Perfetto. In a standard trace, a thread blocked on a monitor lock enters the sleeping state, and Perfetto shows a slice indicating the lock owner.

When you query trace data, look for slices named “monitor contention with …” followed by the name of the thread that owns the lock and the code site where the lock was acquired.

Case study: Launcher jank

To illustrate, let’s analyze a trace where a user experienced jank while navigating home on a Pixel phone immediately after taking a photo in the camera app. Below we see a screenshot of Perfetto showing the events leading up to the missed frame:


A Perfetto trace screenshot diagnosing the Launcher jank. The 'Actual Timeline' shows a red missed frame. Coinciding with this, the main thread track contains a large green slice labeled 'monitor contention with owner BackgroundExecutor,' indicating that the UI thread was blocked because a background thread held the MessageQueue lock.

  • Symptom: The Launcher main thread missed its frame deadline. It blocked for 18ms, which exceeds the 16ms deadline required for 60Hz rendering.

  • Diagnosis: Perfetto showed the main thread blocked on the MessageQueue lock. A “BackgroundExecutor” thread owned the lock.

  • Root Cause: The BackgroundExecutor runs at Process.THREAD_PRIORITY_BACKGROUND (very low priority). It performed a non-urgent task (checking app usage limits). Simultaneously, medium priority threads were using CPU time to process data from the camera. The OS scheduler preempted the BackgroundExecutor thread to run the camera threads.

This sequence caused the Launcher’s UI thread (high priority) to become indirectly blocked by the camera worker thread (medium priority), which was keeping the Launcher’s background thread (low priority) from releasing the lock.

Querying traces with PerfettoSQL

You can use PerfettoSQL to query trace data for specific patterns. This is useful if you have a large bank of traces from user devices or tests, and you’re searching for specific traces that demonstrate a problem.

For example, this query finds MessageQueue contention coincident with dropped frames (jank):

INCLUDE PERFETTO MODULE android.monitor_contention;
INCLUDE PERFETTO MODULE android.frames.jank_type;

SELECT
  process_name,
  -- Convert duration from nanoseconds to milliseconds
  SUM(dur) / 1000000 AS sum_dur_ms,
  COUNT(*) AS count_contention
FROM android_monitor_contention
WHERE is_blocked_thread_main
AND short_blocked_method LIKE "%MessageQueue%" 

-- Only look at app processes that had jank
AND upid IN (
  SELECT DISTINCT(upid)
  FROM actual_frame_timeline_slice
  WHERE android_is_app_jank_type(jank_type) = TRUE
)
GROUP BY process_name
ORDER BY SUM(dur) DESC;

This more complex example joins trace data that spans multiple tables to identify MessageQueue contention during app startup:

INCLUDE PERFETTO MODULE android.monitor_contention; 
INCLUDE PERFETTO MODULE android.startup.startups; 

-- Join package and process information for startups
DROP VIEW IF EXISTS startups; 
CREATE VIEW startups AS 
SELECT startup_id, ts, dur, upid 
FROM android_startups 
JOIN android_startup_processes USING(startup_id); 

-- Intersect monitor contention with startups in the same process.
DROP TABLE IF EXISTS monitor_contention_during_startup; 
CREATE VIRTUAL TABLE monitor_contention_during_startup 
USING SPAN_JOIN(android_monitor_contention PARTITIONED upid, startups PARTITIONED upid); 

SELECT 
  process_name, 
  SUM(dur) / 1000000 AS sum_dur_ms, 
  COUNT(*) AS count_contention 
FROM monitor_contention_during_startup 
WHERE is_blocked_thread_main 
AND short_blocked_method LIKE "%MessageQueue%" 
GROUP BY process_name 
ORDER BY SUM(dur) DESC;

You can use your favorite LLM to write PerfettoSQL queries to find other patterns.

At Google, we use BigTrace to run PerfettoSQL queries across millions of traces. In doing so, we confirmed that what we saw anecdotally was, in fact, a systemic issue. The data revealed that MessageQueue lock contention impacts users across the entire ecosystem, substantiating the need for a fundamental architectural change.

Solution: lock-free concurrency

We addressed the MessageQueue contention problem by implementing a lock-free data structure, using atomic memory operations rather than exclusive locks to synchronize access to shared state. A data structure or algorithm is lock-free if at least one thread can always make progress regardless of the scheduling behavior of the other threads. This property is generally hard to achieve, and is usually not worth pursuing for most code.

The atomic primitives

Lock-free software often relies on atomic Read-Modify-Write primitives that the hardware provides.

On older generation ARM64 CPUs, atomics used a Load-Link/Store-Conditional (LL/SC) loop. The CPU loads a value and marks the address. If another thread writes to that address, the store fails, and the loop retries. Because the threads can keep trying and succeed without waiting for another thread, this operation is lock-free.

ARM64 LL/SC loop example
retry:
    ldxr    x0, [x1]        // Load exclusive from address x1 to x0
    add     x0, x0, #1      // Increment value by 1
    stxr    w2, x0, [x1]    // Store exclusive.
                            // w2 gets 0 on success, 1 on failure
    cbnz    w2, retry       // If w2 is non-zero (failed), branch to retry


(view in Compiler Explorer)


Newer ARM architectures (ARMv8.1) support Large System Extensions (LSE) which include instructions in the form of Compare-And-Swap (CAS) or Load-And-Add (demonstrated below). In Android 17 we added support to the Android Runtime (ART) compiler to detect when LSE is supported and emit optimized instructions:

// ARMv8.1 LSE atomic example
ldadd   x0, x1, [x2]    // Atomic load-add.
                        // Faster, no loop required.

In our benchmarks, high-contention code that uses CAS achieves a ~3x speedup over the LL/SC variant.

The Java programming language offers atomic primitives via java.util.concurrent.atomic that rely on these and other specialized CPU instructions.
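To make this concrete, here is a small sketch (illustrative class and method names, not platform code) showing both styles: a CAS retry loop, which mirrors the LL/SC pattern above, and getAndAdd, which ART can lower to a single LSE instruction on ARMv8.1 hardware:

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicDemo {
    // CAS retry loop: the Java-level mirror of the LL/SC assembly above.
    // If another thread changes the value between get() and compareAndSet(),
    // the CAS fails and we retry -- no lock is ever taken.
    static long casIncrement(AtomicLong counter) {
        long old;
        do {
            old = counter.get();
        } while (!counter.compareAndSet(old, old + 1));
        return old + 1;
    }

    // getAndAdd is a single atomic read-modify-write; on LSE-capable CPUs
    // it can compile down to one ldadd instruction instead of a loop.
    static long incrementAll(AtomicLong counter, int times) {
        for (int i = 0; i < times; i++) {
            counter.getAndAdd(1);
        }
        return counter.get();
    }
}
```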

The Data Structure: DeliQueue

To remove lock contention from MessageQueue, our engineers designed a novel data structure called DeliQueue. DeliQueue separates Message insertion from Message processing:

  1. The list of Messages (Treiber stack): A lock-free stack. Any thread can push new Messages here without contention.

  2. The priority queue (Min-heap): A heap of Messages to handle, exclusively owned by the Looper thread (hence no synchronization or locks are needed to access).

Enqueue: pushing to a Treiber stack

The list of Messages is kept in a Treiber stack [1], a lock-free stack that uses a CAS loop to update the head pointer.

import java.util.concurrent.atomic.AtomicReference;

public class TreiberStack<E> {
    private static class Node<E> {
        final E item;
        Node<E> next;
        Node(E item) { this.item = item; }
    }

    private final AtomicReference<Node<E>> top = new AtomicReference<>();

    public void push(E item) {
        Node<E> newHead = new Node<>(item);
        Node<E> oldHead;
        do {
            oldHead = top.get();
            newHead.next = oldHead;
        } while (!top.compareAndSet(oldHead, newHead));
    }

    public E pop() {
        Node<E> oldHead;
        Node<E> newHead;
        do {
            oldHead = top.get();
            if (oldHead == null) return null;
            newHead = oldHead.next;
        } while (!top.compareAndSet(oldHead, newHead));
        return oldHead.item;
    }
}

Source code based on Java Concurrency in Practice [2], available online and released to the public domain

Any producer can push new Messages to the stack at any time. This is like pulling a ticket at a deli counter – your number is determined by when you showed up, but the order you get your food in doesn’t have to match. Because it’s a linked stack, every Message is a sub-stack – you can see what the Message queue was like at any point in time by tracking the head and iterating forwards – you won’t see any new Messages pushed on top, even if they’re being added during your traversal.

Dequeue: bulk transfer to a min-heap

To find the next Message to handle, the Looper processes new Messages from the Treiber stack by walking the stack starting from the top and iterating until it finds the last Message that it previously processed. As the Looper traverses down the stack, it inserts Messages into the deadline-ordered min-heap. Since the Looper exclusively owns the heap, it orders and processes Messages without locks or atomics.
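Under simplifying assumptions (plain Java, a java.util.PriorityQueue standing in for the deadline-ordered heap, and a single getAndSet claiming the whole stack rather than walking back to the last processed Message), the dequeue step can be sketched as:

```java
import java.util.PriorityQueue;
import java.util.concurrent.atomic.AtomicReference;

public class DrainSketch {
    static class Msg {
        final long when;   // delivery deadline
        Msg next;          // link down the stack
        Msg(long when) { this.when = when; }
    }

    // Producers push onto the shared stack head with a CAS loop (Treiber stack).
    static void push(AtomicReference<Msg> top, Msg m) {
        Msg old;
        do {
            old = top.get();
            m.next = old;
        } while (!top.compareAndSet(old, m));
    }

    // The Looper claims all pending Messages with one atomic swap, then walks
    // them and inserts each into the min-heap it exclusively owns -- so the
    // heap itself needs no locks or atomics.
    static void drain(AtomicReference<Msg> top, PriorityQueue<Msg> heap) {
        Msg m = top.getAndSet(null);
        while (m != null) {
            heap.add(m);
            m = m.next;
        }
    }
}
```

The heap then hands Messages to the Looper in deadline order, regardless of the order in which producers pushed them.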

A system diagram illustrating the DeliQueue architecture. Concurrent producer threads (left) push messages onto a shared 'Lock-Free Treiber Stack' using atomic CAS operations. The single consumer 'Looper Thread' (right) claims these messages via an atomic swap, merges them into a private 'Local Min-heap' sorted by timestamp, and then executes them.


In walking down the stack, the Looper also creates links from stacked Messages back to their predecessors, thus forming a doubly-linked list. Creating the linked list is safe because links pointing down the stack are added via the Treiber stack algorithm with CAS, and links up the stack are only ever read and modified by the Looper thread. These back links are then used to remove Messages from arbitrary points in the stack in O(1) time.

This design provides O(1) insertion for producers (threads posting work to the queue) and amortized O(log N) processing for the consumer (the Looper).

Using a min-heap to order Messages also addresses a fundamental flaw in the legacy MessageQueue, where Messages were kept in a singly-linked list (rooted at the top). In the legacy implementation, removal from the head was O(1), but insertion had a worst case of O(N) – scaling poorly for overloaded queues! Conversely, insertion to and removal from the min-heap scale logarithmically, delivering competitive average performance but really excelling in tail latencies.


Operation        | Legacy (locked) MessageQueue | DeliQueue
Insert           | O(N)                         | O(1) for calling thread; O(log N) for Looper thread
Remove from head | O(1)                         | O(log N)

In the legacy queue implementation, producers and the consumer used a lock to coordinate exclusive access to the underlying singly-linked list. In DeliQueue, the Treiber stack handles concurrent access, and the single consumer handles ordering its work queue.

Removal: consistency via tombstones

DeliQueue is a hybrid data structure, joining a lock-free Treiber stack with a single-threaded min-heap. Keeping these two structures in sync without a global lock presents a unique challenge: a message might be physically present in the stack but logically removed from the queue.

To solve this, DeliQueue uses a technique called “tombstoning.” Each Message tracks its position in the stack via the backwards and forwards pointers, its index in the heap’s array, and a boolean flag indicating whether it has been removed. When a Message is ready to run, the Looper thread will CAS its removed flag, then remove it from the heap and stack.

When another thread needs to remove a Message, it doesn’t immediately extract it from the data structure. Instead, it performs the following steps:

  1. Logical removal: the thread uses a CAS to atomically set the Message’s removal flag from false to true. The Message remains in the data structure as evidence of its pending removal, a so-called “tombstone”. Once a Message is flagged for removal, DeliQueue treats it as if it no longer exists in the queue whenever it’s found.

  2. Deferred cleanup: The actual removal from the data structure is the responsibility of the Looper thread, and is deferred until later. Rather than modifying the stack or heap, the remover thread adds the Message to another lock-free freelist stack.

  3. Structural removal: Only the Looper can interact with the heap or remove elements from the stack. When it wakes up, it clears the freelist and processes the Messages it contained. Each Message is then unlinked from the stack and removed from the heap. 


This approach keeps all management of the heap single-threaded. It minimizes the number of concurrent operations and memory barriers required, making the critical path faster and simpler.
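The logical-removal step can be sketched as a CAS on a removed flag (illustrative names; a ConcurrentLinkedDeque stands in for DeliQueue's lock-free freelist). Whichever thread wins the CAS owns the cancellation; structural unlinking is deferred to the Looper:

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicBoolean;

public class TombstoneSketch {
    static class Msg {
        final AtomicBoolean removed = new AtomicBoolean(false);
    }

    // Stand-in for DeliQueue's lock-free freelist of pending tombstones.
    static final ConcurrentLinkedDeque<Msg> freelist = new ConcurrentLinkedDeque<>();

    // Steps 1 and 2: logical removal via CAS, then hand-off to the Looper.
    // Only the single thread that wins the CAS enqueues the tombstone.
    static boolean cancel(Msg m) {
        if (m.removed.compareAndSet(false, true)) {
            freelist.add(m);   // deferred cleanup: Looper unlinks it later
            return true;
        }
        return false;          // already a tombstone; treated as gone
    }
}
```

Because the CAS can succeed for at most one caller, a Message can never be cancelled twice, and the Looper processes each tombstone exactly once.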

Traversal: benign Java memory model data races

Most concurrency APIs, such as Future in the Java standard library, or Kotlin’s Job and Deferred, include a mechanism to cancel work before it completes. An instance of one of these classes matches 1:1 with a unit of underlying work, and calling cancel on an object cancels the specific operations associated with it.


Today’s Android devices have multi-core CPUs and concurrent, generational garbage collection. But when Android was first developed, it was too expensive to allocate one object for each unit of work. Consequently, Android’s Handler supports cancellation via numerous overloads of removeMessages: rather than removing a specific Message, it removes all Messages that match the specified criteria. In practice, this requires iterating through all Messages inserted before removeMessages was called and removing the ones that match.


When iterating forward, a thread requires only one ordered atomic operation: reading the current head of the stack. After that, ordinary field reads are used to find the next Message. If the Looper thread modifies the next fields while removing Messages, the Looper’s write and another thread’s read are unsynchronized – this is a data race. Normally, a data race is a serious bug that can cause huge problems in your app – leaks, infinite loops, crashes, freezes, and more. However, under certain narrow conditions, data races can be benign within the Java Memory Model. Suppose we start with a stack of:

A diagram showing the initial state of the message stack as a linked list. The 'Head' points to 'Message A', which links sequentially to 'Message B', 'Message C', and 'Message D'.


We perform an atomic read of the head and see A. A’s next pointer points to B. At the same time as we process B, the Looper might remove B and C by updating A to point to C and then to D.

A diagram illustrating a benign data race during list traversal. The Looper thread has updated 'Message A' to point directly to 'Message D', effectively removing 'Message B' and 'Message C'. Simultaneously, a concurrent thread reads a stale 'next' pointer from A, traverses through the logically removed messages B and C, and eventually rejoins the live list at 'Message D'.


Even though B and C are logically removed, B retains its next pointer to C, and C to D. The reading thread continues traversing through the detached removed nodes and eventually rejoins the live stack at D.
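The rejoining behavior can be demonstrated deterministically in plain Java (single-threaded here for clarity; the class is illustrative, not platform code). The point is that a stale next pointer still leads back to the live list:

```java
public class TraversalSketch {
    static class Node {
        final String name;
        Node next;
        Node(String name) { this.name = name; }
    }

    // Follow next pointers from start, concatenating node names.
    static String walk(Node start) {
        StringBuilder path = new StringBuilder();
        for (Node n = start; n != null; n = n.next) {
            path.append(n.name);
        }
        return path.toString();
    }

    public static void main(String[] args) {
        Node a = new Node("A"), b = new Node("B"), c = new Node("C"), d = new Node("D");
        a.next = b; b.next = c; c.next = d;   // head -> A -> B -> C -> D

        // A reader already loaded B via A's (soon-to-be-stale) next pointer...
        Node reader = b;
        // ...while the Looper unlinks B and C by pointing A straight at D.
        a.next = d;

        // B still points to C, and C to D: the traversal rejoins the live list.
        System.out.println(walk(reader));   // prints "BCD"
    }
}
```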


By designing DeliQueue to handle races between traversal and removal, we allow for safe, lock-free iteration.

Quitting: native refcount

Looper is backed by a native allocation that must be manually freed once the Looper has quit. If some other thread is adding Messages while the Looper is quitting, it could use the native allocation after it’s freed, a memory safety violation. We prevent this using a tagged refcount, where one bit of the atomic is used to indicate whether the Looper is quitting.


Before using the native allocation, a thread reads the refcount atomic. If the quitting bit is set, the operation reports that the Looper is quitting and the native allocation must not be used. If not, the thread attempts a CAS to increment the number of active threads using the native allocation. After doing what it needs to, it decrements the count. If the quitting bit was set after its increment but before the decrement, and the count is now zero, it wakes up the Looper thread.

When the Looper thread is ready to quit, it uses CAS to set the quitting bit in the atomic. If the refcount was 0, it can proceed to free its native allocation. Otherwise, it parks itself, knowing that it will be woken up when the last user of the native allocation decrements the refcount. This approach does mean that the Looper thread waits for the progress of other threads, but only when it’s quitting. That only happens once and is not performance sensitive, and it keeps the other code for using the native allocation fully lock-free.
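A sketch of the tagged refcount, assuming one high bit serves as the quitting flag (illustrative names; the real implementation also parks and wakes the Looper thread, omitted here):

```java
import java.util.concurrent.atomic.AtomicLong;

public class TaggedRefcount {
    // Top bit flags teardown; the remaining bits count active users.
    static final long QUITTING_BIT = 1L << 63;
    final AtomicLong state = new AtomicLong(0);

    // A worker pins the native allocation before using it.
    // Fails once the Looper has started quitting.
    boolean tryAcquire() {
        long s;
        do {
            s = state.get();
            if ((s & QUITTING_BIT) != 0) return false;  // Looper is quitting
        } while (!state.compareAndSet(s, s + 1));
        return true;
    }

    // Unpin. Returns true if this was the last user while draining,
    // i.e. the parked Looper should be woken to free the allocation.
    boolean release() {
        return state.addAndGet(-1) == QUITTING_BIT;
    }

    // Looper sets the quitting bit. Returns true if there were no active
    // users, so it may free the native allocation immediately.
    boolean quit() {
        long s;
        do {
            s = state.get();
        } while (!state.compareAndSet(s, s | QUITTING_BIT));
        return (s & ~QUITTING_BIT) == 0;
    }
}
```

Note that workers never block: every path through tryAcquire and release is a CAS loop or a single atomic add, so the fast path stays lock-free.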

A state diagram illustrating the tagged refcount mechanism for safe termination. It defines three states based on the atomic value's layout (Bit 63 for teardown, Bits 0-62 for refcount):  Active (Green): The teardown bit is 0. Workers successfully increment and decrement the reference count.  Draining (Yellow): The Looper has set the teardown bit to 1. New worker increments fail, but existing workers continue to decrement.  Terminated (Red): Occurs when the reference count reaches 0 while draining. The Looper is signaled that it is safe to destroy the native allocation.



There are many other tricks and subtleties in the implementation. You can learn more about DeliQueue by reviewing the source code.

Optimization: branchless programming

While developing and testing DeliQueue, the team ran many benchmarks and carefully profiled the new code. One issue identified using the simpleperf tool was pipeline flushes caused by the Message comparator code.

A standard comparator uses conditional jumps, with the condition for deciding which Message comes first simplified below:

static int compareMessages(@NonNull Message m1, @NonNull Message m2) {
    if (m1 == m2) {
        return 0;
    }

    // Primary queue order is by when.
    // Messages with an earlier when should come first in the queue.
    final long whenDiff = m1.when - m2.when;
    if (whenDiff > 0) return 1;
    if (whenDiff < 0) return -1;

    // Secondary queue order is by insert sequence.
    // If two messages were inserted with the same `when`, the one inserted
    // first should come first in the queue.
    final long insertSeqDiff = m1.insertSeq - m2.insertSeq;
    if (insertSeqDiff > 0) return 1;
    if (insertSeqDiff < 0) return -1;

    return 0;
}


This code compiles to conditional jumps (b.le and cbnz instructions). When the CPU encounters a conditional branch, it can’t know whether the branch is taken until the condition is computed, so it doesn’t know which instruction to read next, and has to guess, using a technique called branch prediction. In a case like binary search, the branch direction will be unpredictably different at each step, so it’s likely that half the predictions will be wrong. Branch prediction is often ineffective in searching and sorting algorithms (such as the one used in a min-heap), because the cost of guessing wrong is larger than the improvement from guessing correctly. When the branch predictor guesses wrong, it must throw away the work it did after assuming the predicted value, and start again from the path that was actually taken – this is called a pipeline flush.

To find this issue, we profiled our benchmarks using the branch-misses performance counter, which records stack traces where the branch predictor guesses wrong. We then visualized the results with Google pprof, as shown below:

A screenshot from the pprof web UI showing branch misses in MessageQueue code to compare Message instances while performing heap operations.

Recall that the original MessageQueue code used a singly-linked list for the ordered queue. Insertion traversed the list in sorted order as a linear search, stopping at the first element past the point of insertion and linking the new Message ahead of it. Removal from the head simply required unlinking the head. DeliQueue, by contrast, uses a min-heap, where mutations require reordering some elements (sifting up or down) with logarithmic complexity; in a balanced data structure, any comparison has an even chance of directing the traversal to a left child or a right child. The new algorithm is asymptotically faster, but exposes a new bottleneck: the search code stalls on branch misses half the time.

Realizing that branch misses were slowing down our heap code, we optimized the code using branch-free programming:

// Branchless Logic
static int compareMessages(@NonNull Message m1, @NonNull Message m2) {
    final long when1 = m1.when;
    final long when2 = m2.when;
    final long insertSeq1 = m1.insertSeq;
    final long insertSeq2 = m2.insertSeq;

    // signum returns the sign (-1, 0, 1) of the argument,
    // and is implemented as pure arithmetic:
    // ((num >> 63) | (-num >>> 63))
    final int whenSign = Long.signum(when1 - when2);
    final int insertSeqSign = Long.signum(insertSeq1 - insertSeq2);

    // whenSign takes precedence over insertSeqSign,
    // so the formula below is such that insertSeqSign only matters
    // as a tie-breaker if whenSign is 0.
    return whenSign * 2 + insertSeqSign;
}
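The two comparators must agree on sign for every input. A quick property check, using standalone copies operating on raw long fields (illustrative helpers, not the platform code, since Message isn't available off-device):

```java
public class CompareCheck {
    // Branchy version, mirroring the original comparator.
    static int branchy(long when1, long seq1, long when2, long seq2) {
        long whenDiff = when1 - when2;
        if (whenDiff > 0) return 1;
        if (whenDiff < 0) return -1;
        long seqDiff = seq1 - seq2;
        if (seqDiff > 0) return 1;
        if (seqDiff < 0) return -1;
        return 0;
    }

    // Branchless version: whenSign dominates, insertSeqSign breaks ties.
    static int branchless(long when1, long seq1, long when2, long seq2) {
        int whenSign = Long.signum(when1 - when2);
        int insertSeqSign = Long.signum(seq1 - seq2);
        return whenSign * 2 + insertSeqSign;
    }
}
```

Note that the branchless version returns values in the range -3 to 3 rather than just -1, 0, 1; a comparator contract only requires the sign to be correct, and whenSign * 2 guarantees the when comparison always outweighs the insertSeq tie-breaker.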

To understand the optimization, disassemble the two examples in Compiler Explorer and use LLVM-MCA, a CPU simulator that can generate an estimated timeline of CPU cycles.

The original code:
Index     01234567890123
[0,0]     DeER .    .  .   sub  x0, x2, x3
[0,1]     D=eER.    .  .   cmp  x0, #0
[0,2]     D==eER    .  .   cset w0, ne
[0,3]     .D==eER   .  .   cneg w0, w0, lt
[0,4]     .D===eER  .  .   cmp  w0, #0
[0,5]     .D====eER .  .   b.le #12
[0,6]     . DeE---R .  .   mov  w1, #1
[0,7]     . DeE---R .  .   b    #48
[0,8]     . D==eE-R .  .   tbz  w0, #31, #12
[0,9]     .  DeE--R .  .   mov  w1, #-1
[0,10]    .  DeE--R .  .   b    #36
[0,11]    .  D=eE-R .  .   sub  x0, x4, x5
[0,12]    .   D=eER .  .   cmp  x0, #0
[0,13]    .   D==eER.  .   cset w0, ne
[0,14]    .   D===eER  .   cneg w0, w0, lt
[0,15]    .    D===eER .   cmp  w0, #0
[0,16]    .    D====eER.   csetm        w1, lt
[0,17]    .    D===eE-R.   cmp  w0, #0
[0,18]    .    .D===eER.   csinc        w1, w1, wzr, le
[0,19]    .    .D====eER   mov  x0, x1
[0,20]    .    .DeE----R   ret

Note the conditional branch b.le, which avoids comparing the insertSeq fields if the result is already known from comparing the when fields.

The branchless code:
Index     012345678
[0,0]     DeER .  .   sub       x0, x2, x3
[0,1]     DeER .  .   sub       x1, x4, x5
[0,2]     D=eER.  .   cmp       x0, #0
[0,3]     .D=eER  .   cset      w0, ne
[0,4]     .D==eER .   cneg      w0, w0, lt
[0,5]     .DeE--R .   cmp       x1, #0
[0,6]     . DeE-R .   cset      w1, ne
[0,7]     . D=eER .   cneg      w1, w1, lt
[0,8]     . D==eeER   add       w0, w1, w0, lsl #1
[0,9]     .  DeE--R   ret


Here, the branchless implementation takes fewer cycles and instructions than even the shortest path through the branchy code – it’s better in all cases. The faster implementation plus the elimination of mispredicted branches resulted in a 5x improvement in some of our benchmarks!


However, this technique is not always applicable. Branchless approaches generally require doing work that will be thrown away, and if the branch is predictable most of the time, that wasted work can slow your code down. In addition, removing a branch often introduces a data dependency. Modern CPUs execute multiple operations per cycle, but they can’t execute an instruction until its inputs from a previous instruction are ready. In contrast, a CPU can speculate about data in branches, and work ahead if a branch is predicted correctly.

Testing and Validation

Validating the correctness of lock-free algorithms is notoriously difficult!

In addition to standard unit tests for continuous validation during development, we also wrote rigorous stress tests to verify queue invariants and to attempt to induce data races if they existed. In our test labs we could run millions of test instances on emulated devices and on real hardware.

With Java ThreadSanitizer (JTSan) instrumentation, we could use the same tests to also detect some data races in our code. JTSan did not find any problematic data races in DeliQueue, but, surprisingly, it did detect two concurrency bugs in the Robolectric framework, which we promptly fixed.

To improve our debugging capabilities, we built new analysis tools. Below is an example showing an issue in Android platform code where one thread is overloading another thread with Messages, causing a large backlog, visible in Perfetto thanks to the MessageQueue instrumentation feature that we added.

A screenshot of Perfetto UI, demonstrating flows and metadata for Messages being posted to a MessageQueue and delivered to a worker thread.

To enable MessageQueue tracing in the system_server process, include the following in your Perfetto configuration:

data_sources {
  config {
    name: "track_event"
    target_buffer: 0  # Change this per your buffers configuration
    track_event_config {
      enabled_categories: "mq"
    }
  }
}

Impact

DeliQueue improves system and app performance by eliminating locks from MessageQueue.

  • Synthetic benchmarks: multi-threaded insertion into busy queues is up to 5,000x faster than with the legacy MessageQueue, thanks to improved concurrency (the Treiber stack) and faster insertions (the min-heap).

  • In Perfetto traces acquired from internal beta testers, we see a reduction of 15% in app main thread time spent in lock contention.

  • On the same test devices, the reduced lock contention leads to significant improvements to the user experience, such as:

    • -4% missed frames in apps.

    • -7.7% missed frames in System UI and Launcher interactions.

    • -9.1% in time from app startup to the first frame drawn, at the 95th percentile.

Next steps

DeliQueue is rolling out to apps in Android 17. App developers should review preparing your app for the new lock-free MessageQueue on the Android Developers blog to learn how to test their apps.

References

[1] Treiber, R.K., 1986. Systems programming: Coping with parallelism. International Business Machines Incorporated, Thomas J. Watson Research Center.

[2] Goetz, B., Peierls, T., Bloch, J., Bowbeer, J., Holmes, D., & Lea, D. (2006). Java Concurrency in Practice. Addison-Wesley Professional.




Prepare your app for the resizability and orientation changes in Android 17
Posted by Miguel Montemayor, Developer Relations Engineer, Android 

With the release of Android 16 in 2025, we shared our vision for a device ecosystem where apps adapt seamlessly to any screen—whether it’s a phone, foldable, tablet, desktop, car display, or XR. Users expect their apps to work everywhere. Whether multitasking on a tablet, unfolding a device to read comfortably, or running apps in a desktop windowing environment, users expect the UI to fill the available display space and adapt to the device posture.

We introduced significant changes to orientation and resizability APIs to facilitate adaptive behavior, while providing a temporary opt-out to ease the transition. We’ve already seen many developers successfully make this transition when targeting API level 36.

Now with the release of the Android 17 Beta, we’re moving to the next phase of our adaptive roadmap: Android 17 (API level 37) removes the developer opt-out for orientation and resizability restrictions on large screen devices (sw ≥ 600 dp). When you target API level 37, your app must be capable of adapting to a variety of display sizes.

The behavior changes ensure that the Android ecosystem offers a consistent, high-quality experience on all device form factors.

What’s changing in Android 17

Apps targeting Android 17 must ensure their compatibility with the phase-out of manifest attributes and runtime APIs introduced in Android 16. We understand that for some apps this may be a big transition, so we’ve included best practices and tools to help you avoid common issues later in this blog post.

No new changes have been introduced since Android 16, but the developer opt-out is no longer possible. As a reminder: when your app is running on a large screen—where large screen means that the smaller dimension of the display is greater than or equal to 600 dp—the following manifest attributes and APIs are ignored:

Note: As previously mentioned with Android 16, these changes do not apply for screens that are smaller than sw 600 dp or apps categorized as games based on the android:appCategory flag.

  • screenOrientation: ignores portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape

  • setRequestedOrientation(): ignores the same values as screenOrientation

  • resizeableActivity: all values ignored

  • minAspectRatio: all values ignored

  • maxAspectRatio: all values ignored

Also, users retain control. In the aspect ratio settings, users can explicitly opt in to using the app’s requested behavior.

Prepare your app

Apps will need to support both landscape and portrait layouts across the full range of display sizes and aspect ratios in which users can choose to run them, including resizable windows, because there will no longer be a way to restrict an app to a portrait or landscape orientation or to a fixed aspect ratio.

Test your app

Your first step is to test your app with these changes to make sure the app works well across display sizes.

Use Android 17 Beta 1 with the Pixel Tablet and Pixel Fold series emulators in Android Studio, and set targetSdkPreview = "CinnamonBun". Alternatively, if your app does not yet target API level 36, you can use the app compatibility framework by enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag.
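For reference, here is a minimal Gradle sketch of the preview target. The property names follow the Android Gradle plugin’s preview-SDK conventions; treat the exact shape as an assumption to verify against your AGP version:

```kotlin
// Module-level build.gradle.kts sketch for targeting the Android 17 preview.
// compileSdkPreview/targetSdkPreview take the preview codename instead of an API level.
android {
    compileSdkPreview = "CinnamonBun"

    defaultConfig {
        targetSdkPreview = "CinnamonBun"
    }
}
```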

We have additional tools to ensure your layouts adapt correctly. You can automatically audit your UI and get suggestions to make your UI more adaptive with Compose UI Check, and simulate specific display characteristics in your tests using DeviceConfigurationOverride.
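As an illustration, here is a hedged sketch of a Compose test that uses DeviceConfigurationOverride to simulate a tablet-sized window. MyScreen and the test rule setup are assumptions standing in for your own project’s code:

```kotlin
import androidx.compose.ui.test.DeviceConfigurationOverride
import androidx.compose.ui.test.ForcedSize
import androidx.compose.ui.unit.DpSize
import androidx.compose.ui.unit.dp
import org.junit.Test

class AdaptiveLayoutTest {
    // composeTestRule is assumed to be declared with createComposeRule()

    @Test
    fun screen_adaptsToTabletWindow() {
        composeTestRule.setContent {
            DeviceConfigurationOverride(
                // Simulate a 1280 x 800 dp window, roughly a 10" tablet in landscape
                DeviceConfigurationOverride.ForcedSize(DpSize(1280.dp, 800.dp))
            ) {
                MyScreen() // hypothetical composable under test
            }
        }
        // Assert on the adaptive layout here, e.g. that a two-pane UI is shown
    }
}
```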

For apps that have historically restricted orientation and aspect ratio, we commonly see issues with skewed or misoriented camera previews, stretched layouts, inaccessible buttons, or loss of user state when handling configuration changes. 

Let’s take a look at some strategies for addressing these common issues.

Ensure camera compatibility

A common problem, seen on landscape foldables and in aspect ratio calculations for scenarios like multi-window, desktop windowing, or connected displays, is a camera preview that appears stretched, rotated, or cropped.

Ensure your camera preview isn’t stretched or rotated.

This issue often happens on large screen and foldable devices because apps assume fixed relationships between camera features (like aspect ratio and sensor orientation) and device features (like device orientation and natural orientation).

To ensure your camera preview adapts correctly to any window size or orientation, consider these four solutions:

Solution 1: Jetpack CameraX (preferred) 

The simplest and most robust solution is to use the Jetpack CameraX library. Its PreviewView UI element is designed to handle all preview complexities automatically:

  • PreviewView correctly adjusts for sensor orientation, device rotation, and scaling

  • PreviewView maintains the aspect ratio of the camera image, typically by centering and cropping (FILL_CENTER)

  • You can set the scale type to FIT_CENTER to letterbox the preview if needed

For more information, see Implement a preview in the CameraX documentation.
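A minimal CameraX preview sketch under those assumptions follows; previewView is a PreviewView from your layout and lifecycleOwner is your activity or fragment, both supplied by your own code:

```kotlin
// Bind a CameraX Preview use case to a PreviewView, which handles sensor
// orientation, device rotation, and scaling automatically.
fun startPreview(context: Context, lifecycleOwner: LifecycleOwner, previewView: PreviewView) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()
        val preview = Preview.Builder().build().apply {
            surfaceProvider = previewView.surfaceProvider
        }
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview
        )
    }, ContextCompat.getMainExecutor(context))
}
```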

Solution 2: CameraViewfinder 

If you are using an existing Camera2 codebase, the CameraViewfinder library (backward compatible to API level 21) is another modern solution. It simplifies displaying the camera feed by using a TextureView or SurfaceView and applying all the necessary transformations (aspect ratio, scale, and rotation) for you.

For more information, see the Introducing Camera Viewfinder blog post and Camera preview developer guide.

Solution 3: Manual Camera2 implementation 

If you can’t use CameraX or CameraViewfinder, you must manually calculate the orientation and aspect ratio and ensure the calculations are updated on each configuration change:

  • Get the camera sensor orientation (for example, 0, 90, 180, 270 degrees) from CameraCharacteristics

  • Get the device’s current display rotation (for example, 0, 90, 180, 270 degrees)

  • Use the camera sensor orientation and display rotation values to determine the necessary transformations for your SurfaceView or TextureView

  • Ensure the aspect ratio of your output Surface matches the aspect ratio of the camera preview to prevent distortion
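The orientation step above can be sketched as a pure function, following the relative-rotation calculation in the Camera2 preview guide (the parameter shapes here are simplified for illustration):

```kotlin
// Sketch of the relative-rotation calculation for a manual Camera2 preview.
// sensorOrientationDegrees comes from CameraCharacteristics.SENSOR_ORIENTATION;
// displayRotationDegrees is the current display rotation converted to degrees.
fun computeRelativeRotation(
    sensorOrientationDegrees: Int,
    displayRotationDegrees: Int,
    lensFacingFront: Boolean
): Int {
    // Front-facing previews are mirrored, so the display rotation is applied
    // in the opposite direction relative to back-facing cameras.
    val sign = if (lensFacingFront) 1 else -1
    return (sensorOrientationDegrees - displayRotationDegrees * sign + 360) % 360
}
```

For example, a back camera with a 90-degree sensor orientation on a display rotated 90 degrees needs a 180-degree transform.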

Important: Note the camera app might be running in a portion of the screen, either in multi-window or desktop windowing mode or on a connected display. For this reason, screen size should not be used to determine the dimensions of the camera viewfinder; use window metrics instead. Otherwise you risk a stretched camera preview.

For more information, see the Camera preview developer guide and Your Camera app on different form factors video.

Solution 4: Perform basic camera actions using an Intent 

If you don’t need many camera features, a simple and straightforward solution is to perform basic camera actions like capturing a photo or video using the device’s default camera application. In this case, you can simply use an Intent instead of integrating with a camera library, for easier maintenance and adaptability. 

For more information, see Camera intents.
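For instance, a sketch using the Activity Result API to capture a photo with the system camera app; photoUri is a content URI you create yourself (e.g., via FileProvider), and the names here are illustrative:

```kotlin
// In an Activity or Fragment: launch the default camera app to take a picture.
private val takePicture =
    registerForActivityResult(ActivityResultContracts.TakePicture()) { saved ->
        if (saved) {
            // The full-size photo was written to photoUri; display or upload it here
        }
    }

fun capturePhoto(photoUri: Uri) {
    takePicture.launch(photoUri)
}
```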

Avoid stretched UI or inaccessible buttons

If your app assumes a specific device orientation or display aspect ratio, the app may run into issues when it’s now used across various orientations or window sizes.

Ensure buttons, text fields, and other elements aren’t stretched on large screens.

You may have set buttons, text fields, and cards to fillMaxWidth or match_parent. On a phone, this looks great. However, on a tablet or foldable in landscape, UI elements stretch across the entire large screen. In Jetpack Compose, you can use the widthIn modifier to set a maximum width for components to avoid stretched content:

Box(
    contentAlignment = Alignment.Center,
    modifier = Modifier.fillMaxSize()
) {
    Column(
        modifier = Modifier
            .widthIn(max = 300.dp) // Prevents stretching beyond 300dp
            .fillMaxWidth()        // Fills width up to 300dp
            .padding(16.dp)
    ) {
        // Your content
    }
}


If a user opens your app in landscape orientation on a foldable or tablet, action buttons like Save or Login at the bottom of the screen may be rendered offscreen. If the container is not scrollable, the user can be blocked from proceeding. In Jetpack Compose, you can add a verticalScroll modifier to your component:

Column(
    modifier = Modifier
        .fillMaxSize()
        .verticalScroll(rememberScrollState())
        .padding(16.dp)
) {
    // Your content
}


By combining max-width constraints with vertical scrolling, you ensure your app remains functional and usable, regardless of how wide or short the app window size becomes.


See our guide on building adaptive layouts.


Preserve state with configuration changes

Removing orientation and aspect ratio restrictions means your app’s window size will change much more frequently. Users may rotate their device, fold/unfold it, or resize your app dynamically in split-screen or desktop windowing modes.

By default, these configuration changes destroy and recreate your activity. If your app does not properly manage this lifecycle event, users will have a frustrating experience: scroll positions are reset to the top, half-filled forms are wiped clean, and navigation history is lost. To ensure a seamless adaptive experience, it’s critical that your app preserves state through these configuration changes. With Jetpack Compose, you can opt out of recreation and instead allow window size changes to recompose your UI to reflect the new amount of space available.

See our guide on saving UI state.
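As a small illustration, rememberSaveable keeps simple UI state, such as a text field’s contents, across recreation (a minimal sketch):

```kotlin
@Composable
fun NameField() {
    // Survives rotation, folding, and window resizes that recreate the activity
    var name by rememberSaveable { mutableStateOf("") }
    TextField(value = name, onValueChange = { name = it })
}
```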

Targeting API level 37 by August 2027

If your app previously opted out of these changes when targeting API level 36, your app will only be impacted by the Android 17 opt-out removal after your app targets API level 37. To help you plan ahead and make the necessary adjustments to your app, here’s the timeline when these changes will take effect:


  • Android 17: Changes described above will be the baseline experience for large screen devices (smallest screen width ≥ 600 dp) for apps that target API level 37. Developers will not have an option to opt out.


The deadlines for targeting a specific API level are app-store specific. For Google Play, new apps and updates will be required to target API level 37, making this behavior mandatory for distribution in August 2027.

Preparing for Android 17

Refer to the Android 17 changes page for all changes impacting apps in Android 17. To test your app, download Android 17 Beta 1 and update to targetSdkPreview = "CinnamonBun" or use the app compatibility framework to enable specific changes.

The future of Android is adaptive, and we’re here to help you get there. As you prepare for Android 17, we encourage you to review our guides for building adaptive layouts and our large screen quality guidelines. These resources are designed to help you handle multiple form factors and window sizes with confidence.

Don’t wait. Start getting ready for Android 17 today!

The post Prepare your app for the resizability and orientation changes in Android 17 appeared first on InShot Pro.

]]>
The First Beta of Android 17 https://theinshotproapk.com/the-first-beta-of-android-17/ Fri, 13 Feb 2026 19:23:00 +0000 https://theinshotproapk.com/the-first-beta-of-android-17/ Posted by Matthew McCullough, VP of Product Management, Android Developer Today we’re releasing the first beta of Android 17, continuing our ...

Read more

The post The First Beta of Android 17 appeared first on InShot Pro.

]]>

Posted by Matthew McCullough, VP of Product Management, Android Developer

Today we’re releasing the first beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This build continues our work for more adaptable Android apps, introduces significant enhancements to camera and media capabilities, new tools for optimizing connectivity, and expanded profiles for companion devices. This release also highlights a fundamental shift in the way we’re bringing new releases to the developer community, from the traditional Developer Preview model to the Android Canary program.

Beyond the Developer Preview

Android has replaced the traditional “Developer Preview” with a continuous Canary channel. This new “always-on” model offers three main benefits:

  • Faster Access: Features and APIs land in Canary as soon as they pass internal testing, rather than waiting for a quarterly release.
  • Better Stability: Early “battle-testing” in Canary results in a more polished Beta experience with new APIs and behavior changes that are closer to being final.
  • Easier Testing: Canary supports OTA updates (no more manual flashing) and, as a separate update channel, more easily integrates with CI workflows and gives you the earliest window to give immediate feedback on upcoming potential changes.

The Android 17 schedule

We’re going to be moving quickly from this Beta to our Platform Stability milestone, targeted for March. At this milestone, we’ll deliver final SDK/NDK APIs and largely final app-facing behaviors. From that time you’ll have several months before the final release to complete your testing.



A year of releases


We plan for Android 17 to continue to get updates in a series of quarterly releases. The upcoming Q2 release is the only one in which we introduce planned app-breaking behavior changes. We plan to have a minor SDK release in Q4 with additional APIs and features.

Orientation and resizability restrictions


With the release of the Android 17 Beta, we’re moving to the next phase of our adaptive roadmap: Android 17 (API level 37) removes the developer opt-out for orientation and resizability restrictions on large screen devices (sw ≥ 600 dp).

When your app targets SDK 37, it must be ready to adapt. Users expect their apps to work everywhere—whether multitasking on a tablet, unfolding a device, or using a desktop windowing environment—and they expect the UI to fill the space and respect their device posture.

Key Changes for SDK 37

Apps targeting Android 17 must ensure compatibility with the phase-out of manifest attributes and runtime APIs introduced in Android 16. When running on a large screen (smaller dimension ≥ 600dp), the following attributes and APIs will be ignored:

  • screenOrientation: ignores portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape

  • setRequestedOrientation(): ignores the same values as screenOrientation

  • resizeableActivity: all values ignored

  • minAspectRatio: all values ignored

  • maxAspectRatio: all values ignored


Exemptions and User Control

These changes are specific to large screens; they do not apply to screens smaller than sw600dp (including traditional slate form factor phones). Additionally, apps categorized as games (based on the android:appCategory flag) are exempt from these restrictions.

It is also important to note that users remain in control: they can explicitly opt in or out of an app’s requested behavior via the system’s aspect ratio settings.

Updates to configuration changes

To improve app compatibility and help minimize interrupted video playback, dropped input, and other disruptive state loss, we are updating the default behavior for Activity recreation. Starting with Android 17, the system will no longer restart activities by default for configuration changes that typically do not require UI recreation: CONFIG_KEYBOARD, CONFIG_KEYBOARD_HIDDEN, CONFIG_NAVIGATION, CONFIG_UI_MODE (when only UI_MODE_TYPE_DESK changes), CONFIG_TOUCHSCREEN, and CONFIG_COLOR_MODE. Instead, running activities will simply receive these updates via onConfigurationChanged(). If your application relies on a full restart to reload resources for these changes, you must now explicitly opt in using the new android:recreateOnConfigChanges manifest attribute. This attribute lets you specify which configuration changes should trigger a complete activity lifecycle (stop, then destroy and creation again), using the related constants mcc and mnc as well as the new keyboard, keyboardHidden, navigation, touchscreen, and colorMode values.
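For activities that handle these updates in place, a sketch of reacting in onConfigurationChanged; the specific check shown is illustrative:

```kotlin
// With Android 17's defaults, these config changes arrive here instead of
// triggering an activity restart; update the UI in place as needed.
override fun onConfigurationChanged(newConfig: Configuration) {
    super.onConfigurationChanged(newConfig)
    if (newConfig.keyboardHidden == Configuration.KEYBOARDHIDDEN_NO) {
        // A hardware keyboard is now available; adjust input affordances
    }
}
```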

Prepare Your App

We’ve released tools and documentation to make it easy for you. Our focused blog post has more guidance, with strategies to address common issues. Apps will need to support landscape and portrait layouts for window sizes across the full range of aspect ratios, as restricting orientation or aspect ratio will no longer be an option. We recommend testing your app using the Android 17 Beta 1 with Pixel Tablet or Pixel Fold emulators (configured to targetSdkPreview = "CinnamonBun") or by using the app compatibility framework to enable UNIVERSAL_RESIZABLE_BY_DEFAULT on Android 16 devices.


Performance

Lock-free MessageQueue


In Android 17, apps targeting SDK 37 or higher will receive a new implementation of android.os.MessageQueue where the implementation is lock-free. The new implementation improves performance and reduces missed frames, but may break clients that reflect on MessageQueue private fields and methods.

Generational garbage collection


Android 17 introduces generational garbage collection to ART’s Concurrent Mark-Compact collector. This optimization introduces more frequent, less resource-intensive young-generation collections alongside full-heap collections, aiming to reduce overall garbage collection CPU cost and duration. ART improvements are also available to over a billion devices running Android 12 (API level 31) and higher through Google Play System updates.

Static final fields now truly final

Starting with Android 17, apps targeting Android 17 or later won’t be able to modify “static final” fields, allowing the runtime to apply performance optimizations more aggressively. An attempt to do so via reflection (including deep reflection) will always throw IllegalAccessException. Modifying them via JNI’s SetStatic<Type>Field family of methods will immediately crash the application.
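To illustrate, a sketch of the kind of reflective write that now fails; Holder is a hypothetical class for illustration, and exact behavior on older runtimes may differ:

```kotlin
object Holder {
    const val MAX = 10 // compiles to a static final field
}

fun tryToMutate() {
    val field = Holder::class.java.getDeclaredField("MAX")
    field.isAccessible = true
    try {
        field.setInt(null, 99)
    } catch (e: IllegalAccessException) {
        // For apps targeting API level 37+, writes to static final fields land here
    }
}
```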

Custom Notification View Restrictions

To reduce memory usage we are restricting the size of custom notification views. This update closes a loophole that allows apps to bypass existing limits using URIs. This behavior is gated by the target SDK version and takes effect for apps targeting API 37 and higher.

New performance debugging ProfilingManager triggers

We’ve introduced several new system triggers to ProfilingManager to help you collect in-depth data to debug performance issues. These triggers are TRIGGER_TYPE_COLD_START, TRIGGER_TYPE_OOM, and TRIGGER_TYPE_KILL_EXCESSIVE_CPU_USAGE.

To understand how to set up the new system triggers, check out the trigger-based profiling and retrieve and analyze profiling data documentation.

Media and Camera

Android 17 brings professional-grade tools to media and camera apps, with features like seamless transitions and standardized loudness.

Dynamic Camera Session Updates


We have introduced updateOutputConfigurations() to CameraCaptureSession. This allows you to dynamically attach and detach output surfaces without the need to reconfigure the entire camera capture session. This change enables seamless transitions between camera use cases and modes (such as shooting still images vs shooting videos) without the memory cost and code complexity of configuring and holding onto all camera output surfaces that your app might need during camera start up. This helps to eliminate user-visible glitches or freezes during operation.


fun updateCameraSession(
    session: CameraCaptureSession,
    newOutputConfigs: List<OutputConfiguration>
) {
    // Dynamically update the session without closing and reopening
    try {
        // Update the output configurations
        session.updateOutputConfigurations(newOutputConfigs)
    } catch (e: CameraAccessException) {
        // Handle error
    }
}

Logical multi-camera device metadata

When working with logical cameras that combine multiple physical camera sensors, you can now request additional metadata from all active physical cameras involved in a capture, not just the primary one. Previously, you had to implement workarounds, sometimes allocating unnecessary physical streams, to obtain metadata from secondary active cameras (e.g., during a lens switch for zoom where a follower camera is active). This feature introduces a new key, LOGICAL_MULTI_CAMERA_ADDITIONAL_RESULTS, in CaptureRequest and CaptureResult. By setting this key to ON in your CaptureRequest, the TotalCaptureResult will include metadata from these additional active physical cameras. You can access this comprehensive metadata using TotalCaptureResult.getPhysicalCameraTotalResults() to get more detailed information that may enable you to optimize resource usage in your camera applications.

Versatile Video Coding (VVC) Support

Android 17 adds support for the Versatile Video Coding (VVC) standard. This includes defining the video/vvc MIME type in MediaFormat, adding new VVC profiles in MediaCodecInfo, and integrating support into MediaExtractor. This feature will be coming to devices with hardware decode support and capable drivers.

Constant Quality for Video Recording

We have added setVideoEncodingQuality() to MediaRecorder. This allows you to configure a constant quality (CQ) mode for video encoders, giving you finer control over video quality beyond simple bitrate settings.

Background Audio Hardening

Starting in Android 17, the audio framework will enforce restrictions on background audio interactions including audio playback, audio focus requests, and volume change APIs to ensure that these changes are started intentionally by the user. 

If the app tries to call audio APIs while it is not in a valid lifecycle state, the audio playback and volume change APIs will fail silently, without an exception thrown or a failure message provided. The audio focus API will fail with the result code AUDIOFOCUS_REQUEST_FAILED.

Privacy and Security

Deprecation of Cleartext Traffic Attribute

The android:usesCleartextTraffic attribute is now deprecated. If your app targets Android 17 (API level 37) or higher and relies on usesCleartextTraffic="true" without a corresponding Network Security Configuration, it will default to disallowing cleartext traffic. You are encouraged to migrate to Network Security Configuration files for granular control.
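As a reference point, here is a minimal Network Security Configuration sketch that keeps cleartext disabled globally while allowing it only for a local test host; the host shown is illustrative:

```xml
<!-- res/xml/network_security_config.xml, referenced from the manifest via
     android:networkSecurityConfig="@xml/network_security_config" -->
<network-security-config>
    <!-- Disallow cleartext everywhere by default -->
    <base-config cleartextTrafficPermitted="false" />
    <!-- Allow cleartext only for the emulator's host loopback during local testing -->
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">10.0.2.2</domain>
    </domain-config>
</network-security-config>
```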

HPKE Hybrid Cryptography

We are introducing a public Service Provider Interface (SPI) for an implementation of HPKE hybrid cryptography, enabling secure communication using a combination of public key and symmetric encryption (AEAD).

Connectivity and Telecom

Enhanced VoIP Call History

We are introducing user preference management for app VoIP call history integration. This includes support for caller and participant avatar URIs in the system dialer, enabling granular user control over call log privacy and enriching the visual display of integrated VoIP call logs.

Wi-Fi Ranging and Proximity

Wi-Fi Ranging has been enhanced with new Proximity Detection capabilities, supporting continuous ranging and secure peer-to-peer discovery. Updates to Wi-Fi Aware ranging include new APIs for peer handles and PMKID caching for 11az secure ranging.

Developer Productivity and Tools

Updates for companion device apps

We have introduced two new profiles to the CompanionDeviceManager to improve device distinction and permission handling:

  • Medical Devices: This profile allows medical device mobile applications to request all necessary permissions with a single tap, simplifying the setup process.

  • Fitness Trackers: The DEVICE_PROFILE_FITNESS_TRACKER profile allows companion apps to explicitly indicate they are managing a fitness tracker. This ensures accurate user experiences with distinct icons while reusing existing watch role permissions.

Also, the CompanionDeviceManager now offers a unified dialog for device association and Nearby permission requests. You can leverage the new setExtraPermissions method in AssociationRequest.Builder to bundle nearby permission prompts within the existing association flow, reducing the number of dialogs presented to the user.

Get started with Android 17


You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 1.

If you have Android 26Q1 Beta and would like to take the final stable release of 26Q1 and exit Beta, you need to ignore the over-the-air update to 26Q2 Beta 1 and wait for the release of 26Q1.

We’re looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you’re set up, here are some of the things you should do:

  • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.

  • Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it.

We’ll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information, visit the Android 17 developer site.


Join the conversation

As we move toward Platform Stability and the final stable release of Android 17 later this year, your feedback remains our most valuable asset. Whether you’re an early adopter on the Canary channel or an app developer testing on Beta 1, consider joining our communities and filing feedback. We’re listening.

The post The First Beta of Android 17 appeared first on InShot Pro.

]]>
Trade-in mode on Android 16+ https://theinshotproapk.com/trade-in-mode-on-android-16/ Mon, 02 Feb 2026 12:11:53 +0000 https://theinshotproapk.com/trade-in-mode-on-android-16/ Supporting Longevity through Faster Diagnostics Posted by Rachel S, Android Product Manager Trade-in mode: faster assessment of a factory-reset phone or ...

Read more

The post Trade-in mode on Android 16+ appeared first on InShot Pro.

]]>

Supporting Longevity through Faster Diagnostics

Posted by Rachel S, Android Product Manager

Trade-in mode is a new feature on Android 16 and above that enables faster assessment of a factory-reset phone or tablet by bypassing the setup wizard.

Supporting device longevity

Android is committed to making devices last longer. With device longevity comes device circularity: phones and tablets traded in and resold. GSMA reported that secondhand phones have around 80-90% lower carbon emissions than new phones. The secondhand device market has grown substantially both in volume and value, a trend projected to continue.

Android 16 and above offers an easy way to access device information on any factory-reset phone or tablet through the new tradeinmode adb command. This means you can view quality indicators for a phone or tablet while skipping every setup wizard step. Simply connect the phone or tablet over adb and use tradeinmode commands to get information about the device.

Trade-in mode: What took minutes, now takes seconds

Faster trade-in processing – By bypassing the setup wizard, trade-in mode speeds up device trade-ins. The mode enables immediate access to the ‘health’ of a device, helping everyone along the secondhand value chain check the quality of wiped devices. We’ve already seen significant increases in processing secondhand Android devices! 


Secure evaluation – To ensure the device information is only accessed in secure situations, the device must 1) be factory reset, 2) not have cellular service, 3) not have connectivity or a connected account, and 4) be running a non-debuggable build.

Get device health information with one command – You can view all of the device information below by running adb shell tradeinmode getstatus from your workstation, skipping the setup wizard: 

  • Device information 

    • Device IMEI(s) 

    • Device serial number 

    • Device model, e.g., Pixel 9

    • Device brand, e.g., Google

    • Device manufacturer, e.g., Google

    • Device name, e.g., tokay

    • API level to ensure correct OS version, e.g., launch_level: 34

  • Battery health 

    • Cycle count

    • Health

    • State, e.g., unknown, good, overheat, dead, over_voltage, unspecified_failure, cold, fair, not_available, inconsistent

    • Battery manufacturing date 

    • Date first used 

    • Serial number (to help provide indication of genuine parts, if OEM supported)

    • Part status, e.g., replaced, original, unsupported

  • Storage 

    • Useful lifetime remaining 

    • Total capacity 

  • Screen Part status, e.g., replaced, original, unsupported

  • Foldables (number of times the device has been folded and total fold lifespan) 

  • Moisture intrusion 

  • UICC information, i.e., indication of whether there is an eSIM or removable SIM, and the microchip ID for the SIM slot

  • Camera count and location, e.g., 3 cameras on front and 2 on back

  • Lock detection for select device locks

  • And the list keeps growing! Stay up to date here


Run your own tests – Trade-in mode enables you to run your own diagnostic commands or applications by entering the evaluation flow using tradeinmode evaluate. The device will automatically factory reset on reboot after evaluation mode to ensure nothing remains on the device. 

Ensure the device is running an approved build – Further, when connected to the internet, a single command, tradeinmode getstatus --challenge CHALLENGE, tests the device’s operating system (OS) authenticity. If the build passes the test, you can be sure the diagnostics results are coming from a trusted OS. 

There’s more – You can use commands to factory reset, power off, reboot, reboot directly into trade-in mode, check if trade-in mode is active, revert to the previous mode, and pause tests until system services are ready. 
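Putting the commands above together, a typical workstation session might look like the following; CHALLENGE is a placeholder you supply, and the output format varies by device:

```shell
# Device must be factory reset, offline, and on a non-debuggable build
adb shell tradeinmode getstatus                          # dump device and health info
adb shell tradeinmode evaluate                           # enter the evaluation flow for your own diagnostics
adb shell tradeinmode getstatus --challenge CHALLENGE    # verify the OS is a trusted build (requires internet)
```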


Want to try it? Learn more about the developer steps and commands


The post Trade-in mode on Android 16+ appeared first on InShot Pro.

]]>
The Embedded Photo Picker https://theinshotproapk.com/the-embedded-photo-picker/ Mon, 02 Feb 2026 12:04:21 +0000 https://theinshotproapk.com/the-embedded-photo-picker/ Posted by Roxanna Aliabadi Walker, Product Manager and Yacine Rezgui, Developer Relations Engineer The Embedded Photo Picker: A more seamless ...

Read more

The post The Embedded Photo Picker appeared first on InShot Pro.

]]>

Posted by Roxanna Aliabadi Walker, Product Manager and Yacine Rezgui, Developer Relations Engineer

The Embedded Photo Picker: A more seamless way to privately request photos and videos in your app



Get ready to enhance your app’s user experience with an exciting new way to use the Android photo picker! The new embedded photo picker offers a seamless and privacy-focused way for users to select photos and videos, right within your app’s interface. Now your app can get all the same benefits available with the photo picker, including access to cloud content, integrated directly into your app’s experience.

Why embedded?

We understand that many apps want to provide a highly integrated and seamless experience for users when selecting photos or videos. The embedded photo picker is designed to do just that, allowing users to quickly access their recent photos without ever leaving your app. They can also explore their full library in their preferred cloud media provider (e.g., Google Photos), including favorites, albums and search functionality. This eliminates the need for users to switch between apps or worry about whether the photo they want is stored locally or in the cloud.

Seamless integration, enhanced privacy


With the embedded photo picker, your app doesn’t need access to the user’s photos or videos until they actually select something. This means greater privacy for your users and a more streamlined experience. Plus, the embedded photo picker provides users with access to their entire cloud-based media library, whereas the standard photo permission is restricted to local files only.

The embedded photo picker in Google Messages


Google Messages showcases the power of the embedded photo picker. Here’s how they’ve integrated it:
  • Intuitive placement: The photo picker sits right below the camera button, giving users a clear choice between capturing a new photo or selecting an existing one.
  • Dynamic preview: Immediately after a user taps a photo, they see a large preview, making it easy to confirm their selection. If they deselect the photo, the preview disappears, keeping the experience clean and uncluttered.
  • Expand for more content: The initial view is simplified, offering easy access to recent photos. However, users can easily expand the photo picker to browse and choose from all photos and videos in their library, including cloud content from Google Photos.
  • Respecting user choices: The embedded photo picker only grants access to the specific photos or videos the user selects, meaning the app can stop requesting the photo and video permissions altogether. This also saves Google Messages from needing to handle situations where users only grant limited access to photos and videos.

Implementation

Integrating the embedded photo picker is straightforward with the Photo Picker Jetpack library.

Jetpack Compose

First, include the Jetpack Photo Picker library as a dependency.


implementation("androidx.photopicker:photopicker-compose:1.0.0-alpha01")


The EmbeddedPhotoPicker composable function provides a mechanism to include the embedded photo picker UI directly within your Compose screen. This composable creates a SurfaceView which hosts the embedded photo picker UI. It manages the connection to the EmbeddedPhotoPicker service, handles user interactions, and communicates selected media URIs to the calling application. 

@Composable
fun EmbeddedPhotoPickerDemo() {
    // We keep track of the list of selected attachments
    var attachments by remember { mutableStateOf(emptyList<Uri>()) }

    val coroutineScope = rememberCoroutineScope()
    // We hide the bottom sheet by default but we show it when the user clicks on the button
    val scaffoldState = rememberBottomSheetScaffoldState(
        bottomSheetState = rememberStandardBottomSheetState(
            initialValue = SheetValue.Hidden,
            skipHiddenState = false
        )
    )

    // Customize the embedded photo picker
    val photoPickerInfo = EmbeddedPhotoPickerFeatureInfo
        .Builder()
        // Limit the selection to 5 items
        .setMaxSelectionLimit(5)
        // Use ordered selection (each selected item shows its selection index in the picker)
        .setOrderedSelection(true)
        // Set the accent color (red in this case, otherwise it follows the device's accent color)
        .setAccentColor(0xFF0000)
        .build()

    // The embedded photo picker state will be stored in this variable
    val photoPickerState = rememberEmbeddedPhotoPickerState(
        onSelectionComplete = {
            coroutineScope.launch {
                // Hide the bottom sheet once the user has clicked on the done button inside the picker
                scaffoldState.bottomSheetState.hide()
            }
        },
        onUriPermissionGranted = {
            // We update our list of attachments with the new Uris granted
            attachments += it
        },
        onUriPermissionRevoked = {
            // We update our list of attachments with the Uris revoked
            attachments -= it
        }
    )

    SideEffect {
        val isExpanded = scaffoldState.bottomSheetState.targetValue == SheetValue.Expanded

        // We show/hide the embedded photo picker to match the bottom sheet state
        photoPickerState.setCurrentExpanded(isExpanded)
    }

    BottomSheetScaffold(
        topBar = {
            TopAppBar(title = { Text("Embedded Photo Picker demo") })
        },
        scaffoldState = scaffoldState,
        sheetPeekHeight = if (scaffoldState.bottomSheetState.isVisible) 400.dp else 0.dp,
        sheetContent = {
            Column(Modifier.fillMaxWidth()) {
                // We render the embedded photo picker inside the bottom sheet
                EmbeddedPhotoPicker(
                    state = photoPickerState,
                    embeddedPhotoPickerFeatureInfo = photoPickerInfo
                )
            }
        }
    ) { innerPadding ->
        Column(Modifier.padding(innerPadding).fillMaxSize().padding(horizontal = 16.dp)) {
            Button(onClick = {
                coroutineScope.launch {
                    // We partially expand the bottom sheet, which triggers the embedded picker to be shown
                    scaffoldState.bottomSheetState.partialExpand()
                }
            }) {
                Text("Open photo picker")
            }
            LazyVerticalGrid(columns = GridCells.Adaptive(minSize = 64.dp)) {
                // We render the image using the Coil library
                itemsIndexed(attachments) { index, uri ->
                    AsyncImage(
                        model = uri,
                        contentDescription = "Image ${index + 1}",
                        contentScale = ContentScale.Crop,
                        modifier = Modifier.clickable {
                            coroutineScope.launch {
                                // When the user clicks on the media from the app's UI, we deselect it
                                // from the embedded photo picker by calling the method deselectUri
                                photoPickerState.deselectUri(uri)
                            }
                        }
                    )
                }
            }
        }
    }
}

Views

First, include the Jetpack Photo Picker library as a dependency.

implementation("androidx.photopicker:photopicker:1.0.0-alpha01")

To add the embedded photo picker, you need to add an entry to your layout file. 

<view class="androidx.photopicker.EmbeddedPhotoPickerView"
    android:id="@+id/photopicker"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

And initialize it in your activity/fragment.
// We keep track of the list of selected attachments
private val _attachments = MutableStateFlow(emptyList<Uri>())
val attachments = _attachments.asStateFlow()

private lateinit var picker: EmbeddedPhotoPickerView
private var openSession: EmbeddedPhotoPickerSession? = null

val pickerListener = object : EmbeddedPhotoPickerStateChangeListener {
    override fun onSessionOpened(newSession: EmbeddedPhotoPickerSession) {
        openSession = newSession
    }

    override fun onSessionError(throwable: Throwable) {}

    override fun onUriPermissionGranted(uris: List<Uri>) {
        // We update our list of attachments with the newly granted Uris
        _attachments.value += uris
    }

    override fun onUriPermissionRevoked(uris: List<Uri>) {
        // We remove the revoked Uris from our list of attachments
        _attachments.value -= uris
    }

    override fun onSelectionComplete() {
        // Hide the embedded photo picker as the user is done with the photo/video selection
    }
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.main_view)
    
    //
    // Add the embedded photo picker to a bottom sheet to allow the dragging to display the full photo library
    //

    picker = findViewById(R.id.photopicker)
    picker.addEmbeddedPhotoPickerStateChangeListener(pickerListener)
    picker.setEmbeddedPhotoPickerFeatureInfo(
        // Set a custom accent color
        EmbeddedPhotoPickerFeatureInfo.Builder().setAccentColor(0xFF0000).build()
    )
}


You can call different methods of EmbeddedPhotoPickerSession to interact with the embedded picker.
// Notify the embedded picker of a configuration change
openSession?.notifyConfigurationChanged(newConfig)

// Update the embedded picker to expand following a user interaction
openSession?.notifyPhotoPickerExpanded(/* expanded= */ true)

// Resize the embedded picker
openSession?.notifyResized(/* width= */ 512, /* height= */ 256)

// Show/hide the embedded picker (e.g. after a form has been submitted)
openSession?.notifyVisibilityChanged(/* visible= */ false)

// Remove unselected media from the embedded picker after they have been
// unselected from the host app's UI
openSession?.requestRevokeUriPermission(removedUris)

It’s important to note that the embedded photo picker experience is available for users running Android 14 (API level 34) or higher with SDK Extensions 15+. Read more about photo picker device availability.
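As a sketch of gating the feature, the availability rule above can be expressed as a pure function. The helper name is ours, not part of the library; on a device you would pass in Build.VERSION.SDK_INT and SdkExtensions.getExtensionVersion(Build.VERSION_CODES.UPSIDE_DOWN_CAKE):

```kotlin
// Hypothetical helper encoding the requirement above: Android 14 (API 34)
// or higher with SDK Extensions 15+. Kept as a pure function so the gating
// logic is easy to unit test off-device.
fun isEmbeddedPhotoPickerAvailable(sdkInt: Int, extensionVersion: Int): Boolean =
    sdkInt >= 34 && extensionVersion >= 15
```

Apps can use a check like this to hide the embedded picker entry point and fall back to the standard photo picker on unsupported devices.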

For enhanced user privacy and security, the system renders the embedded photo picker in a way that prevents any drawing or overlaying. This intentional design choice means that your UX should consider the photo picker’s display area as a distinct and dedicated element, much like you would plan for an advertising banner.

If you have any feedback or suggestions, submit tickets to our issue tracker.



The post The Embedded Photo Picker appeared first on InShot Pro.

Beyond the smartphone: How JioHotstar optimized its UX for foldables and tablets https://theinshotproapk.com/beyond-the-smartphone-how-jiohotstar-optimized-its-ux-for-foldables-and-tablets/ Mon, 02 Feb 2026 12:04:16 +0000 https://theinshotproapk.com/beyond-the-smartphone-how-jiohotstar-optimized-its-ux-for-foldables-and-tablets/ Posted by Prateek Batra, Developer Relations Engineer, Android Adaptive Apps Beyond Phones: How JioHotstar Built an Adaptive UX JioHotstar is ...

Posted by Prateek Batra, Developer Relations Engineer, Android Adaptive Apps

Beyond Phones: How JioHotstar Built an Adaptive UX
JioHotstar is a leading streaming platform in India, serving a user base exceeding 400 million. With a vast content library encompassing over 330,000 hours of video on demand (VOD) and real-time delivery of major sporting events, the platform operates at a massive scale.

To help ensure a premium experience for its vast audience, JioHotstar elevated the viewing experience by optimizing their app for foldables and tablets. They accomplished this by following Google’s adaptive app guidance and utilizing resources like samples, codelabs, cookbooks, and documentation to help create a consistently seamless and engaging experience across all display sizes.


JioHotstar’s large screen challenge


JioHotstar offered an excellent user experience on standard phones and the team wanted to take advantage of new form factors. To start, the team evaluated their app against the large screen app quality guidelines to understand the optimizations required to extend their user experience to foldables and tablets. To achieve Tier 1 large screen app status, the team implemented two strategic updates to adapt the app across various form factors and differentiate on foldables. By addressing the unique challenges posed by foldable and tablet devices, JioHotstar aims to deliver a high-quality and immersive experience across all display sizes and aspect ratios.


What they needed to do


JioHotstar’s user interface, designed primarily for standard phone displays, encountered challenges in adapting hero image aspect ratios, menus, and show screens to the diverse screen sizes and resolutions of other form factors. This often led to image cropping, letterboxing, low resolution, and unutilized space, particularly in landscape mode. To help fully leverage the capabilities of tablets and foldables and deliver an optimized user experience across these device types, JioHotstar focused on refining the UI to ensure optimal layout flexibility, image rendering, and navigation across a wider range of devices.


What they did


For a better viewing experience on large screens, JioHotstar enhanced its app by incorporating WindowSizeClass and creating optimized layouts for compact, medium, and expanded widths. This allowed the app to adapt its user interface to various screen dimensions and aspect ratios, ensuring a consistent and visually appealing UI across different devices.

JioHotstar followed this pattern, using the Material 3 Adaptive library to determine how much space the app has available: first invoke the currentWindowAdaptiveInfo() function, then select a layout for the matching window size class:


val sizeClass = currentWindowAdaptiveInfo().windowSizeClass

if (sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND)) {
    showExpandedLayout()
} else if (sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND)) {
    showMediumLayout()
} else {
    showCompactLayout()
}

The breakpoints are checked in order from largest to smallest because the API performs a greater-than-or-equal comparison: any width that meets the EXPANDED lower bound also meets the MEDIUM lower bound, so the largest breakpoint must be tested first.
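The same ordering can be seen in plain Kotlin, away from Compose. The function is purely illustrative; 600 and 840 mirror the Material 3 medium and expanded width lower bounds:

```kotlin
// Illustrative only: why breakpoints must be tested from largest to smallest.
const val MEDIUM_LOWER_BOUND_DP = 600
const val EXPANDED_LOWER_BOUND_DP = 840

fun layoutFor(widthDp: Int): String = when {
    widthDp >= EXPANDED_LOWER_BOUND_DP -> "expanded" // must come first
    widthDp >= MEDIUM_LOWER_BOUND_DP -> "medium"     // an 840dp width would also pass this check
    else -> "compact"
}
```

Swapping the first two branches would classify every expanded window as medium, because 840dp also satisfies the >= 600dp check.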


JioHotstar is also able to provide a premium experience unique to foldable devices: tabletop mode. When a foldable device is partially folded, this feature relocates the video player to the top half of the screen and the video controls to the bottom half for a hands-free experience.



To accomplish this, the same currentWindowAdaptiveInfo() function from the Material 3 Adaptive library can be used to query the device posture. When the device is held in tabletop mode, a Column matches the layout to the posture, placing the player in the top half and the controls in the bottom half:

val isTabletop = currentWindowAdaptiveInfo().windowPosture.isTabletop
if (isTabletop) {
   Column {
       Player(Modifier.weight(1f))
       Controls(Modifier.weight(1f))
   }
} else {
   usualPlayerLayout()
}

JioHotstar is now meeting the Large Screen app quality guidelines for Tier 1. The team leveraged adaptive app guidance, utilizing samples, codelabs, cookbooks, and documentation to incorporate these recommendations.


To further improve the user experience, JioHotstar increased touch target sizes to the recommended 48dp on video discovery pages, ensuring accessibility across large screen devices. Their video details page is now adaptive, adjusting to screen sizes and orientations. They moved beyond simple image scaling, instead leveraging window size classes to detect window size and density in real time and load the most appropriate hero image for each form factor, enhancing visual fidelity. Navigation was also improved, with layouts adapting to suit different screen sizes.


Now users can view their favorite content from JioHotstar on large screen devices with an improved and highly optimized viewing experience.

Achieving Tier 1 large screen app status with Google is a milestone that reflects the strength of our shared vision. At JioHotstar, we have always believed that optimizing for large screen devices goes beyond adaptability, it’s about elevating the viewing experience for audiences who are rapidly embracing foldables, tablets, and connected TVs.

Leveraging Google’s Jetpack libraries and guides allowed us to combine our insights on content consumption with their expertise in platform innovation. This collaboration allowed both teams to push boundaries, address gaps, and co-create a seamless, immersive experience across every screen size.

Together, we’re proud to bring this enhanced experience to millions of users and to set new benchmarks in how India and the world experience streaming.

Sonu Sanjeev
Senior Software Development Engineer

The post Beyond the smartphone: How JioHotstar optimized its UX for foldables and tablets appeared first on InShot Pro.

Accelerating your insights with faster, smarter monetization data and recommendations https://theinshotproapk.com/accelerating-your-insights-with-faster-smarter-monetization-data-and-recommendations/ Sun, 01 Feb 2026 12:07:00 +0000 https://theinshotproapk.com/accelerating-your-insights-with-faster-smarter-monetization-data-and-recommendations/ Posted by Phalene Gowling, Product Manager, Google Play To build a thriving business on Google Play, you need more than just ...


Posted by Phalene Gowling, Product Manager, Google Play

To build a thriving business on Google Play, you need more than just data: you need a clear path to action. Today, we’re announcing a suite of upgrades to the Google Play Console and beyond, giving you greater visibility into your financial performance and specific, data-backed steps to improve it.

From new, actionable recommendations to more granular sales reporting, here’s how we’re helping you maximize your ROI.

New: Monetization insights and recommendations
Launch Status: Rolling out today

The Monetize with Play overview page is designed to be your ultimate command center. Today, we are upgrading it with a new dynamic insights section designed to give you a clearer view of your revenue drivers.


This new insights carousel highlights the visible and invisible value Google Play delivers to your bottom line – including recovered revenue. Alongside these insights, you can now track these critical signals alongside your core performance metrics:

  • Optimize conversion: Track your new Cart Conversion Rate.
  • Reduce churn: Track cancelled subscriptions over time.

  • Optimize pricing: Monitor your Average Revenue Per Paying User (ARPPU).

  • Increase buyer reach: Analyze how much of your engaged audience convert to buyers.

But we aren’t just showing you the data – we’re helping you act on it. Starting today, Play Console will surface customized, actionable recommendations. If there are relevant opportunities – for example, a high churn rate – we will suggest specific, high-impact steps to help you reach your next monetization goal. Recommendations include effort levels and estimated ROI (where available), helping you prioritize your roadmap based on actual business value. Learn more.



Granular visibility: Sales Channel reporting
Launch Status: Recently launched

We recently rolled out new Sales Channel data in your financial reporting. This allows you to attribute revenue to specific surfaces – including your app, the Play Store, and platforms like Google Play Games on PC. 

For native-PC game developers and media & entertainment subscription businesses alike, this granularity allows you to calculate the precise ROI of your cross-platform investments and understand exactly which channels are driving your growth. Learn more.



Operational efficiency: The Orders API
Launch Status: Available now

The Orders API provides programmatic access to one-time and recurring order transaction details. If you haven’t integrated it yet, this API allows you to ingest real-time data directly into your internal dashboards for faster reconciliation and improved customer support.

Feedback so far has been overwhelmingly positive:

Level Infinite (Tencent) says the API “works so well that we want every app to use it.”

Continuous improvements towards objective-led reporting 

You’ve told us that the biggest challenge isn’t just accessing data, but connecting the dots across different metrics to see the full picture. We’re enhancing reporting that goes beyond data dumps to provide straightforward, actionable insights that help you reach business objectives faster.

Our goal is to create a more cohesive product experience centered around your objectives. By shifting from static reporting to dynamic, goal-oriented tools, we’re making it easier to track and optimize for revenue, conversion rates, and churn. These updates are just the beginning of a transformation designed to help you turn data into measurable growth.









The post Accelerating your insights with faster, smarter monetization data and recommendations appeared first on InShot Pro.

How Automated Prompt Optimization Unlocks Quality Gains for ML Kit’s GenAI Prompt API https://theinshotproapk.com/how-automated-prompt-optimization-unlocks-quality-gains-for-ml-kits-genai-prompt-api/ Sun, 01 Feb 2026 12:06:39 +0000 https://theinshotproapk.com/how-automated-prompt-optimization-unlocks-quality-gains-for-ml-kits-genai-prompt-api/ Posted by Chetan Tekur, PM at AI Innovation and Research, Chao Zhao, SWE at AI Innovation and Research, Paul Zhou, ...


Posted by Chetan Tekur, PM at AI Innovation and Research, Chao Zhao, SWE at AI Innovation and Research, Paul Zhou, Prompt Quality Lead at GCP Cloud AI and Industry Solutions, and Caren Chang, Developer Relations Engineer at Android


To further help bring your ML Kit Prompt API use cases to production, we are excited to announce Automated Prompt Optimization (APO) targeting On-Device models on Vertex AI. Automated Prompt Optimization is a tool that helps you automatically find the optimal prompt for your use cases.

The era of On-Device AI is no longer a promise—it is a production reality. With the release of Gemini Nano v3, we are placing unprecedented language understanding and multimodal capabilities directly into the palms of users. Through the Gemini Nano family of models, we have wide coverage of supported devices across the Android Ecosystem. But for developers building the next generation of intelligent apps, access to a powerful model is only step one. The real challenge lies in customization: How do you tailor a foundation model to expert-level performance for your specific use case without breaking the constraints of mobile hardware?

In the server-side world, the larger LLMs tend to be highly capable and require less domain adaptation. Even when needed, more advanced options such as LoRA (Low-Rank Adaptation) fine-tuning can be feasible options. However, the unique architecture of Android AICore prioritizes a shared, memory-efficient system model. This means that deploying custom LoRA adapters for every individual app comes with challenges on these shared system services.

But there is an alternate path that can be equally impactful. By leveraging Automated Prompt Optimization (APO) on Vertex AI, developers can achieve quality approaching fine-tuning, all while working seamlessly within the native Android execution environment. By focusing on superior system instruction, APO enables developers to tailor model behavior with greater robustness and scalability than traditional fine-tuning solutions.

Note: Gemini Nano V3 is a quality optimized version of the highly acclaimed Gemma 3N model. Any prompt optimizations that are made on the open source Gemma 3N model will apply to Gemini Nano V3 as well. On supported devices, ML Kit GenAI APIs leverage the nano-v3 model to maximize the quality for Android Developers.


Automated Prompt Optimization (APO)


APO treats the prompt not as a static text, but as a programmable surface that can be optimized. It leverages server-side models (like Gemini Pro and Flash) to propose prompts, evaluate variations and find the optimal one for your specific task. This process employs three specific technical mechanisms to maximize performance:

  1. Automated Error Analysis: APO analyzes error patterns from training data to automatically identify specific weaknesses in the initial prompt.

  2. Semantic Instruction Distillation: It analyzes a large set of training examples to distill the “true intent” of a task, creating instructions that more accurately reflect the real data distribution.

  3. Parallel Candidate Testing: Instead of testing one idea at a time, APO generates and tests numerous prompt candidates in parallel to identify the global maximum for quality.
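As a toy sketch of the third mechanism (plain Kotlin, entirely illustrative — a real APO run scores candidates with server-side model evaluations, not exact string matches):

```kotlin
// Toy parallel candidate testing: score every prompt variant against labeled
// examples and keep the best. Exact-match accuracy stands in for a real
// evaluation metric here.
data class Example(val input: String, val expected: String)

fun accuracy(candidate: (String) -> String, examples: List<Example>): Double =
    examples.count { candidate(it.input) == it.expected }.toDouble() / examples.size

fun bestCandidate(
    candidates: Map<String, (String) -> String>,
    examples: List<Example>
): String = candidates.entries.maxByOrNull { accuracy(it.value, examples) }!!.key
```

In the real system the candidates are generated and scored in parallel by server-side models; the selection step — keep the highest-scoring instruction — is the same idea.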


Why APO Can Approach Fine Tuning Quality

It is a common misconception that fine-tuning always yields better quality than prompting. For modern foundation models like Gemini Nano v3, prompt engineering can be impactful by itself:

  • Preserving general capabilities: Fine-tuning (PEFT/LoRA) forces a model’s weights to over-index on a specific distribution of data. This often leads to “catastrophic forgetting,” where the model gets better at your specific syntax but worse at general logic and safety. APO leaves the weights untouched, preserving the capabilities of the base model.

  • Instruction Following & Strategy Discovery: Gemini Nano v3 has been rigorously trained to follow complex system instructions. APO exploits this by finding the exact instruction structure that unlocks the model’s latent capabilities, often discovering strategies that might be hard for human engineers to find. 

To validate this approach, we evaluated APO across diverse production workloads. Our validation has shown consistent 5-8% accuracy gains across various use cases. Across multiple deployed on-device features, APO provided significant quality lifts:

Use Case              | Task Type           | Task Description                                                  | Metric   | APO Improvement
Topic classification  | Text classification | Classify a news article into topics such as finance, sports, etc | Accuracy | +5%
Intent classification | Text classification | Classify a customer service query into intents                    | Accuracy | +8.0%
Webpage translation   | Text translation    | Translate a webpage from English to a local language              | BLEU     | +8.57%


Conclusion

The release of Automated Prompt Optimization (APO) marks a turning point for on-device generative AI. By bridging the gap between foundation models and expert-level performance, we are giving developers the tools to build more robust mobile applications. Whether you are just starting with Zero-Shot Optimization or scaling to production with Data-Driven refinement, the path to high-quality on-device intelligence is now clearer. Launch your on-device use cases to production today with ML Kit’s Prompt API and Vertex AI’s Automated Prompt Optimization. 


The post How Automated Prompt Optimization Unlocks Quality Gains for ML Kit’s GenAI Prompt API appeared first on InShot Pro.

Ready to review some changes but not others? Try using Play Console’s new Save for later feature https://theinshotproapk.com/ready-to-review-some-changes-but-not-others-try-using-play-consoles-new-save-for-later-feature/ Wed, 21 Jan 2026 17:00:00 +0000 https://theinshotproapk.com/ready-to-review-some-changes-but-not-others-try-using-play-consoles-new-save-for-later-feature/ Posted by Georgia Doyle, Senior UX Writer and Content Designer, and Kanu Tibrewal, Software Engineer We’ve launched a new Save ...



Posted by Georgia Doyle, Senior UX Writer and Content Designer, and Kanu Tibrewal, Software Engineer



We’ve launched a new Save for later feature on Google Play Console’s Publishing overview to give you more control over when you send changes for review. 


In the past, changes to your app were bundled together before being sent for review. This presented challenges if you needed to reprioritize changes, or if the changes were no longer relevant: for example, updates to your test tracks might be grouped with marketing changes that need to be rescheduled. This lack of flexibility meant that if some changes were ready for review but others weren’t, you could end up delaying urgent fixes, or publishing changes that you weren’t quite ready to make.

Now, you have the ability to hold back the changes you’re not ready to have reviewed.

How it works

In the ‘Changes not yet sent for review’ section of the Publishing overview page, select ‘Save for later’ on the groups of changes that you don’t want to include in your next review. You can view and edit the list of saved changes, and return them to the Publishing overview if you change your mind. Once the review has started, your saved changes will be added back to ‘Changes not yet sent for review’.

Integration with our pre-review checks


Save for later also works with our pre-review checks. Pre-review checks look for issues in your changes that may prevent your app from being published, so that you can fix them before you send changes for review. If checks find issues with your app, there are two ways you can proceed:

  • If issues are isolated to an individual track, we’ll show you an error beside that change, so you know what to save for later in order to proceed to review with your other changes.
  • If you have issues that affect your whole app, for example, App content issues, Save for later will be unavailable and you will need to fix them before you can send any changes for review.

Greater flexibility in your workflows

Our goal for Save for later is to give you greater flexibility over your release schedule. With this feature you can manage what changes you send for review, and address issues affecting individual tracks without holding up ready-to-release changes, so you can iterate faster and minimize the impact of rejections on your release timeline.

So, what’s next?

We’re committed to continuously improving your publishing experience. Save for later is a significant step towards providing you with more granular control over this all-important stage in the journey to publishing your app. We’ll continue to gather your feedback and look at ways we can provide greater flexibility to the review and publishing process.

We’re excited to see how Save for later helps you to streamline your release process and bring your app innovations to users even faster.

The post Ready to review some changes but not others? Try using Play Console’s new Save for later feature appeared first on InShot Pro.
