
Posted by Fahd Imtiaz – Product Manager, Android Developer




Goodbye Mobile Only, Hello Adaptive: Three essential updates from 2025 for building adaptive apps


In 2025 the Android ecosystem has grown far beyond the phone. Today, developers have the opportunity to reach over 500 million active devices, including foldables, tablets, XR, Chromebooks, and compatible cars.


These aren’t just additional screens; they represent a higher-value audience. We’ve seen that users who own both a phone and a tablet spend 9x more on apps and in-app purchases than those with just a phone. For foldable users, that average spend jumps to roughly 14x more*.


This engagement signals a necessary shift in development: goodbye mobile apps, hello adaptive apps.



To help you build for that future, we spent this year releasing tools that make adaptive the default way to build. Here are three key updates from 2025 designed to help you build these experiences.


Standardizing adaptive behavior with Android 16


To support this shift, Android 16 introduced significant changes to how apps can restrict orientation and resizability. On displays at least 600dp wide, manifest and runtime restrictions are ignored, meaning apps can no longer lock themselves to a specific orientation or size. Instead, they fill the entire display window, ensuring your UI scales seamlessly across portrait and landscape modes.


Because this means your app context will change more frequently, it's important to verify that you are preserving UI state during configuration changes. While Android 16 offers a temporary opt-out to help you manage this transition, Android 17 (SDK 37) will make this behavior mandatory. To ensure your app behaves as expected under these new conditions, use the resizable emulator in Android Studio to test your adaptive layouts today.
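One low-effort way to get ahead of this is to back critical UI state with rememberSaveable. The sketch below is illustrative rather than taken from the post, and assumes a Compose UI.

import androidx.compose.foundation.text.BasicTextField
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.saveable.rememberSaveable
import androidx.compose.runtime.setValue

// State held with rememberSaveable survives the configuration changes (rotation,
// resize, fold/unfold) that become more frequent under Android 16, whereas state
// held with plain remember {} would be recreated.
@Composable
fun SearchField() {
    var query by rememberSaveable { mutableStateOf("") }
    BasicTextField(value = query, onValueChange = { query = it })
}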

Supporting screens beyond the tablet with Jetpack WindowManager 1.5.0

As devices evolve, our existing definitions of “large” need to evolve with them. In October, we released Jetpack WindowManager 1.5.0 to better support the growing number of very large screens and desktop environments.


On these surfaces, the standard “Expanded” layout, which usually fits two panes comfortably, often isn’t enough. On a 27-inch monitor, two panes can look stretched and sparse, leaving valuable screen real estate unused. To solve this, WindowManager 1.5.0 introduced two new width window size classes: Large (1200dp to 1600dp) and Extra-large (1600dp+).



These new breakpoints signal when to switch to high-density interfaces. Instead of stretching a typical list-detail view, you can take advantage of the width to show three or even four panes simultaneously. Imagine an email client that comfortably displays your folders, the inbox list, the open message, and a calendar sidebar, all in a single view. Support for these window size classes was added to Compose Material 3 adaptive in the 1.2 release.
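To make the breakpoints concrete, here is an illustrative sketch of how pane count might be derived from window width. The dp thresholds come from the text above (plus the common 840dp Expanded boundary), and the helper itself is hypothetical rather than part of the WindowManager API.

// Illustrative only: in production you would read these thresholds from the
// WindowSizeClass APIs in Jetpack WindowManager 1.5.0 / Compose Material 3 adaptive.
fun paneCountFor(windowWidthDp: Int): Int = when {
    windowWidthDp >= 1600 -> 4 // Extra-large: folders, inbox, message, calendar
    windowWidthDp >= 1200 -> 3 // Large: three-pane layouts
    windowWidthDp >= 840 -> 2  // Expanded: classic list-detail
    else -> 1                  // Compact/Medium: single pane
}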


Rethinking user journeys with Jetpack Navigation 3


Building a UI that morphs from a single phone screen to a multi-pane tablet layout used to require complex state management.  This often meant forcing a navigation graph designed for single destinations to handle simultaneous views. First announced at I/O 2025, Jetpack Navigation 3 is now stable, introducing a new approach to handling user journeys in adaptive apps.


Built for Compose, Nav3 moves away from the monolithic graph structure. Instead, it provides decoupled building blocks that give you full control over your back stack and state. This solves the single source of truth challenge common in split-pane layouts. Because Nav3 uses the Scenes API, you can display multiple panes simultaneously without managing conflicting back stacks, simplifying the transition between compact and expanded views.
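For a feel of the API shape, here is a minimal sketch based on the Navigation 3 announcement; the package names and the NavDisplay, entryProvider, and entry signatures shown are assumptions drawn from the pre-stable API and may differ slightly in the stable release.

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.mutableStateListOf
import androidx.compose.runtime.remember
import androidx.navigation3.runtime.entry
import androidx.navigation3.runtime.entryProvider
import androidx.navigation3.ui.NavDisplay

// Keys are plain classes you define; the back stack is snapshot state you own,
// which is what lets a Scene show several entries side by side in wide windows.
data object Inbox
data class Message(val id: String)

@Composable
fun MailNavHost() {
    val backStack = remember { mutableStateListOf<Any>(Inbox) }
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            entry<Inbox> { Text("Inbox") }                      // list pane
            entry<Message> { key -> Text("Message ${key.id}") } // detail pane
        }
    )
}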


A foundation for an adaptive future



This year delivered the tools you need, from optimizing for expansive layouts to the granular controls of WindowManager and Navigation 3. And Android 16 began the shift toward truly flexible UI, with updates coming next year to deliver excellent adaptive experiences across all form factors. To learn more about adaptive development principles and get started, head over to d.android.com/adaptive-apps.


The tools are ready, and the users are waiting. We can’t wait to see what you build!


*Source: internal Google data


Deeper Performance Considerations

Posted by Ben Weiss – Senior Developer Relations Engineer, Breana Tate – Developer Relations Engineer, and Jossi Wolf – Software Engineer on Compose

Compose yourselves and let us guide you through more background on performance.

Welcome to day 3 of Performance Spotlight Week. Today we're continuing to share details and guidance on important areas of app performance. We're covering Profile Guided Optimization, Jetpack Compose performance improvements, and considerations for working behind the scenes. Let's dive right in.

Profile Guided Optimization

Baseline Profiles and Startup Profiles are foundational to improving an Android app's startup and runtime performance. They are part of a group of performance optimizations called Profile Guided Optimization.

When an app is packaged, the d8 dexer takes classes and methods and populates your app's classes.dex files. When a user opens the app, these dex files are loaded one after the other until the app can start. By providing a Startup Profile you let d8 know which classes and methods to pack into the first classes.dex files. This structure allows the app to load fewer files, which in turn improves startup speed.

Baseline Profiles effectively move the Just in Time (JIT) compilation steps away from user devices and onto developer machines. The generated Ahead Of Time (AOT) compiled code has proven to reduce startup time and rendering issues alike.

Trello and Baseline Profiles

We asked engineers on the Trello app how Baseline Profiles affected their app's performance. After applying Baseline Profiles to their main user journey, Trello saw a significant 25% reduction in app startup time.

Trello was able to improve their app's startup time by 25% by using Baseline Profiles.

Baseline Profiles at Meta

Engineers at Meta also recently published an article on how they are accelerating their Android apps with Baseline Profiles. Across Meta's apps, the teams have seen various critical metrics improve by up to 40% after applying Baseline Profiles.

Technical improvements like these help you improve user satisfaction and business success as well. Sharing this with your product owners, CTOs, and decision makers can also help speed up your app's performance.

Get started with Baseline Profiles

To generate either a Baseline or Startup Profile, you write a macrobenchmark test that exercises the app. During the test, profile data is collected, which will be used during app compilation. The tests are written using the new UiAutomator API, which we'll cover tomorrow.

Writing a benchmark like this is straightforward, and you can see the full sample on GitHub.

@Test
fun profileGenerator() {
    rule.collect(
        packageName = TARGET_PACKAGE,
        maxIterations = 15,
        stableIterations = 3,
        includeInStartupProfile = true
    ) {
        uiAutomator {
            startApp(TARGET_PACKAGE)
        }
    }
}
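Once generated, the profile has to be packaged with your app so devices can compile it at install time. A minimal sketch of that wiring, assuming a :baselineprofile test module created with the Baseline Profile module wizard (module name and versions are illustrative, not from the original post):

// app/build.gradle.kts
plugins {
    id("com.android.application")
    id("androidx.baselineprofile")
}

dependencies {
    // Pulls the profile generated by the benchmark module into release builds.
    baselineProfile(project(":baselineprofile"))
    // ProfileInstaller lets devices AOT-compile the bundled profile on install.
    implementation("androidx.profileinstaller:profileinstaller:1.4.1")
}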

Considerations

Start by writing a macrobenchmark test that generates both a Baseline Profile and a Startup Profile for the path most traveled by your users. This means the main entry point your users take into your app, which usually is right after they have logged in. Then continue writing more test cases to capture a more complete picture, but only for Baseline Profiles. You do not need to cover everything with a Baseline Profile. Stick to the most used paths and measure performance in the field. More on that in tomorrow's post.

Get started with Profile Guided Optimization

To learn how Baseline Profiles work under the hood, watch this video from the Android Developers Summit:

And check out the Android Build Time episode on Profile Guided Optimization for another in-depth look:

We also have extensive guidance on Baseline Profiles and Startup Profiles available for further reading.

Jetpack Compose performance improvements

The UI framework for Android has seen the performance investment of the engineering team pay off. Since version 1.9 of Jetpack Compose, scroll jank has dropped to 0.2% during an internal long-scrolling benchmark test.

These improvements were made possible by several features packed into the most recent releases.

Customizable cache window

By default, lazy layouts only compose one item ahead of time in the direction of scrolling, and after something scrolls off screen it is discarded. You can now customize the number of items to retain through a fraction of the viewport or a dp size. This helps your app perform more work upfront and, with pausable composition enabled, use the time available between frames more efficiently.

To start using customizable cache windows, instantiate a LazyLayoutCacheWindow and pass it to your lazy list or lazy grid. Measure your app's performance using different cache window sizes, for example 50% of the viewport. The optimal value will depend on your content's structure and item size.

val dpCacheWindow = LazyLayoutCacheWindow(ahead = 150.dp, behind = 100.dp)
val state = rememberLazyListState(cacheWindow = dpCacheWindow)
LazyColumn(state = state) {
    // column contents
}

Pausable composition

This feature allows compositions to be paused and their work split up over several frames. The APIs landed in 1.9, and it is now used by default in 1.10 for lazy layout prefetch. You should see the most benefit with complex items that have longer composition times.


More Compose performance optimizations

In versions 1.9 and 1.10 of Compose, the team also made several optimizations that are a bit less obvious.

Several APIs that use coroutines under the hood have been improved. For example, when using Draggable and Clickable, developers should see faster reaction times and improved allocation counts.

Optimizations in layout rectangle tracking have improved the performance of Modifiers like onVisibilityChanged() and onLayoutRectChanged(). This speeds up the layout phase, even when not explicitly using these APIs.

Another performance improvement is the use of cached values when observing positions via onPlaced().

Prefetch text in the background

Starting with version 1.9, Compose adds the ability to prefetch text on a background thread. This lets you pre-warm caches for faster text layout, which is relevant for app rendering performance. During layout, text has to be passed into the Android framework, where a word cache is populated. By default this runs on the UI thread. Offloading prefetching and populating the word cache onto a background thread can speed up layout, especially for longer texts. To prefetch on a background thread, pass a custom executor to any composable that uses BasicText under the hood by providing a LocalBackgroundTextMeasurementExecutor to a CompositionLocalProvider like so.

val defaultTextMeasurementExecutor = Executors.newSingleThreadExecutor()

CompositionLocalProvider(
    LocalBackgroundTextMeasurementExecutor provides defaultTextMeasurementExecutor
) {
    BasicText("Some text that should be measured on a background thread!")
}

Depending on the text, this can provide a performance boost to your text rendering. To make sure that it improves your app's rendering performance, benchmark and compare the results.

Background work performance considerations

Background work is an essential part of many apps. You may be using libraries like WorkManager or JobScheduler to perform tasks like:

  • Periodically uploading analytics events
  • Syncing data between a backend service and a database
  • Processing media (e.g. resizing or compressing images)

A key challenge while executing these tasks is balancing performance and power efficiency. WorkManager allows you to achieve this balance. It's designed to be power-efficient and allows work to be deferred to an optimal execution window influenced by a number of factors, including constraints you specify or constraints imposed by the system.
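As an illustrative sketch (the worker name and schedule are hypothetical, not from the original post), this is what a deferrable, constraint-driven task looks like with WorkManager:

import android.content.Context
import androidx.work.Constraints
import androidx.work.CoroutineWorker
import androidx.work.ExistingPeriodicWorkPolicy
import androidx.work.NetworkType
import androidx.work.PeriodicWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.WorkerParameters
import java.util.concurrent.TimeUnit

// Hypothetical worker that uploads queued analytics events.
class UploadEventsWorker(context: Context, params: WorkerParameters) :
    CoroutineWorker(context, params) {
    override suspend fun doWork(): Result {
        // uploadPendingEvents()  // your app's upload logic goes here
        return Result.success()
    }
}

// Constraint-driven scheduling: WorkManager defers this into an optimal
// execution window instead of waking the device on an exact schedule.
fun scheduleEventUpload(context: Context) {
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED) // only on unmetered networks
        .setRequiresBatteryNotLow(true)
        .build()

    val request = PeriodicWorkRequestBuilder<UploadEventsWorker>(6, TimeUnit.HOURS)
        .setConstraints(constraints)
        .build()

    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "upload-events", ExistingPeriodicWorkPolicy.KEEP, request
    )
}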

WorkManager is not a one-size-fits-all solution, though. Android also has a number of power-optimized APIs that are designed specifically with certain common Core User Journeys (CUJs) in mind. Reference the Background Work landing page for a list of just a few of these, including updating a widget and getting location in the background.

Local debugging tools for Background Work: common scenarios

To debug background work and understand why a task may have been delayed or failed, you need visibility into how the system has scheduled your tasks.

To help with this, WorkManager has several related tools to help you debug locally and optimize performance (some of these work for JobScheduler as well). Here are some common scenarios you might encounter when using WorkManager, and the tools you can use to debug them.

Debugging why scheduled work is not executing

Scheduled work being delayed or not executing at all can be due to a number of factors, including specified constraints not being met or constraints having been imposed by the system.

The first step in investigating why scheduled work is not running is to confirm the work was successfully scheduled. After confirming the scheduling status, determine whether there are any unmet constraints or preconditions preventing the work from executing.

There are several tools for debugging this scenario.

Background Task Inspector

The Background Task Inspector is a powerful tool integrated directly into Android Studio. It provides a visual representation of all WorkManager tasks and their associated states (Running, Enqueued, Failed, Succeeded).

To debug why scheduled work is not executing with the Background Task Inspector, consult the listed work status(es). An 'Enqueued' status indicates your work was scheduled but is still waiting to run.

Benefits: Aside from providing an easy way to view all tasks, this tool is especially useful if you have chained work. The Background Task Inspector offers a graph view that can visualize whether a previous task's failure may have impacted the execution of the following task.

Background Task Inspector list view

Background Task Inspector graph view

adb shell dumpsys jobscheduler

This command returns a list of all active JobScheduler jobs (which includes WorkManager Workers) along with specified constraints and system-imposed constraints. It also returns job history.

Use this if you want a different way to view your scheduled work and associated constraints. For versions earlier than WorkManager 2.10.0, adb shell dumpsys jobscheduler will return a list of Workers with this name:

[package name]/androidx.work.impl.background.systemjob.SystemJobService

If your app has multiple workers, updating to WorkManager 2.10.0 will allow you to see Worker names and easily distinguish between workers:

#WorkerName#@[package name]/androidx.work.impl.background.systemjob.SystemJobService

Benefits: This command is useful for understanding if there were any system-imposed constraints, which you cannot determine with the Background Task Inspector. For example, it will return your app's standby bucket, which can affect the window in which scheduled work completes.

Enable debug logging

You can enable custom logging to see verbose WorkManager logs, which are tagged with the WM- prefix.

Benefits: This allows you to gain visibility into when work is scheduled, when constraints are fulfilled, and into lifecycle events, and you can consult these logs while developing your app.
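As a rough sketch (the class name is hypothetical), verbose logging is enabled through a custom WorkManager Configuration; this assumes you switch to on-demand initialization by removing the default initializer from the manifest, as described in the WorkManager documentation.

import android.app.Application
import android.util.Log
import androidx.work.Configuration

// Hypothetical Application class: supplies a custom Configuration so WorkManager
// emits verbose, WM- prefixed logs during development builds (WorkManager 2.8+).
class DebugLoggingApplication : Application(), Configuration.Provider {
    override val workManagerConfiguration: Configuration
        get() = Configuration.Builder()
            .setMinimumLoggingLevel(Log.DEBUG)
            .build()
}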

WorkInfo.StopReason

If you notice unpredictable performance with a specific worker, you can programmatically observe the reason your worker was stopped on the previous run attempt with WorkInfo.getStopReason.

It's a good practice to configure your app to observe WorkInfo using getWorkInfoByIdFlow to identify whether your work is being affected by background restrictions, constraints, frequent timeouts, or even stopped by the user.

Benefits: You can use WorkInfo.StopReason to collect field data about your workers' performance.
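A minimal sketch of that pattern (the work ID handling and log tag here are hypothetical):

import android.content.Context
import android.util.Log
import androidx.work.WorkInfo
import androidx.work.WorkManager
import java.util.UUID

// Observe a worker's WorkInfo and record why its previous run attempt stopped.
suspend fun observeStopReason(context: Context, workId: UUID) {
    WorkManager.getInstance(context)
        .getWorkInfoByIdFlow(workId)
        .collect { info ->
            val stopReason = info?.stopReason ?: return@collect
            if (stopReason != WorkInfo.STOP_REASON_NOT_STOPPED) {
                // Feed this into your field telemetry to spot background
                // restrictions, timeouts, or user-initiated stops.
                Log.d("WorkerStopReason", "Previous run stopped with reason $stopReason")
            }
        }
}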

Debugging WorkManager-attributed high wake lock duration flagged by Android vitals

Android vitals features an excessive partial wake locks metric, which highlights wake locks contributing to battery drain. You may be surprised to know that WorkManager acquires wake locks to execute tasks, and if those wake locks exceed the threshold set by Google Play, this can affect your app's visibility. How can you debug why so much wake lock duration is attributed to your work? You can use the following tools.

Android vitals dashboard

First confirm in the Android vitals excessive wake lock dashboard that the high wake lock duration is from WorkManager and not an alarm or other wake lock. You can use the Identify wake locks created by other APIs documentation to understand which wake locks are held due to WorkManager.

Perfetto

Perfetto is a tool for analyzing system traces. When using it for debugging WorkManager specifically, you can view the "Device State" section to see when your work started, how long it ran, and how it contributes to power consumption.

Under the "Device State: Jobs" track, you can see any workers that have been executed and their associated wake locks.

Device State section in Perfetto, showing CleanupWorker and BlurWorker execution.

Resources

Consult the Debug WorkManager page for an overview of the available debugging methods for other scenarios you might encounter.

And to try some of these methods hands-on and learn more about debugging WorkManager, check out the Advanced WorkManager and Testing codelab.

Next steps

Today we moved beyond code shrinking and explored how the Android Runtime and Jetpack Compose actually render your app. Whether it's pre-compiling critical paths with Baseline Profiles or smoothing out scroll states with the new Compose 1.9 and 1.10 features, these tools focus on the feel of your app. And we dove deep into best practices for debugging background work.

Ask Android

On Friday we're hosting a live AMA on performance. Ask your questions now using #AskAndroid and get them answered by the experts.



The
challenge

We
challenged you on Monday to enable R8. Today, we are asking you to
generate
one Baseline Profile

for your app.

With
Android
Studio Otter
,
the Baseline Profile Generator module wizard makes this easier than ever. Pick your most
critical user
journey—even if it’s just your app startup and login—and generate a profile.

Once
you have it, run a Macrobenchmark to compare
CompilationMode.None
vs.
CompilationMode.Partial.

Share your startup time improvements on social media using #optimizationEnabled.

Tune in tomorrow

You have shrunk your app with R8 and optimized your runtime with Profile Guided Optimization. But how do you prove these wins to your stakeholders? And how do you catch regressions before they hit production?

Join us tomorrow for Day 4: The Performance Leveling Guide, where we will map out exactly how to measure your success, from field data in Play Vitals to deep local tracing with Perfetto.

Androidify: Building AI first Android Experiences with Gemini using Jetpack Compose and Firebase

Posted by Rebecca Franks – Developer Relations Engineer, Tracy Agyemang – Product Marketer, and Avneet Singh – Product Manager

Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it’s available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt of what you’re looking for, add some accessories, and watch as AI builds your unique bot. Once you’ve had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let’s dive in!

Key technical integrations

The Androidify app combines powerful technologies to deliver a seamless and engaging user experience. Here’s a breakdown of the core components and their roles:

AI with Gemini and Firebase

Androidify leverages the Firebase AI Logic SDK to access Google’s powerful Gemini and Imagen* models. This is crucial for several key features:

  • Image validation: The app first uses Gemini 2.5 Flash to validate the user’s photo. This includes checking that the image contains a clear, focused person and meets safety standards before any further processing. This is a critical first step to ensure high-quality and safe outputs.
  • Image captioning: Once validated, the model generates a detailed caption of the user’s image. This is done using structured output, which means the model returns a specific JSON format, making it easier for the app to parse the information. This detailed description helps create a more accurate and creative final result.
  • Android Bot Generation: The generated caption is then used to enrich the prompt for the final image generation. A specifically fine-tuned version of the Imagen 3 model is then called to generate the custom Android bot avatar based on the enriched prompt. This custom fine-tuning ensures the results are unique and align with the app’s playful and stylized aesthetic.
  • The Androidify app also has a “Help me write” feature which uses Gemini 2.5 Flash to create a random description for a bot’s clothing and hairstyle, adding a bit of a fun “I’m feeling lucky” element.


    UI with Jetpack Compose and CameraX

    The app’s user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.

    For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera’s view, enabling the capture button and adding visual guides for the user. It also makes the app’s camera features responsive to different device types, including foldables in tabletop mode.

    Androidify also makes extensive use of the latest Compose features, such as:

  • Adaptive layouts: It’s designed to look great on various screen sizes, from phones to foldables and tablets, by leveraging WindowSizeClass and reusable composables.
  • Shared element transitions: The app uses the new Jetpack Navigation 3 library to create smooth and delightful screen transitions, including morphing shape animations that add a polished feel to the user experience.
  • Auto-sizing text: With Compose 1.8, the app uses a new parameter that automatically adjusts font size to fit the container’s available size, which is used for the app’s main “Customize your own Android Bot” text.

    Figure 1. Androidify Flow

    Latest updates

    In the latest version of Androidify, we’ve added some new powerful AI driven features.

    Background vibe generation with Gemini Image editing

    Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background “vibe” to bring the Android bots to life.


    Figure 2. Combining the Android bot with a background vibe description to generate your new Android Bot in a scene

    This is achieved by using Firebase AI Logic – passing a prompt for the background vibe, and the input image bitmap of the bot, with instructions to Gemini on how to combine the two together.

    override suspend fun generateImageWithEdit(
            image: Bitmap,
            backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
        ): Bitmap {
            val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
                modelName = "gemini-2.5-flash-image-preview",
                generationConfig = generationConfig {
                    responseModalities = listOf(
                        ResponseModality.TEXT,
                        ResponseModality.IMAGE,
                    )
                },
            )
    	  // We combine the backgroundPrompt with the input image which is the Android Bot, to produce the new bot with a background
            val prompt = content {
                text(backgroundPrompt)
                image(image)
            }
            val response = model.generateContent(prompt)
            val image = response.candidates.firstOrNull()
                ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
            return image ?: throw IllegalStateException("Could not extract image from model response")
        }

    Sticker mode with ML Kit Subject Segmentation

    The app also includes a “Sticker mode” option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot. You can use “Sticker mode” in apps that support stickers.


    Figure 3. White background removal of Android Bot to create a PNG that can be used with apps that support stickers

    The code for the sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if it has not, it requests the download and waits for it to complete. If the model is already installed, the app passes the original Android Bot image into the segmenter and calls process on it to remove the background. The foregroundBitmap object is then returned for exporting.

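    To give a sense of what that flow looks like, here is an illustrative sketch of the background-removal step with the ML Kit Subject Segmentation API; the function name is hypothetical, the module-availability check described above is omitted, and await() assumes the kotlinx-coroutines-play-services artifact. This is not the app's actual implementation.

    import android.graphics.Bitmap
    import com.google.mlkit.vision.common.InputImage
    import com.google.mlkit.vision.segmentation.subject.SubjectSegmentation
    import com.google.mlkit.vision.segmentation.subject.SubjectSegmenterOptions
    import kotlinx.coroutines.tasks.await

    // Illustrative sketch: request only the foreground bitmap from the subject
    // segmenter and use it as the transparent-background sticker.
    suspend fun removeBackground(botBitmap: Bitmap): Bitmap {
        val options = SubjectSegmenterOptions.Builder()
            .enableForegroundBitmap()
            .build()
        val segmenter = SubjectSegmentation.getClient(options)

        val result = segmenter.process(InputImage.fromBitmap(botBitmap, 0)).await()
        return result.foregroundBitmap
            ?: error("Segmentation returned no foreground bitmap")
    }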

    See the LocalSegmentationDataSource for the full source implementation

    Learn more

    To learn more about Androidify behind the scenes, take a look at the new solutions walkthrough, inspect the code or try out the experience for yourself at androidify.com or download the app on Google Play.


    *Check responses. Compatibility and availability varies. 18+.

    How Dashlane Brought Credential Manager to Wear OS with Only 78 New Lines of Code

    Posted by John Zoeller – Developer Relations Engineer, Loyrn Hairston – Product Marketing Manager, and Jonathan Salamon – Dashlane Staff Software Engineer

    Dashlane is a password management and provisioning tool that provides a secure way to manage user credentials, access control, and authentication across multiple systems and applications.

    Dashlane has over 18 million users and 20,000 businesses in 180 countries. It’s available on Android, Wear OS, iOS, macOS, Windows, and as a web app with an extension for Chrome, Firefox, Edge, and Safari.

    Recently, they expanded their offerings by creating a Wear OS app with a Credential Provider integration from the Credential Manager API, bringing passkeys to their clients and users on smartwatches.

    Streamlining Authentication on Wear OS

    Dashlane users have frequently requested a Wear OS solution that provides standalone authentication for their favorite apps. In the past, Wear OS lacked the key APIs necessary for this request, which kept Dashlane from being able to provide the functionality. In their words:

    “Our biggest challenge was the lack of a standard credentials API on Wear OS, which meant that it was impossible to bring our core features to this platform.”

    This has changed with the introduction of the new Credential Manager API on Wear OS.

    Credential Manager provides a simplified, standardized user sign-in experience with built-in authentication options for passkeys, passwords, and federated identities like Sign in with Google. Conveniently, it can be implemented with minimal effort by reusing the same code as the mobile version.

    The Dashlane team was thrilled to learn about this, as it meant they could save a lot of time and effort: “[The] CredentialManager API provides the same API on phones and Wear OS; you write the code only once to support multiple form factors.”


    Selecting Dashlane-provided credentials is simple for users

    After Dashlane had planned out their roadmap, they were able to execute their vision for the new app with only a small engineering investment, reusing 92% of the Credential Manager code from their mobile app. And because the developers built Dashlane's app UI with Jetpack Compose for Wear OS, 60% of their UI code was also reused.

    Quote from Sebastien Eggenspieler, Senior engineer at Dashlane

    Developing for Wear OS

    To provide credentials to other apps with Credential Manager, Dashlane needed to implement the Credential Provider interface on Wear OS. This proved to be a simple exercise in calling their existing mobile code, where Dashlane had already implemented behavior for credential querying and credential selection.

    For example, Dashlane was able to reuse their logic to handle client invocations of CredentialManager.getCredential. When a client invokes this, the Android framework propagates the client’s getCredentialRequest to Dashlane’s CredentialProviderService.onBeginGetCredentialRequest implementation to retrieve the credentials specified in the request.
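    For context, the client side of this handshake is a single suspending call; the sketch below is illustrative (the requestJson would come from your server's WebAuthn challenge) and is not Dashlane's code.

    import android.content.Context
    import androidx.credentials.CredentialManager
    import androidx.credentials.GetCredentialRequest
    import androidx.credentials.GetPublicKeyCredentialOption
    import androidx.credentials.PublicKeyCredential

    // Illustrative client call: the framework routes this request to installed
    // providers such as Dashlane, which respond via onBeginGetCredentialRequest.
    suspend fun signInWithPasskey(context: Context, requestJson: String): String? {
        val credentialManager = CredentialManager.create(context)
        val request = GetCredentialRequest(
            listOf(GetPublicKeyCredentialOption(requestJson = requestJson))
        )
        val result = credentialManager.getCredential(context, request)
        return (result.credential as? PublicKeyCredential)?.authenticationResponseJson
    }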

    Dashlane delegates the logic for onBeginGetCredentialRequest to their handleGetCredentials function, below, which is shared between their mobile and Wear OS implementations.

    // When a Credential Manager client calls 'getCredential', the Android
    // framework invokes `onBeginGetCredentialRequest`. Dashlane
    // implemented this `handleGetCredentials` function to handle some of
    // the logic needed for `onBeginGetCredentialRequest`
    override fun handleGetCredentials(
        context: Context,
        request: BeginGetCredentialRequest):
    List<CredentialEntry> =
      request.beginGetCredentialOptions.flatMap { option ->
        when (option) {
          // Handle passkey credential
          is BeginGetPublicKeyCredentialOption -> {
            val passkeyRequestOptions = Gson().fromJson(
                option.requestJson, PasskeyRequestOptions::class.java)
    
            credentialLoader.loadPasskeyCredentials(
              passkeyRequestOptions.rpId,
              passkeyRequestOptions.allowCredentials ?: listOf()
            ).map { passkey ->
              val passkeyDisplayName = getSuggestionTitle(passkey, context)
    
              PublicKeyCredentialEntry.Builder(
                context,
                passkeyDisplayName,
                pendingIntentForGet(context, passkey.id),
                option
              )
              .setLastUsedTime(passkey.locallyViewedDate)
              .setIcon(buildMicroLogomarkIcon(context = context))
              .setDisplayName(passkeyDisplayName)
              .build()
    // Handle other credential types
    

    Reusing precise logic flows like this made it a breeze for Dashlane to implement their Wear OS app.

    “The Credential Manager API is unified across phones and Wear OS, which was a huge advantage. It meant we only had to write our code once.”

    Impact and Improved Growth

    The team is excited to be among the first credential providers on wearables: “Being one of the first on Wear OS was a key differentiator for us. It reinforces our brand as an innovator, focusing on the user experience, better meeting and serving our users where they are.”

    As an early adopter of this new technology, Dashlane's Wear OS app has already shown early promise, as described by Dashlane software engineer Sebastien Eggenspieler: "In the first 3 months, our Wear OS app organically grew to represent 1% of our active device install base."

    With their new experience launched, Wear OS apps can now rely on Dashlane as a trusted credential provider for their own Credential Manager integrations, using Dashlane to allow users to log in with a single tap; and users can view details about their credentials right from their wrist.


    Dashlane’s innovative design helps users manage their credentials

    Dashlane’s Recommendations to Wear OS Developers

    With their implementation complete, the Dashlane team can offer some advice for other developers who are considering the Credential Manager API. Their message is clear: “the future is passwordless… and passkeys are leading the way, [so] provide a passkey option.”

    As a true innovator in their field, and the preferred credential provider for so many users, we are thrilled to have Dashlane support Credential Manager. They truly inspired us with their commitment to providing Wear OS users with the best experience possible:

    “We hope that in the future every app developer will migrate their existing users to the Credential Manager API.”

    Get Started with Credential Manager

    With its elegant simplicity and built-in secure authentication methods, the Credential Manager API provides a simple, straightforward authentication experience for users that changes the game in Wear OS.

    Want to find out more about how Dashlane is driving the future of end-user authentication? Check out our video blog with their team in Paris, and read about how they saw a 70% increase in sign-in conversion rates with passkeys.

    To learn more about how you can implement Credential Manager, read our official developer and UX guides, and be sure to check out our brand new blog post and video blog as part of Wear OS Spotlight week!

    We’ve also expanded our existing Credential Manager sample to support Wear OS, to help guide you along the way, and if you’d like to provide credentials like Dashlane, you can use our Credential Provider sample.

    Finally, explore how you can start developing additional experiences for Wear OS today with our documentation and samples.

    What's new in the Jetpack Compose August '25 release

    Posted by Meghan Mehta – Developer Relations Engineer and Nick Butcher – Product Manager

    Today, the Jetpack Compose August ‘25 release is stable. This release contains version 1.9 of core compose modules (see the full BOM mapping), introducing new APIs for rendering shadows, 2D scrolling, rich styling of text transformations, improved list performance, and more!

    To use today’s release, upgrade your Compose BOM version to 2025.08.00:

    implementation(platform("androidx.compose:compose-bom:2025.08.00"))
    

    Shadows

    We’re happy to introduce two highly requested modifiers: Modifier.dropShadow() and Modifier.innerShadow() allowing you to render box-shadow effects (compared to the existing Modifier.shadow() which renders elevation based shadows based on a lighting model).

    Modifier.dropShadow()

    The dropShadow() modifier draws a shadow behind your content. You can add it to your composable chain and specify the radius, color, and spread. Remember, content that should appear on top of the shadow (like a background) should be drawn after the dropShadow() modifier.

    @Composable
    @Preview(showBackground = true)
    fun SimpleDropShadowUsage() {
        val pinkColor = Color(0xFFe91e63)
        val purpleColor = Color(0xFF9c27b0)
        Box(Modifier.fillMaxSize()) {
            Box(
                Modifier
                    .size(200.dp)
                    .align(Alignment.Center)
                    .dropShadow(
                        RoundedCornerShape(20.dp),
                        dropShadow = DropShadow(
                            15.dp,
                            color = pinkColor,
                            spread = 10.dp,
                            alpha = 0.5f
                        )
                    )
                    .background(
                        purpleColor,
                        shape = RoundedCornerShape(20.dp)
                    )
            )
        }
    }
    


    Figure 1. Drop shadow drawn all around shape

    Modifier.innerShadow()

    The Modifier.innerShadow() draws shadows on the inset of the provided shape:

    @Composable
    @Preview(showBackground = true)
    fun SimpleInnerShadowUsage() {
        val pinkColor = Color(0xFFe91e63)
        val purpleColor = Color(0xFF9c27b0)
        Box(Modifier.fillMaxSize()) {
            Box(
                Modifier
                    .size(200.dp)
                    .align(Alignment.Center)
                    .background(
                        purpleColor,
                        shape = RoundedCornerShape(20.dp)
                    )
                    .innerShadow(
                        RoundedCornerShape(20.dp),
                        innerShadow = InnerShadow(
                            15.dp,
                            color = Color.Black,
                            spread = 10.dp,
                            alpha = 0.5f
                        )
                    )
            )
        }
    }
    


    Figure 2. Modifier.innerShadow() applied to a shape

    The order for inner shadows is very important. The inner shadow draws on top of the content, so for the example above, we needed to move the inner shadow modifier after the background modifier. We’d need to do something similar when using it on top of something like an Image. In this example, we’ve placed a separate Box to render the shadow in the layer above the image:

    @Composable
    @Preview(showBackground = true)
    fun PhotoInnerShadowExample() {
        Box(Modifier.fillMaxSize()) {
            val shape = RoundedCornerShape(20.dp)
            Box(
                Modifier
                    .size(200.dp)
                    .align(Alignment.Center)
            ) {
                Image(
                    painter = painterResource(id = R.drawable.cape_town),
                    contentDescription = "Image with Inner Shadow",
                    contentScale = ContentScale.Crop,
                    modifier = Modifier.fillMaxSize()
                        .clip(shape)
                )
                Box(
                    modifier = Modifier.fillMaxSize()
                        .innerShadow(
                            shape,
                            innerShadow = InnerShadow(15.dp,
                                spread = 15.dp)
                        )
                )
            }
        }
    }
    


    Figure 3. Inner shadow on top of an image

    New Visibility modifiers

    Compose UI 1.8 introduced onLayoutRectChanged, a new performant way to track the location of elements on screen. We’re building on top of this API to support common use cases by introducing onVisibilityChanged and onFirstVisible. These APIs accept optional parameters for the minimum fraction or amount of time the item has been visible for before invoking your action.

    Use onVisibilityChanged for UI changes or side effects that should happen based on visibility, like automatically playing and pausing videos or starting an animation:

    LazyColumn {
      items(feedData) { video ->
        VideoRow(
            video,
            Modifier.onVisibilityChanged(minDurationMs = 500, minFractionVisible = 1f) {
              visible ->
                if (visible) video.play() else video.pause()
              },
        )
      }
    }
    

    Use onFirstVisible for use cases when you wish to react to an element first becoming visible on screen for example to log impressions:

    LazyColumn {
        items(100) {
            Box(
                Modifier
                    // Log impressions when item has been visible for 500ms
                    .onFirstVisible(minDurationMs = 500) { /* log impression */ }
                    .clip(RoundedCornerShape(16.dp))
                    .drawBehind { drawRect(backgroundColor) }
                    .fillMaxWidth()
                    .height(100.dp)
            )
        }
    }
    

    Rich styling in OutputTransformation

    BasicTextField now supports applying styles like color and font weight from within an OutputTransformation.

    The new TextFieldBuffer.addStyle() methods let you apply a SpanStyle or ParagraphStyle to change the appearance of text, without changing the underlying TextFieldState. This is useful for visually formatting input, like phone numbers or credit cards. This method can only be called inside an OutputTransformation.

    // Format a phone number and color the punctuation
    val phoneTransformation = OutputTransformation {
        // 1234567890 -> (123) 456-7890
        if (length == 10) {
            insert(0, "(")
            insert(4, ") ")
            insert(9, "-")
    
            // Color the added punctuation
            val gray = Color(0xFF666666)
            addStyle(SpanStyle(color = gray), 0, 1)
            addStyle(SpanStyle(color = gray), 4, 5)
            addStyle(SpanStyle(color = gray), 9, 10)
        }
    }
    
    BasicTextField(
        state = myTextFieldState,
        outputTransformation = phoneTransformation
    )
    

    LazyLayout

    The building blocks of LazyLayout are all now stable! Check out LazyLayoutMeasurePolicy, LazyLayoutItemProvider, and LazyLayoutPrefetchState to build your own Lazy components.

    Prefetch Improvements

    There are now significant scroll performance improvements in Lazy List and Lazy Grid with the introduction of new prefetch behavior. You can now define a LazyLayoutCacheWindow to prefetch more content. By default, only one item is composed ahead of time in the direction of scrolling, and after something scrolls off screen it is discarded. You can now customize the number of items to prefetch ahead and retain behind, through a fraction of the viewport or a dp size. When you opt into using LazyLayoutCacheWindow, items begin prefetching in the ahead area straight away.

    The configuration entry point for this is on LazyListState, which takes in the cache window size:

    @OptIn(ExperimentalFoundationApi::class)
    @Composable
    private fun LazyColumnCacheWindowDemo() {
        // Prefetch items 150.dp ahead and retain items 100.dp behind the visible viewport
        val dpCacheWindow = LazyLayoutCacheWindow(ahead = 150.dp, behind = 100.dp)
        // Alternatively, prefetch/retain items as a fraction of the list size
        // val fractionCacheWindow = LazyLayoutCacheWindow(aheadFraction = 1f, behindFraction = 0.5f)
        val state = rememberLazyListState(cacheWindow = dpCacheWindow)
        LazyColumn(state = state) {
            items(1000) { Text(text = "$it", fontSize = 80.sp) }
        }
    }
    


    Note: Prefetch composes more items than are currently visible — the new cache window API will likely increase prefetching. This means that an item's LaunchedEffects and DisposableEffects may run earlier, so do not use this as a signal for visibility, e.g. for impression tracking. Instead, we recommend using the new onFirstVisible and onVisibilityChanged APIs. Even if you're not manually customizing LazyLayoutCacheWindow now, avoid using composition effects as a signal of content visibility, as this new prefetch mechanism will be enabled by default in a future release.

    Scroll

    2D Scroll APIs

    Following the release of Draggable2D, Scrollable2D is now available, bringing two-dimensional scrolling to Compose. While the existing Scrollable modifier handles single-orientation scrolling, Scrollable2D enables both scrolling and flinging in 2D. This allows you to create more complex layouts that move in all directions, such as spreadsheets or image viewers. Nested scrolling is also supported, accommodating 2D scenarios.

    val offset = remember { mutableStateOf(Offset.Zero) }
    Box(
        Modifier.size(150.dp)
            .scrollable2D(
                state =
                    rememberScrollable2DState { delta ->
                        offset.value = offset.value + delta // update the state
                        delta // indicate that we consumed all the pixels available
                    }
            )
            .background(Color.LightGray),
        contentAlignment = Alignment.Center,
    ) {
        Text(
            "X=${offset.value.x.roundToInt()} Y=${offset.value.y.roundToInt()}",
            style = TextStyle(fontSize = 32.sp),
        )
    }
    


    Scroll Interop Improvements

    There are bug fixes and new features to improve scroll and nested scroll interop with Views, including the following:

      • Fixed the dispatching of incorrect velocities during fling animations between Compose and Views.
      • Compose now correctly invokes the View’s nested scroll callbacks in the appropriate order.

    Improve crash analysis by adding source info to stack traces

    We have heard from you that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. To address this we’re providing a new, opt-in API to provide richer crash location details, including composable names and locations enabling you to:

      • Efficiently identify and resolve crash sources.
      • More easily isolate crashes for reproducible samples.
      • Investigate crashes that previously only showed internal stack frames.

    Note that we do not recommend using this API in release builds due to the performance impact of collecting this extra information, nor does it work in minified apks.

    To enable this feature, add the line below to the application entry point. Ideally, this configuration should be performed before any compositions are created to ensure that the stack trace information is collected:

    class App : Application() {
       override fun onCreate() {
            // Enable only for debug flavor to avoid perf regressions in release
            Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
       }
    }
    

    New annotations and Lint checks

    We are introducing a new runtime-annotation library that exposes annotations used by the compiler and tooling (such as lint checks). This allows non-Compose modules to use these annotations without a dependency on the Compose runtime library. The @Stable, @Immutable, and @StableMarker annotations have moved to runtime-annotation, allowing you to annotate classes and functions that do not depend on Compose.

    Additionally, we have added two new annotations and corresponding lint checks:

      • @RememberInComposition: An annotation that can mark constructors, functions, and property getters, to indicate that they must not be called directly inside composition without being remembered. Errors will be raised by a corresponding lint check.
      • @FrequentlyChangingValue: An annotation that can mark functions, and property getters, to indicate that they should not be called directly inside composition, as this may cause frequent recompositions (for example, marking scroll position values and animating values). Warnings are provided by a corresponding lint check.

    Additional updates

    Get started

    We appreciate all bug reports and feature requests submitted to our issue tracker. Your feedback allows us to build the APIs you need in your apps. Happy composing!

    Top 3 Updates for Android Developer Productivity @ Google I/O '25

    Posted by Meghan Mehta – Android Developer Relations Engineer

    #1 Agentic AI is available for Gemini in Android Studio

    Gemini in Android Studio is the AI-powered coding companion that makes you more productive at every stage of the dev lifecycle. At Google I/O 2025 we previewed new agentic AI experiences: Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier for you to build and test code. We also announced Agent Mode, which was designed to handle complex, multi-stage development tasks that go beyond typical AI assistant capabilities, invoking multiple tools to accomplish tasks on your behalf. We’re excited to see how you leverage these agentic AI experiences which are now available in the latest preview version of Android Studio on the canary release channel.

    You can also use Gemini to automatically generate Jetpack Compose previews, as well as transform UI code using natural language, saving you time and effort. Give Gemini more context by attaching images and project files to your prompts, so you can get more relevant responses. And if you’re looking for enterprise-grade privacy and security features backed by Google Cloud, Gemini in Android Studio for businesses is now available. Developers and admins can unlock these features and benefits by subscribing to Gemini Code Assist Standard or Enterprise editions.

    #2 Build better apps faster with the latest stable release of Jetpack Compose

    Compose is our recommended UI toolkit for Android development, used by over 60% of the top 1K apps on Google Play. We released a new version of our Jetpack Navigation library: Navigation 3, which has been rebuilt from the ground up to give you more flexibility and control over your implementation. We unveiled the new Material 3 Expressive update which provides tools to enhance your product’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for your users. The latest stable Bill of Materials (BOM) release for Compose adds new features such as autofill support, auto-sizing text, visibility tracking, animate bounds modifier, accessibility checks in tests, and more! This release also includes significant rewrites and improvements to multiple sub-systems including semantics, focus and text optimizations.

    These optimizations are available to you with no code changes other than upgrading your Compose dependency. If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we’re working on including pausable composition, updates to LazyLayout prefetch, context menus, and others. Finally, we’ve added Compose support to CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.

    #3 The new Kotlin Multiplatform (KMP) shared module template helps you share business logic

    KMP enables teams to deliver quality Android and iOS apps with less development time. The KMP ecosystem continues to grow: last year alone, over 900 new KMP libraries were published. At Google I/O we released a new Android Studio KMP shared module template to help you craft and manage business logic, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help you get started with KMP. We also shared additional announcements at KotlinConf.

    Learn more about what we announced at Google I/O 2025 to help you build better apps, faster.

    Top 3 updates for building excellent, adaptive apps at Google I/O ‘25 https://theinshotproapk.com/top-3-updates-for-building-excellent-adaptive-apps-at-google-i-o-25/ Tue, 10 Jun 2025 18:01:00 +0000


    Posted by Mozart Louis – Developer Relations Engineer

    Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

    Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.

    If you missed any of the key #GoogleIO25 updates, just saw the release of Android 16, or are ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3’s editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.

    Check out the Google I/O playlist for all the session details.

    Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:

    #1: Build adaptively to unlock 500 million devices

    In today’s diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.

    The talk emphasizes that you don’t need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app’s potential.

    Here are some resources we encourage you to use in your apps:

    New feature support in Jetpack Compose Adaptive Libraries

      • We’re continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By adopting canonical layout patterns such as list-detail or supporting pane in your app code, your application will automatically adjust and reflow when resized.

    Navigation 3

      • The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.

    Updates to Window Manager Library

      • AndroidX.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as “extra large,” while widths between 1200dp and 1600dp are classified as “large.” These subdivisions give you more granularity when optimizing your application for a wider range of window sizes; a minimal sketch of the resulting breakpoints follows this list.

    Support all orientations and be resizable

    Extend to Android XR

    Upgrade your Wear OS apps to Material 3 Design
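    As a rough illustration of where the new breakpoints sit relative to the existing ones, here is a small, library-independent Kotlin sketch; 600dp and 840dp are the long-standing compact/medium/expanded boundaries, while 1200dp and 1600dp are the new large and extra-large boundaries described above.

    // Illustrative breakpoint logic only; in a real app, prefer the size class
    // types provided by androidx.window / Compose Material 3 adaptive.
    enum class WidthClass { COMPACT, MEDIUM, EXPANDED, LARGE, EXTRA_LARGE }

    fun widthClassFor(widthDp: Float): WidthClass = when {
        widthDp < 600f -> WidthClass.COMPACT
        widthDp < 840f -> WidthClass.MEDIUM
        widthDp < 1200f -> WidthClass.EXPANDED
        widthDp < 1600f -> WidthClass.LARGE
        else -> WidthClass.EXTRA_LARGE
    }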

    You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.

    #2: Enhance your app’s performance optimization

    Get ready to take your app’s performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.

    Redesigned UiAutomator API

      • To make benchmarking reliable and reproducible, there’s the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.

    Macrobenchmarks

      • Once your tests are in place, it’s time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app’s health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app’s performance and where to focus your efforts.
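    As a minimal sketch of what such a startup macrobenchmark can look like (the application ID is hypothetical, and the test lives in a separate macrobenchmark module that depends on androidx.benchmark:benchmark-macro-junit4):

    import androidx.benchmark.macro.StartupMode
    import androidx.benchmark.macro.StartupTimingMetric
    import androidx.benchmark.macro.junit4.MacrobenchmarkRule
    import androidx.test.ext.junit.runners.AndroidJUnit4
    import org.junit.Rule
    import org.junit.Test
    import org.junit.runner.RunWith

    @RunWith(AndroidJUnit4::class)
    class StartupBenchmark {

        @get:Rule
        val benchmarkRule = MacrobenchmarkRule()

        @Test
        fun coldStartup() = benchmarkRule.measureRepeated(
            packageName = "com.example.app", // hypothetical application ID
            metrics = listOf(StartupTimingMetric()),
            iterations = 5,
            startupMode = StartupMode.COLD,
        ) {
            // Launch the app from the home screen and wait for the first frame.
            pressHome()
            startActivityAndWait()
        }
    }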

    R8: more than code shrinking and obfuscation

      • You might know R8 as a code shrinking tool, but it’s capable of so much more! The talk dives into R8’s capabilities using the “Androidify” sample app: how to apply R8, troubleshoot issues (like crashes!), and configure it for optimal performance. It also shows how library developers can include consumer keep rules so that their important code is preserved when the library is used in an application.
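    Here is a minimal Gradle (Kotlin DSL) sketch of both sides of that story; file names such as consumer-rules.pro are the conventional defaults, not something specific to Androidify.

    // App module build.gradle.kts: enable R8 for release builds.
    android {
        buildTypes {
            release {
                isMinifyEnabled = true   // turns on R8 shrinking, obfuscation, and optimization
                isShrinkResources = true // also strips unused resources
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro",
                )
            }
        }
    }

    // Library module build.gradle.kts: ship consumer keep rules with the library,
    // so the app's R8 pass preserves the library's reflectively used entry points.
    android {
        defaultConfig {
            consumerProguardFiles("consumer-rules.pro")
        }
    }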

    #3: Build Richer Image and Video Experiences

    In today’s digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.

    Media3Effects in CameraX Preview

      • At Google I/O, the session dives into practical strategies for capturing high-quality video with CameraX while applying Media3 effects to the preview.

    Google Low-Light Boost

      • Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.

    New Camera & Media Samples!

    Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.

    Learn how to build adaptive apps

    Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.

    Google I/O 2025: Build adaptive Android apps that shine across form factors https://theinshotproapk.com/google-i-o-2025-build-adaptive-android-apps-that-shine-across-form-factors/ Wed, 04 Jun 2025 12:03:09 +0000


    Posted by Fahd Imtiaz – Product Manager, Android Developer

    If your app isn’t built to adapt, you’re missing out on the opportunity to reach a giant swath of users across 500 million devices! At Google I/O this year, we are exploring how adaptive development isn’t just a good idea, but essential to building apps that shine across the expanding Android device ecosystem. This is your guide to meeting users wherever they are, with experiences that are perfectly tailored to their needs.

    The advantage of building adaptive

    In today’s multi-device world, users expect their favorite applications to work flawlessly and intuitively, whether they’re on a smartphone, tablet, or Chromebook. This expectation for seamless experiences isn’t just about convenience; it’s an important factor for user engagement and retention.

    For example, users of entertainment apps (including Prime Video, Netflix, and Hulu) who use both a phone and a tablet spend almost 200% more time in-app (nearly 3x the engagement) than phone-only users in the US*.

    Peacock, NBCUniversal’s streaming service, has seen a trend of users moving between mobile and large screens, and building adaptively enables a single build to work across the different form factors.

    “This allows Peacock to have more time to innovate faster and deliver more value to its customers.”

    – Diego Valente, Head of Mobile, Peacock and Global Streaming

    Adaptive Android development offers the strategic solution, enabling apps to perform effectively across an expanding array of devices and contexts through intelligent design choices that emphasize code reuse and scalability. With Android’s continuous growth into new form factors and upcoming enhancements such as desktop windowing and connected displays in Android 16, an app’s ability to seamlessly adapt to different screen sizes is becoming increasingly crucial for retaining users and staying competitive.

    Beyond direct user benefits, designing adaptively also translates to increased visibility. The Google Play Store actively helps promote developers whose apps excel on different form factors. If your application delivers a great experience on tablets or is excellent on ChromeOS, users on those devices will have an easier time discovering your app. This creates a win-win situation: better quality apps for users and a broader audience for you.

    examples of form factors across small phones, tablets, laptops, and auto

    Latest in adaptive Android development from Google I/O

    To help you more effectively build compelling adaptive experiences, we shared several key updates at I/O this year.

    Build for the expanding Android device ecosystem

    Your mobile apps can now reach users beyond phones on over 500 million active devices, including foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 introduces significant advancements in desktop windowing for a true desktop-like experience on large screens and when devices are connected to external displays. And, Android XR is opening a new dimension, allowing your existing mobile apps to be available in immersive virtual environments.

    The mindset shift to Adaptive

    With the expanding Android device ecosystem, adaptive app development is a fundamental strategy. It’s about how the same mobile app runs well across phones, foldables, tablets, Chromebooks, connected displays, XR, and cars, laying a strong foundation for future devices and letting you differentiate for specific form factors. You don’t need to rebuild your app for each form factor; instead, make small, iterative changes as needed. Embracing this adaptive mindset today isn’t just about keeping pace; it’s about leading the charge in delivering exceptional user experiences across the entire Android ecosystem.

    examples of form factors including vr headset

    Leverage powerful tools and libraries to build adaptive apps:

      • Compose Adaptive Layouts library: This library makes adaptive development easier by allowing your app code to fit into canonical layout patterns like list-detail and supporting pane that automatically reflow as your app is resized, flipped, or folded. In the 1.1 release, we introduced pane expansion, allowing users to resize panes. The Socialite demo app showcased how one codebase using this library can adapt across six form factors. New adaptation strategies like “Levitate” (elevating a pane, e.g., into a dialog or bottom sheet) and “Reflow” (reorganizing panes on the same level) were also announced in 1.2 (alpha). For XR, component overrides can automatically spatialize UI elements. A minimal list-detail sketch follows this list.

      • Jetpack Navigation 3 (Alpha): This new navigation library simplifies defining user journeys across screens with less boilerplate code, especially for multi-pane layouts in Compose. It helps handle scenarios where list and detail panes might be separate destinations on smaller screens but shown together on larger ones. Check out the new Jetpack Navigation library in alpha.

      • Jetpack Compose input enhancements: Compose’s layered architecture, strong input support, and single location for layout logic simplify creating adaptive UIs. Upcoming in Compose 1.9 are right-click context menus and enhanced trackpad/mouse functionality.

      • Window Size Classes: Use window size classes for top-level layout decisions. AndroidX.window 1.5 introduces two new width size classes – “large” (1200dp to 1600dp) and “extra-large” (1600dp and larger) – providing more granular breakpoints for large screens. This helps in deciding when to expand navigation rails or show three panes of content. Support for these new breakpoints was also announced in the Compose adaptive layouts library 1.2 alpha, along with design guidance.

      • Compose previews: Get quick feedback by visualizing your layouts across a wide variety of screen sizes and aspect ratios. You can also specify different devices by name to preview your UI on their respective sizes and with their inset values.

      • Testing adaptive layouts: Validating your adaptive layouts is crucial and Android Studio offers various tools for testing – including previews for different sizes and aspect ratios, a resizable emulator to test across different screen sizes with a single AVD, screenshot tests, and instrumental behavior tests. And with Journeys with Gemini in Android Studio, you can define tests using natural language for even more robust testing across different window sizes.
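    As promised above, here is a minimal list-detail sketch against the Compose Material 3 adaptive 1.1 APIs. NotesPane and the note strings are hypothetical, and some names (for example contentKey and the suspend navigateTo call) vary between library versions, so treat this as a starting point rather than the exact library surface.

    import androidx.compose.foundation.clickable
    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.foundation.lazy.items
    import androidx.compose.material3.Text
    import androidx.compose.material3.adaptive.ExperimentalMaterial3AdaptiveApi
    import androidx.compose.material3.adaptive.layout.AnimatedPane
    import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffold
    import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffoldRole
    import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.rememberCoroutineScope
    import androidx.compose.ui.Modifier
    import kotlinx.coroutines.launch

    @OptIn(ExperimentalMaterial3AdaptiveApi::class)
    @Composable
    fun NotesPane(notes: List<String>) {
        // The navigator tracks which pane(s) the current window can show.
        val navigator = rememberListDetailPaneScaffoldNavigator<String>()
        val scope = rememberCoroutineScope()

        ListDetailPaneScaffold(
            directive = navigator.scaffoldDirective,
            value = navigator.scaffoldValue,
            listPane = {
                AnimatedPane {
                    LazyColumn {
                        items(notes) { note ->
                            Text(
                                text = note,
                                modifier = Modifier.clickable {
                                    // On narrow windows this swaps the list for the detail pane;
                                    // on wide windows both panes stay visible side by side.
                                    scope.launch {
                                        navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, note)
                                    }
                                },
                            )
                        }
                    }
                }
            },
            detailPane = {
                AnimatedPane {
                    Text(text = navigator.currentDestination?.contentKey ?: "Select a note")
                }
            },
        )
    }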

    Ensuring app availability across devices

    Avoid unnecessarily declaring required features (like specific cameras or GPS) in your manifest, as this can prevent your app from appearing in the Play Store on devices that lack those specific hardware components but could otherwise run your app perfectly.
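    One common pattern, sketched below, is to mark hardware features as optional in the manifest (for example <uses-feature android:name="android.hardware.camera.any" android:required="false" />) and gate the related UI with a runtime check instead:

    import android.content.Context
    import android.content.pm.PackageManager

    // A minimal sketch: only surface camera features when the hardware is present,
    // instead of declaring the camera as required and losing Play Store visibility.
    fun Context.hasCamera(): Boolean =
        packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA_ANY)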

    Handling different input methods

    Remember to handle various input methods like touch, keyboard, and mouse, especially with Chromebook detachables and connected displays.

    Prepare for orientation and resizability API changes in Android 16

    Beginning in Android 16, for apps targeting SDK 36, manifest and runtime restrictions on orientation, resizability, and aspect ratio will be ignored on displays that are at least 600dp in both dimensions. To meet user expectations, your apps will need layouts that work for both portrait and landscape windows, and support resizing at runtime. There’s a temporary opt-out manifest flag at both the application and activity level to delay these changes until targetSdk 37, and these changes currently do not apply to apps categorized as “Games”. Learn more about these API changes.

    Adaptive considerations for games

    Games need to be adaptive too, and Unity 6 will add enhanced support for configuration handling, including APIs for screenshots, aspect ratio, and density. Success stories like Asphalt Legends Unite show significant user-retention increases on foldables after implementing adaptive features.

    examples of form factors including vr headset

    Start building adaptive today

    Now is the time to elevate your Android apps, making them intuitively responsive across form factors. With the latest tools and updates we’re introducing, you have the power to build experiences that seamlessly flow across all devices, from foldables to cars and beyond. Implementing these strategies will allow you to expand your reach and delight users across the Android ecosystem.

    Get inspired by the “Adaptive Android development makes your app shine across devices” talk, and explore all the resources you’ll need to start your journey at developer.android.com/adaptive-apps!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    *Source: internal Google data

    Androidify: Building delightful UIs with Compose https://theinshotproapk.com/androidify-building-delightful-uis-with-compose/ Tue, 03 Jun 2025 12:07:48 +0000


    Posted by Rebecca Franks – Developer Relations Engineer

    Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

    Material 3 Expressive

    Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

    It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.
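    If you want to try it, adding the alpha artifact mentioned above is enough to unlock the new APIs used in the rest of this post:

    dependencies {
        // Alpha artifact containing the Material 3 Expressive components and theme.
        implementation("androidx.compose.material3:material3:1.4.0-alpha10")
    }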

    Material Expressive Component updates

    In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

    In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted-in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

    @Composable
    fun AndroidifyTheme(
       content: @Composable () -> Unit,
    ) {
       val colorScheme = LightColorScheme
    
    
       MaterialExpressiveTheme(
           colorScheme = colorScheme,
           typography = Typography,
           shapes = shapes,
           motionScheme = MotionScheme.expressive(),
           content = {
               SharedTransitionLayout {
                   CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                       content()
                   }
               }
           },
       )
    }
    

    Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

    moving example of expressive button shapes in slow motion

    The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:


    Camera button with a MaterialShapes.Cookie9Sided shape

    Animations

    Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feel throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something on screen (such as x/y position, rotation, or scale):

    val interactionSource = remember { MutableInteractionSource() }
    val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
    Spacer(
       modifier
           .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
           .clip(MaterialShapes.Cookie9Sided.toShape())
           .size(size)
           .drawWithCache {
               //.. etc
           },
    )
    

    Camera button scale interaction

    Shared element animations

    The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

    moving example of expressive button shapes in slow motion

    To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

    @Composable
    fun Modifier.sharedBoundsRevealWithShapeMorph(
       sharedContentState: SharedTransitionScope.SharedContentState,
       sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
       animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
       boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
       resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
       renderInOverlayDuringTransition: Boolean = true,
       restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
       targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
    ): Modifier
    

    Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

    val animatedProgress =
       animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)
    
    
    val morph = remember {
       Morph(restingShape, targetShape)
    }
    val morphClip = MorphOverlayClip(morph, { animatedProgress.value })
    
    
    return this@sharedBoundsRevealWithShapeMorph
       .sharedBounds(
           sharedContentState = sharedContentState,
           animatedVisibilityScope = animatedVisibilityScope,
           boundsTransform = boundsTransform,
           resizeMode = resizeMode,
           clipInOverlayDuringTransition = morphClip,
           renderInOverlayDuringTransition = renderInOverlayDuringTransition,
       )
    

    View the full code snippet for this Modifier on GitHub.

    Autosize text

    With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

    BasicText(
        text,
        style = MaterialTheme.typography.titleLarge,
        autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
    )
    

    This is used front and center for the “Customize your own Android Bot” text:

    Text reads Customize your own Android Bot with an inline moving image

    “Customize your own Android Bot” text with inline GIF

    This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

    @Composable
    private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
       Box(modifier = modifier) {
           val animatedBot = "animatedBot"
           val text = buildAnnotatedString {
               append(stringResource(R.string.customize))
               // Attach "animatedBot" annotation on the placeholder
               appendInlineContent(animatedBot)
               append(stringResource(R.string.android_bot))
           }
           var placeHolderSize by remember {
               mutableStateOf(220.sp)
           }
           val inlineContent = mapOf(
               Pair(
                   animatedBot,
                   InlineTextContent(
                       Placeholder(
                           width = placeHolderSize,
                           height = placeHolderSize,
                           placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                       ),
                   ) {
                       DancingBot(
                           modifier = Modifier
                               .padding(top = 32.dp)
                               .fillMaxSize(),
                       )
                   },
               ),
           )
           BasicText(
               text,
               modifier = Modifier
                   .align(Alignment.Center)
                   .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
               style = MaterialTheme.typography.titleLarge,
               autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
               maxLines = 6,
               onTextLayout = { result ->
                   placeHolderSize = result.layoutInput.style.fontSize * 3.5f
               },
               inlineContent = inlineContent,
           )
       }
    }
    

    Composable visibility with onLayoutRectChanged

    With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant alternative to onGloballyPositioned, and includes features such as debouncing and throttling to keep it efficient inside lazy layouts.

    In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

    var buttonBounds by remember {
       mutableStateOf<RelativeLayoutBounds?>(null)
    }
    var showColorSplash by remember {
       mutableStateOf(false)
    }
    Box(modifier = Modifier.fillMaxSize()) {
       PrimaryButton(
           buttonText = "Let's Go",
           modifier = Modifier
               .align(Alignment.BottomCenter)
               .onLayoutRectChanged(
                   callback = { bounds ->
                       buttonBounds = bounds
                   },
               ),
           onClick = {
               showColorSplash = true
           },
       )
    }
    

    We use these bounds as an indication of where to start the color splash animation from.

    moving image of a blue color splash transition between Androidify demo screens

    Learn more delightful details

    From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

    animated marquee example

    animated gradient button for AI powered actions example

    animated loading screen example

    Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose from using Material 3 Expressive, the new modifiers, auto-sizing text and of course a couple of delightful interactions!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    Androidify: Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX https://theinshotproapk.com/androidify-building-powerful-ai-driven-experiences-with-jetpack-compose-gemini-and-camerax/ Mon, 02 Jun 2025 12:07:28 +0000


    Posted by Rebecca Franks – Developer Relations Engineer

    The Android bot is a beloved mascot for Android users and developers, and previous versions of the bot builder have been very popular. This year we decided to rebuild the bot maker from the ground up, using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

    a moving image of various droid bots dancing individually

    Androidify app demo

    Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

    moving image showing the conversion of an image of a woman in a pink dress holding na umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

    Under the hood

    The app combines a variety of different Google technologies, such as:

      • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
      • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
      • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
      • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

    This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting looking examples!

    How does the Androidify app work?

    The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.

    Flow chart describing Androidify app flow

    Androidify app flow chart detailing how the app works with AI

    AI in Androidify with Gemini and ML Kit

    The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

      • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multi-modal capabilities of the Gemini API by giving it a prompt and an image at the same time:

    val response = generativeModel.generateContent(
       content {
           text(prompt)
           image(image)
       },
    )
    

      • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

      • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

      • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.

      • Image generation from the generated prompt: As the final step, Imagen generates the image, providing the prompt and the selected skin tone of the bot.

    The app also uses ML Kit pose detection to detect a person in the viewfinder, enabling the capture button only when a person is detected and adding fun indicators around the content to signal detection.
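    Here is a minimal sketch of what such an analyzer can look like with CameraX’s ImageAnalysis and ML Kit pose detection; PersonDetectedAnalyzer and its callback are hypothetical names, not Androidify’s actual implementation.

    import androidx.camera.core.ExperimentalGetImage
    import androidx.camera.core.ImageAnalysis
    import androidx.camera.core.ImageProxy
    import com.google.mlkit.vision.common.InputImage
    import com.google.mlkit.vision.pose.PoseDetection
    import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

    // Reports whether the current camera frame contains any pose landmarks,
    // which the UI can use to enable or disable the capture button.
    class PersonDetectedAnalyzer(
        private val onPersonDetected: (Boolean) -> Unit,
    ) : ImageAnalysis.Analyzer {

        private val detector = PoseDetection.getClient(
            PoseDetectorOptions.Builder()
                .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
                .build(),
        )

        @androidx.annotation.OptIn(ExperimentalGetImage::class)
        override fun analyze(imageProxy: ImageProxy) {
            val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
            val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            detector.process(input)
                .addOnSuccessListener { pose -> onPersonDetected(pose.allPoseLandmarks.isNotEmpty()) }
                .addOnCompleteListener { imageProxy.close() }
        }
    }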

    Explore more detailed information about AI usage in Androidify.

    Jetpack Compose

    The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

    Delightful details with the UI

    The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out-of-the-box, like new shapes, componentry, and using the MotionScheme variables wherever a motion spec is needed.

    MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

    Androidify app UI showing camera button

    Camera button with a MaterialShapes.Cookie9Sided shape

    Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

      • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

        moving example of expressive button shapes in slow motion

      • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

        animated marquee example

      • Fun color splash animation as a transition between screens.

        moving image of a blue color splash transition between Androidify demo screens

      • Animating gradient buttons for the AI-powered actions.

        animated gradient button for AI powered actions example

    To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose

    Adapting to different devices

    Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app for each form factor by extracting reusable composables and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

    a collage of different adaptive layouts for the Androidify app across small and large screens

    Various adaptive layouts in the app

    For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

      • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.

      • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

      • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).

    Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
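    A custom multipreview annotation along those lines is only a few lines of code; the exact window sizes Androidify previews may differ from this sketch.

    import androidx.compose.ui.tooling.preview.Preview

    // Applying @LargeScreensPreview to a composable renders it at each of these sizes.
    @Preview(name = "Unfolded foldable", widthDp = 840, heightDp = 900, showBackground = true)
    @Preview(name = "Tablet", widthDp = 1280, heightDp = 800, showBackground = true)
    @Preview(name = "Desktop window", widthDp = 1600, heightDp = 900, showBackground = true)
    annotation class LargeScreensPreview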

    CameraX and Media3 Compose

    To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

    The app uses a custom CameraLayout composable that supports the layout of the typical composables a camera preview screen would include: zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like tabletop mode and the rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.

    CameraLayout in Compose

    CameraLayout composable that takes care of different device configurations, such as table top mode

    The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

    @Composable
    private fun VideoPlayer(modifier: Modifier = Modifier) {
        val context = LocalContext.current
        var player by remember { mutableStateOf<Player?>(null) }
        LifecycleStartEffect(Unit) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
                repeatMode = Player.REPEAT_MODE_ONE
                prepare()
            }
            onStopOrDispose {
                player?.release()
                player = null
            }
        }
        Box(
            modifier
                .background(MaterialTheme.colorScheme.surfaceContainerLowest),
        ) {
            player?.let { currentPlayer ->
                PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
            }
        }
    }
    

    Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

    var videoFullyOnScreen by remember { mutableStateOf(false) }     
    
    LaunchedEffect(videoFullyOnScreen) {
         if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
    } 
    
    // We add this onto the player composable to determine if the video composable is visible, and mutate the videoFullyOnScreen variable, that then toggles the player state. 
    Modifier.onVisibilityChanged(
                    containerWidth = LocalView.current.width,
                    containerHeight = LocalView.current.height,
    ) { fullyVisible -> videoFullyOnScreen = fullyVisible }
    
    // A simple version of visibility changed detection
    fun Modifier.onVisibilityChanged(
        containerWidth: Int,
        containerHeight: Int,
        onChanged: (visible: Boolean) -> Unit,
    ) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
        onChanged(
            layoutBounds.boundsInRoot.top > 0 &&
                layoutBounds.boundsInRoot.bottom < containerHeight &&
                layoutBounds.boundsInRoot.left > 0 &&
                layoutBounds.boundsInRoot.right < containerWidth,
        )
    }
    

    Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

    val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)

    OutlinedIconButton(
        onClick = playPauseButtonState::onClick,
        enabled = playPauseButtonState.isEnabled,
    ) {
        val icon =
            if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
        val contentDescription =
            if (playPauseButtonState.showPlay) R.string.play else R.string.pause
        Icon(
            painterResource(icon),
            stringResource(contentDescription),
        )
    }
    

    Check out the code for more details on how CameraX and Media3 were used in Androidify.

    Navigation 3

    Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

    @Composable
    fun MainNavigation() {
       val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
       NavDisplay(
           backStack = backStack,
           onBack = { backStack.removeLastOrNull() },
           entryProvider = entryProvider {
               entry<Home> { entry ->
                   HomeScreen(
                       onAboutClicked = {
                           backStack.add(About)
                       },
                   )
               }
               entry<Camera> {
                   CameraPreviewScreen(
                       onImageCaptured = { uri ->
                           backStack.add(Create(uri.toString()))
                       },
                   )
               }
               // etc
           },
       )
    }
    

    Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:

    moving example of a predictive back shared element transition

    Learn more about Jetpack Navigation 3, currently in alpha.

    Learn more

    By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
