Designing with personality: Introducing Material 3 Expressive for Wear OS

Posted by Chiara Chiappini – Android Developer Relations Engineer, and Kevin Hufnagle – Android Technical Writer

This post is part of Wear OS Spotlight Week. Today, we’re focusing on creating modern, premium designs using the Material 3 Expressive design system.

When crafting the user interface for your Wear OS app or tile, consider how your experience expresses your brand while respecting the performance guidelines for watches, particularly battery use. With the new Material 3 Expressive design system, you can build performant UIs that truly shine on a wearable device.

A gallery of Wear OS screens that demonstrate Material 3 Expressive, including a curved edge button, a wavy progress circle, and different shapes for “cancel” and “confirm” buttons.

A gallery of Material 3 Expressive experiences on Wear OS

This blog post walks you through the key principles of this new design system and how you can implement them to create more engaging and intuitive user experiences.

What’s new in Material 3 Expressive?

As mentioned in our announcement at I/O earlier this year and our unveiling of Google Pixel Watch 4 last week, Material 3 Expressive introduces several fundamental improvements over previous Wear OS design guidance that aim to give your apps and tiles more personality and help users feel confident that they’re successfully taking quick actions on a round screen.

The key design principles include the following:

    • Embrace the round form factor: Use the full screen with components like the edge-hugging button to complement the watch’s form factor. This makes the UI feel well-suited for a user’s wrist.
    • Apply the proper screen layout on each surface: Take advantage of the new layouts and components, such as the 3-slot tile PrimaryLayout and the TransformingLazyColumn, to create more consistent, glanceable, and fluid user experiences for tiles and apps.
    • Elevate your experience: The dynamic color system provides a richer palette for more vibrant themes in apps. Variable fonts allow for dynamic, customizable typography.
    • Show off expressive animations: Light up your Wear OS experience with meaningful movement, such as spring animations and shape morphing.

      Embrace the round form factor

      Material 3 Expressive for Wear OS differentiates itself from systems designed for rectangular screens, offering a framework of components that are designed specifically for round screens, using the entire circular canvas to its full potential.

      A button that appears near the bottom of the screen has a flat top but a curved bottom, forming a half-moon shape that better fits the circular screen.

      The edge-hugging button’s animated entrance and shape emphasize the round form factor

      One of the most noticeable examples of this is the edge-hugging button. It features a curved bottom edge that perfectly complements the round display. It’s a small but significant detail that helps make Material 3 Expressive feel right at home on your users’ wrists.
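
      As a rough illustration, here is a minimal sketch of an edge-hugging button built with the Wear Compose Material 3 library (the "Confirm" label and onConfirm callback are placeholders, and parameter defaults may vary between library versions):

import androidx.compose.runtime.Composable
import androidx.wear.compose.material3.EdgeButton
import androidx.wear.compose.material3.Text

@Composable
fun ConfirmEdgeButton(onConfirm: () -> Unit) {
    // EdgeButton curves its bottom edge to match the round display,
    // so it is usually placed at the very bottom of a screen or list.
    EdgeButton(onClick = onConfirm) {
        Text("Confirm")
    }
}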

      Apply the proper screen layout on each surface

      Apps

      For apps that let users scroll through content, Material 3 Expressive introduces the TransformingLazyColumn component. It provides built-in support for expressive and fluid scrolling animations that follow the side edges of the display. We’ve also added a new ScrollIndicator that provides a clear visual cue of the user’s position within a list. (This appears automatically when you use ScreenScaffold.) This, combined with the fluid animations of the TransformingLazyColumn, creates a more intuitive and engaging scrolling experience.

      When the user scrolls through the list, the items near the top and bottom shrink in width.

      When using a TransformingLazyColumn, elements appear to get smaller as they get close to the top and bottom edge of the screen
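
      As a sketch, a scrolling screen that combines these pieces might look like the following, assuming the Wear Compose Material 3 and Foundation libraries (the contact list is a placeholder, and scaffold parameters can differ slightly between versions):

import androidx.compose.runtime.Composable
import androidx.wear.compose.foundation.lazy.TransformingLazyColumn
import androidx.wear.compose.foundation.lazy.rememberTransformingLazyColumnState
import androidx.wear.compose.material3.ScreenScaffold
import androidx.wear.compose.material3.Text

@Composable
fun ContactsScreen(contacts: List<String>) {
    val listState = rememberTransformingLazyColumnState()
    // Passing the list state to ScreenScaffold lets it show the
    // ScrollIndicator automatically as the user scrolls.
    ScreenScaffold(scrollState = listState) {
        TransformingLazyColumn(state = listState) {
            items(contacts.size) { index ->
                Text(contacts[index])
            }
        }
    }
}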

      For apps that don’t require scrolling, such as media players or confirmation dialogs, Material 3 Expressive provides templates that are optimized for glanceability and focus. These layouts rely on breakpoints and pagination to present a single task or set of controls to the user, minimizing distractions.

      Tiles

      The Material 3 Expressive design system also lets designers and developers create tiles that are both functional and visually engaging:

      The middle part of the tile shows information about the current number of glasses of water having been consumed today, and the bottom part includes a button that lets users add another glass.

      Tiles offer at-a-glance information and support quick actions to indicate progress on a task, such as drinking more water

      Tiles can show a static message about a recent update, invite users to get started, and show progress of an ongoing activity related to fitness, media, and more.

      The new 3-slot tile layout is designed to work for each of these use cases, as well as across a range of screen sizes, to provide a clear and consistent structure for your tile’s content.

      Elevate your experience

      Give your app or tile a signature look using extended color palettes and custom typography.

      Color

      The updated color system in Material 3 Expressive supports more colors—such as tertiary colors—to let you better reflect your brand’s personality and create a more immersive user experience. Use this color system to create themes that perfectly capture the mood of your brand, whether that’s a calming meditation app, the high-energy vibe of a fitness tracker, or something in between.

      With Material 3 Expressive, apps and tiles can either follow the dynamic system color or stick to your brand colors. We especially recommend following the dynamic system colors in your tiles, for higher cohesion with other tiles. You can embrace dynamic colors in your app as well, for instance by exposing a setting to the user.

      Based on the main colors in the user’s chosen watch face, the design system extracts the 2 most common hues and dynamically chooses several more complementary colors. These colors are applied to the tiles that appear on the user’s watch.

      Dynamic color theme derived from the user-selected watch face (left), applied to a tile (right)
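
      In code, one way to adopt this is sketched below. It assumes the dynamicColorScheme helper from Wear Compose Material 3, which returns null on devices that don't supply watch-face-derived colors; myBrandColorScheme is a placeholder fallback:

import android.content.Context
import androidx.compose.runtime.Composable
import androidx.compose.ui.platform.LocalContext
import androidx.wear.compose.material3.MaterialTheme
import androidx.wear.compose.material3.dynamicColorScheme

@Composable
fun MyAppTheme(content: @Composable () -> Unit) {
    val context: Context = LocalContext.current
    // Prefer the watch-face-derived scheme when the platform provides one;
    // otherwise fall back to a static brand color scheme (placeholder).
    val colorScheme = dynamicColorScheme(context) ?: myBrandColorScheme
    MaterialTheme(colorScheme = colorScheme, content = content)
}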

      Typography

      Typography is another key element of expressive design. Material 3 Expressive moves beyond static font weights and styles and embraces the versatility of variable fonts.

      A single font contains adjustable axes, including weight and width. With Material 3 Expressive, you can tap into these customized looks to create dynamic and delightful typographic experiences.

      The text “book club” is thicker than normal, using a larger font weight.

      A font that uses an adjusted weight. If desired, you can also use a different width to s t r e t c h the text.
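
      In Compose, one way to tap into a variable font's axes is through FontVariation settings when declaring the font. A minimal sketch, where R.font.my_variable_font is a placeholder resource and the axis values are arbitrary:

import androidx.compose.ui.text.font.Font
import androidx.compose.ui.text.font.FontFamily
import androidx.compose.ui.text.font.FontVariation
import androidx.compose.ui.text.font.FontWeight

// Load a single variable font at a heavier weight and a wider width.
val headlineFontFamily = FontFamily(
    Font(
        resId = R.font.my_variable_font, // placeholder font resource
        weight = FontWeight.Bold,
        variationSettings = FontVariation.Settings(
            FontVariation.weight(700),
            FontVariation.width(125f)
        )
    )
)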

      Show off expressive animations

      A foundational pillar of Material 3 Expressive’s animation capabilities is the concept of fluid motion, made possible primarily through shape morphing.

      In the 3x3 grid of buttons 1 through 9, when the 9 button is pressed, its left edge moves to the left, and the 8 button shrinks its width to accommodate.

      When the “9” button is pressed, the “8” button moves out of the way to accommodate the expanded size of the “9” button.

      Components no longer have to be rigid – they can now dynamically change their shape in response to user input! Buttons, in particular, can transform shape and size to achieve eye-catching springy animation effects and provide more visual contrast between states such as “play” and “pause.” This not only makes the UI more visually interesting but also helps in guiding the user’s attention and providing clear feedback.
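
      Wear Compose Material 3 buttons provide shape morphing out of the box, but the underlying idea can be sketched in a few lines of plain Compose: animate the corner radius between two states with a spring. The play/pause toggle below is a hypothetical example:

import androidx.compose.animation.core.animateDpAsState
import androidx.compose.animation.core.spring
import androidx.compose.foundation.background
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.size
import androidx.compose.foundation.shape.RoundedCornerShape
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.clip
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp

@Composable
fun PlayPauseMorphingButton() {
    var playing by remember { mutableStateOf(false) }
    // Morph between a circle ("paused") and a rounded square ("playing")
    // using a springy, slightly bouncy animation.
    val cornerRadius by animateDpAsState(
        targetValue = if (playing) 12.dp else 30.dp,
        animationSpec = spring(dampingRatio = 0.5f)
    )
    Box(
        Modifier
            .size(60.dp)
            .clip(RoundedCornerShape(cornerRadius))
            .background(Color(0xFF9C27B0))
            .clickable { playing = !playing }
    )
}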

      An experience that’s ready for prime time!

      By adopting the Material 3 Expressive design system, you can create Wear OS apps and tiles that feel more dynamic, personal, and intuitive. Applying principles like rounded components, purpose-built screen layouts, richer color palettes, and spring animations helps you build experiences that feel perfectly designed for a user’s wrist.

      To get you inspired, we’ve included some examples from some of Google’s apps below:

      On the left, the accept call button is a bottom edge-hugging button; on the right-hand side of each item in the list, there’s a toggle button to turn a given alarm on and off.

      Edge-hugging button for an incoming call using the Phone app (left); toggle buttons in the Alarms app (right)

      On the left, the tile includes selectable icons in the middle, such as navigating home, and a bottom edge-hugging button that lets you search for a particular destination; on the right, a wavy progress bar moves around the play/pause button in the middle of the tile.

      At-a-glance actions within the tile for the Google Maps app (left); progress of ongoing audio playback in the Media Controls (right)

      Get started with Material 3 Expressive for Wear OS

      To learn more, explore the following resources:

      We can’t wait to see the designs that you create and share with the Wear OS community!

Welcome to Wear OS Spotlight Week

Posted by Chiara Chiappini – Android Developer Relations Engineer, and Kevin Hufnagle – Android Technical Writer

Wear OS is rapidly expanding its presence in the market, presenting a unique and significant opportunity for developers. With a growing number of users wearing and interacting with their smartwatches daily, building for Wear OS allows you to reach an even broader audience of Android users and boost your app’s engagement more than ever before. The introduction of new hardware like the Pixel Watch 4 is a key driver of this momentum, enabling developers to bring premium Wear OS experiences to this expanding user base.

This week, we’re putting a special focus on Wear OS: Welcome to Wear OS Spotlight Week!

Throughout the week, we’ll dive into the different Wear OS surfaces where you can develop on-the-watch experiences. This blog post will be updated throughout the week with links to new announcements and resources, so check back here daily for updates.

Day 1: Material 3 Expressive on Wear OS

Monday, August 25, 2025

Learn how you can build beautiful and tailored Wear OS apps and tiles using the Material 3 Expressive Design language and Jetpack libraries for Wear OS.

To further explore the key principles and main features of Wear OS’s new design system, consult our updated design guidance on the Android developer documentation website.

Day 2: Build apps, tiles, and complications on Wear OS

Tuesday, August 26, 2025

Discover how to build engaging experiences across Wear OS’s surfaces, including apps, tiles, complications, and notifications. Learn how to create quick, glanceable content using familiar tools like Jetpack Compose and the ProtoLayout library, all while leveraging the beautiful new Material 3 Expressive design system.

Next, understand how to build beautiful and effective Wear OS tiles using the Material 3 Expressive design system and a new collection of resources.

Dive deep into building complication data sources, which help you show useful information to the user directly on the watch face and can drive engagement with your app.

Lastly, find out how Todoist has applied the latest updates on Wear OS including new integrations with Material 3 Expressive and Credential Manager. Improvements like these have helped Todoist become the world’s top task and time management app.

Day 3: Watch faces

Wednesday, August 27, 2025

Explore the wonderful world of watch faces! The Watch Face Push API for Wear OS 6 is here to unlock a world of dynamic watch faces. This powerful tool lets you create your own marketplace experience for the watch faces you create. Dive in and explore how you can promote your engaging watch faces today! 🎨.

Next, discover how Amoledwatchfaces, a leading creator, successfully migrated to the Watch Face Format for their 190+ watch faces. This switch led to faster development, improved battery life, and more customizable designs.

Lastly, learn about Watch Face Designer, a new Figma plugin that’s available for watch face designers and developers to create watch faces with greater ease.

Day 4: Credential Manager on Wear OS

Thursday, August 28, 2025

Discover how to streamline authentication on your Wear OS app by using Credential Manager.

We’ve got some great new resources to help you learn how everything works, and to help you get started crafting your own implementation:

Day 5: #AskAndroid

Friday, August 29, 2025

Join us for a live Q&A on Wear OS! Ask your questions at Android Developers on X and Android by Google on LinkedIn using the tag #AskAndroid.

Explore and create your own experiences with Wear OS

Explore the latest updates for Wear OS, and delve into the wealth of resources shared during the week. We’re excited to see the results of your explorations with building apps, tiles and watch faces for Wear OS.

Happy coding!

Ever-present and useful: Building complication data sources for Wear OS

Posted by Garan Jenkin – Developer Relations Engineer

This post is part of Wear OS Spotlight Week. Today, we’re focusing on creating engaging experiences across the various surfaces available on the wrist.

Put your app’s unique information directly on a user’s watch face by building your own complications. These are the small, glanceable details on a watch face, like step count, date, or weather, that are used to convey additional information, beyond simply telling the time.

Watches such as the recently-launched Pixel Watch 4 feature watch faces with as many as 8 complications. These small, powerful display elements are a great way to provide quick, valuable information and keep users connected to your app.

Let’s look at how you can build your own complication data sources, surfacing useful information to the user directly on their watch face, and helping drive engagement with your app.

A round, analog Wear OS watch face showing 8 complications

A watch face showing 8 complications – 4 arc-style complications around the edge of the watch face, and 4 circular complications within the center of the watch face

Key principles of complications

In order to help understand complications, let’s first review some of the key architectural aspects of their design:

    • Apps provide only a complication data source – the watch face takes care of all layout and rendering.
    • Complication data is typed – both complication data sources and watch faces specify which types are supported respectively.
    • Watch faces define slots – these are spaces on the watch face that can host complications.

A flow chart illustrating the flow of requests and ComplicationData between the Wear OS system, watch face, and complication data source

The flow of requests and ComplicationData between the Wear OS system, watch face, and complication data source

What are complications good for?

Complications are great for providing the user with bite-size data during the course of the day. Additionally, complications can provide a great launch point into your full app experience.

Complication data source types (see the full list) include SHORT_TEXT and SMALL_IMAGE. Similarly, watch faces declare which types they can render.

For example, if you’re building an app which includes fitness goals, a good choice for a complication data source might be one that provides the GOAL_PROGRESS or RANGED_VALUE data types, to show progress toward that goal.

Conversely, complications are less appropriate for larger amounts of data, such as the contents of a chat message. They’re also not suitable for very frequent updates, such as real-time fitness metrics generated by your app.

Creating a complication data source

Let’s look at creating a complication data source for that fitness goal mentioned above.

First, we create a service that extends SuspendingComplicationDataSourceService:

class MyDataSourceService : SuspendingComplicationDataSourceService() {
    override suspend fun onComplicationRequest(request: ComplicationRequest): ComplicationData? {
        // Handle both GOAL_PROGRESS and RANGED_VALUE
        return when (request.complicationType) {
            ComplicationType.GOAL_PROGRESS -> goalProgressComplicationData()
            ComplicationType.RANGED_VALUE -> rangedValueComplicationData()
            else -> NoDataComplicationData()
        }
    }

    // Apps should override this so that watch face previews contain
    // complication data
    override fun getPreviewData(type: ComplicationType) = createPreviewData()
}

To create the actual data to return, we create a ComplicationData object, shown here for GOAL_PROGRESS:

fun goalProgressComplicationData(): ComplicationData {
    val goalProgressText = PlainComplicationText
        .Builder("${goalProgressValue.toInt()} km")
        .build()
    return GoalProgressComplicationData.Builder(
        value = goalProgressValue,
        targetValue = goalTarget,
        contentDescription = goalProgressText
    )
    // Set some additional optional data
    .setText(goalProgressText)
    .setTapAction(tapAction)
    .setMonochromaticImage(...)
    .build()
}

Note: The GoalProgressComplicationData has numerous optional fields in addition to the mandatory ones. You should try to populate as many of these as you can.

Finally, add the data source to the manifest:

<service
    android:name=".WorkoutStatusDataSourceService"
    android:exported="true"
    android:directBootAware="true"
    android:label="@string/status_complication_label"
    android:permission="com.google.android.wearable.permission.BIND_COMPLICATION_PROVIDER">
    <intent-filter>
        <action android:name="android.support.wearable.complications.ACTION_COMPLICATION_UPDATE_REQUEST" />
    </intent-filter>

    <!--
      Supported data types. Note that the preference order of the watch face,
      not the complication data source, decides which type will be chosen.
    -->
    <meta-data
        android:name="android.support.wearable.complications.SUPPORTED_TYPES"
        android:value="GOAL_PROGRESS,RANGED_VALUE" />
    <meta-data
        android:name="android.support.wearable.complications.UPDATE_PERIOD_SECONDS"
        android:value="300" />
</service>

Note: The use of the directBootAware attribute on the service lets the complication service run before the user has unlocked the device on boot.

Choosing your update model

Complications support both a push and a pull-style update mechanism. In the example above, UPDATE_PERIOD_SECONDS is set such that the data is refreshed every 5 minutes. Wear OS will check the updated value of the complication data source with that frequency.

This works well for a scenario such as a weather complication, but in other scenarios, it may make more sense for the updates to be driven by the app. To achieve this, you can:

  1. Set UPDATE_PERIOD_SECONDS to 0 to indicate that the app will drive the update process.
  2. Use ComplicationDataSourceUpdateRequester in your app code to signal to the Wear OS system that an update should be requested, for example from a WorkManager job or a WearableListenerService, as sketched below.
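
A minimal sketch of step 2, assuming the service class from the earlier example (MyDataSourceService) and the androidx.wear.watchface complications data source library:

import android.content.ComponentName
import android.content.Context
import androidx.wear.watchface.complications.datasource.ComplicationDataSourceUpdateRequester

fun requestComplicationRefresh(context: Context) {
    val requester = ComplicationDataSourceUpdateRequester.create(
        context,
        ComponentName(context, MyDataSourceService::class.java)
    )
    // Ask Wear OS to call onComplicationRequest() again for every active
    // instance of this data source, e.g. after a WorkManager job syncs new data.
    requester.requestUpdateAll()
}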

Leveraging platform bindings for high-frequency data

Particularly for health-related complications, we can take advantage of platform data sources, to improve our goal progress complication. We can use these data sources with dynamic expressions to create complication content which is dynamically re-evaluated every second while the watch face is in interactive mode (that is, when it’s not in system ambient / always-on mode).

Let’s update the complication so that instead of just showing the distance, it shows a celebratory message when the target is reached. First we create a dynamic string as follows:

val distanceKm = PlatformHealthSources.dailyDistanceMeters().div(1000f)
val formatter = DynamicBuilders.DynamicFloat.FloatFormatter.Builder()
    .setMaxFractionDigits(2)
    .setMinFractionDigits(0)
    .build()
val goalProgressText = DynamicBuilders.DynamicString
    .onCondition(distanceKm.lt(distanceKmTarget))
    .use(
        distanceKm
            .format(formatter)
            .concat(DynamicBuilders.DynamicString.constant(" km"))
    )
    .elseUse(
        DynamicBuilders.DynamicString.constant("Success!")
    )

Then we include this text, and the dynamic value distanceKm, with the dynamic version of the complication builder.

In this way, the distance is updated every second, with no need for further requests to the data source. This means UPDATE_PERIOD_SECONDS can be set to a large value, saving battery, and the celebratory text is shown the moment the user passes their target!

Configuring complications

For some data sources, it is useful to let the user configure what data should be shown. In the fitness goal example, consider that the user might have weekly, monthly, and yearly goals.

Adding a configuration activity allows them to select which goal should be shown by the complication. To do this, add the PROVIDER_CONFIG_ACTION metadata to your service definition, and implement an activity with a filter for this intent, for example:

<service android:name=".MyGoalDataSourceService" ...>
  <!-- ... -->

  <meta-data
      android:name="android.support.wearable.complications.PROVIDER_CONFIG_ACTION"
      android:value="com.myapp.MY_GOAL_CONFIG" />
</service>

<activity android:name=".MyGoalConfigurationActivity" ...>
  <intent-filter>
    <action android:name="com.myapp.MY_GOAL_CONFIG" />
    <category android:name="android.support.wearable.complications.category.PROVIDER_CONFIG" />
    <category android:name="android.intent.category.DEFAULT" />
  </intent-filter>
</activity>

In the activity itself, the details of the complication being configured can be extracted from the intent:

// Keys defined on ComplicationDataSourceService
// (-1 assigned when the ID or type was not available)
val id = intent.getIntExtra(EXTRA_CONFIG_COMPLICATION_ID, -1)
val type = intent.getIntExtra(EXTRA_CONFIG_COMPLICATION_TYPE, -1)
val source = intent.getStringExtra(EXTRA_CONFIG_DATA_SOURCE_COMPONENT)

To indicate a successful configuration, the activity should set the result when exiting:

setResult(Activity.RESULT_OK) // Or RESULT_CANCELED to cancel configuration
finish()

The ID is the same ID passed in ComplicationRequest to the complication data source service. The Activity should write any configuration to a data store, using the ID as a key, and the service can retrieve the appropriate configuration to determine what data to return in response to each onComplicationRequest().
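
For example, a minimal sketch that persists the choice with SharedPreferences, keyed by the complication instance ID (the preferences file and key names are arbitrary examples):

import android.content.Context

private const val CONFIG_PREFS = "complication_config" // arbitrary example name

// Called from the configuration activity once the user picks a goal.
fun saveGoalChoice(context: Context, complicationId: Int, goal: String) {
    context.getSharedPreferences(CONFIG_PREFS, Context.MODE_PRIVATE)
        .edit()
        .putString("goal_$complicationId", goal)
        .apply()
}

// Called from onComplicationRequest(), using request.complicationInstanceId.
fun loadGoalChoice(context: Context, complicationId: Int): String =
    context.getSharedPreferences(CONFIG_PREFS, Context.MODE_PRIVATE)
        .getString("goal_$complicationId", "weekly") ?: "weekly"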

Working efficiently with time and events

In the example above, UPDATE_PERIOD_SECONDS is set at 5 minutes – this is the smallest value that can be set for the update period. Ideally this value should be set as large as is acceptable for the use case: This reduces requests and improves battery life.

Consider these examples:

    • A known list of events – for example, a calendar. In this case, use SuspendingTimelineComplicationDataSourceService. This allows you to provide the series of events in advance, with no need for the watch face to request updates. The calendar data source would only need to push updates if a change is made, such as another event being scheduled for the day, offering timeliness and efficiency.

      ComplicationDataTimeline requires a defaultComplicationData as well as the list of entries: this is used when none of the timeline entries are valid for the current time. For example, for a calendar it could contain the text “No event” when the user has nothing booked. Where there are overlapping entries, the entry with the shortest interval is chosen.

override suspend fun onComplicationRequest(request: ComplicationRequest): ComplicationDataTimeline? {
    return ComplicationDataTimeline(
        // The default for when there is no event in the calendar
        defaultComplicationData = noEventComplicationData,
        // A list of calendar entries
        timelineEntries = listOf(
            TimelineEntry(
                validity = TimeInterval(event1.start, event1.end),
                complicationData = event1.complicationData
            ),
            TimelineEntry(
                validity = TimeInterval(event2.start, event2.end),
                complicationData = event2.complicationData
            )
        )
    )
}

    • Working with time or timers – If your complication data contains time or a timer, such as a countdown to a particular event, use built-in classes such as TimeDifferenceComplicationText and TimeFormatComplicationText – this keeps the data up-to-date while avoiding regular requests to the data source service. For example, to create a countdown to the New Year:

TimeDifferenceComplicationText.Builder(
    TimeDifferenceStyle.SHORT_SINGLE_UNIT,
    CountDownTimeReference(newYearInstant)
)
.setDisplayAsNow(true)
.build()

    • Data that should be shown at a specific time and/or duration – use setValidTimeRange() to control when complication data should be shown, again avoiding repeated updates. This can be useful when it is not possible to use a timeline but data can become stale, allowing you to control the visibility of this data (see the sketch below).
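
As a sketch of that last point, a short-text complication that should only appear during a fixed window might be built as follows; boardingStart and departure are hypothetical Instants supplied by your app:

import androidx.wear.watchface.complications.data.PlainComplicationText
import androidx.wear.watchface.complications.data.ShortTextComplicationData
import androidx.wear.watchface.complications.data.TimeRange
import java.time.Instant

fun boardingComplicationData(boardingStart: Instant, departure: Instant) =
    ShortTextComplicationData.Builder(
        text = PlainComplicationText.Builder("Boarding").build(),
        contentDescription = PlainComplicationText.Builder("Boarding now").build()
    )
        // Outside this window the system treats the data as invalid, so the
        // watch face can fall back to other content without another request.
        .setValidTimeRange(TimeRange.between(boardingStart, departure))
        .build()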

Working with activation and deactivation

It can be very useful to track whether your complication is currently in use on the active watch face or not. This can help with:

  1. Avoiding unnecessary work – for example, if a weather complication has not been set in the active watch face, then there is no need to enable a WorkManager job to periodically fetch weather updates, saving battery and network usage.
  2. Aiding discovery – if onComplicationActivated has never been called, then the user has never used your complication on a watch face. This can be a useful signal to provide an educational moment in your phone or Wear OS app, drawing attention to this feature and sharing potential benefits with the user that they may not be aware of.

To facilitate these use cases, override the appropriate methods in your complication service:

class MyDataSourceService : SuspendingComplicationDataSourceService() {
    override fun onComplicationActivated(complicationInstanceId: Int, type: ComplicationType) {
        super.onComplicationActivated(complicationInstanceId, type)

        // Keep track of which complication has been enabled, and
        // start any necessary work such as registering periodic
        // WorkManager jobs
    }

    override fun onComplicationDeactivated(complicationInstanceId: Int) {
        super.onComplicationDeactivated(complicationInstanceId)

        // Complication instance has been disabled, so remove all
        // registered work
    }
}

Some additional points to consider when implementing your data sources:

    • Support multiple types to maximize usefulness and compatibility – Watch faces will support some complication data types, but likely not all of them. Adding support for multiple types makes your data source most useful to the user. In the above example, we implemented both RANGED_VALUE and GOAL_PROGRESS, as both can be used to represent progress-type data.

      Similarly, if you were to implement a calendar complication, you could use both SHORT_TEXT and LONG_TEXT to maximize compatibility with the available slots on the watch face.

    • Use different data sources for different user journeys – Your app is not limited to providing one complication data source. You should support more than one if you have different use cases to cater for. For example, your health and fitness app might have a complication to provide your progress towards your goals, but also a separate complication to show sleep stats.
    • Avoid heavy work in onComplicationRequest() – For example, if the progress toward a fitness goal involves intensive processing of a large number of workouts, do this elsewhere. The request to the complication data source should ideally just return the value with minimal computation.
    • Avoid your service having extensive dependencies on other app components – When in use, your data source service will be started when the Wear OS device starts up, and at other times during the day. To maintain good system performance, avoid requiring too many other app components to start just so the service can run.
    • Consider backup and restore – If the complication is configurable, it might make sense to restore these settings – learn how to implement backup and restore for complication data sources.
    • Think about the discovery journey – Your complications will be available as an option on the user’s watch face when your app is installed on the watch. Consider how you can promote and educate the user on this functionality, both in your phone app and your Wear OS app, and leverage methods such as onComplicationActivated() to inform this process.
Resources for creating complications

Complications are a great way to elevate your app experience for users, and to differentiate your app from others.

Check out these resources for more information on creating complication data sources. We look forward to seeing what you can do.

Happy Coding!

Building experiences for Wear OS

Posted by Michael Stillwell – Developer Relations Engineer

This post is part of Wear OS Spotlight Week. Today, we’re focusing on creating engaging experiences across the various surfaces available on the wrist.

Developing for the growing ecosystem of Wear OS is a unique and rewarding challenge that encourages you to think beyond mobile patterns. Wear’s design philosophy focuses on crafting experiences for a device that’s always with the user, where meaningful interactions take seconds, not minutes. A successful wearable app doesn’t attempt to maximize screen time; it instead aims to deliver meaningful glanceable experiences that help people stay present and productive while on the go. This vision is now fully enabled by the next generation of hardware, which we explored last week with the introduction of the new Pixel Watch 4.

Wear OS devices also introduce constraints that push you to innovate. Power efficiency is critical, requiring you to build experiences that are both beautiful and battery-conscious. You’ll also tackle challenges like handling offline use cases and catering for a variety of screen sizes.

Despite these differences, you’ll find yourself on familiar technical foundations. Wear OS is based on Android, which means you can leverage your existing knowledge of the platform, architecture, developer APIs, and tools to create wearable experiences.

Wear OS surfaces

Wear OS offers a range of surfaces to inform and engage users. This allows you to tailor your app’s presence on the watch, providing the right information at the right time and scaling your development investment to best meet your users’ needs.

Watch faces display the time and are the first thing a user sees when they look at their watch. We’ll cover watch faces in more detail in other blog posts across Wear OS Spotlight week.

A round, analog Wear OS watch face

The Watch face is the first thing a user sees when they look at their watch

Apps provide a richer, more immersive UI for complex tasks that are too involved for other surfaces.

A scrollable app experience on a round, digital watch face showing daily goals for drinking water, vegetable consumption, and fiber intake

Apps support complex tasks and can scroll vertically

Notifications provide glanceable, time-sensitive information and actions.

A calendar notification for a dentist appointment on a round watch face

A notification provides glanceable, time-sensitive information

Complications display highly-glanceable, relevant data from your app directly on the user’s chosen watch face. Learn more about building complication data sources for Wear OS.

Complications displayed on a round watch face

Complications display glanceable data from your app directly on the user’s watch face.

Tiles (Widgets for Wear OS) offer fast, predictable access to information and actions with a simple swipe from the watch face.

An example of a tile conveying information for daily step count on a round watch face

Tiles offer fast, predictable information and actions

While a variety of Wear OS surfaces let developers engage with users in different ways, it may be overwhelming to get started. We recommend approaching Wear OS development in phases and scaling up your investment over time:

illustration of the recommended 3-step Wear OS development process

Recommended Wear OS development phases: enhance the wearable experience of your Android app, build Tiles and complications, and then create a complete wearable experience.

    • Improve the wearable experience of your mobile app. You can improve the wearable experience with minimal effort. By default, notifications from your phone app are automatically bridged to the watch. You can start by enhancing these with wearable-specific actions using NotificationCompat.WearableExtender, offering a more tailored experience without building a full Wear OS experience.
    • Build a companion experience. When you’re ready for a dedicated UI, create a tethered app experience that depends on the phone app for its core features and data. This involves creating a tethered app that works in tandem with your phone app, allowing you to design a customized UI for the wrist and take advantage of surfaces like tiles and complications.
    • Graduate to a standalone app. Finally, you can evolve your app into a standalone experience that works independently of a phone, which is ideal for offline scenarios like exercising. This provides the most flexibility but also requires more effort to optimize for constraints like power efficiency.

Notifications

Notifications are a core part of the Wear OS experience, delivering glanceable, time-sensitive information and actions for the user. Because Wear OS is based on Android, it shares the same notification system as mobile devices, letting you leverage your existing knowledge to build rich experiences for the wrist.

From a development perspective, it helps to think of a notification not as a simple alert, but as a declarative UI data structure that is shared between the user’s devices. You define the content and actions, and the system intelligently renders that information to best suit the context and form factor. This declarative approach has become increasingly powerful. On Wear OS, for example, it’s the mechanism behind ongoing activities.

Alert-style notifications

One great thing about notifications is that you don’t even need a Wear OS app for your users to see them on their watch. By default, notifications generated by your phone app are automatically “bridged”, or mirrored, to a connected watch, providing an instant wearable presence for your app with no extra work. These bridged notifications include an action to open the app on the phone.

You can enhance this default behavior by adding wearable-specific functionality to your phone notifications. Using NotificationCompat.WearableExtender, you can add actions that only appear on the watch, offering a more tailored experience without needing to build a full Wear OS app.

// Prerequisites:
//
//   1. You've created the notification channel CHANNEL_ID
//   2. You've obtained the POST_NOTIFICATIONS permission

val channelId = "my_channel_id"
val sender = "Clem"
val subject = "..."

val notification =
    NotificationCompat.Builder(applicationContext, channelId)
        .apply {
            setContentTitle("New mail from $sender")
            setContentText(subject)
            setSmallIcon(R.drawable.new_mail_mobile)
            // Added for Wear OS
            extend(
                NotificationCompat.WearableExtender().apply {
                    setSmallIcon(R.drawable.new_mail_wear)
                }
            )
        }
        .build()

NotificationManagerCompat.from(applicationContext).notify(0, notification)

Prevent duplicate notifications

Once you build a dedicated app for Wear OS, you’ll need to develop a clear notification strategy to avoid a common challenge: duplicate notifications. Since notifications from your phone app are bridged by default, a user with both your phone and watch apps installed could see two alerts for the same event.

Wear OS provides a straightforward way to manage this:

  1. On the mobile app’s notification, assign a string identifier using setBridgeTag().
  2. In your Wear OS app, you can then programmatically prevent notifications with certain tags from being bridged using a BridgingConfig. This gives you fine-grained control, allowing you to bridge some notifications while handling others natively in your Wear OS app.

If your mobile and watch apps generate similar but distinct notifications, you can link them using setDismissalId(). When a user dismisses a notification on one device, any notification with the same dismissal ID on another connected device is also dismissed.
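
A minimal sketch of this setup, assuming the BridgingManager and BridgingConfig classes from the androidx.wear library; the tag, dismissal ID, channel, and icon are placeholder examples:

import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.wear.phone.interactions.notifications.BridgingConfig
import androidx.wear.phone.interactions.notifications.BridgingManager

// Phone app: tag the notification so the watch can identify it, and give it a
// dismissal ID so dismissing it on one device dismisses it on the other.
fun buildPhoneNotification(context: Context, channelId: String) =
    NotificationCompat.Builder(context, channelId)
        .setContentTitle("New message")
        .setSmallIcon(R.drawable.new_mail_mobile) // icon reused from the earlier example
        .extend(
            NotificationCompat.WearableExtender()
                .setBridgeTag("chat_message") // arbitrary example tag
                .setDismissalId("message_1234")
        )
        .build()

// Wear OS app: keep bridging enabled overall, but exclude the tagged notifications
// because the watch app posts its own native version of them.
fun configureBridging(context: Context) {
    BridgingManager.fromContext(context).setConfig(
        BridgingConfig.Builder(context, /* isBridgingEnabled = */ true)
            .addExcludedTag("chat_message")
            .build()
    )
}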

Creating interactive experiences

From a user’s perspective, apps and tiles may feel very similar. Both are full-screen experiences that are visually rich, support animations, and handle user interaction. The main differences are in how they are launched, and their specific capabilities:

    • Apps can be deeply immersive and handle complex, multi-step tasks. They are the obvious choice when handling data that must be synced between the watch app and its associated phone app, and the only choice for long-running tasks like tracking workouts and listening to music.
    • Tiles are designed for fast, predictable access to the information and actions users need most, providing glanceable content with a simple swipe from the watch face. Think of tiles as widgets for Wear OS.

Apps and tiles are built using distinct technologies. Apps can be built with Jetpack Compose, while tiles are defined declaratively using the ProtoLayout library. This distinction allows each surface to be highly optimized for its specific role – apps can provide rich, interactive experiences while tiles remain fast and power-efficient.

Building apps

Apps provide the richest experience on Wear OS. Jetpack Compose for Wear OS is the recommended UI toolkit for building them – it works seamlessly with other Jetpack libraries and accelerates development productivity. Many prominent apps, like Gmail, Calendar and Todoist, are built entirely with Compose for Wear OS.

Compose for Wear OS for beautiful UIs

If you’ve used Jetpack Compose for mobile development, you’ll find that Compose for Wear OS shares the same foundational principles and mental model. However, building for the wrist requires some different techniques, and the toolkit provides a specialized UI component library optimized for watches.

Wear OS has its own dedicated Material Design, foundation, and navigation libraries to use instead of the mobile Jetpack libraries. These libraries provide UI components tailored for round screens and glanceable interactions, and are each supported by Android Studio’s preview system.

    • Lists: On mobile, you might use a LazyColumn to display a vertical collection of items. On Wear OS, the TransformingLazyColumn is the equivalent component. It supports scaling and transparency effects to items at the edge of a round screen, improving legibility. It also has built-in support for scrolling with rotary input.
    • Navigation: Handling screen transitions and the back stack also requires a component that’s specific to Wear OS. Instead of the standard NavHost, you must use SwipeDismissableNavHost (see the sketch after this list). This component works with the system’s swipe-to-dismiss gesture, ensuring users can intuitively navigate back to the previous screen.
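
As a sketch, a two-screen navigation graph using these Wear-specific components might look like this; the route names and screen contents are placeholders:

import androidx.compose.foundation.clickable
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.wear.compose.material3.Text
import androidx.wear.compose.navigation.SwipeDismissableNavHost
import androidx.wear.compose.navigation.composable
import androidx.wear.compose.navigation.rememberSwipeDismissableNavController

@Composable
fun WearAppNavigation() {
    val navController = rememberSwipeDismissableNavController()
    // SwipeDismissableNavHost wires the system swipe-to-dismiss gesture
    // to popping the back stack, replacing the standard NavHost.
    SwipeDismissableNavHost(
        navController = navController,
        startDestination = "home" // placeholder route names
    ) {
        composable("home") {
            // Stand-in home screen: tapping the text opens the detail route.
            Text(
                "Open detail",
                modifier = Modifier.clickable { navController.navigate("detail") }
            )
        }
        composable("detail") {
            Text("Detail")
        }
    }
}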

Learn how to use Jetpack Compose on Wear OS to get started, including sample code.

Implementing core app features

Wear OS also provides APIs designed for power efficiency and the on-wrist use case, as well as Wear OS versions of mobile APIs:

    • Authentication: The Credential Manager API unifies the user sign-in process and supports modern, secure methods like passkeys, passwords, and federated identity services (like Sign-in with Google), providing a seamless and secure experience without relying on a companion phone. See the minimal sketch after this list.
    • Health and Fitness (sensor data): While you can use the standard Android Sensor APIs, it’s not recommended for performance reasons, especially for long-running workouts. Instead, use Health Services on Wear OS. It acts as an intermediary to the various sensors, providing your app with batched, power-efficient updates for everything from heart rate to running metrics, without needing to manage the underlying sensors directly.
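
For the authentication point above, a minimal Credential Manager sketch might look like the following (a password-only flow for brevity; error handling and the sign-in call to your backend are omitted):

import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.PasswordCredential

// Requests a saved password credential; passkeys and federated sign-in can be
// supported by adding further options to the request.
suspend fun signIn(context: Context) {
    val credentialManager = CredentialManager.create(context)
    val request = GetCredentialRequest(listOf(GetPasswordOption()))
    val response = credentialManager.getCredential(context, request)
    val credential = response.credential
    if (credential is PasswordCredential) {
        // Use credential.id and credential.password to sign in to your backend.
    }
}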

Building tiles

Tiles offer quick, predictable access to the information and actions users need most, accessible with a simple swipe from the watch face. By using platform data bindings to display sources like step count or heart rate, you can provide timely and useful information in your tile.

Tiles are built declaratively using the ProtoLayout libraries, which are optimized for performance and power efficiency—critical considerations on a wearable device. Learn more about how to get started with tiles and how to make use of sample tile layouts.

More resources for building experiences for Wear OS

    • Wear OS Documentation Hub: The essential resource for developers looking to create experiences for Wear OS, from design guidelines to code samples.
    • WearTilesKotlin sample app: Demonstrates the fundamentals of building a tile but also includes templates for common layouts, letting you quickly bootstrap your own designs while following best practices.

There has never been a better time to start building for Wear OS. If you have feedback on the APIs, please let us know using the issue trackers for Wear Compose and Tiles. We look forward to seeing what you build!

The latest Gemini Nano with on-device ML Kit GenAI APIs

Posted by Caren Chang – Developer Relations Engineer, Joanna (Qiong) Huang – Software Engineer, and Chengji Yan – Software Engineer

The latest version of Gemini Nano, our most powerful multi-modal on-device model, just launched on the Pixel 10 device series and is now accessible through the ML Kit GenAI APIs. Integrate capabilities such as summarization, proofreading, rewriting, and image description directly into your apps.

With GenAI APIs we’re focused on giving you access to the latest version of Gemini Nano while providing consistent quality across devices and model upgrades. Here’s a sneak peek behind the scenes at some of the things we’ve done to achieve this.

Adapting GenAI APIs for the latest Gemini Nano

We want to make it as easy as possible for you to build AI powered features, using the most powerful models. To ensure GenAI APIs provide consistent quality across different model versions, we make many behind the scenes improvements including rigorous evals and adapter training.

  1. Evaluation pipeline: For each supported language, we prepare an evaluation dataset. We then benchmark the evals through a combination of LLM-based raters, statistical metrics, and human raters.
  2. Adapter training: With results from the evaluation pipeline, we then determine if we need to train feature-specific LoRA adapters to be deployed on top of the Gemini Nano base model. By shipping GenAI APIs with LoRA adapters, we ensure each API meets our quality bar regardless of the version of Gemini Nano running on a device.

The latest Gemini Nano performance

One area we’re excited about is how this updated version of Gemini Nano pushes performance even higher, especially the prefix speed – that is how fast the model processes input.

For example, here are results when running text-to-text and image-to-text benchmarks on a Pixel 10 Pro.

Prefix speed by model version and device:

    • Text-to-text: 510 tokens/second (Gemini nano-v2, Pixel 9 Pro); 610 tokens/second (Gemini nano-v2*, Pixel 10 Pro); 940 tokens/second (Gemini nano-v3, Pixel 10 Pro)
    • Image-to-text: the same prefix speeds as text-to-text, plus image encoding time of 0.8 seconds (nano-v2, Pixel 9 Pro), 0.7 seconds (nano-v2*, Pixel 10 Pro), and 0.6 seconds (nano-v3, Pixel 10 Pro)

*Experimentation with Gemini nano-v2 on Pixel 10 Pro for benchmarking purposes. All Pixel 10 Pros launched with Gemini nano-v3.

The future of Gemini Nano with GenAI APIs

As we continue to improve the Gemini Nano model, the team is committed to using the same process to ensure consistent and high quality results from GenAI APIs.

We hope this will significantly reduce the effort to integrate Gemini Nano in your Android apps while still allowing you to take full advantage of new versions and their improved capabilities.

Learn more about GenAI APIs

Start implementing GenAI APIs in your Android apps today with guidance from our official documentation and samples: GenAI API Catalog and ML Kit GenAI APIs quickstart samples.

What’s new in the Jetpack Compose August ’25 release

Posted by Meghan Mehta – Developer Relations Engineer and Nick Butcher – Product Manager

Today, the Jetpack Compose August ‘25 release is stable. This release contains version 1.9 of core compose modules (see the full BOM mapping), introducing new APIs for rendering shadows, 2D scrolling, rich styling of text transformations, improved list performance, and more!

To use today’s release, upgrade your Compose BOM version to 2025.08.00:

implementation(platform("androidx.compose:compose-bom:2025.08.00"))

Shadows

We’re happy to introduce two highly requested modifiers: Modifier.dropShadow() and Modifier.innerShadow(), which let you render box-shadow effects (in contrast to the existing Modifier.shadow(), which renders elevation-based shadows using a lighting model).

Modifier.dropShadow()

The dropShadow() modifier draws a shadow behind your content. You can add it to your composable chain and specify the radius, color, and spread. Remember, content that should appear on top of the shadow (like a background) should be drawn after the dropShadow() modifier.

@Composable
@Preview(showBackground = true)
fun SimpleDropShadowUsage() {
    val pinkColor = Color(0xFFe91e63)
    val purpleColor = Color(0xFF9c27b0)
    Box(Modifier.fillMaxSize()) {
        Box(
            Modifier
                .size(200.dp)
                .align(Alignment.Center)
                .dropShadow(
                    RoundedCornerShape(20.dp),
                    dropShadow = DropShadow(
                        15.dp,
                        color = pinkColor,
                        spread = 10.dp,
                        alpha = 0.5f
                    )
                )
                .background(
                    purpleColor,
                    shape = RoundedCornerShape(20.dp)
                )
        )
    }
}

Figure 1. Drop shadow drawn all around shape

Modifier.innerShadow()

Modifier.innerShadow() draws a shadow inset within the provided shape:

@Composable
@Preview(showBackground = true)
fun SimpleInnerShadowUsage() {
    val pinkColor = Color(0xFFe91e63)
    val purpleColor = Color(0xFF9c27b0)
    Box(Modifier.fillMaxSize()) {
        Box(
            Modifier
                .size(200.dp)
                .align(Alignment.Center)
                .background(
                    purpleColor,
                    shape = RoundedCornerShape(20.dp)
                )
                .innerShadow(
                    RoundedCornerShape(20.dp),
                    innerShadow = InnerShadow(
                        15.dp,
                        color = Color.Black,
                        spread = 10.dp,
                        alpha = 0.5f
                    )
                )
        )
    }
}

Figure 2. Modifier.innerShadow() applied to a shape

The order for inner shadows is very important. The inner shadow draws on top of the content, so for the example above, we needed to move the inner shadow modifier after the background modifier. We’d need to do something similar when using it on top of something like an Image. In this example, we’ve placed a separate Box to render the shadow in the layer above the image:

@Composable
@Preview(showBackground = true)
fun PhotoInnerShadowExample() {
    Box(Modifier.fillMaxSize()) {
        val shape = RoundedCornerShape(20.dp)
        Box(
            Modifier
                .size(200.dp)
                .align(Alignment.Center)
        ) {
            Image(
                painter = painterResource(id = R.drawable.cape_town),
                contentDescription = "Image with Inner Shadow",
                contentScale = ContentScale.Crop,
                modifier = Modifier.fillMaxSize()
                    .clip(shape)
            )
            Box(
                modifier = Modifier.fillMaxSize()
                    .innerShadow(
                        shape,
                        innerShadow = InnerShadow(15.dp,
                            spread = 15.dp)
                    )
            )
        }
    }
}

Figure 3. Inner shadow on top of an image

New Visibility modifiers

Compose UI 1.8 introduced onLayoutRectChanged, a new performant way to track the location of elements on screen. We’re building on top of this API to support common use cases by introducing onVisibilityChanged and onFirstVisible. These APIs accept optional parameters for the minimum visible fraction or the minimum amount of time an item must be visible before your action is invoked.

Use onVisibilityChanged for UI changes or side effects that should happen based on visibility, like automatically playing and pausing videos or starting an animation:

LazyColumn {
  items(feedData) { video ->
    VideoRow(
        video,
        Modifier.onVisibilityChanged(minDurationMs = 500, minFractionVisible = 1f) {
          visible ->
            if (visible) video.play() else video.pause()
          },
    )
  }
}

Use onFirstVisible for use cases where you wish to react to an element first becoming visible on screen, for example to log impressions:

LazyColumn {
    items(100) {
        Box(
            Modifier
                // Log impressions when item has been visible for 500ms
                .onFirstVisible(minDurationMs = 500) { /* log impression */ }
                .clip(RoundedCornerShape(16.dp))
                .drawBehind { drawRect(backgroundColor) }
                .fillMaxWidth()
                .height(100.dp)
        )
    }
}

Rich styling in OutputTransformation

BasicTextField now supports applying styles like color and font weight from within an OutputTransformation.

The new TextFieldBuffer.addStyle() methods let you apply a SpanStyle or ParagraphStyle to change the appearance of text, without changing the underlying TextFieldState. This is useful for visually formatting input, like phone numbers or credit cards. This method can only be called inside an OutputTransformation.

// Format a phone number and color the punctuation
val phoneTransformation = OutputTransformation {
    // 1234567890 -> (123) 456-7890
    if (length == 10) {
        insert(0, "(")
        insert(4, ") ")
        insert(9, "-")

        // Color the added punctuation
        val gray = Color(0xFF666666)
        addStyle(SpanStyle(color = gray), 0, 1)
        addStyle(SpanStyle(color = gray), 4, 5)
        addStyle(SpanStyle(color = gray), 9, 10)
    }
}

BasicTextField(
    state = myTextFieldState,
    outputTransformation = phoneTransformation
)

LazyLayout

The building blocks of LazyLayout are all now stable! Check out LazyLayoutMeasurePolicy, LazyLayoutItemProvider, and LazyLayoutPrefetchState to build your own Lazy components.

Prefetch Improvements

There are now significant scroll performance improvements in Lazy List and Lazy Grid with the introduction of new prefetch behavior. You can now define a LazyLayoutCacheWindow to prefetch more content. By default, only one item is composed ahead of time in the direction of scrolling, and after something scrolls off screen it is discarded. You can now customize how many items to prefetch ahead and retain behind, expressed as a fraction of the viewport or as a dp size. When you opt into using LazyLayoutCacheWindow, items begin prefetching in the ahead area straight away.

The configuration entry point for this is on LazyListState, which takes in the cache window size:

@OptIn(ExperimentalFoundationApi::class)
@Composable
private fun LazyColumnCacheWindowDemo() {
    // Prefetch items 150.dp ahead and retain items 100.dp behind the visible viewport
    val dpCacheWindow = LazyLayoutCacheWindow(ahead = 150.dp, behind = 100.dp)
    // Alternatively, prefetch/retain items as a fraction of the list size
    // val fractionCacheWindow = LazyLayoutCacheWindow(aheadFraction = 1f, behindFraction = 0.5f)
    val state = rememberLazyListState(cacheWindow = dpCacheWindow)
    LazyColumn(state = state) {
        items(1000) { Text(text = "$it", fontSize = 80.sp) }
    }
}

lazylayout in Compose 1.9 release

Note: Prefetch composes more items than are currently visible — the new cache window API will likely increase prefetching. This means that an item’s LaunchedEffects and DisposableEffects may run earlier; do not use them as a signal of visibility, e.g. for impression tracking. Instead, we recommend using the new onFirstVisible and onVisibilityChanged APIs. Even if you’re not manually customizing LazyLayoutCacheWindow now, avoid using composition effects as a signal of content visibility, as this new prefetch mechanism will be enabled by default in a future release.

Scroll

2D Scroll APIs

Following the release of Draggable2D, Scrollable2D is now available, bringing two-dimensional scrolling to Compose. While the existing Scrollable modifier handles single-orientation scrolling, Scrollable2D enables both scrolling and flinging in 2D. This lets you create more complex layouts that move in all directions, such as spreadsheets or image viewers. Nested scrolling is also supported in these 2D scenarios.

val offset = remember { mutableStateOf(Offset.Zero) }
Box(
    Modifier.size(150.dp)
        .scrollable2D(
            state =
                rememberScrollable2DState { delta ->
                    offset.value = offset.value + delta // update the state
                    delta // indicate that we consumed all the pixels available
                }
        )
        .background(Color.LightGray),
    contentAlignment = Alignment.Center,
) {
    Text(
        "X=${offset.value.x.roundToInt()} Y=${offset.value.y.roundToInt()}",
        style = TextStyle(fontSize = 32.sp),
    )
}

moving image of 2D scroll API demo

Scroll Interop Improvements

There are bug fixes and new features to improve scroll and nested scroll interop with Views, including the following:

    • Fixed the dispatching of incorrect velocities during fling animations between Compose and Views.
    • Compose now correctly invokes the View’s nested scroll callbacks in the appropriate order.

Improve crash analysis by adding source info to stack traces

We have heard from you that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. To address this, we're providing a new, opt-in API that adds richer crash location details, including composable names and locations, enabling you to:

    • Efficiently identify and resolve crash sources.
    • More easily isolate crashes for reproducible samples.
    • Investigate crashes that previously only showed internal stack frames.

Note that we do not recommend using this API in release builds due to the performance impact of collecting this extra information, nor does it work in minified APKs.

To enable this feature, add the line below to the application entry point. Ideally, this configuration should be performed before any compositions are created to ensure that the stack trace information is collected:

class App : Application() {
   override fun onCreate() {
        // Enable only for debug flavor to avoid perf regressions in release
        Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
   }
}

New annotations and Lint checks

We are introducing a new runtime-annotation library that exposes annotations used by the compiler and tooling (such as lint checks). This allows non-Compose modules to use these annotations without a dependency on the Compose runtime library. The @Stable, @Immutable, and @StableMarker annotations have moved to runtime-annotation, allowing you to annotate classes and functions that do not depend on Compose.

Additionally, we have added two new annotations and corresponding lint checks:

    • @RememberInComposition: Marks constructors, functions, and property getters to indicate that they must not be called directly inside composition without being remembered. A corresponding lint check raises errors (see the sketch after this list).
    • @FrequentlyChangingValue: Marks functions and property getters to indicate that they should not be called directly inside composition, as this may cause frequent recompositions (for example, scroll position values and animating values). A corresponding lint check provides warnings.
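
As a rough illustration of @RememberInComposition, here is how a library author might annotate a factory function; CounterController and createCounterController are hypothetical names, and the annotation's package is assumed to be the new runtime-annotation one.

import androidx.compose.runtime.annotation.RememberInComposition

// Hypothetical stateful controller, used only for illustration.
class CounterController(initial: Int) {
    var count: Int = initial
}

// Call sites that construct this inside composition without wrapping it in
// remember { ... } are flagged as errors by the corresponding lint check.
@RememberInComposition
fun createCounterController(initial: Int = 0): CounterController = CounterController(initial)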

Additional updates

Get started

We appreciate all bug reports and feature requests submitted to our issue tracker. Your feedback allows us to build the APIs you need in your apps. Happy composing!

The post What’s new in the Jetpack Compose August ’25 release appeared first on InShot Pro.

]]>
Transition to using 16 KB page sizes for Android apps and games using Android Studio https://theinshotproapk.com/transition-to-using-16-kb-page-sizes-for-android-apps-and-games-using-android-studio/ Thu, 10 Jul 2025 21:00:00 +0000 https://theinshotproapk.com/transition-to-using-16-kb-page-sizes-for-android-apps-and-games-using-android-studio/ Posted by Mayank Jain – Product Manager and Jomo Fisher – Software Engineer Get ready to upgrade your app’s performance ...

Read more

The post Transition to using 16 KB page sizes for Android apps and games using Android Studio appeared first on InShot Pro.

]]>

Posted by Mayank Jain – Product Manager and Jomo Fisher – Software Engineer

Get ready to upgrade your app’s performance as Android embraces 16 KB memory page sizes

Android’s transition to 16 KB Page size

Traditionally, Android has operated with a 4 KB memory page size. However, many ARM CPUs (the most common processors for Android phones) support a larger 16 KB page size, which offers improved performance. With Android 15, the Android operating system is page-size-agnostic, allowing devices to run efficiently with either a 4 KB or a 16 KB page size.

Starting November 1st, 2025, all new apps and app updates submitted to Google Play that use native C/C++ code and target Android 15+ devices must support 16 KB page sizes. This is a crucial step towards ensuring your app delivers the best possible performance on the latest Android hardware. Apps that have no native C/C++ code or dependencies and only use the Kotlin and Java programming languages are already compatible, but if you're using native code, now is the time to act.

This transition to larger 16 KB page sizes translates directly into a better user experience. Devices configured with a 16 KB page size can see an overall performance boost of 5-10%. This means faster app launch times (up to 30% for some apps, 3.16% on average), improved battery usage (4.56% reduction in power draw), quicker camera starts (4.48-6.60% faster), and even speedier system boot-ups (around 0.8 seconds faster). While there is a marginal increase in memory use, the faster memory reclaim path makes the tradeoff worthwhile.

The native code challenge – and how Android Studio equips you

If your app uses native C/C++ code from the Android NDK or relies on SDKs that do, you’ll need to recompile and potentially adjust your code for 16 KB compatibility. The good news? Once your application is updated for the 16 KB page size, the same application binary can run seamlessly on both 4 KB and 16 KB devices.

This table describes who needs to transition and recompile their apps

A table describes who needs to transition or recompile their apps based on native codebase and device size

We’ve created several Android Studio tools and guides that can help you prepare for migrating to using 16 KB page size.

Detect compatibility issues

APK Analyzer: Easily identify if your app contains native libraries by checking for .so files in the lib folder. The APK Analyzer can also visually indicate your app’s 16 KB compatibility. You can then determine and update libraries as needed for 16 KB compliance.

Screenshot of the APK Analyzer in Android Studio

Alignment Checks: Android Studio also provides warnings if your prebuilt libraries or APKs are not 16 KB compliant. You can then use the APK Analyzer tool to review which libraries need to be updated or whether any code changes are required. If you want to run 16 KB page size compatibility checks in your CI (continuous integration) pipeline, you can leverage scripts and command line tools.

Screenshot of Android 16 KB Alignment check in Android Studio

Lint in Android Studio now also highlights the native libraries which are not 16 KB aligned.

Screenshot of Lint performing a 16 KB alignment check in Android Studio

Build with 16 KB alignment

Tools Updates: Rebuild your native code with 16 KB alignment. Android Gradle Plugin (AGP) version 8.5.1 or higher automatically enables 16 KB alignment by default (during packaging) for uncompressed shared libraries. Similarly, Android NDK r28 and higher compiles 16 KB-aligned code by default. If you depend on other native SDKs, they also need to be 16 KB aligned. You might need to reach out to the SDK developer to request a 16 KB compliant SDK.
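
On the Gradle side, a minimal sketch of the relevant DSL looks like the following; with AGP 8.5.1 or higher, keeping shared libraries uncompressed lets the plugin apply 16 KB alignment during packaging (the module and settings shown are illustrative).

// app/build.gradle.kts (sketch)
android {
    packaging {
        jniLibs {
            // Uncompressed .so files are what AGP 8.5.1+ aligns for 16 KB devices.
            useLegacyPackaging = false
        }
    }
}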

Fix code for page-size agnosticism

Eliminate Hardcoded Assumptions: Identify and remove any hardcoded dependencies on PAGE_SIZE or assumptions that the page size is 4 KB (e.g., 4096). Instead, use getpagesize() or sysconf(_SC_PAGESIZE) to query the actual page size at runtime.
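
On the Kotlin/Java side, the same check can be done through android.system.Os; in native code, getpagesize() or sysconf(_SC_PAGESIZE) is the equivalent. A minimal sketch:

import android.system.Os
import android.system.OsConstants

// Query the page size at runtime instead of assuming 4096 bytes.
fun currentPageSizeBytes(): Long = Os.sysconf(OsConstants._SC_PAGESIZE)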

Test in a 16 KB environment

Android Emulator Support: Android Studio offers a 16 KB emulator target (for both arm64 and x86_64) directly in the Android Studio SDK Manager, allowing you to test your applications before uploading to Google Play.

Screenshot of the 16 KB emulator in Android Studio

On-Device Testing: For compatible devices like Pixel 8 and 8 Pro onwards (starting with Android 15 QPR1), a new developer option allows you to switch between 4 KB and 16 KB page sizes for real-device testing. You can verify the page size using adb shell getconf PAGE_SIZE.


Don’t wait – prepare your apps today

Leverage Android Studio’s powerful tools to detect issues, build compatible binaries, fix your code, and thoroughly test your app for the new 16 KB memory page sizes. By doing so, you’ll ensure an improved end user experience and contribute to a more performant Android ecosystem.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X.

The post Transition to using 16 KB page sizes for Android apps and games using Android Studio appeared first on InShot Pro.

]]>
Agentic AI takes Gemini in Android Studio to the next level https://theinshotproapk.com/agentic-ai-takes-gemini-in-android-studio-to-the-next-level/ Mon, 23 Jun 2025 17:00:00 +0000 https://theinshotproapk.com/agentic-ai-takes-gemini-in-android-studio-to-the-next-level/ Posted by Sandhya Mohan – Product Manager, and Jose Alcérreca – Developer Relations Engineer Software development is undergoing a significant ...

Read more

The post Agentic AI takes Gemini in Android Studio to the next level appeared first on InShot Pro.

]]>

Posted by Sandhya Mohan – Product Manager, and Jose Alcérreca – Developer Relations Engineer

Software development is undergoing a significant evolution, moving beyond reactive assistants to intelligent agents. These agents don’t just offer suggestions; they can create execution plans, utilize external tools, and make complex, multi-file changes. This results in a more capable AI that can iteratively solve challenging problems, fundamentally changing how developers work.

At Google I/O 2025, we offered a glimpse into our work on agentic AI in Android Studio, the integrated development environment (IDE) focused on Android development. We showcased that by combining agentic AI with the built-in portfolio of tools inside of Android Studio, the IDE is able to assist you in developing Android apps in ways that were never possible before. We are now incredibly excited to announce the next frontier in Android development with the availability of ‘Agent Mode’ for Gemini in Android Studio.

These features are available in the latest Android Studio Narwhal Feature Drop Canary release, and will be rolled out to business tier subscribers in the coming days. As with all new Android Studio features, we invite developers to provide feedback to direct our development efforts and ensure we are creating the tools you need to build better apps, faster.

Agent Mode

Gemini in Android Studio’s Agent Mode is a new experimental capability designed to handle complex development tasks that go beyond what you can experience by just chatting with Gemini.

With Agent Mode, you can describe a complex goal in natural language — from generating unit tests to complex refactors — and the agent formulates an execution plan that can span multiple files in your project and executes under your direction. Agent Mode uses a range of IDE tools for reading and modifying code, building the project, searching the codebase and more to help Gemini complete complex tasks from start to finish with minimal oversight from you.

To use Agent Mode, click Gemini in the sidebar, then select the Agent tab, and describe a task you’d like the agent to perform. Some examples of tasks you can try in Agent Mode include:

    • Build my project and fix any errors
    • Extract any hardcoded strings used across my project and migrate to strings.xml
    • Add support for dark mode to my application
    • Given an attached screenshot, implement a new screen in my application using Material 3

The agent then suggests edits and iteratively fixes bugs to complete tasks. You can review, accept, or reject the proposed changes along the way, and ask the agent to iterate on your feedback.

moving image showing Gemini breaking tasks down into a plan with simple steps, and the list of IDE tools it needs to complete each step

Gemini breaks tasks down into a plan with simple steps. It also shows the list of IDE tools it needs to complete each step.

While powerful, you are firmly in control, with the ability to review, refine and guide the agent’s output at every step. When the agent proposes code changes, you can choose to accept or reject them.

screenshot of Gemini in Android Studio showing the Agent prompting the user to accept or reject a change

The Agent waits for the developer to approve or reject a change.

Additionally, you can enable “Auto-approve” if you are feeling lucky 😎 — especially useful when you want to iterate on ideas as rapidly as possible.

You can delegate routine, time-consuming work to the agent, freeing up your time for more creative, high-value work. Try out Agent Mode in the latest preview version of Android Studio – we look forward to seeing what you build! We are investing in building more agentic experiences for Gemini in Android Studio to make your development even more intuitive, so you can expect to see more agentic functionality over the next several releases.

moving image showing Gemini understanding the context of an app

Gemini is capable of understanding the context of your app

Supercharge Agent Mode with your Gemini API key

screenshot of Gemini API key prompt in Android Studio

The default Gemini model has a generous no-cost daily quota with a limited context window. However, you can now add your own Gemini API key to expand Agent Mode’s context window to a massive 1 million tokens with Gemini 2.5 Pro.

A larger context window lets you send more instructions, code and attachments to Gemini, leading to even higher quality responses. This is especially useful when working with agents, as the larger context provides Gemini 2.5 Pro with the ability to reason about complex or long-running tasks.

screenshot of how to add your API Key in the Gemini settings

Add your API key in the Gemini settings

To enable this feature, get a Gemini API key by navigating to Google AI Studio. Sign in and get a key by clicking on the “Get API key” button. Then, back in Android Studio, navigate to the settings by going to File (Android Studio on macOS) > Settings > Tools > Gemini to enter your Gemini API key. Relaunch Gemini in Android Studio and get even better responses from Agent Mode.

Be sure to safeguard your Gemini API key, as additional charges apply for Gemini API usage associated with a personal API key. You can monitor your Gemini API key usage by navigating to AI Studio and selecting Get API key > Usage & Billing.

Note that business tier subscribers already get access to Gemini 2.5 Pro and the expanded context window automatically with their Gemini Code Assist license, so these developers will not see an API key option.

Model Context Protocol (MCP)

Gemini in Android Studio’s Agent Mode can now interact with external tools via the Model Context Protocol (MCP). This feature provides a standardized way for Agent Mode to use tools and extend knowledge and capabilities with the external environment.

There are many tools you can connect to the MCP Host in Android Studio. For example, you could integrate with the GitHub MCP Server to create pull requests directly from Android Studio. Here are some additional use cases to consider.

In this initial release of MCP support in the IDE, you configure your MCP servers through an mcp.json file placed in the configuration directory of Android Studio, using the following format:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    },
    "sequential-thinking": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ]
    },
    "github": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }  
}
Example configuration with three MCP servers

For this initial release, we support interacting with external tools via the stdio transport as defined in the MCP specification. We plan to support the full suite of MCP features in upcoming Android Studio releases, including the Streamable HTTP transport, external context resources, and prompt templates.

For more information on how to use MCP in Studio, including the mcp.json configuration file format, please refer to the Android Studio MCP Host documentation.

By delegating routine tasks to Gemini through Agent Mode, you’ll be able to focus on more innovative and enjoyable aspects of app development. Download the latest preview version of Android Studio on the canary release channel today to try it out, and let us know how much faster app development is for you!

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let’s build the future of Android apps together!

The post Agentic AI takes Gemini in Android Studio to the next level appeared first on InShot Pro.

]]>
Top 3 updates for building excellent, adaptive apps at Google I/O ‘25 https://theinshotproapk.com/top-3-updates-for-building-excellent-adaptive-apps-at-google-i-o-25/ Tue, 10 Jun 2025 18:01:00 +0000 https://theinshotproapk.com/top-3-updates-for-building-excellent-adaptive-apps-at-google-i-o-25/ Posted by Mozart Louis – Developer Relations Engineer Today, Android is launching a few updates across the platform! This includes ...

Read more

The post Top 3 updates for building excellent, adaptive apps at Google I/O ‘25 appeared first on InShot Pro.

]]>

Posted by Mozart Louis – Developer Relations Engineer

Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.

If you missed any of the key #GoogleIO25 updates and just saw the release of Android 16 or you’re ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3’s editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.

Check out the Google I/O playlist for all the session details.

Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:

#1: Build adaptively to unlock 500 million devices

In today’s diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.

The talk emphasizes that you don’t need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app’s potential.

Here are some resources we encourage you to use in your apps:

New feature support in Jetpack Compose Adaptive Libraries

    • We’re continuing to make it as easy as possible to build adaptively with Jetpack Compose Adaptive Libraries. with new features in 1.1 like pane expansion and predictive back. By utilizing canonical layout patterns such as List Detail or Supporting Pane layouts and integrating your app code, your application will automatically adjust and reflow when resized.

Navigation 3

    • The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.

Updates to Window Manager Library

    • AndroidX.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as “extra large,” while widths between 1200dp and 1600dp are classified as “large.” These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes.
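
To make those breakpoints concrete, here is a minimal sketch that buckets a window width in dp using the thresholds above; it is plain Kotlin for illustration, not the androidx.window API itself.

// Bucket a window width (in dp) into the size classes described above.
fun widthSizeClassName(widthDp: Float): String = when {
    widthDp >= 1600f -> "extra large"
    widthDp >= 1200f -> "large"
    else -> "smaller than large"
}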

Support all orientations and be resizable

Extend to Android XR

Upgrade your Wear OS apps to Material 3 Design

You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.

#2: Enhance your app’s performance optimization

Get ready to take your app’s performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.

Redesigned UiAutomator API

    • To make benchmarking reliable and reproducible, there’s the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.

Macrobenchmarks

    • Once your tests are in place, it’s time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app’s health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app’s performance and where to focus your efforts.
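
For reference, a cold-startup macrobenchmark looks roughly like the sketch below; the package name is a placeholder and the iteration count is illustrative.

import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import org.junit.Rule
import org.junit.Test

class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.app", // placeholder target app
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        startupMode = StartupMode.COLD,
    ) {
        // Launch the app from the home screen and wait for the first frame.
        pressHome()
        startActivityAndWait()
    }
}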

R8, More than code shrinking and obfuscation

    • You might know R8 as a code shrinking tool, but it’s capable of so much more! The talk dives into R8’s capabilities using the “Androidify” sample app. You’ll see how to apply R8, troubleshoot any issues (like crashes!), and configure it for optimal performance. The talk also shows how library developers can include consumer keep rules so that their important code is not stripped when their library is used in an application, as sketched below.
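
For library authors, shipping consumer keep rules is a small Gradle change plus a rules file; the snippet below is a sketch, and consumer-rules.pro is simply the conventional file name.

// Library module build.gradle.kts (sketch): rules in consumer-rules.pro are
// bundled into the AAR and applied by R8 in the consuming application.
android {
    defaultConfig {
        consumerProguardFiles("consumer-rules.pro")
    }
}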

#3: Build Richer Image and Video Experiences

In today’s digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.

Media3Effects in CameraX Preview

    • At Google I/O, developers explored practical strategies for capturing high-quality video using CameraX while simultaneously leveraging Media3 effects on the preview.

Google Low-Light Boost

    • Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.

New Camera & Media Samples!

Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.

Learn how to build adaptive apps

Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.

The post Top 3 updates for building excellent, adaptive apps at Google I/O ‘25 appeared first on InShot Pro.

]]>
Androidify: Building delightful UIs with Compose https://theinshotproapk.com/androidify-building-delightful-uis-with-compose/ Tue, 03 Jun 2025 12:07:48 +0000 https://theinshotproapk.com/androidify-building-delightful-uis-with-compose/ Posted by Rebecca Franks – Developer Relations Engineer Androidify is a new sample app we built using the latest best ...

Read more

The post Androidify: Building delightful UIs with Compose appeared first on InShot Pro.

]]>

Posted by Rebecca Franks – Developer Relations Engineer

Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

Material 3 Expressive

Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.
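
If you want to try these components, the dependency is the alpha artifact mentioned above; a minimal Gradle (Kotlin DSL) sketch:

// build.gradle.kts
dependencies {
    implementation("androidx.compose.material3:material3:1.4.0-alpha10")
}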

Material Expressive Component updates

Material Expressive Component updates

In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

@Composable
fun AndroidifyTheme(
   content: @Composable () -> Unit,
) {
   val colorScheme = LightColorScheme


   MaterialExpressiveTheme(
       colorScheme = colorScheme,
       typography = Typography,
       shapes = shapes,
       motionScheme = MotionScheme.expressive(),
       content = {
           SharedTransitionLayout {
               CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                   content()
               }
           }
       },
   )
}

Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

moving example of expressive button shapes in slow motion

The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:


Camera button with a MaterialShapes.Cookie9Sided shape

Animations

Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across a screen (such as x,y or rotation, scale animations):

val interactionSource = remember { MutableInteractionSource() }
val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
Spacer(
   modifier
       .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
       .clip(MaterialShapes.Cookie9Sided.toShape())
       .size(size)
       .drawWithCache {
           //.. etc
       },
)

Camera button scale interaction

Camera button scale interaction

Shared element animations

The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

moving example of expressive button shapes in slow motion

To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

@Composable
fun Modifier.sharedBoundsRevealWithShapeMorph(
    sharedContentState: SharedTransitionScope.SharedContentState,
    sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
    animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
    boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
    resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
    restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
    targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
)

Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

val animatedProgress =
   animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)


val morph = remember {
   Morph(restingShape, targetShape)
}
val morphClip = MorphOverlayClip(morph, { animatedProgress.value })


return this@sharedBoundsRevealWithShapeMorph
   .sharedBounds(
       sharedContentState = sharedContentState,
       animatedVisibilityScope = animatedVisibilityScope,
       boundsTransform = boundsTransform,
       resizeMode = resizeMode,
       clipInOverlayDuringTransition = morphClip,
       renderInOverlayDuringTransition = renderInOverlayDuringTransition,
   )

View the full code snippet for this Modifier on GitHub.

Autosize text

With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

BasicText(
    text,
    style = MaterialTheme.typography.titleLarge,
    autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
)

This is used front and center for the “Customize your own Android Bot” text:

Text reads Customize your own Android Bot with an inline moving image

“Customize your own Android Bot” text with inline GIF

This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

@Composable
private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
   Box(modifier = modifier) {
       val animatedBot = "animatedBot"
       val text = buildAnnotatedString {
           append(stringResource(R.string.customize))
           // Attach "animatedBot" annotation on the placeholder
           appendInlineContent(animatedBot)
           append(stringResource(R.string.android_bot))
       }
       var placeHolderSize by remember {
           mutableStateOf(220.sp)
       }
       val inlineContent = mapOf(
           Pair(
               animatedBot,
               InlineTextContent(
                   Placeholder(
                       width = placeHolderSize,
                       height = placeHolderSize,
                       placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                   ),
               ) {
                   DancingBot(
                       modifier = Modifier
                           .padding(top = 32.dp)
                           .fillMaxSize(),
                   )
               },
           ),
       )
       BasicText(
           text,
           modifier = Modifier
               .align(Alignment.Center)
               .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
           style = MaterialTheme.typography.titleLarge,
           autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
           maxLines = 6,
           onTextLayout = { result ->
               placeHolderSize = result.layoutInput.style.fontSize * 3.5f
           },
           inlineContent = inlineContent,
       )
   }
}

Composable visibility with onLayoutRectChanged

With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

var buttonBounds by remember {
   mutableStateOf<RelativeLayoutBounds?>(null)
}
var showColorSplash by remember {
   mutableStateOf(false)
}
Box(modifier = Modifier.fillMaxSize()) {
   PrimaryButton(
       buttonText = "Let's Go",
       modifier = Modifier
           .align(Alignment.BottomCenter)
           .onLayoutRectChanged(
               callback = { bounds ->
                   buttonBounds = bounds
               },
           ),
       onClick = {
           showColorSplash = true
       },
   )
}

We use these bounds as an indication of where to start the color splash animation from.

moving image of a blue color splash transition between Androidify demo screens

Learn more delightful details

From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

animated marquee example

animated gradient button for AI powered actions example

animated loading screen example

Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose, from Material 3 Expressive and the new modifiers to auto-sizing text and, of course, a couple of delightful interactions!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

The post Androidify: Building delightful UIs with Compose appeared first on InShot Pro.

]]>