Introducing CameraX 1.5: Powerful Video Recording and Pro-level Image Capture

Posted by Scott Nien, Software Engineer



The CameraX team is thrilled to announce the release of version 1.5! This latest update focuses on bringing professional-grade capabilities to your fingertips while making the camera session easier to configure than ever before.

For video recording, users can now effortlessly capture stunning slow-motion or high-frame-rate videos. More importantly, the new Feature Group API allows you to confidently enable complex combinations like 10-bit HDR and 60 FPS, ensuring consistent results across supported devices.

On the image capture front, you gain maximum flexibility with support for capturing unprocessed, uncompressed DNG (RAW) files. Plus, you can now leverage Ultra HDR output even when using powerful Camera Extensions.

Underpinning these features is the new SessionConfig API, which streamlines camera setup and reconfiguration. Now, let’s dive into the details of these exciting new features.

Powerful Video Recording: High-Speed and Feature Combinations

CameraX 1.5 significantly expands its video capabilities, enabling more creative and robust recording experiences.

Slow Motion & High Frame Rate Video

One of our most anticipated features, slow-motion video, is now available. You can now capture high-speed video (e.g., 120 or 240 fps) and encode it directly into a dramatic slow-motion video. Alternatively, you can record at the same high frame rate to produce exceptionally smooth video.

Implementing this is straightforward if you’re familiar with the VideoCapture API.

  1. Check for High-Speed Support: Use the new Recorder.getHighSpeedVideoCapabilities() method to query if the device supports this feature.

val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)

val highSpeedCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)

if (highSpeedCapabilities == null) {
    // This camera device does not support high-speed video.
    return
}
  2. Configure and Bind the Use Case: Use the returned highSpeedCapabilities (which contains supported video quality information) to build a HighSpeedVideoSessionConfig. You must then query the supported frame rate ranges via cameraInfo.getSupportedFrameRateRanges() and set the desired range. Invoke setSlowMotionEnabled(true) to record slow-motion videos; otherwise it will record high-frame-rate videos. The final step is to use the regular Recorder.prepareRecording().start() to begin recording the video.


val preview = Preview.Builder().build()
val quality = highSpeedCapabilities
        .getSupportedQualities(DynamicRange.SDR).first()

val recorder = Recorder.Builder()
      .setQualitySelector(QualitySelector.from(quality))
      .build()

val videoCapture = VideoCapture.withOutput(recorder)

val frameRateRange = cameraInfo.getSupportedFrameRateRanges(      
       HighSpeedVideoSessionConfig(videoCapture, preview)
).first()

val sessionConfig = HighSpeedVideoSessionConfig(
    videoCapture, 
    preview, 
    frameRateRange = frameRateRange, 
    // Set true for slow-motion playback, or false for high-frame-rate
    isSlowMotionEnabled = true
)

cameraProvider.bindToLifecycle(
     lifecycleOwner, cameraSelector, sessionConfig)

// Start recording slow motion videos. 
val recording = recorder.prepareRecording(context, outputOption)
      .start(executor, {})

Compatibility and Limitations

High-speed recording requires specific CameraConstrainedHighSpeedCaptureSession and CamcorderProfile support. Always perform the capability check, and enable high-speed recording only on supported devices to prevent a bad user experience. Currently, this feature is supported on the rear cameras of almost all Pixel devices and select models from other manufacturers.

Check the blog post for more details.

Combine Features with Confidence: The Feature Group API

CameraX 1.5 introduces the Feature Group API, which eliminates the guesswork of feature compatibility. Based on Android 15’s feature combination query API, you can now confidently enable multiple features together, guaranteeing a stable camera session. The Feature Group currently supports: HDR (HLG), 60 fps, Preview Stabilization, and Ultra HDR. For instance, you can enable HDR, 60 fps, and Preview Stabilization simultaneously on Pixel 10 and Galaxy S25 series. Future enhancements are planned to include 4K recording and ultra-wide zoom. 

The feature group API enables two essential use cases:

Use Case 1: Prioritizing the Best Quality

If you want to capture using the best possible combination of features, you can provide a prioritized list. CameraX will attempt to enable them in order, selecting the first combination the device fully supports.

val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    preferredFeatureGroup = listOf(
        GroupableFeature.HDR_HLG10,
        GroupableFeature.FPS_60,
        GroupableFeature.PREVIEW_STABILIZATION
    )
).apply {
    // (Optional) Get a callback with the enabled features to update your UI.
    setFeatureSelectionListener { selectedFeatures ->
        updateUiIndicators(selectedFeatures)
    }
}
processCameraProvider.bindToLifecycle(activity, cameraSelector, sessionConfig)

In this example, CameraX tries to enable features in this order:

  1. HDR + 60 FPS + Preview Stabilization

  2. HDR + 60 FPS

  3. HDR + Preview Stabilization

  4. HDR

  5. 60 FPS + Preview Stabilization

  6. 60 FPS

  7. Preview Stabilization

  8. None

Use Case 2: Building a User-Facing Settings UI

You can now accurately reflect which feature combinations are supported in your app’s settings UI, disabling toggles for unsupported options.

To determine whether to gray out a toggle, use the following code to check for feature-combination support. Initially, query the status of each individual feature. Once a feature is enabled, re-query the remaining features together with the already enabled ones to see whether their toggles must now be grayed out due to compatibility constraints.

fun disableFeatureIfNotSupported(
    enabledFeatures: Set<GroupableFeature>,
    featureToCheck: GroupableFeature
) {
    val sessionConfig = SessionConfig(
        useCases = useCases,
        requiredFeatureGroup = enabledFeatures + featureToCheck
    )
    val isSupported = cameraInfo.isFeatureGroupSupported(sessionConfig)

    if (!isSupported) {
        // Disable the toggle for featureToCheck.
    }
}

Please refer to the Feature Group blog post for more information. 

More Video Enhancements

  • Concurrent Camera Improvements: With CameraX 1.5.1, you can now bind Preview + ImageCapture + VideoCapture use cases concurrently for each SingleCameraConfig in non-composition mode. Additionally, in composition mode (same use cases with CompositionSettings),  you can now set the CameraEffect that is applied to the final composition result.

  • Dynamic Muting: You can now start a recording in a muted state using PendingRecording.withAudioEnabled(boolean initialMuted) and allow the user to unmute later using Recording.mute(boolean muted); see the sketch after this list.

  • Improved Insufficient Storage Handling: CameraX now reliably dispatches the VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE error, allowing your app to gracefully handle low storage situations and inform the user.

  • Low Light Boost: On supported devices (like the Pixel 10 series), you can call CameraControl.enableLowLightBoostAsync to automatically brighten the preview and video streams in dark environments.
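
Taken together, these APIs allow a recording flow like the minimal sketch below. It assumes a prepared Recorder, a bound Camera instance named camera, and existing outputOptions, context, and executor values; the boolean parameter on enableLowLightBoostAsync is an assumption based on the API name rather than a confirmed signature.

// Start the recording muted; the user can unmute it later.
val recording = recorder.prepareRecording(context, outputOptions)
    .withAudioEnabled(/* initialMuted = */ true)
    .start(executor) { event ->
        if (event is VideoRecordEvent.Finalize &&
            event.error == VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE
        ) {
            // Gracefully handle low storage, e.g. stop and inform the user.
        }
    }

// Later, in response to a user action, unmute the ongoing recording.
recording.mute(/* muted = */ false)

// On supported devices, brighten dark preview and video streams.
camera.cameraControl.enableLowLightBoostAsync(true)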

Professional-Grade Image Capture

CameraX 1.5 brings major upgrades to ImageCapture for developers who demand maximum quality and flexibility.

Unleash Creative Control with DNG (RAW) Capture

For complete control over post-processing, CameraX now supports DNG (RAW) capture. This gives you access to the unprocessed, uncompressed image data directly from the camera sensor, enabling professional-grade editing and color grading. The API supports capturing the DNG file alone, or capturing simultaneous JPEG and DNG outputs. See the sample code below for how to capture JPEG and DNG files simultaneously.

val capabilities = ImageCapture.getImageCaptureCapabilities(cameraInfo)
val imageCapture = ImageCapture.Builder().apply {
    if (capabilities.supportedOutputFormats
             .contains(OUTPUT_FORMAT_RAW_JPEG)) {
        // Capture both RAW and JPEG formats.
        setOutputFormat(OUTPUT_FORMAT_RAW_JPEG)
    }
}.build()
// ... bind imageCapture to lifecycle ...


// Provide separate output options for each format.
val outputOptionRaw = /* ... configure for image/x-adobe-dng ... */
val outputOptionJpeg = /* ... configure for image/jpeg ... */
imageCapture.takePicture(
    outputOptionRaw,
    outputOptionJpeg,
    executor,
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(results: OutputFileResults) {
            // This callback is invoked twice: once for the RAW file
            // and once for the JPEG file.
        }

        override fun onError(exception: ImageCaptureException) {}
    }
)
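
The output-option placeholders above can be filled in with the standard MediaStore-backed ImageCapture.OutputFileOptions. The sketch below shows one way to do it, assuming a context is in scope; the display names are arbitrary, and setting the DNG MIME type explicitly is an assumption for illustration.

// Helper that builds MediaStore-backed output options for a given MIME type.
fun mediaStoreOptions(displayName: String, mimeType: String): ImageCapture.OutputFileOptions {
    val values = ContentValues().apply {
        put(MediaStore.MediaColumns.DISPLAY_NAME, displayName)
        put(MediaStore.MediaColumns.MIME_TYPE, mimeType)
    }
    return ImageCapture.OutputFileOptions.Builder(
        context.contentResolver,
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        values
    ).build()
}

val timeStamp = System.currentTimeMillis()
val outputOptionRaw = mediaStoreOptions("capture_$timeStamp.dng", "image/x-adobe-dng")
val outputOptionJpeg = mediaStoreOptions("capture_$timeStamp.jpg", "image/jpeg")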

Ultra HDR for Camera Extensions

Get the best of both worlds: the stunning computational photography of Camera Extensions (like Night Mode) combined with the brilliant color and dynamic range of Ultra HDR. This feature is now supported on many recent premium Android phones, such as the Pixel 9/10 series and Samsung S24/S25 series.

// Support UltraHDR when Extension is enabled. 

val extensionsEnabledCameraSelector = extensionsManager
     .getExtensionEnabledCameraSelector(
        CameraSelector.DEFAULT_BACK_CAMERA, ExtensionMode.NIGHT)

val imageCapabilities = ImageCapture.getImageCaptureCapabilities(
    cameraProvider.getCameraInfo(extensionsEnabledCameraSelector)
)

val imageCapture = ImageCapture.Builder()
    .apply {
        if (imageCapabilities.supportedOutputFormats
                .contains(OUTPUT_FORMAT_JPEG_ULTRA_HDR)) {
            setOutputFormat(OUTPUT_FORMAT_JPEG_ULTRA_HDR)
        }
    }.build()

Core API and Usability Enhancements

A New Way to Configure: SessionConfig

As seen in the examples above, SessionConfig is a new concept in CameraX 1.5. It centralizes configuration and simplifies the API in two key ways:

  1. No More Manual unbind() Calls: CameraX APIs are lifecycle-aware, so your use cases are implicitly “unbound” when the activity or other LifecycleOwner is destroyed. Previously, however, updating use cases or switching cameras still required you to call unbind() or unbindAll() before rebinding. With CameraX 1.5, when you bind a new SessionConfig, CameraX seamlessly updates the session for you, eliminating the need for unbind calls.

  2. Deterministic Frame Rate Control: The new SessionConfig API introduces a deterministic way to manage the frame rate. Unlike the previous setTargetFrameRate, which was only a hint, this new method guarantees the specified frame rate range will be applied upon successful configuration. To ensure accuracy, you must query supported frame rates using CameraInfo.getSupportedFrameRateRanges(SessionConfig). By passing the full SessionConfig, CameraX can accurately determine the supported ranges based on stream configurations, as shown in the sketch below.
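
Here is a minimal sketch of both points together: rebinding a new SessionConfig without any unbind call, and pinning a deterministic frame rate. It assumes SessionConfig accepts a frameRateRange parameter analogous to the HighSpeedVideoSessionConfig shown earlier, and that preview and videoCapture are already created.

// Query the ranges supported by this exact stream configuration.
val useCases = listOf(preview, videoCapture)
val supportedRanges = cameraInfo.getSupportedFrameRateRanges(
    SessionConfig(useCases = useCases)
)

// Binding a new SessionConfig replaces the previous session; no unbind() needed.
cameraProvider.bindToLifecycle(
    lifecycleOwner,
    cameraSelector,
    SessionConfig(
        useCases = useCases,
        frameRateRange = supportedRanges.first()
    )
)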

Camera-Compose is Now Stable

We know how much you enjoy Jetpack Compose, and we’re excited to announce that the camera-compose library is now stable at version 1.5.1! This release includes critical bug fixes related to CameraXViewfinder usage with Compose features like movableContentOf and Pager, as well as resolving a preview stretching issue. We will continue to add more features to camera-compose in future releases.
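
If you are adopting camera-compose, the core pattern is to forward the Preview use case’s SurfaceRequest into the CameraXViewfinder composable. The sketch below shows that wiring under the assumption that CameraXViewfinder takes the surface request and a modifier; state handling is simplified.

@Composable
fun CameraPreview(preview: Preview, modifier: Modifier = Modifier) {
    // Hold the most recent SurfaceRequest emitted by the Preview use case.
    var surfaceRequest by remember { mutableStateOf<SurfaceRequest?>(null) }

    // Forward surface requests from CameraX into Compose state.
    LaunchedEffect(preview) {
        preview.setSurfaceProvider { request -> surfaceRequest = request }
    }

    // Render the camera feed once a surface request is available.
    surfaceRequest?.let { request ->
        CameraXViewfinder(surfaceRequest = request, modifier = modifier)
    }
}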

ImageAnalysis and CameraControl Improvements

  • Torch Strength Adjustment: Gain fine-grained control over the device’s torch with new APIs. You can query the maximum supported strength using CameraInfo.getMaxTorchStrengthLevel() and then set the desired level with CameraControl.setTorchStrengthLevel(); see the sketch after this list.

  • NV21 Support in ImageAnalysis: You can now request the NV21 image format directly from ImageAnalysis, simplifying integration with other libraries and APIs. This is enabled by invoking ImageAnalysis.Builder.setOutputImageFormat(OUTPUT_IMAGE_FORMAT_NV21).
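
A short sketch of both APIs, assuming a bound Camera instance named camera; the method and constant names follow the ones listed above.

// Query the maximum torch strength and drive the torch at that level.
val maxStrength = camera.cameraInfo.maxTorchStrengthLevel
camera.cameraControl.setTorchStrengthLevel(maxStrength)

// Request NV21 frames directly from ImageAnalysis.
val imageAnalysis = ImageAnalysis.Builder()
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_NV21)
    .build()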

Get Started Today

Update your dependencies to CameraX 1.5 today and explore the exciting new features. We can’t wait to see what you build.

To use CameraX 1.5, please add the following dependencies to your libs.versions.toml. (We recommend using 1.5.1, which contains many critical bug fixes and concurrent camera improvements.)

[versions]
camerax = "1.5.1"

[libraries]
..
androidx-camera-core = { module = "androidx.camera:camera-core", version.ref = "camerax" }
androidx-camera-compose = { module = "androidx.camera:camera-compose", version.ref = "camerax" }
androidx-camera-view = { module = "androidx.camera:camera-view", version.ref = "camerax" }
androidx-camera-lifecycle = { group = "androidx.camera", name = "camera-lifecycle", version.ref = "camerax" }
androidx-camera-camera2 = { module = "androidx.camera:camera-camera2", version.ref = "camerax" }
androidx-camera-extensions = { module = "androidx.camera:camera-extensions", version.ref = "camerax" }

And then add these to your module build.gradle.kts dependencies:

dependencies {
  ..
  implementation(libs.androidx.camera.core)
  implementation(libs.androidx.camera.lifecycle)
  implementation(libs.androidx.camera.camera2)
  implementation(libs.androidx.camera.view) // for PreviewView
  implementation(libs.androidx.camera.compose) // for Compose UI
  implementation(libs.androidx.camera.extensions) // for Extensions
}

Have questions or want to connect with the CameraX team? Join the CameraX developer discussion group or file a bug report.

Beyond Single Features: Guaranteeing Feature Combinations With CameraX 1.5

Posted by Tahsin Masrur – Software Engineer

    

Modern camera apps are defined by powerful, overlapping features. Users expect to record video with stunning HDR, capture fluid motion at 60 FPS, and get buttery-smooth footage with Preview Stabilization—often all at the same time.

As developers, we know the reality is more complicated. How can you guarantee that a specific device actually supports a given combination? Until now, enabling multiple features was often a gamble. You could check for individual feature support, but combining them could lead to undefined behavior or, worse, a failed camera session.  This uncertainty forces developers to be conservative, which prevents users on capable devices from accessing the best possible experience.

For instance, very few premium devices reliably support HDR and 60 FPS video simultaneously. Consequently, most apps avoid enabling both at once to prevent a poor user experience on the majority of phones.

To address this, we’re introducing Feature Group in CameraX – a new API designed to eliminate this guesswork. You can now query whether a specific combination of features is supported before configuring the camera, or simply tell CameraX your priorities and let it enable the best-supported combination for you.

For Those New to CameraX

Before we dive into the new Feature Group API, let’s quickly recap what CameraX is. CameraX is a Jetpack support library, built to help you make camera app development easier. It provides a consistent and easy-to-use API surface that works across most Android devices, with backward-compatibility to Android 6.0 (API level 23). If you are new to CameraX, we recommend checking out the official documentation and trying the codelab to get started.

What You Can Build with the Feature Group API

You no longer need to gamble on feature combinations and can confidently deliver the best possible camera experiences – like simultaneous HDR and 60 FPS video on capable hardware (e.g. a Pixel 10 Pro) – while gracefully avoiding errors on devices that can’t support the combination.

Pixel 10 Pro enabling both HDR and 60 FPS simultaneously
On an older device where HDR and 60 FPS can’t run simultaneously, only HDR is enabled while the 60 FPS option is disabled.

With the Feature Group API, you can:

  • Build smarter, dynamic UIs: Intelligently enable or disable settings in your UI based on real-time hardware support. For example, if a user enables HDR, you can instantly gray out and disable the 60 FPS option if the combination isn’t supported on that device. 

  • Deliver a reliable “High-Quality” mode: Configure the camera with a prioritized list of desired features. CameraX automatically finds and enables the best-supported combination for any given device, ensuring a great result without complex, device-specific logic.

  • Prevent camera session failures: By verifying support beforehand, you prevent the camera from attempting to configure an unsupported combination, eliminating a common source of crashes and offering a smooth user experience.

How It Works: The Core Components

The new API is centered around key additions to SessionConfig and CameraInfo.

  1. GroupableFeature: This API introduces a set of predefined groupable features, such as HDR_HLG10, FPS_60, PREVIEW_STABILIZATION, and IMAGE_ULTRA_HDR. Due to computational limitations, only a specific set of features can be grouped with the high degree of reliability this API provides. We are actively working to expand this list and will introduce support for more features in future releases.

  2. New SessionConfig Parameters: This class, used for starting a camera session, now accepts two new parameters:

  • requiredFeatureGroup: Use this for features that must be supported for the configuration to succeed – ideal for features that a user explicitly enables, such as toggling an ‘HDR’ switch. To ensure a deterministic and consistent experience, the bindToLifecycle call will throw an IllegalArgumentException if the requested combination is not supported, rather than silently ignoring a feature request. The CameraInfo#isFeatureGroupSupported API (details below) should be used to query this result beforehand.

  • preferredFeatureGroup: Use this for features that are desirable but optional, for example when you want to implement a default “High-Quality” mode. You provide a list of your desired features ordered according to your priorities, and CameraX automatically enables the highest-priority combination that the device supports.

  3. CameraInfo#isFeatureGroupSupported(): This is the core query method for explicitly checking if a feature group is supported, well-suited for providing only supported feature options to users in your app UI. You pass it a SessionConfig, and it returns a boolean indicating whether the combination is supported. If you intend to bind a SessionConfig with required features, you should use this API first to ensure it is supported.

Implementation in Practice

Let’s look at how to use these components to build a better camera experience.

Scenario 1: “Best Effort” High-Quality Mode

If you want to enable the best possible features by default, you can provide a prioritized list to preferredFeatureGroup. In this example, we tell CameraX to prioritize HDR, then 60 FPS, and finally Preview Stabilization. CameraX handles the complexity of checking all possible combinations and choosing the best one that the device supports.

For instance, if a device can handle HDR and 60 FPS together but not with Preview Stabilization, CameraX will enable the first two and discard the third. This way, you get the best possible experience without writing complex, device-specific checks.

cameraProvider.bindToLifecycle(
    lifecycleOwner,
    cameraSelector,
    SessionConfig(
        useCases = listOf(preview, videoCapture),
        // The order of features in this list determines their priority.
        // CameraX will enable the best-supported combination based on these
        // priorities: HDR_HLG10 > FPS_60 > Preview Stabilization.
        preferredFeatureGroup =
            listOf(HDR_HLG10, FPS_60, PREVIEW_STABILIZATION),
    ).apply {
        // (Optional) Get a callback with the enabled features
        // to update your UI.
        setFeatureSelectionListener { selectedFeatures ->
            updateUiIndicators(selectedFeatures)
        }
    }
)

For this code snippet, CameraX will attempt to enable feature combinations in the following priority order, selecting the first one the device fully supports:

  1. HDR + 60 FPS + Preview Stabilization

  2. HDR + 60 FPS

  3. HDR + Preview Stabilization

  4. HDR

  5. 60 FPS + Preview Stabilization

  6. 60 FPS

  7. Preview Stabilization

  8. None of the above features

Scenario 2: Building a Reactive UI

To create a UI that responds to user selections and prevents users from selecting an unsupported feature combination, you can query for support directly. The function below checks which features are incompatible with the user’s current selections, allowing you to disable the corresponding UI elements.

/**
 * Returns a list of features that are NOT supported in combination
 * with the currently selected features.
 */
fun getUnsupportedFeatures(
    currentFeatures: Set<GroupableFeature>
): Set<GroupableFeature> {
    val unsupportedFeatures = mutableSetOf<GroupableFeature>()
    val appFeatureOptions = setOf(HDR_HLG10, FPS_60, PREVIEW_STABILIZATION)

    // Iterate over every available feature option in your app.
    appFeatureOptions.forEach { featureOption ->
        // Skip features the user has already selected.
        if (currentFeatures.contains(featureOption)) return@forEach

        // Check if adding this new feature is supported.
        val isSupported = cameraInfo.isFeatureGroupSupported(
            SessionConfig(
                useCases = useCases,
                // Check the new feature on top of existing ones.
                requiredFeatureGroup = currentFeatures + featureOption
            )
        )

        if (!isSupported) {
            unsupportedFeatures.add(featureOption)
        }
    }

    return unsupportedFeatures
}

You can then wire this logic into your ViewModel or UI controller to react to user input and re-bind the camera with a guaranteed-to-work configuration.

// Invoked when user turns some feature on/off.
fun onFeatureChange(currentFeatures: Set<GroupableFeature>) {
    // Identify features that are unsupported with the current selection.
    val unsupportedFeatures = getUnsupportedFeatures(currentFeatures)

    // Update app UI so that users can't enable them.
    updateDisabledFeatures(unsupportedFeatures)

    // Bind a session config with the new set of features. Since users are
    // only ever allowed to select supported features, there is no need to
    // explicitly check if the feature group is supported.
    cameraProvider.bindToLifecycle(
        lifecycleOwner,
        cameraSelector,
        SessionConfig(
            useCases = listOf(preview, videoCapture),
            requiredFeatureGroup = currentFeatures,
        ).apply {
            setFeatureSelectionListener { selectedFeatures ->
                // Update UI to let users know which features are now selected
                updateUiIndicators(selectedFeatures)
            }
        }
    )
}


To see these concepts in a working application, you can explore our internal test app. It provides a complete implementation of both the “best effort” and “reactive UI” scenarios discussed above.

Please note: This is a test application and not an officially supported sample. While it’s a great reference for the Feature Group API, it has not been polished for production use.

Get Started Today

The Feature Group API removes the ambiguity of working with advanced camera capabilities. By providing a deterministic way to query for feature support, you can build more powerful and reliable camera apps with confidence.

The API is available as experimental in CameraX 1.5 and is scheduled to become fully stable in the 1.6 release, with more support and improvements on the way.

To learn more, check out the official documentation. We can’t wait to see what you create, and we look forward to your feedback. Please share your thoughts and report any issues with the CameraX team.


Androidify: Building AI first Android Experiences with Gemini using Jetpack Compose and Firebase

Posted by Rebecca Franks – Developer Relations Engineer, Tracy Agyemang – Product Marketer, and Avneet Singh – Product Manager

Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it’s available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt of what you’re looking for, add some accessories, and watch as AI builds your unique bot. Once you’ve had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let’s dive in!

Key technical integrations

The Androidify app combines powerful technologies to deliver a seamless and engaging user experience. Here’s a breakdown of the core components and their roles:

AI with Gemini and Firebase

Androidify leverages the Firebase AI Logic SDK to access Google’s powerful Gemini and Imagen* models. This is crucial for several key features:

  • Image validation: The app first uses Gemini 2.5 Flash to validate the user’s photo. This includes checking that the image contains a clear, focused person and meets safety standards before any further processing. This is a critical first step to ensure high-quality and safe outputs.
  • Image captioning: Once validated, the model generates a detailed caption of the user’s image. This is done using structured output, which means the model returns a specific JSON format, making it easier for the app to parse the information. This detailed description helps create a more accurate and creative final result. A sketch of this kind of call appears after this list.
  • Android Bot Generation: The generated caption is then used to enrich the prompt for the final image generation. A specifically fine-tuned version of the Imagen 3 model is then called to generate the custom Android bot avatar based on the enriched prompt. This custom fine-tuning ensures the results are unique and align with the app’s playful and stylized aesthetic.
  • The Androidify app also has a “Help me write” feature which uses Gemini 2.5 Flash to create a random description for a bot’s clothing and hairstyle, adding a bit of a fun “I’m feeling lucky” element.
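
As a rough illustration of the validation and captioning steps, the sketch below uses the Firebase AI Logic SDK with a JSON response so the app can parse the result. The prompt wording, the JSON field names, and the use of responseMimeType are illustrative assumptions; Androidify's actual prompts and schema live in its source code.

// Validate the selfie and ask for a structured caption in one call.
suspend fun describeSelfie(selfie: Bitmap): JSONObject {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        modelName = "gemini-2.5-flash",
        generationConfig = generationConfig {
            // Request JSON so the response can be parsed deterministically.
            responseMimeType = "application/json"
        },
    )
    val response = model.generateContent(
        content {
            text(
                "Check that this photo contains one clearly focused person and is safe. " +
                    "Return JSON with fields: isValid (boolean) and caption (string)."
            )
            image(selfie)
        },
    )
    return JSONObject(response.text ?: error("Empty model response"))
}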

    gif showcasing the help me write button

    UI with Jetpack Compose and CameraX

    The app’s user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.

    For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera’s view, enabling the capture button and adding visual guides for the user. It also makes the app’s camera features responsive to different device types, including foldables in tabletop mode.
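
    As a sketch of how that CameraX and ML Kit pairing can look, the analyzer below runs pose detection on each frame and reports whether a person is visible. The callback shape and threading are illustrative rather than Androidify's exact implementation.

    // ImageAnalysis analyzer that flags whether ML Kit detects a person in view.
    class PersonDetectedAnalyzer(
        private val onPersonDetected: (Boolean) -> Unit,
    ) : ImageAnalysis.Analyzer {

        private val poseDetector = PoseDetection.getClient(
            PoseDetectorOptions.Builder()
                .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
                .build(),
        )

        @androidx.annotation.OptIn(ExperimentalGetImage::class)
        override fun analyze(imageProxy: ImageProxy) {
            val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
            val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            poseDetector.process(input)
                .addOnSuccessListener { pose ->
                    // Treat any detected landmarks as "a person is in the frame".
                    onPersonDetected(pose.allPoseLandmarks.isNotEmpty())
                }
                .addOnCompleteListener { imageProxy.close() }
        }
    }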

    Androidify also makes extensive use of the latest Compose features, such as:

  • Adaptive layouts: It’s designed to look great on various screen sizes, from phones to foldables and tablets, by leveraging WindowSizeClass and reusable composables.
  • Shared element transitions: The app uses the new Jetpack Navigation 3 library to create smooth and delightful screen transitions, including morphing shape animations that add a polished feel to the user experience.
  • Auto-sizing text: With Compose 1.8, the app uses a new parameter that automatically adjusts font size to fit the container’s available size, which is used for the app’s main “Customize your own Android Bot” text.

    Figure 1. Androidify Flow

    Latest updates

    In the latest version of Androidify, we’ve added some new powerful AI driven features.

    Background vibe generation with Gemini Image editing

    Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background “vibe” to bring the Android bots to life.


    Figure 2. Combining the Android bot with a background vibe description to generate your new Android Bot in a scene

    This is achieved by using Firebase AI Logic – passing a prompt for the background vibe, and the input image bitmap of the bot, with instructions to Gemini on how to combine the two together.

    override suspend fun generateImageWithEdit(
            image: Bitmap,
            backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
        ): Bitmap {
            val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
                modelName = "gemini-2.5-flash-image-preview",
                generationConfig = generationConfig {
                    responseModalities = listOf(
                        ResponseModality.TEXT,
                        ResponseModality.IMAGE,
                    )
                },
            )
    	  // We combine the backgroundPrompt with the input image which is the Android Bot, to produce the new bot with a background
            val prompt = content {
                text(backgroundPrompt)
                image(image)
            }
            val response = model.generateContent(prompt)
            val image = response.candidates.firstOrNull()
                ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
            return image ?: throw IllegalStateException("Could not extract image from model response")
        }

    Sticker mode with ML Kit Subject Segmentation

    The app also includes a “Sticker mode” option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot. You can use “Sticker mode” in apps that support stickers.


    Figure 3. White background removal of Android Bot to create a PNG that can be used with apps that support stickers

    The code for the sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if not, it requests the download and waits for it to complete. Once the model is installed, the app passes the original Android Bot image into the segmenter and calls process on it to remove the background. The resulting foregroundBitmap is then returned for exporting; a minimal sketch of this step follows below.

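    Here is a minimal sketch of that background-removal step using ML Kit's Subject Segmentation API. It assumes the segmentation module is already installed (the module-install check described above is omitted) and that the kotlinx-coroutines-play-services await() extension is available.

    // Remove the background from the generated bot to produce a sticker bitmap.
    suspend fun removeBackground(botBitmap: Bitmap): Bitmap {
        val segmenter = SubjectSegmentation.getClient(
            SubjectSegmenterOptions.Builder()
                .enableForegroundBitmap()
                .build(),
        )
        // process() returns a Task; await() comes from kotlinx-coroutines-play-services.
        val result = segmenter.process(InputImage.fromBitmap(botBitmap, 0)).await()
        return result.foregroundBitmap
            ?: throw IllegalStateException("Segmentation returned no foreground bitmap")
    }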

    See the LocalSegmentationDataSource for the full source implementation

    Learn more

    To learn more about Androidify behind the scenes, take a look at the new solutions walkthrough, inspect the code or try out the experience for yourself at androidify.com or download the app on Google Play.

    moving demo of Androidify app

    *Check responses. Compatibility and availability varies. 18+.

Top 3 updates for building excellent, adaptive apps at Google I/O ‘25

    Posted by Mozart Louis – Developer Relations Engineer

    Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

    Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.

    If you missed any of the key #GoogleIO25 updates and just saw the release of Android 16 or you’re ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3’s editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.

    Check out the Google I/O playlist for all the session details.

    Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:

    #1: Build adaptively to unlock 500 million devices

    In today’s diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.

    The talk emphasizes that you don’t need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app’s potential.

    Here are some resources we encourage you to use in your apps:

    New feature support in Jetpack Compose Adaptive Libraries

      • We’re continuing to make it as easy as possible to build adaptively with Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By utilizing canonical layout patterns such as List Detail or Supporting Pane layouts and integrating them into your app code, your application will automatically adjust and reflow when resized.

    Navigation 3

      • The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.

    Updates to Window Manager Library

      • AndroidX.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as “extra large,” while widths between 1200dp and 1600dp are classified as “large.” These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes.

    Support all orientations and be resizable

    Extend to Android XR

    Upgrade your Wear OS apps to Material 3 Design

    You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.

    #2: Enhance your app’s performance optimization

    Get ready to take your app’s performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.

    Redesigned UiAutomator API

      • To make benchmarking reliable and reproducible, there’s the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.

    Macrobenchmarks

      • Once your tests are in place, it’s time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app’s health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app’s performance and where to focus your efforts.

    R8, More than code shrinking and obfuscation

      • You might know R8 as a code shrinking tool, but it’s capable of so much more! The talk dives into R8’s capabilities using the “Androidify” sample app. You’ll see how to apply R8, troubleshoot any issues (like crashes!), and configure it for optimal performance. The talk also shows how library developers can include consumer keep rules so that their important code is not stripped when their library is used in an application.

    #3: Build Richer Image and Video Experiences

    In today’s digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.

    Media3Effects in CameraX Preview

      • At Google I/O, developers delve into practical strategies for capturing high-quality video using CameraX, while simultaneously leveraging the Media3Effects on the preview.

    Google Low-Light Boost

      • Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.

    New Camera & Media Samples!

    Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.

    Learn how to build adaptive apps

    Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.

Androidify: Building delightful UIs with Compose

    Posted by Rebecca Franks – Developer Relations Engineer

    Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

    Material 3 Expressive

    Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

    It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.

    Material Expressive Component updates

    In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

    In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted-in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

    @Composable
    fun AndroidifyTheme(
       content: @Composable () -> Unit,
    ) {
       val colorScheme = LightColorScheme
    
    
       MaterialExpressiveTheme(
           colorScheme = colorScheme,
           typography = Typography,
           shapes = shapes,
           motionScheme = MotionScheme.expressive(),
           content = {
               SharedTransitionLayout {
                   CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                       content()
                   }
               }
           },
       )
    }
    

    Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

    moving example of expressive button shapes in slow motion

    The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:


    Camera button with a MaterialShapes.Cookie9Sided shape

    Animations

    Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across a screen (such as x,y or rotation, scale animations):

    val interactionSource = remember { MutableInteractionSource() }
    val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
    Spacer(
       modifier
           .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
           .clip(MaterialShapes.Cookie9Sided.toShape())
           .size(size)
           .drawWithCache {
               //.. etc
           },
    )
    

    Camera button scale interaction

    Shared element animations

    The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

    moving example of expressive button shapes in slow motion

    To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

    @Composable
    fun Modifier.sharedBoundsRevealWithShapeMorph(
        sharedContentState: SharedTransitionScope.SharedContentState,
        sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
        animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
        boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
        resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
        restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
        targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
    )

    Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

    val animatedProgress =
       animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)
    
    
    val morph = remember {
       Morph(restingShape, targetShape)
    }
    val morphClip = MorphOverlayClip(morph, { animatedProgress.value })
    
    
    return this@sharedBoundsRevealWithShapeMorph
       .sharedBounds(
           sharedContentState = sharedContentState,
           animatedVisibilityScope = animatedVisibilityScope,
           boundsTransform = boundsTransform,
           resizeMode = resizeMode,
           clipInOverlayDuringTransition = morphClip,
           renderInOverlayDuringTransition = renderInOverlayDuringTransition,
       )
    

    View the full code snippet for this Modifier on GitHub.

    Autosize text

    With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

    BasicText(
        text,
        style = MaterialTheme.typography.titleLarge,
        autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
    )
    

    This is used front and center for the “Customize your own Android Bot” text:


    “Customize your own Android Bot” text with inline GIF

    This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

    @Composable
    private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
       Box(modifier = modifier) {
           val animatedBot = "animatedBot"
           val text = buildAnnotatedString {
               append(stringResource(R.string.customize))
               // Attach "animatedBot" annotation on the placeholder
               appendInlineContent(animatedBot)
               append(stringResource(R.string.android_bot))
           }
           var placeHolderSize by remember {
               mutableStateOf(220.sp)
           }
           val inlineContent = mapOf(
               Pair(
                   animatedBot,
                   InlineTextContent(
                       Placeholder(
                           width = placeHolderSize,
                           height = placeHolderSize,
                           placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                       ),
                   ) {
                       DancingBot(
                           modifier = Modifier
                               .padding(top = 32.dp)
                               .fillMaxSize(),
                       )
                   },
               ),
           )
           BasicText(
               text,
               modifier = Modifier
                   .align(Alignment.Center)
                   .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
               style = MaterialTheme.typography.titleLarge,
               autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
               maxLines = 6,
               onTextLayout = { result ->
                   placeHolderSize = result.layoutInput.style.fontSize * 3.5f
               },
               inlineContent = inlineContent,
           )
       }
    }
    

    Composable visibility with onLayoutRectChanged

    With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

    In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

    var buttonBounds by remember {
       mutableStateOf<RelativeLayoutBounds?>(null)
    }
    var showColorSplash by remember {
       mutableStateOf(false)
    }
    Box(modifier = Modifier.fillMaxSize()) {
       PrimaryButton(
           buttonText = "Let's Go",
           modifier = Modifier
               .align(Alignment.BottomCenter)
               .onLayoutRectChanged(
                   callback = { bounds ->
                       buttonBounds = bounds
                   },
               ),
           onClick = {
               showColorSplash = true
           },
       )
    }
    

    We use these bounds as an indication of where to start the color splash animation from.

    moving image of a blue color splash transition between Androidify demo screens

    Learn more delightful details

    From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

    animated marquee example

    animated gradient button for AI powered actions example

    animated loading screen example

    Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose from using Material 3 Expressive, the new modifiers, auto-sizing text and of course a couple of delightful interactions!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Androidify: Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX

    Posted by Rebecca Franks – Developer Relations Engineer

    The Android bot is a beloved mascot for Android users and developers, with previous versions of the bot builder being very popular – we decided that this year we’d rebuild the bot maker from the ground up, using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

    a moving image of various droid bots dancing individually

    Androidify app demo

    Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

    A photo of a woman in a pink dress holding an umbrella, converted into an Android bot wearing a pink dress and holding an umbrella

    Under the hood

    The app combines a variety of different Google technologies, such as:

      • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
      • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
      • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
      • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

    This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting looking examples!

    How does the Androidify app work?

    The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.


    Androidify app flow chart detailing how the app works with AI

    AI in Androidify with Gemini and ML Kit

    The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

  • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multi-modal capabilities of the Gemini API, by giving it a prompt and image at the same time:

    val response = generativeModel.generateContent(
       content {
           text(prompt)
           image(image)
       },
    )
    

      • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

  • Image captioning: Once we're sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

  • "Help me write" feature: Similar to an "I'm feeling lucky" type feature, "Help me write" uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot (a sketch of this text-only call appears after this list).

  • Image generation from the generated prompt: As the final step, we call Imagen to generate the image, providing it with the generated prompt and the selected skin tone of the bot.
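
    The text-only steps above (prompt validation and "Help me write") use a similar call without the image. Here's a rough sketch, assuming the same Firebase AI Logic GenerativeModel; the helper name and prompt wording are illustrative rather than the app's actual prompt:

    // Illustrative helper; the prompt below is not Androidify's actual prompt.
    suspend fun helpMeWrite(generativeModel: GenerativeModel): String? {
        // generateContent also accepts a plain text prompt for text-only requests.
        val response = generativeModel.generateContent(
            "Describe the clothing and hairstyle of a fun Android bot in one short sentence."
        )
        return response.text
    }
    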

    The app also uses ML Kit pose detection to detect whether a person is in the viewfinder, enabling the capture button only when a person is detected and adding fun indicators around the content to signal the detection.
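
    A minimal sketch of this person-gating logic, assuming ML Kit's pose-detection client running on frames from a CameraX ImageAnalysis analyzer; the onPersonDetected callback is a hypothetical name used for illustration:

    // Stream-mode detector suited to analyzing live camera frames.
    val poseDetector = PoseDetection.getClient(
        PoseDetectorOptions.Builder()
            .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
            .build()
    )

    // Called for each frame, e.g. from a CameraX ImageAnalysis analyzer.
    fun analyzeFrame(image: InputImage, onPersonDetected: (Boolean) -> Unit) {
        poseDetector.process(image)
            .addOnSuccessListener { pose ->
                // Treat the frame as containing a person when pose landmarks are found.
                onPersonDetected(pose.allPoseLandmarks.isNotEmpty())
            }
            .addOnFailureListener { onPersonDetected(false) }
    }
    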

    Explore more detailed information about AI usage in Androidify.

    Jetpack Compose

    The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

    Delightful details with the UI

    The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out of the box, like new shapes and componentry, and it uses MotionScheme variables wherever a motion spec is needed.

    MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

    Androidify app UI showing camera button

    Camera button with a MaterialShapes.Cookie9Sided shape
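
    As a rough sketch of how a capture button can be clipped to that shape, assuming the Material 3 expressive alpha's MaterialShapes presets and a toShape() conversion to a Compose Shape:

    @Composable
    fun CaptureButton(onClick: () -> Unit, modifier: Modifier = Modifier) {
        Box(
            modifier = modifier
                .size(96.dp)
                // Clip the button to the preset cookie polygon from MaterialShapes.
                .clip(MaterialShapes.Cookie9Sided.toShape())
                .background(MaterialTheme.colorScheme.primary)
                .clickable(onClick = onClick),
        )
    }
    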

    Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

      • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

        moving example of expressive button shapes in slow motion

      • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

        animated marquee example

      • Fun color splash animation as a transition between screens.

        moving image of a blue color splash transition between Androidify demo screens

      • Animating gradient buttons for the AI-powered actions.

        animated gradient button for AI powered actions example

    To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose

    Adapting to different devices

    Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

    a collage of different adaptive layouts for the Androidify app across small and large screens

    Various adaptive layouts in the app

    For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

      • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly (a minimal sketch of this helper appears after this list). On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called "supporting pane", highlights the supporting dependencies between the main content and the color picker.

      • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information and optimizes the layout for tabletop posture (see the sketch after this list).

      • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).
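
    A minimal sketch of the tabletop check, assuming the androidx.window WindowInfoTracker and FoldingFeature APIs; the isTabletop name is illustrative:

    // Maps the window layout info stream to "is the device in tabletop posture?".
    fun Flow<WindowLayoutInfo>.isTabletop(): Flow<Boolean> = map { layoutInfo ->
        val fold = layoutInfo.displayFeatures.filterIsInstance<FoldingFeature>().firstOrNull()
        fold != null &&
            fold.state == FoldingFeature.State.HALF_OPENED &&
            fold.orientation == FoldingFeature.Orientation.HORIZONTAL
    }

    // Usage (illustrative): WindowInfoTracker.getOrCreate(context).windowLayoutInfo(activity).isTabletop()
    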

    Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
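
    For reference, here is a minimal sketch of what such an isAtLeastMedium() helper can look like, assuming the material3-adaptive currentWindowAdaptiveInfo() API; the threshold choice is an assumption:

    @Composable
    fun isAtLeastMedium(): Boolean {
        // Treat anything wider than the compact width class as "medium or larger".
        val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
        return widthClass != WindowWidthSizeClass.COMPACT
    }
    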

    CameraX and Media3 Compose

    To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

    The app uses a custom CameraLayout composable that handles the layout of the typical composables a camera preview screen includes, such as zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like tabletop mode and the rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.

    CameraLayout in Compose

    CameraLayout composable that takes care of different device configurations, such as table top mode

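
    For context, a minimal sketch of rendering the preview with CameraXViewfinder, assuming the camerax-compose artifact and a view model that exposes the Preview use case's SurfaceRequest (the parameter name is an assumption):

    @Composable
    fun CameraPreview(
        surfaceRequest: SurfaceRequest?, // produced by the CameraX Preview use case
        modifier: Modifier = Modifier,
    ) {
        // CameraXViewfinder renders the frames delivered through the SurfaceRequest.
        surfaceRequest?.let { request ->
            CameraXViewfinder(surfaceRequest = request, modifier = modifier)
        }
    }
    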

    The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

    @Composable
    private fun VideoPlayer(modifier: Modifier = Modifier) {
        val context = LocalContext.current
        var player by remember { mutableStateOf<Player?>(null) }
        LifecycleStartEffect(Unit) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
                repeatMode = Player.REPEAT_MODE_ONE
                prepare()
            }
            onStopOrDispose {
                player?.release()
                player = null
            }
        }
        Box(
            modifier
                .background(MaterialTheme.colorScheme.surfaceContainerLowest),
        ) {
            player?.let { currentPlayer ->
                PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
            }
        }
    }
    

    Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

    var videoFullyOnScreen by remember { mutableStateOf(false) }     
    
    LaunchedEffect(videoFullyOnScreen) {
         if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
    } 
    
    // Added to the player composable to determine whether the video is fully visible;
    // it mutates videoFullyOnScreen, which then toggles the player state.
    Modifier.onVisibilityChanged(
                    containerWidth = LocalView.current.width,
                    containerHeight = LocalView.current.height,
    ) { fullyVisible -> videoFullyOnScreen = fullyVisible }
    
    // A simple version of visibility changed detection
    fun Modifier.onVisibilityChanged(
        containerWidth: Int,
        containerHeight: Int,
        onChanged: (visible: Boolean) -> Unit,
    ) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
        onChanged(
            layoutBounds.boundsInRoot.top > 0 &&
                layoutBounds.boundsInRoot.bottom < containerHeight &&
                layoutBounds.boundsInRoot.left > 0 &&
                layoutBounds.boundsInRoot.right < containerWidth,
        )
    }
    

    Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

    val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)

    OutlinedIconButton(
        onClick = playPauseButtonState::onClick,
        enabled = playPauseButtonState.isEnabled,
    ) {
        val icon =
            if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
        val contentDescription =
            if (playPauseButtonState.showPlay) R.string.play else R.string.pause
        Icon(
            painterResource(icon),
            stringResource(contentDescription),
        )
    }
    

    Check out the code for more details on how CameraX and Media3 were used in Androidify.

    Navigation 3

    Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

    @Composable
    fun MainNavigation() {
       val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
       NavDisplay(
           backStack = backStack,
           onBack = { backStack.removeLastOrNull() },
           entryProvider = entryProvider {
               entry<Home> { entry ->
                   HomeScreen(
                       onAboutClicked = {
                           backStack.add(About)
                       },
                   )
               }
               entry<Camera> {
                   CameraPreviewScreen(
                       onImageCaptured = { uri ->
                           backStack.add(Create(uri.toString()))
                       },
                   )
               }
               // etc
           },
       )
    }
    

    Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:

    moving image of a predictive back shared element transition between screens
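
    A rough sketch of wiring a shared element to that scope, assuming the app wraps NavDisplay in a SharedTransitionLayout; the composable name and key are illustrative:

    @Composable
    fun SharedTransitionScope.MorphingCaptureButton(modifier: Modifier = Modifier) {
        Box(
            modifier.sharedElement(
                rememberSharedContentState(key = "capture-button"), // illustrative key
                // Navigation 3 exposes the AnimatedContentScope for the current entry.
                animatedVisibilityScope = LocalNavAnimatedContentScope.current,
            )
        )
    }
    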

    Learn more about Jetpack Navigation 3, currently in alpha.

    Learn more

    By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    The post Androidify: Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX appeared first on InShot Pro.

    What’s New in Jetpack Compose https://theinshotproapk.com/whats-new-in-jetpack-compose/ Thu, 29 May 2025 12:02:38 +0000 https://theinshotproapk.com/whats-new-in-jetpack-compose/ Posted by Nick Butcher – Product Manager At Google I/O 2025, we announced a host of features, performance, stability, libraries, ...


    Posted by Nick Butcher – Product Manager

    At Google I/O 2025, we announced a host of features, performance, stability, libraries, and tools updates for Jetpack Compose, our recommended Android UI toolkit. With Compose you can build excellent apps that work across devices. Compose has matured a lot since it was first announced (at Google I/O 2019!), and we're now seeing 60% of the top 1,000 apps in the Play Store, such as MAX and Google Drive, using and loving it.

    New Features

    Since I/O last year, the Compose Bill of Materials (BOM) version 2025.05.01 has added new features such as:

      • Autofill support that lets users automatically insert previously entered personal information into text fields.
      • Auto-sizing text to smoothly adapt text size to a parent container size (a rough sketch appears below).
      • Visibility tracking for when you need high-performance information on a composable’s position in its root container, screen, or window.
      • Animate bounds modifier for beautiful automatic animations of a Composable’s position and size within a LookaheadScope.
      • Accessibility checks in tests that let you build a more accessible app UI through automated a11y testing.

    LookaheadScope {
        Box(
            Modifier
                .animateBounds(this@LookaheadScope)
                .width(if(inRow) 100.dp else 150.dp)
                .background(..)
                .border(..)
        )
    }
    

    moving image of animate bounds modifier in action
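
    Similarly, the new auto-sizing text support can be used roughly like this, a sketch assuming the TextAutoSize API added to BasicText in this release; the size bounds and parameter names are assumptions:

    BasicText(
        text = "Auto-sizing headline",
        maxLines = 1,
        // Steps the font size between the bounds until the text fits its container.
        autoSize = TextAutoSize.StepBased(
            minFontSize = 12.sp,
            maxFontSize = 28.sp,
            stepSize = 2.sp,
        ),
    )
    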

    For more details on these features, read What's new in the Jetpack Compose April '25 release and check out the Compose talks from Google I/O.

    If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we’re working on including:

      • Pausable Composition (see below)
      • Updates to LazyLayout prefetch
      • Context Menus
      • New modifiers: onFirstVisible, onVisibilityChanged, contentType
      • New Lint checks for frequently changing values and elements that should be remembered in composition

    Please try out the alpha features and provide feedback to help shape the future of Compose.

    Material Expressive

    At Google I/O, we unveiled Material Expressive, Material Design’s latest evolution that helps you make your products even more engaging and easier to use. It’s a comprehensive addition of new components, styles, motion and customization options that help you to build beautiful rich UIs. The Material3 library in the latest alpha BOM contains many of the new expressive components for you to try out.

    moving image of material expressive design example

    Learn more to start building with Material Expressive.

    Adaptive layouts library

    Developing adaptive apps across form factors including phones, foldables, tablets, desktop, cars and Android XR is now easier with the latest enhancements to the Compose adaptive layouts library. The stable 1.1 release adds support for predictive back gestures for smoother transitions and pane expansion for more flexible two pane layouts on larger screens. Furthermore, the 1.2 (alpha) release adds more flexibility for how panes are displayed, adding strategies for reflowing and levitating.

    moving image of compose adaptive layouts updates in the Google Play app

    Compose Adaptive Layouts Updates in the Google Play app

    Learn more about building adaptive android apps with Compose.

    Performance

    With each release of Jetpack Compose, we continue to prioritize performance improvements. The latest stable release includes significant rewrites and improvements to multiple sub-systems, including semantics, focus, and text optimizations. Best of all, these are available to you simply by upgrading your Compose dependency; no code changes are required.

    bar chart of internal benchmarks for performance run on a Pixel 3a device from January to May 2023 measured by jank rate

    Internal benchmark, run on a Pixel 3a

    We continue to work on further performance improvements, notable changes in the latest alpha BOM include:

      • Pausable Composition allows compositions to be paused, and their work split up over several frames.
      • Background text prefetch enables text layout caches to be pre-warmed on a background thread, enabling faster text layout.
      • LazyLayout prefetch improvements enabling lazy layouts to be smarter about how much content to prefetch, taking advantage of pausable composition.

    Together these improvements eliminate nearly all jank in an internal benchmark.

    Stability

    We’ve heard from you that upgrading your Compose dependency can be challenging, encountering bugs or behaviour changes that prevent you from staying on the latest version. We’ve invested significantly in improving the stability of Compose, working closely with the many Google app teams building with Compose to detect and prevent issues before they even make it to a release.

    Google apps develop against and release with snapshot builds of Compose; as such, Compose is tested against the hundreds of thousands of Google app tests and any Compose issues are immediately actioned by our team. We have recently invested in increasing the cadence of updating these snapshots and now update them daily from Compose tip-of-tree, which means we’re receiving feedback faster, and are able to resolve issues long before they reach a public release of the library.

    Jetpack Compose also relies on @Experimental annotations to mark APIs that are subject to change. We heard your feedback that some APIs have remained experimental for a long time, reducing your confidence in the stability of Compose. We have invested in stabilizing experimental APIs to provide you a more solid API surface, and reduced the number of experimental APIs by 32% in the last year.

    We have also heard that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. In the latest alpha BOM, we have added a new opt-in feature to provide more diagnostic information. Note that this does not currently work with minified builds and comes at a performance cost, so we recommend only using this feature in debug builds.

    class App : Application() {
        override fun onCreate() {
            super.onCreate()
            // Enable only for debug flavor to avoid perf impact in release
            Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
        }
    }
    

    Libraries

    We know that to build great apps, you need Compose integration in the libraries that interact with your app’s UI.

    A core library that powers any Compose app is Navigation. You told us that you often encountered limitations when managing state hoisting and directly manipulating the back stack with the current Compose Navigation solution. We went back to the drawing board and completely reimagined how a navigation library should integrate with the Compose mental model. We're excited to introduce Navigation 3, a new artifact designed to empower you with greater control and simplify complex navigation flows.

    We’re also investing in Compose support for CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.

    @Composable
    private fun VideoPlayer(
        player: Player?, // from media3
        modifier: Modifier = Modifier
    ) {
        Box(modifier) {
            PlayerSurface(player) // from media3-ui-compose
            player?.let {
                // custom play-pause button UI
                val playPauseButtonState = rememberPlayPauseButtonState(it) // from media3-ui-compose
                MyPlayPauseButton(playPauseButtonState, Modifier.align(BottomEnd).padding(16.dp))
            }
        }
    }
    

    To learn more, see the media3 Compose documentation and the CameraX samples.

    Tools

    We continue to improve the Android Studio tools for creating Compose UIs. The latest Narwhal canary includes:

      • Resizable Previews instantly show you how your Compose UI adapts to different window sizes
      • Preview navigation improvements using clickable names and components
      • Studio Labs 🧪: Compose preview generation with Gemini, which quickly generates a preview for you
      • Studio Labs 🧪: Transform UI with Gemini, which changes your UI with natural language, directly from the preview.
      • Studio Labs 🧪: Image attachment in Gemini, which generates Compose code from images.

    For more information read What’s new in Android development tools.

    moving image of resizable preview in Jetpack Compose

    Resizable Preview

    New Compose Lint checks

    The Compose alpha BOM introduces two new annotations and associated lint checks to help you to write correct and performant Compose code. The @FrequentlyChangingValue annotation and FrequentlyChangedStateReadInComposition lint check warn in situations where function calls or property reads in composition might cause frequent recompositions. For example, frequent recompositions might happen when reading scroll position values or animating values. The @RememberInComposition annotation and RememberInCompositionDetector lint check warn in situations where constructors, functions, and property getters are called directly inside composition (e.g. the TextFieldState constructor) without being remembered.
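
    As a quick illustration of what the RememberInCompositionDetector check flags, here is a sketch contrasting a TextFieldState constructed directly in composition with the remembered equivalent:

    @Composable
    fun SearchField() {
        // Flagged: a new TextFieldState is constructed on every recomposition.
        val unstableState = TextFieldState()

        // Preferred: the state instance is remembered across recompositions.
        val state = rememberTextFieldState()
        BasicTextField(state = state)
    }
    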

    Happy Composing

    We continue to invest in providing the features, performance, stability, libraries and tools that you need to build excellent apps. We value your input so please share feedback on our latest updates or what you’d like to see next.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    The post What’s New in Jetpack Compose appeared first on InShot Pro.
