Bringing Androidify to Wear OS with Watch Face Push


Posted by Garan Jenkin – Developer Relations Engineer





A few months ago we
relaunched Androidify as an app for generating personalized Android bots. Androidify transforms your selfie photo into a playful Android bot using Gemini and Imagen.

However, Android spans multiple form factors, including our most recent addition, XR, so we asked ourselves: how could we bring the fun of Androidify to Wear OS?

An Androidify watch face

As Androidify bots are highly personalized, the natural place to showcase them is the watch face. Not only is it the most frequently visible surface on the watch, it's also the most personal one, letting you express who you are.


Personalized Androidify watch face, generated from selfie image

Androidify now has the ability to generate a watch face dynamically within the phone app and then send it to your watch, where it will automatically be set as your watch face. All of this happens within seconds!

High-level design

End-to-end flow for watch face creation and installation

To achieve the end-to-end experience, a number of technologies need to be combined, as shown in this high-level design diagram.

First of all, the user's avatar is combined with a pre-existing Watch Face Format template, which is then packaged into an APK. This APK is validated – for reasons explained below – and sent to the watch.

When the APK is received by the watch, the new Watch Face Push API – part of Wear OS 6 – is used to install and activate the watch face.

Let’s explore the details:

Creating the watch face templates

The watch face is created from a template designed in Watch Face Designer, our new Figma plugin that lets you create Watch Face Format watch faces directly within Figma.


An Androidify watch face template in Watch Face Designer


The plugin allows the watch face to be exported in a range of different ways, including as Watch Face Format (WFF) resources. These can then be easily incorporated as assets within the Androidify app, for dynamically building the finalized watch face.
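As a rough illustration of what "dynamically building" means here, the app can stage the exported template alongside the freshly generated avatar before packaging. The asset names, directory layout, and helper below are hypothetical rather than Androidify's actual structure:

import android.content.Context
import android.graphics.Bitmap
import java.io.File

// Hypothetical helper: copies the exported WFF template out of assets and writes
// the generated avatar where the template's image resource expects to find it.
fun stageWatchFaceInputs(context: Context, avatar: Bitmap, workDir: File) {
    // The WFF layout exported from Watch Face Designer, bundled as an asset.
    context.assets.open("watchface_template/watchface.xml").use { input ->
        File(workDir, "res/raw/watchface.xml").apply { parentFile?.mkdirs() }
            .outputStream().use { output -> input.copyTo(output) }
    }
    // The user's Androidify avatar, dropped in as a drawable resource.
    File(workDir, "res/drawable/avatar.png").apply { parentFile?.mkdirs() }
        .outputStream().use { output ->
            avatar.compress(Bitmap.CompressFormat.PNG, 100, output)
        }
    // workDir is then handed to Pack to assemble the APK (next section).
}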

Packaging and validation

Once the template and avatar have been combined, the Portable Asset Compiler Kit (Pack) is used to assemble an APK.

In Androidify, Pack is used as a native library on the phone. For more details on how Androidify interfaces with the Pack library, see the GitHub repository.

As a final step before transmission, the APK is checked by the Watch Face Push validator.

This validator checks that the APK is suitable for installation: it inspects the contents of the APK to ensure it is a valid watch face and runs some performance checks. If the APK passes, the validator produces a token, which the watch requires for installation.

Sending the watch face

The Androidify app on Wear OS uses WearableListenerService to listen for events on the Wearable Data Layer.

The phone app transfers the watch face by using a combination of MessageClient to set up the process, then ChannelClient to stream the APK.
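As a rough sketch, the ChannelClient side of that transfer could look like the following. The "/androidify/watchface" path is a made-up example and the kotlinx-coroutines-play-services await() extension is assumed; Androidify's real implementation also coordinates the handshake over MessageClient and handles errors and progress.

import android.content.Context
import com.google.android.gms.wearable.Wearable
import kotlinx.coroutines.tasks.await
import java.io.File

suspend fun sendWatchFaceApk(context: Context, apk: File) {
    val nodeClient = Wearable.getNodeClient(context)
    val channelClient = Wearable.getChannelClient(context)

    // Pick a connected watch; a production app would check capabilities first.
    val node = nodeClient.connectedNodes.await().firstOrNull() ?: return

    // Open a channel to the watch and stream the APK bytes over it.
    val channel = channelClient.openChannel(node.id, "/androidify/watchface").await()
    channelClient.getOutputStream(channel).await().use { output ->
        apk.inputStream().use { input -> input.copyTo(output) }
    }
    channelClient.close(channel).await()
}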

Installing the watch face on the watch

Once the watch face is received on the Wear OS device, the Androidify app uses the new Watch Face Push API to install the watch face:

val wfpManager =
    WatchFacePushManagerFactory.createWatchFacePushManager(context)
val response = wfpManager.listWatchFaces()

try {
    if (response.remainingSlotCount > 0) {
        // A slot is free – install the watch face into it.
        wfpManager.addWatchFace(apkFd, token)
    } else {
        // No free slots – replace the existing watch face with the new one.
        val slotId = response.installedWatchFaceDetails.first().slotId
        wfpManager.updateWatchFace(slotId, apkFd, token)
    }
} catch (a: WatchFacePushManager.AddWatchFaceException) {
    return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
} catch (u: WatchFacePushManager.UpdateWatchFaceException) {
    return WatchFaceInstallError.WATCH_FACE_INSTALL_ERROR
}

Androidify uses either the addWatchFace or updateWatchFace method, depending on the scenario: Watch Face Push defines a concept of "slots", which determines how many watch faces a given app can have installed at any one time. On Wear OS 6, this value is 1.

Androidify's approach is to install the watch face if there is a free slot; if not, the existing watch face is swapped out for the new one.

Setting the active watch face

Installing the watch face programmatically is a great first step, but Androidify also wants to make it the active watch face.

Watch Face Push introduces a new runtime permission that must be granted before an app can do this:

com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE

Once this permission has been granted, the wfpManager.setWatchFaceAsActive() method can be called to set an installed watch face as the active watch face.

However, there are a number of considerations that Androidify has to navigate:

  • setWatchFaceAsActive can only be used once.

  • SET_PUSHED_WATCH_FACE_AS_ACTIVE cannot be re-requested after being denied by the user.

  • Androidify might already be in control of the active watch face.

For more details see how Androidify implements the set active logic.
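To make the sequence concrete, here is a simplified happy-path sketch. It assumes the permission has already been requested through the normal runtime-permission flow, and that setWatchFaceAsActive is given the slot ID of the watch face that was just installed; see the Androidify repository for how the considerations above are actually handled.

import android.content.Context
import android.content.pm.PackageManager
import androidx.core.content.ContextCompat

const val SET_ACTIVE_PERMISSION =
    "com.google.wear.permission.SET_PUSHED_WATCH_FACE_AS_ACTIVE"

suspend fun activateIfPermitted(
    context: Context,
    wfpManager: WatchFacePushManager,
    slotId: String,
): Boolean {
    val granted = ContextCompat.checkSelfPermission(context, SET_ACTIVE_PERMISSION) ==
        PackageManager.PERMISSION_GRANTED
    // The permission can't be re-requested after a denial, so just report failure here.
    if (!granted) return false
    // This call is only honoured once for the app, so guard it carefully.
    wfpManager.setWatchFaceAsActive(slotId)
    return true
}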

Get started with Watch Face Push for Wear OS

Watch Face Push is a versatile API, equally suited to enhancing Androidify as it is to building fully-featured watch face marketplaces.

Perhaps you have an existing phone app and are looking for opportunities to further engage and delight your users?

Or perhaps you’re an existing watch face developer looking to create your own community and gallery through releasing a marketplace app?

Take a look at these resources:

Also check out the accompanying video for a more in-depth look at how we brought Androidify to Wear OS!


We’re looking forward to what you’ll create with Watch Face Push!

Androidify: Building AI first Android Experiences with Gemini using Jetpack Compose and Firebase


Posted by Rebecca Franks – Developer Relations Engineer, Tracy Agyemang – Product Marketer, and Avneet Singh – Product Manager

Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it's available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt describing what you're looking for, add some accessories, and watch as AI builds your unique bot. Once you've had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let's dive in!

Key technical integrations

The Androidify app combines powerful technologies to deliver a seamless and engaging user experience. Here’s a breakdown of the core components and their roles:

AI with Gemini and Firebase

Androidify leverages the Firebase AI Logic SDK to access Google’s powerful Gemini and Imagen* models. This is crucial for several key features:

  • Image validation: The app first uses Gemini 2.5 Flash to validate the user’s photo. This includes checking that the image contains a clear, focused person and meets safety standards before any further processing. This is a critical first step to ensure high-quality and safe outputs.
  • Image captioning: Once validated, the model generates a detailed caption of the user’s image. This is done using structured output, which means the model returns a specific JSON format, making it easier for the app to parse the information. This detailed description helps create a more accurate and creative final result.
  • Android Bot Generation: The generated caption is then used to enrich the prompt for the final image generation. A specifically fine-tuned version of the Imagen 3 model is then called to generate the custom Android bot avatar based on the enriched prompt. This custom fine-tuning ensures the results are unique and align with the app’s playful and stylized aesthetic.
  • The Androidify app also has a “Help me write” feature which uses Gemini 2.5 Flash to create a random description for a bot’s clothing and hairstyle, adding a bit of a fun “I’m feeling lucky” element.

    gif showcasing the help me write button
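The "Help me write" call can be as simple as a single text-only request through Firebase AI Logic. Here's a rough sketch – the prompt wording and the "gemini-2.5-flash" model name are illustrative choices rather than Androidify's exact values:

suspend fun helpMeWrite(): String? {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel(modelName = "gemini-2.5-flash")
    // Ask the model for a short, playful bot description to prefill the prompt field.
    val response = model.generateContent(
        "Write a short, playful description of an Android bot's outfit and hairstyle.",
    )
    return response.text
}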

    UI with Jetpack Compose and CameraX

    The app’s user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.

    For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera’s view, enabling the capture button and adding visual guides for the user. It also makes the app’s camera features responsive to different device types, including foldables in tabletop mode.

    Androidify also makes extensive use of the latest Compose features, such as:

  • Adaptive layouts: It’s designed to look great on various screen sizes, from phones to foldables and tablets, by leveraging WindowSizeClass and reusable composables.
  • Shared element transitions: The app uses the new Jetpack Navigation 3 library to create smooth and delightful screen transitions, including morphing shape animations that add a polished feel to the user experience.
  • Auto-sizing text: With Compose 1.8, the app uses a new parameter that automatically adjusts font size to fit the container’s available size, which is used for the app’s main “Customize your own Android Bot” text.
Chart illustrating the behavior of the Androidify app flow

    Figure 1. Androidify Flow

    Latest updates

In the latest version of Androidify, we've added some powerful new AI-driven features.

    Background vibe generation with Gemini Image editing

    Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background “vibe” to bring the Android bots to life.

A three-part image: an Android bot on the left; a text prompt in the middle that reads "A vibrant 3D illustration of a vibrant outdoor garden with fun plants. The flowers in this scene have an alien-like quality to them and are brightly colored. The entire scene is rendered with a meticulous mixture of rounded, toy-like objects, creating a clean, minimalist aesthetic..."; and, on the right, the same Android bot standing in a toy-like garden scene surrounded by brightly colored flowers, with a white picket fence in the background and a red watering can on the ground next to the droid bot

    Figure 2. Combining the Android bot with a background vibe description to generate your new Android Bot in a scene

This is achieved by using Firebase AI Logic – passing a prompt for the background vibe and the input image bitmap of the bot, with instructions to Gemini on how to combine the two.

    override suspend fun generateImageWithEdit(
            image: Bitmap,
            backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
        ): Bitmap {
            val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
                modelName = "gemini-2.5-flash-image-preview",
                generationConfig = generationConfig {
                    responseModalities = listOf(
                        ResponseModality.TEXT,
                        ResponseModality.IMAGE,
                    )
                },
            )
    	  // We combine the backgroundPrompt with the input image which is the Android Bot, to produce the new bot with a background
            val prompt = content {
                text(backgroundPrompt)
                image(image)
            }
            val response = model.generateContent(prompt)
            val image = response.candidates.firstOrNull()
                ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
            return image ?: throw IllegalStateException("Could not extract image from model response")
        }

    Sticker mode with ML Kit Subject Segmentation

    The app also includes a “Sticker mode” option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot. You can use “Sticker mode” in apps that support stickers.

Background removal

    Figure 3. White background removal of Android Bot to create a PNG that can be used with apps that support stickers

The sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if not, it requests the download and waits for it to complete. Once the model is installed, the app passes the original Android bot image into the segmenter and calls process on it to remove the background. The resulting foregroundBitmap is then returned for exporting.

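For reference, a minimal sketch of that flow using the public ML Kit Subject Segmentation and Module Install APIs is shown below. It approximates, rather than reproduces, Androidify's implementation – module-install handling, threading, and error reporting are all simplified:

import android.content.Context
import android.graphics.Bitmap
import com.google.android.gms.common.moduleinstall.ModuleInstall
import com.google.android.gms.common.moduleinstall.ModuleInstallRequest
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.segmentation.subject.SubjectSegmentation
import com.google.mlkit.vision.segmentation.subject.SubjectSegmenterOptions
import kotlinx.coroutines.tasks.await

suspend fun removeBackground(context: Context, bot: Bitmap): Bitmap {
    val segmenter = SubjectSegmentation.getClient(
        SubjectSegmenterOptions.Builder()
            .enableForegroundBitmap()
            .build(),
    )

    // Request an install of the on-device segmentation model if it isn't available yet.
    val moduleInstall = ModuleInstall.getClient(context)
    val available = moduleInstall.areModulesAvailable(segmenter).await().areModulesAvailable()
    if (!available) {
        moduleInstall.installModules(
            ModuleInstallRequest.newBuilder().addApi(segmenter).build(),
        ).await()
    }

    // Run segmentation and return just the foreground (the bot) for exporting.
    val result = segmenter.process(InputImage.fromBitmap(bot, 0)).await()
    return result.foregroundBitmap ?: bot
}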

See the LocalSegmentationDataSource for the full source implementation.

    Learn more

    To learn more about Androidify behind the scenes, take a look at the new solutions walkthrough, inspect the code or try out the experience for yourself at androidify.com or download the app on Google Play.

moving demo of the Androidify app

    *Check responses. Compatibility and availability varies. 18+.

Top 3 things to know for AI on Android at Google I/O '25


    Posted by Kateryna Semenova – Sr. Developer Relations Engineer

    AI is reshaping how users interact with their favorite apps, opening new avenues for developers to create intelligent experiences. At Google I/O, we showcased how Android is making it easier than ever for you to build smart, personalized and creative apps. And we’re committed to providing you with the tools needed to innovate across the full development stack in this evolving landscape.

    This year, we focused on making AI accessible across the spectrum, from on-device processing to cloud-powered capabilities. Here are the top 3 announcements you need to know for building with AI on Android from Google I/O ‘25:

    #1 Leverage the efficiency of Gemini Nano for on-device AI experiences

For on-device AI, we announced a new set of ML Kit GenAI APIs powered by Gemini Nano, our most efficient and compact model designed and optimized for running directly on mobile devices. These APIs provide high-level, easy integration for common tasks including text summarization, proofreading, rewriting content in different styles, and generating image descriptions. Building on-device offers significant benefits such as local data processing and offline availability at no additional cost for inference. To start integrating these solutions, explore the ML Kit GenAI documentation, the sample on GitHub, and watch the "Gemini Nano on Android: Building with on-device GenAI" talk.

    #2 Seamlessly integrate on-device ML/AI with your own custom models

The Google AI Edge platform enables building and deploying a wide range of pretrained and custom models on edge devices and supports various frameworks like TensorFlow, PyTorch, Keras, and JAX, allowing for more customization in apps. The platform now also offers improved support for on-device hardware accelerators and a new AI Edge Portal service for broad coverage of on-device benchmarking and evals. If you are looking for GenAI language models on devices where Gemini Nano is not available, you can use other open models via the MediaPipe LLM Inference API.
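As a rough sketch, running an open model with the MediaPipe LLM Inference API looks something like the following – the model file path is a placeholder for a model you've bundled or downloaded yourself, and option names can differ between library versions:

import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun summarizeOnDevice(context: Context, text: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.task") // placeholder model file
        .build()
    // Create the inference engine, run a single prompt, and release it when done.
    val llm = LlmInference.createFromOptions(context, options)
    val answer = llm.generateResponse("Summarize the following text:\n$text")
    llm.close()
    return answer
}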

    Serving your own custom models on-device can pose challenges related to handling large model downloads and updates, impacting the user experience. To improve this, we’ve launched Play for On-Device AI in beta. This service is designed to help developers manage custom model downloads efficiently, ensuring the right model size and speed are delivered to each Android device precisely when needed.

For more information, watch the "Small language models with Google AI Edge" talk.

    #3 Power your Android apps with Gemini Flash, Pro and Imagen using Firebase AI Logic

For more advanced generative AI use cases, such as complex reasoning tasks, analyzing large amounts of data, processing audio or video, or generating images, you can use larger models from the Gemini Flash and Gemini Pro families, and Imagen running in the cloud. These models are well suited for scenarios requiring advanced capabilities or multimodal inputs and outputs. And since the AI inference runs in the cloud, any Android device with an internet connection is supported. They are easy to integrate into your Android app by using Firebase AI Logic, which provides a simplified, secure way to access these capabilities without managing your own backend. Its SDK also includes support for conversational AI experiences using the Gemini Live API or generating custom contextual visual assets with Imagen. To learn more, check out our sample on GitHub and watch the "Enhance your Android app with Gemini Pro and Flash, and Imagen" session.

These powerful AI capabilities can also be brought to life in immersive Android XR experiences. You can find corresponding documentation, samples and the technical session: "The future is now, with Compose and AI on Android XR".

Flow chart demonstrating Firebase AI Logic integration architecture

    Figure 1: Firebase AI Logic integration architecture

    Get inspired and start building with AI on Android today

We released a new open source app, Androidify, to help developers build AI-driven Android experiences using Gemini APIs, ML Kit, Jetpack Compose, CameraX, Navigation 3, and adaptive design. Users can create a personalized Android bot with Gemini and Imagen via the Firebase AI Logic SDK. Additionally, it incorporates ML Kit pose detection to detect a person in the camera viewfinder. The full code sample is available on GitHub for exploration and inspiration. Discover additional AI examples in our Android AI Sample Catalog.

    moving image of the Androidify app on a mobile device, showing a fair-skinned woman with blond hair wearing a red jacket with black shirt and pants and a pair of sunglasses converting into a 3D image of a droid with matching skin tone and blond hair wearing a red jacket with black shirt and pants and a pair of sunglasses

    The original image and Androidifi-ed image

    Choosing the right Gemini model depends on understanding your specific needs and the model’s capabilities, including modality, complexity, context window, offline capability, cost, and device reach. To explore these considerations further and see all our announcements in action, check out the AI on Android at I/O ‘25 playlist on YouTube and check out our documentation.

    We are excited to see what you will build with the power of Gemini!

Androidify: Building delightful UIs with Compose


    Posted by Rebecca Franks – Developer Relations Engineer

    Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

    Material 3 Expressive

    Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

    It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.

Material Expressive Component updates
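For reference, pulling the alpha artifact mentioned above into your project is a one-line dependency in the module's build.gradle.kts:

dependencies {
    implementation("androidx.compose.material3:material3:1.4.0-alpha10")
}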

    In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

    In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted-in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

    @Composable
    fun AndroidifyTheme(
       content: @Composable () -> Unit,
    ) {
       val colorScheme = LightColorScheme
    
    
       MaterialExpressiveTheme(
           colorScheme = colorScheme,
           typography = Typography,
           shapes = shapes,
           motionScheme = MotionScheme.expressive(),
           content = {
               SharedTransitionLayout {
                   CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                       content()
                   }
               }
           },
       )
    }
    

    Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

    moving example of expressive button shapes in slow motion

    The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:


    Camera button with a MaterialShapes.Cookie9Sided shape

    Animations

Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feel throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something on screen (such as x/y translation, rotation, or scale):

    val interactionSource = remember { MutableInteractionSource() }
    val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
    Spacer(
       modifier
           .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
           .clip(MaterialShapes.Cookie9Sided.toShape())
           .size(size)
           .drawWithCache {
               //.. etc
           },
    )
    

Camera button scale interaction

    Shared element animations

    The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

    moving example of expressive button shapes in slow motion

    To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

@Composable
fun Modifier.sharedBoundsRevealWithShapeMorph(
    sharedContentState: SharedTransitionScope.SharedContentState,
    sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
    animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
    boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
    resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
    restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
    targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
)
    

    Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

    val animatedProgress =
       animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)
    
    
    val morph = remember {
       Morph(restingShape, targetShape)
    }
    val morphClip = MorphOverlayClip(morph, { animatedProgress.value })
    
    
    return this@sharedBoundsRevealWithShapeMorph
       .sharedBounds(
           sharedContentState = sharedContentState,
           animatedVisibilityScope = animatedVisibilityScope,
           boundsTransform = boundsTransform,
           resizeMode = resizeMode,
           clipInOverlayDuringTransition = morphClip,
           renderInOverlayDuringTransition = renderInOverlayDuringTransition,
       )
    

View the full code snippet for this Modifier on GitHub.

    Autosize text

    With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

BasicText(
    text,
    style = MaterialTheme.typography.titleLarge,
    autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
)
    

    This is used front and center for the “Customize your own Android Bot” text:

    Text reads Customize your own Android Bot with an inline moving image

    “Customize your own Android Bot” text with inline GIF

    This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

    @Composable
    private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
       Box(modifier = modifier) {
           val animatedBot = "animatedBot"
           val text = buildAnnotatedString {
               append(stringResource(R.string.customize))
               // Attach "animatedBot" annotation on the placeholder
               appendInlineContent(animatedBot)
               append(stringResource(R.string.android_bot))
           }
           var placeHolderSize by remember {
               mutableStateOf(220.sp)
           }
           val inlineContent = mapOf(
               Pair(
                   animatedBot,
                   InlineTextContent(
                       Placeholder(
                           width = placeHolderSize,
                           height = placeHolderSize,
                           placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                       ),
                   ) {
                       DancingBot(
                           modifier = Modifier
                               .padding(top = 32.dp)
                               .fillMaxSize(),
                       )
                   },
               ),
           )
           BasicText(
               text,
               modifier = Modifier
                   .align(Alignment.Center)
                   .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
               style = MaterialTheme.typography.titleLarge,
               autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
               maxLines = 6,
               onTextLayout = { result ->
                   placeHolderSize = result.layoutInput.style.fontSize * 3.5f
               },
               inlineContent = inlineContent,
           )
       }
    }
    

    Composable visibility with onLayoutRectChanged

    With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

    In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

    var buttonBounds by remember {
       mutableStateOf<RelativeLayoutBounds?>(null)
    }
    var showColorSplash by remember {
       mutableStateOf(false)
    }
    Box(modifier = Modifier.fillMaxSize()) {
       PrimaryButton(
           buttonText = "Let's Go",
           modifier = Modifier
               .align(Alignment.BottomCenter)
               .onLayoutRectChanged(
                   callback = { bounds ->
                       buttonBounds = bounds
                   },
               ),
           onClick = {
               showColorSplash = true
           },
       )
    }
    

    We use these bounds as an indication of where to start the color splash animation from.
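For example, the splash origin can be derived from those bounds roughly like this – a sketch that assumes the color splash composable takes an Offset in root coordinates; Androidify's actual animation code lives in the repository:

val splashOrigin: Offset? = buttonBounds?.let { bounds ->
    val rect = bounds.boundsInRoot
    // Use the centre of the "Let's Go" button as the origin of the color splash.
    Offset(
        x = (rect.left + rect.right) / 2f,
        y = (rect.top + rect.bottom) / 2f,
    )
}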

    moving image of a blue color splash transition between Androidify demo screens

    Learn more delightful details

    From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

    animated marquee example

    animated gradient button for AI powered actions example

    animated loading screen example

    Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose from using Material 3 Expressive, the new modifiers, auto-sizing text and of course a couple of delightful interactions!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Androidify: How Androidify leverages Gemini, Firebase and ML Kit


    Posted by Thomas Ezan – Developer Relations Engineer, Rebecca Franks – Developer Relations Engineer, and Avneet Singh – Product Manager

    We’re bringing back Androidify later this year, this time powered by Google AI, so you can customize your very own Android bot and share your creativity with the world. Today, we’re releasing a new open source demo app for Androidify as a great example of how Google is using its Gemini AI models to enhance app experiences.

    In this post, we’ll dive into how the Androidify app uses Gemini models and Imagen via the Firebase AI Logic SDK, and we’ll provide some insights learned along the way to help you incorporate Gemini and AI into your own projects. Read more about the Androidify demo app.

    App flow

    The overall app functions as follows, with various parts of it using Gemini and Firebase along the way:

    flow chart demonstrating Androidify app flow

    Gemini and image validation

    To get started with Androidify, take a photo or choose an image on your device. The app needs to make sure that the image you upload is suitable for creating an avatar.

    Gemini 2.5 Flash via Firebase helps with this by verifying that the image contains a person, that the person is in focus, and assessing image safety, including whether the image contains abusive content.

    val jsonSchema = Schema.obj(
       properties = mapOf("success" to Schema.boolean(), "error" to Schema.string()),
       optionalProperties = listOf("error"),
       )
       
    val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
       .generativeModel(
                modelName = "gemini-2.5-flash-preview-04-17",
       	     generationConfig = generationConfig {
                    responseMimeType = "application/json"
                    responseSchema = jsonSchema
                },
                safetySettings = listOf(
                    SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE),
                    SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.LOW_AND_ABOVE),
                    SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.LOW_AND_ABOVE),
                    SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.LOW_AND_ABOVE),
                    SafetySetting(HarmCategory.CIVIC_INTEGRITY, HarmBlockThreshold.LOW_AND_ABOVE),
        	),
        )
    
     val response = generativeModel.generateContent(
                content {
                    text("You are to analyze the provided image and determine if it is acceptable and appropriate based on specific criteria.... (more details see the full sample)")
                    image(image)
                },
            )
    
val jsonResponse = Json.parseToJsonElement(response.text!!)
    val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
    val error = jsonResponse.jsonObject["error"]?.jsonPrimitive?.content
    

    In the snippet above, we’re leveraging structured output capabilities of the model by defining the schema of the response. We’re passing a Schema object via the responseSchema param in the generationConfig.

We want to validate that the image has enough information to generate a nice Android avatar, so we ask the model to return a JSON object with success = true/false and an optional error message explaining why the image doesn't have enough information.

    Structured output is a powerful feature enabling a smoother integration of LLMs to your app by controlling the format of their output, similar to an API response.

    Image captioning with Gemini Flash

    Once it’s established that the image contains sufficient information to generate an Android avatar, it is captioned using Gemini 2.5 Flash with structured output.

    val jsonSchema = Schema.obj(
                properties = mapOf(
                    "success" to Schema.boolean(),
                    "user_description" to Schema.string(),
                ),
                optionalProperties = listOf("user_description"),
            )
    val generativeModel = createGenerativeTextModel(jsonSchema)
    
    val prompt = "You are to create a VERY detailed description of the main person in the given image. This description will be translated into a prompt for a generative image model..."
    
val response = generativeModel.generateContent(
    content {
        text(prompt)
        image(image)
    },
)
            
    val jsonResponse = Json.parseToJsonElement(response.text!!) 
    val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
    
    val userDescription = jsonResponse.jsonObject["user_description"]?.jsonPrimitive?.content
    

The other option in the app is to start with a text prompt. You can enter details about your accessories, hairstyle, and clothing, and let Imagen be a bit more creative.

    Android generation via Imagen

    We’ll use this detailed description of your image to enrich the prompt used for image generation. We’ll add extra details around what we would like to generate and include the bot color selection as part of this too, including the skin tone selected by the user.

    val imagenPrompt = "A 3D rendered cartoonish Android mascot in a photorealistic style, the pose is relaxed and straightforward, facing directly forward [...] The bot looks as follows $userDescription [...]"
    

    We then call the Imagen model to create the bot. Using this new prompt, we create a model and call generateImages:

    // we supply our own fine-tuned model here but you can use "imagen-3.0-generate-002" 
    val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
                "imagen-3.0-generate-002",
                safetySettings =
                ImagenSafetySettings(
                    ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
                    personFilterLevel = ImagenPersonFilterLevel.ALLOW_ALL,
                ),
    )
    
    val response = generativeModel.generateImages(imagenPrompt)
    
    val image = response.images.first().asBitmap()
    

    And that’s it! The Imagen model generates a bitmap that we can display on the user’s screen.

Fine-tuning the Imagen model

The Imagen 3 model was fine-tuned using Low-Rank Adaptation (LoRA). LoRA is a fine-tuning technique designed to reduce the computational burden of training large models. Instead of updating the entire model, LoRA adds smaller, trainable "adapters" that make small changes to the model's performance. We ran a fine-tuning pipeline on the generally available Imagen 3 model with Android bot assets of different color combinations and different assets for enhanced cuteness and fun. We generated text captions for the training images, and the image-text pairs were used to fine-tune the model effectively.

    The current sample app uses a standard Imagen model, so the results may look a bit different from the visuals in this post. However, the app using the fine-tuned model and a custom version of Firebase AI Logic SDK was demoed at Google I/O. This app will be released later this year and we are also planning on adding support for fine-tuned models to Firebase AI Logic SDK later in the year.

    moving image of Androidify app demo turning a selfie image of a bearded man wearing a black tshirt and sunglasses, with a blue back pack into a green 3D bearded droid wearing a black tshirt and sunglasses with a blue backpack

    The original image… and Androidifi-ed image

    ML Kit

    The app also uses the ML Kit Pose Detection SDK to detect a person in the camera view, which triggers the capture button and adds visual indicators.

    To do this, we add the SDK to the app, and use PoseDetection.getClient(). Then, using the poseDetector, we look at the detectedLandmarks that are in the streaming image coming from the Camera, and we set the _uiState.detectedPose to true if a nose and shoulders are visible:

    private suspend fun runPoseDetection() {
        PoseDetection.getClient(
            PoseDetectorOptions.Builder()
                .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
                .build(),
        ).use { poseDetector ->
            // Since image analysis is processed by ML Kit asynchronously in its own thread pool,
            // we can run this directly from the calling coroutine scope instead of pushing this
            // work to a background dispatcher.
            cameraImageAnalysisUseCase.analyze { imageProxy ->
                imageProxy.image?.let { image ->
                    val poseDetected = poseDetector.detectPersonInFrame(image, imageProxy.imageInfo)
                    _uiState.update { it.copy(detectedPose = poseDetected) }
                }
            }
        }
    }
    
    private suspend fun PoseDetector.detectPersonInFrame(
        image: Image,
        imageInfo: ImageInfo,
    ): Boolean {
        val results = process(InputImage.fromMediaImage(image, imageInfo.rotationDegrees)).await()
        val landmarkResults = results.allPoseLandmarks
        val detectedLandmarks = mutableListOf<Int>()
        for (landmark in landmarkResults) {
            if (landmark.inFrameLikelihood > 0.7) {
                detectedLandmarks.add(landmark.landmarkType)
            }
        }
    
        return detectedLandmarks.containsAll(
            listOf(PoseLandmark.NOSE, PoseLandmark.LEFT_SHOULDER, PoseLandmark.RIGHT_SHOULDER),
        )
    }
    

    moving image showing the camera shutter button activating when an orange droid figurine is held in the camera frame

    The camera shutter button is activated when a person (or a bot!) enters the frame.

    Get started with AI on Android

The Androidify app makes extensive use of Gemini 2.5 Flash to validate the image and to produce a detailed description that is then used to generate the final bot. It also leverages the specifically fine-tuned Imagen 3 model to generate images of Android bots. Gemini and Imagen models are easily integrated into the app via the Firebase AI Logic SDK. In addition, the ML Kit Pose Detection SDK controls the capture button, enabling it only when a person is present in front of the camera.

    To get started with AI on Android, go to the Gemini and Imagen documentation for Android.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Androidify: Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX


    Posted by Rebecca Franks – Developer Relations Engineer

The Android bot is a beloved mascot for Android users and developers, and previous versions of the bot builder have been very popular – so this year we decided to rebuild the bot maker from the ground up, using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

    a moving image of various droid bots dancing individually

    Androidify app demo

    Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

    moving image showing the conversion of an image of a woman in a pink dress holding na umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

    Under the hood

    The app combines a variety of different Google technologies, such as:

      • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
      • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
      • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
      • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

    This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting looking examples!

    How does the Androidify app work?

    The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.

    Flow chart describing Androidify app flow

    Androidify app flow chart detailing how the app works with AI

    AI in Androidify with Gemini and ML Kit

    The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

  • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multimodal capabilities of the Gemini API, by giving it a prompt and image at the same time:

    val response = generativeModel.generateContent(
       content {
           text(prompt)
           image(image)
       },
    )
    

      • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

  • Image captioning: Once we're sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

      • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.

      • Image generation from the generated prompt: As the final step, Imagen generates the image, providing the prompt and the selected skin tone of the bot.

    The app also uses the ML Kit pose detection to detect a person in the viewfinder and enable the capture button when a person is detected, as well as adding fun indicators around the content to indicate detection.

    Explore more detailed information about AI usage in Androidify.

    Jetpack Compose

    The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

    Delightful details with the UI

    The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out-of-the-box, like new shapes, componentry, and using the MotionScheme variables wherever a motion spec is needed.

    MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

    Androidify app UI showing camera button

    Camera button with a MaterialShapes.Cookie9Sided shape

    Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

      • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

        moving example of expressive button shapes in slow motion

      • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

        animated marquee example

      • Fun color splash animation as a transition between screens.

        moving image of a blue color splash transition between Androidify demo screens

      • Animating gradient buttons for the AI-powered actions.

        animated gradient button for AI powered actions example

    To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose

    Adapting to different devices

    Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

    a collage of different adaptive layouts for the Androidify app across small and large screens

    Various adaptive layouts in the app

    For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

      • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.

      • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

      • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).

    Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
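For illustration, a helper in the spirit of isAtLeastMedium() can be built on top of currentWindowAdaptiveInfo() from androidx.compose.material3.adaptive – this is a sketch, not Androidify's exact helper:

@Composable
fun isAtLeastMedium(): Boolean {
    val windowSizeClass = currentWindowAdaptiveInfo().windowSizeClass
    // Anything wider than compact (medium or expanded) counts as "at least medium".
    return windowSizeClass.windowWidthSizeClass != WindowWidthSizeClass.COMPACT
}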

    CameraX and Media3 Compose

    To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

    The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include— for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like the tabletop mode and rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.

    CameraLayout in Compose

    CameraLayout composable that takes care of different device configurations, such as table top mode

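As a rough sketch of that wiring – assuming the camera-compose artifact and a Preview use case that is already bound to a lifecycle elsewhere; this is an illustration, not Androidify's CameraLayout itself:

@Composable
fun BotViewfinder(previewUseCase: Preview, modifier: Modifier = Modifier) {
    var surfaceRequest by remember { mutableStateOf<SurfaceRequest?>(null) }

    // Give CameraX a surface provider that simply captures the latest SurfaceRequest.
    LaunchedEffect(previewUseCase) {
        previewUseCase.setSurfaceProvider { request -> surfaceRequest = request }
    }

    surfaceRequest?.let { request ->
        CameraXViewfinder(surfaceRequest = request, modifier = modifier)
    }
}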

    The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

    @Composable
    private fun VideoPlayer(modifier: Modifier = Modifier) {
        val context = LocalContext.current
        var player by remember { mutableStateOf<Player?>(null) }
        LifecycleStartEffect(Unit) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
                repeatMode = Player.REPEAT_MODE_ONE
                prepare()
            }
            onStopOrDispose {
                player?.release()
                player = null
            }
        }
        Box(
            modifier
                .background(MaterialTheme.colorScheme.surfaceContainerLowest),
        ) {
            player?.let { currentPlayer ->
                PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
            }
        }
    }
    

    Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

    var videoFullyOnScreen by remember { mutableStateOf(false) }     
    
    LaunchedEffect(videoFullyOnScreen) {
         if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
    } 
    
    // We add this onto the player composable to determine if the video composable is visible, and mutate the videoFullyOnScreen variable, that then toggles the player state. 
    Modifier.onVisibilityChanged(
                    containerWidth = LocalView.current.width,
                    containerHeight = LocalView.current.height,
    ) { fullyVisible -> videoFullyOnScreen = fullyVisible }
    
    // A simple version of visibility changed detection
    fun Modifier.onVisibilityChanged(
        containerWidth: Int,
        containerHeight: Int,
        onChanged: (visible: Boolean) -> Unit,
    ) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
        onChanged(
            layoutBounds.boundsInRoot.top > 0 &&
                layoutBounds.boundsInRoot.bottom < containerHeight &&
                layoutBounds.boundsInRoot.left > 0 &&
                layoutBounds.boundsInRoot.right < containerWidth,
        )
    }
    

    Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

    val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)
                OutlinedIconButton(
                    onClick = playPauseButtonState::onClick,
                    enabled = playPauseButtonState.isEnabled,
                ) {
                    val icon =
                        if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
                    val contentDescription =
                        if (playPauseButtonState.showPlay) R.string.play else R.string.pause
                    Icon(
                        painterResource(icon),
                        stringResource(contentDescription),
                    )
                }
    

    Check out the code for more details on how CameraX and Media3 were used in Androidify.

    Navigation 3

    Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

    @Composable
    fun MainNavigation() {
       val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
       NavDisplay(
           backStack = backStack,
           onBack = { backStack.removeLastOrNull() },
           entryProvider = entryProvider {
               entry<Home> { entry ->
                   HomeScreen(
                       onAboutClicked = {
                           backStack.add(About)
                       },
                   )
               }
               entry<Camera> {
                   CameraPreviewScreen(
                       onImageCaptured = { uri ->
                           backStack.add(Create(uri.toString()))
                       },
                   )
               }
               // etc
           },
       )
    }
    

    Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:

    CameraLayout in Compose

    Learn more about Jetpack Navigation 3, currently in alpha.

    Learn more

    By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.
