#WeArePlay: Meet the people coding a more sustainable world

Posted by Robbie McLachlan, Developer Marketing

How do you tackle the planet’s biggest sustainability and environmental challenges? For 10 new founders we’re spotlighting in #WeArePlay, it starts with coding. Their apps and games are helping to build a healthier planet by developing career paths for aspiring environmentalists, preserving indigenous knowledge, and turning nature education into an adventure for all.

Here are a few of our favourites:

Ariane, Flávia, Andréia, and Mayla’s game BoRa turns a simple park visit into an immersive, gamified adventure.

Ariane, Flávia, Andréia, and Mayla, co-founders of Fubá Educação Ambiental
São Carlos, Brazil

Passionate about nature, co-founders Mayla, Flávia, Andréia, and Ariane met while researching environmental education. They wanted to foster more meaningful connections between people and Brazil’s national parks. Their app, BoRa – Iguaçu National Park, transforms a visit into an immersive experience using interactive storytelling, gamified trails, and accessibility features like sign language, helping everyone connect more deeply with the natural world.

Louis and Justin’s app, CyberTracker, turns the ancient knowledge of indigenous trackers into vital scientific data for modern conservation.

Louis, co-founder of CyberTracker Conservation
Cape Town, South Africa

Louis knew that animal tracking was a science, but the expert knowledge of many indigenous trackers couldn’t be recorded because they were unable to read or write. He partnered with Justin to create CyberTracker to solve this. Their app uses a simple icon-based interface, enabling non-literate trackers to record vital biodiversity data. This innovation preserves invaluable knowledge and supports conservation efforts worldwide.

Bharati and Saurabh’s app, Earth5R, turns a passion for the planet into real-world experience and careers in the green economy.

Bharati and Saurabh, co-founders of Earth5R Environmental Services
Mumbai, India

After a life-changing cycling trip around the world, Saurabh was inspired by sustainable practices he saw in different communities. He and his wife, Bharati, brought those lessons home to Mumbai and launched Earth5R. Their app provides environmental education and career development, connecting people to internships and hands-on projects. By providing the skills and experience needed for the green economy, they’re building the next generation of environmental leaders.

Discover more #WeArePlay stories from founders across the globe.

What is HDR?

Posted by John Reck – Software Engineer

For Android developers, delivering exceptional visual experiences is a continuous goal. High Dynamic Range (HDR) unlocks new possibilities, offering the potential for more vibrant and immersive content. Technologies like UltraHDR on Android are particularly compelling, providing the benefits of HDR displays while maintaining crucial backwards compatibility with SDR displays. On Android you can use HDR for both video and images.

Over the years, the term HDR has been used to signify a number of related, but ultimately distinct, visual fidelity features. Users encounter it in the context of camera features (exposure fusion) or as a marketing term for TVs and monitors (“HDR capable”). This conflates distinct features such as wider color gamuts, increased bit depth, and enhanced contrast with HDR itself.

From an Android Graphics perspective, HDR primarily signifies higher peak brightness capability that extends beyond the conventional Standard Dynamic Range. Other perceived benefits often derive from standards such as HDR10 or Dolby Vision which also include the usage of wider color spaces, higher bit depths, and specific transfer functions.

In this article, we’ll establish the foundational color principles, then address common myths, clarify HDR’s role in the rendering pipeline, and examine how Android’s display technologies and APIs enable HDR experiences.

The components of color

Understanding HDR begins with defining the three primary components that form the displayed volume of color: bit depth, transfer function, and color gamut. These describe the precision, scaling, and range of the color volume, respectively.

While a color model defines the format for encoding pixel values (e.g., RGB, YUV, HSL, CMYK, XYZ), RGB is typically assumed in a graphics context. The combination of a color model, a color gamut, and a transfer function constitutes a color space; examples include sRGB, Display P3, Adobe RGB, BT.2020, and BT.2020 HLG. Numerous combinations of color gamut and transfer function are possible, leading to a variety of color spaces.

Diagram: the components of color are bit depth, transfer function, color gamut, and color model; the last three together constitute the color space.

Bit Depth

Bit depth defines the precision of color representation. A higher bit depth allows for finer gradation between color values. In modern graphics, bit depth typically refers to bits per channel (e.g., an 8-bit image uses 8 bits for each red, green, blue, and optionally alpha channel).

Crucially, bit depth does not determine the overall range of colors (minimum and maximum values) an image can represent; this is set by the color gamut and, in HDR, the transfer function. Instead, increasing bit depth provides more discrete steps within that defined range, resulting in smoother transitions and reduced visual artifacts such as banding in gradients.

5-bit color gradient: visibly distinct steps between color values

8-bit color gradient: smoother transitions between color values

Although 8-bit is one of the most common formats in widespread use, it’s not the only option. RAW images can be captured at 10, 12, 14, or 16 bits. PNG supports 16 bits. Games frequently use 16-bit floating point (FP16) rather than integer formats for intermediate render buffers. Modern GPU APIs like Vulkan even support 64-bit RGBA formats in both integer and floating point varieties, providing up to 256 bits per pixel.
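
To make that concrete, here is a minimal Kotlin sketch (purely illustrative, not tied to any Android API) that computes how many discrete levels each bit depth provides across the same normalized range:

    // Number of representable levels per channel for a given integer bit depth.
    // The representable range (normalized 0.0..1.0) is identical at every depth;
    // only the granularity between neighboring values changes.
    fun levelsPerChannel(bits: Int): Long = 1L shl bits

    fun main() {
        for (bits in listOf(5, 8, 10, 16)) {
            val levels = levelsPerChannel(bits)
            println("$bits-bit: $levels levels, smallest step = ${1.0 / (levels - 1)}")
        }
    }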

Transfer Function

A transfer function defines the mathematical relationship between a pixel’s stored numerical value and its final displayed luminance or color. In other words, the transfer function describes how to interpret the increments between the minimum and maximum values. This is essential because the human visual system’s response to light intensity is non-linear: we are more sensitive to changes in luminance at low light levels than at high light levels. A linear mapping from stored values to display luminance would therefore use the available bits inefficiently, with more precision than necessary in the brighter region and too little in the darker region relative to human perception. The transfer function compensates for this non-linearity by allocating values to match the human visual response.

While some transfer functions are linear, most employ complex curves or piecewise functions to optimize image quality for specific displays or viewing conditions. sRGB, Gamma 2.2, HLG, and PQ are common examples, each prioritizing bit allocation differently across the luminance range.
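
As a concrete example, sRGB combines a short linear segment near black with a power-curve segment. A minimal Kotlin sketch of the decode direction, mapping a stored non-linear value to linear light:

    import kotlin.math.pow

    // sRGB EOTF: converts a stored non-linear value (0.0..1.0) to linear light.
    // The linear segment near black avoids an infinite slope at zero, and the
    // offset power segment approximates an overall ~2.2 gamma.
    fun srgbToLinear(encoded: Double): Double =
        if (encoded <= 0.04045) encoded / 12.92
        else ((encoded + 0.055) / 1.055).pow(2.4)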

Color Gamut

Color gamut refers to the entire range of colors that a particular color space or device can accurately reproduce. It is typically a subset of the visible color spectrum, which encompasses all the colors that the human eye can perceive. Each color space (e.g., sRGB, Display P3, BT2020) defines its own unique gamut, establishing the boundaries for color representation.

A wider gamut signifies that the color space can represent a greater variety of colors, leading to richer and more vibrant images. However, simply having a larger gamut doesn’t guarantee better color accuracy or a more vibrant result; the device or medium used to display the colors must also be capable of reproducing the full range of the gamut. When a display encounters colors outside its reproducible gamut, the typical handling method is clipping. Clipping preserves in-gamut colors for accuracy, whereas attempts to scale the gamut can produce unpleasant results, especially in regions where human vision is most sensitive, such as skin tones.

HDR myths and realities

With an understanding of what forms the basic working color principles, it’s now time to evaluate some of the common claims of HDR and how they apply in a general graphics context.

Claim: HDR offers more vibrant colors

This claim comes from HDR video typically using the BT.2020 color space, which does indeed have a wide gamut. However, there are several problems with this claim as a blanket statement.

The first is that images and graphics have been able to use wider color gamuts, such as Display P3 or Adobe RGB, for quite a long time; this is not an advancement unique to HDR. In JPEGs, for example, the gamut is defined by the ICC profile, which dates back to the early 1990s, although widespread adoption of ICC profile handling is somewhat more recent. Similarly, on the graphics rendering side, the use of wider color spaces is fully decoupled from whether HDR is being used.

The second is that not all HDR video uses such a wide gamut. Although HDR10 specifies the usage of BT.2020, other HDR formats have since been created that do not.

The biggest issue, though, is one of capture and display. Just because the format allows for the BT.2020 color gamut does not mean the entire gamut is actually usable in practice. Current Dolby Vision mastering guidelines, for example, require only 99% coverage of the P3 gamut, meaning that even for high-end professional content, authoring beyond Display P3 is not expected. Similarly, the vast majority of consumer displays today can only reproduce the sRGB or Display P3 gamuts. Given that out-of-gamut colors are typically clipped, even though HDR10 allows for up to the BT.2020 gamut, the widest gamut in practice is still P3.

This claim should therefore be understood as something HDR video profiles offer relative to SDR video profiles specifically; SDR video could use wider gamuts without an HDR profile if desired.

Claim: HDR offers more contrast / better black detail

A sometimes-claimed benefit of HDR is darker blacks (e.g. Dolby Vision Demo #3 – Core Universe – 4K HDR, or “Dark scenes come alive with darker darks”) or more detail in dark regions. This is even reflected in BT.2390: “HDR also allows for lower black levels than traditional SDR, which was typically in the range between 0.1 and 1.0 cd/m² for cathode ray tubes (CRTs) and is now in the range of 0.1 cd/m² for most standard SDR liquid crystal displays (LCDs).” In reality, however, every display already renders SDR black at the darkest level it is physically capable of. There is therefore no difference between HDR and SDR in how dark they can reach: both bottom out at the same level on the same display.

As for contrast ratio, the ratio between the brightest white and the darkest black, it is overwhelmingly influenced by how dark a display can get. With the prevalence of OLED displays, particularly in the mobile space, SDR and HDR end up with the same contrast ratio: both have essentially perfect black levels, giving them effectively infinite contrast.

The PQ transfer function does allocate more bits to the dark region, so in theory it can convey better black detail. However, this is a property of PQ specifically rather than of HDR in general. HLG is increasingly the more common HDR format, as it is preferred by mobile cameras as well as several high-end cameras. And even when PQ contains this extra detail, that doesn’t mean an HDR display can actually reproduce it, as discussed in Display Realities.

Claim: HDR offers higher bit depth

This claim comes from HDR10 and some, but not all, Dolby Vision profiles using 10 or 12 bits for the video stream. As with more vibrant colors, this is really an aspect of particular video profiles rather than something HDR itself inherently provides. The use of 10 bits or more is otherwise not uncommon in imaging, particularly in higher-end photography, with RAW and TIFF image formats capable of 10, 12, 14, or 16 bits. Similarly, PNG supports 16 bits, although that is rarely used.

Claim: HDR offers higher peak brightness

This, then, is all that HDR really is. But what does “higher peak brightness” actually mean? After all, SDR displays were pushing ever-increasing brightness levels before HDR was significant, particularly for sunlight viewing. And what is the difference between “HDR” and simply “SDR with the brightness slider cranked up”? The answer is that we define “HDR” as having a brightness range bigger than SDR, where SDR is the range driven by autobrightness to be comfortably readable in the current ambient conditions. We therefore define HDR in terms such as “HDR headroom” or the “HDR/SDR ratio” to indicate that it is a floating region relative to SDR. This makes brightness policies easier to reason about, although it complicates the interaction with traditional HDR video, specifically HLG and PQ content.
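
Android exposes this floating relationship directly. A minimal sketch, assuming API level 34+ where these Display methods are available, that observes the current HDR/SDR ratio:

    import android.view.Display

    // The HDR/SDR ratio is dynamic: 1.0 means no headroom right now, while
    // higher values mean the display can currently go brighter than SDR white.
    fun observeHdrHeadroom(display: Display) {
        if (display.isHdrSdrRatioAvailable) {
            display.registerHdrSdrRatioChangedListener(Runnable::run) { d ->
                // React to headroom changes, e.g. enable or scale HDR rendering.
                println("Current HDR/SDR ratio: ${d.hdrSdrRatio}")
            }
        }
    }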

PQ/HLG transfer functions

PQ and HLG are the two most common approaches to HDR for video content. They are two transfer functions embodying different concepts of what “HDR” means. PQ, published as SMPTE ST 2084:2014, is defined in terms of absolute nits on the display. It encodes values from 0 to 10,000 nits and expects content to be mastered for a particular reference viewing environment. HLG takes a different approach, using a typical gamma curve for the lower part of the range before switching to a logarithmic curve for the brighter portion. It has a claimed nominal peak brightness of 1,000 nits in the reference environment, although it is not defined in absolute luminance terms as PQ is.

Industry-wide specifications have recently formalized the brightness range of both PQ- and HLG-encoded content in relation to SDR. ITU-R BT.2408-8 defines the reference white level for graphics as 203 nits. ISO/TS 22028-5 and ISO/PRF 21496-1 have followed suit; 21496-1 in particular defines HDR headroom in terms of nominal peak luminance relative to a diffuse white luminance of 203 nits.
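
Putting the two together: a PQ-encoded value can be converted to absolute nits using the published ST 2084 constants, and then compared against the 203 nit reference white to get a headroom-style ratio. A minimal sketch (the constants come from SMPTE ST 2084; the helper names are ours):

    import kotlin.math.max
    import kotlin.math.pow

    // PQ (SMPTE ST 2084) EOTF: maps an encoded signal in 0.0..1.0 to absolute
    // luminance in nits, where an input of 1.0 corresponds to 10,000 nits.
    fun pqToNits(encoded: Double): Double {
        val m1 = 2610.0 / 16384.0         // ≈ 0.1593
        val m2 = 2523.0 / 4096.0 * 128.0  // ≈ 78.8438
        val c1 = 3424.0 / 4096.0          // ≈ 0.8359
        val c2 = 2413.0 / 4096.0 * 32.0   // ≈ 18.8516
        val c3 = 2392.0 / 4096.0 * 32.0   // ≈ 18.6875
        val p = encoded.pow(1.0 / m2)
        return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)).pow(1.0 / m1)
    }

    // Headroom of a luminance value relative to the 203 nit reference white
    // from ITU-R BT.2408 (an HDR/SDR-style ratio).
    fun headroomAt(nits: Double): Double = nits / 203.0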

The realities of modern displays, discussed below, as well as typical viewing environments mean that traditional HDR video is almost never displayed as intended. A display’s HDR headroom may evaporate under bright viewing conditions, forcing on-the-fly tonemapping toward SDR. Traditional HDR video encodes a fixed headroom, while modern displays employ a dynamic one, resulting in vast differences in video quality even on the same display.

Display Realities

So far most of the discussion around HDR has been from the perspective of the content. However, users consume content on a display, which has its own capabilities and, more importantly, limits. A high-end mobile display is likely to have characteristics such as gamma 2.2, a P3 gamut, and a peak brightness of around 2000 nits. If we then consider something like HDR10, there are mismatches in bit usage prioritization:

    • PQ’s increased bit allocation at the lower ranges ends up being wasted
    • The usage of BT2020 ends up spending bits on parts of a gamut that will never be displayed
    • Encoding up to 10,000 nits of brightness is similarly headroom that’s not utilized

These mismatches are not inherently a problem, but they mean that as 10-bit displays become more common, existing 10-bit HDR video profiles cannot actually take full advantage of a display’s capabilities. HDR video profiles are thus in the position of being forward-looking while already unable to maximize a current 10-bit display. This is where technologies such as Ultra HDR, and gainmaps in general, provide a compelling alternative. Even when using an 8-bit base image, the gain layer that transforms it to HDR is specialized to the content and its particular range needs, making it more efficient with its bit usage and producing results that still look stunning. And as the base image is upgraded to 10-bit with newer image formats such as AVIF, the effective bit usage becomes even better than that of typical HDR video codecs. These approaches are therefore not evolutionary stepping stones toward “true HDR”, but rather an improvement on HDR that also offers better backwards compatibility. Similarly, the Android UI toolkit’s usage of the extendedRangeBrightness API still primarily happens in 8-bit space. Because the rendering is tailored to the specific display and current conditions, it is still possible to have a good HDR experience despite the use of RGBA_8888.
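
For apps that drive HDR rendering themselves, the platform exposes the same ratio-based model. A minimal sketch using SurfaceControl.Transaction#setExtendedRangeBrightness, assuming API level 34+ and an already-created SurfaceControl; the ratio values are illustrative:

    import android.view.SurfaceControl

    // Declare how bright the buffer's content currently is relative to SDR
    // white (currentBufferRatio) and how much headroom we want (desiredRatio).
    fun requestHdrHeadroom(sc: SurfaceControl, desiredRatio: Float) {
        SurfaceControl.Transaction()
            .setExtendedRangeBrightness(sc, /* currentBufferRatio = */ 1.0f, desiredRatio)
            .apply()
    }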

Unlocking HDR on Android: Next steps

High Dynamic Range (HDR) offers an advancement in visual fidelity for Android developers, moving beyond the traditional constraints of Standard Dynamic Range (SDR) by enabling higher peak brightness.

By understanding the core components of color – bit depth, transfer function, and color gamut – and debunking common myths, developers can leverage technologies like Ultra HDR to deliver truly immersive experiences that are both visually stunning and backward compatible.

In our next article, we’ll delve into the nuances of HDR and user intent, exploring how to optimize your content for diverse display capabilities and viewing environments.

Android Studio Narwhal Feature Drop is stable – start using Agent Mode

Posted by Paris Hsu – Product Manager, Android Studio

The next wave of innovation is here with Android Studio Narwhal Feature Drop. We’re thrilled to announce that Gemini in Android Studio’s Agent Mode is now available in the stable release, ready to tackle your most complex coding challenges. This release also brings powerful new tools for XR development, continued quality improvements, and key updates to enhance your productivity and help you build high-quality apps.

Dive in to learn more about all the updates and new features designed to supercharge your workflow.

Gemini in Android Studio: Agent Mode

Develop with Gemini

Try out Agent Mode

Go beyond chat and assign tasks to Gemini. Gemini in Android Studio’s Agent Mode is a powerful AI feature designed to handle complex, multi-stage development tasks. To use Agent Mode, click Gemini in the sidebar and then select the Agent tab. You can describe a high-level goal, like adding a new feature, generating comprehensive unit tests, or fixing a nuanced bug.

The agent analyzes your request, breaks it down into smaller steps, and formulates an execution plan that uses IDE tools, such as reading and writing files and performing Gradle tasks, and can span multiple files in your project. It then iteratively suggests code changes, and you’re always in control—you can review, accept, or reject the proposed changes and ask the agent to iterate based on your feedback. Let the agent handle the heavy lifting while you focus on the bigger picture.

After releasing Agent Mode to the Canary channel, we received positive feedback from developers who tried it. We were so excited about the feature’s potential that we moved it to the stable channel faster than ever before so you can get your hands on it. Try it out and let us know what you build.

Gemini in Android Studio: Agent Mode

Currently, the default model offered in Android Studio’s free tier has a shorter context length, which can limit the depth of responses for some agent questions and tasks. To get the best performance from Agent Mode, you can bring your own key for the public Gemini API. Once you add a Gemini API key backed by a paid GCP project, you’ll be able to use the latest Gemini 2.5 Pro with a full 1M-token context window from Android Studio. Remember to pick “Gemini 2.5 Pro” from the model picker in the chat and agent input boxes.

Gemini in Android Studio: model selector

Rules in prompt library

Tailor the response from Gemini to fit your project’s specific needs with Rules in the prompt library. You can define preferred coding styles, tech stacks, languages, or output formats to help Gemini understand your project standards for more accurate and personalized code assistance. You can set these preferences once, and they’ll be automatically applied to all subsequent prompts sent to Gemini. For example, you can create a rule such as, “Always provide concise responses in Kotlin using Jetpack Compose.” You can also set rules at the IDE level for personal use across projects, or at the project level, which can be shared with teammates by adding the .idea folder to your version control system.

Rules in prompt library

Transform UI with Gemini [Studio Labs]

You can now transform UI code within the Compose Preview environment using natural language, directly in the preview. This experimental feature, available through Studio Labs, speeds up UI development by letting you iterate with simple text commands. To use it, right-click in the Compose Preview and select Transform UI With Gemini. Then enter your natural language requests, such as “Center align these buttons,” to guide Gemini in adjusting your layout or styling, or select specific UI elements in the preview for better context. Gemini will then edit your Compose UI code in place, which you can review and approve.

Side-by-side screen captures: accessing the “Transform UI With Gemini” menu on the left, and applying a natural language transformation to a Compose preview on the right.

Immersive development

XR Android Emulator and template

Kickstart your extended reality development! Android Studio now includes:

    • XR Android Emulator: The XR Android Emulator now launches embedded within the IDE by default. You can deploy your Jetpack app, navigate the 3D space, and use the Embedded Layout Inspector directly inside Android Studio.
    • XR template: Get a head start on your next project with a new template specifically designed for Jetpack XR. This provides a solid foundation with boilerplate code to begin your immersive experience development journey right away.

XR Android Emulator

XR Android template in new project template

Embedded Layout Inspector for XR

The embedded Layout Inspector now supports XR applications, which lets you inspect and optimize your UI layouts within the XR environment. Get detailed insights into your app’s component structure and identify potential layout issues to create more polished and performant experiences.

Embedded Layout Inspector for XR

Android Partner Device Labs available with Android Device Streaming

Android Partner Device Labs are device labs operated by Google OEM partners, such as Samsung, Xiaomi, OPPO, OnePlus, vivo, and others, and expand the selection of devices available in Android Device Streaming. To learn more, see Connect to Android Partner Device Labs.

Android Device Streaming supports Android Partner Device Labs

Optimize and refine

Jetpack Compose preview quality improvements

We’ve made several enhancements to Compose previews to make UI iteration faster and more intuitive:

    • Improved code navigation: You can now click on a preview’s name to instantly jump to its @Preview definition, or click an individual component within the preview to navigate directly to the function where it’s defined. Hover states and improved keyboard arrow navigation make moving through multiple previews a breeze.
    • Preview picker: The new Compose preview picker is now available. You can click any @Preview annotation in your Compose code to access the picker and easily manage your previews.

Compose preview: Improved code navigation

Compose preview picker

K2 mode by default

Android Studio now uses the K2 Kotlin compiler by default. This next-generation compiler brings significant performance improvements to the IDE and your builds. By enabling K2, we are paving the way for future Kotlin programming language features and an even faster, more robust development experience in Kotlin.

K2 mode setting

16 KB page size support

To help you prepare for the future of Android hardware, this release adds improved support for transitioning to 16 KB page sizes. Android Studio now offers proactive warnings when building apps that are incompatible with 16 KB devices. You can use the APK Analyzer to identify which specific libraries in your project are incompatible. Lint checks also highlight the native libraries which are not 16 KB aligned. To test your app in this new environment, a dedicated 16 KB emulator target is also available in the AVD Manager.

16 KB page size support: APK Analyzer indication

16 KB page size support: Lint checks

Services compatibility policy

Android Studio offers service integrations that help you and your team make faster progress as you develop, release, and maintain Android apps. Services are constantly evolving and may become incompatible with older versions of Android Studio. Therefore, we are introducing a policy where features that depend on a Google Cloud service are supported for approximately a year in each version of Android Studio. The IDE will notify you when the current version is within 30 days of becoming incompatible so you can update it.

Example notification for services compatibility policy

Summary

To recap, Android Studio Narwhal Feature Drop includes the following enhancements and features:

Develop with Gemini

    • Gemini in Android Studio: Agent Mode: Use Gemini to tackle complex, multi-step coding tasks.
    • Rules in Prompt Library: Customize Gemini’s output for your project’s standards.
    • Transform preview with Gemini [Studio Labs]: Use natural language to iterate on Compose UI.

Immersive development

    • Embedded XR Android Emulator: Test and debug XR apps directly within the IDE.
    • XR template: A new project template to kickstart XR development.
    • Embedded Layout Inspector for XR: Debug and optimize your UI in an XR environment.
    • Android Partner Device Labs available with Android Device Streaming: Access more Google OEM partner devices.

Optimize and refine

    • Compose preview improvements: Better navigation and a new picker for a smoother workflow.
    • K2 mode by default: Faster performance with the next-gen Kotlin compiler.
    • 16 KB page size support: Lint warnings, analysis, and an emulator to prepare for new devices.
    • Services compatibility policy: Stay up-to-date for access to integrated Google services.

Get started

Ready to accelerate your development? Download Android Studio Narwhal Feature Drop and start exploring these powerful new features today! As always, your feedback is crucial to us.

Check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let’s build the future of Android apps together!

#WeArePlay: 10 million downloads and counting, meet app and game founders from across the U.S.

Posted by Robbie McLachlan, Developer Marketing

They saw a problem and built the answer. Meet 20 #WeArePlay founders from across the U.S. who started their entrepreneurial journey with a question like: what if reading was no longer a barrier for anyone? What if an app could connect neighbors to fight local hunger? What if fitness or self-care could feel as engaging as playing a game?

These new stories showcase how innovation often starts with finding the answer to a personal problem. Here are just a few of our favorites:

Cliff’s app Speechify makes the written word accessible to all

Cliff, founder of Speechify
Miami, Florida

Growing up with dyslexia, Cliff always wished he could enjoy books but found reading them challenging. After moving to the U.S., the then college student turned that personal challenge into a solution for millions. His app, Speechify, empowers people by turning any text—from PDFs to web pages—into audio. By making the written word accessible to all, Cliff’s innovation gives students, professionals, and auditory learners a new kind of independence.

Jenny’s game Run Legends turns everyday fitness into a social adventure

Jenny, founder of Talofa Games
San Francisco, California

As a teen, Jenny funded her computer science studies by teaching herself to code and publishing over 100 games. A passionate cross-country runner, she wanted to combine her love for gaming and fitness to make exercise feel more like an adventure. The result is Run Legends, a multiplayer RPG where players battle monsters by moving in real life. Jenny’s on a mission to blend all types of exercise with playful storytelling, turning everyday fitness into a fun, social, and heroic quest.

Nino and Stephanie’s app Finch makes self-care a rewarding daily habit

Nino and Stephanie, co-founders of Finch
Santa Clara, California

As engineers, Nino and Stephanie knew the power of technology but found the world of self-care apps overwhelming. Inspired by their own mental health journeys and a gamified app Stephanie built in college, they created Finch. The app introduces a fresh take on the virtual pet: by completing small, positive actions for yourself, like journaling or practicing breathing exercises, you care for your digital companion. With over 10 million downloads, Finch has helped people around the world build healthier habits. With seasonal events every month and growing personalization, the app continues to evolve to make self-care more fun and rewarding.

John’s app The HungreeApp connects communities to fight hunger

John, founder of The HungreeApp
Denver, Colorado

John began coding as a nine-year-old in Nigeria, sometimes with just a pen and paper. After moving to the U.S., he was struck by how much food from events was wasted while people nearby went hungry. That spark led him to create The HungreeApp, a platform that connects communities with free, surplus food from businesses and restaurants. John’s ingenuity turns waste into opportunity, creating a more connected and resourceful nation, one meal at a time.

Anthony’s game studio Tech Tree Games turns a passion for idle games into cosmic adventures for aspiring tycoons

Anthony, founder of Tech Tree Games
Austin, Texas

While working as a chemical engineer, Anthony dreamed of creating an idle game like the ones he loved to play, leading him to teach himself how to code from scratch. This passion project turned into his studio Tech Tree Games and the hit title Idle Planet Miner, where players grow a space mining empire filled with mystical planets and alluring gems. After releasing a 2.0 update with enhanced visuals for the game, Anthony is back in prototyping mode with new titles in the pipeline.

Discover more #WeArePlay stories from the US and stories from across the globe.

#WeArePlay: With over 3 billion downloads, meet the people behind Amanotes

Posted by Robbie McLachlan – Developer Marketing

In our latest #WeArePlay film, which celebrates the people behind apps and games on Google Play, we meet Bill and Silver – the duo behind Amanotes. Their game company has reached over 3 billion downloads with their mission ‘everyone can music’. Their titles, including the global hit Magic Tiles 3, turn playing musical instruments into a fun, easy, and interactive experience, with no musical background needed. Discover how Amanotes blends creativity and technology to bring joy and connection to billions of players around the world.

What inspired you to create Amanotes?

Bill: It all began with a question I’d pursued for over 20 years – how can technology make music even more beautiful? I grew up in a musical family, surrounded by instruments, but I also loved building things with tech. Amanotes became the space where I could bring those two passions together.

Silver: Honestly, I wasn’t planning to start a company. I had just finished studying entrepreneurship and was looking to join a startup, not launch one. I dropped a message in an online group saying I wanted to find a team to work with, and Bill reached out. We met for coffee, talked for about an hour, and by the end, we just said, why not give it a shot? That one meeting turned into ten years of building Amanotes.

Do you remember the first time you realized your game was more than just a game and that it could change someone’s life?

Silver: There’s one moment I’ll never forget. A woman in the U.S. left a review saying she used to be a pianist, but after an accident, she lost use of some of her fingers and couldn’t play anymore. Then she found Magic Tiles. She said the game gave her that feeling of playing again—even without full movement. That’s when it hit me. We weren’t just building a game. We were helping people reconnect with something they thought they’d lost.

Amanotes founders, Bill Vo and Silver Nguyen

How has Google Play helped your journey?

Silver: Google Play has been a huge part of our story. It was actually the first platform we ever published on. The audience was global from day one, which gave us the reach we needed to grow fast. We made great use of tools such as Firebase for A/B testing. We also relied on the Play Console for analytics and set custom pricing by country. Without Google Play, Amanotes wouldn’t be where it is today.

A user plays Amanotes on their mobile device

What’s next for Amanotes?

Silver: Music will always be the soul of what we do, but now we’re building games with more depth. We want to go beyond just tapping to songs. We’re adding stories, challenges, and richer gameplay on top of the music. We’ve got a whole lineup of new games in the works. Each one is a chance to push the boundaries of what music games can be.

Discover other inspiring app and game founders featured in #WeArePlay.

New tools to help drive success for one-time products

Posted by Laura Nechita – Product Manager, Google Play and Rejane França – Group Product Manager, Google Play

Starting today, Google Play is revamping the way developers manage one-time products, providing greater flexibility and new ways to sell. Play has continually enhanced the ways you can reach buyers by helping you diversify how you sell your products.

Starting in 2022, we brought more flexibility to subscriptions along with a new Console interface, and now we are bringing the same flexibility to one-time products and aligning their taxonomy. Previously known as in-app products, one-time product purchases are a vital way for developers to monetize on Google Play. As this business model continues to evolve, we’ve heard from many of you that you need more flexibility and less complexity in how you offer these digital products.

To address these needs, we’re launching new capabilities and a new way of thinking about your products that can help you grow your business. At its core, we’ve separated what the product is from how you sell it. For each one-time product, you can now configure multiple purchase options and offers. This allows you to sell the same product in multiple ways, reducing operational costs by removing the need to create and manage an ever-increasing number of catalog items.

You might have already noticed some changes as we introduce this new model, which provides a more structured way to define and manage your one-time product offerings.

Introducing the new model

flow chart showing the new model hierarchy with one time product at the top, purchase options in the middle, and offers at the bottom

We’re introducing a new three-level hierarchy for defining and managing one-time products. This new structure builds upon concepts already familiar from our subscription model and aligns the taxonomy for all of your in-app product offerings on Play.

    • One-time product: This object defines what the user is buying. Think of it as the core item in your catalog, such as a “Diamond sword”, “Coins” or “No ads”.
    • Purchase option: This defines how the entitlement is granted to the user, its price, and where the product will be available. A single one-time product can have multiple purchase options representing different ways to acquire it, such as buying it or renting it for a set period of time. Purchase options now have two distinct types: buy and rent.
    • Offer: Offers further modify a purchase option and can be used to model discounts or pre-orders. A single purchase option can have multiple offers associated with it.

This allows for a more organized and efficient way to manage your catalog. For instance, you can have one “Diamond sword” product and offer it with a “Buy” purchase option in the US for $10 and a “Rent” purchase option in the UK for £5. The new taxonomy also allows Play to better understand your catalog, helping you further amplify your impact across Play surfaces.

More flexibility to reach more users

The new model unlocks significant flexibility to help you reach a wider audience and cater to different user preferences.

    • Sell in multiple ways: Once you’ve migrated to Play Billing Library (PBL) 8, you can set up different ways of selling the same product. This reduces the complexity of managing numerous individual products for slightly different scenarios.
    • Introducing rentals: We’re introducing the ability to configure items that are sold as rentals. Users have access to the item for a set duration of time. You can define the rental period, which is the amount of time a user has the entitlement after completing the purchase, and an optional expiration period, which is the time after starting consumption before the entitlement is revoked.
    • Pre-order capabilities: You can now set up one-time products to be bought before their release through pre-order offers. You can configure the start date, end date, and the release date for these offers, and even include a discount. Users who pre-order agree to pay on the release date unless they cancel beforehand.
    • No default price: We are removing the concept of a default price for a product. You can now set and manage prices in bulk or individually for each region.
    • Regional pricing and availability: Price changes can now be applied to purchase options and offers, allowing you to set different prices in different regions. Furthermore, you can also configure the regional availability for both purchase options and offers. This functionality is available for paid apps in addition to one-time products.
    • Offers for promotions: Leverage offers to create various promotions, such as discounts on your base purchase price or special conditions for early access through pre-orders.

To use these new features, you first need to upgrade to PBL 8.0. Then you’ll need to use the new monetization.onetimeproducts service of the Play Developer API or the Play Console, and integrate with the queryProductDetailsAsync API to take advantage of the new capabilities. While querySkuDetailsAsync and the inappproducts service do not support the new model, they will continue to work for as long as PBL 7 is supported.
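
For the client side, here is a rough Kotlin sketch of querying a one-time product with queryProductDetailsAsync. The product ID is hypothetical, and the exact listener and result types differ between PBL versions, so treat this as the shape of the call rather than a drop-in implementation:

    import com.android.billingclient.api.BillingClient
    import com.android.billingclient.api.QueryProductDetailsParams

    // Query details for a one-time product; the purchase options and offers
    // configured in Play Console are surfaced through the returned details.
    fun queryOneTimeProduct(billingClient: BillingClient) {
        val params = QueryProductDetailsParams.newBuilder()
            .setProductList(
                listOf(
                    QueryProductDetailsParams.Product.newBuilder()
                        .setProductId("diamond_sword") // hypothetical product ID
                        .setProductType(BillingClient.ProductType.INAPP)
                        .build()
                )
            )
            .build()
        billingClient.queryProductDetailsAsync(params) { billingResult, result ->
            // Check billingResult.responseCode, then read the product details
            // (including purchase option and offer metadata) from the result.
        }
    }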

Important considerations

    • With this change, we offer a backwards-compatible way to port your existing SKUs into the new model. The migration happens differently depending on how you first interact with your catalog when you change the metadata for one or more products.
    • New products created through the Play Console UI are normalized. Products created or managed with the existing inappproducts service won’t support these new features; to access them, you’ll need to convert existing products in the Play Console UI. Once converted, a product can only be managed through the new Play Developer API or the Play Console. Products created through the new monetization.onetimeproducts service or through the Play Console are already converted.
    • Buy purchase options marked as ‘Backwards compatible’ will be returned as response for calls through querySkuDetailsAsync API. At launch, all existing products have a backwards compatible purchase option.
    • At the time of this post, the pre-orders capability is available through the Early Access Program (EAP) only. If you are interested, please sign up.
    • One-time products will be reflected in the earnings reports at launch (the Base plan ID and Offer ID columns will be populated for newly configured one-time products). To minimize the potential for breaking changes, we will be updating these column names in the earnings reports later this year.

We encourage you to explore the new Play Developer API and the updated Play Console interface to see how this enhanced flexibility can help you better manage your catalog and grow your business.

We’re excited to see how you leverage these new tools to connect with your users in innovative ways.

Start building for the next generation of Samsung Galaxy devices

Posted by J. Eason – Director, Product Management

The next generation of foldable and wearable devices from Samsung has arrived. Yesterday at Galaxy Unpacked, Samsung introduced the new Galaxy Z Fold7, Galaxy Z Flip7, and Galaxy Watch8 series. For Android developers, these devices represent an exciting new opportunity to create engaging and adaptive experiences that reach even more users on their favorite screens.

With new advancements in adaptive development and the launch of Wear OS 6, there has never been a better time to build for the expanding Android device ecosystem. Learn more about what these new devices mean for you and how you can get started.

side by side images of Samsung's Galaxy Z Flip7 on the left and Galaxy Z Fold7 on the right

Unfold your app’s adaptive potential on Samsung’s newest Galaxy devices

The launch of the Galaxy Z Fold7 and Z Flip7 on Android 16 means users are about to experience your app in more dynamic and versatile ways than before. This creates an opportunity to captivate them with experiences that adaptively respond to every fold and flip. And preparing your app for these features is easier than you think. Building adaptive apps isn’t just about rewriting your code, but about making strategic enhancements that ensure a seamless experience across screens.

Google and Samsung have collaborated to bring a more seamless and powerful desktop windowing experience to large screen devices and phones with connected displays in Android 16. These advancements will enhance Samsung DeX, starting with the new Galaxy Z Fold7 and Z Flip7, and will extend across the wider Android ecosystem.

To help you meet this moment, we’ve built a foundation of development tools to simplify creating compelling adaptive experiences. Create adaptive layouts that reflow automatically with the Compose Adaptive Layouts library and guide users seamlessly across panes with Jetpack Navigation 3. Make smarter top-level layout decisions using the newly expanded Window Size Classes. Then, iterate and validate your design in Android Studio, from visualizing your UI with Compose Previews to generating robust tests with natural language using Journeys with Gemini.
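
As a small illustration of the window-size-class approach, here is a hedged Compose sketch; the two pane composables are hypothetical placeholders, and exact class and property names vary between library versions:

    import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
    import androidx.compose.runtime.Composable
    import androidx.window.core.layout.WindowWidthSizeClass

    // Make a top-level layout decision from the current window size class so
    // the same screen reflows when a foldable opens or the window is resized.
    @Composable
    fun AdaptiveScreen() {
        val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
        when (sizeClass.windowWidthSizeClass) {
            WindowWidthSizeClass.EXPANDED -> TwoPaneLayout()  // unfolded / tablet width
            else -> SinglePaneLayout()                        // folded / phone width
        }
    }

    @Composable fun TwoPaneLayout() { /* hypothetical two-pane UI */ }
    @Composable fun SinglePaneLayout() { /* hypothetical single-pane UI */ }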

side by side images of Samsung's Watch8 Classic LTE 44mm in Silver on the left and Watch8 Classic LTE 46mm in Black on the right

Build for a more personal and expressive era with Wear OS 6

The next chapter for wearables begins with the new Samsung Galaxy Watch8 series becoming the first device to feature Wear OS 6, the most power-efficient version of our wearable platform yet. This update is focused on giving you the tools to create more personal experiences without compromising on battery life. With version 4 of the Watch Face Format, you can unlock new creative possibilities like letting users customize their watch faces by selecting their own photos or adding fluid transitions to the display. And, to give you more flexibility in distribution, the Watch Face Push API allows you to create and manage your own watch face marketplace.

Beyond the watch face, you can provide a streamlined experience to users by embracing an improved always-on display and adding passkey support to your app with the Credential Manager API, which is now available on Wear OS.
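
For the passkey piece, a minimal sketch using the Jetpack Credential Manager API; the request JSON would come from your server per WebAuthn, and error handling is omitted:

    import android.content.Context
    import androidx.credentials.CredentialManager
    import androidx.credentials.GetCredentialRequest
    import androidx.credentials.GetPublicKeyCredentialOption

    // Ask Credential Manager for a passkey assertion. requestJson is the
    // WebAuthn request produced by your server (placeholder here).
    suspend fun signInWithPasskey(context: Context, requestJson: String) {
        val credentialManager = CredentialManager.create(context)
        val request = GetCredentialRequest(
            listOf(GetPublicKeyCredentialOption(requestJson))
        )
        val result = credentialManager.getCredential(context, request)
        // Send result.credential back to your server for verification.
    }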

Check out the latest changes to get started and test your app for compatibility using the Wear OS 6 emulator.

Get started building across screens, from foldables to wearables

With these new devices from Samsung, there are more reasons than ever to build experiences that excite users on their favorite Android screens. From building fully adaptive apps for foldables to creating more personal experiences on Wear OS, the tools are in your hands to create for the future of Android.

Explore all the resources you’ll need to build adaptive experiences at developer.android.com/adaptive-apps. And, start building for Wear OS today by checking out developer.android.com/wear and visiting the Wear OS gallery for inspiration.

Transition to using 16 KB page sizes for Android apps and games using Android Studio

Posted by Mayank Jain – Product Manager and Jomo Fisher – Software Engineer

Get ready to upgrade your app’s performance as Android embraces 16 KB memory page sizes

Android’s transition to 16 KB page sizes

Traditionally, Android has operated with a 4 KB memory page size. However, many ARM CPUs (the most common processors for Android phones) support a larger 16 KB page size, offering improved performance. With Android 15, the Android operating system is page-size-agnostic, allowing devices to run efficiently with either a 4 KB or 16 KB page size.

Starting November 1st, 2025, all new apps and app updates that use native C/C++ code targeting Android 15+ devices submitted to Google Play must support 16 KB page sizes. This is a crucial step toward ensuring your app delivers the best possible performance on the latest Android hardware. Apps without native C/C++ code or dependencies, which use only the Kotlin and Java programming languages, are already compatible; but if you’re using native code, now is the time to act.

This transition to larger 16 KB page sizes translates directly into a better user experience. Devices configured with a 16 KB page size can see an overall performance boost of 5-10%. This means faster app launch times (up to 30% for some apps, 3.16% on average), improved battery usage (a 4.56% reduction in power draw), quicker camera starts (4.48-6.60% faster), and even speedier system boot-ups (around 0.8 seconds faster). While there is a marginal increase in memory use, the faster reclaim path makes it worthwhile.

The native code challenge – and how Android Studio equips you

If your app uses native C/C++ code from the Android NDK or relies on SDKs that do, you’ll need to recompile and potentially adjust your code for 16 KB compatibility. The good news? Once your application is updated for the 16 KB page size, the same application binary can run seamlessly on both 4 KB and 16 KB devices.

Table: who needs to transition and recompile their apps, based on whether the app uses native code and the device’s page size.

We’ve created several Android Studio tools and guides that can help you prepare for migrating to using 16 KB page size.

Detect compatibility issues

APK Analyzer: Easily identify if your app contains native libraries by checking for .so files in the lib folder. The APK Analyzer can also visually indicate your app’s 16 KB compatibility. You can then determine and update libraries as needed for 16 KB compliance.

Screenshot of the APK Analyzer in Android Studio

Alignment Checks: Android Studio also provides warnings if your prebuilt libraries or APKs are not 16 KB compliant. You can then use the APK Analyzer tool to review which libraries need to be updated or whether any code changes are required. To run 16 KB compatibility checks in your CI (continuous integration) pipeline, you can leverage scripts and command line tools.

Screenshot of Android 16 KB Alignment check in Android Studio

Lint in Android Studio now also highlights the native libraries which are not 16 KB aligned.

Screenshot of Lint performing a 16 KB alignment check in Android Studio

Build with 16 KB alignment

Tools Updates: Rebuild your native code with 16 KB alignment. Android Gradle Plugin (AGP) version 8.5.1 or higher automatically enables 16 KB alignment by default (during packaging) for uncompressed shared libraries. Similarly, Android NDK r28 and higher compiles 16 KB-aligned by default. If you depend on other native SDKs, they also need to be 16 KB aligned; you might need to reach out to the SDK developer to request a 16 KB compliant version.

Fix code for page-size agnosticism

Eliminate Hardcoded Assumptions: Identify and remove any hardcoded dependencies on PAGE_SIZE or assumptions that the page size is 4 KB (e.g., 4096). Instead, use getpagesize() or sysconf(_SC_PAGESIZE) to query the actual page size at runtime.
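
The same check is available to Kotlin and Java code through android.system.Os, which can help validate assumptions outside the native layer (a minimal sketch):

    import android.system.Os
    import android.system.OsConstants

    // Query the actual page size at runtime instead of assuming 4096 bytes;
    // on a 16 KB device this returns 16384.
    fun currentPageSize(): Long = Os.sysconf(OsConstants._SC_PAGESIZE)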

Test in a 16 KB environment

Android Emulator Support: Android Studio offers a 16 KB emulator target (for both arm64 and x86_64) directly in the Android Studio SDK Manager, allowing you to test your applications before uploading to Google Play.

Screenshot of the 16 KB emulator in Android Studio

On-Device Testing: For compatible devices like Pixel 8 and 8 Pro onwards (starting with Android 15 QPR1), a new developer option allows you to switch between 4 KB and 16 KB page sizes for real-device testing. You can verify the page size using adb shell getconf PAGE_SIZE.

Don’t wait – prepare your apps today

Leverage Android Studio’s powerful tools to detect issues, build compatible binaries, fix your code, and thoroughly test your app for the new 16 KB memory page sizes. By doing so, you’ll ensure an improved end user experience and contribute to a more performant Android ecosystem.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X.

Evolving Android’s early-access programs: Introducing the Canary channel

Posted by Dan Galpin – Android Developer Relations

To better support you and provide earlier, more consistent access to in-development features, we are announcing a significant evolution in our pre-release program. Moving forward, the Android platform will have a Canary release channel, which will replace the previous developer preview program. This Canary release channel will function alongside the existing beta program.

This change is designed to provide a more streamlined and continuous opportunity for you to try out new platform capabilities and provide feedback throughout the entire year, not just in the early months of a new release cycle.

Limitations of the previous developer preview model

The Developer Preview program has been a critical part of our release cycle, but its structure had inherent limitations:

    • Developer Previews were not tied to a release channel and had to be manually flashed to devices every time the cycle restarted.
    • Because previews were tied to the next designated Android release, they were only available during the earliest part of the cycle. Once a platform version reached the Beta stage, the preview track ended, creating a gap where features that were promising but not yet ready for Beta had no official channel for feedback.

A continuous flow of features with the Canary channel

The new Android platform Canary channel addresses these challenges directly. By flashing your supported Pixel device to the Canary release channel, you can now receive a continuous, rolling stream of the latest platform builds via over-the-air (OTA) updates.

    • You can try out and provide input on new features and planned behavior changes in their earliest stages. These changes may not always make it into a stable Android release.
    • The Canary release channel will run in parallel with the beta program. The beta program remains the way for you to try a more polished set of likely soon-to-be-released features.
    • You can use the Canary builds with your CI to see if any of our in-development features cause unexpected problems with your app, maximizing the time we have to address your concerns.

Who should use the Canary channel?

The Canary channel is intended for developers who want to explore and test against the earliest pre-release Android APIs and potential behavior changes. Builds from the Canary channel will have passed our automated tests and a short test cycle with internal users, but you should still expect bugs and breaking changes. These bleeding-edge builds are not a good choice as your primary or only device.

The existing beta channel will remain the primary way for you to make sure that your apps are both compatible with and take advantage of upcoming platform features.

Getting started and providing feedback

You can use the Android Flash Tool to get the most recent Canary build onto your supported Pixel device. Once flashed, your device will receive OTA updates to the latest Canary builds as they become available. To exit the channel, flash a Beta or Public build to your device; note that this requires a data partition wipe.

screenshot of the select a build menu for a Pixel 9 Pro device to get the most recent Canary build in the Android Flash Tool

Canary releases will be available on the Android Emulator through the Device Manager in Android Studio (currently, just in the Android Studio Canary channel), and Canary SDKs will be available for you to develop against through the SDK Manager.

screenshot of the Android SDK manager showing the Android Canary SDKs

Since most behavior changes require targeting a release, you can target Canary releases just as you would any other platform SDK version, or use the Compatibility Framework with supported features to enable behavior changes in your apps.

screenshot of the Target SDK Version and the android-CANARY target

Feedback is a critical component of this new program, so please file feature feedback and bug reports on your Canary experience through the Google Issue Tracker.

By transitioning to a true Canary channel, we aim to create a more transparent, collaborative, and efficient development process, giving you the seamless access you need to prepare for the future of Android.

The post Evolving Android’s early-access programs: Introducing the Canary channel appeared first on InShot Pro.

]]>
Level up your game: Google Play’s Indie Games Fund in Latin America returns for its 4th year https://theinshotproapk.com/level-up-your-game-google-plays-indie-games-fund-in-latin-america-returns-for-its-4th-year/ Tue, 01 Jul 2025 14:00:00 +0000 https://theinshotproapk.com/level-up-your-game-google-plays-indie-games-fund-in-latin-america-returns-for-its-4th-year/ Posted by Daniel Trócoli – Google Play Partnerships We’re thrilled to announce the return of Google Play’s Indie Games Fund ...

Read more

The post Level up your game: Google Play’s Indie Games Fund in Latin America returns for its 4th year appeared first on InShot Pro.

]]>

Posted by Daniel Trócoli – Google Play Partnerships

We’re thrilled to announce the return of Google Play’s Indie Games Fund (IGF) in Latin America for its fourth consecutive year! This year, we’re once again committing $2 million to empower another 10 indie game studios across the region. With this latest round of funding, our total investment in Latin American indie games will reach an impressive $8 million USD.

Since its inception, the IGF has been a cornerstone of our commitment to fostering growth for developers of all sizes on Google Play. We’ve seen firsthand the transformative impact this support has had, enabling studios to expand their teams, refine their creations, and reach new audiences globally.

What’s in store for the Indie Games Fund in 2025?

Just like in previous years, selected small game studios based in Latin America will receive a share of the $2 million fund, along with support from the Google Play team.

As Vish Game Studio, a previously selected studio, shared: “The IGF was a pivotal moment for our studio, boosting us to the next level and helping us form lasting connections.” We believe in fostering these kinds of pivotal moments for all our selected studios.

The program is open to indie game developers who have already launched a game, whether it’s on Google Play, another mobile platform, PC, or console. Each selected recipient will receive between $150,000 and $200,000 to help them elevate their game and realize their full potential.

Check out all eligibility criteria and apply now! Applications will close at 12:00 PM BRT on July 31, 2025. To give your application the best chance, remember that priority will be given to applications received by 12:00 PM BRT on July 15, 2025.


The post Level up your game: Google Play’s Indie Games Fund in Latin America returns for its 4th year appeared first on InShot Pro.

]]>