Introducing CameraX 1.5: Powerful Video Recording and Pro-level Image Capture


Posted by Scott Nien, Software Engineer



The CameraX team is thrilled to announce the release of version 1.5! This latest update focuses on bringing professional-grade capabilities to your fingertips while making the camera session easier to configure than ever before.

For video recording, users can now effortlessly capture stunning slow-motion or high-frame-rate videos. More importantly, the new Feature Group API allows you to confidently enable complex combinations like 10-bit HDR and 60 FPS, ensuring consistent results across supported devices.

On the image capture front, you gain maximum flexibility with support for capturing unprocessed, uncompressed DNG (RAW) files. Plus, you can now leverage Ultra HDR output even when using powerful Camera Extensions.

Underpinning these features is the new SessionConfig API, which streamlines camera setup and reconfiguration. Now, let’s dive into the details of these exciting new features.

Powerful Video Recording: High-Speed and Feature Combinations

CameraX 1.5 significantly expands its video capabilities, enabling more creative and robust recording experiences.

Slow Motion & High Frame Rate Video

One of our most anticipated features, slow-motion video, is now available. You can now capture high-speed video (e.g., 120 or 240 fps) and encode it directly into a dramatic slow-motion video. Alternatively, you can record at the same high frame rate to produce exceptionally smooth video.

Implementing this is straightforward if you’re familiar with the VideoCapture API.

  1. Check for High-Speed Support: Use the new Recorder.getHighSpeedVideoCapabilities() method to query if the device supports this feature.

val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)

val highSpeedCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)

if (highSpeedCapabilities == null) {
    // This camera device does not support high-speed video.
    return
}
  2. Configure and Bind the Use Case: Use the returned videoCapabilities (which contains supported video quality information) to build a HighSpeedVideoSessionConfig. You must then query the supported frame rate ranges via cameraInfo.getSupportedFrameRateRanges() and set the desired range. Set isSlowMotionEnabled = true to record slow-motion video; otherwise high-frame-rate video is recorded. The final step is to use the regular Recorder.prepareRecording().start() to begin recording.


val preview = Preview.Builder().build()
val quality = highSpeedCapabilities
        .getSupportedQualities(DynamicRange.SDR).first()

val recorder = Recorder.Builder()
      .setQualitySelector(QualitySelector.from(quality))
      .build()

val videoCapture = VideoCapture.withOutput(recorder)

val frameRateRange = cameraInfo.getSupportedFrameRateRanges(
    HighSpeedVideoSessionConfig(videoCapture, preview)
).first()

val sessionConfig = HighSpeedVideoSessionConfig(
    videoCapture, 
    preview, 
    frameRateRange = frameRateRange, 
    // Set true for slow-motion playback, or false for high-frame-rate
    isSlowMotionEnabled = true
)

cameraProvider.bindToLifecycle(
     lifecycleOwner, cameraSelector, sessionConfig)

// Start recording slow motion videos. 
val recording = recorder.prepareRecording(context, outputOption)
      .start(executor, {})

Compatibility and Limitations

High-speed recording requires specific CameraConstrainedHighSpeedCaptureSession and CamcorderProfile support. Always perform the capability check, and enable high-speed recording only on supported devices to prevent a poor user experience. Currently, this feature is supported on the rear cameras of almost all Pixel devices and select models from other manufacturers.

Check the blog post for more details.

Combine Features with Confidence: The Feature Group API

CameraX 1.5 introduces the Feature Group API, which eliminates the guesswork of feature compatibility. Based on Android 15’s feature combination query API, you can now confidently enable multiple features together, guaranteeing a stable camera session. The Feature Group currently supports: HDR (HLG), 60 fps, Preview Stabilization, and Ultra HDR. For instance, you can enable HDR, 60 fps, and Preview Stabilization simultaneously on Pixel 10 and Galaxy S25 series. Future enhancements are planned to include 4K recording and ultra-wide zoom. 

The feature group API enables two essential use cases:

Use Case 1: Prioritizing the Best Quality

If you want to capture using the best possible combination of features, you can provide a prioritized list. CameraX will attempt to enable them in order, selecting the first combination the device fully supports.

val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    preferredFeatureGroup = listOf(
        GroupableFeature.HDR_HLG10,
        GroupableFeature.FPS_60,
        GroupableFeature.PREVIEW_STABILIZATION
    )
).apply {
    // (Optional) Get a callback with the enabled features to update your UI.
    setFeatureSelectionListener { selectedFeatures ->
        updateUiIndicators(selectedFeatures)
    }
}
processCameraProvider.bindToLifecycle(activity, cameraSelector, sessionConfig)

In this example, CameraX tries to enable features in this order:

  1. HDR + 60 FPS + Preview Stabilization

  2. HDR + 60 FPS

  3. HDR + Preview Stabilization

  4. HDR

  5. 60 FPS + Preview Stabilization

  6. 60 FPS

  7. Preview Stabilization

  8. None

Use Case 2: Building a User-Facing Settings UI

You can now accurately reflect which feature combinations are supported in your app’s settings UI, disabling toggles for unsupported options.

To determine whether to gray out a toggle, use the following code to check for feature combination support. First, query the status of every individual feature. Then, once a feature is enabled, re-query the remaining features against the enabled set to see whether their toggles must now be grayed out due to compatibility constraints.

fun disableFeatureIfNotSupported(
    enabledFeatures: Set<GroupableFeature>,
    featureToCheck: GroupableFeature
) {
    val sessionConfig = SessionConfig(
        useCases = useCases,
        requiredFeatureGroup = enabledFeatures + featureToCheck
    )
    val isSupported = cameraInfo.isFeatureGroupSupported(sessionConfig)

    if (!isSupported) {
        // Disable the toggle for featureToCheck.
    }
}

Please refer to the Feature Group blog post for more information. 

More Video Enhancements

  • Concurrent Camera Improvements: With CameraX 1.5.1, you can now bind Preview + ImageCapture + VideoCapture use cases concurrently for each SingleCameraConfig in non-composition mode. Additionally, in composition mode (same use cases with CompositionSettings), you can now set the CameraEffect that is applied to the final composition result.

  • Dynamic Muting: You can now start a recording in a muted state using PendingRecording.withAudioEnabled(boolean initialMuted) and allow the user to unmute later using Recording.mute(boolean muted), as shown in the sketch after this list.

  • Improved Insufficient Storage Handling: CameraX now reliably dispatches the VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE error, allowing your app to gracefully handle low storage situations and inform the user.

  • Low Light Boost: On supported devices (like the Pixel 10 series), you can enable CameraControl.enableLowLightBoostAsync to automatically brighten the preview and video streams in dark environments.
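
To make these bullets concrete, here is a hedged sketch that starts a recording muted, surfaces the insufficient-storage error, and enables Low Light Boost. It assumes the recorder, outputOption, context, executor, and camera objects from the earlier examples; showLowStorageMessage() is a hypothetical UI helper.

// Start with audio enabled but initially muted.
val recording = recorder.prepareRecording(context, outputOption)
    .withAudioEnabled(/* initialMuted = */ true)
    .start(executor) { event ->
        if (event is VideoRecordEvent.Finalize &&
            event.error == VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE
        ) {
            // Storage ran out: finalize gracefully and inform the user.
            showLowStorageMessage() // hypothetical UI helper
        }
    }

// Later, when the user taps the mic button, unmute the ongoing recording.
recording.mute(/* muted = */ false)

// On supported devices (e.g. the Pixel 10 series), brighten dark scenes.
camera.cameraControl.enableLowLightBoostAsync(true)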

Professional-Grade Image Capture

CameraX 1.5 brings major upgrades to ImageCapture for developers who demand maximum quality and flexibility.

Unleash Creative Control with DNG (RAW) Capture

For complete control over post-processing, CameraX now supports DNG (RAW) capture. This gives you access to the unprocessed, uncompressed image data directly from the camera sensor, enabling professional-grade editing and color grading. The API supports capturing the DNG file alone, or capturing simultaneous JPEG and DNG outputs. See the sample code below for how to capture JPEG and DNG files simultaneously.

val capabilities = ImageCapture.getImageCaptureCapabilities(cameraInfo)
val imageCapture = ImageCapture.Builder().apply {
    if (capabilities.supportedOutputFormats
             .contains(OUTPUT_FORMAT_RAW_JPEG)) {
        // Capture both RAW and JPEG formats.
        setOutputFormat(OUTPUT_FORMAT_RAW_JPEG)
    }
}.build()
// ... bind imageCapture to lifecycle ...


// Provide separate output options for each format.
val outputOptionRaw = /* ... configure for image/x-adobe-dng ... */
val outputOptionJpeg = /* ... configure for image/jpeg ... */
imageCapture.takePicture(
    outputOptionRaw,
    outputOptionJpeg,
    executor,
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(results: OutputFileResults) {
            // This callback is invoked twice: once for the RAW file
            // and once for the JPEG file.
        }

        override fun onError(exception: ImageCaptureException) {}
    }
)
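
For context, one plausible way to fill in those output options is ImageCapture.OutputFileOptions backed by MediaStore, with a distinct MIME type per format. The file names and collection here are illustrative choices, not API requirements; contentResolver comes from your Context.

val collection = MediaStore.Images.Media.EXTERNAL_CONTENT_URI

// DNG output: a MediaStore entry with the RAW MIME type.
val rawValues = ContentValues().apply {
    put(MediaStore.MediaColumns.DISPLAY_NAME, "capture.dng") // illustrative
    put(MediaStore.MediaColumns.MIME_TYPE, "image/x-adobe-dng")
}
val outputOptionRaw = ImageCapture.OutputFileOptions
    .Builder(contentResolver, collection, rawValues)
    .build()

// JPEG output: a standard JPEG entry alongside it.
val jpegValues = ContentValues().apply {
    put(MediaStore.MediaColumns.DISPLAY_NAME, "capture.jpg") // illustrative
    put(MediaStore.MediaColumns.MIME_TYPE, "image/jpeg")
}
val outputOptionJpeg = ImageCapture.OutputFileOptions
    .Builder(contentResolver, collection, jpegValues)
    .build()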

Ultra HDR for Camera Extensions

Get the best of both worlds: the stunning computational photography of Camera Extensions (like Night Mode) combined with the brilliant color and dynamic range of Ultra HDR. This feature is now supported on many recent premium Android phones, such as the Pixel 9/10 series and Samsung S24/S25 series.

// Support Ultra HDR when an Extension is enabled.

val extensionsEnabledCameraSelector = extensionsManager
    .getExtensionEnabledCameraSelector(
        CameraSelector.DEFAULT_BACK_CAMERA, ExtensionMode.NIGHT)

val imageCapabilities = ImageCapture.getImageCaptureCapabilities(
    cameraProvider.getCameraInfo(extensionsEnabledCameraSelector))

val imageCapture = ImageCapture.Builder()
    .apply {
        if (imageCapabilities.supportedOutputFormats
                .contains(OUTPUT_FORMAT_JPEG_ULTRA_HDR)) {
            setOutputFormat(OUTPUT_FORMAT_JPEG_ULTRA_HDR)
        }
    }.build()

Core API and Usability Enhancements

A New Way to Configure: SessionConfig

As seen in the examples above, SessionConfig is a new concept in CameraX 1.5. It centralizes configuration and simplifies the API in two key ways:

  1. No More Manual unbind() Calls: CameraX APIs are lifecycle-aware, so use cases are implicitly unbound when the activity or other LifecycleOwner is destroyed. Previously, though, updating use cases or switching cameras still required calling unbind() or unbindAll() before rebinding. With CameraX 1.5, when you bind a new SessionConfig, CameraX seamlessly updates the session for you, eliminating the need for unbind calls.

  2. Deterministic Frame Rate Control: The new SessionConfig API introduces a deterministic way to manage the frame rate. Unlike the previous setTargetFrameRate, which was only a hint, this new method guarantees that the specified frame rate range will be applied upon successful configuration. To ensure accuracy, you must query supported frame rates using CameraInfo.getSupportedFrameRateRanges(SessionConfig). By passing the full SessionConfig, CameraX can accurately determine the supported ranges based on the stream configurations; see the sketch after this list.
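
For illustration, here is a minimal sketch of that flow under one assumption: that SessionConfig accepts a frameRateRange parameter alongside useCases. The parameter name is an assumption based on the description above, not a verified signature; consult the 1.5 reference docs.

// Build a config first so CameraX can evaluate the real stream
// configuration when reporting supported frame rate ranges.
val baseConfig = SessionConfig(useCases = listOf(preview, videoCapture))
val supportedRanges = cameraInfo.getSupportedFrameRateRanges(baseConfig)

// Pick a range that reaches 60 fps if one is guaranteed, else fall back.
val desiredRange = supportedRanges.firstOrNull { it.upper == 60 }
    ?: supportedRanges.first()

// Re-bind with the chosen range. Unlike setTargetFrameRate (a hint),
// this range is guaranteed once the session configures successfully.
// NOTE: frameRateRange as a constructor parameter is an assumption.
val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    frameRateRange = desiredRange
)
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, sessionConfig)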

Camera-Compose is Now Stable

We know how much you enjoy Jetpack Compose, and we’re excited to announce that the camera-compose library is now stable at version 1.5.1! This release includes critical bug fixes related to CameraXViewfinder usage with Compose features like moveableContentOf and Pager, as well as resolving a preview stretching issue. We will continue to add more features to camera-compose in future releases.

ImageAnalysis and CameraControl Improvements

  • Torch Strength Adjustment: Gain fine-grained control over the device’s torch with new APIs. You can query the maximum supported strength using CameraInfo.getMaxTorchStrengthLevel() and then set the desired level with CameraControl.setTorchStrengthLevel(); see the sketch after this list.

  • NV21 Support in ImageAnalysis: You can now request the NV21 image format directly from ImageAnalysis, simplifying integration with other libraries and APIs. This is enabled by invoking ImageAnalysis.Builder.setOutputImageFormat(OUTPUT_IMAGE_FORMAT_NV21).
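
A brief sketch combining both controls, assuming camera is the Camera returned by bindToLifecycle and executor is a background executor:

// Torch strength: query the maximum, then drive the level directly.
val maxStrength = camera.cameraInfo.getMaxTorchStrengthLevel()
if (maxStrength > 1) {
    // Illustrative choice: run the torch at roughly half strength.
    camera.cameraControl.setTorchStrengthLevel(maxStrength / 2)
}

// NV21 output: request the format directly from ImageAnalysis.
val imageAnalysis = ImageAnalysis.Builder()
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_NV21)
    .build()
imageAnalysis.setAnalyzer(executor) { imageProxy ->
    // imageProxy now delivers NV21 frames for downstream libraries.
    imageProxy.close()
}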

Get Started Today

Update your dependencies to CameraX 1.5 today and explore the exciting new features. We can’t wait to see what you build.

To use CameraX 1.5, add the following dependencies to your libs.versions.toml. (We recommend 1.5.1, which contains many critical bug fixes and the concurrent camera improvements.)

[versions]
camerax = "1.5.1"

[libraries]
..
androidx-camera-core = { module = "androidx.camera:camera-core", version.ref = "camerax" }
androidx-camera-compose = { module = "androidx.camera:camera-compose", version.ref = "camerax" }
androidx-camera-view = { module = "androidx.camera:camera-view", version.ref = "camerax" }
androidx-camera-lifecycle = { group = "androidx.camera", name = "camera-lifecycle", version.ref = "camerax" }
androidx-camera-camera2 = { module = "androidx.camera:camera-camera2", version.ref = "camerax" }
androidx-camera-extensions = { module = "androidx.camera:camera-extensions", version.ref = "camerax" }

And then add these to your module build.gradle.kts dependencies:

dependencies {
  ..
  implementation(libs.androidx.camera.core)
  implementation(libs.androidx.camera.lifecycle)
  implementation(libs.androidx.camera.camera2)
  implementation(libs.androidx.camera.view) // for PreviewView
  implementation(libs.androidx.camera.compose) // for Compose UI
  implementation(libs.androidx.camera.extensions) // for Extensions
}

Have questions or want to connect with the CameraX team? Join the CameraX developer discussion group or file a bug report.

Beyond Single Features: Guaranteeing Feature Combinations With CameraX 1.5


Posted by Tahsin Masrur – Software Engineer

    

Modern camera apps are defined by powerful, overlapping features. Users expect to record video with stunning HDR, capture fluid motion at 60 FPS, and get buttery-smooth footage with Preview Stabilization—often all at the same time.

As developers, we know the reality is more complicated. How can you guarantee that a specific device actually supports a given combination? Until now, enabling multiple features was often a gamble. You could check for individual feature support, but combining them could lead to undefined behavior or, worse, a failed camera session.  This uncertainty forces developers to be conservative, which prevents users on capable devices from accessing the best possible experience.

For instance, very few premium devices reliably support HDR and 60 FPS video simultaneously. Consequently, most apps avoid enabling both at once to prevent a poor user experience on the majority of phones.

To address this, we’re introducing Feature Group in CameraX – a new API designed to eliminate this guesswork. You can now query whether a specific combination of features is supported before configuring the camera, or simply tell CameraX your priorities and let it enable the best-supported combination for you.

For Those New to CameraX

Before we dive into the new Feature Group API, let’s quickly recap what CameraX is. CameraX is a Jetpack library built to make camera app development easier. It provides a consistent, easy-to-use API surface that works across most Android devices, with backward compatibility to Android 6.0 (API level 23). If you are new to CameraX, we recommend checking out the official documentation and trying the codelab to get started.

What You Can Build with the Feature Group API

You no longer need to gamble on feature combinations and can confidently deliver the best possible camera experiences – like simultaneous HDR and 60 FPS video on capable hardware (e.g. a Pixel 10 Pro) – while gracefully avoiding errors on devices that can’t support the combination.

Pixel 10 Pro enabling both HDR and 60 FPS simultaneously
On an older device where HDR and 60 FPS can’t run simultaneously, only HDR is enabled while the 60 FPS option is disabled.

With the Feature Group API, you can:

  • Build smarter, dynamic UIs: Intelligently enable or disable settings in your UI based on real-time hardware support. For example, if a user enables HDR, you can instantly gray out and disable the 60 FPS option if the combination isn’t supported on that device. 

  • Deliver a reliable “High-Quality” mode: Configure the camera with a prioritized list of desired features. CameraX automatically finds and enables the best-supported combination for any given device, ensuring a great result without complex, device-specific logic.

  • Prevent camera session failures: By verifying support beforehand, you prevent the camera from attempting to configure an unsupported combination, eliminating a common source of crashes and offering a smooth user experience.

How It Works: The Core Components

The new API is centered around key additions to SessionConfig and CameraInfo.

  1. GroupableFeature: This API introduces a set of predefined groupable features, such as HDR_HLG10, FPS_60, PREVIEW_STABILIZATION, and IMAGE_ULTRA_HDR. Due to computational limitations, only a specific set of features can be grouped with the high degree of reliability this API provides. We are actively working to expand this list and will introduce support for more features in future releases.

  2. New SessionConfig Parameters: This class, used for starting a camera session, now accepts two new parameters:

  • requiredFeatureGroup: Use this for features that must be supported for the configuration to succeed – ideal for features that a user explicitly enables, such as toggling an ‘HDR’ switch. To ensure a deterministic and consistent experience, the bindToLifecycle call will throw an IllegalArgumentException if the requested combination is not supported, rather than silently ignoring a feature request. The CameraInfo#isFeatureGroupSupported API (details below) should be used to query this result beforehand.

  • preferredFeatureGroup: Use this for features that are desirable but optional, for example when you want to implement a default “High-Quality” mode. You provide a list of your desired features ordered according to your priorities, and CameraX automatically enables the highest-priority combination that the device supports.

  3. CameraInfo#isFeatureGroupSupported(): This is the core query method for explicitly checking whether a feature group is supported, well-suited for offering only supported feature options to users in your app UI. You pass it a SessionConfig, and it returns a boolean indicating whether the combination is supported. If you intend to bind a SessionConfig with required features, you should use this API first to ensure the combination is supported.

Implementation in Practice

Let’s look at how to use these components to build a better camera experience.

Scenario 1: “Best Effort” High-Quality Mode

If you want to enable the best possible features by default, you can provide a prioritized list to preferredFeatureGroup. In this example, we tell CameraX to prioritize HDR, then 60 FPS, and finally Preview Stabilization. CameraX handles the complexity of checking all possible combinations and choosing the best one that the device supports.

For instance, if a device can handle HDR and 60 FPS together but not with Preview Stabilization, CameraX will enable the first two and discard the third. This way, you get the best possible experience without writing complex, device-specific checks.

cameraProvider.bindToLifecycle(
    lifecycleOwner,
    cameraSelector,
    SessionConfig(
        useCases = listOf(preview, videoCapture),
        // The order of features in this list determines their priority.
        // CameraX will enable the best-supported combination based on these
        // priorities: HDR_HLG10 > FPS_60 > PREVIEW_STABILIZATION.
        preferredFeatureGroup =
            listOf(HDR_HLG10, FPS_60, PREVIEW_STABILIZATION),
    ).apply {
        // (Optional) Get a callback with the enabled features
        // to update your UI.
        setFeatureSelectionListener { selectedFeatures ->
            updateUiIndicators(selectedFeatures)
        }
    }
)

For this code snippet, CameraX will attempt to enable feature combinations in the following priority order, selecting the first one the device fully supports:

  1. HDR + 60 FPS + Preview Stabilization

  2. HDR + 60 FPS

  3. HDR + Preview Stabilization

  4. HDR

  5. 60 FPS + Preview Stabilization

  6. 60 FPS

  7. Preview Stabilization

  8. None of the above features

Scenario 2: Building a Reactive UI

To create a UI that responds to user selections and prevents users from selecting an unsupported feature combination, you can query for support directly. The function below checks which features are incompatible with the user’s current selections, allowing you to disable the corresponding UI elements.

/**
 * Returns the set of features that are NOT supported in combination
 * with the currently selected features.
 */
fun getUnsupportedFeatures(
    currentFeatures: Set<GroupableFeature>
): Set<GroupableFeature> {
    val unsupportedFeatures = mutableSetOf<GroupableFeature>()
    val appFeatureOptions = setOf(HDR_HLG10, FPS_60, PREVIEW_STABILIZATION)

    // Iterate over every available feature option in your app.
    appFeatureOptions.forEach { featureOption ->
        // Skip features the user has already selected.
        if (currentFeatures.contains(featureOption)) return@forEach

        // Check if adding this new feature is supported.
        val isSupported = cameraInfo.isFeatureGroupSupported(
            SessionConfig(
                useCases = useCases,
                // Check the new feature on top of existing ones.
                requiredFeatureGroup = currentFeatures + featureOption
            )
        )

        if (!isSupported) {
            unsupportedFeatures.add(featureOption)
        }
    }

    return unsupportedFeatures
}

You can then wire this logic into your ViewModel or UI controller to react to user input and re-bind the camera with a guaranteed-to-work configuration.

// Invoked when the user turns some feature on/off.
fun onFeatureChange(currentFeatures: Set<GroupableFeature>) {
    // Identify features that are unsupported with the current selection.
    val unsupportedFeatures = getUnsupportedFeatures(currentFeatures)

    // Update the app UI so that users can't enable them.
    updateDisabledFeatures(unsupportedFeatures)

    // Bind a session config with the new set of features. Since users can
    // only ever select supported features, there is no need to explicitly
    // check whether the feature group is supported.
    cameraProvider.bindToLifecycle(
        lifecycleOwner,
        cameraSelector,
        SessionConfig(
            useCases = listOf(preview, videoCapture),
            requiredFeatureGroup = currentFeatures,
        ).apply {
            setFeatureSelectionListener { selectedFeatures ->
                // Update UI to let users know which features are now selected.
                updateUiIndicators(selectedFeatures)
            }
        }
    )
}


To see these concepts in a working application, you can explore our internal test app. It provides a complete implementation of both the “best effort” and “reactive UI” scenarios discussed above.

Please note: This is a test application and not an officially supported sample. While it’s a great reference for the Feature Group API, it has not been polished for production use.

Get Started Today

The Feature Group API removes the ambiguity of working with advanced camera capabilities. By providing a deterministic way to query for feature support, you can build more powerful and reliable camera apps with confidence.

The API is available as experimental in CameraX 1.5 and is scheduled to become fully stable in the 1.6 release, with more support and improvements on the way.

To learn more, check out the official documentation. We can’t wait to see what you create, and we look forward to your feedback. Please share your thoughts and report any issues.


HDR and User Interfaces


Posted by Alec Mouri – Software Engineer

As explained in What is HDR?, we can think of HDR as only referring to a luminance range brighter than SDR. When integrating HDR content into a user interface, you must be careful when your user interface is primarily SDR colors and assets. The human visual system adapts to perceived color based on the surrounding environment, which can lead to surprising results. We’ll look at one pertinent example.

Simultaneous Contrast

Consider the following image:

contrast example 1

Source: Wikipedia

This image shows two gray rectangles with different background colors. For most people viewing this image, the two gray rectangles appear to be different shades of gray: the topmost rectangle with a darker background appears to be a lighter shade than the bottommost rectangle with a lighter background.

But these are the same shades of gray! You can prove this to yourself by using your favorite color picking tool or by looking at the below image:

contrast example 2

This illustrates a visual phenomenon called simultaneous contrast. Readers who are interested in the biological explanation may learn more here.

Nearby differences in color are therefore “emphasized”: colors appear darker when immediately next to brighter colors. That same color would appear lighter when immediately next to darker colors.

Implications on Mixing HDR and SDR

The effect of simultaneous contrast affects the appearance of user interfaces that present a mixture of HDR and SDR content. The peak luminance allowed by HDR creates an effect of simultaneous contrast: the eye adapts* to a higher peak luminance (and oftentimes a higher average luminance in practice), which perceptually causes SDR content to appear dimmer even though its luminance has not technically changed at all. Users often describe this as the phone screen becoming “grey” or “washed out”.

We can see this phenomenon in the below image. The device on the right simulates how photos may appear with an SDR UI, if those photos were rendered as HDR. Note that the August photos look identical when compared side-by-side, but the quality of the SDR UI is visually degraded.

contrast example on Google Photos

When designing for HDR, applications need to consider how much SDR is shown on screen at any given time when controlling how bright HDR is allowed to be. A UI that is dominated by SDR, such as a gallery view where small amounts of HDR content are displayed, can suddenly appear darker than expected.

When building your UI, consider the impact of HDR on text legibility or the appearance of nearby SDR assets, and use the appropriate APIs provided by your platform to constrain HDR brightness, or even disable HDR. For example, a 2x headroom for HDR brightness may be acceptable to balance the quality of your HDR scene with your SDR elements. In contrast, a UI that is dominated by HDR, such as full-screen video without other UI elements on-top, does not need to consider this as strongly, as the focus of the UI is on the HDR content itself. In those situations, a 5x headroom (or higher, depending on content metadata such as UltraHDR‘s max_content_boost) may be more appropriate.

It might be tempting to “brighten” SDR content instead. Resist this temptation! This will cause your application to be too bright, especially if there are other applications or system UI elements on-screen.

How to control HDR headroom

Android 15 introduced a control for desired HDR headroom. You can have your application request that the system uses a particular HDR headroom based on the context around your desired UI:

    • If you only want to show SDR content, simply request no headroom.
    • If you only want to show HDR content, then request a high HDR headroom up to and according to the demands of the content.
    • If you want to show a mixture of HDR and SDR content, then you can request an intermediate headroom value accordingly. Typical headroom amounts would be around 2x for a mixed scene and 5-8x for a fully-HDR scene.

Here is some example usage:

// Required for the window to respect the desired HDR headroom.
// Note that the equivalent API on SurfaceView does NOT require
// COLOR_MODE_HDR to constrain headroom, if there is HDR content displayed
// on the SurfaceView.
window.colorMode = ActivityInfo.COLOR_MODE_HDR
// Illustrative values: different headroom values may be used depending on
// the desired headroom of the content AND particularities of the app's UI
// design.
window.desiredHdrHeadroom =
    if(/* SDR only */) {
        0f
    } else {
        if (/* Mixed, mostly SDR */) {
            1.5f
        } else {
            if ( /* Mixed, mostly HDR */) {
                3f
            } else { 
                /* HDR only */
                5f
            }
        }
    }

Other platforms also have APIs that give developers some control over constraining HDR content in their applications.

Web platforms have a coarser concept: the First Public Working Draft of the CSS Color HDR Module adds a constrained-high option to constrain the headroom for mixed HDR and SDR scenes. Within the Apple ecosystem, constrainedHigh is similarly coarse, reckoning with the challenges of displaying mixed HDR and SDR scenes on consumer displays.

If you are a developer who is considering supporting HDR, be thoughtful about how HDR interacts with your UI and use HDR headroom controls appropriately.


*There are other mechanisms the eye employs for light adaptation, like pupillary light reflex, which amplifies this visual phenomenon (brighter peak HDR light means the pupil constricts, which causes less light to hit the retina).

What is HDR?


Posted by John Reck – Software Engineer

For Android developers, delivering exceptional visual experiences is a continuous goal. High Dynamic Range (HDR) unlocks new possibilities, offering the potential for more vibrant and immersive content. Technologies like UltraHDR on Android are particularly compelling, providing the benefits of HDR displays while maintaining crucial backwards compatibility with SDR displays. On Android you can use HDR for both video and images.

Over the years, the term HDR has been used to signify a number of related, but ultimately distinct, visual fidelity features. Users encounter it in the context of camera features (exposure fusion) or as a marketing term for TVs and monitors (“HDR capable”). This conflates distinct features like wider color gamuts, increased bit depth, or enhanced contrast with HDR itself.

From an Android Graphics perspective, HDR primarily signifies higher peak brightness capability that extends beyond the conventional Standard Dynamic Range. Other perceived benefits often derive from standards such as HDR10 or Dolby Vision which also include the usage of wider color spaces, higher bit depths, and specific transfer functions.

In this article, we’ll establish the foundational color principles, then address common myths, clarify HDR’s role in the rendering pipeline, and examine how Android’s display technologies and APIs enable HDR experience.

The components of color

Understanding HDR begins with defining the three primary components that form the displayed volume of color: bit depth, transfer function, and color gamut. These describe the precision, scaling, and range of the color volume, respectively.

While a color model defines the format for encoding pixel values (e.g., RGB, YUV, HSL, CMYK, XYZ), RGB is typically assumed in a graphics context. The combination of a color model, a color gamut, and a transfer function constitutes a color space. Examples include sRGB, Display P3, Adobe RGB, BT.2020, and BT.2020 HLG. Numerous combinations of color gamut and transfer function are possible, leading to a variety of color spaces.

Components of color: bit depth + transfer function + color gamut + color model, with the last three making up the color space.

Bit Depth

Bit depth defines the precision of color representation. A higher bit depth allows for finer gradation between color values. In modern graphics, bit depth typically refers to bits per channel (e.g., an 8-bit image uses 8 bits for each red, green, blue, and optionally alpha channel).

Crucially, bit depth does not determine the overall range of colors (minimum and maximum values) an image can represent; this is set by the color gamut and, in HDR, the transfer function. Instead, increasing bit depth provides more discrete steps within that defined range, resulting in smoother transitions and reduced visual artifacts such as banding in gradients.

5-bit

5-bit color gradient showing distinct transition between color values

8-bit

8-bit color gradient showing smoother transition between color values

Although 8-bit is one of the most common formats in widespread usage, it’s not the only option. RAW images can be captured at 10, 12, 14, or 16 bits. PNG supports 16 bits. Games frequently use 16-bit floating point (FP16) instead of integer formats for intermediate render buffers. Modern GPU APIs like Vulkan even support 64-bit-per-channel RGBA formats, in both integer and floating point varieties, providing up to 256 bits per pixel.
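
To make the precision point concrete, here is a small illustrative calculation (not tied to any camera API): bit depth fixes the number of steps between the same endpoints, so raising it only shrinks the step size.

// Levels per channel and step size over a fixed 0.0..1.0 range.
// The range endpoints do not change with bit depth; only the spacing does.
fun levels(bits: Int): Int = 1 shl bits

listOf(5, 8, 10).forEach { bits ->
    val n = levels(bits)
    println("$bits-bit: $n levels, step ≈ ${"%.5f".format(1.0 / (n - 1))}")
}
// 5-bit:  32 levels,   step ≈ 0.03226 -> visible banding in gradients
// 8-bit:  256 levels,  step ≈ 0.00392 -> mostly smooth
// 10-bit: 1024 levels, step ≈ 0.00098 -> smoother still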

Transfer Function

A transfer function defines the mathematical relationship between a pixel’s stored numerical value and its final displayed luminance or color. In other words, the transfer function describes how to interpret the increments in values between the minimum and maximum. This function is essential because the human visual system’s response to light intensity is non-linear: we are more sensitive to changes in luminance at low light levels than at high light levels. A linear mapping from stored values to display luminance would therefore not use the available bits efficiently; there would be more precision than necessary in the brighter regions and too little in the darker regions, relative to what is perceptible. The transfer function compensates for this non-linearity by adjusting the luminance values to match the human visual response.

While some transfer functions are linear, most employ complex curves or piecewise functions to optimize image quality for specific displays or viewing conditions. sRGB, Gamma 2.2, HLG, and PQ are common examples, each prioritizing bit allocation differently across the luminance range.
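
As a concrete example, here is the standard sRGB decoding curve, a piecewise function with a linear segment for dark values and a power segment above it. This is the published sRGB math, shown purely for illustration.

import kotlin.math.pow

// Convert an sRGB-encoded value (0..1) to linear light (0..1).
// The linear segment below 0.04045 gives dark values extra precision.
fun srgbToLinear(encoded: Double): Double =
    if (encoded <= 0.04045) encoded / 12.92
    else ((encoded + 0.055) / 1.055).pow(2.4)

// Mid-gray encoded at 0.5 is only ~21% linear light: the curve spends
// more of the value range on dark tones, matching human sensitivity.
println(srgbToLinear(0.5)) // ≈ 0.214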

Color Gamut

Color gamut refers to the entire range of colors that a particular color space or device can accurately reproduce. It is typically a subset of the visible color spectrum, which encompasses all the colors that the human eye can perceive. Each color space (e.g., sRGB, Display P3, BT2020) defines its own unique gamut, establishing the boundaries for color representation.

A wider gamut signifies that the color space can represent a greater variety of colors, leading to richer and more vibrant images. However, simply having a larger gamut doesn’t always guarantee better color accuracy or a more vibrant result. The device or medium used to display the colors must also be capable of reproducing the full range of the gamut. When a display encounters colors outside its reproducible gamut, the typical handling method is clipping. This ensures that in-gamut colors are preserved accurately, as attempts to scale the color gamut may otherwise produce unpleasant results, particularly in regions where human vision is especially sensitive, such as skin tones.

HDR myths and realities

With an understanding of what forms the basic working color principles, it’s now time to evaluate some of the common claims of HDR and how they apply in a general graphics context.

Claim: HDR offers more vibrant colors

This claim comes from HDR video typically using the BT2020 color space, which is indeed a wide color volume. However, there are several problems with this claim as a blanket statement.

The first is that images and graphics have been able to use wider color gamuts, such as Display P3 or Adobe RGB, for quite a long time now; this is not an advancement uniquely coupled to HDR. In JPEGs, for example, the gamut is defined by the ICC profile, which dates back to the early 1990s, although widespread adoption of ICC profile handling is somewhat more recent. Similarly, on the graphics rendering side, the usage of wider color spaces is fully decoupled from whether or not HDR is being used.

The second is that not all HDR video uses such a wide gamut at all. Although HDR10 specifies the usage of BT2020, other HDR formats have since been created that do not.

The biggest issue, though, is one of capturing and displaying. Just because the format allows for the color gamut of BT2020 does not mean that the entire gamut is actually usable in practice. For example, current Dolby Vision mastering guidelines only require 99% coverage of the P3 gamut. This means that even for high-end professional content, authoring content beyond Display P3 is not expected to be possible. Similarly, the vast majority of consumer displays today are only capable of displaying either the sRGB or Display P3 color gamuts. Given that the typical recommendation for out-of-gamut colors is to clip them, even though HDR10 allows for up to the BT2020 gamut, the widest gamut in practice is still going to be P3.

Thus this claim should really be considered something offered by HDR video profiles when compared to SDR video profiles specifically, although SDR videos could use wider gamuts if desired without using an HDR profile.

Claim: HDR offers more contrast / better black detail

One of the claimed benefits of HDR is darker blacks (e.g. Dolby Vision Demo #3 – Core Universe – 4K HDR or “Dark scenes come alive with darker darks”) or more detail in the dark regions. This is even reflected in BT.2390: “HDR also allows for lower black levels than traditional SDR, which was typically in the range between 0.1 and 1.0 cd/m2 for cathode ray tubes (CRTs) and is now in the range of 0.1 cd/m2 for most standard SDR liquid crystal displays (LCDs).” In reality, however, every display already renders SDR black as the blackest black it is physically capable of. There is thus no difference between HDR and SDR in terms of how dark they can reach; both bottom out at the same level on the same display.

As for contrast ratio, which is the ratio between the brightest white and the darkest black, it is overwhelmingly influenced by how dark a display can get. With the prevalence of OLED displays, particularly in the mobile space, SDR and HDR end up with the same contrast ratio: both have essentially perfect black levels, giving them infinite contrast ratios.

The PQ transfer function does allocate more bits to the dark region, so in theory it can convey better black detail. However, this is a unique aspect of PQ rather than a feature of HDR generally. HLG is increasingly the more common HDR format, as it is preferred by mobile cameras as well as several high-end cameras. And even when PQ content contains this detail, that doesn’t mean the HDR display can necessarily display it, as discussed in Display Realities.

Claim: HDR offers higher bit depth

This claim comes from HDR10 and some, but not all, Dolby Vision profiles using 10 or 12 bits for the video stream. As with more vibrant colors, this is really just an aspect of particular video profiles rather than something HDR itself inherently provides. The usage of 10 bits or more is otherwise not uncommon in imaging, particularly in higher-end photography, with RAW and TIFF image formats capable of holding 10, 12, 14, or 16 bits. Similarly, PNG supports 16 bits, although that is rarely used.

Claim: HDR offers higher peak brightness

This, then, is all that HDR really is. But what does “higher peak brightness” really mean? After all, SDR displays had been pushing ever-increasing brightness levels before HDR became significant, particularly for sunlight viewing. And even setting that aside, what is the difference between “HDR” and just “SDR with the brightness slider cranked up”? The answer is that we define “HDR” as having a brightness range bigger than SDR, and we think of SDR as the range driven by autobrightness to be comfortably readable in the current ambient conditions. Thus we define HDR in terms of things like “HDR headroom” or the “HDR/SDR ratio” to indicate that it is a floating region relative to SDR. This makes brightness policies easier to reason about. However, it does complicate the interaction with traditional HDR such as that used in video, specifically HLG and PQ content.

PQ/HLG transfer functions

PQ and HLG represent the two most common approaches to HDR in video content. They are two transfer functions that embody different concepts of what “HDR” is. PQ, published as SMPTE ST 2084:2014, is defined in terms of absolute nits on the display. It encodes from 0 to 10,000 nits and expects to be mastered for a particular reference viewing environment. HLG takes a different approach, opting for a typical gamma curve for part of the range before switching to a logarithmic curve for the brighter portion. This has a claimed nominal peak brightness of 1000 nits in the reference environment, although it is not defined in absolute luminance terms as PQ is.

Industry-wide specifications have recently formalized the brightness range of both PQ- and HLG-encoded content in relation to SDR. ITU-R BT.2408-8 defines the reference white level for graphics to be 203 nits. ISO/TS 22028-5 and ISO/PRF 21496-1 have followed suit; 21496-1 in particular defines HDR headroom in terms of nominal peak luminance relative to a diffuse white luminance of 203 nits.
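
As a quick worked example of that framing (illustrative arithmetic only, not an API): headroom is simply the ratio of nominal peak luminance to the 203-nit diffuse white.

// HDR headroom relative to the 203-nit reference (diffuse) white.
fun hdrHeadroom(peakNits: Double, diffuseWhiteNits: Double = 203.0): Double =
    peakNits / diffuseWhiteNits

println(hdrHeadroom(1000.0)) // HLG nominal peak: ≈ 4.93x headroom
println(hdrHeadroom(203.0))  // peak equals diffuse white: 1.0, i.e. SDR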

The realities of modern displays, discussed below, as well as typical viewing environments, mean that traditional HDR video is almost never displayed as intended. A display’s HDR headroom may evaporate under bright viewing conditions, demanding on-demand tonemapping into SDR. Traditional HDR video encodes a fixed headroom, while modern displays employ a dynamic headroom, resulting in vast differences in video quality even on the same display.

Display Realities

So far, most of the discussion around HDR has been from the perspective of the content. However, users consume content on a display, which has its own capabilities and, more importantly, limits. A high-end mobile display is likely to have characteristics such as gamma 2.2, P3 gamut, and a peak brightness of around 2000 nits. If we then consider something like HDR10, there are mismatches in bit usage prioritization:

    • PQ’s increased bit allocation at the lower ranges ends up being wasted
    • The usage of BT2020 ends up spending bits on parts of a gamut that will never be displayed
    • Encoding up to 10,000 nits of brightness is similarly headroom that’s not utilized

These mismatches are not inherently a problem, but they mean that as 10-bit displays become more common, the existing 10-bit HDR video profiles are unable to take full advantage of the display’s capabilities. HDR video profiles are thus in the position of simultaneously being forward-looking while already being unable to maximize a current 10-bit display’s capabilities.

This is where technology such as Ultra HDR, or gainmaps in general, provides a compelling alternative. Despite sometimes using an 8-bit base image, because the gain layer that transforms it to HDR is specialized to the content and its particular range needs, it is more efficient with its bit usage, leading to results that still look stunning. And as that base image is upgraded to 10-bit with newer image formats such as AVIF, the effective bit usage is even better than that of typical HDR video codecs. These approaches are therefore not evolutionary stepping stones to “true HDR”, but rather an improvement on HDR, in addition to having better backwards compatibility. Similarly, the Android UI toolkit’s usage of the extendedRangeBrightness API still primarily happens in 8-bit space. Because the rendering is tailored to the specific display and current conditions, it is still possible to have a good HDR experience despite the usage of RGBA_8888.

Unlocking HDR on Android: Next steps

High Dynamic Range (HDR) offers a real advancement in visual fidelity for Android developers, moving beyond the traditional constraints of Standard Dynamic Range (SDR) by enabling higher peak brightness.

By understanding the core components of color – bit depth, transfer function, and color gamut – and debunking common myths, developers can leverage technologies like Ultra HDR to deliver truly immersive experiences that are both visually stunning and backward compatible.

In our next article, we’ll delve into the nuances of HDR and user intent, exploring how to optimize your content for diverse display capabilities and viewing environments.
