Create a spotlight effect with CameraX and Jetpack Compose | by Jolanda Verhoef | Android Developers | Jan, 2025


Part 3 of Unlocking the Power of CameraX in Jetpack Compose

Hi there! Welcome back to our series on CameraX and Jetpack Compose. In the previous posts, we covered the fundamentals of setting up a camera preview and added tap-to-focus functionality.

  • 🧱 Part 1: Building a basic camera preview using the new camera-compose artifact. We covered permission handling and basic integration.
  • 👆 Part 2: Using the Compose gesture system, graphics, and coroutines to implement a visual tap-to-focus.
  • 🔦 Part 3 (this post): Exploring how to overlay Compose UI components on top of your camera preview for a richer user experience.
  • 📂 Part 4: Using adaptive APIs and the Compose animation framework to smoothly animate to and from tabletop mode on foldable phones.

In this post, we'll dive into something a bit more visually engaging: implementing a spotlight effect on top of our camera preview, using face detection as the basis for the effect. Why, you say? I'm not sure. But it sure looks cool 🙂. And, more importantly, it demonstrates how we can easily translate sensor coordinates into UI coordinates, allowing us to use them in Compose!

First, let's modify the CameraPreviewViewModel to enable face detection. We'll use the Camera2Interop API, which allows us to interact with the underlying Camera2 API from CameraX. This gives us the opportunity to use camera features that aren't exposed by CameraX directly. We need to make the following changes:

  • Create a StateFlow that contains the face bounds as a list of Rects.
  • Set the STATISTICS_FACE_DETECT_MODE capture request option to FULL, which enables face detection.
  • Set a CaptureCallback to get the face information from the capture result.
class CameraPreviewViewModel : ViewModel() {
    ...
    private val _sensorFaceRects = MutableStateFlow(listOf<Rect>())
    val sensorFaceRects: StateFlow<List<Rect>> = _sensorFaceRects.asStateFlow()

    private val cameraPreviewUseCase = Preview.Builder()
        .apply {
            Camera2Interop.Extender(this)
                .setCaptureRequestOption(
                    CaptureRequest.STATISTICS_FACE_DETECT_MODE,
                    CaptureRequest.STATISTICS_FACE_DETECT_MODE_FULL
                )
                .setSessionCaptureCallback(object : CameraCaptureSession.CaptureCallback() {
                    override fun onCaptureCompleted(
                        session: CameraCaptureSession,
                        request: CaptureRequest,
                        result: TotalCaptureResult
                    ) {
                        super.onCaptureCompleted(session, request, result)
                        // Map each detected face's bounds to a Compose Rect
                        result.get(CaptureResult.STATISTICS_FACES)
                            ?.map { face -> face.bounds.toComposeRect() }
                            ?.toList()
                            ?.let { faces -> _sensorFaceRects.update { faces } }
                    }
                })
        }
        .build().apply {
            ...
        }

With these changes in place, our view model now emits a list of Rect objects representing the bounding boxes of detected faces in sensor coordinates.
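One note on device support: the FULL face detect mode isn't available on every camera. As a small aside that isn't part of the original post, here's a minimal sketch of how you could query the supported modes through Camera2 before relying on STATISTICS_FACES; the cameraManager and cameraId parameters are assumptions standing in for your own setup code.

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CameraMetadata

// Sketch: returns true when the given camera reports support for
// FULL face detection in its available statistics modes.
fun supportsFullFaceDetection(cameraManager: CameraManager, cameraId: String): Boolean {
    val modes = cameraManager.getCameraCharacteristics(cameraId)
        .get(CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES)
    return modes?.contains(CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL) == true
}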

The bounding boxes of detected faces that we saved in the last section use coordinates in the sensor coordinate system. To draw the bounding boxes in our UI, we need to transform these coordinates so that they're correct in the Compose coordinate system. We need to:

  • Transform the sensor coordinates into preview buffer coordinates
  • Transform the preview buffer coordinates into Compose UI coordinates

These transformations are done using transformation matrices, and each transformation has its own matrix: the TransformationInfo.sensorToBufferTransform matrix handles the sensor-to-buffer step, while the inverse of the CoordinateTransformer.transformMatrix handles the buffer-to-UI step.

We can create a helper method that does the transformation for us:

private fun List<Rect>.transformToUiCoords(
    transformationInfo: SurfaceRequest.TransformationInfo?,
    uiToBufferCoordinateTransformer: MutableCoordinateTransformer
): List<Rect> = this.map { sensorRect ->
    // The transformer maps UI -> buffer; invert it to get buffer -> UI
    val bufferToUiTransformMatrix = Matrix().apply {
        setFrom(uiToBufferCoordinateTransformer.transformMatrix)
        invert()
    }

    val sensorToBufferTransformMatrix = Matrix().apply {
        transformationInfo?.let {
            setFrom(it.sensorToBufferTransform)
        }
    }

    // First sensor -> buffer, then buffer -> UI
    val bufferRect = sensorToBufferTransformMatrix.map(sensorRect)
    val uiRect = bufferToUiTransformMatrix.map(bufferRect)

    uiRect
}

  • We iterate through the list of detected faces, and for each face execute the transformation.
  • The CoordinateTransformer.transformMatrix that we get from our CameraXViewfinder transforms coordinates from UI to buffer coordinates by default. In our case, we want the matrix to work the other way around, transforming buffer coordinates into UI coordinates. Therefore, we use the invert() method to invert the matrix (there's a small standalone sketch of this inversion after this list).
  • We first transform the face from sensor coordinates to buffer coordinates using the sensorToBufferTransformMatrix, and then transform those buffer coordinates to UI coordinates using the bufferToUiTransformMatrix.
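To build some intuition for that inversion, here's a tiny self-contained sketch (not from the original code) using a plain 2x scale as a stand-in for the UI-to-buffer transform. Inverting it turns a buffer-space point back into a UI-space point:

import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.graphics.Matrix

fun main() {
    // Pretend the UI -> buffer transform is a simple 2x scale
    val uiToBuffer = Matrix().apply { scale(2f, 2f) }
    // Inverting it gives us the buffer -> UI direction
    val bufferToUi = Matrix().apply {
        setFrom(uiToBuffer)
        invert()
    }
    // A buffer-space point at (200, 100) maps back to (100, 50) in UI space
    println(bufferToUi.map(Offset(200f, 100f)))
}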

Now, let's update the CameraPreviewContent composable to draw the spotlight effect. We'll use a Canvas composable to draw a gradient mask over the preview, keeping the detected faces visible:

@Composable
fun CameraPreviewContent(
    viewModel: CameraPreviewViewModel,
    modifier: Modifier = Modifier,
    lifecycleOwner: LifecycleOwner = LocalLifecycleOwner.current
) {
    val surfaceRequest by viewModel.surfaceRequest.collectAsStateWithLifecycle()
    val sensorFaceRects by viewModel.sensorFaceRects.collectAsStateWithLifecycle()
    val transformationInfo by
        produceState<SurfaceRequest.TransformationInfo?>(null, surfaceRequest) {
            try {
                surfaceRequest?.setTransformationInfoListener(Runnable::run) { transformationInfo ->
                    value = transformationInfo
                }
                awaitCancellation()
            } finally {
                surfaceRequest?.clearTransformationInfoListener()
            }
        }
    val shouldSpotlightFaces by remember {
        derivedStateOf { sensorFaceRects.isNotEmpty() && transformationInfo != null }
    }
    val spotlightColor = Color(0xDDE60991)
    ..

    surfaceRequest?.let { request ->
        val coordinateTransformer = remember { MutableCoordinateTransformer() }
        CameraXViewfinder(
            surfaceRequest = request,
            coordinateTransformer = coordinateTransformer,
            modifier = ..
        )

        AnimatedVisibility(shouldSpotlightFaces, enter = fadeIn(), exit = fadeOut()) {
            Canvas(Modifier.fillMaxSize()) {
                // Read and transform the face rects inside the draw block
                val uiFaceRects = sensorFaceRects.transformToUiCoords(
                    transformationInfo = transformationInfo,
                    uiToBufferCoordinateTransformer = coordinateTransformer
                )

                // Fill the whole space with the color
                drawRect(spotlightColor)
                // Then carve out each face to make it transparent
                uiFaceRects.forEach { faceRect ->
                    drawRect(
                        Brush.radialGradient(
                            0.4f to Color.Black, 1f to Color.Transparent,
                            center = faceRect.center,
                            radius = faceRect.minDimension * 2f,
                        ),
                        blendMode = BlendMode.DstOut
                    )
                }
            }
        }
    }
}

Here's how it works:

  • We collect the list of faces from the view model.
  • To make sure we're not recomposing the whole screen every time the list of detected faces changes, we use derivedStateOf to keep track of whether any faces are detected at all. This can then be used with AnimatedVisibility to animate the colored overlay in and out.
  • The surfaceRequest contains the information we need to transform sensor coordinates to buffer coordinates in the SurfaceRequest.TransformationInfo. We use the produceState function to set up a listener on the surface request, and clear this listener when the composable leaves the composition.
  • We use a Canvas to draw a translucent pink rectangle that covers the whole screen.
  • We defer the reading of the sensorFaceRects variable until we're inside the Canvas draw block. Then we transform the coordinates into UI coordinates.
  • We iterate over the detected faces, and for each face, we draw a radial gradient that makes the inside of the face rectangle transparent.
  • We use BlendMode.DstOut to make sure we're cutting the gradient out of the pink rectangle, creating the spotlight effect.

Note: When you switch the camera to DEFAULT_FRONT_CAMERA, you'll notice that the spotlight is mirrored! This is a known issue, tracked in the Google Issue Tracker.
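Until that fix lands, one possible stopgap (a hypothetical sketch, not from the original post, and based on the assumption that the front-camera offset is a pure horizontal flip in UI space) is to mirror each transformed face rect around the canvas width before drawing:

import androidx.compose.ui.geometry.Rect

// Hypothetical helper: flip a UI-space rect horizontally around the
// center of the canvas. Only apply this for the front-facing camera.
fun Rect.mirrorHorizontally(canvasWidth: Float): Rect = Rect(
    left = canvasWidth - right,
    top = top,
    right = canvasWidth - left,
    bottom = bottom
)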

With this code, we now have a fully functional spotlight effect that highlights detected faces. You can find the full code snippet here.

This effect is just the beginning: by using the power of Compose, you can create a myriad of visually stunning camera experiences. Being able to transform sensor and buffer coordinates into Compose UI coordinates and back means we can use all Compose UI features and integrate them seamlessly with the underlying camera system. With animations, advanced UI graphics, simple UI state management, and full gesture control, your imagination is the limit!

In the final post of the series, we'll dive into how to use adaptive APIs and the Compose animation framework to seamlessly transition between different camera UIs on foldable devices. Stay tuned!


