We heard from you that you love the functionality that both CameraX and Jetpack Compose offer, but that you want an idiomatic Compose API to build your camera UI. This year, our engineering team worked on two new Compose artifacts: a low-level viewfinder artifact and a high-level camera artifact. Both are currently available as alpha releases 🚀🚀🚀.
This blog post series explains how to integrate the camera-compose API into your app. Even more interestingly, it shows some of the fun UI experiences that the integration with Compose makes possible: all the great Compose features, like the adaptive APIs and animation support, work seamlessly with the camera preview.
Here’s a quick overview of what’s included in each post:
🧱 Part 1 (this post): Build a basic camera preview using the new camera-compose artifact. Covers permission handling and basic integration.
👆 Part 2: Implement visual tap-to-focus using the Compose gesture system, graphics, and coroutines.
🔎 Part 3: Explore how to overlay Compose UI elements on top of the camera preview for a richer user experience.
📂 Part 4: Run smooth animations to and from tabletop mode on a foldable phone using the adaptive APIs and the Compose animation framework.
After doing all this, your final app will look like this:
Plus, you can easily switch back and forth to tabletop mode.
By the end of this first post, you will have a functional camera viewfinder that you can expand on in subsequent parts of the series. We encourage you to write the code along with us; it's the best way to learn.
We assume you already have Compose set up in your app. If you want to follow along, simply create a new app in Android Studio. I typically use the latest canary version, since it includes the latest Compose templates (and I like living on the cutting edge 😀).
Add the following to your libs.versions.toml file:

[versions]
..
camerax = "1.5.0-alpha03"
accompanist = "0.36.0" # or whatever matches your Compose version

[libraries]
..
# Contains the basic camera functionality, such as SurfaceRequest
androidx-camera-core = { module = "androidx.camera:camera-core", version.ref = "camerax" }
# Contains the CameraXViewfinder composable
androidx-camera-compose = { module = "androidx.camera:camera-compose", version.ref = "camerax" }
# Allows the camera preview to be bound to the UI lifecycle
androidx-camera-lifecycle = { group = "androidx.camera", name = "camera-lifecycle", version.ref = "camerax" }
# The concrete camera implementation that renders the preview
androidx-camera-camera2 = { module = "androidx.camera:camera-camera2", version.ref = "camerax" }
# Helper library to request camera permissions
accompanist-permissions = { module = "com.google.accompanist:accompanist-permissions", version.ref = "accompanist" }
Then add these to the dependencies block of your module's build.gradle.kts:
dependencies {
    ..
    implementation(libs.androidx.camera.core)
    implementation(libs.androidx.camera.compose)
    implementation(libs.androidx.camera.lifecycle)
    implementation(libs.androidx.camera.camera2)
    implementation(libs.accompanist.permissions)
}
With these dependencies in place, we can request camera permissions and actually display the camera preview. Next, let's look at requesting the appropriate permissions.
The Accompanist permissions library makes it easy to request the camera permission. First, configure your AndroidManifest.xml:
<uses-permission android:name="android.permission.CAMERA" />
Now you can follow the library's instructions to request the permission:
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        enableEdgeToEdge()
        setContent {
            MyApplicationTheme {
                CameraPreviewScreen()
            }
        }
    }
}
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun CameraPreviewScreen(modifier: Modifier = Modifier) {
    val cameraPermissionState = rememberPermissionState(android.Manifest.permission.CAMERA)
    if (cameraPermissionState.status.isGranted) {
        CameraPreviewContent(modifier)
    } else {
        Column(
            modifier = modifier.fillMaxSize().wrapContentSize().widthIn(max = 480.dp),
            horizontalAlignment = Alignment.CenterHorizontally
        ) {
            val textToShow = if (cameraPermissionState.status.shouldShowRationale) {
                // The user has denied the permission but a rationale can be shown,
                // so gently explain why the app needs this permission
                "Oops! Looks like we need your camera to work our magic!" +
                    "Don't worry, we just want to see your cute face (and maybe a few cats).\n" +
                    "Grant us permission and let the party begin!"
            } else {
                // This is the first time the user lands on this feature, or the user
                // doesn't want to be asked again for this permission; explain that the
                // permission is required
                "Hello! We need your camera to work our magic! ✨\n" +
                    "Grant us permission and let the party begin! \uD83C\uDF89"
            }
            Text(textToShow, textAlign = TextAlign.Center)
            Spacer(Modifier.height(16.dp))
            Button(onClick = { cameraPermissionState.launchPermissionRequest() }) {
                Text("Unleash the camera!")
            }
        }
    }
}
@Composable
private fun CameraPreviewContent(modifier: Modifier = Modifier) {
    // TODO: Implement
}
This gives us a nice UI in which the user can grant the camera permission before we show the camera preview.
We recommend separating your business logic from your UI. To do so, create a view model for the screen. This view model sets up the CameraX Preview use case. Note that CameraX use cases are configurations of the different workflows (preview, capture, record, analyze, and so on) that the library can execute. The view model also binds the UI to the camera provider.
class CameraPreviewViewModel : ViewModel() {
    // Used to set up the link between the camera and the UI.
    private val _surfaceRequest = MutableStateFlow<SurfaceRequest?>(null)
    val surfaceRequest: StateFlow<SurfaceRequest?> = _surfaceRequest

    private val cameraPreviewUseCase = Preview.Builder().build().apply {
        setSurfaceProvider { newSurfaceRequest ->
            _surfaceRequest.update { newSurfaceRequest }
        }
    }

    suspend fun bindToCamera(appContext: Context, lifecycleOwner: LifecycleOwner) {
        val processCameraProvider = ProcessCameraProvider.awaitInstance(appContext)
        processCameraProvider.bindToLifecycle(
            lifecycleOwner, DEFAULT_FRONT_CAMERA, cameraPreviewUseCase
        )

        // Cancellation signals we're done with the camera
        try { awaitCancellation() } finally { processCameraProvider.unbindAll() }
    }
}
There's quite a bit going on here! The code defines a CameraPreviewViewModel class that is responsible for managing the camera preview. It uses the CameraX Preview builder to configure how the preview should be bound to the UI. The bindToCamera function initializes the camera, binds it to the provided LifecycleOwner so that the camera only runs while the lifecycle is at least started, and starts the preview stream.
The camera feed, which is internal to the camera library, needs to be rendered to a surface that the UI provides. So the library needs a way to request a surface, and that's exactly what a SurfaceRequest is for. Whenever the camera indicates that it needs a surface, a surfaceRequest is emitted. We then forward that request to the UI, where it can be fulfilled by passing a surface to the request object.
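The CameraXViewfinder composable fulfills these requests for you, but to make the contract concrete, here is a rough sketch of what fulfilling a SurfaceRequest looks like at the lower level. The surface and executor are placeholders: in a real app they would come from your UI (for example, a SurfaceView) and your threading setup.

```kotlin
import android.view.Surface
import java.util.concurrent.Executor
import androidx.camera.core.SurfaceRequest

// Sketch only; CameraXViewfinder does the equivalent of this internally.
fun fulfill(request: SurfaceRequest, surface: Surface, executor: Executor) {
    // The request carries the resolution the camera expects the surface to have.
    val expectedSize = request.resolution

    request.provideSurface(surface, executor) { result ->
        // Invoked once the camera no longer needs the surface,
        // so it's now safe to release it.
        surface.release()
    }
}
```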
Finally, we wait until the UI is done binding to the camera, and then free up the camera resources to avoid leaks.
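The wait-and-release step relies on a general coroutine idiom: suspend with awaitCancellation and do cleanup in a finally block, so the resource is released exactly when the caller cancels the coroutine. A minimal sketch of the pattern, with a hypothetical holdResource function standing in for bindToCamera:

```kotlin
import kotlinx.coroutines.*

// Holds a resource until the calling coroutine is cancelled,
// mirroring how bindToCamera releases the camera with unbindAll().
suspend fun holdResource(onRelease: () -> Unit): Nothing {
    try {
        // Suspends forever; resumes only with a CancellationException
        // when the caller (e.g. a LaunchedEffect) is cancelled.
        awaitCancellation()
    } finally {
        onRelease() // guaranteed cleanup on cancellation
    }
}

fun main() = runBlocking {
    val job = launch { holdResource { println("released") } }
    delay(100)
    job.cancelAndJoin() // triggers the finally block, printing "released"
}
```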
Now that the view model is complete, we can implement the CameraPreviewContent composable. It reads the surface request from the view model, binds to the camera while the composable is in the composition tree, and calls the CameraXViewfinder from the library.
@Composable
fun CameraPreviewContent(
    viewModel: CameraPreviewViewModel,
    modifier: Modifier = Modifier,
    lifecycleOwner: LifecycleOwner = LocalLifecycleOwner.current
) {
    val surfaceRequest by viewModel.surfaceRequest.collectAsStateWithLifecycle()
    val context = LocalContext.current

    LaunchedEffect(lifecycleOwner) {
        viewModel.bindToCamera(context.applicationContext, lifecycleOwner)
    }

    surfaceRequest?.let { request ->
        CameraXViewfinder(
            surfaceRequest = request,
            modifier = modifier
        )
    }
}
As explained in the previous section, the surfaceRequest allows the camera library to request a surface whenever it needs one for rendering. Here, we collect those surfaceRequest instances and forward them to the CameraXViewfinder, which is part of the camera-compose artifact.
The full screen viewfinder is now up and running! The complete code snippet can be found here. In the next post, we'll use the Compose gesture system, graphics, and coroutines to implement visual tap-to-focus. Stay tuned!