
SceneView


3D is just Compose UI.

SceneView brings 3D and AR into Jetpack Compose (Android) and SwiftUI (iOS). Write a Scene { } the same way you write a Column { }. Nodes are composables. Lifecycle is automatic. State drives everything.


The idea

You already know how to build a screen:

Column {
    Text("Title")
    Image(painter = painterResource(R.drawable.cover), contentDescription = null)
    Button(onClick = { /* ... */ }) { Text("Open") }
}

This is a 3D scene — a photorealistic helmet, HDR lighting, orbit-camera gestures:

Scene(modifier = Modifier.fillMaxSize()) {
    rememberModelInstance(modelLoader, "models/helmet.glb")?.let { instance ->
        ModelNode(modelInstance = instance, scaleToUnits = 1.0f, autoAnimate = true)
    }
    LightNode(type = LightManager.Type.SUN) {
        intensity(100_000f)
        castShadows(true)
    }
}

Same pattern. Same Kotlin. Same mental model — now with depth.

No engine lifecycle callbacks. No addChildNode / removeChildNode. No onResume/onPause overrides. No manual cleanup. The Compose runtime handles all of it.


AR in 15 lines

var anchor by remember { mutableStateOf<Anchor?>(null) }

ARScene(
    modifier = Modifier.fillMaxSize(),
    planeRenderer = true,
    onSessionUpdated = { _, frame ->
        if (anchor == null) {
            anchor = frame.getUpdatedPlanes()
                .firstOrNull { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
                ?.let { frame.createAnchorOrNull(it.centerPose) }
        }
    }
) {
    anchor?.let { a ->
        AnchorNode(anchor = a) {
            ModelNode(modelInstance = helmet, scaleToUnits = 0.5f)
        }
    }
}

When the plane is detected, anchor becomes non-null. Compose recomposes. AnchorNode enters the composition. The model appears — anchored to the physical world. When anchor is cleared, the node is removed and destroyed automatically. Pure Compose semantics, in AR.


What's new in 3.0

SceneView 3.0 is a ground-up rewrite around a single idea: 3D is just more Compose UI.

| What changed | What it means for you |
|---|---|
| Scene { } / ARScene { } content block | Declare nodes as composables — no list, no add() |
| SceneScope / ARSceneScope DSL | Every node type (ModelNode, AnchorNode, LightNode, ...) is @Composable |
| NodeScope trailing lambda | Nest child nodes exactly like Column { } nests children |
| rememberModelInstance | Async loading — returns null while loading, recomposes when ready |
| SceneNodeManager | Internal bridge — Compose snapshot state drives the Filament scene graph |
| ViewNode | Embed any Compose UI as a 3D billboard inside the scene |
| SurfaceType enum | Choose SurfaceView (best performance) or TextureView (transparency) |
| All resources are remember | Engine, loaders, environment, camera — Compose owns the lifecycle |

See MIGRATION.md for a step-by-step upgrade guide from 2.x.
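To give a flavour of the change, here is a minimal before/after sketch. The 2.x side assumes the earlier list-based API (rememberNodes plus childNodes); see MIGRATION.md for the authoritative steps.

```kotlin
// SceneView 2.x (assumed list-based API): nodes are built and added imperatively
Scene(
    modifier = Modifier.fillMaxSize(),
    engine = engine,
    childNodes = rememberNodes {
        add(ModelNode(modelLoader.createModelInstance("models/helmet.glb")))
    }
)

// SceneView 3.0: nodes are composables declared in the content block
Scene(modifier = Modifier.fillMaxSize(), engine = engine, modelLoader = modelLoader) {
    rememberModelInstance(modelLoader, "models/helmet.glb")?.let {
        ModelNode(modelInstance = it, scaleToUnits = 1.0f)
    }
}
```

The 3.0 version also folds async loading into the same declaration: the node simply does not exist until the instance is ready.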



3D with Compose

Installation

dependencies {
    implementation("io.github.sceneview:sceneview:3.2.0")
}

Quick start

Scene is a @Composable that renders a Filament 3D viewport. Think of it as a Box that adds a third dimension — everything inside its trailing block is declared with the SceneScope DSL.

@Composable
fun ModelViewerScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)
    val environmentLoader = rememberEnvironmentLoader(engine)

    // Loaded asynchronously — null until ready, then recomposition places it in the scene
    val modelInstance = rememberModelInstance(modelLoader, "models/damaged_helmet.glb")
    val environment = rememberEnvironment(environmentLoader) {
        environmentLoader.createHDREnvironment("environments/sky_2k.hdr")
            ?: createEnvironment(environmentLoader)
    }

    Scene(
        modifier = Modifier.fillMaxSize(),
        engine = engine,
        modelLoader = modelLoader,
        environment = environment,
        cameraManipulator = rememberCameraManipulator(),
        mainLightNode = rememberMainLightNode(engine) { intensity = 100_000.0f },
        onGestureListener = rememberOnGestureListener(
            onDoubleTap = { _, node -> node?.apply { scale *= 2.0f } }
        )
    ) {
        // ── Everything below is 3D Compose ─────────────────────────────────

        modelInstance?.let { instance ->
            ModelNode(modelInstance = instance, scaleToUnits = 1.0f, autoAnimate = true)
        }

        // Nodes nest exactly like Compose UI
        Node(position = Position(y = 1.5f)) {
            CubeNode(size = Size(0.2f), materialInstance = redMaterial)
            SphereNode(radius = 0.1f)
        }
    }
}

That's it. No engine lifecycle callbacks, no onResume/onPause overrides, no manual scene graph bookkeeping. The Compose runtime handles all of it.

SceneScope DSL reference

All composables available inside Scene { }:

| Composable | Description |
|---|---|
| ModelNode(modelInstance, scaleToUnits?) | Renders a glTF/GLB model. Set isEditable = true to enable pinch-to-scale and drag-to-rotate. |
| LightNode(type) | Directional, point, spot, or sun light |
| CameraNode() | Named camera (e.g. imported from a glTF) |
| CubeNode(size, materialInstance?) | Box geometry |
| SphereNode(radius, materialInstance?) | Sphere geometry |
| CylinderNode(radius, height, materialInstance?) | Cylinder geometry |
| PlaneNode(size, normal, materialInstance?) | Flat quad geometry |
| ImageNode(bitmap / fileLocation / resId) | Image rendered on a plane |
| ViewNode(windowManager) { ComposeUI } | Compose UI rendered as a 3D surface |
| MeshNode(primitiveType, vertexBuffer, indexBuffer) | Custom GPU mesh |
| Node() | Pivot / group node |

Gesture sensitivity — Node exposes scaleGestureSensitivity: Float (default 0.5). Lower values make pinch-to-scale feel more progressive. Tune it per node in the apply block:

ModelNode(modelInstance = instance, isEditable = true, apply = {
    scaleGestureSensitivity = 0.3f   // 1.0 = raw, lower = more damped
    editableScaleRange = 0.2f..1.0f
})

Every node accepts an optional content trailing lambda — a NodeScope where child composables are automatically parented to the enclosing node:

Scene {
    Node(position = Position(y = 0.5f)) {    // NodeScope
        ModelNode(modelInstance = helmet)     // child of Node
        CubeNode(size = Size(0.05f))          // sibling, still a child of Node
    }
}

Async model loading — rememberModelInstance returns null while the file loads on Dispatchers.IO, then triggers recomposition. The node appears automatically when ready:

Scene {
    rememberModelInstance(modelLoader, "models/helmet.glb")?.let { instance ->
        ModelNode(modelInstance = instance, scaleToUnits = 0.5f)
    }
}

Compose UI inside 3D space — ViewNode renders any composable onto a plane in the scene:

val windowManager = rememberViewNodeManager()

Scene {
    ViewNode(windowManager = windowManager) {
        Card {
            Text("Hello from 3D!")
            Button(onClick = { /* ... */ }) { Text("Click me") }
        }
    }
}

Reactive state — pass any State directly into node parameters. The scene updates on every state change with no manual synchronisation:

var rotationY by remember { mutableFloatStateOf(0f) }
LaunchedEffect(Unit) { while (true) { withFrameNanos { rotationY += 0.5f } } }

Scene {
    ModelNode(
        modelInstance = helmet,
        rotation = Rotation(y = rotationY)   // recomposes on every frame change
    )
}

Tap interaction — the gesture listener reports which node was hit, and isEditable = true additionally enables pinch-to-scale, drag-to-move, and two-finger-rotate on that node with zero extra code:

Scene(
    onGestureListener = rememberOnGestureListener(
        onSingleTapConfirmed = { event, node -> println("Tapped: ${node?.name}") }
    )
) {
    ModelNode(modelInstance = helmet, isEditable = true)
}

Surface type — choose the backing Android surface:

// SurfaceView — renders behind Compose layers, best GPU performance (default)
Scene(surfaceType = SurfaceType.Surface)

// TextureView — renders inline with Compose, supports transparency / alpha blending
Scene(surfaceType = SurfaceType.TextureSurface, isOpaque = false)

Samples

| Sample | What it shows |
|---|---|
| Model Viewer | Animated camera orbit around a glTF model, HDR environment, double-tap to scale |
| glTF Camera | Use a camera node imported directly from a glTF file |
| Camera Manipulator | Orbit / pan / zoom camera interaction |
| Autopilot Demo | Full animated scene built entirely with geometry nodes — no model files needed |

AR with Compose

Installation

dependencies {
    // Includes sceneview — no need to add both
    implementation("io.github.sceneview:arsceneview:3.2.0")
}

Add to AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera.ar" android:required="true" />

<application>
    <meta-data android:name="com.google.ar.core" android:value="required" />
</application>
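Before creating an AR session it is worth confirming ARCore support at runtime. A minimal sketch using the standard ArCoreApk API (this is plain ARCore, not SceneView; the helper name ensureArCore is illustrative, and the exception handling is elided):

```kotlin
import android.app.Activity
import com.google.ar.core.ArCoreApk

// Returns true when ARCore is installed and up to date; may launch the
// Play Store install flow, in which case the caller should retry in onResume.
// requestInstall can also throw UnavailableException subclasses on
// unsupported devices — handle those in production code.
fun ensureArCore(activity: Activity): Boolean =
    when (ArCoreApk.getInstance().requestInstall(activity, /* userRequestedInstall = */ true)) {
        ArCoreApk.InstallStatus.INSTALLED -> true
        ArCoreApk.InstallStatus.INSTALL_REQUESTED -> false
    }
```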

Quick start

ARScene is Scene with ARCore wired in. The camera is driven by ARCore tracking. Everything else — anchors, models, lights, UI — is declared in the ARSceneScope content block. Normal Compose state decides what is in the scene.

var anchor by remember { mutableStateOf<Anchor?>(null) }

val engine = rememberEngine()
val modelLoader = rememberModelLoader(engine)
val modelInstance = rememberModelInstance(modelLoader, "models/helmet.glb")

ARScene(
    modifier = Modifier.fillMaxSize(),
    engine = engine,
    modelLoader = modelLoader,
    cameraNode = rememberARCameraNode(engine),
    planeRenderer = true,
    sessionConfiguration = { session, config ->
        config.depthMode =
            if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC))
                Config.DepthMode.AUTOMATIC
            else Config.DepthMode.DISABLED
        config.instantPlacementMode = Config.InstantPlacementMode.LOCAL_Y_UP
        config.lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
    },
    onSessionUpdated = { _, frame ->
        if (anchor == null) {
            anchor = frame.getUpdatedPlanes()
                .firstOrNull { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
                ?.let { frame.createAnchorOrNull(it.centerPose) }
        }
    }
) {
    // ── AR Compose content ───────────────────────────────────────────────────

    anchor?.let {
        AnchorNode(anchor = it) {
            // All SceneScope nodes are available inside AR nodes too
            modelInstance?.let { instance ->
                ModelNode(modelInstance = instance, scaleToUnits = 0.5f)
            }
        }
    }
}

The anchor is ordinary Compose state. When it becomes non-null, Compose recomposes and AnchorNode enters the composition. When it is cleared, the node is removed and destroyed automatically. AR state is just Kotlin state.
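Because placement is just state, tearing it down is just clearing that state. A small sketch (the Button is illustrative; detach() is the standard ARCore Anchor call for releasing tracking resources):

```kotlin
// Assumes the `anchor` MutableState from the quick-start snippet above
Button(onClick = {
    anchor?.detach()  // release the ARCore anchor's tracking resources
    anchor = null     // AnchorNode leaves the composition and is destroyed
}) { Text("Clear placement") }
```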

ARSceneScope DSL reference

ARScene { } provides everything from SceneScope plus:

| Composable | Description |
|---|---|
| AnchorNode(anchor) | Follows a real-world ARCore anchor |
| PoseNode(pose) | Follows a world-space pose (non-persistent) |
| HitResultNode(xPx, yPx) | Auto hit-tests at a screen coordinate each frame |
| HitResultNode { frame -> hitResult } | Custom hit-test lambda |
| AugmentedImageNode(augmentedImage) | Tracks a detected real-world image |
| AugmentedFaceNode(augmentedFace) | Renders a mesh aligned to a detected face |
| CloudAnchorNode(anchor) | Persistent cross-device anchor via Google Cloud |
| TrackableNode(trackable) | Follows any ARCore trackable |
| StreetscapeGeometryNode(streetscapeGeometry) | Renders a Geospatial streetscape mesh |

Augmented Images

ARScene(
    sessionConfiguration = { session, config ->
        config.augmentedImageDatabase = AugmentedImageDatabase(session).also { db ->
            db.addImage("cover", coverBitmap)
        }
    },
    onSessionUpdated = { _, frame ->
        frame.getUpdatedTrackables(AugmentedImage::class.java)
            .filter { it.trackingState == TrackingState.TRACKING }
            .forEach { detectedImages += it }
    }
) {
    detectedImages.forEach { image ->
        AugmentedImageNode(augmentedImage = image) {
            rememberModelInstance(modelLoader, "drone.glb")?.let {
                ModelNode(modelInstance = it)
            }
        }
    }
}

Augmented Faces

ARScene(
    sessionFeatures = setOf(Session.Feature.FRONT_CAMERA),
    sessionConfiguration = { _, config ->
        config.augmentedFaceMode = Config.AugmentedFaceMode.MESH3D
    },
    onSessionUpdated = { session, _ ->
        detectedFaces = session.getAllTrackables(AugmentedFace::class.java)
            .filter { it.trackingState == TrackingState.TRACKING }
    }
) {
    detectedFaces.forEach { face ->
        AugmentedFaceNode(augmentedFace = face, meshMaterialInstance = faceMaterial)
    }
}

Geospatial Streetscape

ARScene(
    sessionConfiguration = { _, config ->
        config.geospatialMode = Config.GeospatialMode.ENABLED
        config.streetscapeGeometryMode = Config.StreetscapeGeometryMode.ENABLED
    },
    onSessionUpdated = { _, frame ->
        geometries = frame.getUpdatedTrackables(StreetscapeGeometry::class.java).toList()
    }
) {
    geometries.forEach { geo ->
        StreetscapeGeometryNode(streetscapeGeometry = geo, meshMaterialInstance = buildingMat)
    }
}

Samples

| Sample | What it shows |
|---|---|
| AR Model Viewer | Tap-to-place on detected planes, model picker, animated reticle, pinch-to-scale, drag-to-rotate |
| AR Augmented Image | Overlay content on detected real-world images |
| AR Cloud Anchors | Host and resolve persistent cross-device anchors |
| AR Point Cloud | Visualise ARCore feature points |
| Autopilot Demo | Autonomous AR scene driven entirely by Compose state |

iOS (SwiftUI + RealityKit)

SceneView is also available for iOS via the SceneViewSwift package, built on SwiftUI and RealityKit. Same concepts — declarative scene building, model loading, gesture controls — using native Apple frameworks.

// Package.swift
dependencies: [
    .package(url: "https://github.com/SceneView/SceneViewSwift.git", from: "0.1.0")
]

SceneView { root in
    let model = try? await ModelNode.load("helmet.usdz")
    model?.scaleToUnits(1.0)
    if let model { root.addChild(model.entity) }
}
.environment(.studio)
.cameraControls(.orbit)

See the SceneViewSwift/ directory for the full library, demo app, and documentation.

Kotlin Multiplatform (sceneview-core)

The core math, collision, geometry, animation, and physics modules are shared across Android and iOS via Kotlin Multiplatform in sceneview-core/. This includes Vector3, Quaternion, Ray, Box, Sphere, Earcut, Delaunator, spring animations, and more.
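To illustrate the kind of math these shared modules cover, here is a hand-rolled quaternion rotation of a vector. This is a from-scratch sketch for illustration only — Vec3, Quat, and their members are hypothetical names, not the actual sceneview-core API:

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Illustrative stand-ins for shared math types (NOT sceneview-core's real API)
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    operator fun times(s: Float) = Vec3(x * s, y * s, z * s)
    infix fun cross(o: Vec3) = Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x)
}

data class Quat(val w: Float, val x: Float, val y: Float, val z: Float) {
    companion object {
        // Unit quaternion from a unit axis and an angle in degrees
        fun fromAxisAngle(axis: Vec3, degrees: Float): Quat {
            val half = degrees * (Math.PI / 180.0).toFloat() / 2f
            val s = sin(half)
            return Quat(cos(half), axis.x * s, axis.y * s, axis.z * s)
        }
    }

    // Rotate v by this quaternion: v' = v + 2w(u x v) + 2(u x (u x v))
    fun rotate(v: Vec3): Vec3 {
        val u = Vec3(x, y, z)
        val t = (u cross v) * 2f
        return v + t * w + (u cross t)
    }
}

fun main() {
    // Rotating +X by 90 degrees about +Y yields approximately -Z
    val q = Quat.fromAxisAngle(Vec3(0f, 1f, 0f), 90f)
    println(q.rotate(Vec3(1f, 0f, 0f)))  // ~ Vec3(0, 0, -1)
}
```

Sharing this layer means gesture math, collision tests, and spring animations behave identically on both platforms.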

Platform parity

| Feature | Android | iOS |
|---|---|---|
| 3D scene composable | Scene { } | SceneView { } |
| AR scene | ARScene { } | ARSceneView(...) |
| Model loading | glTF/GLB | USDZ |
| Procedural geometry | CubeNode, SphereNode, CylinderNode, PlaneNode | GeometryNode (cube, sphere, cylinder, cone, plane) |
| Text | TextNode | TextNode |
| Billboards | BillboardNode | BillboardNode |
| Lines | LineNode | LineNode |
| Lighting | LightNode | LightNode |
| Orbit camera | rememberCameraManipulator() | .cameraControls(.orbit) |
| Environment/HDR | rememberEnvironment() | .environment(.studio) |
| Gesture editing | isEditable = true | Drag/pinch/tap built-in |
| Physics | PhysicsNode | -- |
| Dynamic sky | DynamicSkyNode | -- |
| Augmented images | AugmentedImageNode | -- |
| Face tracking | AugmentedFaceNode | -- |
| Cloud anchors | CloudAnchorNode | -- |
| Renderer | Google Filament | Apple RealityKit |
| AR framework | Google ARCore | Apple ARKit |

Resources

Documentation

Community

Related Projects

Support the project

SceneView is open-source and community-funded.

About

The #1 Android 3D & AR SDK — Jetpack Compose composables powered by Google Filament and ARCore. Drop-in Scene{} and ARScene{} for model viewing, AR placement, and immersive experiences. Successor to Google Sceneform.
