Over the past few months, Apple experts have fielded questions about visionOS in Apple Vision Pro developer labs all over the world. Here are answers to some of the most frequent questions they’ve been asked, including insights on new concepts like entities, immersive spaces, collision shapes, and much more.

How can I interact with an entity using gestures?
There are three important pieces to enabling gesture-based entity interaction:
1. The entity must have an InputTargetComponent. Otherwise, it won’t receive gesture input at all.
2. The entity must have a CollisionComponent. The shapes of the collision component define the regions that gestures can actually hit, so make sure the collision shapes are specified appropriately for interaction with your entity.
3. The gesture that you’re using must be targeted to the entity you’re trying to interact with (or to any entity). For example:
private var tapGesture: some Gesture {
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { gestureValue in
            let tappedEntity = gestureValue.entity
            print(tappedEntity.name)
        }
}

It’s also a good idea to give an interactive entity a HoverEffectComponent, which enables the system to trigger a standard highlight effect when the user looks at the entity.
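
Putting those pieces together, here’s a minimal sketch of that setup, assuming a simple box-shaped entity (the mesh, collision shape, and sizes are illustrative):
import RealityKit

// An illustrative entity; use your own model in practice.
let modelEntity = ModelEntity(mesh: .generateBox(size: 0.2))
// Required so the entity can receive gesture input at all.
modelEntity.components.set(InputTargetComponent())
// Required so gestures have a region to hit; match the shape to your entity.
modelEntity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
// Recommended: a standard highlight effect when the user looks at the entity.
modelEntity.components.set(HoverEffectComponent())

You can then attach the gesture to the view that hosts the entity, for example RealityView { … }.gesture(tapGesture).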

Should I use a window group, an immersive space, or both?
Consider the technical differences between windows, volumes, and immersive spaces when you decide which scene type to use for a particular feature in your app. Here are some significant technical differences that you should factor into your decision:
Windows and volumes from other apps the user has open are hidden when an immersive space is open.
Windows and volumes clip content that exceeds their bounds.
Users have full control over the placement of windows and volumes. Apps have full control over the placement of content in an immersive space.
Volumes have a fixed size; windows are resizable.
ARKit only delivers data to your app if it has an open immersive space.
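
For orientation, here’s a minimal sketch of an app that declares all three scene types; the identifiers, sizes, and view names are illustrative:
import SwiftUI

@main
struct ExampleApp: App {
    var body: some Scene {
        // A standard, resizable window.
        WindowGroup(id: "main-window") {
            ContentView()
        }

        // A volume: a window with 3D content and a fixed size.
        WindowGroup(id: "volume") {
            VolumeContentView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)

        // An immersive space; while it’s open, windows and volumes from other apps are hidden.
        ImmersiveSpace(id: "space") {
            ImmersiveView()
        }
    }
}

At runtime, you open the immersive space with the openImmersiveSpace environment action.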

Explore the Hello World sample code to familiarize yourself with the behaviors of each scene type in visionOS.

How can I visualize collision shapes in my scene?
Use the Collision Shapes debug visualization in the Debug Visualizations menu, where you can find several other helpful debug visualizations as well. For information on debug visualizations, check out Diagnosing issues in the appearance of a running app.

Can I position SwiftUI views within an immersive space?
Yes! You can position SwiftUI views in an immersive space with the offset(x:y:) and offset(z:) methods. It’s important to remember that these offsets are specified in points, not meters. You can utilize PhysicalMetric to convert meters to points.
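
For example, a rough sketch that offsets a label by physical distances, using PhysicalMetric to convert meters to points (the view content and distances are illustrative):
import SwiftUI

struct PositionedLabel: View {
    // One meter, converted to points for this view’s context.
    @PhysicalMetric(from: .meters) private var oneMeter = 1.0

    var body: some View {
        Text("Hello, visionOS!")
            .offset(x: 0, y: -oneMeter / 2) // Offset up by half a meter.
            .offset(z: oneMeter)            // Offset the view by one meter in depth.
    }
}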

What if I want to position my SwiftUI views relative to an entity in a reality view?
Use the RealityView attachments API to create a SwiftUI view and make it accessible as a ViewAttachmentEntity. This entity can be positioned, oriented, and scaled just like any other entity.
RealityView { content, attachments in
    // Fetch the attachment entity using the unique identifier.
    let attachmentEntity = attachments.entity(for: "uniqueID")!
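    // The attachment can be positioned, oriented, and scaled like any other entity
    // (this position is illustrative).
    attachmentEntity.position = [0, 0.2, 0]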
    // Add the attachment entity as RealityView content.
    content.add(attachmentEntity)
} attachments: {
    // Declare a view that attaches to an entity.
    Attachment(id: "uniqueID") {
        Text("My Attachment")
    }
}

Can I position windows programmatically?
There’s no API available to position windows, but we’d love to know about your use case. Please file an enhancement request. For more information on this topic, check out Positioning and sizing windows.

Is there any way to know what the user is looking at?
As noted in Adopting best practices for privacy and user preferences, the system handles camera and sensor inputs without passing the information to apps directly. There's no way to get precise eye movements or exact line of sight. Instead, create interface elements that people can interact with and let the system manage the interaction. If you have a use case that you can't get to work this way, and as long as it doesn't require explicit eye tracking, please file an enhancement request.

When are the onHover and onContinuousHover actions called on visionOS?
The onHover and onContinuousHover actions are called when a finger is hovering over the view, or when the pointer from a connected trackpad is hovering over the view.
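
For example, a rough sketch using the continuous variant (the shape, colors, and state handling are illustrative):
import SwiftUI

struct HoverColorView: View {
    @State private var isHovering = false

    var body: some View {
        Circle()
            .fill(isHovering ? Color.green : Color.blue)
            .frame(width: 100, height: 100)
            .onContinuousHover { phase in
                // Receives the hover phase while a finger or trackpad pointer is over the view.
                switch phase {
                case .active:
                    isHovering = true
                case .ended:
                    isHovering = false
                }
            }
    }
}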

Can I show my own immersive environment textures in my app?
If your app has an ImmersiveSpace open, you can create a large sphere with an UnlitMaterial and scale it to have inward-facing geometry:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            do {
                // Create the sphere mesh.
                let mesh = MeshResource.generateSphere(radius: 10)
                // Create an UnlitMaterial.
                var material = UnlitMaterial(applyPostProcessToneMap: false)
                // Give the UnlitMaterial your equirectangular color texture.
                let textureResource = try await TextureResource(named: "example")
                material.color = .init(tint: .white, texture: .init(textureResource))
                // Create the model.
                let entity = ModelEntity(mesh: mesh, materials: [material])
                // Scale the model so that its mesh faces inward.
                entity.scale.x *= -1
                content.add(entity)
            } catch {
                // Handle the error.
            }
        }
    }
}

I have existing stereo videos. How can I convert them to MV-HEVC?
AVFoundation provides APIs to write videos in MV-HEVC format.
To convert your videos to MV-HEVC:
1. Create an AVAsset for each of the left and right views.
2. Use AVOutputSettingsAssistant to get output settings that work for MV-HEVC.
3. Specify the horizontal disparity adjustment and field of view (this is asset specific). Here’s an example:
var compressionProperties = outputSettings[AVVideoCompressionPropertiesKey] as! [String: Any]
// Specifies the parallax plane.
compressionProperties[kVTCompressionPropertyKey_HorizontalDisparityAdjustment as String] = horizontalDisparityAdjustment
// Specifies the horizontal FOV (90 degrees is chosen in this case).
compressionProperties[kCMFormatDescriptionExtension_HorizontalFieldOfView as String] = horizontalFOV
4. Create an AVAssetWriterInputTaggedPixelBufferGroupAdaptor as the input for your AVAssetWriter.
5. Create an AVAssetReader for each of the left and right video tracks.
6. Read the left and right tracks, then append matching samples to the tagged pixel buffer group adaptor:
// Create a tagged buffer for each stereoView.
let taggedBuffers: [CMTaggedBuffer] = [
    .init(tags: [.videoLayerID(0), .stereoView(.leftEye)], pixelBuffer: leftSample.imageBuffer!),
    .init(tags: [.videoLayerID(1), .stereoView(.rightEye)], pixelBuffer: rightSample.imageBuffer!)
]
// Append the tagged buffers to the asset writer input adaptor.
let didAppend = adaptor.appendTaggedBuffers(taggedBuffers,
                                            withPresentationTime: leftSample.presentationTimeStamp)
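
To make steps 2–4 concrete, here’s a partial sketch of the writer setup, assuming outputURL, horizontalDisparityAdjustment, and horizontalFOV are defined elsewhere (a complete converter may also need to set additional MV-HEVC compression properties, such as the video layer IDs):
import AVFoundation
import VideoToolbox

func makeWriterAndAdaptor(outputURL: URL,
                          horizontalDisparityAdjustment: Int,
                          horizontalFOV: Int) throws -> (AVAssetWriter, AVAssetWriterInputTaggedPixelBufferGroupAdaptor) {
    // Start from output settings suited to MV-HEVC (the preset choice here is illustrative).
    guard let assistant = AVOutputSettingsAssistant(preset: .mvhevc1440x1440),
          var outputSettings = assistant.videoSettings,
          var compressionProperties = outputSettings[AVVideoCompressionPropertiesKey] as? [String: Any] else {
        fatalError("MV-HEVC output settings are unavailable.") // Handle this more gracefully in a real app.
    }

    // Asset-specific stereo parameters (see step 3 for the keys).
    compressionProperties[kVTCompressionPropertyKey_HorizontalDisparityAdjustment as String] = horizontalDisparityAdjustment
    compressionProperties[kCMFormatDescriptionExtension_HorizontalFieldOfView as String] = horizontalFOV
    outputSettings[AVVideoCompressionPropertiesKey] = compressionProperties

    // Wrap the writer input in a tagged pixel buffer group adaptor (step 4).
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)
    let adaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(assetWriterInput: writerInput,
                                                                  sourcePixelBufferAttributes: nil)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    writer.add(writerInput)
    return (writer, adaptor)
}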

How can I light my scene in RealityKit on visionOS?
You can light your scene in RealityKit on visionOS by:
Using a system-provided automatic lighting environment that updates based on real-world surroundings.
Providing your own image-based lighting via an ImageBasedLightComponent, as sketched below. To see an example, create a new visionOS app, select RealityKit as the Immersive Space Renderer, and select Full as the Immersive Space.
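
Here’s a minimal sketch of the second option, assuming an image resource named "Sunlight" bundled with your app (the resource name, intensity, and function name are illustrative):
import RealityKit

func applyImageBasedLight(to entity: Entity) async throws {
    // Load an environment texture bundled with the app.
    let environment = try await EnvironmentResource(named: "Sunlight")
    // Use the image as a light source for the entity...
    entity.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1))
    // ...and let the entity’s models receive that light.
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}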

I see that CustomMaterial isn’t supported on visionOS. Is there a way I can create materials with custom shading?
You can create materials with custom shading in Reality Composer Pro using the Shader Graph. A material created this way is accessible to your app as a ShaderGraphMaterial, so that you can dynamically change inputs to the shader in your code.
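
For instance, a sketch along these lines loads a Shader Graph material and changes one of its inputs; the material path, scene file, parameter name, and realityKitContentBundle are assumptions specific to your project:
import RealityKit
import RealityKitContent // Assumed Reality Composer Pro package in your project.

func loadCustomMaterial() async throws -> ShaderGraphMaterial {
    // The USD path and file name depend on your Reality Composer Pro scene.
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    // Dynamically change a shader input declared in the Shader Graph ("Intensity" is assumed).
    try material.setParameter(name: "Intensity", value: .float(0.8))
    return material
}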
For a detailed introduction to the Shader Graph, watch Explore materials in Reality Composer Pro.

How can I position entities relative to the position of the device?
In an ImmersiveSpace, you can get the full transform of the device using the queryDeviceAnchor(atTimestamp:) method on an ARKit WorldTrackingProvider.
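
Here’s a minimal sketch, assuming your app runs an ARKitSession with a WorldTrackingProvider while its immersive space is open (the function names are illustrative):
import ARKit
import QuartzCore
import simd

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Run the provider once the immersive space is open.
func startDeviceTracking() async throws {
    try await session.run([worldTracking])
}

// Query the device pose for the current time and return its transform, if available.
func currentDeviceTransform() -> simd_float4x4? {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    return deviceAnchor.originFromAnchorTransform
}

You can then apply that transform, or an offset from it, to an entity to keep content positioned relative to the device.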

Learn more about building apps for visionOS

Q&A: Spatial design for visionOS
Get expert advice from the Apple design team on creating experiences for Apple Vision Pro.
Spotlight on: Developing for visionOS
Learn how the developers behind djay, Blackbox, JigSpace, and XRHealth started designing and building apps for Apple Vision Pro.
Spotlight on: Developer tools for visionOS
Learn how developers are using Xcode, Reality Composer Pro, and the visionOS simulator to start building apps for Apple Vision Pro.
Sample code contained herein is provided under the Apple Sample Code License.