Spatial Computing Agents for AR/VR Builders
The agency-agents spatial computing lineup for AR and VR builders, including Vision Pro, Quest, and WebXR specialists.
Spatial computing is where mobile was in 2009. The platforms (Apple Vision Pro, Meta Quest, WebXR) are maturing. The developer tools are stabilizing. The design patterns are finally consolidating. But the talent pool is tiny — there are maybe a few thousand developers worldwide with deep spatial computing experience.
The spatial computing agents in msitarzewski/agency-agents help close that gap. This article walks through the roster and explains how they fit into modern AR/VR development.
Key Takeaways
- Spatial computing agents cover Vision Pro, Quest, WebXR, and cross-platform patterns
- Agents help with spatial UX, hand tracking, passthrough design, and performance
- Best suited for developers already familiar with at least one spatial platform
- Pair with game development and engineering agents for a full virtual studio
- All MIT licensed via msitarzewski/agency-agents
The spatial computing roster
Vision Pro Specialist
Deep knowledge of visionOS, RealityKit, RealityView, and SwiftUI for spatial interfaces. Knows the quirks of the platform: eye tracking as input, privacy-preserving hand gestures, and Shared Space vs. Full Space modes.
Particularly strong at designing interfaces that feel native to Vision Pro rather than ports of flat apps. If you're building for Vision Pro, this is the agent to install first.
Quest Developer
Meta Quest (Quest 2, 3, Pro, and 3S) specialist. Handles Unity and Unreal integration, Meta SDK specifics, and the constraints of standalone VR (mobile-class GPU, battery life, heat management).
Fluent in Quest-specific topics like Guardian boundary integration, hand tracking APIs, and the Meta Quest Store submission process.
WebXR Developer
For browser-based AR/VR, this agent knows Three.js, A-Frame, Babylon.js, and the WebXR Device API. It's particularly useful for cross-platform experiences that need to work on Quest, Vision Pro, and desktop browsers simultaneously.
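Cross-platform WebXR work usually means degrading gracefully across devices, since not every headset or browser supports every reference space type. As a minimal sketch, here is a pure fallback helper over the reference space types defined in the WebXR Device API spec; the `supported` set is a stand-in for what you would actually learn by calling `XRSession.requestReferenceSpace` and catching rejections.

```typescript
// Reference space types from the WebXR Device API, ordered from most
// to least capable for a standing-scale experience.
type RefSpace = "bounded-floor" | "local-floor" | "local" | "viewer";

const FALLBACK_ORDER: RefSpace[] = ["bounded-floor", "local-floor", "local", "viewer"];

// Pick the first reference space the device reports as supported.
function pickReferenceSpace(supported: Set<RefSpace>): RefSpace {
  for (const space of FALLBACK_ORDER) {
    if (supported.has(space)) return space;
  }
  // Per the spec, "viewer" is available in every session.
  return "viewer";
}
```

The same fallback-chain shape applies to optional session features like hand tracking: request the ideal setup, then step down until something succeeds.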
Spatial UX Designer
Where flat-UI designers struggle, this agent thrives. Spatial UX has its own rules: depth cues, personal space, reachability zones, and the surprising importance of audio cues for non-focal UI elements. The agent applies these principles to specific design problems.
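Reachability zones are the kind of rule this agent applies: what interaction a UI element should afford depends on how far it sits from the user. A minimal sketch, where the threshold values are illustrative assumptions rather than any platform's official guidance:

```typescript
type ReachZone = "direct" | "near" | "far";

// Illustrative comfort-zone thresholds in meters; actual recommended
// values vary by platform and by seated vs. standing use.
function classifyReach(distanceMeters: number): ReachZone {
  if (distanceMeters <= 0.6) return "direct"; // within arm's reach: direct touch works
  if (distanceMeters <= 2.0) return "near";   // ray or gaze selection is comfortable
  return "far";                               // use indirect input and larger targets
}
```

A spatial UX review would pair a classification like this with depth cues and audio feedback so off-focus elements remain discoverable.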
Mixed Reality Designer
For passthrough AR experiences that blend virtual content with the real world. Addresses lighting estimation, occlusion, spatial anchoring, and the specific UX patterns that make MR feel grounded rather than floating.
3D Interaction Specialist
Hand tracking, controller input, gaze selection, voice commands. This agent is the go-to for input modalities in spatial environments. Knows when each mode is appropriate and how to combine them.
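Combining modalities often reduces to a fallback chain keyed on device capabilities. This sketch shows one common priority order (a design assumption, not the only valid one; real apps usually let the user override it):

```typescript
interface InputCapabilities {
  handTracking: boolean;
  controllers: boolean;
  eyeTracking: boolean;
}

type Modality = "gaze-and-pinch" | "controllers" | "hands" | "gaze-dwell";

// One common fallback chain for picking a default input modality.
function pickPrimaryModality(caps: InputCapabilities): Modality {
  if (caps.eyeTracking && caps.handTracking) return "gaze-and-pinch"; // Vision Pro style
  if (caps.controllers) return "controllers"; // precise, with haptics
  if (caps.handTracking) return "hands";      // direct manipulation
  return "gaze-dwell";                        // head gaze with dwell select as last resort
}
```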
Immersive Performance Engineer
Spatial computing has brutal performance requirements: 90-120 fps rendered twice (once per eye) on mobile-class hardware. This agent knows the optimization tricks: foveated rendering, fixed vs dynamic LODs, occlusion culling, and shader budgets.
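The arithmetic behind that budget is unforgiving: at 90 fps you get about 11.1 ms per frame for everything, and stereo rendering roughly doubles draw work unless the engine uses single-pass/multiview rendering. A back-of-the-envelope sketch:

```typescript
// Total CPU+GPU time available per frame at a given refresh rate.
function frameBudgetMs(fps: number): number {
  return 1000 / fps;
}

// Naive per-eye budget: halved when each eye is rendered separately.
// Real pipelines overlap work, so this is a rough planning number.
function perEyeBudgetMs(fps: number, singlePass: boolean): number {
  const total = frameBudgetMs(fps);
  return singlePass ? total : total / 2;
}
```

At 90 fps without single-pass rendering that leaves under 6 ms per eye, which is why foveated rendering and aggressive culling stop being optional.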
Spatial Audio Designer
Audio is half of immersion in VR. This agent handles HRTF, spatial audio sources, reverb zones, and adaptive audio that responds to the player's environment.
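Distance attenuation is the simplest of these techniques. As a sketch, this is the same inverse-distance curve shape as the Web Audio API `PannerNode`'s "inverse" distance model, implemented as a pure function:

```typescript
// Inverse-distance gain: 1.0 at the reference distance, falling off
// with distance. Same formula shape as the Web Audio API's
// PannerNode "inverse" distance model.
function inverseDistanceGain(distance: number, refDistance = 1, rolloff = 1): number {
  const d = Math.max(distance, refDistance); // no boost inside the reference radius
  return refDistance / (refDistance + rolloff * (d - refDistance));
}
```

In an engine you would rarely hand-roll this (Unity, Unreal, and the Web Audio API all provide it), but knowing the curve helps when tuning rolloff values or debugging why a source sounds too quiet.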
How agents fit into a spatial project
For a solo developer building a small spatial app, the recommended starter lineup is:
- Platform specialist matching your primary target (Vision Pro, Quest, or WebXR)
- Spatial UX Designer for the interface design
- 3D Interaction Specialist for input handling
- Immersive Performance Engineer when you hit the framerate wall
Four agents, deep expertise, no salary. Combine them with the engine specialists from the game development roundup and you have a credible virtual studio.
A real project workflow
Here's how a spatial developer we spoke with uses the agents on a Vision Pro app:
Concept phase. Developer describes the app idea to the Spatial UX Designer. Gets feedback on whether the idea leans into spatial affordances or just ports flat UI.
Design phase. With the concept approved, the Spatial UX Designer and 3D Interaction Specialist collaborate on a detailed interaction spec. The developer reviews and commits.
Implementation phase. Vision Pro Specialist handles the code-level questions: "How do I make a RealityView respond to a pinch gesture while preserving scroll?"
Performance phase. Immersive Performance Engineer audits frame time after major milestones. Identifies expensive operations and proposes optimizations.
Polish phase. Spatial Audio Designer specs the audio layer. Spatial UX Designer does a final review for consistency.
Submission. Quest Developer or Vision Pro Specialist handles store submission checklists.
Total agent cost for a small app: probably under $20 in API credits across the entire project.
Where they fall short
Spatial computing moves fast. The agents may not know the latest SDK features from the last few months. Provide documentation links in context when working with cutting-edge APIs.
They also can't see your 3D scene. Unlike a human reviewer, they can't look at a headset view and say "that button is too far away." You have to describe the scene or provide screenshots (if using a multimodal model).
And finally, they don't know your specific hardware quirks. Every Quest revision has subtle differences. Every visionOS update changes something. Expect to do some validation yourself.
Frequently Asked Questions
Do these work for AR-only mobile apps (ARKit/ARCore)?
Yes. The Mixed Reality Designer and 3D Interaction Specialist are platform-agnostic. Tell them the platform explicitly and they'll adapt.
Can they write shader code for XR?
Simple shaders, yes; complex PBR or custom lighting models, not reliably. Pair with a general engineering agent for the heavy lifting.
Is there a Unity-for-XR specialist?
Use the Unity Specialist from the game dev roundup combined with the relevant platform specialist. The combination covers most needs.
What about haptic design?
Not a dedicated agent, but the Spatial Audio Designer has some haptic knowledge and will help if asked.
Should I use these if I've never built for VR before?
The agents assume some familiarity with the platform. If you're brand new, start with platform tutorials and then bring in the agents to deepen specific areas.
Build for the next frontier
Spatial computing is still small today, but it's the platform most likely to be obvious in hindsight. Developers who get early experience now will be in high demand later. Agency-agents can help you get that experience faster than going it alone.
Browse all 150 agents at aiskill.market/agents or submit your own skill.