Roughly three months ago (April 2025), Meta officially opened access to the Passthrough Camera API for Quest 3 and Quest 3S headsets [1]. This move not only lets developers publish camera-powered mixed reality (MR) apps to the Horizon Store; it also enables entirely new levels of environmental interaction.
What This Unlocks
Until now, third‑party apps only had access to high‑level data—like skeletal tracking and basic meshes—that Meta extracted from the passthrough feed. Now, developers can directly access the RGB camera stream (up to 1280×960 @ 30 FPS, ~40‑60 ms latency) via Android's Camera2 API within Horizon OS, complete with headset-pose metadata [2].
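For a sense of what the native route looks like, here is a minimal Kotlin sketch that enumerates the cameras Horizon OS exposes through Camera2 and logs their supported stream sizes. It uses only standard Camera2 calls; any Quest-specific metadata keys (for instance, which physical camera a given ID maps to) live in Meta's Passthrough Camera API reference and are not assumed here.

```kotlin
import android.content.Context
import android.graphics.ImageFormat
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.util.Log

// Lists the cameras Horizon OS exposes via Camera2 and their YUV stream sizes.
// On Quest 3 / 3S the passthrough RGB stream tops out at 1280x960 @ 30 FPS.
fun listPassthroughCameras(context: Context) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    for (id in manager.cameraIdList) {
        val characteristics = manager.getCameraCharacteristics(id)
        val configMap = characteristics.get(
            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
        val sizes = configMap?.getOutputSizes(ImageFormat.YUV_420_888)
        Log.d("Passthrough", "camera $id supports: ${sizes?.joinToString()}")
    }
}
```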
That means apps can:
- Run custom computer-vision pipelines, like QR code or board detection (see the sketch after this list).
- Apply photogrammetry or 3D scanning in-headset without needing an iPhone or LiDAR equipment.
- Overlay contextually intelligent content, thanks to real-time object recognition models.
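As a concrete example of the first bullet, the sketch below runs ML Kit's barcode scanner over frames arriving from a Camera2 ImageReader. ML Kit is just one choice here (ZXing or OpenCV would slot in the same way), and the wiring of the ImageReader to a passthrough capture session is assumed rather than shown:

```kotlin
import android.media.ImageReader
import android.util.Log
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

// One detector instance, reused across frames.
private val scanner = BarcodeScanning.getClient()

// Called from ImageReader.OnImageAvailableListener; assumes the reader is
// attached to a Camera2 session streaming YUV_420_888 passthrough frames.
fun onFrameAvailable(reader: ImageReader) {
    val image = reader.acquireLatestImage() ?: return
    val input = InputImage.fromMediaImage(image, /* rotationDegrees = */ 0)
    scanner.process(input)
        .addOnSuccessListener { barcodes ->
            barcodes.forEach { Log.d("QR", "decoded: ${it.rawValue}") }
        }
        .addOnCompleteListener { image.close() } // release the frame either way
}
```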
Use Case 1: Boundary-Pinching → Virtual 3D Cards
One clever app lets the user scan an image—like an atomic diagram—by pinching the thumb and index finger to define boundaries. Once scanned, the physical image becomes a virtual card in the headset that, when placed on a surface, spawns a 3D version of the content [3]. All in VR, no phone needed. This is exactly the "bringing physical into virtual" approach the API was designed to enable.
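The geometry behind this interaction is simple to sketch. Assuming the two pinch points are available in the camera's coordinate frame (hand tracking reports them in world space; the frame's pose metadata lets you transform between the two), a pinhole projection turns them into a pixel-space crop rectangle to scan. The intrinsics here are placeholders standing in for whatever the camera metadata reports:

```kotlin
import android.graphics.Rect

// Placeholder pinhole intrinsics; real values come from camera metadata.
data class Intrinsics(val fx: Float, val fy: Float, val cx: Float, val cy: Float)
data class Point3(val x: Float, val y: Float, val z: Float)

// Standard pinhole projection of a camera-space point to pixel coordinates.
fun project(p: Point3, k: Intrinsics): Pair<Float, Float> =
    Pair(k.fx * p.x / p.z + k.cx, k.fy * p.y / p.z + k.cy)

// Two pinch points (opposite corners) define the image region to scan.
fun pinchCropRect(pinchA: Point3, pinchB: Point3, k: Intrinsics): Rect {
    val (u1, v1) = project(pinchA, k)
    val (u2, v2) = project(pinchB, k)
    return Rect(
        minOf(u1, u2).toInt(), minOf(v1, v2).toInt(),
        maxOf(u1, u2).toInt(), maxOf(v1, v2).toInt()
    )
}
```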
Use Case 2: In-Headset Photogrammetry
Thanks to the API and depth sensors, developers can now build tools that let users scan their real-world spaces in VR and generate 3D/photogrammetry models inside the headset [4]. Previously you'd have to use a high-end iPhone or subject your space to external cameras. Now you can (a minimal capture-loop sketch follows the list below):
- Scan your room in VR and export it as a custom VRChat world.
- Recreate your living room inside your virtual home.
- Build 3D assets for games or AR experiences fast and organically.
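A photogrammetry capture loop doesn't need much in-headset: save each frame alongside the headset pose it was captured with, then hand the pairs off to a reconstruction tool (COLMAP, RealityCapture, and the like). The sketch below assumes JPEG bytes and a pose are already in hand; `Pose` is a hypothetical stand-in for the pose metadata the API attaches to frames:

```kotlin
import java.io.File

// Hypothetical pose record: position plus orientation quaternion.
data class Pose(val px: Float, val py: Float, val pz: Float,
                val qx: Float, val qy: Float, val qz: Float, val qw: Float)

// Writes one frame and appends its pose; a reconstruction script can later
// parse poses.txt alongside the numbered images.
fun saveCapture(dir: File, index: Int, jpegBytes: ByteArray, pose: Pose) {
    File(dir, "frame_%04d.jpg".format(index)).writeBytes(jpegBytes)
    File(dir, "poses.txt").appendText(
        "frame_%04d.jpg %f %f %f %f %f %f %f\n".format(
            index, pose.px, pose.py, pose.pz,
            pose.qx, pose.qy, pose.qz, pose.qw))
}
```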
*The last three minutes of this video showcase the photogrammetry application.*
Use Case 3: AI Vision Inside VR
Imagine your Quest 3 headset running an on-device or cloud AI that interprets what you see—objects, furniture, people—and describes them in real time. Think of it like Meta AI on smart glasses, but inside a fully immersive headset (a rough sketch of the capture-and-describe loop follows the list):
- Audio descriptions for accessibility.
- Context-aware assistance—“that’s a coffee mug; tap it to turn it into a virtual cup”.
- Intelligent overlays—property details during home tours, real-time workout guidance, tooltips on gadgets.
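Here is that loop in rough form, with the model endpoint as an obvious placeholder (any on-device or hosted vision-language model would do; the request format is illustrative):

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Placeholder endpoint; substitute your own vision-language model service.
const val VISION_ENDPOINT = "https://example.com/describe"

// POSTs one JPEG frame and returns the model's text description,
// e.g. "a coffee mug on a wooden desk". Call from a background thread.
fun describeFrame(jpegBytes: ByteArray): String {
    val conn = URL(VISION_ENDPOINT).openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Content-Type", "image/jpeg")
    conn.outputStream.use { it.write(jpegBytes) }
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```

The returned string can then be handed to Android's TextToSpeech for the accessibility case in the first bullet.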
*See it in action.*
Why This Matters: A New Era of Autonomy
By giving apps direct access to visual data, developers can create MR applications that see and react. This shifts the model from users manually defining their surroundings to apps that autonomously understand them. It's not full AI consciousness, but it's a crucial step toward environment-aware XR apps.
Tech Conditions & Constraints
- Latency & FPS: The camera stream runs at ~30 FPS with noticeable latency, so it’s not ideal for fast-moving tracking or fine text parsing [2].
- Permission & Privacy: Users must grant camera access at runtime, similar to granting mic or location access (a minimal permission-request sketch follows this list). Meta still vets such apps to prevent misuse.
- Hardware & Engine Support: Unity developers can use WebCamTexture for access (one camera at a time). Native Android and OpenXR routes retain full camera and pose data [5].
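On the permission point: Meta's documentation describes a dedicated headset-camera permission that apps request alongside the standard Android camera permission. A minimal sketch of that flow (verify the permission string against the current Passthrough Camera API docs before relying on it):

```kotlin
import android.Manifest
import android.app.Activity
import androidx.core.app.ActivityCompat

// Horizon OS headset-camera permission as described in Meta's docs;
// confirm the exact string against the current documentation.
const val HEADSET_CAMERA_PERMISSION = "horizonos.permission.HEADSET_CAMERA"

fun requestCameraAccess(activity: Activity, requestCode: Int) {
    ActivityCompat.requestPermissions(
        activity,
        arrayOf(Manifest.permission.CAMERA, HEADSET_CAMERA_PERMISSION),
        requestCode
    )
}
```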
The Full Input Stack Is Coming Online
Meta’s decision to open up camera access is the headline, but what makes this moment truly exciting is what it enables in combination with other recent developer tools. Think of it not just as "apps can see," but as "apps can see, understand, and respond in space."
Here are some of the capabilities that now work in concert:
Thumb Micro-Gestures API
Released in the same window, this gives developers access to pinch, press, and subtle finger gestures [6]. In vision-enabled applications, this means you can literally draw out scan areas with your fingers, summon objects, or trigger AI descriptions with a gesture—without needing physical controllers.
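How that wiring might look, with the gesture callback surface written as a hypothetical interface (the real events come from your SDK binding, such as Meta's Interaction SDK; only the control flow is the point here):

```kotlin
// Hypothetical callback surface standing in for a real microgesture SDK binding.
interface MicrogestureEvents {
    fun onThumbTap(handler: () -> Unit)
    fun onThumbSwipe(handler: (direction: String) -> Unit)
}

// Map subtle thumb gestures to vision features: a tap starts drawing a scan
// area, a forward swipe asks the AI to describe the current view.
fun bindGestures(gestures: MicrogestureEvents) {
    gestures.onThumbTap { startScanRegionSelection() }
    gestures.onThumbSwipe { direction ->
        if (direction == "forward") triggerAiDescription()
    }
}

fun startScanRegionSelection() { /* enter boundary-pinch mode */ }
fun triggerAiDescription() { /* capture a frame and call the vision model */ }
```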
Inside-Out Body and Arm Tracking
Using the headset’s sensors, developers can now reconstruct upper-body motion with no external trackers [7]. For object interaction and spatial UI, this means you can reach out, grab, or even manipulate virtual objects anchored to real ones—without needing to model your body from scratch.
Photogrammetry + Depth API Access (Quest 3-specific)
Thanks to the Quest 3’s depth sensor and improved passthrough fidelity, we’re now seeing prototypes of room-scanning apps that don’t need an iPhone Pro or LiDAR device [8]. With a few gestures and a camera pass, you could scan your desk, your living room, or even map your entire home into a playable VR environment.
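Under the hood, turning a depth frame into scan geometry is a per-pixel unprojection through the camera intrinsics. The sketch below assumes a row-major float depth buffer in meters; the Depth API's actual formats vary by engine binding:

```kotlin
// Placeholder pinhole intrinsics, as in the earlier projection sketch.
data class DepthIntrinsics(val fx: Float, val fy: Float, val cx: Float, val cy: Float)
data class PointXYZ(val x: Float, val y: Float, val z: Float)

// Unprojects each valid depth sample into a camera-space 3D point.
fun depthToPoints(depth: FloatArray, width: Int, height: Int,
                  k: DepthIntrinsics): List<PointXYZ> {
    val points = ArrayList<PointXYZ>(depth.size)
    for (v in 0 until height) {
        for (u in 0 until width) {
            val z = depth[v * width + u]
            if (z <= 0f) continue // skip holes / invalid samples
            points.add(PointXYZ((u - k.cx) * z / k.fx,
                                (v - k.cy) * z / k.fy, z))
        }
    }
    return points
}
```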
Taken together, these tools form the beginnings of a real machine perception layer—one where XR apps can interpret space, recognize gestures, and work with real-world objects dynamically. In other words: the VR headset isn’t just an output display anymore. It’s gaining eyes, hands, and context.

Looking Ahead: Applications & Industry Impact
Education & STEM
Imagine scanning a textbook illustration and generating a floating 3D model next to it—interactive and explorable from any angle.
Training & Enterprise
On-site MR guides that identify machinery and overlay step-by-step procedures without external hardware.
Accessibility
Real-time visual narration to help visually impaired users navigate new spaces.
Social & Entertainment
Real-world game board recognition—drop physical tokens and watch them come alive in VR across multiplayer sessions.
Final Thoughts
Meta’s decision to open camera access marks a meaningful shift. It empowers third-party developers to build MR experiences that sense, interpret, and react to the real world, making mixed reality truly mixed. With innovations like gesture-based scanning, in-headset photogrammetry, and AI vision overlays, we're seeing the dawn of practical and creative XR use cases that were previously locked behind hardware or privacy barriers.
Call to Action for Developers
- Experiment with Meta’s Passthrough Camera API and example code on GitHub [9].
- Combine it with hand and microgesture input to build intuitive MR opt-ins.
- Prototype photogrammetry or object detection experiences and push them through the Horizon Store review.
References
1. Meta Developers Blog – Explore a New Era of Mixed Reality with the Passthrough Camera API
2. Meta Developer Docs – Camera Access API on Quest
3. Example Demo: “Scan & Spawn” prototype via Meta
4. GitHub Example – Room Scanning with Passthrough and Depth API
5. Meta Unity Integration Guide – WebCamTexture & Native Access
6. Meta Developers – Microgestures on Quest 3
7. UploadVR – Inside-Out Body Tracking Demoed on Quest 3
8. Mixed Reality News – Photogrammetry Comes to Quest 3
9. Meta GitHub – Meta Quest Camera API Examples