The initial technical vision for VR on the Web includes:
- Rendering Canvas (WebGL or 2D) to VR output devices
- Rendering 3D Video to VR output devices (as directly as possible)
- Rendering HTML (DOM+CSS) content to VR output devices – taking advantage of existing CSS features such as 3D transforms
- Mixing WebGL-rendered 3D content with DOM-rendered, 3D-transformed content in a single 3D space
- Receiving input from orientation and position sensors, with a focus on reducing the latency from sensor input and rendering to final presentation
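To make the sensor bullet concrete: most head-tracking hardware reports orientation as a unit quaternion, which page script would convert into a rotation matrix for the view transform. A minimal sketch, assuming a plain `{x, y, z, w}` quaternion object; the field names are illustrative, not a committed API:

```javascript
// Convert a unit quaternion {x, y, z, w} from a head-tracking sensor
// into a 3x3 rotation matrix (row-major). The transpose (inverse) of
// this matrix would serve as the view rotation when rendering.
function quatToRotationMatrix(q) {
  const { x, y, z, w } = q;
  return [
    1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w),
    2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w),
    2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y),
  ];
}
```

The latency goal above implies this conversion (and the render that uses it) should happen as late as possible before presentation, so the pose is as fresh as the display allows.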
In particular, Web content should not need to be aware of the particulars of the VR output device, beyond knowing that one is present and that it has certain standard rendering characteristics (e.g. a specific projection matrix to apply). For example, in the case of the Oculus Rift, content should not need to apply the Rift-specific distortion rendering effect.
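As an illustration of those device-neutral rendering characteristics: suppose the device reports a per-eye field of view as four half-angles in degrees (up/down/left/right) — one plausible shape for such data, assumed here rather than taken from any finalized API. The page can then build a standard off-axis perspective projection matrix itself, with no distortion logic involved:

```javascript
// Build a column-major 4x4 off-axis projection matrix (WebGL clip-space
// convention, z in [-1, 1]) from a per-eye field of view given as four
// half-angles in degrees. zNear/zFar are the clip-plane distances.
// The fovDeg field names are assumptions for this sketch.
function fovToProjection(fovDeg, zNear, zFar) {
  const d = Math.PI / 180;
  const upTan = Math.tan(fovDeg.upDegrees * d);
  const downTan = Math.tan(fovDeg.downDegrees * d);
  const leftTan = Math.tan(fovDeg.leftDegrees * d);
  const rightTan = Math.tan(fovDeg.rightDegrees * d);
  const xScale = 2 / (leftTan + rightTan);
  const yScale = 2 / (upTan + downTan);
  return [
    xScale, 0, 0, 0,
    0, yScale, 0, 0,
    (rightTan - leftTan) * xScale * 0.5,          // horizontal off-axis shift
    (upTan - downTan) * yScale * 0.5,             // vertical off-axis shift
    -(zFar + zNear) / (zFar - zNear), -1,
    0, 0, -(2 * zFar * zNear) / (zFar - zNear), 0,
  ];
}
```

With symmetric half-angles this reduces to an ordinary perspective matrix; asymmetric angles (typical for HMD eyes, whose frustums are shifted toward the nose) produce the off-axis shift terms.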
Mozilla's initial step provides the seed technical functionality for the first type of VR content listed above: receiving sensor input and rendering Canvas/WebGL content to VR.
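One common device-neutral way to structure that Canvas/WebGL rendering — used here as an illustration, not a committed API — is side-by-side stereo: the page renders its scene twice, once into each half of the canvas, and the browser applies any device-specific distortion on its own:

```javascript
// Split a canvas into side-by-side viewports, one per eye. The page
// renders its WebGL scene once into each viewport; device-specific
// distortion is left to the browser, not the page.
function eyeViewports(canvasWidth, canvasHeight) {
  const half = Math.floor(canvasWidth / 2);
  return {
    left:  { x: 0,    y: 0, width: half, height: canvasHeight },
    right: { x: half, y: 0, width: half, height: canvasHeight },
  };
}
```

In a WebGL render loop, each eye's pass would start with `gl.viewport(v.x, v.y, v.width, v.height)` using the corresponding entry, paired with that eye's projection matrix.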
In addition, Mozilla will approach the problem from a user-experience and design angle, to work out best practices for bringing VR to Web content: what browsing a VR Web might feel like, what creating VR Web content could be like, and what it could mean to browse the current Web through a VR interface.