rt-xr: VR, MR and AR visualization for augmented sentient spaces

Introduction

The rt-xr project provides tools for visualization, interaction and sharing in augmented sentient spaces:

  • Visualization: A model of the sentient space is used to derive a virtual world for VR headset-wearing occupants of the space and augmentations for MR and AR headset-wearing occupants of the space. The structural model can be augmented with various assets, including proxy objects that provide a UI for remote services.
  • Interaction: MR/AR occupants physically within a space can interact with objects in the space, while VR users can interact with virtual analogs of those objects for a telepresent experience.
  • Sharing: VR users in a space see avatars representing the MR/AR users physically present in the space, while MR/AR users see avatars representing the VR users virtually present in it. Spatially located audio enhances the reality of the shared experience, allowing users to converse naturally.

rt-xr is based on the Manifold networking surface, which greatly simplifies dynamic, ad-hoc architectures with efficient multicast and point-to-point communication services and easy service discovery.
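
To make the discovery mechanism concrete, here is a minimal Python sketch of multicast service discovery. The group address, port and JSON message format are assumptions for illustration only; they are not the actual Manifold wire protocol.

    import json
    import socket
    import struct

    DISCOVERY_GROUP = "239.255.43.21"   # assumed administratively-scoped multicast group
    DISCOVERY_PORT = 5354               # assumed port, illustrative only

    def announce(service_name, endpoint):
        """Send a one-shot service announcement to the discovery group."""
        msg = json.dumps({"service": service_name, "endpoint": endpoint}).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(msg, (DISCOVERY_GROUP, DISCOVERY_PORT))
        sock.close()

    def listen():
        """Yield (service, endpoint) pairs as announcements arrive."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", DISCOVERY_PORT))
        mreq = struct.pack("4sl", socket.inet_aton(DISCOVERY_GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, _addr = sock.recvfrom(4096)
            info = json.loads(data)
            yield info["service"], info["endpoint"]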

The rt-xr SpaceServer

A key component of rt-xr is the rt-xr SpaceServer, which provides a repository for all augmentation objects and models within a sentient space. The root object is the space definition that models the physical space. This allows a virtual model to be generated for VR users while also locating augmentation objects for all users. When a user first enters a space, either physically or virtually, they receive the space definition file from the rt-xr SpaceServer. Depending on the user's mode (VR, MR or AR), this file is used to generate all the objects and models necessary for the experience. The space definition file can contain references to standard objects built into the rt-xr viewer apps (such as video panels) or references to proxy objects that can be downloaded from the rt-xr SpaceServer or any other server acting as a proxy object repository.
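
As an illustration of how a viewer might consume the space definition file, here is a short Python sketch. The field names and mode-dependent logic are hypothetical, since the actual rt-xr schema is not documented here; only the overall flow (VR builds the structural model, AR/MR instantiate just the augmentations) follows the description above.

    # Hypothetical space definition; field names are assumptions for illustration.
    space_definition = {
        "name": "office",
        "structure": "models/office_walls.bundle",   # structural model of the space
        "objects": [
            {"type": "videoPanel",                   # standard object built into the viewers
             "position": [2.0, 1.5, 0.0]},
            {"type": "proxy",                        # proxy object fetched from a repository
             "asset": "http://spaceserver.local/assets/clock.bundle",
             "position": [0.0, 2.2, 3.0]},
        ],
    }

    def build_scene(space, mode):
        """VR users get the structural model; AR/MR users only the augmentations."""
        if mode == "VR":
            print("loading structural model:", space["structure"])
        for obj in space["objects"]:
            print("instantiating", obj["type"], "at", obj["position"])

    build_scene(space_definition, mode="AR")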

The rt-xr SharingServer

The rt-xr SharingServer is responsible for distributing camera transforms and other user state data between occupants of a sentient space, allowing avatars representing virtual users in the space to be animated. It also provides support for the spatially located audio system.
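
A minimal sketch of the kind of user-state message the SharingServer might relay is shown below; the field names, units and update rate are assumptions, since the actual wire format is not described in this post.

    import json
    import time

    def make_pose_update(user_id, position, rotation):
        """Build a hypothetical head-pose message for the SharingServer to fan
        out to the other occupants, who use it to animate this user's avatar."""
        return json.dumps({
            "user": user_id,
            "timestamp": time.time(),
            "position": position,   # metres in space-local coordinates (assumed)
            "rotation": rotation,   # quaternion x, y, z, w (assumed)
        }).encode()

    # e.g. each viewer sends its camera transform a few times per second
    print(make_pose_update("vr-user-1", [1.0, 1.7, -2.0], [0.0, 0.707, 0.0, 0.707]))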

Packaged with the SharingServer are services that operate proxy objects. These include:

  • Clock service. This operates clock assets by supplying angular offsets for the hour, minute and second hands. It is intended mainly as a test service; a minimal sketch of the angle calculation appears after this list.
  • Whiteboard service. This provides support for multiple independent whiteboards to be used in the sentient space. Individual contributions from occupants are distributed in real-time to all other occupants, ensuring a consistent view of the whiteboard.
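
Since the clock service just publishes hand angles, the underlying calculation is simple. Here is a minimal Python sketch of the angular offsets an analog clock asset would need; the service interface around it is not shown and the function name is illustrative.

    from datetime import datetime

    def hand_angles(now):
        """Angular offsets in degrees, clockwise from 12 o'clock, for the
        hour, minute and second hands."""
        seconds = now.second
        minutes = now.minute + seconds / 60.0
        hours = (now.hour % 12) + minutes / 60.0
        return hours * 30.0, minutes * 6.0, seconds * 6.0   # 360/12 and 360/60

    print(hand_angles(datetime.now()))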

The rt-xr Viewers

The rt-xr Viewers are Unity apps that provide the necessary functionality to interact with the rest of the rt-xr system. Platforms include:

  • Windows desktop.
  • Mac desktop.
  • UWP for Windows Mixed Reality devices.
  • Android.
  • iOS.

Scaling rt-xr to multiple locations

One of the fundamental concepts of the rt-xr and rt-ai Edge projects is that it should be possible to experience a remote sentient space in a telepresent way. The diagram above shows the idea. The main sentient space houses a ManifoldNexus instance that supplies service discovery, subscription and message passing functions to all of the other components. Not shown is the rt-ai Edge component that deals with real-time intelligent processing, both reactive and proactive, of real-world sensor data and controls. However, rt-ai Edge interconnects with ManifoldNexus, making data and control flows available in the Manifold world.

Co-located with ManifoldNexus are the various servers that implement the visualization part of the sentient space. The SpaceServer allows occupants of the space to download a space definition file that is used to construct a model of the space. For VR users, this is a virtual model of the space that can be used remotely. For AR and MR users, only augmentations and interaction elements are instantiated so that the real space can be seen normally. The SpaceServer also houses downloadable asset bundles containing the augmentations that occupants have placed around the space.

This is why it is referred to as a dynamic sentient space: as an occupant enters the space, either physically or virtually, the relevant space model and augmentations are downloaded, and any changes that occupants make are merged back into the space definition and model repository so that all occupants stay correctly synced with the space. The SharingServer provides real-time transfer of pose and audio data, and the Home Automation server links the space model with networked controls that physically exist in the space.
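
The merge-back step can be pictured with a small sketch. The operation names and structures here are assumptions; the point is only that a placement is applied locally, recorded as a change, and folded into the server-side space definition that later occupants download.

    def place_augmentation(space, obj):
        """Viewer side: apply a placement locally, return a change record."""
        space["objects"].append(obj)
        return {"op": "add", "object": obj}

    def merge_change(server_space, change):
        """SpaceServer side: fold an occupant's change into the space definition."""
        if change["op"] == "add":
            server_space["objects"].append(change["object"])

    local_space = {"objects": []}
    server_space = {"objects": []}
    merge_change(server_space, place_augmentation(local_space, {"type": "proxy", "asset": "clock"}))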

When everything is on a single LAN, things just work. New occupants of a space auto-discover the sentient spaces available on that LAN and can select the appropriate one via a GUI in the generic viewer app. Normally there would be just one space, but the system allows for multiple spaces on a single LAN if required. The issue then is how to connect VR users at remote locations. As shown in the diagram, ManifoldNexus has the ability to use secure tunnels between regions. This does require that one of the gateway routers has a port forwarding entry configured, but otherwise needs no setup beyond the security configuration. There can be several remote spaces if necessary, and a single tunnel can support more than one sentient space. Once the Manifold infrastructure is established, integration is complete: auto-discovery and message switching behave exactly the same for remote occupants as for local ones. What is also nice is that multicast services can be replicated for remote users on the remote LAN, so data never has to be sent more than once over the tunnel itself. This optimization is implemented automatically within ManifoldNexus; a sketch of the idea follows.
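
This is a toy sketch of that replication idea, under assumed names: each site's nexus delivers a published message to its own local subscribers, and exactly one copy crosses the tunnel regardless of how many remote subscribers there are.

    class TunnelEnd:
        def __init__(self):
            self.local_subscribers = []   # callbacks for subscribers on this LAN
            self.peer = None              # the TunnelEnd at the other site

        def publish(self, topic, payload):
            self.deliver_locally(topic, payload)
            if self.peer is not None:
                self.peer.deliver_locally(topic, payload)   # one copy on the tunnel

        def deliver_locally(self, topic, payload):
            for callback in self.local_subscribers:
                callback(topic, payload)

    # wire up two sites and publish from the local one
    local, remote = TunnelEnd(), TunnelEnd()
    local.peer, remote.peer = remote, local
    remote.local_subscribers.append(lambda t, p: print("remote got", t, p))
    local.publish("pose/vr-user-1", b"payload")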

Dynamic sentient spaces (where a standard viewer is customized for each space by the servers) are now supported on five platforms: Windows desktop, macOS, Windows Mixed Reality, Android and iOS.