rt-xr: VR, MR and AR visualization for augmented sentient spaces

It was becoming pretty clear that the Unity/XR parts of rt-ai Edge were taking on a life of their own, so they have now been broken out into a new project called rt-xr. rt-ai Edge is an always-on, real-time, long-lived stream processing system, whereas rt-xr is ideal for ad-hoc networking where components come and go as required. In particular, the XR headsets of real and virtual occupants of a sentient space can come and go on a random basis – the sentient space is persistent and new users just get updated with the current state upon entering the space. In terms of sentient space implementation, rt-ai Edge provides the underlying sensing, intelligent processing and reaction processing (somewhat like an autonomic system), whereas rt-xr provides a more user-oriented system for visualizing and interacting with the space at the conscious level (to keep the analogy going), along with the necessary servers for sharing state, providing object repositories, etc.

Functions include:

  • Visualization: A model of the sentient space is used to derive a virtual world for VR headset-wearing occupants of the space and augmentations for MR and AR headset-wearing occupants of the space. The structural model can be augmented with various assets, including proxy objects that provide a UI for remote services.
  • Interaction: MR and AR occupants physically within a space can interact with objects in that space, while VR users can interact with virtual analogs of those objects within the same space for a telepresent experience.
  • Sharing: VR users in a space see avatars representing MR/AR users physically within the space while MR/AR users see avatars representing VR users within the space. Spatially located audio enhances the reality of the shared experience, allowing users to converse in a realistic manner.

rt-xr is based on the Manifold networking surface, which greatly simplifies dynamic, ad-hoc architectures, supported by efficient multicast and point-to-point communication services and easy service discovery.
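
Manifold's actual API and wire protocol aren't shown here, but the sketch below illustrates the general discovery pattern that makes this kind of ad-hoc architecture work: each component periodically announces itself over a multicast group and peers listen for those announcements, so services can come and go without a central registry. The group address, port and message fields are illustrative assumptions, not Manifold's real protocol.

```python
import json
import socket
import struct
import time

# Illustrative multicast group/port - not Manifold's actual values.
GROUP, PORT = "239.255.42.99", 5353

def announce(service_name: str, endpoint: str, interval: float = 2.0) -> None:
    """Periodically advertise a service over UDP multicast."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    msg = json.dumps({"service": service_name, "endpoint": endpoint}).encode()
    while True:
        sock.sendto(msg, (GROUP, PORT))
        time.sleep(interval)

def discover(timeout: float = 5.0) -> dict:
    """Listen for announcements and return the services heard within the timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    found = {}
    try:
        while True:
            data, _ = sock.recvfrom(1024)
            info = json.loads(data)
            found[info["service"]] = info["endpoint"]
    except socket.timeout:
        return found
```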

A key component of rt-xr is the rt-xr SpaceServer. This provides a repository for all augmentation objects and models within a sentient space. The root object is the space definition that models the physical space. This allows a virtual model to be generated for VR users while also locating augmentation objects for all users. When a user first enters a space, either physically or virtually, they receive the space definition file from the rt-xr SpaceServer. Depending on their mode, this is used to generate all the objects and models necessary for the experience. The space definition file can contain references to standard objects in the rt-xr viewer apps (such as video panels) or else references to proxy objects that can be downloaded from the rt-xr SpaceServer or any other server used as a proxy object repository.
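
As a rough illustration of the kind of content a space definition might carry, here is a hypothetical example. The field names and layout are assumptions (the real format isn't documented in this post), but the structure follows the description above: a structural model of the space plus a list of augmentation objects, each either a standard viewer object such as a video panel or a proxy object to be fetched from an object repository.

```python
# Hypothetical space definition sketch - every field name is an assumption.
space_definition = {
    "space": {
        "name": "lab",
        "model": "models/lab_structure",   # structural model of the physical space
    },
    "objects": [
        {
            "type": "videoPanel",          # standard object built into the viewers
            "position": [1.5, 1.2, -0.5],  # metres, space-local coordinates
            "rotation": [0.0, 90.0, 0.0],  # Euler angles, degrees
            "source": "camera/frontdoor",  # stream the panel should display
        },
        {
            "type": "proxyObject",         # downloaded rather than built in
            "repository": "http://spaceserver.local/objects",  # hypothetical URL
            "asset": "thermostat_ui",
            "position": [-2.0, 1.4, 3.0],
            "rotation": [0.0, 0.0, 0.0],
        },
    ],
}
```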

The rt-xr SharingServer is responsible for distributing camera transforms and other user state data between occupants of a sentient space, allowing animation of avatars representing virtual users in a space. It also provides support for the spatially located audio system.
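
The following sketch shows one plausible shape for the per-user state the SharingServer might distribute; the message fields and the relay helper are assumptions for illustration, not the actual rt-xr protocol. The idea is simply that each viewer publishes its camera transform a few times per second and the SharingServer fans it out to every other occupant, which uses it to animate that user's avatar.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class UserState:
    user_id: str
    mode: str           # "VR", "MR" or "AR"
    position: tuple     # head position in space-local coordinates (metres)
    orientation: tuple  # head orientation as a quaternion (x, y, z, w)
    timestamp: float    # sender clock, used to drop stale updates

def encode_state(state: UserState) -> bytes:
    """Serialize a user state update for transmission."""
    return json.dumps(asdict(state)).encode()

def relay(update: bytes, occupants: dict) -> None:
    """Fan an update out to every occupant except its sender.

    occupants maps user_id -> a callable that sends bytes to that user.
    """
    state = json.loads(update)
    for user_id, send in occupants.items():
        if user_id != state["user_id"]:
            send(update)

# Example: a VR user's head pose, as a viewer might publish it.
example = UserState("vr-user-1", "VR", (0.0, 1.7, 2.0), (0.0, 0.0, 0.0, 1.0), time.time())
```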

The rt-xr Viewers are Unity apps that provide the necessary functionality to interact with the rest of the rt-xr system:

  • rt-xr Viewer3D is a Windows desktop viewer.
  • rt-xr ViewerMR is a UWP viewer for Windows Mixed Reality devices.
  • rt-xr ViewerAndroid is a viewer for Android devices.
