Sentient space sharing avatars with Windows desktop, Windows Mixed Reality and Android apps


One of the goals of the rt-ai Edge system is that users can interact with it and extract value from it using whatever device they have available. Unity is a tremendous help here, given that Unity apps can run on pretty much everything. The main task was integration with Manifold so that every app can receive data from, and interact with, everything else in the system. Manifold currently supports Windows, UWP, Linux, Android and macOS. iOS is a notable absentee and will hopefully be added at some point in the future. However, I see Android support as more significant since it also opens the door to multiple MR headsets.

The screenshot above and video below show three instances of the rt-ai viewer app running on Windows desktop, Windows Mixed Reality and Android, interacting in a shared sentient space. Ok, so the avatars are rubbish (I call them Sad Robots) but that's just a detail that can be improved later. The wall panels are receiving sensor and video data from ZeroSensors via an rt-ai Edge stream processing network, while the light switch is operated via a home automation server and Insteon.

Sharing is mediated by a SharingServer that is part of Manifold. The SharingServer uses Manifold's multicast and end-to-end services to implement scalable sharing while minimizing the load on each individual device. Ultimately, the SharingServer will also supply the space definition file when a user enters a sentient space, along with details of any virtual objects that other users have placed there. This allows a new user with a standard app to enter a space and quickly build a view of the sentient space that is consistent with what existing users see.
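To make the space definition idea a little more concrete, here is a minimal sketch of the kind of structure a viewer app might deserialize on entry. The class and field names are illustrative assumptions on my part; the actual rt-ai/Manifold file format isn't shown in this post.

```
using System;
using UnityEngine;

// Hypothetical fragment of a space definition as a viewer app might
// deserialize it. All names here are assumptions for illustration;
// the real rt-ai/Manifold space definition format may differ.
[Serializable]
public class SpaceDefinition
{
    public string spaceName;
    public WallPanel[] wallPanels;   // sensor/video display surfaces
    public VirtualObject[] objects;  // objects placed by other users
}

[Serializable]
public class WallPanel
{
    public string streamName;    // rt-ai Edge stream feeding this panel
    public Vector3 position;
    public Vector3 eulerAngles;
}

[Serializable]
public class VirtualObject
{
    public string prefabName;    // which prefab to instantiate
    public string owner;         // user who placed the object
    public Vector3 position;
    public Vector3 eulerAngles;
}

// A newly joining app could then do something like:
// SpaceDefinition space = JsonUtility.FromJson<SpaceDefinition>(downloadedJson);
```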

While this is all kind of fun, things get more interesting when this is combined with a HoloLens or similar MR headset. An MR headset user in a space would see any VR users in the space represented by their avatars; likewise, VR users would see avatars representing the MR users. The idea is to get as close to a telepresent experience for VR users as possible without very complex setups. It would be much nicer to use Holoportation, but that would require that every room in the space have a very complex and expensive setup, which really isn't the point. The idea is to make an rt-ai Edge based sentient space very easy and low cost to implement.

Still lots to do, of course. One big thing is audio; another is representing interaction devices (pointers, motion controllers etc) to all users. Right now, each app just sends its camera transform to the SharingServer, which distributes it to all other users. This will be extended to include PCM audio chunks and transforms for interaction devices so that everyone can construct a meaningful scene. Each user will receive the audio stream from every other user; that way, each individual audio stream can be attached to that user's avatar, giving a spatialized sound effect using Unity's audio capabilities (that's the hope, anyway).

Another very important point is that the apps work differently depending on whether they are running on VR devices or AR/MR devices. In the latter case, the walls and related objects are not drawn and only their colliders are instantiated, although virtual objects and avatars remain visible. Obviously AR/MR users want to see the real walls, light switches and so on, not virtual representations of them. However, they will still be able to interact in exactly the same way as a VR user.
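As a rough illustration of the avatar side of this, here is a minimal Unity sketch of a remote avatar consuming pose updates and audio chunks. All the names, the mono 16 kHz framing and the float sample format are assumptions for illustration, not the actual rt-ai code.

```
using UnityEngine;

// Minimal sketch of a remote user's avatar, assuming per-user pose updates
// and PCM audio chunks (already converted to floats) arrive from the
// SharingServer. Names and audio framing are illustrative assumptions.
public class RemoteAvatar : MonoBehaviour
{
    const int SampleRate = 16000;   // assumed PCM sample rate
    AudioSource audioSource;

    void Start()
    {
        // Attaching the AudioSource to the avatar lets Unity spatialize
        // the remote user's voice at the avatar's position.
        audioSource = gameObject.AddComponent<AudioSource>();
        audioSource.spatialBlend = 1.0f;   // fully 3D sound
    }

    // Called when a pose update for this user arrives.
    public void OnPoseUpdate(Vector3 position, Quaternion rotation)
    {
        transform.SetPositionAndRotation(position, rotation);
    }

    // Called when a PCM audio chunk for this user arrives. A real
    // implementation would queue chunks into a streaming buffer to
    // avoid gaps; this just illustrates the avatar attachment.
    public void OnAudioChunk(float[] samples)
    {
        AudioClip clip = AudioClip.Create("chunk", samples.Length, 1, SampleRate, false);
        clip.SetData(samples, 0);
        audioSource.clip = clip;
        audioSource.Play();
    }
}
```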

Controlling the real world from the virtual world with Android

Since the ability to operate a real light switch from the VR world using Windows Mixed Reality (WMR) is now working, it was time to get the same thing working in the Android version of the Unity app – rtAndroidView. This uses the same rt-ai Edge stream processing network and Manifold network as the WMR and desktop versions, but the extra trick was getting the interaction working.

The video shows me using the touch screen to navigate around the virtual model of my office and operate the light switch, showing that the Manifold HAServer interface is working, along with the normal video and ZeroSensor interfaces.
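The touch interaction itself comes down to a raycast from the camera through the touch point. Here is a sketch of the general technique; the LightSwitchControl component and its Toggle() method are hypothetical names I've introduced for illustration, not the actual rtAndroidView code.

```
using UnityEngine;

// Hypothetical switch component; in the real system the toggle request
// would go to the HAServer over Manifold.
public class LightSwitchControl : MonoBehaviour
{
    public void Toggle()
    {
        // send the on/off request to the HAServer here
    }
}

// A tap raycasts from the camera through the touch point; if it hits a
// collider carrying a LightSwitchControl, the switch is toggled.
public class TouchInteraction : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                LightSwitchControl sw = hit.collider.GetComponent<LightSwitchControl>();
                if (sw != null)
                    sw.Toggle();
            }
        }
    }
}
```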

This is using the Android device as a VR device. In theory, it should be possible to use ARCore with an AR version of this app, but the issue is locking the virtual space to the real space. That will take some experimentation, I suspect.

Controlling the real world using Windows Mixed Reality, Manifold, rt-ai Edge and Insteon

Having now constructed a simple walk-around model of my office and another room, it was time to start work on the interaction side of things. I have an Insteon switch controlling some of the lights in my office, and this seemed like an obvious target. Manifold now has a home automation server app (HAServer) based on one from an earlier project. This allows individual Insteon devices to be addressed by user-friendly names using JSON over Manifold's end-to-end datagram service. Light switches can now be specified in the Unity rtXRView space definition file and linked to the control interface of the HAServer.
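As an illustration, a control message might look something like the sketch below. The field names and values are assumptions on my part; all this post establishes is that devices are addressed by friendly names using JSON over the end-to-end datagram service.

```
using System;
using UnityEngine;

// Hypothetical shape of a control message to the HAServer. Field names
// and values are illustrative assumptions, not the actual wire format.
[Serializable]
public class HAServerCommand
{
    public string device;    // user-friendly Insteon device name
    public string command;   // e.g. "on" or "off"
}

// Building the datagram payload:
// string json = JsonUtility.ToJson(
//     new HAServerCommand { device = "officeLights", command = "on" });
// The resulting bytes would then be sent to the HAServer over Manifold's
// end-to-end datagram service.
```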

The screen capture above and video below were made using a Samsung Odyssey headset and motion controllers. The light switch specification causes a virtual light switch to be placed, ideally exactly where the real light switch happens to be. Then, by pointing at the light switch with the motion controller and clicking, the light can be turned on and off. The virtual light switch is gray when the light is off and green when it is on. If the real switch is operated by some other means, the virtual light switch will reflect this as the HAServer broadcasts state change updates on a regular basis. It’s nice to see that the light sensor on the ZeroSensor responds appropriately to the light level too. Technically this light switch is a dimmer – setting an intermediate level is a TODO at this point.
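The state-driven coloring is straightforward in Unity. Here is a minimal sketch, assuming a handler that is called whenever a HAServer state broadcast arrives for this device; the handler name and state encoding are assumptions.

```
using UnityEngine;

// Keeps the virtual switch in sync with HAServer state broadcasts.
public class LightSwitchView : MonoBehaviour
{
    Renderer switchRenderer;

    void Start()
    {
        switchRenderer = GetComponent<Renderer>();
    }

    // Called when a state broadcast for this device arrives.
    public void OnStateUpdate(bool isOn)
    {
        // Gray when off, green when on, matching the behavior described above.
        switchRenderer.material.color = isOn ? Color.green : Color.gray;
    }
}
```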

An interesting aspect of this is the extent to which a remote VR user can get a sense of telepresence in a space, even if it is just a virtual representation of the real space. To make that connection more concrete, the virtual light in Unity should reflect the ambient light level as measured by the ZeroSensor. That’s another TODO…
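For what it's worth, that TODO might end up looking something like the sketch below, assuming the ZeroSensor light reading is delivered as a raw level; the full-scale value and linear scaling are placeholders.

```
using UnityEngine;

// Drives the virtual room light from the ZeroSensor's ambient light
// reading. The scaling here is a placeholder assumption.
public class AmbientLightSync : MonoBehaviour
{
    public Light virtualLight;            // the Unity light in the room model
    public float maxSensorValue = 1000f;  // assumed full-scale sensor reading

    // Called when a ZeroSensor light-level reading arrives.
    public void OnLightLevel(float sensorValue)
    {
        // Scale the reading into a 0..1 intensity for the virtual light.
        virtualLight.intensity = Mathf.Clamp01(sensorValue / maxSensorValue);
    }
}
```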

While this is kind of fun in the VR world, it could be genuinely useful in the AR world. If the virtual light switch is placed correctly but is invisible (apart from a collider), a HoloLens user (for example) could look at a real light switch and click in order to change the state of the switch. Very handy for the terminally lazy! Even more useful would be to annotate the switch with what it controls. For some reason, people in this house never seem to know which light switch controls what, so this feature by itself would be quite handy.