Scaling dynamic sentient spaces to multiple locations

One of the fundamental concepts of the rt-xr and rt-ai Edge projects is that it should be possible to experience a remote sentient space in a telepresent way. The diagram above shows the idea. The main sentient space houses a ManifoldNexus instance that supplies service discovery, subscription and message passing functions to all of the other components. Not shown is the rt-ai Edge component that deals with real-time intelligent processing, both reactive and proactive, of real-world sensor data and controls. However, rt-ai Edge interconnects with ManifoldNexus, making data and control flows available in the Manifold world.

Co-located with ManifoldNexus are the various servers that implement the visualization part of the sentient space. The SpaceServer allows occupants of the space to download a space definition file that is used to construct a model of the space. For VR users, this is a virtual model of the space that can be used remotely. For AR and MR users, only augmentations and interaction elements are instantiated so that the real space can be seen normally. The SpaceServer also houses downloadable asset bundles that contain augmentations that occupants have placed around the space. This is why it is referred to as a dynamic sentient space – as an occupant either physically or virtually enters the space, the relevant space model and augmentations are downloaded. Any changes that occupants make get merged back to the space definition and model repository to ensure that all occupants are synced with the space correctly. The SharingServer provides real-time transfer of pose and audio data. The Home Automation server provides a way for the space model to be linked with networked controls that physically exist in the space.
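
As a concrete illustration of the download step, here is a minimal sketch of how a viewer could pull an augmentation asset bundle at runtime using Unity's UnityWebRequestAssetBundle API. The URL, bundle name and prefab name are placeholders, not the actual rt-xr SpaceServer endpoints.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical example: fetch an augmentation bundle from a SpaceServer-style
// endpoint and instantiate a prefab from it. URL and names are placeholders.
public class AugmentationLoader : MonoBehaviour
{
    public string bundleUrl = "http://spaceserver.local:8080/bundles/office-augmentations";
    public string prefabName = "StickyNote";

    IEnumerator Start()
    {
        using (UnityWebRequest req = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError)
            {
                Debug.LogWarning("Bundle download failed: " + req.error);
                yield break;
            }

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(req);
            GameObject prefab = bundle.LoadAsset<GameObject>(prefabName);
            if (prefab != null)
                Instantiate(prefab, transform);   // parent under the space model
            bundle.Unload(false);                 // keep instantiated objects alive
        }
    }
}
```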

When everything is on a single LAN, things just work. New occupants of a space auto-discover sentient spaces available on that LAN and, via a GUI in the generic viewer app, can select the appropriate space. Normally there would be just one space, but the system allows for multiple spaces on a single LAN if required. The issue then is how to connect VR users at remote locations. As shown in the diagram, ManifoldNexus has the ability to use secure tunnels between regions. This does require that one of the gateway routers has a port forwarding entry configured, but otherwise needs no setup beyond the security configuration. There can be several remote spaces if necessary, and a tunnel can support more than one sentient space. Once the Manifold infrastructure is established, integration is total in that auto-discovery and message switching behave for remote occupants in exactly the same way as for local occupants. What is also nice is that multicast services can be replicated for remote users on the remote LAN, so data never has to be sent more than once over the tunnel itself. This optimization is implemented automatically within ManifoldNexus.

Dynamic sentient spaces (where a standard viewer is customized for each space by the servers) are now basically working on all five platforms (Windows desktop, macOS, Windows Mixed Reality, Android and iOS). Persistent ad-hoc augmentations using downloadable assets are the next step in this process. I will probably start with the virtual sticky note – this is where an occupant can leave a persistent message for other occupants. This requires a lot of the general functionality of persistent dynamic augmentations and is actually kind of useful, for a change!

rt-xr sentient space visualization now on iOS!

I have to admit, I am in a state of shock right now. For some reason, today I decided to try to get the rt-xr Viewer software working on iOS. After all, it worked fine on Windows desktop, UWP (Windows MR), macOS and Android, so why not? However, I expected endless trouble with the Manifold library but, as it turned out, getting it to work on iOS was trivial. Once again, Unity and .NET magic came together so that I didn't have to do too much work. In fact, the hardest part was working out how to sort out microphone permission, and even that wasn't too hard – this thread certainly helped. Avatar pose sharing, audio sharing, proxy objects, video and sensor feeds all work perfectly.
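
For reference, the basic Unity-side pattern for microphone permission looks something like the sketch below (the actual fix from that thread may differ); on iOS, a microphone usage description also has to end up in the generated Xcode project's Info.plist or the request will fail.

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch: ask for microphone access before starting capture.
// On iOS, the Info.plist in the generated Xcode project must also
// contain a microphone usage description string.
public class MicPermission : MonoBehaviour
{
    IEnumerator Start()
    {
        yield return Application.RequestUserAuthorization(UserAuthorization.Microphone);

        if (Application.HasUserAuthorization(UserAuthorization.Microphone))
        {
            // 16 kHz mono loop is plenty for voice sharing
            AudioClip clip = Microphone.Start(null, true, 1, 16000);
            Debug.Log("Microphone capture started: " + (clip != null));
        }
        else
        {
            Debug.LogWarning("Microphone permission denied");
        }
    }
}
```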

The nice thing now is that most (if not all) of the further development is intrinsically multi-platform.

Sentient space sharing avatars with Windows desktop, Windows Mixed Reality and Android apps


One of the goals of the rt-ai Edge system is that users can use whatever device they have available to interact with the system and extract value from it. Unity is a tremendous help here, given that Unity apps can be run on pretty much everything. The main task was integration with Manifold so that all apps can receive and interact with everything else in the system. Manifold currently supports Windows, UWP, Linux, Android and macOS. iOS is a notable absentee and will hopefully be added at some point in the future. However, I see Android support as more significant as it also leads to support for multiple MR headsets.

The screen shot above and video below show three instances of the rt-ai viewer apps running on Windows desktop, Windows Mixed Reality and Android interacting in a shared sentient space. Ok, so the avatars are rubbish (I call them Sad Robots) but that’s just a detail and can be improved later. The wall panels are receiving sensor and video data from ZeroSensors via an rt-ai Edge stream processing network while the light switch is operated via a home automation server and Insteon.

Sharing is mediated by a SharingServer that is part of Manifold. The SharingServer uses Manifold multicast and end-to-end services to implement scalable sharing while minimizing the load on each individual device. Ultimately, the SharingServer will also deliver the space definition file when a user enters a sentient space, along with details of any virtual objects that other users may have placed in the space. This allows a new user with a standard app to enter a space and quickly create a view of the sentient space consistent with that of existing users.

While this is all kind of fun, the more interesting thing is when this is combined with a HoloLens or similar MR headset. The MR headset user in a space would see any VR users in the space represented by their avatars. Likewise, VR users in a space would see avatars representing the MR users in the space. The idea is to get as close to a telepresent experience for VR users as possible without very complex setups. It would be much nicer to use Holoportation, but that would require every room in the space to have a very complex and expensive setup, which really isn't the point. The idea is to make it very easy and low cost to implement an rt-ai Edge based sentient space.

Still lots to do, of course. One big thing is audio. Another is representing interaction devices (pointers, motion controllers etc.) to all users. Right now, each app just sends out the camera transform to the SharingServer, which then distributes it to all other users. This will be extended to include PCM audio chunks and transforms for interaction devices so that everyone will be able to construct a meaningful scene. Each user will receive the audio stream from every other user. The reason for this is that each individual audio stream can then be attached to that user's avatar, giving a spatialized sound effect using Unity's capabilities (that's the hope, anyway). Another very important point is that the apps behave differently depending on whether they are running on VR-type or AR/MR-type devices. In the latter case, the walls and related objects are not drawn and only their colliders are instantiated, although virtual objects and avatars will still be visible. Obviously AR/MR users want to see the real walls, light switches etc., not the virtual representations. However, they will still be able to interact in exactly the same way as VR users.
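
To make the sharing traffic a little more concrete, here is a hypothetical sketch of the kind of per-user update each app could send; the field names, the use of JSON and the transport hook are illustrative only, not the actual SharingServer wire format.

```csharp
using System;
using UnityEngine;

// Hypothetical pose/audio update. Field names are illustrative only,
// not the actual SharingServer wire format.
[Serializable]
public class SharingUpdate
{
    public string userId;
    public Vector3 headPosition;             // camera transform, world space
    public Quaternion headRotation;
    public Vector3[] controllerPositions;    // interaction devices (pointers, motion controllers)
    public Quaternion[] controllerRotations;
    public float[] audioSamples;             // PCM samples captured since the last update
}

public class PoseSender : MonoBehaviour
{
    public Camera headCamera;

    // The actual transport is Manifold; represented here by a placeholder delegate.
    public Action<string> send;

    void Start()
    {
        // Send the camera transform at roughly 10 Hz
        InvokeRepeating("SendUpdate", 0f, 0.1f);
    }

    void SendUpdate()
    {
        var update = new SharingUpdate
        {
            userId = SystemInfo.deviceUniqueIdentifier,
            headPosition = headCamera.transform.position,
            headRotation = headCamera.transform.rotation
        };
        send?.Invoke(JsonUtility.ToJson(update));
    }
}
```

On the receiving side, attaching an AudioSource to each avatar with spatialBlend set to 1.0 is what should give the spatialized effect.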

Controlling the real world using Windows Mixed Reality, Manifold, rt-ai Edge and Insteon

Having now constructed a simple walk-around model of my office and another room, it was time to start work on the interaction side of things. I have an Insteon switch controlling some of the lights in my office and this seemed like an obvious target. Manifold now has a home automation server app (HAServer), based on one from an earlier project. This allows individual Insteon devices to be addressed by user-friendly names using JSON over Manifold's end-to-end datagram service. Light switches can now be specified in the Unity rtXRView space definition file and linked to the control interface of the HAServer.
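
For illustration, a control exchange of this kind might look something like the sketch below; the device naming and field names are hypothetical, since the actual HAServer message schema isn't reproduced here.

```csharp
using System;
using UnityEngine;

// Hypothetical control message for an HAServer-style JSON interface.
// The real HAServer schema may differ; this just shows the shape of the idea:
// address a device by friendly name and request a new state.
[Serializable]
public class DeviceControlRequest
{
    public string device;   // user-friendly name, e.g. "office-lights"
    public string action;   // "on" or "off"
    public int level;       // 0-255 for dimmers, ignored otherwise
}

public static class LightSwitchControl
{
    // Build the JSON payload that would be sent over the end-to-end
    // datagram service (the transport itself is omitted here).
    public static string ToggleJson(string deviceName, bool on)
    {
        var request = new DeviceControlRequest
        {
            device = deviceName,
            action = on ? "on" : "off",
            level = on ? 255 : 0
        };
        return JsonUtility.ToJson(request);
    }
}
```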

The screen capture above and video below were made using a Samsung Odyssey headset and motion controllers. The light switch specification causes a virtual light switch to be placed, ideally exactly where the real light switch happens to be. Then, by pointing at the light switch with the motion controller and clicking, the light can be turned on and off. The virtual light switch is gray when the light is off and green when it is on. If the real switch is operated by some other means, the virtual light switch will reflect this as the HAServer broadcasts state change updates on a regular basis. It’s nice to see that the light sensor on the ZeroSensor responds appropriately to the light level too. Technically this light switch is a dimmer – setting an intermediate level is a TODO at this point.
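
The switch behaviour itself is simple enough that a short sketch captures it. This is illustrative rather than the actual rtXRView code: the click plumbing (motion controller ray, clicker and so on) is assumed to call OnClicked(), and the state-change transport is left as a placeholder delegate.

```csharp
using UnityEngine;

// Hypothetical virtual light switch behaviour: gray when off, green when on.
// Clicking requests a state change; broadcast updates from the home
// automation server keep the colour in sync if the real switch is used.
public class VirtualLightSwitch : MonoBehaviour
{
    public Renderer switchRenderer;
    public System.Action<bool> requestStateChange;   // hooked up to the control transport

    private bool isOn;

    // Called by whatever pointer/click mechanism the platform provides
    // (motion controller ray, HoloLens clicker, mouse...).
    public void OnClicked()
    {
        requestStateChange?.Invoke(!isOn);
    }

    // Called when a state broadcast arrives from the server.
    public void ApplyState(bool on)
    {
        isOn = on;
        switchRenderer.material.color = on ? Color.green : Color.gray;
    }
}
```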

An interesting aspect of this is the extent to which a remote VR user can get a sense of telepresence in a space, even if it is just a virtual representation of the real space. To make that connection more concrete, the virtual light in Unity should reflect the ambient light level as measured by the ZeroSensor. That’s another TODO…
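
One possible way to tackle that TODO, assuming the ZeroSensor reading arrives as a lux value (the range used below is a guess that would need calibrating against the real sensor):

```csharp
using UnityEngine;

// Sketch: drive a Unity Light from the ZeroSensor's ambient light reading.
public class AmbientLightMirror : MonoBehaviour
{
    public Light roomLight;
    public float maxLux = 500f;   // assumed full-brightness reading

    // Called whenever a new sensor value arrives
    public void OnLightLevel(float lux)
    {
        roomLight.intensity = Mathf.Clamp01(lux / maxLux) * 2f;  // map to a 0..2 intensity range
    }
}
```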

While this is kind of fun in the VR world, it could actually be interesting in the AR world. If the virtual light switch is placed correctly but is invisible (apart from a collider), a HoloLens user (for example) could look at a real light switch and click in order to change the state of the switch. Very handy for the terminally lazy! More useful than just this would be to annotate the switch with what it controls. For some reason, people in this house never seem to know which light switch controls what so this feature by itself would be quite handy.

A virtual walk through a sentient space with rt3DView

The screen capture above and video below are from a walk-through of a procedurally generated sentient space model with video and IoT data displays (derived from ZeroSensor data, rt-ai Edge and Manifold). This was made using rt3DView, and the actual Unity video recording was made with the aid of this very nice Unity store asset.

The idea of this model is that it reflects the major features of the real sentient space so that users of VR and AR can interact correctly. For example, an AR headset wearer in one of the rooms would also see the displays on the equivalent physical wall. This model is pretty basic but obviously a lot more bling could be added to get further along the road to realism. Plus I made no attempt to sort out the exterior for this test.

Now that the basics are working and the XR world is fully coupled to the rt-ai Edge design that is the real world element of the sentient space, the focus will move to more interaction. Instantiating new objects, positioning objects, real-time sharing of camera poses leading to avatars… The list is endless.


Using Windows Mixed Reality to visualize sentient spaces with rtXRView

The Windows Mixed Reality version of 3DView is now working nicely. I had a few problems with my Windows development PC, which is a few years old and didn't have adequate USB ports. In the end, this PCI-e USB 3.1 card solved the problem; otherwise a complete upgrade might have been required. A different USB 3.0 card did not work, however.

Hopefully this is the last time that I see the displays all lined up like that. The space modeling software is coming along and soon it will be possible to model a space with a (relatively) simple procedural definition file. Potentially this could be texture mapped from a 3D scan of rooms but the simplified models generated procedurally with simple textures might well be good enough. Then it will be possible to position versions of these displays (and lots of other things) in the correct rooms.

XRView is intended to be runnable both on Windows MR headsets (I am using the Samsung Odyssey as it has a good display and built-in audio) and on HoloLens. Clearly, the VR and AR modes have to be completely different. In VR, you navigate and interact with the motion controllers and see the modeled space, whereas in AR you navigate by walking around, interact using the clicker and don't see the modeled space directly. However, the modeled space will still be there and will be used instead of the spatially mapped surfaces that the HoloLens might normally use. This means that objects placed in the model by a VR user will appear correctly positioned to AR users, and vice versa. One key advantage of using the modeled space rather than the dynamically mapped space generated by the HoloLens itself is that it is easy to add context to the surfaces using the procedural model language. Another is the ability to interwork with non-HoloLens AR headsets that can't share the HoloLens spatial map data. The procedural model becomes a platform-independent spatial mapping that "just" leaves the problem of spatial synchronization to the individual headsets.
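
A sketch of what that AR/MR behaviour amounts to in Unity terms (not the actual XRView code; how AR mode is detected is platform-specific and omitted):

```csharp
using UnityEngine;

// In AR/MR mode the modeled walls keep their colliders (so placement and
// interaction still work) but their renderers are disabled so the real
// room shows through. In VR mode everything stays visible.
public class SpaceModelVisibility : MonoBehaviour
{
    public bool arMode;

    void Start()
    {
        if (!arMode)
            return;

        foreach (Renderer r in GetComponentsInChildren<Renderer>())
            r.enabled = false;          // hide walls, floors, ceilings

        // Colliders are left enabled so virtual objects and pointers can
        // still interact with the modeled surfaces.
    }
}
```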

I am sure that there will be some fun challenges in getting spatial synchronization but that’s something for later.

Creating a procedural mesh with rectangular cutouts in Unity

As part of my ongoing work to create sentient spaces, I’d like to be able to create a procedural model of the sentient space in Unity. The idea is that the model is pretty simple – walls, floors, ceilings, windows and doors are about the extent of my ambition right now. I realized that doors and windows really required planes with rectangular cutouts and I thought it would be fun to try this from first principles. The screen capture above shows the result and it seems to work quite nicely.
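
This is not necessarily the same construction as the PuncturedPlane1 project linked below, but for reference, one very simple way to handle a single rectangular cutout is to split the wall into four border rectangles (below, above, left and right of the hole) and emit two triangles for each:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch (not the PuncturedPlane1 implementation): a wall in the
// XY plane with a single rectangular cutout, built from four border
// rectangles around the hole, two triangles each. UVs are omitted.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class PuncturedWall : MonoBehaviour
{
    public float width = 4f, height = 3f;                 // outer wall size
    public Rect hole = new Rect(1.5f, 0f, 1f, 2.1f);      // doorway, in wall coordinates

    void Start()
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();

        AddQuad(vertices, triangles, 0f, 0f, width, hole.yMin);                 // below hole
        AddQuad(vertices, triangles, 0f, hole.yMax, width, height);             // above hole
        AddQuad(vertices, triangles, 0f, hole.yMin, hole.xMin, hole.yMax);      // left of hole
        AddQuad(vertices, triangles, hole.xMax, hole.yMin, width, hole.yMax);   // right of hole

        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }

    // Adds the rectangle (x0,y0)-(x1,y1) as two triangles, skipping degenerate strips.
    static void AddQuad(List<Vector3> v, List<int> t, float x0, float y0, float x1, float y1)
    {
        if (x1 - x0 <= 0f || y1 - y0 <= 0f)
            return;

        int i = v.Count;
        v.Add(new Vector3(x0, y0, 0f));
        v.Add(new Vector3(x0, y1, 0f));
        v.Add(new Vector3(x1, y1, 0f));
        v.Add(new Vector3(x1, y0, 0f));

        // This winding makes the face normal point along -Z;
        // reverse the triangle order for the other side.
        t.AddRange(new[] { i, i + 1, i + 2, i, i + 2, i + 3 });
    }
}
```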

Procedurally generating the model means that it will be quite easy to specify the sentient space. It's just a case of measuring each room and then entering the parameters into a file. The Unity app can then read the file on startup in order to generate the model. In principle, this model definition could come from a cloud source which also supplies materials and other configuration data. This will lead to a standard Unity app that can be used in any sentient space. The model itself is really targeted at VR users of the sentient space. AR users do not need the model of course as they can see the real thing. In their case, they only need to download the data for persistent objects in the sentient space when they first enter it.
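
The definition file format isn't shown here, so the sketch below is purely hypothetical, but it illustrates the flavour of the idea: a few measured parameters per room, read at startup with Unity's JsonUtility.

```csharp
using System;
using UnityEngine;

// Hypothetical room definition and loader. The real rt-xr definition file
// format is not reproduced here; these field names are just illustrative.
[Serializable]
public class RoomDefinition
{
    public string name;
    public float width;     // metres
    public float depth;
    public float height;
    public string wallMaterial;
}

[Serializable]
public class SpaceDefinition
{
    public string spaceName;
    public RoomDefinition[] rooms;
}

public class SpaceLoader : MonoBehaviour
{
    // e.g. loaded from StreamingAssets at startup, or fetched from a server
    public TextAsset definitionFile;

    void Start()
    {
        SpaceDefinition space = JsonUtility.FromJson<SpaceDefinition>(definitionFile.text);
        foreach (RoomDefinition room in space.rooms)
            Debug.Log("Would generate room '" + room.name + "': " +
                      room.width + " x " + room.depth + " x " + room.height + " m");
    }
}
```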

The mesh generated by this code is hardly efficient, however, as it creates a lot of triangles. The next step for this code is to grow the individual triangles as much as possible to keep the triangle count down, allowing the spatial resolution to be increased as much as desired without significant impact.

The Unity project (PuncturedPlane1) is available here.