Scaling dynamic sentient spaces to multiple locations

One of the fundamental concepts of the rt-xr and rt-ai Edge projects is that it should be possible to experience a remote sentient space in a telepresent way. The diagram above shows the idea. The main sentient space houses a ManifoldNexus instance that supplies service discovery, subscription and message passing functions to all of the other components. Not shown is the rt-ai Edge component that deals with real-time intelligent processing, both reactive and proactive, of real-world sensor data and controls. However, rt-ai Edge interconnects with ManifoldNexus, making data and control flows available in the Manifold world.

Co-located with ManifoldNexus are the various servers that implement the visualization part of the sentient space. The SpaceServer allows occupants of the space to download a space definition file that is used to construct a model of the space. For VR users, this is a virtual model of the space that can be used remotely. For AR and MR users, only augmentations and interaction elements are instantiated so that the real space can be seen normally. The SpaceServer also houses downloadable asset bundles that contain augmentations that occupants have placed around the space. This is why it is referred to as a dynamic sentient space – as an occupant either physically or virtually enters the space, the relevant space model and augmentations are downloaded. Any changes that occupants make get merged back to the space definition and model repository to ensure that all occupants are synced with the space correctly. The SharingServer provides real-time transfer of pose and audio data. The Home Automation server provides a way for the space model to be linked with networked controls that physically exist in the space.
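
As an illustration of the asset bundle side of this, a viewer could pull an augmentation bundle using Unity's standard UnityWebRequestAssetBundle API along the lines of the sketch below. The URL and the AugmentationLoader class name are placeholders for illustration, not actual rt-xr code.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical example: fetch an augmentation asset bundle from a SpaceServer
// URL and instantiate its first prefab. The URL and class name are placeholders.
public class AugmentationLoader : MonoBehaviour
{
    public string bundleUrl = "http://spaceserver.local:8080/bundles/stickynote";

    IEnumerator Start()
    {
        using (UnityWebRequest request = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("Bundle download failed: " + request.error);
                yield break;
            }

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);

            // Instantiate the first prefab in the bundle at the origin as a placeholder pose
            string assetName = bundle.GetAllAssetNames()[0];
            GameObject prefab = bundle.LoadAsset<GameObject>(assetName);
            Instantiate(prefab, Vector3.zero, Quaternion.identity);

            bundle.Unload(false);
        }
    }
}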

When everything is on a single LAN, things just work. New occupants of a space auto-discover sentient spaces available on that LAN and, via a GUI on the generic viewer app, can select the appropriate space. Normally there would be just one space but the system allows for multiple spaces on a single LAN if required. The issue then is how to connect VR users at remote locations. As shown in the diagram, ManifoldNexus has the ability to use secure tunnels between regions. This does require that one of the gateway routers has a port forwarding entry configured but otherwise requires no configuration other than security. There can be several remote spaces if necessary and a tunnel can support more than one sentient space. Once the Manifold infrastructure is established, integration is total in that auto-discovery and message switching behave for remote occupants in exactly the same way as for local occupants. What is also nice is that multicast services can be replicated for remote users in the remote LAN so data never has to be sent more than once on the tunnel itself. This optimization is implemented automatically within ManifoldNexus.

Dynamic sentient spaces (where a standard viewer is customized for each space by the servers) are now basically working on the five platforms (Windows desktop, macOS, Windows Mixed Reality, Android and iOS). Persistent ad-hoc augmentations using downloadable assets are the next step in this process. I will probably start with the virtual sticky note – this is where an occupant can leave a persistent message for other occupants. This requires a lot of the general functionality of persistent dynamic augmentations and is actually kind of useful for a change!

Accessing the iOS WiFi IP address within a Unity app

At the moment, Manifold requires that a client supplies an appropriate IP address (although I might change this in the future). Mostly it is pretty easy to do but the .NET way of doing things (using Dns.GetHostEntry()) didn’t seem to pick up the WiFi IP address on an iPad. After wasting a lot of time, I decided to go back to basics and create a native plugin.
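
For reference, the sketch below shows roughly the kind of Dns.GetHostEntry() lookup I mean; the NetHelpers class is just an illustration, and this is the sort of approach that failed to return the WiFi address on the iPad.

using System.Net;
using System.Net.Sockets;

// Roughly the kind of lookup meant above: fine on most platforms but it
// didn't return the WiFi address on the iPad in this case.
public static class NetHelpers
{
    public static string GetLocalIPv4Address()
    {
        foreach (IPAddress address in Dns.GetHostEntry(Dns.GetHostName()).AddressList)
        {
            if (address.AddressFamily == AddressFamily.InterNetwork)
                return address.ToString();
        }
        return null;
    }
}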

The basis of the code comes from here – it just needed the right wrapping to get it to work with Unity. I will be the first to admit that I know nothing about native iOS coding but, following the Bonjour example here, the code below seemed to work just fine when placed in the Assets/Plugins/iOS directory of the project.

IPAddress.h:

#import <Foundation/Foundation.h>

@interface IPAddressDelegate : NSObject

- (NSString *)getAddress;
@end

IPAddress.m:

#include <ifaddrs.h>
#include <arpa/inet.h>

#import "IPAddress.h"
@implementation IPAddressDelegate

- (id)init
{
    self = [super init];
    return self;
}

- (NSString *)getAddress {
    NSString *address = @"error";
    struct ifaddrs *interfaces = NULL;
    struct ifaddrs *temp_addr = NULL;
    int success = 0;
    success = getifaddrs(&interfaces);
    if (success == 0) {
        temp_addr = interfaces;
        while(temp_addr != NULL) {
            // ifa_addr can be NULL for some interfaces so check before dereferencing
            if(temp_addr->ifa_addr != NULL && temp_addr->ifa_addr->sa_family == AF_INET) {
                // en0 is the WiFi interface on iOS devices
                if([[NSString stringWithUTF8String:temp_addr->ifa_name] isEqualToString:@"en0"]) {
                    address = [NSString stringWithUTF8String:inet_ntoa(((struct sockaddr_in *)temp_addr->ifa_addr)->sin_addr)];
                }
            }
            temp_addr = temp_addr->ifa_next;
        }
    }
    // Free memory
    freeifaddrs(interfaces);
    return address;
}
@end

static IPAddressDelegate* delegateObject = nil;

char* MakeStringCopy (const char* string)
{
    if (string == NULL)
        return NULL;
    
    char* res = (char*)malloc(strlen(string) + 1);
    strcpy(res, string);
    return res;
}

const char * getLocalWifiIpAddress()
{
    if (delegateObject == nil)
        delegateObject = [[IPAddressDelegate alloc] init];
    
    return MakeStringCopy([[delegateObject getAddress] UTF8String]);
}

Using the plugin is pretty straightforward. Just add this declaration to a C# class (along with a using directive for System.Runtime.InteropServices):

	[DllImport ("__Internal")]
	private static extern string getLocalWifiIpAddress();

Then call getLocalWifiIpAddress() to get the dotted address string.
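
Putting it together, a minimal wrapper might look like the sketch below; the WifiAddress class name and the conditional compilation guards are my additions rather than part of the plugin.

using System.Runtime.InteropServices;
using UnityEngine;

// Hypothetical wrapper class; the #if guards keep the extern declaration out of
// editor and non-iOS builds.
public class WifiAddress : MonoBehaviour
{
#if UNITY_IOS && !UNITY_EDITOR
    [DllImport("__Internal")]
    private static extern string getLocalWifiIpAddress();
#endif

    void Start()
    {
#if UNITY_IOS && !UNITY_EDITOR
        Debug.Log("WiFi IP address: " + getLocalWifiIpAddress());
#endif
    }
}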

rt-xr sentient space visualization now on iOS!

I have to admit, I am in a state of shock right now. For some reason today I decided to try to get the rt-xr Viewer software working on iOS. After all, it worked fine on Windows desktop, UWP (Windows MR), macOS and Android so why not? However, I expected endless trouble with the Manifold library but, as it turned out, getting it to work on iOS was trivial. I guess Unity and .NET magic came together so I didn’t have to do too much work once again. In fact, the hardest part was working out how to sort out microphone permission and that wasn’t too hard – this thread certainly helped with that. Avatar pose sharing, audio sharing, proxy objects, video and sensor feeds all work perfectly.

The nice thing now is that most (if not all) of the further development is intrinsically multi-platform.

Streaming PCM audio from Unity on Android

The final step in adding audio support to rt-xr visualization was to make it work with Android. Supporting audio capture natively on Windows desktop and Windows UWP was relatively easy since it could all be done in C#. However, I didn’t really want to implement a native capture plugin for Android and it turns out that the Unity capture technique works pretty well, albeit with noticeable latency.

The Inspector view in the screen capture shows the idea. The MicrophoneFilter script starts up the Unity Microphone and adds it to the AudioSource. When running, the output of the AudioSource is passed to MicrophoneFilter via the OnAudioFilterRead method that gives access to the PCM stream from the microphone.

The resulting stream needs some processing, however. I am sending single channel PCM audio at 16000 samples per second on the network, whereas the output of the AudioSource is stereo, at either 16000 or 48000 samples per second depending on the platform, and uses floating point rather than 16 bit values, so the code has to convert between the two. It also needs to zero out the output of the filter, otherwise it will be picked up by the listener on the main camera, which is certainly not desirable! There is an alternative way of doing this that uses the AudioSource.clip.GetData call directly, but I had problems with that and in any case prefer the asynchronous OnAudioFilterRead callback to polling from Update or FixedUpdate. The complete MicrophoneFilter script looks like this:

using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class MicrophoneFilter : MonoBehaviour
{
    [Tooltip("Index of microphone to use")]
    public int deviceIndex = 0;

    private StatusUpdate statusUpdate;
    private bool running = false;
    private byte[] buffer = new byte[32000];
    private int scale;

    // Use this for initialization
    void Start()
    {

        AudioSource source = GetComponent<AudioSource>();

        if (deviceIndex >= Microphone.devices.Length)
            deviceIndex = 0;

        GameObject scripts = GameObject.Find("Scripts");
        statusUpdate = scripts.GetComponent<StatusUpdate>();

        int sampleRate = AudioSettings.outputSampleRate;

        // Downsampling factor for the 16000 samples per second network stream:
        // a 48000 samples per second platform keeps every third sample
        if (sampleRate > 16000)
            scale = 3;
        else
            scale = 1;

        source.clip = Microphone.Start(Microphone.devices[deviceIndex], true, 1, sampleRate);
        source.Play();
        running = true;
    }

    private void OnAudioFilterRead(float[] data, int channels)
    {
        if (!running)
            return;

        int byteIndex = 0;
        if (channels == 1) {
            for (int i = 0; i < data.Length;) {
                short val = (short)((data[i]) * 32767.0f);
                for (int offset = 0; offset < scale; offset++) {
                    if (i < data.Length) 
                        data[i++] = 0; 
                } 
                buffer[byteIndex++] = (byte)(val & 0xff); 
                buffer[byteIndex++] = (byte)((val >> 8) & 0xff);
            }
        } else {
            for (int i = 0; i < data.Length;) {
                short val = (short)((data[i] + data[i + 1]) * 32767.0f / 2.0f);
                for (int offset = 0; offset < 2 * scale; offset++) {
                    if (i < data.Length) 
                        data[i++] = 0; 
                } 
                buffer[byteIndex++] = (byte)(val & 0xff); 
                buffer[byteIndex++] = (byte)((val >> 8) & 0xff);
            }
        }
        statusUpdate.newAudioData(buffer, byteIndex);
    }
}

Note the fixed, maximum-size buffer allocation, which helps to avoid excessive garbage collection. In general, the code uses fixed, maximum-size buffers wherever possible.

The SharingServer has now been updated to generate separate feeds for VR and AR/MR users, with all user audio feeds in the VR version and just the VR headset users’ audio in the MR version. The audio update rate has also been decoupled from the avatar pose update rate, which allows pose updates to be sent at a faster rate than makes sense for audio.

Just a note on why I am using single channel 16 bit PCM at 16000 samples per second rather than sending single channel floats at 48000 samples per second, which would be a better fit in many cases. The problem is that this makes the data rate six times higher: it goes from 256kbps (16 bits × 16000 samples per second) to 1.536Mbps (32 bits × 48000 samples per second). Using uncompressed 16 bit audio and dealing with the consequences seemed like a better trade than either the higher data rate or moving to compressed audio. This decision may have to be revisited when running on real MR headset hardware, however.

rt-xr visualization with spatialized sound

An important goal of the rt-xr project is to allow MR and AR headset wearing physical occupants of a sentient space to interact as naturally as possible with virtual users in the same space. A component of this is spatialized sound, where a sound or someone’s voice appears to originate from where it should in the scene. Unity has a variety of tools for achieving this, depending on the platform.

I have standardized on 16 bit, single channel PCM at 16000 samples per second for audio within rt-xr in order to keep the implementation simple (no need for codecs) while still keeping the required bit rate down. The problem is that the SharingServer has to send all audio feeds to all users – each user needs all the other users’ feeds so that they can spatialize them correctly. If spatialized sound wasn’t required, the SharingServer could just mix them all together on some basis. Another solution is for the SharingServer to forward just the dominant speaker, but this assumes that only intermittent speakers are supported and it leads to the “half-duplex” effect where the loudest speaker blocks everyone else. Mixing them all is a lot more democratic.

Another question is how to deal with occupants in different rooms within the same sentient space. Some things (such as video) are turned off to reduce bit rate if the user isn’t in the same room as the video panel. However, it makes sense that you can hear users in other rooms at an appropriate level. The AudioSource in Unity has tools for ensuring that sound levels drop off appropriately.
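
As a sketch of the kind of AudioSource configuration this implies (the SpatializedVoice class name and the distance values are placeholders, not rt-xr's actual settings):

using UnityEngine;

// A sketch of the kind of AudioSource setup meant here: fully spatialized audio
// with distance rolloff so occupants in other rooms are heard more quietly.
// The distance values are placeholders, not rt-xr's actual settings.
[RequireComponent(typeof(AudioSource))]
public class SpatializedVoice : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1.0f;                        // fully 3D rather than 2D
        source.spatialize = true;                          // hand off to the platform spatializer plugin
        source.rolloffMode = AudioRolloffMode.Logarithmic; // natural-sounding falloff with distance
        source.minDistance = 1.0f;                         // full volume within 1m of the avatar
        source.maxDistance = 20.0f;                        // no further attenuation beyond 20m
    }
}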

Spatialized sound currently works on Windows desktop and Windows MR. The desktop version uses the Oculus spatializer as this can support 16000 samples per second. The Windows MR version uses the Microsoft HRTF spatializer which unfortunately requires 48000 samples per second so I have to upsample to do this. This does mess up the quality a bit – better upsampling is a todo.
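
For reference, the simplest way to get from 16000 to 48000 samples per second is to interpolate two extra samples between each pair of input samples, along the lines of the sketch below. This is purely illustrative rather than the code actually used in rt-xr.

// An illustrative 16000 to 48000 samples per second upsampler using linear
// interpolation - two extra samples between each pair of inputs. A sketch of
// the simple approach referred to above, not the rt-xr code.
public static class Upsampler
{
    public static float[] Upsample3x(float[] input)
    {
        float[] output = new float[input.Length * 3];
        for (int i = 0; i < input.Length; i++)
        {
            float current = input[i];
            float next = (i + 1 < input.Length) ? input[i + 1] : current;
            output[3 * i] = current;
            output[3 * i + 1] = current + (next - current) / 3.0f;
            output[3 * i + 2] = current + 2.0f * (next - current) / 3.0f;
        }
        return output;
    }
}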

Right now, the SharingServer just broadcasts a standard feed with all audio sources. Individual users filter these in two ways. First of all, they discard their own audio feed. Secondly, if the user is a physical occupant of the space, feeds from other physical occupants are omitted so as to just leave the VR user feeds. Whether or not it would be better to send customized feeds to each user is an interesting question – this could certainly be done if necessary. For example, a simple optimization would be to have two feeds – one for AR and MR users that only contains VR user audio and the current complete feed for VR users. This has the great benefit of cutting down bit rate to AR and MR users whose headsets may benefit from not having to deal with unnecessary data. In fact, this idea sounds so good that I think I am going to implement it!

Next up is getting something to work on Android. I am using native audio capture code on the two Windows platforms and something is needed for Android. There is a Unity technique using the Microphone that, coupled with a custom audio filter, might work. If not, I might have to brush up on JNI. Probably spatialized sound is going to be difficult in terms of panning. Volume rolloff with distance should work however.

Proxy objects: Unity assets that are UI extensions of remote servers

For some reason I often end up back at the analog clock for trying out new ideas. I guess it is because it is pretty trivial to operate a clock – just supply three angles. In this case, the clock is a proxy object which is in many ways just a simple extension of the system that animates the avatars for other occupants of a sentient space. A proxy object is a conventional Unity GameObject hierarchy that has certain specially named child nodes. By itself, there’s nothing special about the Unity asset part of a proxy object – it could be an asset included in the app or an asset downloaded from a server using Unity’s asset bundle system. Either way, these specially named nodes can be linked to external servers. In this case, the SharingServer generates an analog clock stream that animates the clock hands. The clock definition is contained in the space definition file that instantiates all the other parts of the scene.

In principle, interaction (i.e. sending stuff back to the remote server) can be added by using specially named nodes to attach scripts that are hard-coded in the app. I haven’t tried this yet but see no reason why it wouldn’t work. The key point is that proxy objects leverage standard scripts in the app as opposed to customized scripts for every asset.

Right now, you can modify the local scale, local position, local orientation, color and text (if associated with a TextMesh) of any of the GameObjects in an asset’s hierarchy. This could easily be extended to other things including updating a texture with a new image. For example, a virtual fireplace could be created where the flames are animated by constantly varying the textures being displayed. The system is still simplistic however as there are no mechanisms for controlling transitions (such as lerping between positions or fading between textures) but this could certainly be added without too much difficulty.

Just for reference, the analog clock stream message looks like this:

{
    "type": "proxyobject",
    "updateList": [
        {
            "name": "PO_AnalogClock_Second",
            "orientation": {
                "x": 0,
                "y": 222,
                "z": 0
            },
            "orientationValid": true
        },
        {
            "name": "PO_AnalogClock_Minute",
            "orientation": {
                "x": 0,
                "y": 342,
                "z": 0
            },
            "orientationValid": true
        },
        {
            "name": "PO_AnalogClock_Hour",
            "orientation": {
                "x": 0,
                "y": 568,
                "z": 0
            },
            "orientationValid": true
        }
    ]
}

Here the y value encodes the relevant hand angle. The hour angle can exceed 360 degrees because the system uses a 24 hour clock, but the result is the same either way: 568 degrees corresponds to roughly 18:57 (18 × 30 degrees for the hours plus about half a degree per minute), and since rotations wrap at 360 degrees the hand ends up where it would at 6:57.
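
For illustration, applying one updateList entry on the client side might look something like the sketch below; the ProxyObjectUpdater class and its methods are hypothetical, but the essential step is finding the specially named GameObject in the hierarchy and overwriting its local orientation.

using UnityEngine;

// Hypothetical sketch of applying one updateList entry: find the specially
// named node in the proxy object's hierarchy and overwrite its local rotation.
public class ProxyObjectUpdater : MonoBehaviour
{
    public void ApplyOrientation(string nodeName, Vector3 eulerAngles)
    {
        Transform node = FindDeepChild(transform, nodeName);
        if (node != null)
            node.localEulerAngles = eulerAngles;   // angles over 360 degrees wrap naturally
    }

    private static Transform FindDeepChild(Transform parent, string name)
    {
        foreach (Transform child in parent)
        {
            if (child.name == name)
                return child;
            Transform found = FindDeepChild(child, name);
            if (found != null)
                return found;
        }
        return null;
    }
}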

Sentient space sharing avatars with Windows desktop, Windows Mixed Reality and Android apps

One of the goals of the rt-ai Edge system is that users of the system can use whatever device they have available to interact and extract value from it. Unity is a tremendous help given that Unity apps can be run on pretty much everything. The main task was integration with Manifold so that all apps can receive and interact with everything else in the system. Manifold currently supports Windows, UWP, Linux, Android and macOS. iOS is a notable absentee and will hopefully be added at some point in the future. However, I perceive Android support as more significant as it also leads to multiple MR headset support.

The screen shot above and video below show three instances of the rt-ai viewer apps running on Windows desktop, Windows Mixed Reality and Android interacting in a shared sentient space. Ok, so the avatars are rubbish (I call them Sad Robots) but that’s just a detail and can be improved later. The wall panels are receiving sensor and video data from ZeroSensors via an rt-ai Edge stream processing network while the light switch is operated via a home automation server and Insteon.

Sharing is mediated by a SharingServer that is part of Manifold. The SharingServer uses Manifold multicast and end to end services to implement scalable sharing while minimizing the load on each individual device. Ultimately, the SharingServer will also download the space definition file when the user enters a sentient space and also provide details of virtual objects that may have been placed in the space by other users. This allows a new user with a standard app to enter a space and quickly create a view of the sentient space consistent with existing users.

While this is all kind of fun, the more interesting thing is when this is combined with a HoloLens or similar MR headset. The MR headset user in a space would see any VR users in the space represented by their avatars. Likewise, VR users in a space would see avatars representing MR users in the space. The idea is to get as close to a telepresent experience for VR users as possible without very complex setups. It would be much nicer to use Holoportation but that would require every room in the space to have a very complex and expensive setup, which really isn’t the point. The idea is to make it very easy and low cost to implement an rt-ai Edge based sentient space.

Still lots to do of course. One big thing is audio. Another is representing interaction devices (pointers, motion controllers etc) to all users. Right now, each app just sends out the camera transform to the SharingServer which then distributes this to all other users. This will be extended to include PCM audio chunks and transforms for interaction devices so that everyone will be able to create a meaningful scene. Each user will receive the audio stream from every other user. The reason for this is that each individual audio stream can then be attached to the avatar for that user, giving a spatialized sound effect using Unity capabilities (that’s the hope anyway). Another very important thing is that the apps work differently depending on whether they are running on VR type devices or AR/MR type devices. In the latter case, the walls and related objects are not drawn and only their colliders are instantiated, although virtual objects and avatars will still be visible. Obviously AR/MR users want to see the real walls, light switches etc, not the virtual representations. However, they will still be able to interact in exactly the same way as a VR user.
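
As an illustration of the camera transform sharing described above, a minimal pose sender might look like the sketch below. The 10Hz update rate and the SendPose() placeholder are assumptions; in the real app the pose is serialized and handed to Manifold for distribution by the SharingServer.

using UnityEngine;

// Illustrative sketch of camera transform sharing. The 10Hz rate and the
// SendPose() placeholder are assumptions; in the real app the pose would be
// serialized and handed to Manifold for distribution by the SharingServer.
public class PoseSharer : MonoBehaviour
{
    public float updateInterval = 0.1f;
    private float nextUpdate;

    void Update()
    {
        if (Time.time < nextUpdate)
            return;
        nextUpdate = Time.time + updateInterval;

        Transform cam = Camera.main.transform;
        SendPose(cam.position, cam.rotation);
    }

    private void SendPose(Vector3 position, Quaternion rotation)
    {
        // Placeholder for the Manifold end-to-end send
        Debug.LogFormat("Pose {0} {1}", position, rotation.eulerAngles);
    }
}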