Multi-platform interaction styles for rt-xr and Unity

The implementation of sticky notes in rt-xr opened a whole can of worms, but really it just forced the development of a set of capabilities that will be needed for the general case, where occupants of a sentient space can download assets from anywhere, instantiate them in a space and then interact with them. In particular, being able to create a sticky note, position it and add text to it on all the supported platforms (Windows and macOS desktop, Android, iOS and Windows Mixed Reality) required a surprising amount of work. I decided to standardize on a three-button mouse model and map the various interaction styles into mouse events. This means that the bulk of the code doesn't need to care about the interaction style (mouse, motion controller, touch screen etc.) as all the complexity is housed in one script.
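As a very rough sketch of that idea (the names here are illustrative, not the actual rt-xr code), each platform-specific input script could translate its native events into a small set of virtual mouse buttons that the rest of the code consumes:

```csharp
using UnityEngine;

// Hypothetical abstraction: object scripts only ever see these virtual buttons,
// never the platform-specific input that produced them.
public enum VirtualButton
{
    Left,   // selection and resizing
    Middle, // grab and positioning
    Right   // object menu
}

public interface IInteractionSource
{
    bool ButtonHeld(VirtualButton button);    // true while the virtual button is down
    bool ButtonPressed(VirtualButton button); // true on the frame the button goes down
    Ray PointerRay { get; }                   // ray used to raycast at objects in the scene
}
```

A sticky note or wall panel then only queries an IInteractionSource, and the single platform script decides whether a mouse click, a trigger pull or a multi-finger touch is behind it.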

The short video below shows this in operation on a Windows desktop.

It ended up running a bit fast but that was due to the video recorder setup – I can’t really do things that fast!

I am still just using opaque devices – where is my HoloLens 2 or Magic Leap?!!! However, things should map across pretty well. Note how the current objects are glued to the virtual walls. Using MR devices, the spatial maps would be used for raycasting so that the objects would be glued to the real walls. I do need to add a mode where you can pull things off walls and position them arbitrarily in space but that’s just a TODO right now.

What doesn’t yet work is sharing of these actions. When an object is moved, that move should be visible to all occupants of the space. Likewise, when a new object is created or text updated on a sticky note, everyone should see that. That’s the next step, followed by making all of this persistent.

Anyway, here is how interaction works on the various platforms.

Windows and macOS desktop

For Windows, the assumption is that a three-button (scroll wheel) mouse is used. The middle button is used to grab and position objects, the right button opens the menu for an object, and the left button is used for selection and resizing. On the Mac, which typically lacks a middle button, holding down the Command key maps the left button onto the middle button.
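A minimal sketch of the Command-key workaround might look like this, assuming Unity's legacy Input class (the real script presumably folds this into the general interaction handler):

```csharp
using UnityEngine;

public static class DesktopMouse
{
    // Treats Command + left button as a middle-button press on macOS,
    // since most Mac mice and trackpads have no physical middle button.
    public static bool MiddleButtonHeld()
    {
        if (Input.GetMouseButton(2))   // real middle button (0 = left, 1 = right, 2 = middle)
            return true;

#if UNITY_STANDALONE_OSX
        bool command = Input.GetKey(KeyCode.LeftCommand) || Input.GetKey(KeyCode.RightCommand);
        return command && Input.GetMouseButton(0);
#else
        return false;
#endif
    }
}
```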

Navigation is via the SpaceMouse on both platforms.

Windows Mixed Reality

The motion controllers have quite a few controls and buttons available. I am using the grab button to grab and position objects. The trigger is used for selection and resizing, while the menu button brings up the object menu. Pointing at a sticky note and pressing the trigger causes the virtual keyboard to appear.
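As a sketch only (the actual app was built on the Windows MR toolchain of the time, so the real code almost certainly uses a different input path), the same mapping could be expressed with Unity's XR InputDevices API:

```csharp
using UnityEngine;
using UnityEngine.XR;

public class MotionControllerMapper : MonoBehaviour
{
    void Update()
    {
        InputDevice hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        if (!hand.isValid)
            return;

        bool grab, select, menu;
        hand.TryGetFeatureValue(CommonUsages.gripButton, out grab);      // grab -> middle button
        hand.TryGetFeatureValue(CommonUsages.triggerButton, out select); // trigger -> left button
        hand.TryGetFeatureValue(CommonUsages.menuButton, out menu);      // menu -> right button

        // These booleans would then be fed into the same virtual mouse button
        // model that the desktop and touch code paths use.
    }
}
```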

Navigation uses the standard joystick-based teleport system.

Android and iOS

My solution here is a little ugly. Basically, the number of fingers used for a tap and/or hold dictates which mouse button the action maps to: a single touch means the left mouse button, two touches mean the right mouse button and three touches mean the middle button. It works, but it is pretty amusing trying to land three simultaneous touches on an object to initiate a grab on a small-screen device like a phone!
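The mapping itself is trivial; a sketch of it, using Unity's standard mouse button indices, might be:

```csharp
using UnityEngine;

public static class TouchButtonMap
{
    // Maps the current touch count to a mouse button index
    // (0 = left, 1 = right, 2 = middle), or -1 when no touch is active.
    public static int CurrentButton()
    {
        switch (Input.touchCount)
        {
            case 1: return 0;  // left: selection and resizing
            case 2: return 1;  // right: object menu
            case 3: return 2;  // middle: grab and position
            default: return -1;
        }
    }
}
```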

Navigation is via single or dual touch. A single touch and slide moves in the x and y directions, while a dual touch and slide rotates around the y axis. Since touches are also used for object interaction, navigation touches need to be made away from objects or they will be misinterpreted. There is probably a better way of doing this. In the longer term, though, see-through mode using something like ARCore or ARKit will eliminate the navigation issue, which is only a problem in VR (opaque) mode. I assume the physical occupants of a space will use see-through mode, with only remote occupants using VR mode.
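A rough sketch of the navigation part (values and names are illustrative, and the real script also has to check that the touch is not over an object before treating it as navigation):

```csharp
using UnityEngine;

public class TouchNavigation : MonoBehaviour
{
    public Transform cameraRig;       // object representing the user's position in the space
    public float moveScale = 0.01f;   // metres per pixel of drag
    public float rotateScale = 0.1f;  // degrees per pixel of drag

    void Update()
    {
        if (Input.touchCount == 1)
        {
            // Single touch and slide: translate in x and y.
            Vector2 delta = Input.GetTouch(0).deltaPosition;
            cameraRig.Translate(delta.x * moveScale, delta.y * moveScale, 0f, Space.Self);
        }
        else if (Input.touchCount == 2)
        {
            // Dual touch and slide: rotate around the y axis.
            float deltaX = Input.GetTouch(0).deltaPosition.x;
            cameraRig.Rotate(0f, deltaX * rotateScale, 0f, Space.World);
        }
    }
}
```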

I haven't been using ARCore or ARKit yet, mainly because they haven't seemed good enough to create a spatial map that is useful for rt-xr. This is changing (ARKit 2, for example), but the question is whether they can cope with multiple rooms. For example, objects behind a real wall should not be visible – they need to be occluded by the spatial map. The HoloLens can do this, however, and is the best available option right now for multi-room MR with persistence.

 

Sentient space sharing avatars with Windows desktop, Windows Mixed Reality and Android apps


One of the goals of the rt-ai Edge system is that users can interact with it and extract value from it using whatever device they have available. Unity is a tremendous help here, given that Unity apps can run on pretty much everything. The main task was integration with Manifold so that all apps can receive and interact with everything else in the system. Manifold currently supports Windows, UWP, Linux, Android and macOS. iOS is a notable absentee and will hopefully be added at some point in the future. However, I see Android support as more significant, as it also leads to support for multiple MR headsets.

The screenshot above and the video below show three instances of the rt-ai viewer app running on Windows desktop, Windows Mixed Reality and Android, interacting in a shared sentient space. OK, so the avatars are rubbish (I call them Sad Robots), but that's just a detail and can be improved later. The wall panels are receiving sensor and video data from ZeroSensors via an rt-ai Edge stream processing network, while the light switch is operated via a home automation server and Insteon.

Sharing is mediated by a SharingServer that is part of Manifold. The SharingServer uses Manifold multicast and end-to-end services to implement scalable sharing while minimizing the load on each individual device. Ultimately, the SharingServer will also download the space definition file when the user enters a sentient space, and provide details of virtual objects that may have been placed in the space by other users. This allows a new user with a standard app to enter a space and quickly create a view of the sentient space consistent with what existing users see.

While this is all kind of fun, the more interesting thing is when this is combined with a HoloLens or similar MR headset. The MR headset user in a space would see any VR users in the space represented by their avatars. Likewise, VR users in the space would see avatars representing MR users. The idea is to get as close to a telepresent experience for VR users as possible without very complex setups. It would be much nicer to use Holoportation, but that would require every room in the space to have a very complex and expensive setup, which really isn't the point. The idea is to make it very easy and low cost to implement an rt-ai Edge based sentient space.

Still lots to do, of course. One big thing is audio. Another is representing interaction devices (pointers, motion controllers etc.) to all users. Right now, each app just sends its camera transform to the SharingServer, which then distributes it to all other users. This will be extended to include PCM audio chunks and transforms for interaction devices so that everyone will be able to construct a meaningful scene. Each user will receive the audio stream from every other user; the reason is that each individual audio stream can then be attached to that user's avatar, giving a spatialized sound effect using Unity's audio capabilities (that's the hope, anyway).

Another very important point is that the apps work differently depending on whether they are running on VR-type or AR/MR-type devices. In the latter case, the walls and related objects are not drawn and only their colliders are instantiated, although virtual objects and avatars are still visible. Obviously AR/MR users want to see the real walls, light switches and so on, not the virtual representations. However, they will still be able to interact in exactly the same way as a VR user.
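To make the sharing traffic concrete, here is a hedged sketch of what a per-user update could contain once audio and interaction devices are added. The actual Manifold/SharingServer message format is not described above, so every field name here is an assumption:

```csharp
using UnityEngine;

[System.Serializable]
public class SharingUpdate
{
    public string userId;                 // which occupant this update describes
    public Vector3 headPosition;          // camera transform (already shared today)
    public Quaternion headRotation;
    public Vector3[] devicePositions;     // future: pointers, motion controllers etc.
    public Quaternion[] deviceRotations;
    public byte[] audioChunk;             // future: PCM audio for spatialized voice
}
```

On the receiving side, the audio chunk for each user would be pushed into an AudioSource attached to that user's avatar with spatialBlend set to 1, which is what should give the spatialized effect.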

Using Windows Mixed Reality to visualize sentient spaces with rtXRView

The Windows Mixed Reality version of 3DView is now working nicely. I had a few problems with my Windows development PC, which is a few years old and didn't have adequate USB ports. In the end, this PCI-e USB 3.1 card solved the problem; otherwise a complete upgrade might have been required. A different USB 3.0 card did not work, however.

Hopefully this is the last time that I see the displays all lined up like that. The space modeling software is coming along, and soon it will be possible to model a space with a (relatively) simple procedural definition file. Potentially this could be texture-mapped from a 3D scan of the rooms, but the simplified models generated procedurally with simple textures might well be good enough. Then it will be possible to position versions of these displays (and lots of other things) in the correct rooms.

XRView is intended to be runnable both on Windows MR headsets (I am using the Samsung Odyssey as it has a good display and built-in audio) and on the HoloLens. Clearly, the VR and AR modes have to be completely different. In VR, you navigate and interact with the motion controllers and see the modeled space, whereas in AR you navigate by walking around, interact using the clicker and don't see the modeled space directly. However, the modeled space will still be there and will be used instead of the spatially mapped surfaces that the HoloLens might normally use. This means that objects placed in the model by a VR user will appear correctly positioned to AR users, and vice versa. One key advantage of using the modeled space rather than the dynamically mapped space generated by the HoloLens itself is that it is easy to add context to the surfaces using the procedural model language. Another is the ability to interwork with non-HoloLens AR headsets that cannot share the HoloLens spatial map data. The procedural model becomes a platform-independent spatial map that "just" leaves the problem of spatial synchronization to the individual headsets.
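A sketch of the AR-mode behaviour, assuming the modeled space lives under a single root object (this is an assumption about how it could be done, not the actual rtXRView code):

```csharp
using UnityEngine;

public static class SpaceModelUtil
{
    // In AR/MR mode the real walls are visible through the headset, so the
    // modeled surfaces are hidden but their colliders are kept for raycasting
    // and object placement, exactly as in VR mode.
    public static void ConfigureForARMode(GameObject spaceModelRoot)
    {
        foreach (Renderer r in spaceModelRoot.GetComponentsInChildren<Renderer>())
            r.enabled = false;
        // Colliders are left enabled so objects can still be glued to the
        // (invisible) modeled walls.
    }
}
```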

I am sure that there will be some fun challenges in getting spatial synchronization but that’s something for later.

3DView: visualizing environmental data for sentient spaces

The 3DView app I mentioned in a previous post is moving forward nicely. The screen capture shows the app displaying real-time data from four ZeroSensors, with the data arriving from an rt-ai Edge stream processing network via Manifold. The app creates a video window and a sensor display panel for each physical device and then updates the data whenever new messages are received from the ZeroSensor.
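The per-device panel update is straightforward; a sketch of the idea (the real ZeroSensor message schema and Manifold receive path are not shown here, so the fields and units are assumptions):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class SensorPanel : MonoBehaviour
{
    public Text temperatureText;
    public Text humidityText;
    public Text lightText;

    // Called whenever a decoded sensor message arrives for this ZeroSensor.
    public void OnSensorData(float temperature, float humidity, float light)
    {
        temperatureText.text = string.Format("Temp: {0:F1} C", temperature);
        humidityText.text = string.Format("Humidity: {0:F0} %", humidity);
        lightText.text = string.Format("Light: {0:F0} lux", light);
    }
}
```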

This is the rt-ai Edge part of the design. All the blocks are synth modules, which speeds up design replication. The four ZeroManifoldSynth modules each contain two PutManifold stream processing elements (SPEs) to inject the video and sensor streams into Manifold. The ZeroSynth modules contain the video and sensor capture SPEs. The ZeroManifoldSynth modules all run on the default node, while the ZeroSynth modules run directly on the ZeroSensors themselves. As always with rt-ai Edge, deployment of new designs or design changes is a one-click action, making this kind of distributed system development much more pleasant.

The Unity graphics elements are basic, as I take the standard programmer's view: they can always be upgraded later by somebody with artistic talent; the key is the underlying functionality. The next step is to hang these displays (and other, much more interesting elements) on the walls of a 3D model of the sentient space. Ultimately the idea is that people can walk through the sentient space using AR headsets and see the elements persistently positioned in it. In addition, users of the sentient space will be able to instantiate and position elements themselves and also interact with them.

Even more interesting than that is the ability for the sentient space to autonomously instantiate elements in the space based on perceived user actions. This is really the goal of the sentient space concept – to have the sentient space work with the occupants in a natural way (apart from needing an AR headset of course!).

For the moment, I am going to develop this in VR rather than AR. The HoloLens is the only available AR device that can support the level of persistence required but I’d rather wait for the rumored HoloLens 2 or the Magic Leap One (assuming it has the required multi-room persistence capability).

My day with Windows Insider Preview, Unity 2017.2.0b8 and Windows Mixed Reality Headset

Yes, I am drinking a beer right now – it has been a long day. Mostly I seemed to spend it nursing Windows through its upgrade to the latest Insider Preview (16257) and begging the Insider Preview website to let me download the Insider Preview SDK, which seemed to require all kinds of things being done right and the wind blowing in the right direction at the same time.

The somewhat bizarre screen capture above is from a scene I created in the default room. The hologram figures are animated, incidentally. What I mostly failed to do was get existing HoloLens apps to run on the MR headset, as Unity kept reporting errors when generating the Visual Studio project for the apps, after having performed every other stage of the build process correctly. Very odd. I did manage to get a very simple scene with a single cube working OK, however.

Then I went back to the production version of Windows (15063) and tried things there. Ironically, my HoloLens app worked (apart from interaction) on the MR headset using Unity 5.6.2.

Clearly this particular Rome wasn’t built in a day – a lot more investigation is needed.

Latest fun thing in the office: a Windows Mixed Reality Headset

Just got my hands on an HP Windows Mixed Reality headset. I am now setting up my Windows dev machine to dual boot so that I can have a standard production Windows version for normal work and an Insider Program fast-ring version to work with the headset. Based on experience, setting up the Insider Preview could take a while.

Second version of HoloLens HPU – separating mixed reality from the cloud

Some information from Microsoft here about the next generation of HoloLens. I am a great fan of only using the cloud to enhance functionality when there’s no other choice. This is especially relevant to MR devices where internet connectivity might be dodgy at best or entirely non-existent depending on the location. Putting some AI inference capability right on the device means that it can be far more capable in stand-alone mode.

There seems to be the start of a movement towards putting serious but low-power AI capability in wearable devices. The Movidius VPU is a good example of this kind of technology, and probably every CPU manufacturer is on a path to include inference engines in future generations.

While the HoloLens could certainly use updating in many areas (WiFi capability, cellular connectivity, more general-purpose processing power, support for real-time occlusion), adding an inference engine is an extremely interesting step.