Registering virtual and real worlds with rt-xr, ARKit and Unity


One of the goals for rt-xr is to allow augmented reality users within a space to collaborate with virtual reality users who are physically outside of it, with the VR users getting a telepresent sense of being in the same space. To this end, VR users see a complete model of the space (my office in this case) including the augmentations, while physically present AR users see just the augmentations. Some examples of augmentations are virtual whiteboards and virtual sticky notes. Both AR and VR users see avatars representing the position and pose of the other users in the space.

Achieving this for AR users requires that their coordinate system correspond with that of the virtual model of the room. For iOS, ARKit goes a long way towards achieving this, so the rt-xr app for iOS has been extended to include ARKit and work in AR mode. The screen capture above shows how the coordinate systems are synced. A known location in physical space (in this case, the center of the circular control of the fan controller) is selected by touching the iPad screen at the exact center of the control. This fixes the position offset. To avoid needing multiple control points, the app is currently started in the correct pose so that the yaw rotation is zero relative to the model origin. It is pretty quick and easy to do. The video below shows the process and the result.

After starting the app in the correct orientation, the user is free to move around before tapping on the control point. Once that’s done, the rt-xr part of the app starts up and loads the virtual model of the room. For this test, the complete model is being shown (i.e. as VR users would see it rather than AR users) although in real use only the augmentations would be visible; the idea here was to see how the windows lined up. The results are not too bad, all things considered, although moving or rotating too fast can cause some drift. However, collaborating using augmentations can tolerate some offset, so this should not be a major problem.
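To make the registration step concrete, here is a minimal sketch of the underlying arithmetic, written with three.js-style vector math rather than the actual Unity code in the rt-xr app; the names (registerModel, modelRoot, controlPointInModel, tappedWorldPosition) and the example coordinates are hypothetical.

// Minimal sketch of single-point registration using three.js vector math.
// Once the relative yaw is zero, registering the model reduces to a pure
// translation of the model root.
import * as THREE from 'three';

// Coordinate of the physical control point (e.g. the center of the fan
// controller) expressed in the virtual model's own coordinate system.
const controlPointInModel = new THREE.Vector3(1.2, 0.9, -0.4); // hypothetical values

// tappedWorldPosition is the AR world-space point returned by a hit test
// when the user taps the physical control point on the screen.
function registerModel(
  modelRoot: THREE.Object3D,
  tappedWorldPosition: THREE.Vector3
): void {
  // Move the model root so that its control point coincides with the tapped
  // physical point (assumes the model root is unrotated and unscaled).
  const offset = tappedWorldPosition.clone().sub(controlPointInModel);
  modelRoot.position.copy(offset);
}

With a second control point, the relative yaw could in principle be recovered as well, which would remove the need to start the app in a known orientation.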

There are just a couple of augmentations in this test. One is the menu switch (the glowing M), which is used to instantiate and control augmentations. The other is a video screen showing the snowy scene from the driveway camera, with the feed generated by an rt-ai design.

The next step is to test VR and AR collaboration properly by displaying the correct AR scene in the iOS app. Since VR collaboration has worked for some time, extending it to AR users should not be too hard.

Speeding up ARKit development with Unity ARKit Remote

Anything that speeds up the development cycle is interesting, and the Unity ARKit Remote avoids having to go through an Xcode build every time around the loop. Provided the app can be run in the Editor, any changes to objects or scripts can be tested very quickly. The iPhone (in this case) runs a special remote app that passes ARKit data back to the app running in the Editor. You don’t see any of the Unity content on the phone itself, just the camera feed; the composited frames are shown in the Editor window, as above.

Using ARKit with ExpoKit, React Native, three.js and soon (hopefully) WebRTC


This rather unimpressive test scene, captured from an iPhone, is actually quite interesting. It is derived from a simple test app using Expo, which makes it easy to use React Native, ARKit and three.js to generate native iOS apps (and Android apps, although not with ARKit of course). Expo provides a nice environment where a standard client app supports rapid development of JavaScript apps on top of the underlying native support. This test app worked well in Expo within a very short space of time.
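As a rough illustration of the shape of such an app, here is a minimal sketch of a React Native component that renders a three.js scene into an Expo GL view. The ARKit camera background and hit testing are omitted, and since the Expo AR API has changed across SDK versions, the expo-gl and expo-three calls here should be treated as assumptions rather than a definitive recipe.

// Minimal Expo + three.js sketch (TypeScript). Renders a spinning cube only;
// the ARKit session and camera feed glue are intentionally left out.
import React from 'react';
import * as THREE from 'three';
import { GLView } from 'expo-gl';
import { Renderer } from 'expo-three'; // assumed expo-three WebGLRenderer wrapper

export default function TestScene() {
  const onContextCreate = (gl: any) => {
    const width = gl.drawingBufferWidth;
    const height = gl.drawingBufferHeight;

    const renderer = new Renderer({ gl });
    renderer.setSize(width, height);

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(70, width / height, 0.01, 1000);
    camera.position.z = 2;

    // Placeholder content; in the AR version the camera pose would be driven
    // by ARKit instead of being fixed.
    const cube = new THREE.Mesh(
      new THREE.BoxGeometry(0.5, 0.5, 0.5),
      new THREE.MeshNormalMaterial()
    );
    scene.add(cube);

    const animate = () => {
      requestAnimationFrame(animate);
      cube.rotation.y += 0.01;
      renderer.render(scene, camera);
      gl.endFrameEXP(); // tell Expo the frame is complete
    };
    animate();
  };

  return <GLView style={{ flex: 1 }} onContextCreate={onContextCreate} />;
}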

The only problem is that I also want to support WebRTC in the app. There is a React Native WebRTC implementation but, as far as I can tell, it requires that the app be detached from Expo to ExpoKit so that the native module can be included in the Xcode project. Unfortunately, that didn’t work out of the box, as AR support didn’t seem to be included in the automatically generated project.

Including ARKit support requires modifying the Podfile in the project’s ios directory. The first section should look like this:

source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '9.0'
target 'test' do
  pod 'ExpoKit',
   :git => "http://github.com/expo/expo.git",
   :tag => "ios/2.0.3",
   :subspecs => [
     "Core",
     "CPP",
     "AR"
   ]
...

Basically, “AR” is added as an extra subspec. After re-running pod install in the ios directory to pick up the change, ARKit seems to work quite happily with ExpoKit.

ZenFone AR – Tango and Daydream together

The ZenFone AR is a potentially very interesting device, combining Tango for spatial mapping and Daydream capability for VR headset use in one package. This is a step up from the older Phab 2 Pro Tango phone in that it can also be used with Daydream (and it looks like a neater package). Adding Tango to Daydream means that it is possible to do inside-out spatial tracking in a completely untethered VR device. It should be a step up from ARKit in its current form, which, from what I understand, relies on just inertial and VSLAM tracking. Still, the ability for ARKit to be used with existing devices is a massive advantage.

Maybe in the end the XR market will divide into applications that don’t need tight spatial locking (where standard devices can be used) and those that do (demanding some form of inside-out tracking).