Creating a procedural mesh with rectangular cutouts in Unity

As part of my ongoing work to create sentient spaces, I’d like to be able to create a procedural model of the sentient space in Unity. The idea is that the model is pretty simple – walls, floors, ceilings, windows and doors are about the extent of my ambition right now. I realized that doors and windows really require planes with rectangular cutouts, and I thought it would be fun to try this from first principles. The screen capture above shows the result and it seems to work quite nicely.
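For anyone curious about the basic approach, here is a minimal sketch (not the actual PuncturedPlane1 code – the class name, cell size and doorway rectangle are just examples). It builds a wall as a grid of small quads and simply skips any cell whose centre falls inside the cutout:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hedged sketch (not the actual PuncturedPlane1 code): build a wall in the XY
// plane as a grid of small quads, skipping any cell whose centre falls inside
// the rectangular cutout (a door or window).
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class PuncturedWall : MonoBehaviour
{
    public float width = 4f;
    public float height = 3f;
    public float cellSize = 0.1f;                         // spatial resolution of the grid
    public Rect cutout = new Rect(1.5f, 0f, 1f, 2.2f);    // example doorway, in wall coordinates

    void Start()
    {
        var verts = new List<Vector3>();
        var tris = new List<int>();

        int nx = Mathf.CeilToInt(width / cellSize);
        int ny = Mathf.CeilToInt(height / cellSize);

        for (int ix = 0; ix < nx; ix++)
        {
            for (int iy = 0; iy < ny; iy++)
            {
                float x = ix * cellSize;
                float y = iy * cellSize;

                // Skip cells whose centre lies inside the cutout rectangle
                if (cutout.Contains(new Vector2(x + cellSize * 0.5f, y + cellSize * 0.5f)))
                    continue;

                int b = verts.Count;
                verts.Add(new Vector3(x, y, 0f));
                verts.Add(new Vector3(x + cellSize, y, 0f));
                verts.Add(new Vector3(x, y + cellSize, 0f));
                verts.Add(new Vector3(x + cellSize, y + cellSize, 0f));

                // Two triangles per cell
                tris.AddRange(new[] { b, b + 2, b + 1, b + 1, b + 2, b + 3 });
            }
        }

        Mesh mesh = new Mesh();
        mesh.vertices = verts.ToArray();
        mesh.triangles = tris.ToArray();
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```

It is deliberately brute force: the triangle count grows with the square of the grid resolution, which is exactly the inefficiency mentioned further down.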

Procedurally generating the model means that it will be quite easy to specify the sentient space. It’s just a case of measuring each room and then entering the parameters into a file. The Unity app can then read the file on startup in order to generate the model. In principle, this model definition could come from a cloud source that also supplies materials and other configuration data, leading to a standard Unity app that can be used in any sentient space. The model itself is really targeted at VR users of the sentient space. AR users don’t need the model, of course, as they can see the real thing; they only need to download the data describing persistent objects in the sentient space when they first enter it.
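As a rough idea of what the startup loading might look like (the file format, class names and field names below are assumptions, not the actual definition format), a simple JSON description can be deserialized with Unity’s JsonUtility:

```csharp
using System.IO;
using UnityEngine;

// Hypothetical room definition format - the real file layout may differ.
[System.Serializable]
public class Cutout { public float x, y, width, height; }   // door or window rectangle

[System.Serializable]
public class RoomDef
{
    public string name;
    public float width, depth, height;    // measured room dimensions in metres
    public Cutout[] doors;
    public Cutout[] windows;
}

[System.Serializable]
public class SpaceDef { public RoomDef[] rooms; }

public class SpaceLoader : MonoBehaviour
{
    public string definitionFile = "space.json";   // could equally be fetched from a cloud source

    void Start()
    {
        string path = Path.Combine(Application.streamingAssetsPath, definitionFile);
        SpaceDef space = JsonUtility.FromJson<SpaceDef>(File.ReadAllText(path));

        foreach (RoomDef room in space.rooms)
        {
            Debug.Log("Building room " + room.name);
            // ...generate walls, floor and ceiling meshes for this room here...
        }
    }
}
```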

The mesh generated by this code is hardly efficient, however, as it creates a lot of triangles. The next step for this code is to grow the individual triangles as much as possible, keeping the triangle count down and allowing the spatial resolution to be increased as much as desired without significant impact.

The Unity project (PuncturedPlane1) is available here.

On the road to sentient spaces: using Unity to visualize rt-ai Edge streams via Manifold

It’s only a step on the road to the ultimate goal of AR headset support for sentient spaces but it is a start at least. As mentioned in an earlier post, passing data from rt-ai Edge into Manifold allows any number of ad-hoc uses of real time and historic data. One of the intended uses is to support a number of AR headset-wearing occupants in a sentient space – the rt-ai Edge to Manifold connection makes this relatively straightforward. Almost every AR headset supports Unity so it seemed like a natural step to develop a Manifold connection for Unity apps. The result, an app called 3DView, is shown in the screen capture above. The simple scene consists of a couple of video walls displaying MJPEG video feeds captured from the rt-ai Edge network.
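The Manifold client itself isn’t shown here, but getting a received MJPEG frame onto a video wall in Unity is straightforward. This minimal sketch assumes the frames arrive as JPEG byte arrays (the class and method names are made up):

```csharp
using UnityEngine;

// Minimal video wall sketch: each MJPEG frame arrives as a JPEG byte array
// (delivered by the Manifold client, not shown) and is decoded into a texture
// applied to this object's material.
[RequireComponent(typeof(Renderer))]
public class VideoWall : MonoBehaviour
{
    Texture2D frameTexture;

    void Start()
    {
        frameTexture = new Texture2D(2, 2);    // resized automatically on first LoadImage
        GetComponent<Renderer>().material.mainTexture = frameTexture;
    }

    // Call this whenever a new JPEG frame is received from the stream
    public void OnJpegFrame(byte[] jpegData)
    {
        frameTexture.LoadImage(jpegData);      // decodes the JPEG and updates the texture
    }
}
```

LoadImage decodes the JPEG and resizes the texture automatically; the only catch is that it has to be called from the main thread, so frames arriving on a network thread need to be queued.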

The test design (shown above) used to generate the data is trivial but demonstrates that any rt-ai Edge stream can be piped out into Manifold, allowing access by appropriate Manifold apps. Although not yet fully implemented, Manifold apps will be able to feed data back into the rt-ai Edge design via a new SPE to be called GetManifold.

The next step for the 3DView Unity app is to visualize all ZeroSensor streams, correctly physically located within a 3D model of the sentient space. Right now I am using a SpaceMouse to navigate within the space, but ultimately this should work with any VR headset and an appropriate controller. AR headsets will use their spatial mapping capability to overlay visualizations on the real space, so they won’t need a separate controller for navigation.

Speeding up ARKit development with Unity ARKit Remote

Anything that speeds up the development cycle is interesting, and the Unity ARKit Remote avoids having to go through Xcode on every trip around the loop. Provided the app can run in the Editor, any changes to objects or scripts can be tested very quickly. The iPhone (in this case) runs a special remote app that passes ARKit data back to the app running in the Editor. You don’t see any of the Unity content on the phone itself, just the camera feed; the composited frames are shown in the Editor window, as above.

My day with Windows Insider Preview, Unity 2017.2.0b8 and Windows Mixed Reality Headset

Yes, I am drinking a beer right now – it has been a long day. Mostly I seemed to spend it nursing Windows through its upgrade to the latest Insider Preview (16257) and begging the Insider Preview website to let me download the Insider Preview SDK, which seemed to require all kinds of things done right and the wind blowing in the right direction at the same time.

The somewhat bizarre screen capture above is from a scene I created in the default room. The hologram figures are animated, incidentally. What I mostly failed to do was get existing HoloLens apps to run on the MR headset, as Unity kept reporting errors when generating the Visual Studio project for the apps even though every other stage of the build process completed correctly. Very odd. I did manage to get a very simple scene with a single cube working OK, however.

Then I went back to the production version of Windows (15063) and tried things there. Ironically, my HoloLens app worked (apart from interaction) on the MR headset using Unity 5.6.2.

Clearly this particular Rome wasn’t built in a day – a lot more investigation is needed.

HoloLens Spectator View…without the HoloLens


I’ll explain the photo above in a moment. Microsoft’s Spectator View is a great device but not that practical in the general case. For example, the original requires modifications to the HoloLens itself and a fairly costly camera capable of outputting clean 1080p, 2K or 4K video on an HDMI port; total cost can be more than $6000 depending on the camera used. My goal is to do much the same thing without requiring a HoloLens and at a much lower cost, using just a standard camera with fairly simple calibration. Not only that, but I want to stream the mixed reality video across the internet using WebRTC, for both conventional displays and stereo headsets (such as VR headsets).

So, why is there a HoloLens in the photo? This is the calibration setup. The camera that I am using for this mixed reality streaming system is a Stereolabs ZED. I have been working with this quite a bit lately and it seems to work extremely well. Notably, it can produce a 2K stereo 3D output, a depth map and a 6 DoF pose, all available via a USB 3 interface and a very easy-to-use SDK.

Unlike Spectator View, the Unity Editor is not used on the desktop. Instead, a standard HoloLens UWP app is run on a Windows 10 desktop, along with a separate capture, compositor and WebRTC streamer program. There is some special code in the HoloLens app that talks to the rest of the streaming system. This can be present in the HoloLens app even when run on the HoloLens without problems (it just remains inactive in this case).

The calibration process determines, among other things, the actual field of view of the ZED and its orientation and position in the Unity scene used to create the virtual part of the mixed reality scene. This is essential in order to render the virtual scene correctly in a form that can be composited with the video coming from the ZED, and it is why the HoloLens is placed in the prototype rig in the photo. The rig puts the HoloLens camera roughly in the same vertical plane as the ZED camera with a small (known) vertical offset. It’s not critical to get the orientation exactly right when fitting the HoloLens to the rig – this can be calibrated out very easily. The important thing is that the cameras see roughly the same field. That’s because the next step matches features in each view and, from the positions of the matches, derives the field of view of the ZED and its pose offset from the HoloLens. This then makes it possible to set the Unity camera on the desktop in exactly the right position and orientation so that the scene it streams is correctly composed.
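Applying the calibration result is then simple: it boils down to a field of view plus a pose offset relative to the HoloLens camera. The sketch below is illustrative only – the class and field names are assumptions, not the actual system code:

```csharp
using UnityEngine;

// Hypothetical sketch: once calibration has produced the ZED's vertical field
// of view and its pose offset from the HoloLens camera, apply them to the
// Unity camera that renders the virtual half of the mixed reality scene.
public class CalibratedSceneCamera : MonoBehaviour
{
    public Camera sceneCamera;              // camera rendering the virtual scene
    public Transform holoLensAnchor;        // HoloLens camera pose shared into the scene

    // Results from the feature-matching calibration step (assumed inputs)
    public float zedVerticalFovDegrees = 54f;
    public Vector3 positionOffset;          // ZED position relative to the HoloLens camera
    public Quaternion rotationOffset = Quaternion.identity;

    public void ApplyCalibration()
    {
        sceneCamera.fieldOfView = zedVerticalFovDegrees;

        // Place the virtual camera where the ZED actually is, expressed in scene space
        sceneCamera.transform.position = holoLensAnchor.TransformPoint(positionOffset);
        sceneCamera.transform.rotation = holoLensAnchor.rotation * rotationOffset;
    }
}
```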

Once the calibration step has completed, the HoloLens can be removed and used as required. The prototype version looks very ungainly like this! The real version will have a nice 3D-printed bracket system that will also have the advantage of reducing the vertical separation and limiting the possible offsets.

In operation, the HoloLens apps running on the HoloLens(es) and on the desktop need to share data about the Unity scene so that each device computes exactly the same scene. In this way, everyone sees the same thing. I am actually using Arvizio's own sharing system, but any sharing system could be used. The Unity scene generated on the desktop is then composited with the ZED camera’s video feed and streamed over WebRTC. The nice thing about using WebRTC is that almost anyone with a Chrome or Firefox browser can display the mixed reality stream without having to install any plugins or extensions. It is also worth mentioning that the ZED does not have to remain fixed in place after calibration. Because it is able to measure its pose with respect to its surroundings, the ZED could potentially pan, tilt and dolly if that is required.

Latest fun thing in the office: a Stereolabs ZED depth-sensing stereo camera


Just in is this Stereolabs ZED depth-sensing stereo camera. I have been looking for an affordable, good quality stereo camera for ages and this seems to fit the bill. Plus, since it is able to generate a point cloud depth map simultaneously with video frames, it’s possible to compose mixed reality scenes in Unity using a plugin available here.

One of the fun things to try is the ZEDfu program.

This is designed for mapping 3D spaces and turns the ZED into a scanner. The screen capture shows an intermediate view while it is collecting data: you can see the original image, a depth map and the surface map. I thought it was a fun image.

Definitely looking forward to working with this device and using it in some MR projects.

 

Obtaining high quality Unity textures from image files

The default settings for image import in Unity produce poor quality textures, especially when the image contains text that needs to look good in the scene. To fix this, click on the image file (not the generated material file) to open its Import Settings in the Inspector. Deselect Generate Mip Maps and change Filter Mode to Point (no filter). The result should look like the screen capture below, and the material will update automatically.
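If a lot of images need the same treatment, the settings can also be applied automatically with a small editor script (a sketch only – the folder name is just an example, and the script needs to live in an Editor folder):

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: applies the same import settings to any texture imported
// from a particular folder, so they don't have to be set by hand each time.
public class CrispTexturePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("/SignTextures/"))    // hypothetical folder name
            return;

        TextureImporter importer = (TextureImporter)assetImporter;
        importer.mipmapEnabled = false;               // equivalent to deselecting Generate Mip Maps
        importer.filterMode = FilterMode.Point;       // Point (no filter)
    }
}
```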