Intel Neural Compute Stick 2


An Intel Neural Compute Stick 2 (NCS 2) just turned up. It will be interesting to see how it compares to the earlier version. The software supporting it is now OpenVINO, which is pretty interesting as it makes it relatively easy to move models across multiple hardware platforms.
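
To make that concrete, here is a minimal sketch of running a model on the NCS 2 using OpenVINO's Inference Engine Python API (openvino.inference_engine). The IR file names are just placeholders and the exact API details have shifted a little between OpenVINO releases, so treat this as illustrative rather than definitive.

```python
# Minimal OpenVINO Inference Engine sketch: load an IR model and run it on the NCS 2.
# The IR file names are placeholders for any model converted with the Model Optimizer.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# The same network can be loaded onto different hardware just by changing the
# device name, e.g. "CPU", "GPU" or "MYRIAD" (the NCS 2).
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

# Dummy frame standing in for a real camera capture, reshaped to the model input.
frame = np.random.randint(0, 255, (h, w, c), dtype=np.uint8)
blob = frame.transpose((2, 0, 1)).reshape((n, c, h, w)).astype(np.float32)

result = exec_net.infer(inputs={input_name: blob})
print({name: out.shape for name, out in result.items()})
```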

The NCS 2 is really the best edge inference hardware engine that is generally available, as far as I am aware. Hopefully, one day, the Edge TPU will be generally available too, and there seem to be many more devices in the pipeline. Typically these edge inference devices do not support RNNs or related architectures but, in the short term, that isn't a problem: CNNs and DNNs are probably the most useful architectures at the edge right now, being very effective at compressing audio and video streams down to low-rate information streams, for example.

A nice feature of the NCS 2 is that it is easy to connect multiple devices to a single powerful host CPU. The combination of a reasonably powerful CPU along with dedicated inference hardware is pretty interesting and, as it happens, is an ideal architecture for an rt-ai Edge node.
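
As an illustration of the multi-stick idea, something along these lines should work with the Inference Engine Python API; the exact device naming for multiple sticks is an assumption on my part and may vary by release.

```python
# Sketch of spreading inference requests over several NCS 2 sticks on one host.
import itertools
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Each plugged-in stick should appear as its own MYRIAD device in available_devices.
sticks = [d for d in ie.available_devices if d.startswith("MYRIAD")]
workers = [ie.load_network(network=net, device_name=d) for d in sticks]
worker_cycle = itertools.cycle(workers)

def infer(input_name, blob):
    """Round-robin a single inference request across the attached sticks."""
    return next(worker_cycle).infer(inputs={input_name: blob})
```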

Registering virtual and real worlds with rt-xr, ARKit and Unity


One of the goals for rt-xr is to allow augmented reality users within a space to collaborate with virtual reality users physically outside of the space, with the VR users getting a telepresent sense of being physically within the same space. To this end, VR users see a complete model of the space (my office in this case) including augmentations while physically present AR users just see the augmentations. Some examples of augmentations are virtual whiteboards and virtual sticky notes. Both AR and VR users see avatars representing the position and pose of other users in the space.

Achieving this for AR users requires that their coordinate system corresponds with that of the virtual models of the room. For iOS, ARKit goes a long way towards achieving this, so the rt-xr app for iOS has been extended to include ARKit and work in AR mode. The screen capture above shows how the coordinate systems are synced. A known location in physical space (in this case, the center of the circular control of the fan controller) is selected by touching the iPad screen on the exact center of the control. This fixes the position offset. To avoid needing multiple control points, the app is currently started in a known pose so that the yaw rotation is zero relative to the model origin. It is pretty quick and easy to do. The video below shows the process and the result.

After starting the app in the correct orientation, the user is then free to move so as to tap on the control point. Once that's done, the rt-xr part of the app starts up and loads the virtual model of the room. For this test, the complete model is being shown (i.e. as for VR users rather than AR users) although in real life only the augmentations would be visible – the idea here was to see how the windows lined up. The results are not too bad, all things considered, although moving or rotating too fast can cause some drift. However, collaborating using augmentations can tolerate some offset, so this should not be a major problem.
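
For anyone curious about what the sync actually amounts to, here is an illustrative Python sketch of the underlying math (not the actual Unity/ARKit code, and the coordinates are made up): with the yaw already zeroed, registration reduces to a simple translation.

```python
# Illustrative registration math: yaw is zeroed by starting the app in a known
# orientation, so syncing the two coordinate systems is just a translation.
import numpy as np

# Known location of the control point (fan controller centre) in the room model.
model_point = np.array([2.10, 1.35, -0.80])   # metres, model coordinates (made-up values)

# Where ARKit reports the same point when the user taps it (hit-test result).
arkit_point = np.array([0.42, 1.31, -1.95])   # metres, ARKit world coordinates (made-up)

# Offset to add to any ARKit position to express it in model coordinates.
offset = model_point - arkit_point

def arkit_to_model(p):
    """Map an ARKit world-space position into the room model's coordinate frame."""
    return np.asarray(p) + offset

print(arkit_to_model(arkit_point))   # -> the model_point, as expected
```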

There are just a couple of augmentations in this test. One is the menu switch (the glowing M), which is used to instantiate and control augmentations. There is also a video screen showing the snowy scene from the driveway camera, with the feed being generated by an rt-ai design.

The next step is to test VR and AR collaboration properly by displaying the correct AR scene in the iOS app. Since VR collaboration has worked for some time, extending it to AR users should not be too hard.

MobileNet SSD object detection with Unity, ARKit and Core ML


This iOS app is really step 1 on the road to integrating Core ML-enabled iOS devices with rt-ai Edge. The screenshot shows the MobileNet SSD object detector running within the ARKit-enabled Unity app on an iPad Pro. If anyone wants to try this, the code is here. I put this together pretty quickly, so apologies if it is a bit rough, but it is early days. Detection box registration isn't perfect, as you can see (especially for the mouse), but it is not too bad. This is probably due to a field of view mismatch somewhere and will need to be investigated.
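
One plausible source of this kind of offset is a mismatch between the region of the camera image the model actually sees and the region the boxes are drawn onto. Purely as an illustration (in Python rather than the app's actual code), this is the sort of remapping that would be needed if the model were fed a centre square crop of the frame:

```python
# Illustrative only: remap a normalised (x, y, w, h) box from a centre square
# crop of the camera frame back onto the full frame before drawing it.
def square_crop_to_full(box, img_w, img_h):
    side = min(img_w, img_h)
    off_x = (img_w - side) / 2.0
    off_y = (img_h - side) / 2.0
    x, y, w, h = box
    return ((x * side + off_x) / img_w,
            (y * side + off_y) / img_h,
            w * side / img_w,
            h * side / img_h)

# Example: a box covering the centre of a square crop, remapped to a 16:9 frame.
print(square_crop_to_full((0.25, 0.25, 0.5, 0.5), 1920, 1080))
```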

Next, this code needs to be integrated with the Manifold C# Unity client. Following that, I will need to write the PutManifold SPE for rt-ai Edge. When this is done, the video and object detection data stream from the iOS device will appear within an rt-ai Edge stream processing network and look exactly the same as the stream from the CYOLO SPE.

The app is based on two repos that were absolutely invaluable in putting it together.

Many thanks to the authors of those repos.

The new ZED camera SPE and CYOLO SPE with support for depth cameras


The new SPE for the Stereolabs ZED depth camera is now working nicely, as is the new support for depth data in the CYOLO SPE. The extra depth information can be seen in the metadata display on the right of the screen capture – the annotation on the image itself still comes from the standard (non-depth) code but, since that is just for testing, it is ok.


This is the design used for testing. The ZED camera SPE has two outputs: one looks like a standard camera output, while the other carries the left and right images along with the depth image. The CYOLO SPE can now accept either standard video messages or depth video messages using the appropriate input pin. The depth image adds about 3.7MB to each message, so it isn't a trivial overhead, but the CYOLO module only ever outputs a standard video frame, so the large payload is confined to the single link in this design. Even running everything on a busy machine with 1280 x 720 frames, the whole design still runs at around 15fps, which is not too bad.
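
As a quick sanity check, that 3.7MB figure is consistent with a depth image stored as one 32-bit float per pixel at 1280 x 720 (the float32 format is my assumption here):

```python
# Back-of-the-envelope check of the per-message depth overhead.
width, height = 1280, 720
bytes_per_pixel = 4                      # assuming one float32 depth value per pixel
depth_bytes = width * height * bytes_per_pixel
print(depth_bytes, depth_bytes / 1e6)    # 3686400 bytes, i.e. roughly 3.7 MB per message
```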

 

Old-school IoT sensing with absolutely no AI

Winter is coming, as one might say. Actually, it is here, with freezing temperatures and inches of snow. Time to turn on the heater in the garage. However, if someone leaves a door open, it will heat the atmosphere in general and burn through all of the propane. Yes, it would be absolutely trivial to put magnetic sensors on the doors that turn off the heater if a door is open, but that is no fun at all. This has actually been a long-running project with all kinds of interesting solutions, some even involving Apache NiFi and other big data things. One solution used Insteon and actually allowed the system to close doors automatically. However, various disasters could occur, so I turned that system off. In any case, the Insteon door sensors were not reliable.

The root of the problem is that the system really has to understand what is happening in the space, not just the states of various parts of it. Someone could be just about to back a car out when the system decides to shut a door. Not good. Seemingly, a perfect and autonomous solution is still a little way down the road. A short-term hack seems in order, however!

Since I am working on rt-ai Edge right now, it was natural to think about how to use it to solve this problem. I have some ZeroSensors around, so I put one of them in the garage. I realized that, when the doors are shut, the garage is very dark and the temperature is stable at the thermostat setting (allowing for hysteresis). If a door is open, the garage gets illuminated in one way or another (garage door motor lights, ambient light, outside lights at night etc), so the light level is a somewhat reliable, albeit indirect, indicator of door status. Also, the temperature will drop pretty quickly, giving an indication that either a door is open or the heater has failed.

Given how simple this is, I put together a quick filter SPE to process the light and temperature data from a ZeroSensor and send me an email if the doors are seemingly open for more than a preset time. The screen capture above shows what is going on. There are actually a couple of stream processing networks (SPNs) in action in this space, mainly running on the rtai0 node. One of them is the YOLO-based driveway vehicle detection system, while the other is the garage environmental sensing network. A node can run multiple SPNs at the same time – they are ships in the night and can be managed totally independently.

 

This screen capture shows the configuration dialog for the SensorFilter SPE. I currently also have it set to confirm when the garage doors have been shut so that I know if someone else has taken care of the problem.
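
For what it's worth, here is a much-simplified sketch of the kind of filtering logic involved; the thresholds, field names and timings are invented for the example, whereas in the real SensorFilter SPE they come from this configuration dialog.

```python
# Simplified sketch of the door-open heuristic: the garage should be dark and at a
# stable temperature when the doors are shut. All values here are made up.
import time

LIGHT_THRESHOLD = 20.0        # above this the garage is lit, so a door is probably open
TEMP_THRESHOLD = 10.0         # deg C: below this either a door is open or the heater failed
ALERT_AFTER = 10 * 60         # seconds the condition must persist before emailing

_open_since = None
_alerted = False

def process_sample(sample, send_email):
    """Handle one ZeroSensor reading, e.g. {'light': 250.0, 'temperature': 17.8}."""
    global _open_since, _alerted
    now = time.time()
    suspicious = (sample["light"] > LIGHT_THRESHOLD or
                  sample["temperature"] < TEMP_THRESHOLD)
    if suspicious:
        if _open_since is None:
            _open_since = now                      # start timing the condition
        elif not _alerted and now - _open_since > ALERT_AFTER:
            send_email("Garage door appears to be open")
            _alerted = True
    else:
        if _alerted:
            send_email("Garage doors appear to be shut again")
        _open_since = None
        _alerted = False
```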

Another step would be to turn the heater off if a door is open and the temperature is dropping – right now, I am not planning to allow the door to be shut remotely for the reasons mentioned earlier. At least turning the heater off avoids wasting a load of energy. I can control the heater via Insteon. I would just need another SPE to provide the interface between the filtered alerts and my Insteon server.

Still, one day it would be nice to do this properly. That entails understanding the state of the space – things like: are there any people in it? This is not completely trivial, as the system can't always see someone inside a car. However, it can assume (until I get an autonomous vehicle) that if a car is moving or has moved, there must be someone driving, and, until the system detects that person leaving the space, it must also assume that it should take no autonomous action. It also needs to maintain state in order to be sure that there's nobody in the space (or in a car just outside the space) before it can take control. I have a feeling that there may be quite a few corner cases here, but it would be fun to try, even if it only simulates trying to close doors.
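
As a thought experiment only (none of this is implemented), the kind of occupancy state the system would have to track might look something like this:

```python
# Toy sketch of the occupancy state a "proper" solution would need before it could
# safely close a door autonomously. Purely illustrative.
class GarageState:
    def __init__(self):
        self.people_inside = 0
        self.car_recently_moved = False

    def on_person_detected(self, entering):
        # Track people entering and leaving the space.
        self.people_inside = max(self.people_inside + (1 if entering else -1), 0)

    def on_car_motion(self):
        # A moving car implies a driver the cameras may not be able to see.
        self.car_recently_moved = True

    def on_car_stationary_and_empty(self):
        self.car_recently_moved = False

    def safe_to_close_doors(self):
        return self.people_inside == 0 and not self.car_recently_moved
```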

Stereolabs ZED depth camera with YOLO

The Stereolabs ZED camera is quite an effective way of generating depth-enhanced video streams, and it seemed like it was time to get one and integrate it with rt-ai Edge. I have worked with one of these before in a different context, and I knew that using the ZED would be pretty straightforward.

The screen capture above shows the ZED YOLO C++ example code running. The mug in the shot was a bit too close to the monitor to get picked up, and my hand was probably too close in general, hence the strange 4.92m depth reading. However, it does seem to work pretty well. It even picked up the image of the monitor on the screen as a monitor.

Just as a note, I did have to modify the main.cpp code to get it to run. At line 49, I had to add a std:: in front of an isfinite() call for some reason – maybe something odd about my Ubuntu system. Also, to get the standard samples to build, I had to add libxmu-dev as another dependency.

Now comes the task of adding this to rt-ai Edge. I am going to split the work in two: the first part is a new camera SPE that works with the ZED and outputs the depth image in addition to the normal camera image. Then, the CYOLO SPE will be modified to accept optional depth information and perform the processing needed to generate an actual object depth value. This seems like a more general solution, as the ZED SPE then looks like a standard depth camera while the upgraded CYOLO will be able to work with any depth camera.
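
As a rough sketch of that second part, the upgraded CYOLO SPE will need to turn a detection box plus the aligned depth image into a single object depth value. Taking the median of the finite depths inside the box is my assumption here, not necessarily what the final code will do:

```python
# Sketch of estimating an object's depth from a detection box and a depth image.
import numpy as np

def object_depth(depth_image, box):
    """depth_image: HxW float array of metres; box: (x, y, w, h) in pixels."""
    x, y, w, h = box
    patch = depth_image[y:y + h, x:x + w]
    valid = patch[np.isfinite(patch) & (patch > 0)]   # drop invalid/missing depth pixels
    return float(np.median(valid)) if valid.size else float("nan")

# Example with a synthetic 1280 x 720 depth map set to a constant 3.2 metres.
depth = np.full((720, 1280), 3.2, dtype=np.float32)
print(object_depth(depth, (600, 300, 100, 150)))   # -> 3.2
```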

Integrating Core ML with Unity on iOS

The latest iPads and iPhones have some pretty serious edge neural network capabilities that are a natural fit with ARKit and Unity. AR and Unity go together quite nicely, as AR provides an excellent way of communicating back to the user the results of intelligently processing sensor data from the user, other users and static (infrastructure) sensors in a space. The screen capture above was obtained from code largely based on this repo, which integrates Core ML models with Unity. In this case, Inceptionv3 was used. While it isn't perfect, it does ably demonstrate that this can be done. Getting the plugin to work was quite straightforward – you just have to include the mlmodel file in Xcode via the Files -> Add Files menu option rather than dragging the file into the project. The development cycle is pretty annoying, as the plugin won't run in the Unity Editor and compilation (on my old Mac Mini) is painfully slow, but I guess a decent Mac would do a better job.

This all brings up the point that there seem to be different perceptions of what the edge actually is. rt-ai Edge can be seen as a local aggregation and compute facility for inference-capable or conventional mobile and infrastructure devices (such as security cameras) – basically an edge compute facility supporting edge devices. A particular advantage of edge compute is that legacy devices (such as dumb cameras) can be integrated into an AI-enhanced system by using the edge compute facility's inference capabilities. In a sense, edge compute is a local mini-cloud, providing high-capacity compute and inference a short distance in time away from sensors and actuators. This minimizes backhaul and latency, not to mention keeping data secure in the local area rather than dispersing it in a cloud. It can also be very cost-effective when compared to the cost of running multiple cloud CPU instances 24/7.

Given the latest developments in tablets and smartphones, it is essential that rt-ai Edge be able to incorporate inference-capable devices into its stream processing networks. Inference-capable, per-user devices make scaling very straightforward, as capability increases in direct proportion to the number of users of an edge system. The normal rt-ai Edge deployment system can't be used with mobile devices, which means that (at the very least) framework apps are needed to make use of the AI models within the devices themselves. With that proviso, however, it is certainly possible to incorporate smart edge devices into edge networks with rt-ai Edge.