Learning engineering lessons from NTSB accident reports

I am fully aware that this probably means that I have too much time on my hands but, having read a few NTSB accident reports recently, I do find them to have considerable educational value in demonstrating how systems can go wrong in sometimes surprising ways. Take this one, for example. It contains a detailed description of how an autonomous vehicle tried to deal with a situation when its sensors were giving inconsistent information. Or this one, which contains a very detailed description of what went wrong to cause the collapse of a new pedestrian bridge in Florida last year. Many are concerned with aviation accidents. This report is an example.

I believe that a tremendous amount can be learned from these reports about expected and unexpected failure modes of complex systems, especially those where humans are part of the loop. The hope is, of course, that by understanding what went wrong, these failures and the consequent loss of life can be prevented from happening again.

For reference, this is a list of recent reports, and the @NTSB_Newsroom Twitter feed is a good way of keeping up to date.

Last gasp for Glass (Explorer Edition at least)


Had to dig out my old Google Glass headset this morning to update its firmware after reading about the necessity of installing the final upgrade. If it hadn’t been upgraded, it would eventually have more in common with a brick than an AR headset. Frankly not the best use of $1500 in hindsight but at least it allowed me to get a taste of things to come. Technology such as Qualcomm’s XR2 will (hopefully) finally enable affordable, spatially-aware XR headsets that are viable consumer and enterprise devices.

Platform-independent highly augmented spaces using UWB

The SHAPE project needs to be able to support consistent highly augmented spaces no matter what platform (headset + software) is chosen by any user situated within the physical space. Previously, SHAPE had just used ARKit to design spaces as an interim measure, but this was never going to be platform-independent. What SHAPE really needed was a way of linking an ARKit spatial map to the real physical environment that any platform could share, and UWB technology provides just such a mechanism.

SHAPE breaks a large physical space into multiple subspaces – often mapped to physical rooms. A big problem is that augmentations can be seen through walls unless something prevents this. ARKit is relatively awful at wall detection, so I gave up trying to get that to work. It's not really ARKit's fault: mapping a room's walls with a single camera is just not reliable. Another problem concerns windows and doors. Ideally, it should be possible to see augmentations outside of a physical room if they can be viewed through a window, and that is tough for any mapping software to handle correctly.

SHAPE is now able to solve these problems using UWB. The photo above shows part of the process used to link ARKit's coordinate system to the physical space coordinate system defined by the UWB installation in the space. What it shows is how the yaw (rotation about the y-axis in Unity terms) offset between ARKit's coordinate system and UWB's coordinate system is measured and subsequently corrected. Basically, a UWB tag (which has a known position in the space) is covered by the red ball in the center of the iPad screen and data is recorded at that point. The recorded data consists of the virtual AR camera position and rotation along with the iPad's position in the physical space. Because iPads currently do not support UWB, I attached one of the Decawave tags to the back of the iPad.

Separately, a tag in Listener mode is used by a new SHAPE component, EdgeUWB, to provide a service that makes available the positions of all UWB tags in the subspace. EdgeSpace keeps track of these tag positions, so when it receives a message to update the ARKit offset it can combine the AR camera pose (in the message from the SHAPE app to EdgeSpace), the iPad position (from the UWB tag via EdgeUWB) and the known location of the target UWB anchor (from a configuration file). With all of this information, EdgeSpace can calculate position and rotation offsets that are sent back to the SHAPE app so that the Unity augmentations can be correctly aligned in the physical space.
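To make the geometry concrete, here is a minimal sketch of that offset calculation under some simplifying assumptions: positions are reduced to 2D (x, z) with yaw about the y-axis in Unity's convention, and the iPad's UWB tag is treated as co-located with the AR camera. The function and variable names are illustrative only, not the actual EdgeSpace code.

```python
# Hypothetical sketch of the yaw/position offset calculation described above.
# 2D (x, z) positions, yaw about the y axis in Unity's left-handed convention;
# the UWB tag on the back of the iPad is assumed co-located with the AR camera.
import numpy as np

def arkit_to_uwb_offsets(ar_cam_pos, ar_cam_yaw_deg, ipad_pos_uwb, anchor_pos_uwb):
    """ARKit values are in ARKit's frame; UWB values are in the physical
    space frame defined by the anchor installation. Returns (yaw_deg, (x, z))."""
    # Bearing from the iPad to the sighted anchor in the UWB frame
    # (yaw measured from +z toward +x, matching Unity's convention).
    d = np.asarray(anchor_pos_uwb, float) - np.asarray(ipad_pos_uwb, float)
    bearing_uwb = np.degrees(np.arctan2(d[0], d[1]))

    # The camera was pointed straight at the anchor, so its ARKit yaw describes
    # the same physical direction; the difference is the frame-to-frame yaw offset.
    yaw_offset = bearing_uwb - ar_cam_yaw_deg

    # Rotate the ARKit camera position by the yaw offset, then translate so it
    # lands on the iPad's measured UWB position.
    t = np.radians(yaw_offset)
    rot = np.array([[np.cos(t), np.sin(t)],
                    [-np.sin(t), np.cos(t)]])
    pos_offset = np.asarray(ipad_pos_uwb, float) - rot @ np.asarray(ar_cam_pos, float)
    return yaw_offset, pos_offset
```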

As the ARKit spatial map is also saved during this process and reloaded every time the SHAPE app starts up (or enters the room/subspace in a full implementation), the measured offsets remain valid.


Coming back to the issue of making sure that augmentations can only be seen from within their own physical subspace, except through a window or door, SHAPE now includes the concept of rooms that can have walls, a floor and a ceiling. The screenshot above shows an example of this. The yellow sticky note is outside of the room, so only the part visible through the "window" can be seen. Since it is not at all obvious what is happening from this alone, the invisible but occluding walls used in normal mode can be replaced with visible walls so that the alignment can be visualized more easily.


This screenshot was taken in debug mode. The effect is subtle, but there is a blue film representing the normally invisible occluding walls, and the cutout for the window can be seen because it remains clear. It is also apparent that the alignment isn't totally perfect – for example, the cutout is a couple of inches higher than the actual transparent part of the physical window. In this case, the full sticky note is visible because the debug walls don't occlude.

Incidentally, the room walls use the procedural punctured plane technology that was developed a while ago.


This is an example showing a wall with rectangular and elliptical cutouts. Cutouts are defined in terms of the UWB coordinate system and allow augmentations to be seen through the corresponding windows and doors.
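Purely for illustration, a wall and its cutouts might be described with something like the following configuration; the field names and values here are invented for the example and are not the actual SHAPE configuration format.

```python
# Purely illustrative: one possible way to describe a room wall and its cutouts
# in terms of the UWB (physical space) coordinate system, in metres.
# The field names are invented for this example, not the real SHAPE format.
room_config = {
    "subspace": "office",
    "walls": [
        {
            # bottom edge of the wall, defined by two points in UWB coordinates
            "bottom_left": [0.0, 0.0, 3.2],
            "bottom_right": [4.0, 0.0, 3.2],
            "height": 2.5,
            "cutouts": [
                # rectangular cutout for the physical window, in wall-local
                # coordinates (metres from the bottom-left corner)
                {"shape": "rectangle", "center": [1.6, 1.3], "size": [1.2, 0.9]},
                # an elliptical cutout for comparison
                {"shape": "ellipse", "center": [3.2, 1.5], "size": [0.6, 0.9]},
            ],
        },
    ],
}
```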

While the current system only supports ARKit, in principle any equivalent system could be aligned to the UWB coordinate system using some sort of similar alignment process. Once this is done, a user in the space with any supported XR headset type will see a consistent set of augmentations, reliably positioned within the physical space (at least within the alignment accuracy limits of the underlying platform and the UWB location system). Note that, while UWB support in user devices is helpful, it is only required for setting up the space and initial map alignment. After that, user devices can achieve spatial lock (via ARKit for example) and then maintain tracking in the normal way.

Indoor position measurement for XR applications using UWB

Obtaining reasonably accurate absolute indoor position measurements for mobile devices has always been tricky, to say the least. Things like ARKit can determine a relative position within a mapped space reasonably well in ideal circumstances, as can structured IR. What would be much more useful for creating complex XR experiences spread over a large physical space in which lots of people and objects might be moving around and occluding things is something that allows the XR application to determine its absolute position in the space. Ultra-wideband (UWB) provides a mechanism for obtaining an absolute position (with respect to a fixed point in the space) over a large area and in challenging environments and could be an ideal partner for XR applications. This is a useful backgrounder on how the technology works. Interestingly, Apple have added some form of UWB support to the iPhone 11. Hopefully future manufacturers of XR headsets, or phones that pair with XR headsets, will include UWB capability so that they can act as tags in an RTLS system.

UWB RTLS (Real Time Location System) technology seems like an ideal fit for the SHAPE project. An important requirement for SHAPE is that the system can locate a user within one of the subspaces that together cover the entire physical space. The use of subspaces allows the system to grow to cover very large physical spaces just by scaling servers. One idea that works to some extent is to use the AR headset or tablet camera to recognize physical features within a subspace, as in this work. However, once again, this only works in ideal situations where the physical environment has not changed since the reference images were taken. And, in a large space that has no features, such as a warehouse, this just won’t work at all.

Using UWB RTLS with a number of anchors spanning the physical space, it should be possible to determine an absolute location in the space and map this to a specific subspace, regardless of other objects moving through the space or how featureless the space is. To try this out, I am using the Decawave MDEK1001 development kit.

This includes 12 devices that can be used as anchors, tags or gateways. Anchors are placed at fixed positions in the space – I have mounted four high up in the corners of my office, for example. Tags represent users moving around in the space. Putting a device in gateway mode allows it to be connected to a Raspberry Pi, which then provides web and MQTT access to the tag position data.
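As a sketch of how that tag position data can be consumed, here is a minimal listener using the paho-mqtt 1.x API. The topic pattern and payload layout follow the Decawave DRTLS gateway defaults as I understand them, and the gateway hostname is a placeholder, so all of these should be checked against the actual installation.

```python
# Minimal sketch: read tag positions from the Raspberry Pi gateway's MQTT feed.
# Assumes the paho-mqtt 1.x API and the default DRTLS gateway topic layout.
import json
import paho.mqtt.client as mqtt

GATEWAY_HOST = "raspberrypi.local"   # placeholder hostname for the Pi gateway

def on_connect(client, userdata, flags, rc):
    # Subscribe to location uplinks from every node.
    client.subscribe("dwm/node/+/uplink/location")

def on_message(client, userdata, msg):
    node_id = msg.topic.split("/")[2]
    payload = json.loads(msg.payload)
    pos = payload.get("position", {})
    print(node_id, pos.get("x"), pos.get("y"), pos.get("z"), pos.get("quality"))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(GATEWAY_HOST, 1883)
client.loop_forever()
```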

Setting up is pretty easy using an Android app that auto-discovers devices. The app does have an automatic measurement system that tries to determine the relative positions of the anchors with respect to an origin, but that didn't seem to work too well for me, so I resorted to physical measurement. Not a big problem and, in any case, the software cannot determine the system z offset automatically. Another thing that confused me is that, by default, tags have a very slow update rate when they are not moving much, which makes it seem as though the system isn't working. Changing this to a much higher rate probably increases power usage considerably but certainly helps with testing! Speaking of power, the devices can be powered from USB power supplies or internal rechargeable batteries (which, incidentally, were not included).

Anyway, once everything was set up correctly, it seemed to work very well, using the Android app to display the tag locations with respect to the anchors. The next step is to integrate this system into SHAPE so that subspaces can be defined in terms of RTLS coordinates.
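One simple way to do that mapping, assuming each subspace is configured as an axis-aligned footprint in UWB coordinates, is sketched below; the subspace names and extents are made up for the example.

```python
# Sketch: map an RTLS tag position to a SHAPE subspace, assuming each subspace
# is configured as an axis-aligned bounding box in UWB coordinates (metres).
SUBSPACES = {
    "office":  {"min": (0.0, 0.0), "max": (4.0, 5.0)},
    "hallway": {"min": (4.0, 0.0), "max": (6.0, 5.0)},
}

def subspace_for(x, y):
    """Return the subspace whose footprint contains the (x, y) tag position."""
    for name, box in SUBSPACES.items():
        if box["min"][0] <= x <= box["max"][0] and box["min"][1] <= y <= box["max"][1]:
            return name
    return None
```

Feeding the tag positions from the MQTT listener above through something like this would tell the system which subspace each user is currently in.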

An interesting additional feature of UWB is that it also supports data transfers between devices. This could lead to some interesting new concepts for XR experiences…

The ghost in the AI machine

The driveway monitoring system has been running full time for months now and it's great to know if a vehicle or a person is moving on the driveway up to the house. The only bad thing is that it gives occasional false detections like the one above. This only happens at night, and I guess there's enough texture in the right places to trigger the "person" response with very high confidence. The white streaks might be rain or bugs being illuminated by the IR light. It also only seems to happen when the trash can is out for collection – it is in the frame about halfway out from the center to the right.

It is well known that the image recognition capabilities of convolutional networks aren't always exactly what they seem, and this is a good example of the problem. Clearly, in this case, MobileNet's feature detectors have responded to small image regions with a particular spatial relationship and combined them to reach a completely wrong conclusion. My problem is how to deal with these false detections. A couple of ideas come to mind. One is to run a different model in parallel and only generate an alert if both detect the same object at (roughly) the same place in the frame. Or, instead of another CNN, use semantic segmentation to detect the object in a somewhat different way.
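The first idea boils down to a simple agreement check between the two sets of detections. The sketch below assumes (label, box) detections with boxes in (x1, y1, x2, y2) form and an arbitrary IoU threshold; it is not part of the existing driveway system.

```python
# Sketch of the "two models must agree" idea: only keep detections that both
# detectors report with the same class and overlapping boxes.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def confirmed_detections(dets_a, dets_b, min_iou=0.4):
    """Each detection is (label, box); keep those the two models agree on."""
    confirmed = []
    for label_a, box_a in dets_a:
        for label_b, box_b in dets_b:
            if label_a == label_b and iou(box_a, box_b) >= min_iou:
                confirmed.append((label_a, box_a))
                break
    return confirmed
```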

Whatever, it is a good practical demonstration of the fact that these simple neural networks don't in any way understand what they are seeing. However, they can certainly be used as the basis of a more sophisticated system that adds higher-level understanding to the raw detections.

Using homography to solve the “Where am I?” problem

In SHAPE, a large highly augmented space is broken up into a number of sub-spaces. Each sub-space has its own set of virtual augmentation objects, positioned persistently in the real space, with which AR device users physically present in the sub-space can interact collaboratively. It is necessary to break up the global space in this way in order to keep the number of augmentation objects that any one AR device has to handle down to a manageable number. Take the case of a large museum with very many individual rooms. A user can only experience augmentation objects in the same physical room, so each room becomes a SHAPE sub-space and only the augmentation objects in that particular room need to be processed by the user's AR device.

This brings up two problems: how to work out which room the user is in when the SHAPE app starts (the "Where am I?" problem) and how to detect that the user has moved from one room to another. It's desirable to do this without depending on external navigation systems which, in indoor environments, can be pretty unreliable or completely unavailable.

The goal was to use the video feed from the AR device's camera (e.g. the rear camera on an iPad running ARKit) to solve these problems. The question was how to make this work. It seemed like something that OpenCV probably had an answer to, which meant that the first place to look was the Learn OpenCV web site. A while ago there was a post there about feature-based image alignment, which seemed like the right sort of technique, and I used that code as the basis for mine. It ended up working quite nicely.

The approach is to take a set of overlapping reference photos for each room and then pre-process them to extract the necessary keypoints and descriptors. These can then go into a database, labelled with the sub-space to which they belong, for comparison against user-generated images. Here are two reference images of my (messy) office for example:

Next, I took another image to represent a user generated image:

It is obviously similar but not the same as any of the reference set. Running this image against the database resulted in the following two results for the two reference images above:

As you can see, the code has done a pretty good job of selecting the overlaps of the test image with the two reference images. This is an example of what you see if the match is poor:

It looks very cool but clearly has nothing to do with a real match! In order to select the best reference image match, I add up the distances for the 10 best feature matches against every reference image and then select the reference image (and therefore sub-space) with the lowest total distance. This can also be thresholded in case there is no good match. For these images, a threshold of around 300 would work.
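For reference, the scoring step looks roughly like the sketch below, assuming ORB features and a brute-force Hamming matcher as in the Learn OpenCV post. The reference "database" is reduced to an in-memory list of (sub-space, descriptors) pairs; the real system stores the precomputed descriptors labelled by sub-space.

```python
# Rough sketch of the matching/scoring step with OpenCV: ORB features and a
# brute-force Hamming matcher, with the sum of the 10 best match distances
# used to pick the sub-space (thresholded in case there is no good match).
import cv2

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def descriptors(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc

def best_subspace(query_path, reference_db, threshold=300):
    """reference_db: list of (subspace_name, descriptors) for reference images."""
    query_desc = descriptors(query_path)
    best_name, best_score = None, float("inf")
    for subspace, ref_desc in reference_db:
        matches = sorted(matcher.match(query_desc, ref_desc), key=lambda m: m.distance)
        score = sum(m.distance for m in matches[:10])   # sum of the 10 best distances
        if score < best_score:
            best_name, best_score = subspace, score
    # Only accept the result if the best score is under the threshold.
    return best_name if best_score < threshold else None
```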

In practice, the SHAPE app will start sending images to a new SHAPE component, the homography server, which will keep processing images until the lowest distance match is under the threshold. At that point, the sub-space has been detected and the augmentation objects and spatial map can be downloaded to the app and used to populate the scene. By continuing this process, a move from one room to another will be detected as a sub-space change, and the old set of augmentation objects and spatial map replaced with the ones for the new room.

Object detection on the Raspberry Pi 4 with the Neural Compute Stick 2


Following on from the Coral USB experiment, the next step was to try it out with the NCS 2. Installation of OpenVINO on Raspbian Buster was straightforward. The rt-ai design was basically the same as for the Coral USB experiment but with the CoralSSD SPE replaced with the OpenVINO equivalent called CSSDPi. Both SPEs run ssd_mobilenet_v2_coco object detection.

Performance was pretty good – 17fps with 1280 x 720 frames. This is a little better than the Coral USB accelerator attained, but then again the OpenVINO SPE is a C++ SPE while the Coral USB SPE is a Python SPE, and image preparation and post-processing take their toll on performance. One day I really will use the C++ API to produce a new Coral USB SPE so that the two are on a level playing field. The raw inference time on the Coral USB accelerator is about 40ms or so, meaning that there is plenty of opportunity for higher throughputs.
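For anyone wanting to try something similar outside of rt-ai, a standalone inference pass on the NCS 2 looks roughly like the sketch below, using OpenVINO's (pre-2022) IECore Python API. This is not the CSSDPi SPE code: the model file names are placeholders and the preprocessing is the bare minimum.

```python
# Standalone sketch (not the rt-ai CSSDPi SPE): run an SSD MobileNet v2 IR
# model on the NCS 2 via OpenVINO's pre-2022 IECore Python API.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ssd_mobilenet_v2_coco.xml",       # placeholder paths
                      weights="ssd_mobilenet_v2_coco.bin")
input_blob = next(iter(net.input_info))
exec_net = ie.load_network(network=net, device_name="MYRIAD")   # MYRIAD = NCS 2

frame = cv2.imread("frame.jpg")
h, w = frame.shape[:2]
# The converted SSD MobileNet v2 IR expects a 300x300 NCHW input.
blob = cv2.resize(frame, (300, 300)).transpose(2, 0, 1)[np.newaxis, ...]

result = exec_net.infer({input_blob: blob})
# SSD output rows are [image_id, label, confidence, xmin, ymin, xmax, ymax]
detections = next(iter(result.values())).reshape(-1, 7)
for _, label, conf, x1, y1, x2, y2 in detections:
    if conf > 0.5:
        print(int(label), conf, int(x1 * w), int(y1 * h), int(x2 * w), int(y2 * h))
```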