The new ZED camera SPE and CYOLO SPE with support for depth cameras


The new SPE for the Stereolabs ZED depth camera is now working nicely, as is the new support for depth data in the CYOLO SPE. The extra depth information can be seen in the metadata display on the right of the screen capture. The annotation on the image itself still comes from the standard (non-depth) code but, since that is just for testing, it is fine.


This is the design used for testing. The ZED camera SPE has two outputs: one looks like a standard camera output while the other carries the left and right images together with the depth image. The CYOLO SPE can now accept either standard video messages or depth video messages on the appropriate input pin. The depth image adds about 3.7MB to each message, so it isn't a trivial overhead, but the CYOLO module only ever outputs a standard video frame, which means the large payload is confined to the single link in this design. Even running everything on a busy machine with 1280 x 720 frames, the whole design still manages around 15fps, which is not too bad.
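As a sanity check on that 3.7MB figure: a 1280 x 720 depth map stored as one 32-bit float per pixel comes out to almost exactly that size. The encoding assumption here is mine; the actual depth video message format may pack things differently.

# Back-of-the-envelope check on the extra depth payload.
# Assumption: one 32-bit float depth value per pixel; the real
# rt-ai depth video message may use a different encoding.
width, height = 1280, 720
bytes_per_pixel = 4  # float32

depth_bytes = width * height * bytes_per_pixel
print(f"{depth_bytes} bytes = {depth_bytes / 1e6:.2f} MB")  # 3686400 bytes = 3.69 MB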

 

Validating long-term sensor data collection with rtaiView

In order to avoid the whole garbage in, garbage out problem, I want to make sure that the long term data I am collecting from the ZeroSensors and IP cameras is actually what I think it is. rtaiView is an rt-ai Edge app that allows both real-time streams and historic stored data to be reviewed in a convenient way. It can be used to check both the raw data and the extracted metadata for each stream. The screen capture above shows an rtaiView instance monitoring the real-time streams from two external IP cameras and the three streams (video, audio and multi-channel sensor) from a ZeroSensor. I am going to add a second ZeroSensor at another internal location and then let the whole thing run for a while. Just to make life complicated, I am using the straight TensorFlow object detector for the external cameras and DeepLabv3 segmentation in color map mode for the internal cameras, so that I don't upset the other inhabitants who aren't keen on me putting cameras everywhere.
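Color map mode just means that each displayed pixel is the color assigned to the DeepLabv3 class label at that position rather than the original camera pixel, so nothing personally recognizable appears in the viewer. A minimal sketch of the idea, using the standard Pascal VOC color map (illustrative only, not the actual SPE code):

import numpy as np

def voc_color_map(num_classes=256):
    # Standard Pascal VOC color map commonly used to visualize
    # DeepLab segmentation labels.
    cmap = np.zeros((num_classes, 3), dtype=np.uint8)
    for i in range(num_classes):
        r = g = b = 0
        c = i
        for j in range(8):
            r |= ((c >> 0) & 1) << (7 - j)
            g |= ((c >> 1) & 1) << (7 - j)
            b |= ((c >> 2) & 1) << (7 - j)
            c >>= 3
        cmap[i] = (r, g, b)
    return cmap

def to_color_map(seg_labels):
    # seg_labels: (H, W) array of per-pixel class indices from DeepLabv3.
    # The returned RGB image contains no original camera pixels.
    return voc_color_map()[seg_labels]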

The focus now moves on to extracting interesting sequences from the stored data and then using these to train an anomaly detector. The exact architecture of the anomaly detector is still a bit unclear; definitely a research project.

ZeroSensor case design

It has taken a while to get to this point but, now that the focus is back on rt-ai Edge, it is time to get the ZeroSensors sorted out properly. The design above is the prototype 3D printed case. It is a free-standing case, about 3 inches by 2.7 inches and 1.4 inches deep. The biggest problem with these things is getting enough thermal isolation that the temperature reading reflects the outside air rather than air heated by the Raspberry Pi Zero. The big baffle on the rear (on the right of the image above) is intended to keep the air separate in the two halves. The little slot allows four thin cables to run between the Pi and the sensor boards. Right now the back has no holes, so air flow is purely bottom to top convection on both the sensor side and the Pi side. However, this might need to change if the initial design doesn't work: the plastic material will conduct heat, so it may be necessary to add more thermal isolation using holes or slots in the back.

The ZeroSensor – a sentient space point of presence

One application for rt-ai Edge is ubiquitous sensing leading to sentient spaces: spaces that can interact with people moving through them and provide useful functionality, whether learned or programmed. A step on the road to that is the ZeroSensor, four prototypes of which are shown in the photo. Each ZeroSensor consists of a Raspberry Pi Zero W, a Pi camera module v2, an Adafruit BME680 breakout and an Adafruit TSL2561 breakout. The combination gives a video stream and a sensor stream with light, temperature, pressure, humidity and air quality values. The video stream can be used to derive motion sensing and identification while the other sensors provide a general idea of conditions in the space. Notably missing is audio. Microphone support would be useful for general sensing and I might add that in real devices. A 3D printable case design is underway in order to allow wide-scale deployment.
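Reading the two breakouts on the Pi Zero W's I2C bus is straightforward with the Adafruit CircuitPython drivers. Something like the sketch below gives the raw values for the sensor stream (this is illustrative, not the actual ZeroSensor code, and the one second sample rate is just an example):

import time
import board
import adafruit_bme680
import adafruit_tsl2561

# Both breakouts share the Pi Zero W's I2C bus.
i2c = board.I2C()
bme680 = adafruit_bme680.Adafruit_BME680_I2C(i2c)
tsl2561 = adafruit_tsl2561.TSL2561(i2c)

while True:
    reading = {
        "timestamp": time.time(),
        "light": tsl2561.lux,                    # lux (None if saturated)
        "temperature": bme680.temperature,       # degrees C
        "pressure": bme680.pressure,             # hPa
        "humidity": bme680.relative_humidity,    # % RH
        "airquality": bme680.gas,                # gas resistance in ohms
    }
    print(reading)
    time.sleep(1)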

Voice-based interaction is a powerful way for users to interact with sentient spaces. However, the assumption is that people who want to interact are wearing an AR headset of some sort which itself provides the audio I/O capabilities. Gesture input would be possible via the ZeroSensor's camera. For privacy reasons, video would not be viewed directly or stored but just used as a source of activity and interaction data.

This is the simple rt-ai design used to test the ZeroSensors. The ZeroSynth modules are rt-ai Edge synth modules containing SPEs that interface with the ZeroSensor's hardware and generate a video stream and a sensor data stream. An instance of a video viewer and an instance of a sensor viewer are connected to each ZeroSynth module.

This is the result of running the ZeroSensor test design, showing a video and sensor window for each ZeroSensor. The cameras are staring at the ceiling because the four sensors were on a table. When the correct case is available, they will be deployed in the corners of rooms in the space.

How rt-ai Edge will enable Sentient Spaces

The idea of creating spaces that understand the needs of the people moving within them – Sentient Spaces – has been a long-term personal goal. Our ability today to create sensor data (video, audio, environmental etc.) is incredible. Our ability to make practical use of this enormous body of data is minimal. The question is: how can ubiquitous sensing in a space be harnessed to make the space more functional for the people within it?

rt-ai Edge could be the basis of an answer to this question. It is designed to receive large volumes of multi-sensor data, extract meaningful information and then take control actions as necessary. This closes the local loop without requiring external cloud server interaction. This is important because creating a space with ubiquitous sensing raises all kinds of privacy issues. Because rt-ai Edge keeps all raw data (such as video and audio) within the space, privacy is much less of a concern.

I believe that a key to making a space sentient is to harness artificial intelligence concepts such as online learning of event sequences and anomaly detection. It is not practical for anyone to sit down and program a system to correctly recognize normal behavior in a space and what actions might be helpful as a result. Instead, the system needs to learn what is normal and develop strategies that might be helpful. Reinforcement via user feedback can be used to refine responses.

A trivial example would be someone moving through a dark space at night. It might be helpful to provide light at a suitable intensity to help the person navigate the space safely. The system could deduce this from having seen other people move through the space, turning lights on and off as they go. Meanwhile, face recognition could be employed to see if the person is known to the space and, if not, an assessment could be made as to whether an alert needs to be generated. Finally, a video record of the person moving through the space could be put together from clips assembled from all relevant cameras and stored (on-site) for a time in case it is useful.

Well, that's a trivial example to describe but not at all trivial to implement. However, my goal is to see whether AI techniques can be used to approach this level of functionality. In practical terms, this means developing a series of rt-ai modules using TensorFlow to perform feature extraction, anomaly detection and sequence prediction, glued together with sensor and control modules to form a complete system that requires minimal supervised training to perform useful functions.
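As a simplified starting point for the anomaly detection piece, one option is a TensorFlow sequence autoencoder trained only on windows of normal feature data, flagging windows whose reconstruction error is unusually high. The sketch below is illustrative only: the window length, feature count, threshold and random stand-in training data are placeholders rather than anything from the real design.

import numpy as np
import tensorflow as tf

SEQ_LEN, NUM_FEATURES = 32, 8  # placeholder window size and feature count

# Sequence autoencoder: learns to reconstruct "normal" windows of
# extracted features; poorly reconstructed windows are flagged as anomalies.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    tf.keras.layers.LSTM(32),                          # encode the window
    tf.keras.layers.RepeatVector(SEQ_LEN),
    tf.keras.layers.LSTM(32, return_sequences=True),   # decode it again
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(NUM_FEATURES)),
])
model.compile(optimizer="adam", loss="mse")

# Stand-in for real feature windows known to be normal.
normal_windows = np.random.rand(1024, SEQ_LEN, NUM_FEATURES).astype("float32")
model.fit(normal_windows, normal_windows, epochs=5, batch_size=64, verbose=0)

def is_anomalous(window, threshold=0.1):
    # window: (SEQ_LEN, NUM_FEATURES); threshold tuned on held-out normal data.
    recon = model.predict(window[np.newaxis], verbose=0)[0]
    return float(np.mean((window - recon) ** 2)) > threshold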

The trouble with temperature sensors

Working with the Bosch XDK reminded me that temperature sensing seems like such an obvious concept but is actually very tough to get right. The prototype above was something I tried to build in a startup a few years ago, back when this kind of thing was all the rage. It combined motion sensing with the usual environmental sensors, including air quality, and could have a webcam attached if video coverage of the space was wanted as well.

In this photo of the interior you can see my attempt at getting reasonable results from the temperature sensor by keeping the power and ground planes away from the sensor (the small black chip on the right of the photo). The trouble is that the PCB's FR-4 still conducts heat, as do the remaining copper traces to the chip. Various other attempts followed, including cutting a slot through the FR-4 and isolating the air above the rest of the circuit board from the sensor. This is an example:

SensorBoard1.jpeg

And this is a thermistor design (with some additional wireless hardware):

SensorBoard2.jpeg

In the end, the only solutions were to use a thermistor attached by wires so that it could be kept some distance from the main circuitry, or to separate all of the very low power sensor circuitry completely from the processor.
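For what it's worth, converting a thermistor reading back to a temperature is the easy part; the usual B-parameter equation does the job. The 10k/3950 values below are typical NTC thermistor parameters, not necessarily the part used in this design.

import math

def thermistor_temperature_c(resistance_ohms,
                             r_nominal=10000.0,  # resistance at 25 C (typical 10k NTC)
                             t_nominal_c=25.0,
                             beta=3950.0):       # B value from the datasheet
    # B-parameter (simplified Steinhart-Hart) equation:
    # 1/T = 1/T0 + (1/B) * ln(R/R0), with T in kelvin.
    t_nominal_k = t_nominal_c + 273.15
    inv_t = 1.0 / t_nominal_k + math.log(resistance_ohms / r_nominal) / beta
    return 1.0 / inv_t - 273.15

print(thermistor_temperature_c(8000.0))  # about 30 C for a 10k/3950 part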

The Raspberry Pi Sense HAT suffers from this problem as it sits right above the Pi's processor, as does the Bosch XDK itself. Actually, I am not aware of any other really good solution apart from using a cable to separate the sensor board completely from the processor controlling it (which might work for the Sense HAT, although I have not tried that).

Project Soli

This is a very interesting project from Google ATAP which uses a miniaturized radar device to detect hand gestures. Because it doesn't use structured light, it has the potential to work outdoors and in difficult environments. Experience shows that structured light has a lot of limitations, so radar technology like this is potentially a big step forward.

It has been around for a while – it’ll be interesting to see if it does turn into a real thing.