The ZeroSensor – a sentient space point of presence

One application for rt-ai Edge is ubiquitous sensing leading to sentient spaces – spaces that can interact with people moving through them and provide useful functionality, whether learned or programmed. A step on the road to that is the ZeroSensor, four prototypes of which are shown in the photo. Each ZeroSensor consists of a Raspberry Pi Zero W, a Pi camera module v2, an Adafruit BME680 breakout and an Adafruit TSL2561 breakout. The combination provides a video stream and a sensor stream with light, temperature, pressure, humidity and air quality values. The video stream can be used to derive motion sensing and identification, while the other sensors give a general idea of conditions in the space. Notably missing is audio. Microphone support would be useful for general sensing and I might add it in real devices. A 3D printable case design is underway to allow wide-scale deployment.
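The sensor side of this is easy to reproduce. Below is a minimal sketch of reading the two breakouts from Python using the Adafruit CircuitPython drivers – it is not the actual ZeroSensor code, just an illustration of where the five sensor values come from, and the 2 second sample interval is arbitrary.

```python
import time
import board
import adafruit_bme680
import adafruit_tsl2561

# Both breakouts sit on the Pi Zero's I2C bus
i2c = board.I2C()
bme680 = adafruit_bme680.Adafruit_BME680_I2C(i2c)
tsl2561 = adafruit_tsl2561.TSL2561(i2c)
tsl2561.enabled = True  # make sure the light sensor is powered up

while True:
    print("Temperature: %.1f C" % bme680.temperature)
    print("Humidity:    %.1f %%" % bme680.humidity)
    print("Pressure:    %.1f hPa" % bme680.pressure)
    print("Gas:         %d ohm" % bme680.gas)   # raw gas resistance, used as an air quality proxy
    lux = tsl2561.lux                            # may be None if the sensor is saturated
    print("Light:       %s lux" % ("n/a" if lux is None else "%.1f" % lux))
    time.sleep(2)
```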

Voice-based interaction is a powerful way for users to interact with sentient spaces. However, the assumption here is that people who want to interact are wearing an AR headset of some sort, which itself provides the audio I/O capabilities. Gesture input would be possible via the ZeroSensor’s camera. For privacy reasons, video would not be viewed directly or stored but used only as a source of activity and interaction data.
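As an illustration of that privacy point, here is a sketch of how frames could be reduced to a simple activity signal without ever being written to disk, using OpenCV frame differencing. The capture device index and thresholds are placeholders rather than values from the real design.

```python
import cv2

cap = cv2.VideoCapture(0)       # Pi camera via V4L2; device index is a placeholder
prev = None
MOTION_PIXELS = 500             # arbitrary threshold for how many changed pixels count as activity

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev is not None:
        diff = cv2.absdiff(prev, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > MOTION_PIXELS:
            print("activity detected")  # only the event leaves the device; the frame is discarded
    prev = gray
```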

This is the simple rt-ai design used to test the ZeroSensors. The ZeroSynth modules are rt-ai Edge synth modules containing SPEs that interface with the ZeroSensor’s hardware and generate a video stream and a sensor data stream. Instances of a video viewer and a sensor viewer are connected to each ZeroSynth module.
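rt-ai Edge’s stream format isn’t shown here, but the shape of the sensor SPE is essentially a loop that samples the hardware and emits timestamped messages for downstream SPEs such as the sensor viewer. The sketch below is a stand-in only: it uses JSON over MQTT rather than rt-ai’s own transport, and the topic and field names are invented for illustration.

```python
import json
import time
import paho.mqtt.client as mqtt

# Stand-in transport: JSON over MQTT instead of rt-ai Edge's own stream protocol
client = mqtt.Client()
client.connect("localhost", 1883)
client.loop_start()

def read_sensors():
    # Placeholder values; the real SPE would call the BME680/TSL2561 drivers here
    return {"temperature": 21.5, "humidity": 45.0, "pressure": 1013.2,
            "gas": 120000, "light": 85.0}

while True:
    message = dict(read_sensors(), timestamp=time.time())
    client.publish("zerosensor/sensors", json.dumps(message))  # topic name is illustrative
    time.sleep(2)
```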

This is the result of running the ZeroSensor test design, showing a video and sensor window for each ZeroSensor. The cameras are staring at the ceiling because the four sensors were sitting on a table when this was captured. Once the case is available, they will be deployed in the corners of rooms in the space.

The AwareSpace project

The earlier Smart space post got me thinking about other related projects, and I came across these old screen captures from the AwareSpace project. This was a much more serious attempt to make use of ubiquitous sensor data. It worked fine, giving easy access to real-time and historical data from the sensors. There was even web access to the system. Like many projects, it was never really finished and needed a lot more work to do everything that I wanted. One day…

Smart spaces and IoT data – the challenge is what to do with it

A while back I built some add-on cards for Raspberry Pis to do some environmental monitoring around the house. This is one of them.

The project started collecting dust when I couldn’t really think of good ways of using the data beyond triggering an alarm under certain conditions. However, it’s often interesting just to see what’s going on around the place, so I have revived the sensors (a good use for old first-generation Pis). The screen capture shows a simple but actually quite effective way of using the data that’s being generated: a display adjacent to the camera feed from a webcam on the same Pi. Between the two streams, you can get a good sense of what’s happening in the smart space.
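That kind of display is easy to mock up. Here is a minimal sketch using OpenCV and a dictionary of recent readings – the values, panel size and window layout are invented for illustration, not taken from the actual display code.

```python
import cv2
import numpy as np

# Hypothetical latest readings; on the real Pi these would come from the sensor card
readings = {"Temp": "21.4 C", "Humidity": "44 %", "Pressure": "1012 hPa", "Light": "87 lux"}

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Draw a simple panel alongside the webcam frame rather than overlaying it
    panel = np.zeros((frame.shape[0], 240, 3), dtype=np.uint8)
    for i, (name, value) in enumerate(readings.items()):
        cv2.putText(panel, "%s: %s" % (name, value), (10, 40 + 30 * i),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1)
    cv2.imshow("smart space", np.hstack((frame, panel)))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```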

One day, I’d like to get the HoloLens integrated with this so that I can see the data when I am in the smart space. That would be even more fun.