3DView: visualizing environmental data for sentient spaces

The 3DView app I mentioned in a previous post is moving forward nicely. The screen capture shows the app displaying real-time data from four ZeroSensors, with the data coming from an rt-ai Edge stream processing network via Manifold. The app creates a video window and sensor display panel for each physical device and updates them whenever new messages are received from the ZeroSensor.
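To illustrate the per-device dispatch pattern the app uses, here is a minimal sketch. The actual app is built in Unity, and the client callback, topic names and message fields shown here are hypothetical, not the real Manifold API:

```python
# Minimal sketch of the per-device dispatch pattern (not the real Manifold API).
# Topic names and message fields are hypothetical.
import json

panels = {}  # device id -> display panel state

def get_or_create_panel(device_id):
    # In the real app this would create a Unity video window and sensor panel.
    return panels.setdefault(device_id, {"video_frame": None, "sensors": {}})

def on_message(topic, payload):
    msg = json.loads(payload)
    panel = get_or_create_panel(msg["deviceId"])   # hypothetical field name
    if topic.endswith("/video"):
        panel["video_frame"] = msg["frame"]        # hypothetical field name
    else:
        panel["sensors"].update(msg["values"])     # hypothetical field name

# A Manifold-style client would invoke on_message() for each received stream message.
```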

This is the rt-ai Edge part of the design. All the blocks are synth modules, which speeds up design replication. The four ZeroManifoldSynth modules each contain two PutManifold stream processing elements (SPEs) to inject the video and sensor streams into the Manifold. The ZeroSynth modules contain the video and sensor capture SPEs. The ZeroManifoldSynth modules all run on the default node while the ZeroSynth modules run directly on the ZeroSensors themselves. As always with rt-ai Edge, deployment of new designs or design changes is a one-click action, making this kind of distributed system development much more pleasant.

The Unity graphics elements are basic, reflecting the standard programmer's view: they can always be upgraded later by somebody with artistic talent, but the key is the underlying functionality. The next step is to hang these displays (and other, much more interesting elements) on the walls of a 3D model of the sentient space. Ultimately the idea is that people wearing AR headsets can walk through the sentient space and see the elements persistently positioned in it. In addition, users of the sentient space will be able to instantiate and position elements themselves and also interact with them.

Even more interesting than that is the ability for the sentient space to autonomously instantiate elements in the space based on perceived user actions. This is really the goal of the sentient space concept – to have the sentient space work with the occupants in a natural way (apart from needing an AR headset of course!).

For the moment, I am going to develop this in VR rather than AR. The HoloLens is the only available AR device that can support the level of persistence required, but I'd rather wait for the rumored HoloLens 2 or the Magic Leap One (assuming it has the required multi-room persistence capability).

The ZeroSensor – a sentient space point of presence

One application for rt-ai Edge is ubiquitous sensing leading to sentient spaces – spaces that can interact with the people moving through them and provide useful functionality, whether learned or programmed. A step on the road to that is the ZeroSensor, four prototypes of which are shown in the photo. Each ZeroSensor consists of a Raspberry Pi Zero W, a Pi camera module v2, an Adafruit BME680 breakout and an Adafruit TSL2561 breakout. The combination gives a video stream and a sensor stream with light, temperature, pressure, humidity and air quality values. The video stream can be used to derive motion sensing and identification, while the other sensors provide a general idea of conditions in the space. Notably missing is audio: microphone support would be useful for general sensing and I might add that to real devices. A 3D printable case design is underway to allow wide-scale deployment.
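For reference, reading the environmental sensors on the Pi is straightforward with the Adafruit CircuitPython drivers. This is a minimal sketch, assuming the standard adafruit_bme680 and adafruit_tsl2561 libraries and default I2C wiring; the message layout is just illustrative, not the actual ZeroSensor stream format:

```python
# Sketch of reading the ZeroSensor's environmental sensors on the Pi Zero W.
import time, json
import board, busio
import adafruit_bme680
import adafruit_tsl2561

i2c = busio.I2C(board.SCL, board.SDA)
bme680 = adafruit_bme680.Adafruit_BME680_I2C(i2c)
tsl2561 = adafruit_tsl2561.TSL2561(i2c)

while True:
    reading = {
        "timestamp": time.time(),
        "light": tsl2561.lux,               # lux (None if the sensor is saturated)
        "temperature": bme680.temperature,  # degrees C
        "pressure": bme680.pressure,        # hPa
        "humidity": bme680.humidity,        # percent RH
        "airQuality": bme680.gas,           # gas resistance in ohms
    }
    print(json.dumps(reading))  # a capture SPE would inject this into the sensor stream
    time.sleep(2)
```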

Voice-based interaction is a powerful way for users to interact with sentient spaces. However, it is assumed that people who want to interact are using an AR headset of some sort, which itself provides the audio I/O capabilities. Gesture input would be possible via the ZeroSensor's camera. For privacy reasons, video would not be viewed directly or stored but just used as a source of activity and interaction data.
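To give an idea of how activity data could be derived without ever storing or displaying video, here is a minimal frame-differencing sketch using OpenCV; the thresholds are purely illustrative and this is not the actual ZeroSensor processing:

```python
# Derive a simple activity metric from the camera; only the metric leaves the device.
import cv2

cap = cv2.VideoCapture(0)
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev_gray is not None:
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        activity = cv2.countNonZero(mask) / mask.size  # fraction of pixels that changed
        if activity > 0.01:  # illustrative threshold
            print(f"activity detected: {activity:.3f}")
    prev_gray = gray
```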

This is the simple rt-ai design used to test the ZeroSensors. The ZeroSynth modules are rt-ai Edge synth modules that contain SPEs that interface with the ZeroSensor's hardware and generate a video stream and a sensor data stream. Instances of a video viewer and a sensor viewer are connected to each ZeroSynth module.

This is the result of running the ZeroSensor test design, showing a video and sensor window for each ZeroSensor. The cameras are staring at the ceiling because the four sensors were on a table. When the correct case is available, they will be deployed in the corners of rooms in the space.

Scaling embedded edge inference with rt-ai Edge synth modules

Now that edge devices with embedded inference support are starting to appear, there’s a need for scalable deployment of software and configuration data to these devices. rt-ai Edge can address this scaling requirement using synth modules. Synth modules are composite elements in a stream processing network (SPN) that combine simpler stream processing elements (SPEs) into more complex structures. The idea is that a synth module can be created that contains the SPEs required for a specific type of embedded edge inference device. This synth module can then be deployed, configured and managed for all instances of this type of edge inference device very easily using the rtaiDesigner tool.

The screen capture above is an example of the output from an SPN that includes two differently configured DeepLab v3+ instances along with associated video and audio capture SPEs. The top level SPN looks like this:

There are two synth modules in the design, both instances of the same underlying synth module:

This simple synth module consists of a video capture SPE, an audio capture SPE and the DeepLab v3+ SPE.
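For anyone curious what the DeepLab v3+ SPE has to do per frame, here is a rough segmentation sketch using a frozen DeepLab graph. It assumes the tensor names and 513-pixel input size used by the official TensorFlow DeepLab export; the model path is illustrative and the real SPE's implementation may well differ:

```python
# Sketch of per-frame semantic segmentation with a frozen DeepLab v3+ graph.
import numpy as np
import tensorflow.compat.v1 as tf
from PIL import Image

tf.disable_v2_behavior()  # run in TF1 graph mode

INPUT_SIZE = 513  # standard DeepLab v3+ input size

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:  # path is illustrative
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")
sess = tf.Session(graph=graph)

def segment(frame: Image.Image) -> np.ndarray:
    """Return a per-pixel class map for one video frame."""
    w, h = frame.size
    scale = INPUT_SIZE / max(w, h)
    resized = frame.convert("RGB").resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    seg_map = sess.run("SemanticPredictions:0",
                       feed_dict={"ImageTensor:0": [np.asarray(resized)]})
    return seg_map[0]
```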

As with standard SPEs, synth modules can be allocated to any node in the rt-ai Edge network. The only limitation at present is that all SPEs in an instance of a synth module must run on the same node. This will be relaxed at a later date when automatic SPE placement based on available resources is implemented. A synth module can be instanced multiple times on the same node or on different nodes as required. In this example, two instances of the same synth module were placed on the Default node.

Individual instances of a synth module can be configured in the top level design:

In this case, Synth0 is being configured. Note the tabs in the dialog. There is one tab for each SPE in the underlying synth module. SPE dialogs are auto-generated from a JSON spec in the SPE design directory. This makes it very easy to construct a combined dialog when SPEs are used in a synth module. Any design can be turned into a synth module just by pressing the Generate synth module button. The synth module then becomes available in the Add module dialog just like any other SPE.
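As a rough illustration of the idea (the JSON layout below is hypothetical, not rtaiDesigner's actual spec format), combining per-SPE specs into a tabbed dialog is essentially a matter of concatenation:

```python
# Combine per-SPE dialog specs into one tabbed dialog description.
# The JSON layout shown here is hypothetical.
import json

video_spec = json.loads("""
{"name": "VideoCapture",
 "params": [{"key": "device", "type": "string", "default": "/dev/video0"},
            {"key": "frameRate", "type": "int", "default": 30}]}
""")
deeplab_spec = json.loads("""
{"name": "DeepLab",
 "params": [{"key": "model", "type": "string", "default": "mobilenetv2"}]}
""")

def build_dialog(spe_specs):
    # One tab per SPE, one widget per parameter in that SPE's spec.
    return {spec["name"]: spec["params"] for spec in spe_specs}

print(json.dumps(build_dialog([video_spec, deeplab_spec]), indent=2))
```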

As designs are completely regenerated every time the Generate design button is pressed, internal changes can be made to the synth module at any time and they will be reflected in top level designs the next time that they are generated.

Right now, synth module designs cannot include synth modules, only standard SPEs. If multi-level synth modules were required, it would be a small extension of the current implementation. For now, the ability to reproduce and configure a standard SPN subnetwork multiple times is sufficient to scale most edge inference applications.