Old-school IoT sensing with absolutely no AI

Winter is coming, as one might say. Actually it is here, with freezing temperatures and inches of snow. Time to turn on the heater in the garage. However, if someone leaves a door open, the heater just warms the atmosphere in general and burns through all of the propane. Yes, it would be absolutely trivial to put magnetic sensors on the doors that turn off the heater if a door is open, but that is no fun at all. This has actually been a long-running project with all kinds of interesting solutions, some even including Apache NiFi and other big data things. One solution used Insteon and actually allowed the system to automatically close the doors. However, various disasters could occur, so I turned that system off. In any case, the Insteon door sensors were not reliable.

The root of the problem is that the system really has to understand what is happening in the space, not just the states of various parts of it. Someone could be just about to back a car out when the system decides to shut a door. Not good. A perfect and autonomous solution still seems a little way down the road. A short-term hack seems in order, however!

Since I am working on rt-ai Edge right now, it was natural to think about how to use it to solve this problem. I have some ZeroSensors around, so I put one of them in the garage. I realized that, when the doors are shut, the garage is very dark and the temperature is stable at the thermostat setting (allowing for hysteresis). If a door is open, the garage gets illuminated in one way or another (garage door motor lights, ambient light, outside lights at night, etc.), so the light level is a somewhat reliable, albeit indirect, indicator of door status. Also, the temperature will drop pretty quickly, giving an indication that either a door is open or the heater has failed.

Given how simple this is, I put together a quick filter SPE to process the light and temperature data from a ZeroSensor and send me an email if the doors are seemingly open for more than a preset time. The screen capture above shows what is going on. There are actually a couple of stream processing networks (SPNs) in action in this space, mainly running on the rtai0 node. One of them is the YOLO-based driveway vehicle detection system while the other is the garage environmental sensing network. A node can be running multiple SPNs at the same time – they are ships in the night and can be managed totally independently.
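
To make the heuristic concrete, here is a minimal sketch of the kind of logic the filter applies, covering the light/temperature inference described above and the timed email alert. It is not the actual SensorFilter SPE code; the thresholds, timing and email details are all assumptions.

```python
# Minimal sketch of the door-open heuristic and timed alert; not the actual
# SensorFilter SPE code. Thresholds, timing and email details are assumptions.

import smtplib
import time
from email.message import EmailMessage

LIGHT_THRESHOLD = 50.0       # light level above which the garage counts as lit (assumed)
THERMOSTAT_SETTING = 15.0    # degrees C (assumed)
TEMP_DROP_MARGIN = 2.0       # degrees C below the setting before we worry (assumed)
ALERT_DELAY = 300            # seconds the condition must persist before emailing

door_open_since = None       # when the "doors apparently open" condition started
alert_sent = False

def doors_apparently_open(light, temperature):
    """Indirect inference: a lit garage or a temperature well below the
    thermostat setting suggests a door is open (or the heater has failed)."""
    return (light > LIGHT_THRESHOLD or
            temperature < THERMOSTAT_SETTING - TEMP_DROP_MARGIN)

def send_email(subject, body):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "garage@example.com"       # placeholder addresses
    msg["To"] = "me@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

def process_reading(light, temperature):
    """Called for every light/temperature message from the ZeroSensor."""
    global door_open_since, alert_sent
    if doors_apparently_open(light, temperature):
        if door_open_since is None:
            door_open_since = time.time()
        elif not alert_sent and time.time() - door_open_since > ALERT_DELAY:
            send_email("Garage doors apparently open",
                       "Light and temperature suggest a door has been open "
                       "for more than the preset time.")
            alert_sent = True
    else:
        door_open_since = None
        alert_sent = False
```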


This screen capture shows the configuration dialog for the SensorFilter SPE. I currently also have it set to confirm when the garage doors have been shut so that I know if someone else has taken care of the problem.
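
For illustration, the kinds of settings a filter like this needs might look something like the sketch below. The parameter names and values are hypothetical and do not come from the actual SensorFilter dialog; the confirmWhenShut flag corresponds to the door-closed confirmation mentioned above.

```python
# Hypothetical SensorFilter-style settings; names and values are illustrative only.
sensor_filter_config = {
    "lightThreshold": 50.0,       # light level treated as "garage illuminated"
    "minTemperature": 13.0,       # alert if the temperature falls below this
    "alertDelay": 300,            # seconds the condition must persist before alerting
    "confirmWhenShut": True,      # send a follow-up email once conditions return to normal
    "alertEmail": "me@example.com"
}
```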

Another step would be to turn the heater off if a door is open and the temperature is dropping – right now, I am not planning to allow the door to be shut remotely for the reasons mentioned earlier. At least turning the heater off avoids wasting a load of energy. I can control the heater via Insteon. I would just need another SPE to provide the interface between the filtered alerts and my Insteon server.
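
A minimal sketch of what that interface SPE might do is below, assuming the Insteon server can be driven over a simple HTTP endpoint. The URL scheme, device id and alert field names are placeholders, not the real Insteon server API.

```python
# Sketch of the alert-to-Insteon interface; the HTTP endpoint, device id and
# alert fields are placeholders since the real Insteon server's API may differ.

import urllib.request

INSTEON_SERVER = "http://insteon.local:8080"   # placeholder address
HEATER_DEVICE_ID = "12.34.56"                  # placeholder Insteon device id

def set_heater(on):
    """Ask the Insteon server to turn the garage heater on or off."""
    state = "on" if on else "off"
    url = f"{INSTEON_SERVER}/device/{HEATER_DEVICE_ID}/{state}"  # hypothetical route
    with urllib.request.urlopen(url, timeout=5) as response:
        return response.status == 200

def handle_filtered_alert(alert):
    """React to alerts from the sensor filter: doors open plus a falling
    temperature means the heater is just warming the outdoors."""
    if alert.get("doorsOpen") and alert.get("temperatureFalling"):
        set_heater(False)
    elif alert.get("doorsClosed"):
        set_heater(True)
```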

Still, one day it would be nice to do this properly. This entails understanding the state of the space – things like whether there are any people in it. This is not completely trivial, as the system can’t always see someone inside a car. However, it can assume (until I get an autonomous vehicle) that if a car is moving or has moved, there must be someone driving and, until the system detects that person leaving the space, it must take no autonomous action. It also needs to maintain state in order to be sure that there’s nobody in the space (or in a car just outside the space) before it takes control. I have a feeling that there may be quite a few corner cases with this, but it would be fun to try, even if it only simulates trying to close the doors.
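
To make the idea concrete, here is a very rough sketch of the kind of occupancy state the space would have to maintain before acting. The GarageOccupancy class and the event names are hypothetical and assume person/vehicle detections arriving from other SPEs.

```python
# Rough sketch of occupancy bookkeeping; event names are hypothetical and would
# come from vehicle/person detection SPEs in practice.

class GarageOccupancy:
    def __init__(self):
        self.people_present = 0      # people seen entering and not yet seen leaving
        self.vehicle_active = False  # a car has moved and its driver hasn't been seen leaving

    def on_person_entered(self):
        self.people_present += 1

    def on_person_left(self):
        self.people_present = max(0, self.people_present - 1)
        # If nobody is left, assume the driver has gone too.
        if self.people_present == 0:
            self.vehicle_active = False

    def on_vehicle_moved(self):
        # A moving car implies a driver, even if the cameras can't see them.
        self.vehicle_active = True

    def safe_to_act(self):
        """Only allow autonomous actions (like closing a door) when the system
        is confident nobody is in the space or in a car that just moved."""
        return self.people_present == 0 and not self.vehicle_active
```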

An rt-xr SpaceObjects tour de force

rt-xr SpaceObjects are now working very nicely. It’s easy to create, configure and delete SpaceObjects as needed using the menu switch, which has been placed just above the light switch in my office model above.

The video below shows all of this in operation.

The typical process is to instantiate an object, place and size it, and then attach it to a Manifold stream if it is a Proxy Object. Persistence, sharing and collaboration work for all relevant SpaceObjects across the supported platforms (Windows and macOS desktop, Windows MR, Android and iOS).

This is a good place to leave rt-xr for the moment while I wait for the arrival of some sort of AR headset in order to support local users of an rt-xr enhanced sentient space. Unfortunately, Magic Leap won’t deliver to my zip code (sigh) so that’s that for the moment. Lots of teasers about the HoloLens 2 right now and this might be the best way to go…eventually.

Now the focus moves back to rt-ai Edge. While it is working pretty well, it needs a few bugs fixed and some production modes added (such as auto-starting SPNs when server nodes are started). Then begins the process of data collection for machine learning. ZeroSensors will collect data from each monitored room and this will be saved by ManifoldStore for later use. The idea is to classify normal and abnormal situations and also to be proactive in responding to the needs of occupants of the sentient space.

3DView: visualizing environmental data for sentient spaces

The 3DView app I mentioned in a previous post is moving forward nicely. The screen capture shows the app displaying real time data from four ZeroSensors, with the data coming from an rt-ai Edge stream processing network via Manifold. The app creates a video window and sensor display panel for each physical device and then updates the data whenever new messages are received from the ZeroSensor.
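
The update logic amounts to simple per-device dispatch. The sketch below shows the pattern in Python rather than the app's actual Unity C#; the message fields and the DisplayPanel stand-in are assumptions.

```python
# Per-device bookkeeping of the kind 3DView performs, sketched in Python rather
# than the app's Unity C#. Message fields and DisplayPanel are assumptions.

class DisplayPanel:
    """Stand-in for the video window plus sensor readout created per device."""
    def __init__(self, device_name):
        self.device_name = device_name

    def update(self, temperature, humidity, light, pressure):
        print(f"{self.device_name}: {temperature} C, {humidity} %RH, "
              f"{light} lux, {pressure} hPa")

panels = {}   # one display panel per physical ZeroSensor

def on_sensor_message(message):
    """Called whenever a sensor message arrives from Manifold."""
    device = message["deviceName"]
    if device not in panels:
        panels[device] = DisplayPanel(device)   # first message: create the panel
    panels[device].update(message["temperature"], message["humidity"],
                          message["light"], message["pressure"])
```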

This is the rt-ai Edge part of the design. All of the blocks are synth modules, used to speed up design replication. The four ZeroManifoldSynth modules each contain two PutManifold stream processing elements (SPEs) to inject the video and sensor streams into the Manifold. The ZeroSynth modules contain the video and sensor capture SPEs. The ZeroManifoldSynth modules all run on the default node while the ZeroSynth modules run directly on the ZeroSensors themselves. As always with rt-ai Edge, deployment of new designs or design changes is a one-click action, making this kind of distributed system development much more pleasant.

The Unity graphics elements are basic, as I take the standard programmer’s view of such things: they can always be upgraded later by somebody with artistic talent, but the key is the underlying functionality. The next step is to hang these displays (and other much more interesting elements) on the walls of a 3D model of the sentient space. Ultimately the idea is that people can walk through the sentient space using AR headsets and see the elements persistently positioned in it. In addition, users of the sentient space will be able to instantiate and position elements themselves and also interact with them.

Even more interesting than that is the ability for the sentient space to autonomously instantiate elements in the space based on perceived user actions. This is really the goal of the sentient space concept – to have the sentient space work with the occupants in a natural way (apart from needing an AR headset of course!).

For the moment, I am going to develop this in VR rather than AR. The HoloLens is the only available AR device that can support the level of persistence required but I’d rather wait for the rumored HoloLens 2 or the Magic Leap One (assuming it has the required multi-room persistence capability).

Why not just use NiFi and MiNiFi instead of rt-ai Edge?

Any time I start a project I always wonder if I am just reinventing the wheel. After all, there is so much software out there (on GitHub and elsewhere) that almost everything already exists in some form. The most obvious analogs to rt-ai Edge are Apache NiFi and Apache MiNiFi. NiFi provides a very rich environment of processor blocks and great tools for joining them together to create stream processing pipelines. However, there are some characteristics of NiFi that I don’t particularly like. One is the reliance on the JVM and the consequent garbage collection issues that mess up latency guarantees. Tuning a NiFi installation can be a bit tricky – check here for example. However, many of these things are the price that is inevitably paid for having such a rich environment.

rt-ai Edge was designed to be a much simpler and lower overhead way of creating flexible stream processing pipelines in edge processors, with low latency connections and no garbage collection issues. That isn’t to say that an rt-ai Edge pipeline module could not be written in a managed memory language if desired (it certainly could), just that the infrastructure itself does not suffer from this problem.

In fact, rt-ai Edge and NiFi can play together extremely well. rt-ai Edge is ideal at the edge, NiFi is ideal at the core. While MiNiFi is the NiFi solution for embedded and edge processors, rt-ai Edge can either replace or work with MiNiFi to feed into a NiFi core. So maybe it’s not a case of reinventing the wheel so much as making the wheel more effective.

rt-ai: real time stream processing and inference at the edge enables intelligent IoT

The “rt” part of rt-ai doesn’t just stand for “richardstech” for a change; it also stands for “real-time”. Real-time inference at the edge will allow decision making in the local loop with low latency and no dependence on the cloud. rt-ai includes a flexible and intuitive infrastructure for joining together stream processing pipelines in distributed environments with restricted processing power. It is very easy for anyone to add new pipeline elements that fully integrate with rt-ai pipelines. This leverages some of the concepts originally prototyped in rtndf, while other parts of the rt-ai infrastructure have been in 24/7 use for several years, proving their intrinsic reliability.

Edge processing and control is essential if there is to be scalable use of intelligent IoT. I believe that dumb IoT, where everything has to be sent to a cloud service for processing, is a broken and unscalable model. The bandwidth requirements alone of sending all the data back to a central point will rapidly become unworkable, and latency guarantees are difficult to impossible in this model. Two advantages of rt-ai (keeping raw data at the edge where it belongs while upstreaming only salient information to the cloud, and minimizing the CPU cycles required in power-constrained environments) are the keys to scalable intelligent IoT.

DroNet – flying a drone using data from cars and bikes

Fascinating video about a system that teaches a drone to fly around urban environments using data collected from cars and bikes for training. There’s a paper here and code here. It’s a great example of leveraging CNNs in embedded environments. I believe that moving AI and ML to the edge, and ultimately into devices such as IoT sensors, is going to be very important. Having dumb sensor networks and edge devices just means that an enormous amount of worthless data has to be transferred into the cloud for processing. Instead, if the edge devices can perform extensive semantic mining of the raw data, only highly salient information needs to be communicated back to the core, massively reducing bandwidth requirements and also allowing low latency decision making at the edge.

Take as a trivial example a system of cameras that read vehicle license plates. One solution would be to send the raw video back to somewhere central for license number extraction. Alternatively, if the cameras themselves could extract the data, then only the recognized numbers and letters need to be transferred, along with possibly an image of the plate. That’s a massive bandwidth saving over sending constant compressed video. Even more interesting would be edge systems that can perform unsupervised learning to optimize performance, all moving towards eliminating noise and recognizing what’s important without extensive human oversight.
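
As a rough back-of-envelope illustration (all the numbers below are assumptions, not measurements), the difference in upstream bandwidth looks something like this:

```python
# Back-of-envelope comparison of upstream bandwidth for one camera over a day.
# Every number here is an assumption purely for illustration.

SECONDS_PER_DAY = 24 * 60 * 60

# Option 1: stream compressed video continuously at ~2 Mbit/s.
video_bytes_per_day = 2_000_000 / 8 * SECONDS_PER_DAY

# Option 2: the camera reads the plates itself and sends, say, 1000 detections
# a day, each a short text record plus a 50 kB plate image.
detections_per_day = 1000
edge_bytes_per_day = detections_per_day * (100 + 50_000)

print(f"raw video:       {video_bytes_per_day / 1e9:.1f} GB/day")
print(f"edge extraction: {edge_bytes_per_day / 1e6:.1f} MB/day")
print(f"reduction:       ~{video_bytes_per_day / edge_bytes_per_day:.0f}x")
```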

The Bitcoin Lightning Network

Just started finding out about the Lightning system, which is designed to avoid some of the key inefficiencies of Bitcoin. I’ve always thought the excitement about Bitcoin to be a bit strange given its seven transactions per second throughput limit, absurd mining energy requirements, the need to effectively bribe miners with a fee in order to get a transaction included in the blockchain, and the ever-increasing (and already quite large) blockchain itself.

As described here, Lightning helps solve the scaling problem by implementing a network of micropayment channels that bypass the blockchain. Something that I have been interested in is how to use blockchain technology in large IoT networks that generate large amounts of data in real time. The benefit would be an immutable (append-only) record of everything that happened that could be relied upon for accuracy and would not be amenable to after-the-fact modification. This record could be of evidential quality. Whether Lightning (or similar technology) can help with this is the question.

Years ago I was working on a tamper-proof video system for surveillance cameras using steganography – basically, every frame included embedded overlapping error-correcting codes that could survive compression and, by checking the syndromes, indicate on a grid where a frame had been modified. By using the frame timestamp as the data in the code, encrypted with a private key embedded in the source camera, completeness of the video record could also be determined. What’s interesting is whether blockchain technology can be similarly leveraged to solve this and related problems.