Getting email alerts from the YOLOv3-based driveway detection system

The YOLOv3-based driveway detection system is now running full-time to see how workable it is in real life. The associated rtaiDesigner design looks like this:

It has a new SPE called SendEmail that, well, does exactly that. The YOLOFilter SPE has been modified so that it also attaches a frame from the video captured during the detection. The SendEmail SPE then creates an email with the text message generated by YOLOFilter and attaches the image. The screen capture at the top shows an example of the email that is sent.
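As a rough illustration of what an email-sending SPE has to do, here is a minimal Python sketch using the standard library's smtplib and email modules. The SMTP host, addresses, and function name are placeholders, not rt-ai's actual configuration:

```python
import smtplib
from email.message import EmailMessage

# All addresses and the SMTP host are placeholders, not rt-ai's actual config.
def send_detection_email(text, jpeg_bytes,
                         smtp_host="smtp.example.com",
                         sender="alerts@example.com",
                         recipient="me@example.com"):
    msg = EmailMessage()
    msg["Subject"] = "Driveway detection"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(text)                       # the message generated by YOLOFilter
    msg.add_attachment(jpeg_bytes,              # the captured video frame
                       maintype="image", subtype="jpeg",
                       filename="detection.jpg")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```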

SendEmail queues up messages if detections occur at more than a preset rate, so that the total email rate is limited. Once the hold-off timeout expires, a single email is sent containing all the detections that were queued up in the meantime.
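The queuing behavior amounts to a simple hold-off rate limiter. A sketch of the idea (the class and method names are mine, not the actual SendEmail implementation):

```python
import time

class EmailRateLimiter:
    """Queues detections and flushes them as a single email after a hold-off
    period, so the overall email rate never exceeds one per hold-off interval."""

    def __init__(self, send_func, holdoff_seconds=300):
        self.send_func = send_func              # e.g. send_detection_email above
        self.holdoff = holdoff_seconds
        self.last_sent = float("-inf")          # so the first detection sends at once
        self.pending = []                       # (text, frame) tuples queued during hold-off

    def on_detection(self, text, frame):
        now = time.monotonic()
        if not self.pending and now - self.last_sent >= self.holdoff:
            self.send_func(text, frame)         # under the rate limit: send immediately
            self.last_sent = now
        else:
            self.pending.append((text, frame))  # over the limit: queue for later

    def poll(self):
        """Call periodically; sends one combined email once the hold-off expires."""
        now = time.monotonic()
        if self.pending and now - self.last_sent >= self.holdoff:
            combined = "\n".join(text for text, _ in self.pending)
            _, last_frame = self.pending[-1]    # attach the most recent frame
            self.send_func(combined, last_frame)
            self.pending.clear()
            self.last_sent = now
```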

It is also possible to look at the historical data to see what actually transpired. The PutManifold SPE passes the video data and YOLO metadata to ManifoldStore for long-term storage. The rtaiView app can then be used to look back over the data. The screen capture above shows a frame from the same sequence displayed in rtaiView and the associated YOLO metadata. It’s all working quite well, actually.
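The post doesn't show the on-disk format, but a per-frame YOLO metadata record stored alongside the video might look something like this (field names are hypothetical; the actual format PutManifold sends to ManifoldStore may differ):

```python
# Hypothetical per-frame metadata record; the real ManifoldStore format
# is not documented here and may differ.
detection_record = {
    "timestamp": "2018-11-02T14:31:07.482Z",   # capture time of the frame
    "source": "driveway-camera",
    "detections": [
        {
            "label": "car",                    # YOLOv3 class label
            "confidence": 0.91,
            "box": {"x": 412, "y": 208, "width": 320, "height": 180},
        },
    ],
}
```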

Why not just use NiFi and MiNiFi instead of rt-ai Edge?

Any time I start a project I always wonder if I am just reinventing the wheel. After all, there is so much software out there (on GitHub and elsewhere) that almost everything already exists in some form. The most obvious analog to rt-ai Edge is Apache NiFi and Apache MiNiFi. NiFi provides a very rich environment of processor blocks and great tools for joining them together to create stream processing pipelines. However, there are some characteristics of NiFi that I don’t particularly like. One is the reliance on the JVM and the consequent garbage collection pauses that undermine latency guarantees. Tuning a NiFi installation can be a bit tricky – check here for example. That said, many of these things are the price that is inevitably paid for having such a rich environment.

rt-ai Edge was designed to be a much simpler and lower-overhead way of creating flexible stream processing pipelines in edge processors, with low latency connections and no garbage collection issues. That isn’t to say that an rt-ai Edge pipeline module couldn’t be written in a managed-memory language if desired (it certainly could); rather, the infrastructure itself does not suffer from this problem.

In fact, rt-ai Edge and NiFi can play together extremely well: rt-ai Edge is ideal at the edge, while NiFi is ideal at the core. MiNiFi is the NiFi solution for embedded and edge processors, but rt-ai Edge can either replace it or work alongside it to feed into a NiFi core. So maybe it’s not a case of reinventing the wheel so much as making the wheel more effective.
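For example, an edge module could upstream salient detections to a NiFi core by POSTing JSON to a ListenHTTP processor. A sketch, with a placeholder endpoint (ListenHTTP's default base path is "contentListener", but the host and port here are assumptions):

```python
import json
import urllib.request

# Hypothetical NiFi ListenHTTP endpoint; host and port are placeholders.
NIFI_ENDPOINT = "http://nifi-core.example.com:9090/contentListener"

def upstream_to_nifi(record, url=NIFI_ENDPOINT):
    """POST a salient detection record (a JSON-serializable dict) to the NiFi core."""
    data = json.dumps(record).encode("utf-8")
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status                      # 200 means NiFi accepted the data
```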

rt-ai: real time stream processing and inference at the edge enables intelligent IoT

The “rt” part of rt-ai doesn’t just stand for “richardstech” for a change; it also stands for “real-time”. Real-time inference at the edge will allow decision making in the local loop with low latency and no dependence on the cloud. rt-ai includes a flexible and intuitive infrastructure for joining together stream processing pipelines in distributed, restricted-processing-power environments, and it is very easy for anyone to add new pipeline elements that fully integrate with rt-ai pipelines. This leverages some of the concepts originally prototyped in rtndf, while other parts of the rt-ai infrastructure have been in 24/7 use for several years, proving their intrinsic reliability.
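The rt-ai module API itself isn't shown in this post, but the general shape of a pipeline element is simple: receive a message on an input pipe, transform or filter it, and forward it downstream. A purely hypothetical skeleton, with every name illustrative:

```python
# Purely hypothetical skeleton of a stream processing element; the real rt-ai
# module API is not shown in this post, so all names here are illustrative.
class ExampleFilter:
    """Receives a message from the upstream pipe, filters it, forwards it."""

    def __init__(self, send_downstream):
        self.send_downstream = send_downstream  # callback wired up by the pipeline

    def on_message(self, metadata, payload):
        # Inspect the metadata, then pass qualifying data to the next element
        if metadata.get("confidence", 0.0) > 0.5:
            self.send_downstream(metadata, payload)
```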

Edge processing and control is essential if there is to be scalable use of intelligent IoT. I believe that dumb IoT, where everything has to be sent to a cloud service for processing, is a broken and unscalable model. The bandwidth requirements alone of sending all the data back to a central point will rapidly become unworkable, and latency guarantees are difficult or impossible in this model. Two advantages of rt-ai (keeping raw data at the edge where it belongs while upstreaming only salient information to the cloud, and minimizing the CPU cycles required in power-constrained environments) are the keys to scalable intelligent IoT.
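A back-of-envelope calculation makes the bandwidth point concrete (camera count and per-camera bitrate are assumed values, not measurements from the system described above):

```python
# Back-of-envelope only: both figures below are assumptions for illustration.
cameras = 1000
mbps_per_camera = 4                      # a typical 1080p H.264 stream

total_mbps = cameras * mbps_per_camera
print(f"Raw upstream bandwidth: {total_mbps} Mbps ({total_mbps / 1000:.1f} Gbps)")
# -> Raw upstream bandwidth: 4000 Mbps (4.0 Gbps)
```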