An rt-xr SpaceObjects tour de force

rt-xr SpaceObjects are now working very nicely. It’s easy to create, configure and delete SpaceObjects as needed using a menu switch that has been placed just above the light switch in my office model.

The video below shows all of this in operation.

The typical process is to instantiate an object, place and size it, and then attach it to a Manifold stream if it is a Proxy Object. Persistence, sharing and collaboration work for all relevant SpaceObjects across the supported platforms (Windows and macOS desktop, Windows MR, Android and iOS).

This is a good place to leave rt-xr for the moment while I wait for the arrival of some sort of AR headset in order to support local users of an rt-xr enhanced sentient space. Unfortunately, Magic Leap won’t deliver to my zip code (sigh) so that’s that for the moment. Lots of teasers about the HoloLens 2 right now and this might be the best way to go…eventually.

Now the focus moves back to rt-ai Edge. While this is working pretty well, it needs a few bugs fixed and some production modes added (such as auto-starting SPNs when server nodes are started). Then begins the process of data collection for machine learning. ZeroSensors will collect data from each monitored room and this will be saved by ManifoldStore for later use. The idea is to classify normal and abnormal situations and also to be proactive in responding to the needs of occupants of the sentient space.

3DView: visualizing environmental data for sentient spaces

The 3DView app I mentioned in a previous post is moving forward nicely. The screen capture shows the app displaying real time data from four ZeroSensors, with the data coming from an rt-ai Edge stream processing network via Manifold. The app creates a video window and sensor display panel for each physical device and then updates the data whenever new messages are received from the ZeroSensor.

This is the rt-ai Edge part of the design. All the blocks are synth modules to speed design replication. The four ZeroManifoldSynth modules each contain two PutManifold stream processing elements (SPEs) to inject the video and sensor streams into the Manifold. The ZeroSynth modules contain the video and sensor capture SPEs. The ZeroManifoldSynth modules all run on the default node while the ZeroSynth modules run directly on the ZeroSensors themselves. As always with rt-ai Edge, deployment of new designs or design changes is a one-click action, making this kind of distributed system development much more pleasant.
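The PutManifold SPE interface isn’t shown here, so the following is just a minimal Python sketch of what an SPE that injects sensor messages into a Manifold-style stream might look like. The class name, the message framing and the queue standing in for the Manifold transport are all assumptions, not the real rt-ai Edge API.

```python
import json
import queue
import time

class PutManifoldSPE:
    """Hypothetical sketch of a stream processing element (SPE) that
    injects sensor messages into a Manifold-style pub/sub stream.
    A queue stands in for the Manifold transport here."""

    def __init__(self, stream_name, transport):
        self.stream_name = stream_name
        self.transport = transport  # stand-in for a Manifold endpoint

    def put(self, payload):
        # Frame the payload with the stream name and a timestamp so
        # downstream SPEs can route and order messages.
        message = {
            "stream": self.stream_name,
            "timestamp": time.time(),
            "data": payload,
        }
        self.transport.put(json.dumps(message))

# Usage: one SPE per sensor stream, as in the ZeroManifoldSynth modules.
manifold = queue.Queue()
spe = PutManifoldSPE("zerosensor1/sensor", manifold)
spe.put({"temperature": 21.5, "light": 340})
print(json.loads(manifold.get())["stream"])  # zerosensor1/sensor
```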

The Unity graphics elements are basic, as I take the standard programmer’s view: visuals can always be upgraded later by somebody with artistic talent, but the key is the underlying functionality. The next step moving forward is to hang these displays (and other much more interesting elements) on the walls of a 3D model of the sentient space. Ultimately the idea is that people can walk through the sentient space using AR headsets and see the elements persistently positioned in the sentient space. In addition, users of the sentient space will be able to instantiate and position elements themselves and also interact with them.

Even more interesting than that is the ability for the sentient space to autonomously instantiate elements in the space based on perceived user actions. This is really the goal of the sentient space concept – to have the sentient space work with the occupants in a natural way (apart from needing an AR headset of course!).

For the moment, I am going to develop this in VR rather than AR. The HoloLens is the only available AR device that can support the level of persistence required but I’d rather wait for the rumored HoloLens 2 or the Magic Leap One (assuming it has the required multi-room persistence capability).

Why not just use NiFi and MiNiFi instead of rt-ai Edge?

Any time I start a project I always wonder if I am just reinventing the wheel. After all, there is so much software out there (on GitHub and elsewhere) that almost everything already exists in some form. The most obvious analog to rt-ai Edge is Apache NiFi and Apache MiNiFi. NiFi provides a very rich environment of processor blocks and great tools for joining them together to create stream processing pipelines. However, there are some characteristics of NiFi that I don’t particularly like. One is the reliance on the JVM and the consequent garbage collection issues that mess up latency guarantees. Tuning a NiFi installation can also be a bit tricky – check here for example. Many of these things, though, are the price that is inevitably paid for having such a rich environment.

rt-ai Edge was designed to be a much simpler and lower overhead way of creating flexible stream processing pipelines in edge processors with low latency connections and no garbage collection issues. That isn’t to say that an rt-ai Edge pipeline module could not be written using a managed memory language if desired (it certainly could) but instead that the infrastructure does not suffer from this problem.

In fact, rt-ai Edge and NiFi can play together extremely well. rt-ai Edge is ideal at the edge, NiFi is ideal at the core. While MiNiFi is the NiFi solution for embedded and edge processors, rt-ai Edge can either replace or work with MiNiFi to feed into a NiFi core. So maybe it’s not a case of reinventing the wheel so much as making the wheel more effective.

rt-ai: real time stream processing and inference at the edge enables intelligent IoT

The “rt” part of rt-ai doesn’t just stand for “richardstech” for a change, it also stands for “real-time”. Real-time inference at the edge will allow decision making in the local loop with low latency and no dependence on the cloud. rt-ai includes a flexible and intuitive infrastructure for joining together stream processing pipelines in distributed, restricted processing power environments. It is very easy for anyone to add new pipeline elements that fully integrate with rt-ai pipelines. This leverages some of the concepts originally prototyped in rtndf while other parts of the rt-ai infrastructure have been in 24/7 use for several years, proving their intrinsic reliability.

Edge processing and control is essential if there is to be scalable use of intelligent IoT. I believe that dumb IoT, where everything has to be sent to a cloud service for processing, is a broken and unscalable model. The bandwidth requirements alone of sending all the data back to a central point will rapidly become unworkable, and latency guarantees are difficult to impossible in this model. rt-ai addresses this in two ways: it keeps raw data at the edge where it belongs, upstreaming only salient information to the cloud, and it minimizes required CPU cycles in power constrained environments. These are the keys to scalable intelligent IoT.
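The “upstream only salient information” idea can be illustrated with a toy salience filter: an edge node reports a reading only when it has changed meaningfully since the last report. The threshold scheme below is purely illustrative; a real deployment would pick salience criteria per sensor type.

```python
def salient_updates(readings, threshold):
    """Yield only readings that differ from the last reported value
    by more than `threshold` -- a toy salience filter of the kind an
    edge node might apply before upstreaming data to the cloud."""
    last = None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            yield r
            last = r

# Six raw temperature samples collapse to three salient reports.
raw = [20.0, 20.1, 20.05, 22.3, 22.4, 25.0]
print(list(salient_updates(raw, 1.0)))  # [20.0, 22.3, 25.0]
```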

DroNet – flying a drone using data from cars and bikes

Fascinating video about a system that teaches a drone to fly around urban environments using data from cars and bikes as training data. There’s a paper here and code here. It’s a great example of leveraging CNNs in embedded environments. I believe that moving AI and ML to the edge and ultimately into devices such as IoT sensors is going to be very important. Having dumb sensor networks and edge devices just means that an enormous amount of worthless data has to be transferred into the cloud for processing. Instead, if the edge devices can perform extensive semantic mining of the raw data, only highly salient information needs to be communicated back to the core, massively reducing bandwidth requirements and also allowing low latency decision making at the edge.

Take as a trivial example a system of cameras that read vehicle license plates. One solution would be to send the raw video back to somewhere for license number extraction. Alternatively, if the cameras themselves could extract the data, then only the recognized numbers and letters need to be transferred, along with possibly an image of the plate. That’s a massive bandwidth saving over sending constant compressed video. Even more interesting would be edge systems that can perform unsupervised learning to optimize performance, all moving towards eliminating noise and recognizing what’s important without extensive human oversight.
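A rough back-of-the-envelope calculation makes the saving concrete. All the rates below are assumed for illustration, not measured from any real system.

```python
# Back-of-the-envelope bandwidth comparison for the license plate
# example. All figures are illustrative assumptions.
VIDEO_BITRATE_BPS = 2_000_000       # assumed compressed video stream
PLATE_EVENT_BYTES = 200 + 20_000    # plate text plus one JPEG crop
EVENTS_PER_HOUR = 120               # assumed traffic rate

video_bytes_per_hour = VIDEO_BITRATE_BPS / 8 * 3600
edge_bytes_per_hour = PLATE_EVENT_BYTES * EVENTS_PER_HOUR

print(f"raw video: {video_bytes_per_hour / 1e6:.0f} MB/h")        # 900 MB/h
print(f"edge extraction: {edge_bytes_per_hour / 1e6:.1f} MB/h")   # 2.4 MB/h
print(f"saving: {video_bytes_per_hour / edge_bytes_per_hour:.0f}x")
```

Under these assumptions the edge approach cuts upstream bandwidth by a factor of a few hundred, and the gap only widens with more cameras or higher video bitrates.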

The Bitcoin Lightning Network

Just started finding out about the Lightning system, which is designed to avoid some of the key inefficiencies of Bitcoin. I’ve always thought the excitement about Bitcoin a bit strange given its throughput limit of roughly seven transactions per second, its absurd mining energy requirements, the need to effectively bribe miners with a fee to get a transaction included in the blockchain, and the ever increasing (and already quite large) blockchain itself.

As described here, Lightning helps solve the scaling problem by implementing a network of micropayment channels that bypass the blockchain. Something that I have been interested in is how to use blockchain technology in large IoT networks that can generate large amounts of data in real time. The benefit would be an immutable (append-only) record of everything that happened that could be relied upon for accuracy and not amenable to after the fact modification. This record could be of evidential quality. Whether Lightning (or similar technology) can help with this is the question.
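The append-only, tamper-evident property can actually be sketched without any blockchain consensus machinery at all: a simple hash chain, where each entry commits to its predecessor, already makes after-the-fact modification detectable. The sketch below shows the data structure only; an evidential-quality system would additionally need distributed consensus or trusted timestamping so a single party couldn’t rebuild the whole chain.

```python
import hashlib
import json

def append_record(chain, payload):
    """Append a record to a toy hash-chained log. Each entry commits
    to the previous entry's hash, so modifying any record breaks
    every later link."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    entry = {"prev": prev_hash, "payload": payload,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every link; return False if any entry was tampered with."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev_hash, "payload": entry["payload"]},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(body.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"sensor": "zero1", "temp": 21.5})
append_record(log, {"sensor": "zero1", "temp": 21.7})
print(verify(log))               # True
log[0]["payload"]["temp"] = 99   # after-the-fact modification...
print(verify(log))               # False -- the chain exposes it
```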

Years ago I was working on a tamper-proof video system for surveillance cameras using steganography – basically every frame included embedded overlapping error correcting codes that could survive compression and indicate on a grid where a frame had been modified by checking the syndromes. By using the frame timestamp as the data in the code, encrypted with a private key embedded in the source camera, completeness of the video record could also be determined. What’s interesting is whether blockchain technology can be similarly leveraged to solve this and related problems.
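As a simplified sketch of that completeness check: if each frame carries an authenticated tag over its timestamp, a verifier can detect both modified frames and missing frames. An HMAC with a per-camera key stands in here for the embedded private-key encryption, and the steganographic embedding and error correcting codes are omitted entirely; only the timestamp logic is shown.

```python
import hashlib
import hmac

CAMERA_KEY = b"per-camera secret"  # stands in for the embedded private key

def frame_tag(timestamp_ms):
    """Tag that would be embedded (steganographically) in each frame.
    An HMAC over the timestamp stands in for a real signature."""
    return hmac.new(CAMERA_KEY, str(timestamp_ms).encode(),
                    hashlib.sha256).hexdigest()

def check_sequence(frames, interval_ms):
    """Verify each frame's tag and that no frames are missing:
    consecutive timestamps must differ by exactly one frame interval."""
    prev_ts = None
    for ts, tag in frames:
        if not hmac.compare_digest(tag, frame_tag(ts)):
            return "tampered frame at %d" % ts
        if prev_ts is not None and ts - prev_ts != interval_ms:
            return "gap before %d" % ts
        prev_ts = ts
    return "ok"

frames = [(t, frame_tag(t)) for t in range(0, 132, 33)]  # ~30 fps
print(check_sequence(frames, 33))  # ok
del frames[2]                      # drop a frame from the record
print(check_sequence(frames, 33))  # gap before 99
```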

Project Sopris

Interesting piece here about Microsoft’s Project Sopris. There’s also an interesting paper giving background to the approach. It’s become a thing when people talk about IoT to slam its security but, in the end, this is just another engineering problem to solve. If IoT-class processors were available with built-in security features that were easy to use without being a security expert, I’d like to think that people would use them and this barrier to implementation would be removed permanently.