I am fully aware that this probably means that I have too much time on my hands but, having read a few NTSB accident reports recently, I do find them to have considerable educational value in demonstrating how systems can go wrong in sometimes surprising ways. Take this one, for example. It contains a detailed description of how an autonomous vehicle tried to deal with a situation when its sensors were giving inconsistent information. Or this one, which contains a very detailed description of what went wrong to cause the collapse of a new pedestrian bridge in Florida last year. Many are concerned with aviation accidents. This report is an example.
I believe that a tremendous amount can be learned from these reports about expected and unexpected failure modes of complex systems, especially those where humans are part of the loop. The hope, of course, is that by understanding what went wrong, these failures, and the consequent loss of life, will never be repeated.
For reference, this is a list of recent reports while the @NTSB_Newsroom Twitter feed is a good way of keeping up to date.
As a thought experiment, I considered how rt-ai Edge could be used to implement a next-generation gym. The thought was sparked by Orangetheory, which makes nice use of technology to enhance the gym experience. The question was: where next? My answer is here: rt-ai smart gym. It would be fun to implement some of these ideas!
Very interesting video in the style of the original A-ha Take On Me Video.
Found this old (from January 2013) and rather dull video of me driving a robot using the touchscreen on an Android tablet. It came to mind because the current project is using some of the software originally developed for this.
The glove controlled robot worked a lot better 🙂
These days, machine learning techniques have led to the ability to create very realistic but fake video and audio that can be tough to distinguish from the real thing. The video above shows a very interesting example of this capability. The problem with this technology is that it will become impossible to determine if anything is genuine at all. What’s needed is some verification that a video of someone (for example) really is that person. Blockchain technology would seem to provide a solution for this.
Many years ago I was working on a digital watermarking-based system for detecting tampering in video records. Essentially, this embedded error-correcting codes in each frame that could be used to determine if any region of a frame had been modified after the digital watermark had been added. Cameras would add the digital watermark at source, limiting the opportunity for modification prior to watermarking.
One problem with this is that it worked on a frame by frame basis but didn’t ensure the integrity of an entire sequence. In theory this could be done with temporally distributed watermarks but blockchain technology provides a very nice alternative.
A simple strategy would be to have the sensor (camera, microphone, motion detector, whatever) create a hash for each unit of data (a video frame, a chunk of audio, etc.) and add this to a blockchain. Then a review app could create new hashes from the sensor data itself (stored elsewhere) and compare them to those in the blockchain. It could also determine that the account owner or device is who or what it is supposed to be in order to avoid spoofing. It’s easy to envisage an Ethereum smart contract being the basis of such a system.
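The record-and-verify idea above can be sketched in a few lines of Python. This is a minimal illustration only: a plain list stands in for the blockchain, and all names (`record`, `verify`, etc.) are hypothetical, not part of any real system.

```python
import hashlib

def hash_unit(data: bytes) -> str:
    """Hash one unit of sensor data (e.g. a video frame or audio chunk)."""
    return hashlib.sha256(data).hexdigest()

# A simple append-only list standing in for the real blockchain.
ledger = []

def record(frame: bytes) -> None:
    """Camera side: hash each frame at source and append to the ledger."""
    ledger.append(hash_unit(frame))

def verify(frames) -> list:
    """Review side: re-hash the stored frames and report the indices
    whose hashes no longer match the ledger entries."""
    return [i for i, f in enumerate(frames)
            if i >= len(ledger) or hash_unit(f) != ledger[i]]

# Record three frames, then check a copy in which one frame was edited.
frames = [b"frame-0", b"frame-1", b"frame-2"]
for f in frames:
    record(f)

stored = [b"frame-0", b"frame-1 (edited)", b"frame-2"]
print(verify(stored))  # → [1]
```

In a real deployment the ledger writes would of course be transactions against a blockchain (e.g. via a smart contract) rather than appends to a local list, and the hashes would be signed to tie them to a specific device.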
One issue with this is the potential rate at which hashes need to be added to the blockchain. This rate could be reduced by collecting more data (e.g. accumulating one second’s worth of data to generate one hash) or creating a hash of hashes at an appropriate rate. The only downside to this is losing temporal resolution of where changes have been made.
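The hash-of-hashes idea can be sketched as follows: compute per-frame hashes as before, but only write one combined hash per group to the blockchain. The group size and function names are illustrative assumptions, not a real design.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def hash_of_hashes(frames, group_size=30):
    """Reduce the ledger write rate: emit one combined hash per
    group_size frames (roughly one per second at 30 fps), at the cost
    of only localizing tampering to the group, not the exact frame."""
    per_frame = [sha256(f) for f in frames]
    combined = []
    for i in range(0, len(per_frame), group_size):
        group = "".join(per_frame[i:i + group_size]).encode()
        combined.append(sha256(group))
    return combined

frames = [f"frame-{i}".encode() for i in range(90)]
print(len(hash_of_hashes(frames)))  # → 3 ledger entries instead of 90
```

Editing any single frame changes its group's combined hash, so tampering is still detected, just with one-second rather than per-frame resolution.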
It’s worth considering the effects of lossy compression. Obviously if a stream is uncompressed or only uses lossless compression, watermarking and hash generation can be done at a very early stage. Watermarking of video is designed to withstand compression, so that can still be done at a very early stage, even with lossy compression. The hash has to be bit-accurate with the stream as stored on the video storage medium, though, so the hash must be computed after lossy compression.
It seems as though this blockchain concept could definitely be made to work and possibly combined with the digital watermarking technique in the case of video to provide temporal and spatial resolution of tampering. I am sure that variations of this concept are out there already or being developed and maybe, one day, it will be possible for anybody to check if a video of a well-known person is real or fake.
We cut down a bunch of trees at our house last year, creating a large new lawn that has just been seeded. Suddenly I realized that we would end up with a dust bowl unless the new grass was regularly watered, hence the messy system of automatic valves in the photo.
The area is pretty arid right now but the 8 heads cover the area reasonably well. Fortunately we are on a well water system here otherwise the water bill would be horrendous.
I think that I can safely say that this is the most incredible thing I have yet seen. The best bit is at 5:41. DNA RIP.