Using blockchain technology to create verifiable sensor records and detect fakes

Machine learning techniques can now create very realistic fake video and audio that is tough to distinguish from the real thing. The video above shows a very interesting example of this capability. The danger with this technology is that it may become impossible to determine whether anything is genuine at all. What’s needed is some way to verify that a video of someone (for example) really is that person. Blockchain technology would seem to provide a solution for this.

Many years ago I was working on a digital watermarking-based system for detecting tampering in video records. Essentially, this embedded error-correcting codes in each frame that could be used to determine if any region of a frame had been modified after the digital watermark had been added. Cameras would add the digital watermark at source, limiting the opportunity for modification prior to watermarking.

One problem with this is that it worked on a frame-by-frame basis but didn’t ensure the integrity of an entire sequence. In theory this could be done with temporally distributed watermarks, but blockchain technology provides a very nice alternative.

A simple strategy would be to have the sensor (camera, microphone, motion detector, whatever) create a hash for each unit of data (video frame, chunk of audio etc.) and add it to a blockchain. A review app could then compute fresh hashes from the sensor data itself (stored elsewhere) and compare them to those in the blockchain. It could also verify that the account owner or device is who or what it claims to be, to avoid spoofing. It’s easy to envisage an Ethereum smart contract being the basis of such a system.
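The record-and-verify loop can be sketched in a few lines of Python. Here a plain append-only list stands in for the blockchain (a real system would write the hashes to a smart contract instead); `HashLedger` and the frame data are illustrative names, not part of any existing system:

```python
import hashlib

def chunk_hash(data: bytes) -> str:
    """SHA-256 digest of one unit of sensor data (e.g. a video frame)."""
    return hashlib.sha256(data).hexdigest()

class HashLedger:
    """Stand-in for the blockchain: an append-only list of chunk hashes."""
    def __init__(self):
        self._hashes = []

    def record(self, data: bytes) -> None:
        """Sensor side: append the hash of each chunk as it is captured."""
        self._hashes.append(chunk_hash(data))

    def verify(self, chunks) -> list:
        """Review side: return indices of chunks that no longer match."""
        return [i for i, data in enumerate(chunks)
                if i >= len(self._hashes) or chunk_hash(data) != self._hashes[i]]

# Sensor records three frames at capture time
frames = [b"frame-0", b"frame-1", b"frame-2"]
ledger = HashLedger()
for f in frames:
    ledger.record(f)

# Later, frame 1 in the separately stored copy has been tampered with
stored = [b"frame-0", b"frame-1-modified", b"frame-2"]
print(ledger.verify(stored))  # [1]
```

Because each unit hashes independently, the review app can point at exactly which frame was altered, not just flag the stream as a whole.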

One issue with this is the potential rate at which hashes need to be added to the blockchain. This rate could be reduced by accumulating more data per hash (e.g. one second’s worth of data generating one hash) or by creating a hash of hashes at an appropriate rate. The only downside is losing temporal resolution of where changes have been made.
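The hash-of-hashes idea is easy to sketch: keep hashing every frame locally, but only write one combined digest per window to the chain. This is a minimal illustration (a simple concatenation of digests, not a full Merkle tree), with assumed figures of 30 frames per one-second window:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def hash_of_hashes(chunk_hashes) -> str:
    """Combine N per-chunk hashes into a single digest for the chain."""
    return sha256("".join(chunk_hashes).encode())

# e.g. 30 frames per second, but only one chain entry per second
frames = [("frame-%d" % i).encode() for i in range(30)]
per_frame = [sha256(f) for f in frames]
entry = hash_of_hashes(per_frame)  # the only value written to the chain

# Any modified frame still changes the combined digest...
tampered = per_frame.copy()
tampered[7] = sha256(b"modified")
print(hash_of_hashes(tampered) != entry)  # True

# ...but tampering is now only localized to the one-second window.
```

This cuts the write rate by 30x here, at the cost of the temporal resolution mentioned above: the chain can say *that* a second was altered, but not which frame within it (unless the per-frame hashes are also stored off-chain).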

It’s worth considering the effects of lossy compression. If a stream is uncompressed or uses only lossless compression, watermarking and hash generation can both be done at a very early stage. Video watermarking is designed to withstand compression, so it can still be applied early even when lossy compression is used. The hash, however, has to be bit-accurate with the stream as stored on the video storage medium, so it must be computed after lossy compression.
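The ordering constraint can be shown directly. Using `zlib` as a stand-in for the codec (zlib is lossless, but the point holds for any codec: the stored bytes differ from the raw ones), a hash recorded before compression can never be checked against what actually sits on disk:

```python
import hashlib
import zlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

raw = b"pretend this is an uncompressed video frame" * 100
stored = zlib.compress(raw)  # stand-in for the codec's output on disk

# Camera must record the hash of the bytes as they will be stored...
recorded = sha256(stored)
print(recorded == sha256(stored))  # True: verifiable against storage

# ...because a hash taken before compression matches nothing on disk.
print(recorded == sha256(raw))     # False
```

With a lossy codec the situation is stricter still, since the original bits are unrecoverable from the stored stream, so hashing simply has to happen last in the capture pipeline.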

It seems as though this blockchain concept could definitely be made to work and possibly combined with the digital watermarking technique in the case of video to provide temporal and spatial resolution of tampering. I am sure that variations of this concept are out there already or being developed and maybe, one day, it will be possible for anybody to check if a video of a well-known person is real or fake.

Serious irrigation

We cut down a bunch of trees at our house last year, creating a large new lawn that has just been seeded. Suddenly I realized that we would end up with a dust bowl unless the new grass was regularly watered, hence the messy system of automatic valves in the photo.
The area is pretty arid right now but the 8 heads cover it reasonably well. Fortunately we are on a well water system here, otherwise the water bill would be horrendous.

The end of the taxiway for the 747 (in the US at least)

Nice photos and story here about the final flight by a US airline of a 747. This brought back memories because, during the 90s, I spent a lot of time on Virgin Atlantic 747s between LHR and JFK (and occasionally BOS). I remember some of the old Virgin aircraft names – Spirit of Sir Freddie, Ruby Tuesday (above) and Lady Penelope for example. One of the things I would do to alleviate the boredom was to try to get off the aircraft first. If you were sitting in the correct seat on the upper deck and managed to get down the stairs before anyone else, there was always a good chance!

The best time was when I managed to get in the cockpit jump seat for a Virgin Atlantic 747 landing at SFO (yes, this was most certainly pre 9/11). It was great to see the crew handle the aircraft and air traffic control and just confirmed something that I already knew – that this kind of stuff was best left to professionals (I was a terrible pilot!).

Virgin Atlantic gradually replaced the 747s with A340s which were just not the same at all but, by then, I had mostly stopped flying across the Atlantic on a regular basis.

The disaggregated smartphone and the road to ubiquitous AR

Nearly five years ago I posted this entry on a blog I was running at the time:

Breaking Apart The Smartphone

…it’s not difficult to imagine a time when the smartphone is split into three pieces – the processor and cellular interface, the display device and the input device. One might imagine that the current smartphone suppliers will end up producing the heart of the system – the display-less main CPU, the cellular interface, Bluetooth, WiFi, NFC etc. However, it will open up new opportunities for suppliers of display and input devices. It’s pretty safe to assume that Google Glass won’t be the only show in town and users will be able to pick and choose between different display devices – and possibly different display devices for different applications. Likewise input devices will vary depending on the environment and style of use.

Maybe we’ll look back on the current generation of smartphones as being inhibited by their touchscreens rather than enabled by them…

I was only thinking vaguely of AR at the time but now it seems even more relevant. A key enabling technology is a low power wireless connection between the processing core and the display. With this implemented in the smartphone core, things change tremendously.

For example, I have a smartphone that is pocketable in size, an iPad for things where a bigger screen is required, a smartwatch for when I am too lazy to get the phone out of my pocket etc. I only have a SIM for the smartphone because even having one cellular contract is annoying, let alone one per device. How would having a wireless display capability change this?

For a start, I would only have one smartphone core for everything, holding the one and only SIM card. When I wanted a tablet-style presentation, I could use a suitably sized wireless display. This would be light, cheap and somewhat expendable, unlike the smartphone itself, which could always be kept somewhere safe – expensive screen replacements would be a thing of the past, especially if the smartphone core doesn’t even have a screen. I like to ride a bike around and it would be nice to have easy access to the smartphone while doing so, in all weathers. You can get bike bags to put a smartphone in, but they are pretty lame and quite annoying in general. Instead, I could have a cheap waterproof display mounted on the bike without any need for waterproof bags.

Since the display is remote, why not have a TV sized screen that connects in the same way? Everything streamable could be accessed by the smartphone and displayed on the TV without a need for any other random boxes.

Finally, AR. Right now AR headsets kind of suck in one way or another. I am intrigued by the idea that, one day, people will wear AR type devices most of the time and what that means for, well, everything. The only way this is going to happen in the near future is if the headset itself is kept as small and light as possible and just acts as a display and a set of sensors (inside out tracking, IMU, depth etc). Just like the other displays, it connects to a smartphone core via a wireless link (I believe that any sort of tethered AR headset is unacceptable in general). The smartphone core does all of the clever stuff including rendering and then the output of the GPU is sent up to the headset for display. An AR headset like this could be relatively cheap, waterproof, dustproof and potentially worn all day.

What does a world with ubiquitous AR actually look like? Who knows? But if people start to assume that everyone has AR headsets then “real world” augmentation (decoration, signage etc) will give way to much more flexible and powerful virtual augmentations – anyone not using an AR headset might see a very bland and uninformative world indeed. On the other hand, people using AR headsets might well see some sort of utopian version of reality that has been finely tuned to their tastes. It’s definitely Black Mirror-ish but not all technology has to have horrendous outcomes.