I am currently working with TensorFlow and thought it would be interesting to see what kind of performance I could get processing video and recognizing objects with Inception-v3. While I’d like to integrate TensorFlow with some of my Qt apps, the whole “build with Bazel” thing is holding that up right now (problems with Eigen includes – one day I’ll get back to that). Taking the path of least resistance, I instead wrote an inline MQTT filter in Python that embeds TensorFlow. It subscribes to a video topic sourced from a webcam and outputs the objects recognized in the stream.
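The inline-filter pattern is simple enough to sketch. This is a minimal, stubbed version: `classify_frame` stands in for the Inception-v3 `session.run()` call, and the frames arrive from a plain iterable rather than a real MQTT subscription (the actual filter would use an MQTT client library and decode JPEG frames):

```python
# Minimal sketch of an inline video filter: consume frames, classify each
# one, publish the result. The classifier is a stub standing in for
# Inception-v3 inference; "publish" stands in for an MQTT publish call.

def classify_frame(frame_bytes):
    """Placeholder for Inception-v3 inference on one JPEG frame.

    A real implementation would decode the JPEG, feed it to the graph
    and return the top-scoring labels with their probabilities."""
    return ["object"]

def run_filter(frames, publish):
    """Consume frames from the video topic, publish recognized objects."""
    for frame in frames:
        labels = classify_frame(frame)
        publish({"labels": labels})

# Example usage with an in-memory "broker":
published = []
run_filter([b"jpeg-1", b"jpeg-2"], published.append)
```

The useful property of this shape is that the filter sits inline in the topic stream, so anything downstream just subscribes to the annotated topic.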
As can be seen from the screen capture, it’s currently achieving 11 frames per second on 640 x 480 frames with a GTX 970 GPU. With a GTX 960 GPU, the rate falls to around 8 frames per second – making the GTX 970 roughly 40% faster here, in line with what I have seen on other TensorFlow graphs. The gap is probably due in part to the narrower memory bus on the GTX 960.
Hopefully I’ll soon have a 10 series GPU – that should be an interesting comparison.
This one looks quite a bit nicer than my previous attempt at this design! The functionality is the same, but a lot of the heavier processing has been moved into a new infrastructure developed to integrate artificial intelligence and machine learning functions into data flows efficiently. I can now leverage Apache NiFi’s extensive range of processors to interface to all kinds of things, while also escaping the JVM environment to get bare-metal performance for the higher-level functions, including access to GPUs. In this design I am just using NiFi’s MQTT and Elasticsearch processors, but it could just as easily fire processed data into HDFS, Kafka etc.
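One common way to escape the JVM from NiFi is its ExecuteStreamCommand processor, which pipes each flow file through an external process via stdin/stdout. A hypothetical Python stage in that style (the record fields here are invented for illustration) could look like:

```python
import io
import json

def process_record(record):
    """Hypothetical heavy-processing stage: annotate one record.

    In the real flow this is where the GPU-accelerated ML code would
    run, outside the JVM."""
    record["processed"] = True
    return record

def run(stdin, stdout):
    # NiFi's ExecuteStreamCommand hands the flow file content to the
    # external process on stdin and takes the transformed content back
    # from stdout.
    record = json.load(stdin)
    json.dump(process_record(record), stdout)

# Simulated invocation (in the real flow, NiFi supplies the streams):
out = io.StringIO()
run(io.StringIO('{"sensor": "cam1"}'), out)
```

The JVM side stays a thin router; the native process owns the GPU work.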
Just came across this new book all about deep learning. I have only had time to scan through it so far, but it looks like it covers a lot of ground that is often assumed elsewhere. If you want to know all about how regularized autoencoders and recurrent neural nets work (to pick two random examples), this is the place.
I have a project that requires identifying sequences of signals and classifying them in various ways, and I have been looking for good techniques that could be applied to the problem. I came across a paper on Deep Gaussian Processes. They are somewhat related to deep neural networks but have the advantage of requiring a lot less training data. Since generating high-quality training data is a big issue with DNNs, this is quite appealing. There are some GitHub repos with Python code to make getting started easier. The screenshot is from a demo in the deepGPy repo. Hopefully it will do what I want but, at the very least, I am learning some new mathematics.
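A deep GP stacks layers of ordinary Gaussian processes, so the single-layer case is the piece worth having in your head. A minimal GP regression sketch in numpy, with an RBF kernel and arbitrarily chosen hyperparameters (nothing here is specific to the paper or the deepGPy code):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of GP regression: K*^T (K + sigma^2 I)^-1 y."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_train, x_test)
    return K_star.T @ np.linalg.solve(K, y_train)

# Fit three points of a sine curve and predict at a training input;
# with near-zero noise the posterior mean should reproduce the data.
x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mean = gp_posterior_mean(x, y, np.array([1.0]))
```

A deep GP feeds the output of one such layer in as the inputs of the next, with the intermediate values treated as latent variables rather than computed point estimates – that is where the extra machinery (and the new mathematics) comes in.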
I wanted to start working with DIGITS, NVIDIA’s nice deep learning tool, but, inevitably, it required a GPU upgrade on my Ubuntu box. The GTX 970 is a mid-range Maxwell-architecture GPU but, if this work continues, a Titan is probably in the plan somewhere.
Found this very interesting paper on deep convolutional neural networks via a post on the MIT Technology Review web site. It describes a system that uses multiple GPUs to achieve pretty accurate image recognition. Even better, code is available here for multiple NVIDIA CUDA systems. I need to look at it in more detail, but it appears to include all the configuration files needed to set up the neural network described in the paper, which would make it a good starting point for other uses.
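The building block those config files describe is the convolutional layer, and its forward pass is easy to sketch. This is a toy single-channel version in numpy (valid padding, stride 1) – nothing like the paper's optimized multi-GPU CUDA implementation, just the arithmetic a conv layer performs:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive single-channel 2-D convolution (strictly, cross-correlation,
    as in most deep learning frameworks), 'valid' padding, stride 1."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is the kernel dotted with one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 1x2 horizontal edge detector applied to an image that is dark on the
# left and bright on the right: the response fires at the boundary.
edge = np.array([[1.0, -1.0]])
image = np.zeros((4, 4))
image[:, 2:] = 1.0
response = conv2d_valid(image, edge)
```

A real network stacks many such layers (with many kernels per layer, learned from data) and interleaves them with pooling and nonlinearities; the GPU implementations exist because these nested loops are exactly the kind of work that parallelizes well.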