Real-time OpenPose on an iPad… with the help of remote inference and rendering

I wanted to use the front camera of an iPad as the input to OpenPose so that I could track pose in real time. The original idea was to leverage CoreML to run pose estimation on the device. There are a few iOS implementations of OpenPose (such as this one), but they are really designed for offline processing as they are pretty slow. I did try a different pose estimator that runs in real time on my iPad Pro, but its estimation is not as good as OpenPose’s.

So the question was how to get OpenPose running in real time for the iPad in some way; a compromise was necessary. I do have an OpenPose SPE as part of rt-ai Edge that runs very nicely, so an obvious solution was to run rt-ai Edge OpenPose on a server and just use the iPad as an input and output device. A nice plus of the new iOS app, called iOSEdgeRemote, is that it really doesn’t care what kind of remote processing is being used. Frames from the camera are sent to an rt-ai Edge Conductor connected to an OpenPose pipeline.
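
The post doesn’t go into the transport details between the iPad and the Conductor, so as a rough illustration of the round trip, here is a minimal Python test client that pushes JPEG-encoded frames to a hypothetical HTTP endpoint on the Conductor and reads back the pose metadata. The URL, port and JSON field names are assumptions made for the sketch, not the real rt-ai Edge protocol.

import json
import cv2
import requests

CONDUCTOR_URL = "http://edge-server:8080/frame"  # hypothetical endpoint

cap = cv2.VideoCapture(0)  # stand-in for the iPad front camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # JPEG-encode each frame to keep the uplink bandwidth manageable
    ok, jpeg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
    resp = requests.post(CONDUCTOR_URL, data=jpeg.tobytes(),
                         headers={"Content-Type": "image/jpeg"})
    result = resp.json()  # assumed shape: {"pose": [...], "image": "<base64 JPEG>"}
    print(json.dumps(result.get("pose", []))[:120])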

The rt-ai Edge design for this test is shown above. The pipeline optionally annotates the video and returns the annotated frames, along with the pose metadata, to the iPad for display. However, the pipeline could be doing anything, provided it returns some sort of video to the iPad.
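
To make the return path concrete, here is an equally hypothetical sketch of the server side of that exchange: it decodes the incoming frame, runs pose estimation (stubbed out below; the real pipeline uses the OpenPose SPE), annotates the frame and returns the annotated JPEG together with the keypoint metadata in one JSON response. The endpoint and message format match the client sketch above and are assumptions, not rt-ai Edge internals.

import base64
import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

def estimate_pose(frame):
    # Placeholder for the real OpenPose SPE: return a list of people,
    # each a list of (x, y, confidence) keypoints
    return []

@app.route("/frame", methods=["POST"])
def process_frame():
    # decode the JPEG posted by the client
    img = cv2.imdecode(np.frombuffer(request.data, np.uint8), cv2.IMREAD_COLOR)
    people = estimate_pose(img)
    # optionally annotate the frame before sending it back
    for person in people:
        for (x, y, conf) in person:
            if conf > 0.1:
                cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)
    ok, jpeg = cv2.imencode(".jpg", img)
    return jsonify({"pose": people,
                    "image": base64.b64encode(jpeg.tobytes()).decode("ascii")})

app.run(host="0.0.0.0", port=8080)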

The results are shown in the screen captures above. Using a GTX 1080 Ti GPU, I was getting around 19fps with just body pose processing turned on and around 9fps with face pose also turned on. Latency is not noticeable with body pose estimation, and even with face pose estimation turned on it is entirely usable.

Remote inference and rendering has a lot of advantages over trying to squeeze everything into the iPad and using CoreML for inference, provided a low-latency server is available; 5G communications is an obvious enabler of this kind of remote inference and rendering in a wide variety of situations. The intrinsic performance of the iPad also matters far less, as it is not doing anything too difficult and so has plenty of resources left for other processing. The previous Unity/ARKit object detector uses a similar idea but does use more iPad resources and is not general purpose. If Unity and ARKit aren’t needed, iOSEdgeRemote with remote inference and rendering is a very powerful system.

Another nice aspect of this is that I believe future mixed reality headsets will be very lightweight devices that avoid complex processing in the headset (unlike the HoloLens, for example) and don’t require cables to an external processor (unlike the Magic Leap One, for example). The headset provides cameras, SLAM of some sort, displays and radios. All other complex processing is performed remotely and video is used to drive the displays. This might be the only way to enable MR headsets that can run for 8 hours or more without a recharge and be light enough (and run cool enough) to be worn for extended periods.

Getting the GTX 1070 working with CUDA on Ubuntu 16.04

For some reason, using the latest NVIDIA driver (367.35 at the time of writing) with the GTX 1070 meant that it wasn’t recognized by CUDA on my system. Instead, it was necessary to go back to 367.27. The file has to be manually downloaded from that link. To install, I generally find the easiest way is to ssh in from another machine and enter:

# remove any existing NVIDIA driver packages
sudo apt-get autoremove --purge nvidia-*
# stop the display manager so the installer can unload the old driver
sudo service lightdm stop
# run the downloaded 367.27 installer
sudo sh <path to driver>NVIDIA-Linux-x86_64-367.27.run

A reboot should then kick in the new driver. An issue with CUDA 7.5 (and the 8.0 RC, I believe) is that it doesn’t like the gcc version when it comes to compiling the samples. The simple fix is to comment out the version check line in /usr/local/cuda/include/host_config.h. The easiest way to find it is to run the samples Makefile; the compiler will happily tell you where the error is coming from, pointing at the #error directive in host_config.h that complains about an unsupported gcc version. Then just sudo edit the file and comment out the line indicated by the error message.

Deep convolutional neural networks in practice

I found this very interesting paper on deep convolutional neural networks via a post on the MIT Technology Review website. It describes a system that uses multiple GPUs to achieve pretty accurate image recognition. What’s even better, code is available here for multiple NVIDIA CUDA systems. I need to look at it in more detail, but it looks like it has all the necessary config files to set up the neural network described in the paper and would be a good starting point for other uses.