OpenPose body pose estimation rt-ai Edge SPE for the Intel NCS 2

Following on from the GPU version, I now have OpenPose running in an Intel NCS 2 Stream Processing Element, as shown in the screen capture above. This wasn't too hard, as it is based on an Intel sample and model. The metadata format is consistent with the GPU version (apart from the lack of support for face and hand pose estimation), which is fine for a lot of applications.


This is the familiar simple test design. The OpenPoseVINO SPE runs at about 3 fps on 1280 x 720 video using an NCS 2 (the GPU version with a GTX 1080ti gets about 17 fps in body-pose-only mode). The current SPE inherited a blocking OpenVINO inference call from the demo rather than an asynchronous one. This needs to be changed to use the same technique as the SSD version so that the full capabilities of multiple NCS 2s can be utilized for body pose estimation.
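As a rough illustration of the intended change, here is a minimal sketch of asynchronous inference using the 2019-era OpenVINO Python Inference Engine API. This is not the SPE's actual code: the model files, the webcam capture source, the two-request pipelining scheme and the device name "MYRIAD" are assumptions, and the heatmap decoding is left out.

import cv2
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model="human-pose-estimation-0001.xml",
                weights="human-pose-estimation-0001.bin")
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape
exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=2)

def preprocess(frame):
    # Resize to the network input size and convert HWC BGR to NCHW.
    return cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

cap = cv2.VideoCapture(0)
cur, nxt = 0, 1
ret, frame = cap.read()
if ret:
    exec_net.start_async(request_id=cur, inputs={input_blob: preprocess(frame)})
while ret:
    ret, next_frame = cap.read()
    if ret:
        # Queue inference on the next frame while the current request is still in flight.
        exec_net.start_async(request_id=nxt, inputs={input_blob: preprocess(next_frame)})
    if exec_net.requests[cur].wait(-1) == 0:
        results = exec_net.requests[cur].outputs  # keypoint heatmaps and part affinity fields
        # ... decode into body poses and emit the pose metadata here ...
    cur, nxt = nxt, cur

With two or more infer requests in flight, preprocessing overlaps with inference and additional NCS 2 sticks can be kept busy, which is exactly what the blocking call currently prevents.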

6 thoughts on “OpenPose body pose estimation rt-ai Edge SPE for the Intel NCS 2”

  1. Well, first of all, thank you, Richard, for taking the time to respond and discuss!
    The first issue I have is that I am not sure what the --input_shape argument should be (e.g. for YOLOv3 I used [1,416,416,3]), and I am also not sure which frozen graph to use (as mentioned, I am using the following repo: https://github.com/ildoonet/tf-pose-estimation)…
    I also do not completely understand which demos we can run (is it only the demos that Intel provide in the folder inference_engine_samples_build/intel64/Release? I am using the human_pose_estimation_demo executable, analogous to the object_detection_demo_ssd_async executable you used to run the YOLOv3 demo)… Does this mean that we can only run demos that Intel have provided in this folder?

    1. It doesn’t look like it is too obvious how to do this. You’d need to follow through the example scripts to see how the input images are handled and where the outputs come from. It looks if you use the mobilenet_thin model then the training image size was 432 x368. You would have to look at the TfPoseEstimator code to see what it is actually doing.

  2. Hi Richard, as far as I understand, you are using the pretrained Intel model (Human Pose Estimation Demo) for OpenPose… I would like to know if you have tried to convert a TensorFlow OpenPose model into an (Intel) IR?
    Since OpenPose is originally a Caffe model, I found an already converted OpenPose TensorFlow model (https://github.com/ildoonet/tf-pose-estimation) that I wanted to convert into an (Intel) IR with OpenVINO in order to run it on an Intel NCS 2, but I am having trouble converting it…

    Have you tried to do it, and if yes, have you succeeded?

      1. I managed to do it for YOLOv3 as well; however, I had problems doing it for OpenPose… I wanted to convert OpenPose in order to see how well the converted model works compared to the Intel (HumanPose) one… I am curious to see how many fps the NCS 2 can reach on a converted model and whether I can get a better result than with HumanPose… Unfortunately, I cannot manage to convert the model properly… Do you think there is no point in trying to do this at all (since Intel are probably doing a pretty good job and I should stick to using their model)?
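For the conversion discussed in this thread, a hedged sketch of what the Model Optimizer invocation might look like for the tf-pose-estimation mobilenet_thin graph is shown below. The graph filename, the NHWC input shape (taken from the 432 x 368 training size mentioned above, assuming width 432 and height 368) and the FP16 data type required by the NCS 2 are all assumptions that may need adjusting for the actual frozen graph.

# Hypothetical Model Optimizer command for the mobilenet_thin frozen graph
# from https://github.com/ildoonet/tf-pose-estimation (filename and shape assumed).
python3 mo_tf.py --input_model graph_opt.pb --input_shape "[1,368,432,3]" --data_type FP16 --output_dir tf_pose_ir

If that produces a valid IR, the resulting .xml/.bin pair could then be loaded on the NCS 2 in the same way as the Intel model and compared for frame rate and accuracy.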
