Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

human pose estimation 3d demo

cazzari__daniele
Beginner

Hi,

I'm currently trying to configure the new Python demo for 3D human pose estimation.

I was able to convert the PyTorch model to ONNX and then to IR, but once the demo is launched the detection result is quite strange: the 2D output reports a very high number of connections, as if multiple persons were detected, even though I'm the only one in the camera frame.

I'm using a Logitech C920 camera and I'm not sure whether that might be the problem. That camera works perfectly with the C++ demo, though, so I'm wondering if there is an issue with this demo model.

Thanks for your support.

Regards,

Daniele

JesusE_Intel
Moderator

Hi Daniele,

Could you please provide all the commands you used to convert the PyTorch Model to IR format and the command used to run the demo? Please also attach a photo of the output results you are seeing. 

I don't believe the camera should be an issue. However, please try a different camera if you have a second one.
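If it helps, here is a quick way to rule out the capture path independently of the demo (a minimal OpenCV sketch; camera index 0 matches the -i 0 argument you use to launch the demo):

    # Minimal capture check with OpenCV, independent of the demo.
    # Camera index 0 matches the "-i 0" argument used when launching the demo.
    import cv2

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    print("frame grabbed:", ok, "shape:", frame.shape if ok else None)
    cap.release()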

Regards,

Jesus

cazzari__daniele
Beginner

Hi Jesus,

Please find attached the photo of the result I see. I used the embedded PC camera for this test and the result is very similar, so I don't think the camera is the issue.

This is the command line used to launch the demo (the OS is Windows 10):

    call "C:\Program Files (x86)\IntelSWTools\openvino\bin\setupvars.bat"
    python human_pose_estimation_3d_demo.py -i 0 -m ../../models/human-pose-estimation-3d-0001/FP32/human-pose-estimation-3d-0001.xml -d GPU

These are the commands used to convert the model to ONNX and then to IR:

    python pytorch_to_onnx.py --model-name=PoseEstimationWithMobileNet --weights=human-pose-estimation-3d-0001.pth --input-shape=1,3,256,448 --output-file=human-pose-estimation-3d-0001.onnx --output-names=features,heatmaps,pafs --input-names=data --model-param=is_convertible_by_mo=True --import-module=model

    python "%MODEL_OPT_DIR%/model_optimizer/mo.py" --data_type FP32 --input_model "human-pose-estimation-3d-0001.onnx" --output_dir "%OUTPUT_DIR%/FP32"
    python "%MODEL_OPT_DIR%/model_optimizer/mo.py" --data_type FP16 --input_model "human-pose-estimation-3d-0001.onnx" --output_dir "%OUTPUT_DIR%/FP16"

Let me know if you need further information.

Regards,

Daniele

cazzari__daniele
Beginner

Hi,

I was able to fix the issue by modifying the Model Optimizer parameters as follows:

python "%MODEL_OPT_DIR%/model_optimizer/mo.py" --data_type FP32 --input_model "%ONNX_PATH%.onnx" --output_dir "%OUTPUT_DIR%/FP32" --input=data --mean_values=data[128.0,128.0,128.0] --scale_values=data[255.0,255.0,255.0] --output=features,heatmaps,pafs

The above requirements are described in the model.yml file, but there is no mention of them in the .md file, which is usually the first source of information.

It's true that by reading the code and the yml files you can get all the information you need, but some additional documentation would make it easier to navigate and understand the requirements and logic behind the demo.
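For anyone else hitting this, the conversion flags can also be pulled straight out of the model.yml with a few lines of Python (a sketch assuming the Open Model Zoo layout where the file has a model_optimizer_args list; the path is hypothetical):

    # Print the Model Optimizer arguments recorded in an Open Model Zoo model.yml.
    # Assumes a top-level "model_optimizer_args" list; the path below is hypothetical.
    import yaml

    with open("models/public/human-pose-estimation-3d-0001/model.yml") as f:
        description = yaml.safe_load(f)

    for arg in description.get("model_optimizer_args", []):
        print(arg)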

Thanks

Daniele

JesusE_Intel
Moderator

Hi Daniele,

I apologize for not responding sooner; I'm not sure how I missed your previous message!

I'm glad you found the correct Model Optimizer parameters to convert the model properly. Thank you for providing your feedback; I will pass it along to the development team to improve the documentation. Feel free to reach out again if you have additional questions.

Regards,

Jesus
