Zheng__Rui
Beginner
274 Views

[ ERROR ] std::bad_alloc when using human_pose_estimation_demo, help!

First, I converted an .onnx model into .xml and .bin files.

root@f920ae206f6f:/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer# python3 mo_onnx.py --input_model /data_openvino/model_23.onnx --input_shape [1,3,224,224]
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /data_openvino/model_23.onnx
    - Path for generated IR:     /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/.
    - IR output name:     model_23
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,3,224,224]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
ONNX specific parameters:
Model Optimizer version:     2019.1.0-341-gc9b66a2

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/./model_23.xml
[ SUCCESS ] BIN file: /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/./model_23.bin
[ SUCCESS ] Total execution time: 19.00 seconds.

Then I used the Inference Engine to run the human_pose_estimation_demo:

root@f920ae206f6f:/opt/intel/openvino_2019.1.094/deployment_tools/inference_engine# /root/inference_engine_samples_build/intel64/Release/human_pose_estimation_demo -i luboshangke.mp4 -m ../model_optimizer/model_23.xml -d CPU
InferenceEngine:
    API version ............ 1.6
    Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters: before judge FLAGS_i
[ INFO ] Parsing input parameters: after judge FLAGS_i
[ INFO ] Parsing input parameters before return true
[ INFO ] before huamnposeestimator  estimator
after cap read image
[ ERROR ] std::bad_alloc

Then I went into ./samples/human_pose_estimation_demo/main.cpp and src/human_pose_estimator.cpp and added some std::cout debug statements.

root@f920ae206f6f:/opt/intel/openvino_2019.1.094/deployment_tools/inference_engine# /root/inference_engine_samples_build/intel64/Release/human_pose_estimation_demo -i luboshangke.mp4 -m ../model_optimizer/model_23.xml -d CPU
InferenceEngine:
    API version ............ 1.6
    Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters: before judge FLAGS_i
[ INFO ] Parsing input parameters: after judge FLAGS_i
[ INFO ] Parsing input parameters before return true
[ INFO ] before huamnposeestimator  estimator
bin file name../model_optimizer/model_23.bin
input info0x56353fb35a30
inputInfo->getTensorDesc().getDims()[3]224
inputInfo->getTensorDesc().getDims()[2]224
outputInfo:1
outputBlobsIt
after cap read image
[ ERROR ] std::bad_alloc

But with the downloaded human-pose-estimation-0001.xml, it works fine:

root@f920ae206f6f:/opt/intel/openvino_2019.1.094/deployment_tools/inference_engine# /root/inference_engine_samples_build/intel64/Release/human_pose_estimation_demo -i luboshangke.mp4 -m /data_openvino/human-pose-estimation-0001.xml -d CPU
InferenceEngine:
    API version ............ 1.6
    Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters: before judge FLAGS_i
[ INFO ] Parsing input parameters: after judge FLAGS_i
[ INFO ] Parsing input parameters before return true
[ INFO ] before huamnposeestimator  estimator
bin file name/data_openvino/human-pose-estimation-0001.bin
input info0x55e2a09e98c0
inputInfo->getTensorDesc().getDims()[3]456
inputInfo->getTensorDesc().getDims()[2]256
outputInfo:2
outputBlobsIt
lallaal
[ INFO ] after huamnposeestimator  estimator
[ INFO ] after image estimator
To close the application, press 'CTRL+C' or any key with focus on the output window
[ INFO ] after render human pose
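Comparing the two debug traces, the converted model reports outputInfo:1 while the working human-pose-estimation-0001 reports outputInfo:2; as far as I understand, this demo expects two output blobs (part affinity fields and keypoint heatmaps), so a single-output network could plausibly trip it up. As a quick sanity check that needs no recompiling, here is a minimal stdlib-only sketch that lists the terminal (output) layers of an IR .xml. It assumes the IR v5-style layout used by 2019 R1 (`<layers><layer id=...>` plus `<edges><edge from-layer=...>`), and the function name `output_layers` is just for illustration:

```python
# Sketch: list the terminal layers of an OpenVINO IR .xml file.
# A layer whose id never appears as an edge's "from-layer" feeds nothing,
# so it is a network output. Assumes the IR v5-style layout of 2019 R1.
import xml.etree.ElementTree as ET

def output_layers(ir_xml_text):
    """Return names of layers that no edge consumes (the network outputs)."""
    root = ET.fromstring(ir_xml_text)
    layers = {l.get("id"): l.get("name") for l in root.iter("layer")}
    producers = {e.get("from-layer") for e in root.iter("edge")}
    return [name for lid, name in layers.items() if lid not in producers]
```

Running this over both model_23.xml and human-pose-estimation-0001.xml should make the 1-vs-2 output mismatch visible directly from the IR files.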

 

What should I do? Thank you very much!

9 Replies
Shubha_R_Intel
Employee

Dear Zheng, Rui,

As I mentioned in a very similar post, dldt GitHub issue 155, please tell me about the model you are using. Is it a publicly available model, or a custom model you built? If it's custom, can you attach it here?

Thanks,

Shubha

 

Zheng__Rui
Beginner

I went to GitHub issue https://github.com/opencv/dldt/issues/155 ; I am the one who opened that issue as well, and I have replied there.

Thanks.

Shubha_R_Intel
Employee

Dear Zheng, Rui,

OpenVino 2019 R1.1 was just released. Can you kindly give it a try?

Thanks,

Shubha

 

Zheng__Rui
Beginner

Hello, I tried openvino_2019.1.144 and, sadly, I get the same error.

root@940453dbe675:/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer# python3 mo_onnx.py --input_model /data_openvino/model_23.onnx --input_shape [1,3,224,224]
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /data_openvino/model_23.onnx
    - Path for generated IR:     /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/.
    - IR output name:     model_23
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,3,224,224]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
ONNX specific parameters:
Model Optimizer version:     2019.1.1-83-g28dfbfd

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/./model_23.xml
[ SUCCESS ] BIN file: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/./model_23.bin
[ SUCCESS ] Total execution time: 21.62 seconds.

 

root@940453dbe675:/opt/intel/openvino_2019.1.144/deployment_tools/inference_engine/samples# sudo ./build_samples.sh

...

[ 99%] Linking CXX executable ../../intel64/Release/multi-channel-face-detection-demo
[ 99%] Built target multi-channel-face-detection-demo
[100%] Linking CXX executable ../../intel64/Release/multi-channel-human-pose-estimation-demo
[100%] Built target multi-channel-human-pose-estimation-demo

Build completed, you can find binaries for all samples in the /root/inference_engine_samples_build/intel64/Release subfolder.

 

root@940453dbe675:/opt/intel/openvino_2019.1.144/deployment_tools# /root/inference_engine_samples_build/intel64/Release/human_pose_estimation_demo -i /data_openvino/luboshangke_CUT.mp4 -m ./model_optimizer/model_23.xml -d CPU

InferenceEngine:
    API version ............ 1.6
    Build .................. custom_releases/2019/R1.1_28dfbfdd28954c4dfd2f94403dd8dfc1f411038b
[ INFO ] Parsing input parameters
[ ERROR ] std::bad_alloc

 

root@940453dbe675:/opt/intel/openvino_2019.1.144/deployment_tools# /root/inference_engine_samples_build/intel64/Release/human_pose_estimation_demo -i /data_openvino/luboshangke_CUT.mp4 -m /data_openvino/graph_opt.xml -d CPU

InferenceEngine:
    API version ............ 1.6
    Build .................. custom_releases/2019/R1.1_28dfbfdd28954c4dfd2f94403dd8dfc1f411038b
[ INFO ] Parsing input parameters
[ ERROR ] std::bad_alloc

 

Do you have any ideas?

Shubha_R_Intel
Employee

Dear Zheng, Rui,

OpenVino 2019R1.1 is actually openvino_2019.1.148. Can you kindly try with openvino_2019.1.148?

Thanks,

Shubha

 

Zheng__Rui
Beginner

Hmm, I found that the bottom of both of my converted .xml files differs from human-pose-estimation-0001.xml: the graph_opt.xml layers do not have weight data. Did I convert it wrong, or is it something else?
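If the concern is that the converted IR's layers carry no weight data, one stdlib-only way to cross-check is to compute the minimum .bin size implied by the offset/size attributes declared in the .xml and compare it against the actual .bin file size. This is only a sketch: it assumes the IR v5-style `<weights>`/`<biases>` elements with integer `offset` and `size` attributes, and `min_bin_size` is a hypothetical helper name, not part of any OpenVINO API:

```python
# Rough cross-check for "the layers have no data": scan every element in the
# IR .xml for offset/size attribute pairs (IR v5 weight/bias blob references)
# and return the smallest .bin size that could satisfy all of them.
import xml.etree.ElementTree as ET

def min_bin_size(ir_xml_text):
    """Smallest .bin size consistent with every declared offset + size."""
    root = ET.fromstring(ir_xml_text)
    end = 0
    for elem in root.iter():
        off, size = elem.get("offset"), elem.get("size")
        if off is not None and size is not None:
            end = max(end, int(off) + int(size))
    return end
```

In practice you would compare the result against `os.path.getsize("model_23.bin")`; a return value of 0 would mean the .xml declares no weight blobs at all, which would point at a conversion problem rather than a demo bug.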

 

 

Zheng__Rui
Beginner

I went to the OpenVINO website to download again. It sent me an email, and I downloaded the customizable package, but it is also 2019.1.144.

Additionally, after comparing with human-pose-estimation-0001-FP32.xml, my converted .xml also has no such data at the bottom. Maybe I lost information such as '<output value="['Mconv7_stage2_L1', 'Mconv7_stage2_L2']"/>'. I added two output layers for graph_opt.pb, such as '<output value="['Openpose/MConv_Stage6_L1_2_depthwise/depthwise','Openpose/MConv_Stage6_L1_5_pointwise/Conv2D']"/>', but it is still not working.

Shubha_R_Intel
Employee

Dear Zheng, Rui,

The latest OpenVino release should be openvino_2019.1.148 for 2019R1.1. Maybe you're not downloading the latest release. Can you try downloading from https://software.intel.com/en-us/openvino-toolkit/choose-download (your download location may be different if you are not US-based)?

Thanks,

Shubha

Zheng__Rui
Beginner

Thank you. You are right, my download location is not US-based. I just tried the download link again and registered as being from the US, haha, but it is not working. Could you kindly share a full openvino_2019.1.148 package via Google Drive?
