Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision related on Intel® platforms.

How to generate SSD model for object detection sample?

Dong_W_
Beginner
2,480 Views

Hi,

inference_engine has an object_detection_sample_ssd. I tried to generate the IR representation by downloading the VOC 07+12 SSD300 model from https://github.com/weiliu89/caffe/tree/ssd.

./ModelOptimizer -w $SAMPLE_DIR/models/VGGNet/VOC0712Plus/SSD_300x300/VGG_VOC0712Plus_SSD_300x300_iter_240000.caffemodel -d $SAMPLE_DIR/models/VGGNet/VOC0712Plus/SSD_300x300/deploy.prototxt -p FP32 -f 1 -b 1 --target APLK -o $SAMPLE_DIR/models/VGGNet/VOC0712Plus/SSD_300x300/generated_ir -i --network LOCALIZATION
Start working...

Framework plugin: CAFFE
Target type: APLK
Network type: LOCALIZATION
Batch size: 1
Precision: FP32
Layer fusion: false
Horizontal layer fusion: PARTIAL
Output directory: /home/ubuntu/workspace/intel_sdk_samples/models/VGGNet/VOC0712Plus/SSD_300x300/generated_ir
Custom kernels directory: 
Network input normalization: 1
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 814:14: Message type "caffe.LayerParameter" has no field named "norm_param".
F0824 16:24:37.618919 23037 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/ubuntu/workspace/intel_sdk_samples/models/VGGNet/VOC0712Plus/SSD_300x300/deploy.prototxt
*** Check failure stack trace: ***
    @     0x7fd7c9462daa  (unknown)
    @     0x7fd7c9462ce4  (unknown)
    @     0x7fd7c94626e6  (unknown)
    @     0x7fd7c9465687  (unknown)
    @     0x7fd7c9967aee  caffe::ReadNetParamsFromTextFileOrDie()
    @     0x7fd7c9920dbb  readTopology_
    @     0x7fd7cb4955f7  Model2OpenVX::CaffeNetworkDescriptor::CaffeNetworkDescriptor()
    @     0x7fd7cb490ef1  Model2OpenVX::CaffeNet::init()
    @     0x7fd7cc2847b4  Model2OpenVX::FrameworkManager::GenerateIRFile()
    @     0x556b0d4f1370  main
    @     0x7fd7cb9edf45  (unknown)
    @     0x556b0d4f3867  (unknown)
    @              (nil)  (unknown)
Aborted (core dumped)
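For reference, the field the stock proto parser chokes on comes from the SSD-specific Normalize layer in deploy.prototxt. Abridged from the weiliu89 SSD deploy file, it looks roughly like this (master-branch Caffe's caffe.proto has no norm_param field, hence the parse error above):

```
layer {
  name: "conv4_3_norm"
  type: "Normalize"
  bottom: "conv4_3"
  top: "conv4_3_norm"
  norm_param {
    across_spatial: false
    scale_filler {
      type: "constant"
      value: 20
    }
    channel_shared: false
  }
}
```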

Best regards,

Dong

0 Kudos
1 Solution
9 Replies
Maxim_S_Intel
Employee
2,481 Views

Hi,
1) Since SSD features layers that are not in master Caffe (including a specific variation of the Normalize layer, which is what trips the conversion tool in your example), you first need to use the SSD branch:
    $ git clone https://github.com/weiliu89/caffe.git && cd ./caffe && git checkout ssd

2) Patch the code of the DetectionOutput layer. Currently, the ModelOptimizer tool does not support dynamic tensor sizing.
Specifically, add the following change to the DetectionOutputLayer<Dtype>::Reshape method:
   vector<int> top_shape2(2, 1);
   top_shape2.push_back(keep_top_k_);
   top_shape2.push_back(7);
   top[0]->Reshape(top_shape2);

3) Add the ModelOptimizer wrappers from the <INSTALL_DIR>/mo/adapters folder to the "caffe/include/caffe/" and "caffe/src/caffe/" directories of your local Caffe, respectively.
4) Recompile Caffe.
5) Set the FRAMEWORK_HOME environment variable to the directory containing the resulting libCaffe.so, e.g.
   $ export FRAMEWORK_HOME=<PATH_TO_YOUR_LOCAL_CAFFE>/build/lib


https://software.intel.com/en-us/model-optimizer-devguide-getting-started-with-deep-learning-model-optimizer-for-caffe

0 Kudos
Dong_W_
Beginner
2,480 Views

Hi Maxim,

Thanks very much for the detailed instructions. It worked perfectly: the SSD model IR was generated and object_detection_sample_ssd worked!

Best regards,

Dong

 

0 Kudos
Kasi_V_Intel
Employee
2,480 Views

Hi,

I'm trying the same as well. I've got the proper Caffe for SSD from the weiliu89 git, but I'm getting a different, strange error.

I'm trying to get the SSD IR model generated for the Inference Engine.

Try 1:

./ModelOptimizer --target APLK --fuse false -w ~/caffe_ssd/caffe/models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel -p FP32 -d ~/caffe_ssd/caffe/models/VGGNet/VOC0712/SSD_300x300/deploy.prototxt -f 1 -i -o ~/ml/model_optimizer_projects/
Start working...

Framework plugin: CAFFE
Target type: APLK
Network type: CLASSIFICATION
Batch size: 8
Precision: FP32
Layer fusion: false
Horizontal layer fusion: PARTIAL
Output directory: /home/kasi/ml/model_optimizer_projects/
Custom kernels directory:
Network input normalization: 1
dlopen failed: :/home/kasi/caffe_ssd/caffe/build/lib/libcaffe.so: cannot open shared object file: No such file or directory
Could not load librarycaffe from: :/home/kasi/caffe_ssd/caffe/build/lib

Try 2 (with sudo):

./ModelOptimizer --target APLK --fuse false -w ~/caffe_ssd/caffe/models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel -p FP32 -d ~/caffe_ssd/caffe/models/VGGNet/VOC0712/SSD_300x300/deploy.prototxt -f 1 -i -o ~/ml/model_optimizer_projects/
./ModelOptimizer: error while loading shared libraries: libCommonInterfaces.so: cannot open shared object file: No such file or directory

Not sure how to resolve this. In Try 2, I have libCommonInterfaces.so in the same working directory as the ModelOptimizer binary.

PATH and LD_LIBRARY_PATH are updated as well.

kasi@kasi:/opt/intel/computer_vision_sdk_2017.0.113/mo/bin$ echo $PATH
/home/kasi/bin:/home/kasi/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/intel/computer_vision_sdk_2017.0.113/mo/bin
kasi@kasi:/opt/intel/computer_vision_sdk_2017.0.113/mo/bin$ echo $LD_LIBRARY_PATH
:/home/kasi/caffe_ssd/caffe/build/lib:/opt/intel/computer_vision_sdk_2017.0.113/mo/bin:/usr/local/bin

Kindly let me know how to solve this problem. Please find the attached log for more details.

 

0 Kudos
Stav_S_Intel
Employee
2,480 Views

Hi, 

It seems you are missing an environment variable (FRAMEWORK_HOME).

Please refer to this document to start working with Caffe:

https://software.intel.com/en-us/model-optimizer-devguide-getting-started-with-deep-learning-model-optimizer-for-caffe
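As a quick sanity check, you can also split LD_LIBRARY_PATH and verify each entry exists on disk; note that the leading ':' in the LD_LIBRARY_PATH you posted creates an empty entry. A sketch (plain POSIX shell; the demo value below is made up):

```shell
# Sketch: flag LD_LIBRARY_PATH-style entries that do not exist on disk --
# a common cause of "cannot open shared object file" errors.
check_lib_path() {
  echo "$1" | tr ':' '\n' | while IFS= read -r dir; do
    if [ -z "$dir" ]; then
      echo "WARN: empty entry (resolves to the current directory)"
    elif [ -d "$dir" ]; then
      echo "ok: $dir"
    else
      echo "MISS: $dir"
    fi
  done
}

# Demo with a made-up path; the leading ':' produces an empty entry,
# just like the LD_LIBRARY_PATH quoted earlier in this thread.
demo=":/tmp:/definitely/not/a/real/dir"
check_lib_path "$demo"
```

Any MISS or WARN line points at an entry the dynamic loader cannot use as intended.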

 

Regards,

Stav

 

0 Kudos
Kasi_V_Intel
Employee
2,480 Views
Hi, I've got output from ModelOptimizer and am trying to run the Inference Engine with the object_detection_sample_ssd example:

working-directory> bin\intel64\Debug\object_detection_sample_ssd.exe -i samples\object_detection_sample_ssd\siamese_cat.bmp -m samples\object_detection_sample_ssd\VGG_VOC0712_SSD_300x300_deploy\VGG_VOC0712_SSD_300x300_deploy.xml -d GPU

I get the following error:

InferenceEngine:
        API version ............ 1.0
        Build .................. 4463
        API version ............ 0.1
        Build .................. prod-01909
        Description ....... clDNNPlugin
Debug Options:
        Debug Layer Content: 0
        Debug Layer Content Indexed: 0
        Debug Layers Format: 0
        Plugin Performance Prints: 0
        Print Size: 3
Infer() time of one iteration: 425 ms
Scoring failed! Critical error: Output blob size is not equal network output size (7!=1400).

Not sure what the error was. I've attached my xml and bin files herewith. Also, I needed to give an input image of 300x300 only as input, hence I attached that bmp image file as well.

regards,
Kasi.
0 Kudos
Ilya_C_Intel
Employee
2,480 Views

Hi Kasi,

Did you do these steps before IR generation?

Maxim Shevtsov (Intel) wrote:

Hi,
1) Since SSD features layers that are not in master Caffe (including a specific variation of the Normalize layer, which is what trips the conversion tool in your example), you first need to use the SSD branch:
    $ git clone https://github.com/weiliu89/caffe.git && cd ./caffe && git checkout ssd

2) Patch the code of the DetectionOutput layer. Currently, the ModelOptimizer tool does not support dynamic tensor sizing.
Specifically, add the following change to the DetectionOutputLayer<Dtype>::Reshape method:
   vector<int> top_shape2(2, 1);
   top_shape2.push_back(keep_top_k_);
   top_shape2.push_back(7);
   top[0]->Reshape(top_shape2);

3) Add the ModelOptimizer wrappers from the <INSTALL_DIR>/mo/adapters folder to the "caffe/include/caffe/" and "caffe/src/caffe/" directories of your local Caffe, respectively.
4) Recompile Caffe.
5) Set the FRAMEWORK_HOME environment variable to the directory containing the resulting libCaffe.so, e.g.
   $ export FRAMEWORK_HOME=<PATH_TO_YOUR_LOCAL_CAFFE>/build/lib

https://software.intel.com/en-us/model-optimizer-devguide-getting-starte...

 

It looks like you got incorrect dimensions for the DetectionOutput layer.
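As a rough illustration of why the numbers 7 and 1400 appear: an SSD DetectionOutput blob of shape 1x1xkeep_top_k x7 flattens to keep_top_k rows of 7 values each (1x1x200x7 = 1400 with the stock settings). A sketch in plain Python, with made-up values:

```python
# Sketch: interpret a flattened SSD detection blob of shape [1, 1, N, 7].
# Each row is [image_id, label, confidence, xmin, ymin, xmax, ymax].
# The values below are invented for illustration only.

def parse_detections(flat, row_len=7, conf_threshold=0.5):
    """Split a flat detection blob into rows and keep confident ones."""
    rows = [flat[i:i + row_len] for i in range(0, len(flat), row_len)]
    return [r for r in rows if r[2] >= conf_threshold]

flat = [
    0, 12, 0.93, 0.10, 0.20, 0.55, 0.80,   # confident detection
    0, 7,  0.04, 0.00, 0.00, 0.10, 0.10,   # low-confidence filler row
]
dets = parse_detections(flat)
print(len(dets))    # prints 1
print(dets[0][1])   # prints 12 (the class label)
```

If the network reports an output of only 7 values where the sample expects 1400, the DetectionOutput top blob was not reshaped to the full keep_top_k rows, which is what the Reshape patch addresses.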

0 Kudos
Branimir_M_Intel
Employee
2,480 Views

I am getting an error running the following:

object_detection_sample_ssd -i images/fish-bike.jpg -m smoke_test/VGG_VOC0712_SSD_300x300_deploy/VGG_VOC0712_SSD_300x300_deploy.xml  -d CPU
InferenceEngine:
        API version ............ 1.0
        Build .................. 6293
[ INFO ] Parsing input parameters
[ INFO ] No extensions provided
[ INFO ] Loading plugin

        API version ............ 1.0
        Build .................. lnx_2018.0.20170425
        Description ....... MKLDnnPlugin

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Batch size will be equal 1.
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ ERROR ] Cannot get internal blob layer for node conv4_3_norm_mbox_conf.

ModelOptimizer before that seems to have finished correctly. Any idea what might cause this? The ModelOptimizer command is:

ModelOptimizer  -w VGG_ILSVRC_16_layers_fc_reduced.caffemodel -d deploy.prototxt -f 1 -p FP32 -o smoke_test -i --hfuse NONE
Start working...

Framework plugin: CAFFE
Network type: LOCALIZATION
Batch size: 256
Precision: FP32
Layer fusion: false
Horizontal layer fusion: NONE
Output directory: smoke_test
Custom kernels directory:
Network input normalization: 1
Writing binary data to: smoke_test/VGG_VOC0712_SSD_300x300_deploy/VGG_VOC0712_SSD_300x300_deploy.bin

 

0 Kudos
anusha_k_
Beginner
2,480 Views

Kasi V. (Intel) wrote:

Hi,

I'm trying the same as well. I've got the proper Caffe for SSD from the weiliu89 git, but I'm getting a different, strange error.

I'm trying to get the SSD IR model generated for the Inference Engine.

Try 1:

./ModelOptimizer --target APLK --fuse false -w ~/caffe_ssd/caffe/models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel -p FP32 -d ~/caffe_ssd/caffe/models/VGGNet/VOC0712/SSD_300x300/deploy.prototxt -f 1 -i -o ~/ml/model_optimizer_projects/
Start working...

Framework plugin: CAFFE
Target type: APLK
Network type: CLASSIFICATION
Batch size: 8
Precision: FP32
Layer fusion: false
Horizontal layer fusion: PARTIAL
Output directory: /home/kasi/ml/model_optimizer_projects/
Custom kernels directory:
Network input normalization: 1
dlopen failed: :/home/kasi/caffe_ssd/caffe/build/lib/libcaffe.so: cannot open shared object file: No such file or directory
Could not load librarycaffe from: :/home/kasi/caffe_ssd/caffe/build/lib

Try 2 (with sudo):

./ModelOptimizer --target APLK --fuse false -w ~/caffe_ssd/caffe/models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel -p FP32 -d ~/caffe_ssd/caffe/models/VGGNet/VOC0712/SSD_300x300/deploy.prototxt -f 1 -i -o ~/ml/model_optimizer_projects/
./ModelOptimizer: error while loading shared libraries: libCommonInterfaces.so: cannot open shared object file: No such file or directory

Not sure how to resolve this. In Try 2, I have libCommonInterfaces.so in the same working directory as the ModelOptimizer binary.

PATH and LD_LIBRARY_PATH are updated as well.

kasi@kasi:/opt/intel/computer_vision_sdk_2017.0.113/mo/bin$ echo $PATH
/home/kasi/bin:/home/kasi/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/intel/computer_vision_sdk_2017.0.113/mo/bin
kasi@kasi:/opt/intel/computer_vision_sdk_2017.0.113/mo/bin$ echo $LD_LIBRARY_PATH
:/home/kasi/caffe_ssd/caffe/build/lib:/opt/intel/computer_vision_sdk_2017.0.113/mo/bin:/usr/local/bin

Kindly let me know how to solve this problem. Please find the attached log for more details.

Hi,

I have the same error which you mentioned:

./ModelOptimizer: error while loading shared libraries: libCommonInterfaces.so: cannot open shared object file: No such file or directory

How did you resolve this error?

 

0 Kudos
karcz__tomasz
Beginner
2,480 Views

Maxim Shevtsov (Intel) wrote:

Hi,
1) Since SSD features layers that are not in master Caffe (including a specific variation of the Normalize layer, which is what trips the conversion tool in your example), you first need to use the SSD branch:
    $ git clone https://github.com/weiliu89/caffe.git && cd ./caffe && git checkout ssd

2) Patch the code of the DetectionOutput layer. Currently, the ModelOptimizer tool does not support dynamic tensor sizing.
Specifically, add the following change to the DetectionOutputLayer<Dtype>::Reshape method:
   vector<int> top_shape2(2, 1);
   top_shape2.push_back(keep_top_k_);
   top_shape2.push_back(7);
   top[0]->Reshape(top_shape2);

3) Add the ModelOptimizer wrappers from the <INSTALL_DIR>/mo/adapters folder to the "caffe/include/caffe/" and "caffe/src/caffe/" directories of your local Caffe, respectively.
4) Recompile Caffe.
5) Set the FRAMEWORK_HOME environment variable to the directory containing the resulting libCaffe.so, e.g.
   $ export FRAMEWORK_HOME=<PATH_TO_YOUR_LOCAL_CAFFE>/build/lib

https://software.intel.com/en-us/model-optimizer-devguide-getting-starte...

 

 

Hi, I did all of these steps. Unfortunately, I'm getting an error:

 

ubuntu@ubuntu:/opt/intel/deeplearning_deploymenttoolkit_2017.1.0.5852/deployment_tools/model_optimizer/model_optimizer_caffe/bin$ ./ModelOptimizer -p FP32 -w ~/models/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel -d ~/models/VGG_VOC0712_SSD_300x300_deploy.prototxt -i -b 1
Start working...

Framework plugin: CAFFE
Network type: LOCALIZATION
Batch size: 1
Precision: FP32
Layer fusion: false
Horizontal layer fusion: NONE
Output directory: Artifacts
Custom kernels directory: 
Network input normalization: 1
[libprotobuf ERROR google/protobuf/text_format.cc:288] Error parsing text-format caffe.NetParameter: 1:6: Message type "caffe.NetParameter" has no field named "item".
F0305 06:58:35.386390 46605 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/ubuntu/models/VGG_VOC0712_SSD_300x300_deploy.prototxt
*** Check failure stack trace: ***
    @     0x7fb43890e5cd  google::LogMessage::Fail()
    @     0x7fb438910433  google::LogMessage::SendToLog()
    @     0x7fb43890e15b  google::LogMessage::Flush()
    @     0x7fb438910e1e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7fb438f5e481  caffe::ReadNetParamsFromTextFileOrDie()
    @     0x7fb438d90a9e  readTopology_
    @     0x7fb43abe1557  Model2OpenVX::CaffeNetworkDescriptor::CaffeNetworkDescriptor()
    @     0x7fb43abe85da  Model2OpenVX::CaffeNet::init()
    @     0x7fb43b99ec48  Model2OpenVX::FrameworkManager::GenerateIRFile()
    @     0x55d708613ee1  main
    @     0x7fb43b130830  __libc_start_main
    @     0x55d708615419  _start
    @              (nil)  (unknown)
Aborted (core dumped)

Can you help me?

 

 

 

EDIT:

I DID EVERYTHING FROM THE BEGINNING, INCLUDING DELETING EVERY FILE, AND IT WORKS.


So in the process I wrote instructions for my colleagues and my future self. It should work without a problem:
 

  1. Install l_deeplearning_deploymenttoolkit_2017.1.0.5852 on Ubuntu 16.04 by these steps:
    https://software.intel.com/en-us/dl-deployment-tool-installguide-using-the-gui-installation-wizard
  2. Install Caffe by these steps
    (based on this:
    https://software.intel.com/en-us/inference-engine-devguide-configuring-caffe
    and this:
    https://software.intel.com/es-es/forums/computer-vision/topic/742877):
    1. export MO_DIR=/opt/intel/deeplearning_deploymenttoolkit_2017.1.0.5852/deployment_tools/model_optimizer
    2. export CAFFE_HOME=~/caffe
    3. cd $MO_DIR/model_optimizer_caffe/install_prerequisites
    4. sudo ./install_Caffe_dependencies.sh
    5. Edit the file clone_patch_build_Caffe.sh:
      CAFFE_REPO=https://github.com/weiliu89/caffe.git
      CAFFE_BRANCH=ssd
      CAFFE_FOLDER=~/caffe
      CAFFE_BUILD_SUBFOLDER=build
    6. sudo ./clone_patch_build_Caffe.sh
    7. export FRAMEWORK_HOME="$CAFFE_HOME/build/lib"
    8. export LD_LIBRARY_PATH="$CAFFE_HOME/build/lib:$LD_LIBRARY_PATH"
    9. export LD_LIBRARY_PATH="$MO_DIR/model_optimizer_caffe/bin:$LD_LIBRARY_PATH"
    10. cd $CAFFE_HOME
    11. cp Makefile.config.example Makefile.config
    12. Uncomment line 8, "# CPU_ONLY := 1", if you want to run in CPU-only mode.
    13. export PYTHONPATH=$CAFFE_HOME/python
    14. make py
  3. In case of problems with the hdf5 library while building Caffe on Ubuntu* 16.04, edit Makefile.config:
    1. INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
    2. LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial/
  4. Follow the instructions at https://github.com/weiliu89/caffe/tree/ssd, from "Preparation" onward, to get the VOC-trained model.
  5. To test your model, edit $CAFFE_HOME/examples/ssd/ssd_detect.py (around line 39) and run it:
    1. #caffe.set_device(gpu_id)
    2. #caffe.set_mode_gpu()
    3. caffe.set_mode_cpu()
    4. cd $CAFFE_HOME
    5. python examples/ssd/ssd_detect.py
  6. To generate the Intermediate Representation (IR):
    1. Edit Preferences.xml in /opt/intel/deeplearning_deploymenttoolkit_2017.1.0.5852/deployment_tools/model_optimizer/model_optimizer_caffe/bin:
      <NetworkType Value="LOCALIZATION"/>
    2. mkdir ~/models
    3. cp $CAFFE_HOME/models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel ~/models/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel
    4. cp $CAFFE_HOME/models/VGGNet/VOC0712/SSD_300x300/deploy.prototxt ~/models/deploy.prototxt
    5. cd $MO_DIR/model_optimizer_caffe/bin
    6. ./ModelOptimizer -p FP32 -w ~/models/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel -d ~/models/deploy.prototxt -i -b 1
    7. If the error "File not found: data/VOC0712/labelmap_voc.prototxt" occurs: go to your Caffe location, run "pwd" there, copy the console output, and edit ~/models/deploy.prototxt, changing
      label_map_file: "data/VOC0712/labelmap_voc.prototxt" to "<paste>/data/VOC0712/labelmap_voc.prototxt"
    8. Go to $MO_DIR/model_optimizer_caffe/bin/Artifacts/<name_set_in.prototxt>/
    9. Copy the resulting files and use them with the sample application: https://software.intel.com/en-us/inference-engine-devguide-using-sample-applications
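The labelmap fix above can also be scripted. A sketch, demonstrated on a temp copy so it is safe to run anywhere; /home/ubuntu/caffe is an assumed checkout path (substitute your own "pwd" output and your real ~/models/deploy.prototxt):

```shell
# Sketch: rewrite the relative label_map_file path in deploy.prototxt
# to an absolute one, shown on a throwaway temp file.
tmp=$(mktemp)
echo 'label_map_file: "data/VOC0712/labelmap_voc.prototxt"' > "$tmp"

# /home/ubuntu/caffe is a placeholder for your actual Caffe directory.
sed -i 's|"data/VOC0712/|"/home/ubuntu/caffe/data/VOC0712/|' "$tmp"

result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

Running the same sed against the real ~/models/deploy.prototxt performs the edit in place.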
       
0 Kudos
Reply