Hi,
I trained a faster_rcnn_resnet50 model on the Oxford Pets dataset using the TensorFlow Object Detection API.
I am unable to run the Model Optimizer on frozen_inference_graph.pb:
C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer>python mo_tf.py --input_model d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\faster_rcnn_resnet50_pets_shay.config
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\frozen_inference_graph.pb
- Path for generated IR: C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\.
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\faster_rcnn_resnet50_pets_shay.config
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 1.2.185.5335e231
[ ERROR ] Node Preprocessor/map/while/ResizeToRange/unstack has more than one outputs. Provide output port explicitly.
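For Faster R-CNN models exported from the Object Detection API, this release of Model Optimizer ships a subgraph-replacement config that is normally passed alongside the pipeline config; the "more than one outputs" error on the preprocessor node typically goes away once it is supplied. A sketch (paths relative to the model_optimizer directory; model paths shortened from the command above):

```shell
REM Sketch: same conversion, with the Faster R-CNN replacement config that
REM ships with Model Optimizer. Adjust the model paths to your own layout.
python mo_tf.py ^
  --input_model frozen_inference_graph.pb ^
  --tensorflow_object_detection_api_pipeline_config faster_rcnn_resnet50_pets_shay.config ^
  --tensorflow_use_custom_operations_config extensions\front\tf\faster_rcnn_support.json
```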
It would be great if I could run the Model Optimizer on the meta checkpoint file instead. Can you tell me how?
Thanks.
My files can be viewed at:
https://www.dropbox.com/sh/dh1c325m0t22qsn/AAAJRfedjbF0uMsTLWyS6uVYa?dl=0
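Regarding re-exporting from the checkpoint: the Object Detection API includes an export script that freezes a graph from a training checkpoint. A sketch, where the checkpoint step number (XXXX) and paths are placeholders for your own layout:

```shell
# Sketch: freeze an inference graph from the training checkpoint with the
# Object Detection API's export script. XXXX is the checkpoint step number.
python object_detection/export_inference_graph.py \
  --input_type image_tensor \
  --pipeline_config_path faster_rcnn_resnet50_pets_shay.config \
  --trained_checkpoint_prefix model.ckpt-XXXX \
  --output_directory exported
```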
Hi,
I tried to convert the model you attached, but I encountered another issue that might be caused by a TensorFlow version mismatch. Which TensorFlow version did you use to freeze the model?
I also tried to freeze the model on my end with "FreezeGraphDogsCats.py", but you seem to have forgotten to attach the "checkpoint" file; I only see the index, meta, and data files. Could you add the checkpoint file so I can freeze the model on my end? Thank you.
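For reference, the "checkpoint" file being asked for is a small text file that sits next to the .index/.meta/.data shards and records the latest checkpoint prefix. It looks roughly like this (the step number is illustrative):

```
model_checkpoint_path: "model.ckpt-200000"
all_model_checkpoint_paths: "model.ckpt-200000"
```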
Hi,
My TensorFlow version on the Ubuntu machine where I train is 1.11 (tensorflow-gpu).
My TensorFlow version on Windows was also 1.11, and I was able to train on GPU there.
After I ran the Model Optimizer prerequisites script, TensorFlow 1.12 got installed on Windows!
Since the 1.12 installation, my training runs on CPU only.
I run the Model Optimizer on Windows, so I guess I am optimizing with TF 1.12.
I will look for the checkpoint file.
Thanks.
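The CPU-only training after running the prerequisites script is consistent with the plain (CPU-only) tensorflow package having been installed over tensorflow-gpu. A sketch of restoring the GPU build (package names and pinned version are an assumption based on this thread):

```shell
# Sketch: remove the CPU-only package installed by the MO prerequisites
# and reinstall the GPU build used for training.
pip uninstall -y tensorflow
pip install tensorflow-gpu==1.11.0
```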
Maybe I confused you.
I train on Ubuntu with TF 1.11.
I run the Model Optimizer on Windows with TF 1.12 (the version installed by the MO prerequisites script).
Thanks.
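The mismatch between the training environment (TF 1.11) and the Model Optimizer environment (TF 1.12) can be checked explicitly. A minimal sketch, with the version strings taken from this thread:

```python
# Compare the TF version used to freeze the graph against the one the
# Model Optimizer environment uses (major.minor is what usually matters).
def version_tuple(version):
    """'1.11.0' -> (1, 11)."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

training_tf = "1.11.0"  # Ubuntu training machine (from this thread)
mo_tf = "1.12.0"        # installed by the MO prerequisites on Windows

if version_tuple(training_tf) != version_tuple(mo_tf):
    print("graph frozen with TF %s, but MO environment has TF %s"
          % (training_tf, mo_tf))
```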
I uploaded the checkpoint file.
Many thanks for your help.
Hi Shay,
I've tried to convert your model, but it's confusing: there are multiple .pb and .config files, and I don't know which is the right pairing.
I tried "faster_rcnn_resnet50_pets_shay.config" with "faster_rcnn_resnet50_pets_shay.pb"; the conversion fails with the error in the attached "ERROR_SHAY_CONFIG.PNG". The command I used is shown in "Command_SHAY_CONFIG.PNG".
Another try with "faster_rcnn_resnet50_pets_shay.config" and "exported\frozen_inference_graph.pb" fails with the error in the attached "FROZEN_PB_SHAY_CONFIG.PNG". The command used is shown in "Command_FROZEN_PB_SHAY_CONFIG.PNG".
I also tried the stock faster_rcnn_resnet50_coco_2018_01_28 model with object_detection_demo_ssd_async.exe, and there was no error in either the conversion or the application run. Can you make sure you didn't change the topology in your own training?
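For reference, the demo can be pointed at the converted IR roughly like this (a sketch; the input video path and device are placeholders for your setup):

```shell
REM Sketch: run the async object detection demo on the IR produced by
REM Model Optimizer. Replace the input and device with your own.
object_detection_demo_ssd_async.exe -i input_video.mp4 -m frozen_inference_graph.xml -d CPU
```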