Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

TensorFlow Object Counting API with OpenVINO

n30
Beginner

Hi,

Is there a good guide or tutorial on how to use the TensorFlow Object Counting API with OpenVINO, ideally on a Raspberry Pi with the Intel Neural Compute Stick, and ideally for custom objects using a frozen model in the form of a .pb file?

I have really tried to find something, but only encountered solutions for parts of the pipeline, which then do not work together. If anyone has any links, please let me know.

Many thanks o_O

David_C_Intel
Employee

Hi n30,

Thanks for reaching out. We do not have an official guide or tutorial for that specific API, but you can refer to this Converting TensorFlow* Object Detection API Models documentation to try it and get an idea of how it works. There are also other projects, such as this People Counter, that use the OpenVINO™ toolkit and the Intel® NCS2, which you can check and modify to suit your needs.
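For reference, converting a frozen Object Detection API model usually involves a Model Optimizer command along the following lines. This is only a sketch: the file names are placeholders, and the correct --transformations_config file (ssd_v2_support.json here) depends on your model's topology and your OpenVINO™ version:

    # Convert a frozen TF Object Detection API graph to IR (illustrative paths)
    python mo_tf.py \
        --input_model frozen_inference_graph.pb \
        --tensorflow_object_detection_api_pipeline_config pipeline.config \
        --transformations_config extensions/front/tf/ssd_v2_support.json \
        --reverse_input_channels \
        --output_dir ir_model/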

Best regards,

David C.


n30
Beginner

Dear David,

Thank you for your reply.

I followed the Udacity tutorial. However, when I try to convert my .pb file to the IR format using the Model Optimizer, I get the following error:

[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "retractedpath/model_optimizer/model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Wrong wire type in tag..
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.

Is it possible this is because I exported the .pb from the Google Cloud AutoML Vision API?

I used the "Export your model as a TF Saved Model to run on a Docker container." export option. Is there an extra step needed here? It appears to be a normal .pb file. There is no export option for plain TensorFlow, only this one, TensorFlow.js, and TensorFlow Lite.
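Reading around a bit, I wonder if the problem is that a SavedModel's saved_model.pb is not a frozen GraphDef, which would explain the "Wrong wire type in tag" error. If so, maybe the Model Optimizer needs to be pointed at the SavedModel directory instead of the .pb file itself, something like this (paths are placeholders):

    # Pass the SavedModel directory, not saved_model.pb itself
    python mo_tf.py --saved_model_dir exported_model/ --output_dir ir_model/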

 

Thanks!

David_C_Intel
Employee

Hi n30,

Could you please answer the following:

  • Which topology is your model based on?
  • Could you share your .pb file and the command you used to convert it to the IR format, so we can test it on our end?

Regards,

David C.


David_C_Intel
Employee

Hi n30,

If you have any additional questions, please submit a new thread, as this discussion will no longer be monitored.

Best regards,

David C.

