Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.
6573 Discussions

TensorFlow Object Counting API with OpenVINO

n30
Beginner
2,888 Views

Hi,

Is there a good guide or tutorial on how to use the TensorFlow Object Counting API with OpenVINO, ideally on a Raspberry Pi with the Intel Neural Compute Stick, and ideally for custom objects using a frozen model in the form of a .pb file?

I have searched for one, but found only solutions for individual parts of the pipeline, and those parts do not work together. If anyone has any links, please let me know.

Many thanks o_O

0 Points
4 Replies
David_C_Intel
Employee
2,865 Views

Hi n30,

Thanks for reaching out. We do not have an official guide or tutorial for that specific API, but you can refer to the Converting TensorFlow* Object Detection API Models documentation to get an idea of how the conversion works. There are also other projects, such as this People Counter, that use the OpenVINO™ toolkit and the Intel® NCS2; you can review and modify them to suit your needs. A rough outline of how the pieces fit together is sketched below.
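As a rough, unverified sketch of the overall flow, assuming an SSD-based TensorFlow Object Detection API model and an OpenVINO release from around 2020 (all file names, paths, and the demo script are placeholders or assumptions, not a confirmed recipe):

# Step 1, on a development machine: convert the frozen TensorFlow graph to
# OpenVINO IR. FP16 is required for the Neural Compute Stick (MYRIAD device).
python mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --transformations_config extensions/front/tf/ssd_v2_support.json \
    --reverse_input_channels \
    --data_type FP16

# Step 2, on the Raspberry Pi: run the resulting IR on the stick via one of
# the Open Model Zoo detection demos; counting then amounts to tallying the
# detections reported per frame above a confidence threshold.
python object_detection_demo_ssd_async.py \
    -m frozen_inference_graph.xml \
    -i cam \
    -d MYRIAD

The main points are that the Model Optimizer needs the frozen graph together with the Object Detection API pipeline config, and that inference on the stick requires an FP16 IR and the MYRIAD device plugin.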

Best regards,

David C.


0 Points
n30
Beginner
2,827 Views

Dear David,

Thank you for your reply.

I followed the Udacity tutorial. However, when I try to convert my .pb file to the IR format using the Model Optimizer, I get the following error:

[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "retractedpath/model_optimizer/model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Wrong wire type in tag..
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.

Could it be that this fails because I exported the .pb from the Google Cloud AutoML Vision API?

I used the "Export your model as a TF Saved Model to run on a Docker container" export option. Is there an extra step required here? It appears to be a normal .pb file. There is no export option for plain TensorFlow, only this one, TensorFlow.js, and TensorFlow Lite.
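A likely explanation, for reference: the AutoML "Saved Model" export writes a SavedModel protobuf (saved_model.pb), which is a different serialization from the frozen GraphDef that --input_model expects, and feeding one to the other produces exactly this kind of "Wrong wire type in tag" parse error. A minimal workaround sketch, assuming a Model Optimizer version that accepts SavedModel input directly (the directory name is a placeholder):

# Point the Model Optimizer at the SavedModel directory rather than at the .pb file.
python mo_tf.py \
    --saved_model_dir exported_model \
    --data_type FP16

If --saved_model_dir is not available in the installed Model Optimizer version, the SavedModel would first need to be frozen into a GraphDef with TensorFlow's own tooling before conversion.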


Thanks!

0 Points
David_C_Intel
Employee
2,836 Views

Hi n30,

Could you please answer the following:

  • Which topology is your model based on?
  • Could you share your .pb file and the command you used to convert it to IR format, so we can test it from our end?

Regards,

David C.


0 Points
David_C_Intel
Employee
2,813 Views

Hi n30,

If you have any additional questions, please open a new thread, as this discussion will no longer be monitored.

Best regards,

David C.


0 Points