Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision-related development on Intel® platforms.

Load custom TensorFlow model in OpenVINO sample

Kaufmann__Elia
Beginner

Hi there

I trained, froze, and converted a custom TensorFlow model using the Model Optimizer. The conversion completed successfully.
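For completeness, the conversion command was of this general form (the input shape and file names here are placeholders, not my exact values):

python3 mo_tf.py --input_model frozen_graph.pb --input_shape [1,224,224,3] --output_dir .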

The model is a simple feedforward network that consumes a single image and produces an 8D vector. 


To get started with OpenVINO, I adapted the car detection tutorial from:

https://github.com/intel-iot-devkit/inference-tutorials-generic

The tutorial runs fine on my laptop when the provided model is loaded (vehicle-detection-adas-0002.xml). 


To run the code sample with my own model, I need to adapt the checks for input and output sizes (line 223 in ~/inference-tutorials-generic/car_detection_tutorial/step_2/main.cpp) like this:

if (objectSize != 8) { ...  } 

if (outputDims.size() != 2) { ... }
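In context, the adapted checks look roughly like this (a sketch only; variable names follow the tutorial and the 1.x Inference Engine API, where dims are stored fastest-varying first, so details may differ between releases):

// Output info of the loaded network (1.x API):
InferenceEngine::OutputsDataMap outputInfo = netReader.getNetwork().getOutputsInfo();
InferenceEngine::DataPtr& output = outputInfo.begin()->second;
const InferenceEngine::SizeVector outputDims = output->dims;

// My model emits a [N, 8] blob, so the innermost dim holds the 8D vector:
size_t objectSize = outputDims[0];
if (objectSize != 8) {
    throw std::logic_error("Output should hold an 8D vector per image");
}
if (outputDims.size() != 2) {
    throw std::logic_error("Expected a 2D output blob");
}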

When I run the adapted sample and load my own model, the program fails to load the network. 

./car_detection_tutorial -m optimized_graph.xml
Output:
InferenceEngine: 
	API version ............ 1.4
	Build .................. 17328
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin CPU

	API version ............ 1.4
	Build .................. lnx_20181004
	Description ....... MKLDNNPlugin
[ INFO ] Loading network files for VehicleDetection
[ INFO ] Batch size is set to 1 for Vehicle Detection
[ INFO ] Checking Vehicle Detection inputs
[ INFO ] Checking Vehicle Detection outputs
[ INFO ] Loading Vehicle Detection model to the CPU plugin
[ ERROR ] std::exception

I am aware that the demo will not work entirely with a different network, but I expected the network to at least load and the first image to be fed through it.

I attached the converted model from the Model Optimizer as well as the original frozen graph from TensorFlow.

Thanks for any help!

Severine_H_Intel
Employee

Dear Elia, 

I tried your network through our benchmark app (included in the samples), which loads and runs a model without doing any output processing. The network works properly there, so I think the issue is in the output processing.
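A run of this general form exercises only loading and inference (the image path below is a placeholder):

./benchmark_app -m optimized_graph.xml -i <path_to_image> -d CPU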

From my experience, it is better to ramp up from a simpler sample like classification_sample or hello_classification when customizing a sample; a minimal skeleton is sketched below. The sample you have chosen has a more complicated structure than the samples I mention.
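For orientation, the core of those simpler samples reduces to roughly this skeleton (a sketch against the 1.x Inference Engine API; plugin-discovery and helper names vary between releases, and the input-filling step is elided):

#include <inference_engine.hpp>
using namespace InferenceEngine;

int main() {
    // Minimal load-and-infer skeleton in the style of hello_classification:
    CNNNetReader netReader;
    netReader.ReadNetwork("optimized_graph.xml");
    netReader.ReadWeights("optimized_graph.bin");
    CNNNetwork network = netReader.getNetwork();
    network.setBatchSize(1);

    // Load onto the CPU plugin and create an inference request:
    InferencePlugin plugin = PluginDispatcher({""}).getPluginByDevice("CPU");
    ExecutableNetwork executable = plugin.LoadNetwork(network, {});
    InferRequest request = executable.CreateInferRequest();

    // ... fill the input blob with a preprocessed image here ...
    request.Infer();
    Blob::Ptr result = request.GetBlob(network.getOutputsInfo().begin()->first);
    return 0;
}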

Best, 

Severine
