I created a custom Faster R-CNN TensorFlow model using the TensorFlow Object Detection API. I converted the model with the Model Optimizer to get the XML and BIN files (I downloaded the JSON files from the forums that support custom models). When I try to run inference using this code:
net = IENetwork(model=model_xml, weights=model_bin)
plugin = IEPlugin("CPU", plugin_dirs=plugin_dir)
plugin.add_cpu_extension("inference_engine_samples_build/intel64/Release/lib/libcpu_extension.dylib")
input_blob = "image_tensor"
out_blob = next(iter(net.outputs))
exec_net = plugin.load(network=net, num_requests=2)
image = cv2.imread(image_file)
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image = np.expand_dims(cv2.resize(image, (600, 600)), 0)
out = exec_net.infer({"image_tensor": image.astype(np.float32)})
The program crashes and says "Segmentation fault".
Is there something that I'm doing wrong?
Dear Kaundinya, Achal,
Nothing in your code jumps out at me as being wrong. I see that you are using macOS. Can you try running the C++ object_detection_demo on your model first? Does it crash? If it crashes on your custom-trained Faster R-CNN TensorFlow model, then it's an OpenVINO bug. If it succeeds, then the bug is in your code, and you can compare your code with the object_detection_demo sample to see what's different.
Thanks for using OpenVINO,
Shubha
Dear Shubha,
If I run the "object_detection_demo", it crashes with the same message, but if I run the "object_detection_sample_ssd", it runs perfectly. In the Python examples, every sample takes only one input topology; my model has two input topologies, "image_info" and "image_tensor". How do I go about changing the code so that it can take two input topologies?
Regards,
Achal
Dear Shubha,
While I was checking the Python API reference, I noticed that macOS is currently not listed among the supported operating systems. Is that why I'm getting the segmentation fault?
Hello Achal,
The Python API is fully supported on macOS for Python 3.5-3.7; unfortunately, it is just not mentioned in the documentation.
Most likely your IR has two inputs: "image_tensor" with the input image data and "image_info" with the input image size. Please have a look at the .xml and search for the layers with type="Input" (the "image_info" input may not be the first layer). Alternatively, you can use the Netron tool to visualize the model (the .xml file); that may be an easier way to explore the model and find the inputs.
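If it is easier to check programmatically, here is a minimal sketch (reusing the model_xml and model_bin variables from your snippet) that lists the IR's input layers:

from openvino.inference_engine import IENetwork

net = IENetwork(model=model_xml, weights=model_bin)
for name, info in net.inputs.items():
    # Depending on the OpenVINO release, the value is either a shape list
    # or an InputInfo object with a .shape attribute.
    print(name, getattr(info, "shape", info))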
So, if your model really has two inputs, you have to fix the infer call (the last line of your code snippet), for example:
_, _, h, w = image.shape
out = exec_net.infer({"image_tensor": image.astype(np.float32), "image_info": (h, w, 1)})
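For completeness, here is an end-to-end sketch of the two-input inference. It is only an illustration: it assumes the usual NCHW layout ([1, 3, H, W]) that the Model Optimizer produces for TF Faster R-CNN image inputs, and it reuses the variable names and paths from your original snippet; adjust the input names and shapes to whatever net.inputs actually reports for your IR.

import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model=model_xml, weights=model_bin)
plugin = IEPlugin("CPU", plugin_dirs=plugin_dir)
plugin.add_cpu_extension("inference_engine_samples_build/intel64/Release/lib/libcpu_extension.dylib")
exec_net = plugin.load(network=net, num_requests=2)

# Query the expected input resolution instead of hard-coding 600x600.
# On older releases net.inputs[...] is already a shape list rather than an object.
input_info = net.inputs["image_tensor"]
n, c, h, w = getattr(input_info, "shape", input_info)

image = cv2.imread(image_file)        # BGR, HWC
image = cv2.resize(image, (w, h))
image = image.transpose((2, 0, 1))    # HWC -> CHW
image = np.expand_dims(image, 0).astype(np.float32)

out = exec_net.infer({
    "image_tensor": image,
    "image_info": np.array([[h, w, 1]], dtype=np.float32),  # same (h, w, 1) values as above
})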