Hello,
I am trying to perform inference with a model I converted from the TensorFlow model zoo, namely faster_rcnn_inception_v2, using the parameters from the model downloader. I tried to do this with the latest version, but I kept getting the following error.
The command:
./mo_tf.py --reverse_input_channels --data_type FP16 --output_dir=/home/russell/model --input_shape=[1,600,1024,3] --input=image_tensor --output=detection_scores,detection_boxes,num_detections --transformations_config=/opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config=/home/russell/Downloads/faster_rcnn_inception_v2_coco_2018_01_28/pipeline.config --input_model=/home/russell/Downloads/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
The error:
[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>)": argument of type 'method' is not iterable
[ ERROR ] Traceback (most recent call last):
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 288, in apply_transform
for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
func(graph)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/front/tf/replacement.py", line 95, in find_and_replace_pattern
self.replace_sub_graph(graph, match)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/front/common/replacement.py", line 144, in replace_sub_graph
remove_nodes = self.nodes_to_remove(graph, match)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 561, in nodes_to_remove
if output in graph.nodes:
TypeError: argument of type 'method' is not iterable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 307, in main
return driver(argv)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 272, in driver
ret_res = emit_ir(prepare_ir(argv), argv)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 237, in prepare_ir
graph = unified_pipeline(argv)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/pipeline/unified.py", line 29, in unified_pipeline
class_registration.ClassType.BACK_REPLACER
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 334, in apply_replacements
apply_replacements_list(graph, replacers_order)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 324, in apply_replacements_list
num_transforms=len(replacers_order))
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/logger.py", line 124, in wrapper
function(*args, **kwargs)
File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 312, in apply_transform
)) from err
Exception: Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>)": argument of type 'method' is not iterable
[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------
I couldn't find a fix for this, so I downgraded to 2019 R3 and got an output with this command:
./mo_tf.py --input_model ~/model/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config ~/model/pipeline.config --output_dir ~/model/ --input_shape [1,600,600,3] --reverse_input_channels --input=image_tensor --output=detection_scores,detection_boxes,num_detections --data_type FP16
I'm not 100% sure when Faster R-CNN support was added to deployment_tools, but the documentation clearly states that the NCS supports it:
https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_MYRIAD.html
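Just to double-check the converted IR before running it, I list its inputs and outputs with the same Python API I use below. This is only a quick sketch; the paths are assumed to point at the IR generated by the command above:

# Quick sketch: list the IR's inputs and outputs to confirm the conversion looks sane
from openvino.inference_engine import IENetwork

net = IENetwork(model="model/frozen_inference_graph.xml",
                weights="model/frozen_inference_graph.bin")
for name, info in net.inputs.items():
    print("input :", name, info.shape)   # expect a 4-D image input plus a 2-D image_info input
for name, info in net.outputs.items():
    print("output:", name, info.shape)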
Now, when I try to perform inference on the model, nothing is detected, but I do get a blank output with this code:
import cv2
import os
import numpy as np
from imutils.video import VideoStream
from imutils.video import FPS
import time
from openvino.inference_engine import IENetwork, IECore, IEPlugin

CWD_PATH = os.getcwd()
MODEL_xml = os.path.join(os.getcwd(), "model", "frozen_inference_graph.xml")
MODEL_bin = os.path.join(os.getcwd(), "model", "frozen_inference_graph.bin")
labelsPath = os.path.join(os.getcwd(), "model", "classes.txt")
LABELS = open(labelsPath).read().strip().split("\n")
COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8")

if __name__ == "__main__":
    plugin = IEPlugin("MYRIAD")
    net = IENetwork(model=MODEL_xml, weights=MODEL_bin)
    plugin.set_config({"VPU_HW_STAGES_OPTIMIZATION": "YES"})

    # Pick the 4-D blob as the image input; the IR also exposes a 2-D image_info input
    for blob_name in net.inputs:
        if len(net.inputs[blob_name].shape) == 4:
            input_blob = blob_name
        elif len(net.inputs[blob_name].shape) == 2:
            img_info_input_blob = blob_name

    out_blob = next(iter(net.outputs))
    print(out_blob)
    n, c, h, w = net.inputs[input_blob].shape
    print(net.inputs[input_blob].shape)

    exec_net = plugin.load(network=net)
    del net

    camera = VideoStream(usePiCamera=True).start()
    time.sleep(2.0)
    fps = FPS().start()

    while True:
        t1 = cv2.getTickCount()
        frame = camera.read()

        # Resize to the network input size and convert HWC -> NCHW
        in_frame = cv2.resize(frame, (w, h))
        in_frame = in_frame.transpose((2, 0, 1))
        in_frame = in_frame.reshape((n, c, h, w))

        start = time.time()
        results = exec_net.infer(inputs={input_blob: in_frame})
        end = time.time()
        inf_time = end - start
        print('Inference Time: {} Seconds Single Image'.format(inf_time))

        detections = results[out_blob][0][0]
        print(detections)
I attached the model I am using.
I am running this on a Raspberry Pi 4 that I set up by following the install guide.
Can anyone tell me what I am doing wrong? Can I get a valid output from the 2020 version of the Model Optimizer?
Many thanks,
Russell
Hi Russell,
Thanks for reaching out. We are currently working on your inquiry; if we get any updates, we will let you know. For now, we managed to run your code successfully using the IR files you shared by changing this line:
camera = VideoStream(usePiCamera=True).start()
To:
camera = VideoStream(0).start()
Instead of using the Pi camera module, we used a USB webcam to get this to work.
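If you do want to keep the Pi camera module, a quick check on your side (just a sketch, using the same imutils VideoStream API as in your script) is to confirm that the stream actually returns frames before feeding them to the network:

# Sketch: verify the Pi camera stream is delivering frames at all
from imutils.video import VideoStream
import time

camera = VideoStream(usePiCamera=True).start()
time.sleep(2.0)
frame = camera.read()
print("Got a frame of shape", frame.shape if frame is not None else None)
camera.stop()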
Best regards,
David
Hello,
Thanks for the reply. I have figured out where I went wrong.
If I run this code, I get a correct output from the inference:
plugin = IEPlugin("MYRIAD")
net = IENetwork(model=MODEL_xml, weights=MODEL_bin)
plugin.set_config({"VPU_HW_STAGES_OPTIMIZATION": "YES"})

feed_dict = {}
img_info_input_blob = None
for blob_name in net.inputs:
    if len(net.inputs[blob_name].shape) == 4:
        input_blob = blob_name
    elif len(net.inputs[blob_name].shape) == 2:
        img_info_input_blob = blob_name

out_blob = next(iter(net.outputs))
print(out_blob)
n, c, h, w = net.inputs[input_blob].shape

# The IR has a second, 2-D image_info input; feed it [height, width, scale]
if img_info_input_blob:
    feed_dict[img_info_input_blob] = [h, w, 1]
print(net.inputs[input_blob].shape)

exec_net = plugin.load(network=net)
del net

camera = cv2.VideoCapture(1)
assert camera.isOpened(), "Can't open camera 1"

cur_request_id = 0
while camera.isOpened():
    ret, frame = camera.read()
    if ret:
        frame_h, frame_w = frame.shape[:2]
    if not ret:
        break

    # Resize to the network input size and convert HWC -> NCHW
    in_frame = cv2.resize(frame, (w, h))
    in_frame = in_frame.transpose((2, 0, 1))
    in_frame = in_frame.reshape((n, c, h, w))
    feed_dict[input_blob] = in_frame

    start = time.time()
    results = exec_net.start_async(request_id=cur_request_id, inputs=feed_dict)
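To actually read the result back inside the loop, I wait on the async request. Here is a rough sketch of what I do, assuming the usual [1, 1, N, 7] DetectionOutput layout ([image_id, label, confidence, x_min, y_min, x_max, y_max], with coordinates normalized to 0-1); the -1 label offset is just my guess for how my classes.txt is laid out:

# Rough sketch: wait for the request, then read and filter the detections
if exec_net.requests[cur_request_id].wait(-1) == 0:
    detections = exec_net.requests[cur_request_id].outputs[out_blob][0][0]
    for det in detections:
        image_id, label, conf, xmin, ymin, xmax, ymax = det
        if conf > 0.5:
            box = (int(xmin * frame_w), int(ymin * frame_h),
                   int(xmax * frame_w), int(ymax * frame_h))
            print(LABELS[int(label) - 1], float(conf), box)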
I'm not sure why this works and the other one doesn't, but it works well enough for me.
Many thanks,
Russell
Hi Russell,
Great that you made it work!
If you need further assistance, let us know.
Best regards,
David