I converted mobilenetSSD_v2_coco for use with OpenVINO 2021.3 using mo.py. The xml and bin files from that conversion seem to work fine when loaded into OpenVINO 2024.2 and 2024.3. The problem is that I lost the source where I downloaded the model I converted, and I need to include instructions for downloading and converting the model in the next version of my project
https://github.com/wb666greene/AI-Person-Detector-with-YOLO-verification-Version-2/tree/main
since the bin file is too large to upload to github (and I'm not sure if it would be allowed or not).
I found this
http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
which looks to be the same frozen_inference_graph.pb file as the one I originally converted with mo.py, since both files are 69688296 bytes. Converting either frozen_inference_graph.pb results in the same error when I load the 2024.x-converted model in my code.
import openvino as ov
ov_model = ov.convert_model('frozen_inference_graph.pb')
ov.save_model(ov_model, 'ssd_mobilenet_v2_coco.xml')
# When I load the converted model in my code:
model_path = '../ssd_mobilenet_v2_coco.xml'  # converted with OpenVINO 2024
model = core.read_model(model_path)
if len(model.inputs) != 1:
    log.error('Supports only single input topologies.')
    return -1
if len(model.outputs) != 1:
    log.error('Supports only single output topologies')
    return -1
# it triggers this error:
[ ERROR ] Supports only single output topologies
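A likely explanation (an assumption, not verified against this exact model): the old mo.py conversion with ssd_v2_support.json fused the TF Object Detection API postprocessing into a single DetectionOutput of shape [1, 1, N, 7], while the new ov.convert_model keeps the graph's native outputs, which for TF OD API SSD exports are typically detection_boxes, detection_scores, detection_classes, and num_detections — so the single-output check fails. A pure-Python sketch of repacking those four outputs into the legacy row layout (repack_detections is a hypothetical helper name, and the TF [ymin, xmin, ymax, xmax] box order is assumed):

```python
# Sketch: repack the four TF OD API output tensors into the old
# DetectionOutput row format [image_id, label, confidence, xmin, ymin, xmax, ymax].
def repack_detections(boxes, scores, classes, num_detections, image_id=0):
    """boxes: list of [ymin, xmin, ymax, xmax] (TF order);
    scores/classes: parallel lists; num_detections: count of valid rows."""
    rows = []
    for i in range(int(num_detections)):
        ymin, xmin, ymax, xmax = boxes[i]
        rows.append([image_id, int(classes[i]), scores[i],
                     xmin, ymin, xmax, ymax])
    return rows

# Usage with made-up values in normalized coordinates:
rows = repack_detections(
    boxes=[[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 1.0, 1.0]],
    scores=[0.9, 0.3],
    classes=[1.0, 17.0],
    num_detections=2)
```

With something like this in place, the `len(model.outputs) != 1` check could be relaxed to accept the four-output form instead of returning -1.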
The mo.py command line I used in 2021 was:
python3 mo_tf.py --input_model /home/ai/ssdv2/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /home/ai/ssdv2/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/ai/ssdv2/pipeline.config --data_type FP16
One obvious difference is that the newly downloaded TensorFlow model did not include the ssd_v2_support.json file.
All I know about converting a model comes from this page:
https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-tensorflow.html
It seemed simple enough and I thought I had it until I tried to run the 2024.3-converted model.
I am using 2024.3 at the moment; I have pip installed openvino, openvino-dev, and openvino-telemetry (brought in by something else).
When I try to run the sample code I'm missing a module — what is the name that I pip install?
~/MOtest$ python object_detection_demo.py -m frozen_inference_graph.xml -i images -d GPU
Traceback (most recent call last):
File "/home/wally/MOtest/object_detection_demo.py", line 29, in <module>
from model_api.models import DetectionModel, DetectionWithLandmarks, RESIZE_TYPES, OutputTransform
ModuleNotFoundError: No module named 'model_api'
pip install model_api fails with no matching distribution.
Hi,
That error is related to the Python Model API package.
Try running pip install <omz_dir>/demos/common/python to install the Model API from source.
You may refer to this documentation
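A minimal sketch of that install, assuming <omz_dir> is a fresh clone of the Open Model Zoo repository (clone path and checkout layout not verified for your setup):

```shell
# Clone Open Model Zoo, then install the Python Model API from its sources
git clone https://github.com/openvinotoolkit/open_model_zoo.git
pip install open_model_zoo/demos/common/python
```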
Cordially,
Iffa
Hi,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Cordially,
Iffa
