Intel® Distribution of OpenVINO™ Toolkit

Having trouble with convert_model and a TensorFlow model with pip-installed 2024.3

wb666greene
Beginner

I converted mobilenetSSD_v2_coco for use with openvino 2021.3 using mo.py.  The xml and bin files from that conversion seem to work fine when loaded into openvino 2024.2 and 2024.3.  The problem is that I lost track of where I downloaded the model I converted, and I need to include instructions for downloading and converting the model in the next version of my project

https://github.com/wb666greene/AI-Person-Detector-with-YOLO-verification-Version-2/tree/main 

since the bin file is too large to upload to GitHub (and I'm not sure if it would be allowed anyway).

 

I found this

http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz 

which looks to be the same frozen_inference_graph.pb file I originally converted with mo.py, since both files are 69,688,296 bytes. Converting either frozen_inference_graph.pb file results in the same error when I load the 2024.x converted model into my code.

import openvino as ov
ov_model = ov.convert_model('frozen_inference_graph.pb')
ov.save_model(ov_model, 'ssd_mobilenet_v2_coco.xml')  # save_model wants the .xml path

# When I load the converted model in my code:
model_path = '../ssd_mobilenet_v2_coco.xml'    # converted with openvino 2024
model = core.read_model(model_path)
if len(model.inputs) != 1:
    log.error('Supports only single input topologies.')
    return -1
if len(model.outputs) != 1:
    log.error('Supports only single output topologies')
    return -1

# it triggers this error:
  ckpt = torch.load(file, map_location="cpu")
[ ERROR ] Supports only single output topologies

 The mo.py command line I used in 2021 was:

python3 mo_tf.py --input_model /home/ai/ssdv2/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /home/ai/ssdv2/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/ai/ssdv2/pipeline.config --data_type FP16

One obvious difference is that the newly downloaded TensorFlow model does not include the ssd_v2_support.json file.

 

All I know about converting a model comes from this page:

https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-tensorflow.html 

It seemed simple enough and I thought I had it until I tried to run the 2024.3 converted model.

Iffa_Intel
Moderator

Hi,

 

Try using ovc your_model_file.pb instead.

I managed to convert it this way.

[screenshot: terminal session showing the ovc conversion]
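In plain text, the command was essentially this (a minimal sketch; the output name is just an example, and ovc also works with only the .pb path):

ovc frozen_inference_graph.pb --output_model ssd_mobilenet_v2_coco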

 

You may refer to this documentation (click on CLI instead of Python)

 

 

Cordially,

Iffa

wb666greene
Beginner

I don't get it. What is the command you used, "ovc frozen_model.pb"?  I don't see any mention of "ovc" in the doc you linked.

 

import openvino as ov
ov_model = ov.convert_model('your_model_file.pb')

 

That is what I did. I'm pretty sure it is a TF1 model, but I'm not seeing what is different in the conversion command between TF1 and TF2, other than that the TF1 argument is a *.pb file whereas the TF2 argument is a directory.

What I downloaded has a saved_model sub-directory with a saved_model.pb file and a variables sub-directory, but the variables directory is empty and there is no assets sub-directory.  I will try pointing it at the saved_model directory and reply if it works, as I can do this in a few minutes.

 

Edit: I did:

    import openvino as ov
    ov_model = ov.convert_model('../ssd_mobilenet_v2_coco_2018_03_29/saved_model')
    ov.save_model(ov_model,'ssd_mobilenet_v2_coco.xml')

It created the *.xml and *.bin files but when I load them into my inference code I still get: 

ckpt = torch.load(file, map_location="cpu")
[ ERROR ] Supports only single output topologies

 

Iffa_Intel
Moderator

You will see it if you click on the CLI section instead of Python.

 

[screenshot: the CLI tab of the model conversion documentation]

 

I converted your model using the ovc command in cmd/terminal.

[screenshot: converting the model with the ovc command in a terminal]

Inference result (the model has a dynamic shape, so the shape needs to be provided during inference):

[screenshot: inference result]

 

 

Cordially,

Iffa

 

wb666greene
Beginner

These screen caps of terminal sessions are really hard to read and at the moment are of near-zero help.  I had hoped it could all be done with the pip install of OpenVINO, as the apt installs of 2024 still seem broken in terms of Python support.

wb666greene
Beginner

I didn't realize that the ovc command only exists (works) in the VENV virtual environment where OpenVINO was pip installed. When I tried it from a normal terminal I got "Command 'ovc' not found", which made me think it was part of the apt installation, which is broken at the moment on 22.04 and 24.04 according to another thread here.
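For anyone else tripping over this, activating the venv first is all it takes (a sketch; the venv path here is the one from my tracebacks later in this thread):

source ~/VENV/y8ovv/bin/activate
ovc frozen_inference_graph.pb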

But using it made absolutely no difference in the conversion that I can see.

Using the ovc command:

[ INFO ] Reading the model: ../ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.xml
4 [<Output: names[detection_boxes:0, Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3:0] shape[?,100,4] type: f32>, <Output: names[detection_classes:0, add:0] shape[?,100] type: f32>, <Output: names[detection_scores:0, Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_1/TensorArrayGatherV3:0] shape[?,100] type: f32>, <Output: names[num_detections:0, Postprocessor/ToFloat_3:0] shape[?] type: f32>]

 

Using the model I converted with 2021.3 from what appears to be the same frozen_inference_graph.pb file, I get:

[ INFO ] Reading the model: mobilenet_ssd_v2/MobilenetSSDv2cocoIR10.xml
1 [<Output: names[DetectionOutput] shape[1,1,100,7] type: f32>]

The issue doesn't seem to be in the model conversion; the question really is "How do I reshape the outputs to what the conversion produced when I did it with 2021.3?"  The entire ovc vs. ov.convert_model() and ov.save_model() question was a red herring.

Iffa_Intel
Moderator

The ovc tool exists according to where you installed OpenVINO.

As instructed in the OpenVINO PyPI installation guide, you need to create a virtual environment (to avoid conflicts with your main host/system) and pip install OpenVINO there.

Therefore, ovc exists only in that virtual environment.

 

The ssd_mobilenet_v2 model that you shared is a dynamic-shaped model. Hence, the converted OpenVINO model inherits the dynamic shape, which is why the shape must be provided during inference. If the original model has a static shape, the converted model inherits that shape instead.

 

If you are asking about dynamic shapes in the output, you may refer here.

Please refer to the Configuring the model section too; it explains the model.reshape method.
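A minimal sketch of the reshape approach, assuming your converted model file name and the 300x300 NHWC input this model family expects:

import openvino as ov

core = ov.Core()
model = core.read_model("ssd_mobilenet_v2_coco.xml")
# replace the dynamic dimensions with a static 1x300x300x3 (NHWC) input shape
model.reshape({model.input().get_any_name(): ov.PartialShape([1, 300, 300, 3])})
compiled_model = core.compile_model(model, "CPU")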

 

Cordially,

Iffa

wb666greene
Beginner

Thanks, it is a place to start, but why did 2021.1 (Edit: looking at some backups it seems I used 2021.1, not 2021.3, for the conversion, if it matters) convert the same frozen_inference_graph.pb file to a single output while 2024.3 is keeping the "dynamic output"?  I have created two VENV virtual environments: one for CUDA yolo8 and CPU openvino or Coral TPU MobilenetSSD_v2 inferences, the other for openvino iGPU yolo8 inferences and either TPU or CPU inferences.

 

QA/QC alert: both links you posted go to the exact same page, which was neither helpful nor informative.

 

Iffa_Intel
Moderator

They are on the same page; however, they are in different subsections, explaining different functionality, including that the output dimensions depend on how the dynamic inputs are propagated through the model.

You define the model shape that you want when converting. Set the model shape as static if you don't want it to be dynamic.


Example for Static shape:

import openvino as ov

ov_model = ov.convert_model("MobileNet.pb", input=[2, 300, 300, 3])


Example for Dynamic shape:

import openvino as ov

ov_model = ov.convert_model("MobileNet.pb", input=[-1, -1, 300, 3])


Note: the -1 indicates a dynamic dimension


The input parameter allows overriding the original input shapes if it is supported by the model topology.

Shapes with dynamic dimensions in the original model can be replaced with static shapes for the converted model, and vice versa.


If you are using ovc and didn't provide the --input parameter, it converts according to the default shape of the model.
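For example, to force a static shape from the CLI (a sketch; image_tensor is the usual input name for TF Object Detection API frozen graphs, but please verify it on your model):

ovc frozen_inference_graph.pb --input "image_tensor[1,300,300,3]"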


Cordially,

Iffa


wb666greene
Beginner

The problem is not with the input; it is that the model returns a dictionary with 4 elements when converted with the 2024 ovc or ov.convert_model() method, even when I convert the frozen_inference_graph.pb file that I downloaded in 2021.

I get this error at ppp.output().tensor().set_element_type(ov.Type.f32):

  File "/home/wally/AI_code/AI2/ssdv2.py", line 102, in <module>
    ppp.output().tensor().set_element_type(ov.Type.f32)
RuntimeError: Check 'm_impl->m_outputs.size() == 1' failed at src/core/src/preprocess/pre_post_process.cpp:123:
PrePostProcessor::output() - Model must have exactly one output, got 4
# line 102 is: ppp.output().tensor().set_element_type(ov.Type.f32)

The 2021 conversion produced an IR10 model with a single output. I suspect the issue stems from the introduction of IR11 in 2022 or thereabouts, and the "backwards compatibility" is not there anymore. 

 

If I can get the four-output model to load, compile, and run inference, I can easily modify my code to read the dictionary instead of the single output tensor.  I can get it to load and apparently compile, but inference fails.

 

Iffa_Intel
Moderator

Indeed, comparing OV 2021 to 2024, lots of changes and updates have been applied, as you mentioned.

 

Did you try to apply postprocessing by selecting a specific output with output(index)?

e.g.: ppp.output(0)...
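A minimal sketch of that idea (the model file name is taken from your earlier posts):

import openvino as ov

core = ov.Core()
model = core.read_model("frozen_inference_graph.xml")
ppp = ov.preprocess.PrePostProcessor(model)
# ppp.output() without an index requires exactly one output;
# with four outputs, address each output by index instead
for i in range(len(model.outputs)):
    ppp.output(i).tensor().set_element_type(ov.Type.f32)
model = ppp.build()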

 

May I know your use case? Perhaps one of these OpenVINO Open Model Zoo models would be useful.

 

Cordially,

Iffa

wb666greene
Beginner

My project is here:

https://github.com/wb666greene/AI-Person-Detector-with-YOLO-verification-Version-2/tree/main 

 

Everything works if I use the 2021 converted model.  The problem is having others download and convert the model. The model output dictionary is supposed to have these four items:

 

 

# converted using the frozen_inference_graph.pb file downloaded in 2021 using 2024.3 ovc 
[ INFO ] Reading the model: frozen_inference_graph.xml
4 [<Output: names[detection_boxes:0, Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3:0] shape[?,100,4] type: f32>,
   <Output: names[detection_classes:0, add:0] shape[?,100] type: f32>,
   <Output: names[detection_scores:0, Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_1/TensorArrayGatherV3:0] shape[?,100] type: f32>,
   <Output: names[num_detections:0, Postprocessor/ToFloat_3:0] shape[?] type: f32>]

 

 

I need num_detections, detection_boxes, and detection_scores, so a single output won't do unless it is "flattened" like what apparently happens with the 2021 model optimizer.
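Reading them individually looks straightforward, e.g. (an untested sketch using the tensor names from the dump above):

results = compiled_model.infer_new_request({0: input_tensor})
# tensor names from the output dump above (note the ':0' suffixes)
boxes = results["detection_boxes:0"]          # shape [N, 100, 4]
classes = results["detection_classes:0"]      # shape [N, 100]
scores = results["detection_scores:0"]        # shape [N, 100]
count = int(results["num_detections:0"][0])   # number of valid detections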

This is my model optimizer command from 2021:

 

 

~/model_optimizer$ python3 mo_tf.py --input_model /home/wally/ssdv2/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /home/wally/ssdv2/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/wally/ssdv2/pipeline.config --data_type FP16

 

 

The current download from the tensorflow site doesn't have the json and some other files that the original download did (I just can't find where the original came from), but the frozen_inference_graph.pb files are exactly the same size on disk and have the exact same modification date: Thu 29 Mar 2018 09:48:20 PM CDT.

 

If you can point me to some real documentation for ppp.output(), not just a dump of the OO structure and an enumeration of the methods and parameters lacking any explanation, maybe I can figure it out.  A link to a tutorial using a model with an output "dictionary" would be even better.

 

Using a different model throws away all the experience since 2021 that proves the performance of this model for this task.

 

wb666greene
Beginner

Here is sample code extracted from my openvino CPU inference thread (I use yolo8 with iGPU in a separate thread):

#! /usr/bin/python3
'''
    28JUL2024wbk -- OpenVINO_SSD_Thread.py
    Run MobilenetSSD_v2 inferences on CPU using OpenVINO 2024
    For use with AI2.py
    
    Setup and inference code largely lifted from the openvino python example:
    hello_reshape_ssd.py
    That was installed by the apt install of openvino 2024.2; the apt install is broken,
    so I had to do a pip install to run the code :(
'''

import numpy as np
import cv2
import datetime
import logging as log
import sys
from imutils.video import FPS
import openvino as ov

if True:    # cut and paste from SSD_Thread avoid removing indents

    ###model_path = 'ssd_mobilenet_v2_coco.xml'    # converted with openvino 2024
    ###model_path = '../ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.xml'    # converted with openvino 2024
    model_path = 'frozen_inference_graph.xml'    # converted with openvino 2024 using frozen_inference_graph from 2021 conversion
###    model_path = 'mobilenet_ssd_v2/MobilenetSSDv2cocoIR10.xml'   # my IR10 conversion done with openvino 2021.3
    
    '''
    # simple python code to convert model,  too bad it produces a multi output model.
    import openvino as ov
    ov_model = ov.convert_model('../ssd_mobilenet_v2_coco_2018_03_29/saved_model')
    ov.save_model(ov_model,'ssd_mobilenet_v2_coco.xml')
    '''
    
    device_name = 'CPU'
    
    log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)

    ## basically lifted from hello_reshape_ssd.py sample code installed with apt install of openvino 2024, which is broken
    # --------------------------- Step 1. Initialize OpenVINO Runtime Core ------------------------------------------------
    log.info('Creating OpenVINO Runtime Core')
    core = ov.Core()
    print('[INFO] Using OpenVINO: ' + ov.__version__)
    devices = core.available_devices
    log.info('Available devices:')
    for device in devices:
        deviceName = core.get_property(device, "FULL_DEVICE_NAME")
        print(f"   {device}: {deviceName}")
        
    # --------------------------- Step 2. Read a model --------------------------------------------------------------------
    log.info(f'Reading the model: {model_path}')
    # (.xml and .bin files) or (.onnx file)
    model = core.read_model(model_path)
##    print(len(model.outputs), model.outputs)
##    print('')      

    if len(model.inputs) != 1:
        log.error('Supports only single input topologies.')
        ##return -1
    '''
    if len(model.outputs) != 1:
        log.error('Supports only single output topologies')
        print(len(model.outputs), model.outputs)
        print('')      
        ##return -1
    '''
    
# --------------------------- Step 3. Set up input --------------------------------------------------------------------
    ## create image to set model size
    '''
        This was very confusing, sample code says:
        'Reshaping the model to the height and width of the input image'
        which makes no sence to me.  If I feed in larger images it sort of works
        but boxes are wrong and detections are poor. I know my MobilenetSSD_v2
        model was for images sized 300x300 so I create a dummy image of this size
        and use it to "reshape" the model.
    '''
    imageM = np.zeros(( 300, 300, 3), np.uint8)
    imageM[:,:] = (127,127,127)
    input_tensor = np.expand_dims(imageM, 0)    # Add N dimension
    log.info('Reshaping the model to the height and width of the input image')
    n, h, w, c = input_tensor.shape
    model.reshape({model.input().get_any_name(): ov.PartialShape((n, c, h, w))})
    #print(n, c, w, h)
    
    
# --------------------------- Step 4. Apply preprocessing -------------------------------------------------------------
    ## I've made zero effort to understand this, but it seems to work!
    ppp = ov.preprocess.PrePostProcessor(model)
    # 1) Set input tensor information:
    # - input() provides information about a single model input
    # - precision of tensor is supposed to be 'u8'
    # - layout of data is 'NHWC'
    ppp.input().tensor() \
        .set_element_type(ov.Type.u8) \
        .set_layout(ov.Layout('NHWC'))  # noqa: N400
    # 2) Here we suppose model has 'NCHW' layout for input
    ppp.input().model().set_layout(ov.Layout('NCHW'))
    # 3) Set output tensor information:
    # - precision of tensor is supposed to be 'f32'
###    ppp.output().tensor().set_element_type(ov.Type.f32)
    # 4) Apply preprocessing, modifying the original 'model'
    model = ppp.build()
    
# ---------------------------Step 4. Loading model to the device-------------------------------------------------------
    
    log.info('Loading the model to the plugin')
    compiled_model = core.compile_model(model, device_name)
    
    
###    input_layer_ir = compiled_model.input(0)
###    output_layer_ir = compiled_model.output("boxes")

    image = cv2.imread('TestDetection.jpg')
###    N, C, H, W = input_layer_ir.shape
    resized_image = cv2.resize(image, (w, h))
    input_tensor = np.expand_dims(resized_image, 0)    # Add N dimension
    cv2.imshow('SSD input', resized_image)
    cv2.waitKey(0)
    
    results = compiled_model.infer_new_request({0: input_tensor})
    print(results)

Running it as-is gives this error:

[ INFO ] Creating OpenVINO Runtime Core
[INFO] Using OpenVINO: 2024.3.0-16041-1e3b88e4e3f-releases/2024/3
[ INFO ] Available devices:
   CPU: 12th Gen Intel(R) Core(TM) i9-12900K
   GPU: Intel(R) UHD Graphics 770 [0x4680] (iGPU)
[ INFO ] Reading the model: frozen_inference_graph.xml
[ INFO ] Reshaping the model to the height and width of the input image
Traceback (most recent call last):
  File "/home/wally/AI_code/AI2/ssdv2.py", line 83, in <module>
    model.reshape({model.input().get_any_name(): ov.PartialShape((n, c, h, w))})
RuntimeError: Check 'TRShape::broadcast_merge_into(output_shape, input_shapes[1], autob)' failed at src/core/shape_inference/include/eltwise_shape_inference.hpp:26:
While validating node 'opset1::Multiply Postprocessor/Decode/mul_2 (opset1::Multiply Postprocessor/Decode/div[0]:f32[191700], opset1::Subtract Postprocessor/Decode/get_center_coordinates_and_sizes/sub_1[0]:f32[1917]) -> (f32[?])' with friendly_name 'Postprocessor/Decode/mul_2':
Argument shapes are inconsistent.

This is using the 2024 converted model on line 25 of the code.  If I comment out line 25 and uncomment line 26, it uses the 2021 converted model and works fine.

Putting back the 2024 converted model and commenting out line 83 (the model.reshape call), it gets further but still fails:

[ INFO ] Loading the model to the plugin
Traceback (most recent call last):
  File "/home/wally/AI_code/AI2/ssdv2.py", line 121, in <module>
    results = compiled_model.infer_new_request({0: input_tensor})
  File "/home/wally/VENV/y8ovv/lib/python3.10/site-packages/openvino/runtime/ie_api.py", line 298, in infer_new_request
    return self.create_infer_request().infer(inputs)
  File "/home/wally/VENV/y8ovv/lib/python3.10/site-packages/openvino/runtime/ie_api.py", line 132, in infer
    return OVDict(super().infer(_data_dispatch(
RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:223:
Exception from src/plugins/intel_cpu/src/memory_desc/cpu_memory_desc.h:89:
ParameterMismatch: Can not clone with new dims. Descriptor's shape: {0 - ?, 0 - ?, 3, 0 - ?} is incompatible with provided dimensions: {1, 300, 300, 3}.

 

You can download my 2021 converted *.xml & *.bin files from this link as a *.tar.bz2 archive, ~29 MB:

https://1drv.ms/u/s!AnWizTQQ52Yzg1cAt2vo3tgzVGHn?e=EQgags 

The model I'm trying to convert with 2024.3 is from:

http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz 

 

Here is the test image that I load in the sample code (edit line 114 to load a different image). It is a zoomed-in detection image from the 2021 version of my system that I use to test the Email and/or MMS notifications:

TestDetection.jpg

wb666greene
Beginner

If I comment out all the stuff related to ppp = ov.preprocess.PrePostProcessor(model), then the 2024.3 converted model runs and gets the same result on the skateboarders test image as the 2021 converted model.  The second skateboarder is detected, but not the first (which is already highlighted in green from the original TPU SSDv2 inference; that code breaks out of the box-drawing loop after the first person is detected).  This is the same result as the 2021 conversion on this test image.

 

Problem is, if I try an image with three persons in it, the 2024.3 converted model still only gets a single detection, whereas the 2021 conversion returns all three people in the image.

 

I decode the output and draw the boxes with this added to the previously posted code:

    results = compiled_model.infer_new_request({0: input_tensor})
    #print(results)

    # output order: 0=detection_boxes, 1=detection_classes, 2=detection_scores, 3=num_detections
    # W, H are the width and height of the image being annotated
    num_detected = int(results[3][0]) # number of objects detected, always seems to be 1 with the 2024.3 converted model
    for i in range(num_detected):
        if results[2][0][i] > 0.75 and int(results[1][0][i]) == 1:  # score > 0.75 and class 1 (person)
            startX = int(results[0][0][i][1] * W)   # box corners, normalized [ymin, xmin, ymax, xmax]
            startY = int(results[0][0][i][0] * H)
            endX = int(results[0][0][i][3] * W)
            endY = int(results[0][0][i][2] * H)
            cv2.rectangle(image, (startX, startY), (endX, endY), (0, 200, 200), 2)
            print(startX, startY, endX, endY)
    cv2.imshow('SSD results', image)
    cv2.waitKey(0)
Iffa_Intel
Moderator

Hi,


This issue has been escalated to engineering to be further investigated by the proper experts. This might take a while and we appreciate your patience.


Cordially,

Iffa

wb666greene
Beginner

Thanks.  It is not a show-stopper for me because I break out of the loop to evaluate the detection results when the first person above the confidence threshold is found.  But it sure seems like a weird issue, and it would be a show-stopper for most other uses.  It also doesn't seem to be detecting vehicles or other objects at all, only people.

 

I'm not really a beginner.  I retired in 2014 and started here with the original Movidius NCS before there was openvino, initially used the v1_SDK, and have tried to keep up as best I can to improve my project.  It's been a while since I needed to post here and my old login didn't work, so I created a new one.

 

I'll be happy to supply other code snippets as necessary to help find a solution.

Haarika
Moderator

Hello wb666greene,


We have verified your use case and issue with our engineering team and received confirmation that it is not a problem with ovc or the TensorFlow frontend (TF FE). The converter is now fully aligned with the original framework, meaning the converted model has the same outputs, and the same number of them, as the TF model.


The legacy MO produced custom outputs; we no longer follow this approach, which additionally required custom configuration files for conversion.


We follow the new approach:

1. convert out-of-the-box with no config file;

2. stay fully aligned with the original framework.


We request you to adapt your code to this change.

If it is critical, you can continue to use the openvino-dev mo tool (not the ovc tool), as it still accepts a config file.


Hope this helps to provide more clarification on the topic.


Cordially,

Haarika


wb666greene
Beginner

I did pip install openvino-dev tensorflow in the VENV, changed to the downloaded model directory ssd_mobilenet_v2_coco_2018_03_29, and ran mo with this command:

 

 

mo --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config

# this is the command I ran to convert with 2021.3:
python3 mo_tf.py --input_model /home/ai/ssdv2/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /home/ai/ssdv2/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/ai/ssdv2/pipeline.config --data_type FP16

# note --data_type is not valid with mo 2024.3
# the downloaded model doesn't have the ssd_v2_support.json file now
# but the one I downloaded in 2021 did.

 

 

The problem is that when I try to load the model into the hello_reshape_ssd.py sample code, it errors with:

 

 

python hello_reshape_ssd.py
[ INFO ] Creating OpenVINO Runtime Core
CPU: 12th Gen Intel(R) Core(TM) i9-12900K
GPU: Intel(R) UHD Graphics 770 [0x4680] (iGPU)
[ INFO ] Reading the model: frozen_inference_graph.xml
[ ERROR ] Sample supports only single output topologies

 

 

Note that the ssd_v2_support.json file is not part of the current tensorflow mobilenetSSD_v2 download; it came from wherever I got the model in 2021.  The frozen_inference_graph.pb from the 2021 download and the current one are identical, but other things in the download are not.

The problem is not my code; I rather easily modified it to support the multiple outputs.  The problem is that the converted model only returns a single detection even though there should be several.  It never returns more than one detection, and it only seems to detect "person" objects, while the model converted in 2021 works as expected.  I can still use my converted model from 2021, but I've no way to distribute it.  It is easy to tell someone to do: 

cd $HOME # should be one level above the AI2 directory
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
tar -zxf ssd_mobilenet_v2_coco_2018_03_29.tar.gz

 My code then does the automatic conversion:

 

model = ov.convert_model('../ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb')
ov.save_model(model,'mobilenet_ssd_v2/ssd_mobilenet_v2_coco_2018_03_29.xml')

 

The problem is that after conversion it never detects anything but one person.

 

 

Iffa_Intel
Moderator

Hi,

 

I managed to run your model with the Object Detection Python Demo, and it seems your model can detect one of the objects (a bottle), and at certain times it could detect the other two bottles.

 

[screenshot: Object Detection Python Demo detecting the bottles]

 

 

Perhaps you could take this demo code and modify it to fit your use case.

 

Cordially,

Iffa

 

wb666greene
Beginner

When I try to run this command with "everything" in a MOtest directory in my VENV openvino environment, I'm missing some module:

~/MOtest$ python object_detection_demo.py -m frozen_inference_graph.xml -i images -d GPU
Traceback (most recent call last):
  File "/home/wally/MOtest/object_detection_demo.py", line 29, in <module>
    from model_api.models import DetectionModel, DetectionWithLandmarks, RESIZE_TYPES, OutputTransform
ModuleNotFoundError: No module named 'model_api'

What is the name to pip install for the missing module(s)?

 

Iffa_Intel
Moderator

If you are using the newer version of OpenVINO (2024.3), you should be able to use the Python demos directly.

[screenshot: running the Python demo directly with OpenVINO 2024.3]

 

Otherwise (especially for the OV 2022.1 release), you need to install the Python Model API package before running the demos or model tools: pip install <omz_dir>/demos/common/python

 

For more information and guidance you can refer to this documentation.

 

 

Cordially,

Iffa
