Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

problem with model.reshape

Tetali-875
Beginner
RuntimeError: Exception from src\inference\src\infer_request.cpp:231:
[ GENERAL_ERROR ] Shape inference of Reshape node with name /classificationModel_4/Reshape_1 failed: Exception from src\plugins\intel_cpu\src\shape_inference\custom\reshape.cpp:61:
[cpu]reshape: the shape of input data (1.8.14.9) conflicts with the reshape pattern (1.576.1)

The problem here is that my model takes a dynamic input and is 4-dimensional, but my output is 3-dimensional. I had used the following commands for reshaping:

model = core.read_model(detect_ir_path)
# Set the batch, height and width dimensions as dynamic
model.reshape([-1, 3, -1, -1])

but now it is giving the error above. The layer shapes are:

output_layer_1 [1,?,4]
output_layer_2 [1,196560,1]
input_layer [?,3,?,?]
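
For reference, this is roughly how I apply the reshape and then check the resulting shapes (the model path below is just a placeholder for my detection IR):

import openvino as ov

core = ov.Core()
detect_ir_path = "model.xml"  # placeholder for my detection IR path
model = core.read_model(detect_ir_path)

# Make batch, height and width dynamic; keep the channel dimension at 3
model.reshape([-1, 3, -1, -1])

# Print the resulting input and output shapes
print("input:", model.input().get_partial_shape())
for out in model.outputs:
    print(out.get_any_name(), out.get_partial_shape())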

2 Replies
Megat_Intel
Moderator

Hi Tetali-875,

Thank you for reaching out to us.

 

Please share with us the following details for replication purposes:

 

  • OpenVINO™ version
  • The model you used
  • Full inference code
  • Hardware (CPU/GPU)

 

For your information, I ran the Hello Classification Sample in Python and added the model.reshape() method for the mobilenet-v3-small-1.0-224-tf model. I was able to run the code successfully without any errors. I have shared the code and results here:

[Screenshot attached: reshape.png]

import logging as log
import sys

import cv2
import numpy as np
import openvino as ov


def main():
    log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)

    # Parsing and validation of input arguments
    if len(sys.argv) != 4:
        log.info(f'Usage: {sys.argv[0]} <path_to_model> <path_to_image> <device_name>')
        return 1

    model_path = sys.argv[1]
    image_path = sys.argv[2]
    device_name = sys.argv[3]

# --------------------------- Step 1. Initialize OpenVINO Runtime Core ------------------------------------------------
    log.info('Creating OpenVINO Runtime Core')
    core = ov.Core()

# --------------------------- Step 2. Read a model --------------------------------------------------------------------
    log.info(f'Reading the model: {model_path}')
    # (.xml and .bin files) or (.onnx file)
    model = core.read_model(model_path)
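    # Make batch, height and width dynamic; this TF model uses the NHWC layout,
    # so the channel dimension (last) stays fixed at 3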
    model.reshape([-1, -1, -1, 3])
    if len(model.inputs) != 1:
        log.error('Sample supports only single input topologies')
        return -1

    if len(model.outputs) != 1:
        log.error('Sample supports only single output topologies')
        return -1

# --------------------------- Step 3. Set up input --------------------------------------------------------------------
    # Read input image
    image = cv2.imread(image_path)
    image = cv2.cvtColor(image, code=cv2.COLOR_BGR2RGB)
    image = cv2.resize(src=image, dsize=(224, 224))
    # Add N dimension
    input_tensor = np.expand_dims(image, 0)

# --------------------------- Step 4. Loading model to the device -----------------------------------------------------
    log.info('Loading the model to the plugin')
    compiled_model = core.compile_model(model, device_name)

# --------------------------- Step 5. Create infer request and do inference synchronously -----------------------------
    log.info('Starting inference in synchronous mode')
    results = compiled_model.infer_new_request({0: input_tensor})

# --------------------------- Step 6. Process output ------------------------------------------------------------------
    predictions = next(iter(results.values()))

    # Change a shape of a numpy.ndarray with results to get another one with one dimension
    probs = predictions.reshape(-1)

    # Get an array of 10 class IDs in descending order of probability
    top_10 = np.argsort(probs)[-10:][::-1]

    header = 'class_id probability'

    log.info(f'Image path: {image_path}')
    log.info('Top 10 results: ')
    log.info(header)
    log.info('-' * len(header))

    for class_id in top_10:
        probability_indent = ' ' * (len('class_id') - len(str(class_id)) + 1)
        log.info(f'{class_id}{probability_indent}{probs[class_id]:.7f}')

    log.info('')

# ----------------------------------------------------------------------------------------------------------------------
    log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
    return 0


if __name__ == '__main__':
    sys.exit(main())
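
For reference, the sample takes the model path, image path and device name as arguments and can be run, for example, as follows (the file names here are placeholders):

python hello_classification_reshape.py mobilenet-v3-small-1.0-224-tf.xml banana.jpg CPU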

 

You can also refer to our Hello Reshape SSD Sample for more information on the reshape implementation.
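
As an illustration only, below is a minimal sketch of the reshape pattern used in that sample: the model is reshaped so its input matches the size of the input image before compiling (the file paths and the NCHW layout are assumptions, not your model's actual values):

import cv2
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path to the IR

# Read the image the model should be reshaped to and add the batch dimension
image = cv2.imread("image.jpg")       # placeholder path to the image
n, h, w, c = np.expand_dims(image, 0).shape

# Reshape the model so its input matches the image (NCHW layout assumed)
model.reshape({model.input().get_any_name(): ov.PartialShape([n, c, h, w])})
print(model.input().get_partial_shape())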

 

 

Regards,

Megat

 

Megat_Intel
Moderator

Hi Tetali-875,

Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored. 

 


Regards,

Megat

