I'd like to extract a feature map from a Convolution layer by inputting larg
Hi hep77to,
Thanks for reaching out.
OpenVINO™ lets you change a model's input shape at runtime. You can set a new input shape with the reshape method; refer to Changing Input Shapes for details. The reshape method is useful when you want to feed the model an input whose size differs from the model's original input shape.
Apart from that, the Hello Reshape SSD C++ sample demonstrates synchronous inference of object detection models using the input reshape feature. Below is the reshape step used in the sample:
// Step 5. Reshape model to image size and batch size
// assume model layout NCHW
const ov::Layout model_layout{"NCHW"};
ov::Shape tensor_shape = model->input().get_shape();
size_t batch_size = 1;
tensor_shape[ov::layout::batch_idx(model_layout)] = batch_size;
tensor_shape[ov::layout::channels_idx(model_layout)] = image_channels;
tensor_shape[ov::layout::height_idx(model_layout)] = image_height;
tensor_shape[ov::layout::width_idx(model_layout)] = image_width;
std::cout << "Reshape network to the image size = [" << image_height << "x" << image_width << "] " << std::endl;
model->reshape({{model->input().get_any_name(), tensor_shape}});
printInputAndOutputsInfo(*model);
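After reshaping, the model can be compiled and used for synchronous inference. Here is a minimal sketch of that flow, assuming a "CPU" device and an already-prepared ov::Tensor named input_tensor that matches the new input shape (both are example assumptions, not part of the sample above):
ov::Core core;
// Compile the reshaped model for a target device ("CPU" used here as an example)
ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
// Create an inference request and bind the input tensor (assumed to match the new shape)
ov::InferRequest infer_request = compiled_model.create_infer_request();
infer_request.set_input_tensor(input_tensor);
// Run synchronous inference and read back the result
infer_request.infer();
ov::Tensor output = infer_request.get_output_tensor();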
Hope this helps.
Regards,
Aznie
Hi hep77to,
This thread will no longer be monitored since we have provided the information. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie