Hi,
I am trying to run inference for a custom-trained YOLO-v2 model using OpenVINO. The model is successfully converted to IR format using the Model Optimizer, but the result from FP32 inference is incorrect. I am debugging the issue by trying to inspect the output of each layer with the following code snippet.
std::string out_name = "input";
Blob::Ptr blob = infer_request.GetBlob(out_name);
The above code works for the input and output layers of the network. However, I'm getting a "Failed to find input or output with name : " error for the other layers in the network [e.g. "Failed to find input or output with name : '0-convolutional'" for the first convolution layer].
How do I get the correct blob names for the rest of the layers? I appreciate your help here.
Hi,
in order to inspect the values of an intermediate layer, you need to add it as an output of your model before loading the network onto the device. Then your code snippet would also work for that layer.
To do so, here is a code snippet:
CNNNetwork network = networkReader.getNetwork();
network.addOutput("layer_name");  // use the layer's name as it appears in the IR .xml
Best,
Severine
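
Putting the pieces together, here is a minimal sketch of the full flow, assuming the pre-2020 Inference Engine C++ API (CNNNetReader / Core / InferRequest); the file paths and the "0-convolutional" layer name are taken from the question and are illustrative:

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    // Read the IR produced by the Model Optimizer (paths are illustrative)
    CNNNetReader networkReader;
    networkReader.ReadNetwork("yolo_v2.xml");
    networkReader.ReadWeights("yolo_v2.bin");
    CNNNetwork network = networkReader.getNetwork();

    // Register the intermediate layer as an additional output.
    // This must happen BEFORE the network is loaded onto the device.
    network.addOutput("0-convolutional");

    Core ie;
    ExecutableNetwork execNetwork = ie.LoadNetwork(network, "CPU");
    InferRequest infer_request = execNetwork.CreateInferRequest();

    // ... fill the input blob, then run inference ...
    infer_request.Infer();

    // The intermediate layer can now be fetched like any other output
    Blob::Ptr blob = infer_request.GetBlob("0-convolutional");
    return 0;
}
```

Note that addOutput() must be called on the CNNNetwork before LoadNetwork(); adding outputs to an already-loaded ExecutableNetwork has no effect.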
