How can I dump the input tensor after the PrePostProcessor (ppp) steps are applied?
ov::preprocess::PrePostProcessor ppp(model);
ppp.input("input_1").tensor()
    .set_element_type(ov::element::u8)
    .set_layout("NHWC");
ppp.input("input_1").preprocess()
    .convert_element_type(ov::element::f32)
    .scale(255)
    .mean(0.5);
ppp.input().model()
    .set_layout("NCHW");
model = ppp.build();
ov::CompiledModel compiled_model = core.compile_model(model, device_name);
ov::InferRequest infer_request = compiled_model.create_infer_request();
infer_request.set_input_tensor(input_tensor);
infer_request.infer();
Is it possible to dump the tensor before infer_request.infer()?
Thanks.
Enlin
Hi Enlin,
Thank you for reaching out to us.
We are checking with the relevant team regarding your query. We will update you once we receive feedback from them. Thank you for your patience.
In the meantime, we would like to better understand your situation. Could you provide further details on why you would like to dump the tensor? Is it to free the memory?
Regards,
Megat
Hi Megat,
Thanks for your reply.
I am getting different results from the PyTorch model and the C++ (OpenVINO) model after preprocessing, so I want to dump the input tensor after the ppp steps to see what is wrong.
Regards,
Enlin
Hi Enlin,
We apologize for the delay.
For your information, the OpenVINO C++ API uses ov::Tensor for inference input and output. You can set input/output tensors on an inference request to run inference, but it is risky to free a tensor before inference is done, because its memory may be accessed during inference rather than copied into an internal or device buffer.
If you just want to dump the tensor's contents, you can do it as below (std::memcpy needs <cstring>; dst here is simply a destination buffer you allocate yourself):
auto size = input_tensor.get_byte_size();   // tensor size in bytes
auto* src = input_tensor.data();            // host pointer to the tensor data
std::vector<uint8_t> dst(size);             // destination buffer
std::memcpy(dst.data(), src, size);         // copy the tensor contents out
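If the goal is to compare against the PyTorch pipeline, a minimal sketch of the same idea is to write the tensor's raw bytes to a binary file right before calling infer(). The file name "input_dump.bin" and the use of std::ofstream are illustrative choices, not part of the OpenVINO API:
#include <fstream>   // for std::ofstream

auto size = input_tensor.get_byte_size();                          // tensor size in bytes
const auto* src = static_cast<const char*>(input_tensor.data());   // host pointer to the tensor data
std::ofstream dump("input_dump.bin", std::ios::binary);            // illustrative output file
dump.write(src, static_cast<std::streamsize>(size));               // dump raw bytes for offline comparison
infer_request.infer();                                             // then run inference as usual
The resulting file can then be loaded on the Python side (for example with numpy.fromfile) and compared element by element against the PyTorch input.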
Hope this helps.
Regards,
Megat
Hi Enlin,
Thank you for your question. This thread will no longer be monitored since we have provided a suggestion. If you need any additional information from Intel, please submit a new question.
Regards,
Megat