Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Dump the input tensor after PrePostProcessor is applied in C++

Enlin
New Contributor I

How can I dump the input tensor after the PrePostProcessor (ppp) steps have been applied?

ov::preprocess::PrePostProcessor ppp(model);

// User-supplied input tensor: u8 data in NHWC layout
ppp.input("input_1").tensor()
    .set_element_type(ov::element::u8)
    .set_layout("NHWC");

// Preprocessing steps, applied in the order they are added
ppp.input("input_1").preprocess()
    .convert_element_type(ov::element::f32)
    .scale(255)
    .mean(0.5);

// Layout the model itself expects
ppp.input().model()
    .set_layout("NCHW");

model = ppp.build();

ov::CompiledModel compiled_model = core.compile_model(model, device_name);
ov::InferRequest infer_request = compiled_model.create_infer_request();
infer_request.set_input_tensor(input_tensor);
infer_request.infer();
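For reference, input_tensor is built roughly like this (a simplified sketch; the shape {1, 224, 224, 3} and the fill step are placeholders, not my exact code):

// Hypothetical sketch of the u8 NHWC tensor passed to set_input_tensor();
// the shape is illustrative, matching the layout declared in ppp above.
ov::Tensor input_tensor(ov::element::u8, ov::Shape{1, 224, 224, 3});
auto* data = input_tensor.data<uint8_t>();
// ... fill `data` with the image bytes here ...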

Is it possible to dump the tensor before infer_request.infer()?

 

Thanks.

Enlin
Megat_Intel
Moderator

Hi Enlin,

Thank you for reaching out to us.

 

We are checking with the relevant team regarding your query. We will update you once we receive feedback from them. Thank you for your patience.

 

In the meantime, we would like to understand your situation better. Could you provide further details on why you would like to dump the tensor? Is it to free the memory?

Regards,

Megat

Enlin
New Contributor I

Hi Megat,

Thanks for your reply.

I get different results from the PyTorch model and from the C++ model after preprocessing, so I want to dump the input tensor after ppp to see where they diverge.
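For comparison, my understanding is that the ppp steps run in the order they are added, so the chain in my first post should compute, per element, something like this (an illustration of the math only, not code I actually run):

#include <cstdint>

// Illustration: convert_element_type(f32) -> scale(255) -> mean(0.5),
// assuming the steps are applied in the order they were added.
float expected_preprocess(std::uint8_t raw) {
    float x = static_cast<float>(raw);  // convert_element_type(ov::element::f32)
    x = x / 255.0f;                     // scale(255) divides by 255
    x = x - 0.5f;                       // mean(0.5) subtracts 0.5
    return x;
}

The first thing I want to check is whether the PyTorch side does the same operations in the same order.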

 

Regards,

Enlin

Megat_Intel
Moderator

Hi Enlin,

We apologize for the delay.

 

For your information, the OpenVINO C++ API uses ov::Tensor for inference inputs and outputs. You can set input/output tensors on an inference request and then run inference, but freeing a tensor before inference completes is risky: its memory may be accessed during inference rather than copied into an internal or device buffer.

If you just want to dump the tensor's contents, you can do it as below:

auto size = input_tensor.get_byte_size();    // tensor size in bytes
auto* src = input_tensor.data();             // host pointer to the tensor data
std::vector<char> dst(size);                 // destination buffer
std::memcpy(dst.data(), src, size);          // copy the tensor's contents into it
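As a fuller illustration, below is a minimal sketch that copies a tensor's bytes into a buffer and writes them to a binary file so they can be compared offline. The helper name and file path are just examples, not a fixed API:

#include <openvino/openvino.hpp>
#include <cstring>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helper: copy the tensor's raw bytes and dump them to disk.
// ov::Tensor is a cheap reference-counted handle, so passing by value is fine.
void dump_tensor(ov::Tensor tensor, const std::string& path) {
    const auto size = tensor.get_byte_size();         // size in bytes
    std::vector<char> buffer(size);
    std::memcpy(buffer.data(), tensor.data(), size);  // copy from the host pointer
    std::ofstream out(path, std::ios::binary);
    out.write(buffer.data(), static_cast<std::streamsize>(size));
}

// Usage, before infer_request.infer():
// dump_tensor(infer_request.get_input_tensor(), "input_tensor.bin");

One thing to keep in mind: ppp.build() folds the preprocessing steps into the model itself, so the input tensor you set (and would dump here) still holds the original u8 NHWC data; the f32 conversion, scale, and mean are performed inside the compiled model during inference.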

 

Hope this helps.

Regards,

Megat

Megat_Intel
Moderator

Hi Enlin,

Thank you for your question. This thread will no longer be monitored since we have provided a suggestion. If you need any additional information from Intel, please submit a new question.

Regards,

Megat