Intel® DevCloud
Help for those needing help starting or connecting to the Intel® DevCloud

Intel DevCloud simulating testing different hardware

bartlino
New Contributor I

Can I get a tip? If I use a pretrained model like:

https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-0202

and I have some sample video files from a camera system that can be opened with OpenCV (I use Python), how do I test frames per second and inference time, and determine which hardware (CPU, VPU, or FPGA) would best meet the client's needs, given their budget?

1 Solution
Markus_B_Intel
Employee

Can you share more details about the use case, please?

In your use case, will you receive a live stream from a camera? Will it be compressed content (e.g. an RTSP stream, AVC/H.264-encoded) or raw content? Will it be Full HD resolution, or higher like 4K?

Will you need to process multiple, maybe many, streams concurrently? Will the cameras be connected via Ethernet or MIPI(-CSI)?

Would a system with a GPU (integrated/embedded or discrete) be an option, for hardware-accelerated video decoding? You haven't listed GPU alongside CPU, VPU, and FPGA.

(Using a GPU for decoding has the added benefit of zero-copy between the hardware video codec and inference on the same GPU: the decoded video frames aren't copied into OpenVINO's inference engine, just referenced in GPU video memory.)

Do you have requirements for throughput and, especially, latency from your client?

Will your client's use case be more of an "embedded" environment (Atom/Core SoCs with integrated GPU, a VPU like the NCS2 Myriad X; INT8-quantized model, VNNI CPU instruction set) or more like "scaling big" within a data center (Xeon CPUs, additional discrete GPUs; INT8-/BF16-quantized model, AMX CPU instruction set)?
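
Once you have measured average inference latency per candidate device (whatever tool you use to measure it), checking which devices keep up with the client's frame-rate budget is simple arithmetic. A minimal sketch; the latencies below are made-up placeholders, not real measurements:

```python
def meets_budget(latency_ms: float, target_fps: float) -> bool:
    """A single inference must finish within one frame period to keep up with a stream."""
    return latency_ms <= 1000.0 / target_fps


# Placeholder latencies in ms/frame -- substitute your own measured numbers
measured = {"CPU": 38.0, "MYRIAD": 65.0, "GPU": 12.0}
for device, ms in measured.items():
    verdict = "keeps up" if meets_budget(ms, 25.0) else "falls behind"
    print(f"{device}: ~{1000.0 / ms:.0f} FPS, {verdict} at 25 FPS")
```

With the throughput, latency, and budget constraints answered, this kind of comparison is what narrows the hardware choice down.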

3 Replies
Hari_B_Intel
Moderator

Hi bartlino,


Thank you for reaching out to us regarding your inquiry. We will get back to you as soon as possible.



JesusE_Intel
Moderator

Hi bartlino,


Intel Developer Cloud for the Edge has a BenchmarkApp notebook that you can use to run inference with OpenVINO models. Take a look at the following page: Tutorials for Using Intel® Developer Cloud for the Edge.


You can also use Deep Learning Workbench to tune, visualize, and compare the performance of models on various Intel architectures. DL Workbench can be found in the Additional Developer Solutions section on the Overview of Intel® Developer Cloud for the Edge page.


Hope this answers your question.


Regards,

Jesus

