Can I get a tip on the following? Suppose I use a pretrained model like:
https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-0202
I have some sample video files from a camera system that can be opened with OpenCV (I use Python). How would I test frames per second and inference time, and determine which hardware (CPU, VPU, or FPGA) would best meet the client's needs given their budget?
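A minimal sketch of how per-frame latency and FPS could be measured. The timing harness below is self-contained; the OpenVINO calls shown in comments (`Core`, `read_model`, `compile_model`, the `"CPU"` device string, and the model filename) are assumptions to adapt to your installed release, not a definitive recipe:

```python
import time

def measure(infer, frames):
    """Run `infer` on each frame; return (avg_latency_ms, fps).

    `infer` stands in for a compiled-model call. With OpenVINO it
    might look like (assumed API, adapt to your release):
        from openvino.runtime import Core
        core = Core()
        model = core.read_model("person-detection-0202.xml")
        compiled = core.compile_model(model, "CPU")  # or "GPU", "MYRIAD", ...
        infer = lambda frame: compiled([frame])
    Frames could come from cv2.VideoCapture("sample.mp4").
    """
    latencies = []
    start = time.perf_counter()
    for frame in frames:
        t0 = time.perf_counter()
        infer(frame)                       # one synchronous inference
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start
    avg_ms = 1000.0 * sum(latencies) / len(latencies)
    fps = len(frames) / total
    return avg_ms, fps

if __name__ == "__main__":
    # Dummy "inference" (1 ms sleep) so the harness runs without a model;
    # replace with the real compiled-model call and decoded frames.
    avg_ms, fps = measure(lambda f: time.sleep(0.001), [None] * 50)
    print(f"avg latency: {avg_ms:.1f} ms, throughput: {fps:.1f} FPS")
```

Running the same harness on each candidate device (CPU vs. VPU vs. FPGA plugin) gives directly comparable latency/FPS numbers for the budget discussion; note that preprocessing and video decoding should be timed separately from inference if you want to isolate the model's cost.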
Can you share more details about the use case, please?
Will you receive a live stream from a camera? Will it be compressed content (e.g. an RTSP stream, AVC/H.264-encoded) or raw content? Will it be Full HD resolution, or higher like 4K?
Will you need to process multiple, maybe many, streams concurrently? Will the cameras be connected via Ethernet or MIPI(-CSI)?
Would a system with a GPU (integrated/embedded or discrete) be an option, for hardware-accelerated video decoding on the GPU? (You listed CPU, VPU, and FPGA, but not GPU.)
Decoding on a GPU can have the benefit of zero-copy between the hardware video codec and inference on the same GPU: the decoded video frames are not copied into OpenVINO's inference engine, but just referenced in GPU video memory.
Does your client have requirements for throughput and, especially, for latency?
Will your client's use case be more of an "embedded" environment (Atom/Core SoCs with integrated GPU, a VPU like the NCS2/Myriad X; an INT8-quantized model, the VNNI CPU instruction set) or more like "scaling big" in a data center (Xeon CPUs, additional discrete GPUs; an INT8-/BF16-quantized model, the AMX CPU instruction set)?
Hi bartlino,
Thank you for reaching out to us regarding your inquiry. We will get back to you as soon as possible.
Hi bartlino,
Intel Developer Cloud for the Edge has a BenchmarkApp notebook that you can use to run inference with OpenVINO models. Take a look at the following page: Tutorials for Using Intel® Developer Cloud for the Edge.
You can also use Deep Learning Workbench to tune, visualize, and compare the performance of models on various Intel architectures. DL Workbench can be found in the Additional Developer Solutions section of the Overview of Intel® Developer Cloud for the Edge page.
Hope this answers your question.
Regards,
Jesus