Hello,
I am trying to run the OpenVINO inference benchmark sample, and I was wondering which back end (runtime) OpenVINO uses. To be specific, some inference benchmarks use TensorFlow, ONNX Runtime, etc., but while using the OpenVINO inference benchmark I was not able to select any of these back ends.
Therefore, I was wondering whether the OpenVINO inference benchmark uses its own runtime.
Also, my understanding is that the amount of memory can definitely affect throughput. However, even after increasing memory from 8 GB to 16 GB, the overall throughput is similar. Is it because I am running the benchmark incorrectly, or does the amount of memory not really affect the results?
I am new to the OpenVINO tools. Please correct me if I have misunderstood something!
Thank you so much for your time and consideration.
Hi Joon,
The OpenVINO toolkit uses the Inference Engine (its own runtime) to read, load, and run inference on the model (Intermediate Representation) on the target device. You can find additional information in the Inference Engine Documentation. The benchmark app measures the latency of each executed infer request; please see the How It Works section in the app documentation.
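For illustration, here is a minimal sketch of running inference through OpenVINO's own runtime using the Python API (openvino.runtime, available in 2022.x and later). It is not the benchmark app itself, and the model path "model.xml", the CPU device, and the dummy input are assumptions for the example:

```python
# A minimal sketch, not the benchmark app itself. Assumes the OpenVINO Python
# API (openvino.runtime, 2022.x+) is installed and an IR model exists at the
# hypothetical path "model.xml" (with "model.bin" alongside it).
import numpy as np
from openvino.runtime import Core

core = Core()                                # entry point to OpenVINO's own runtime
model = core.read_model("model.xml")         # read the Intermediate Representation
compiled = core.compile_model(model, "CPU")  # load the model onto the target device

# Build dummy input data matching the model's first input (assumes a static shape).
input_port = compiled.input(0)
dummy = np.random.rand(*input_port.shape).astype(np.float32)

# One synchronous infer request; the benchmark app times many such requests.
infer_request = compiled.create_infer_request()
result = infer_request.infer([dummy])
print(next(iter(result.values())).shape)
```

Because both the read/compile step and the infer request are handled by OpenVINO itself, no TensorFlow or ONNX Runtime back end is involved.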
Regards,
Jesus
Hi Joon,
Just following up to see if you have any additional questions after my last response.
Regards,
Jesus
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
