Intel® DevCloud
Help for those needing help starting or connecting to the Intel® DevCloud

Edge AI Certification--NOOB Question

Benjamin_Thompson

So, I'm taking the tests for each Edge AI Certification section and I've hit one that I can't decipher. It's in the test for lesson 5: Model Optimizer. 

 

Here are screenshots of the question and the response I get regardless of which items I select for an answer. 

 

[Screenshots attached: Screen Shot 2022-01-03 at 5.12.41 PM.png, Screen Shot 2022-01-03 at 5.13.00 PM.png]

 

"Remove unused layers" on its own doesn't work.

Permutations that include it don't work either.

"FP16" plus "GPU" doesn't work, even though the incorrect-answer feedback sort of suggests it: if the ONNX runtime runs on the same hardware, it therefore supports FP16 and can infer on GPUs, right?

So--what's the right answer?

Thanks!

3 Replies
Markus_B_Intel
Employee

It's a combination of three options: "remove unused layers", "FP16 version", and "non-default input and output".
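For reference, those three choices map roughly onto Model Optimizer flags like the sketch below. The model file and node names are placeholders, and the exact FP16 flag varies between OpenVINO releases (older releases use `--data_type FP16`, newer ones `--compress_to_fp16`):

```shell
# Convert an ONNX model to IR, producing FP16 weights and cutting the
# graph at non-default input/output nodes; layers outside that subgraph
# (the "unused layers") are dropped from the resulting IR.
mo --input_model model.onnx \
   --data_type FP16 \
   --input input_node \
   --output output_node
```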

Benjamin_Thompson

Thanks! But isn't conversion needed to run on a GPU (which requires FP16)?

Markus_B_Intel
Employee

No, converting an ONNX model to IR format is not required to perform inference on the GPU - the ONNX runtime supports the same hardware as OpenVINO; the ONNX model can directly be used without a prior conversion to IR format.
