Intel® DevCloud

Edge AI Certification--NOOB Question

Benjamin_Thompson

So, I'm taking the tests for each Edge AI Certification section and I've hit one that I can't decipher. It's in the test for lesson 5: Model Optimizer. 

 

Here are screenshots of the question and the response I get regardless of which items I select for an answer. 

 

[Screenshots of the quiz question and the incorrect-answer response]

 

"Remove unused layers" on its own doesn't work.

Permutations that include it don't work.

"FP16" and "GPU" don't work either, although the incorrect-answer response seems to hint at them (if the ONNX runtime runs on the same hardware, then it supports FP16 and can infer on GPUs, right?).

So--what's the right answer?

Thanks!

3 Replies
Markus_B_Intel
Employee

It's a combination of three options: "remove unused layers", "FP16 version", and "non-default input and output".
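For concreteness, a Model Optimizer invocation combining those three benefits might look roughly like this. The model file and node names below are placeholders, and the exact flags follow the `mo` tool of that OpenVINO generation (unused layers are pruned automatically during conversion, and cutting the graph with `--input`/`--output` selects non-default entry and exit points):

```shell
# Convert an ONNX model to IR, emitting an FP16 version and
# cutting the graph at non-default input/output nodes.
# "model.onnx", "custom_input", and "custom_output" are placeholders.
mo --input_model model.onnx \
   --data_type FP16 \
   --input custom_input \
   --output custom_output
```

Conversion itself drops layers that no longer feed the selected outputs, which is the "remove unused layers" part of the answer.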

Benjamin_Thompson

Thanks! But isn't conversion needed to run on a GPU (which requires FP16)?

Markus_B_Intel
Employee

No, converting an ONNX model to IR format is not required to perform inference on the GPU. The ONNX runtime supports the same hardware as OpenVINO, so the ONNX model can be used directly without a prior conversion to IR format.
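As an illustration of running without conversion, OpenVINO's stock benchmark tool can be pointed at an ONNX file and the GPU device directly (the model path below is a placeholder):

```shell
# Inference straight from the ONNX file on the GPU plugin,
# with no Model Optimizer / IR conversion step beforehand.
# "model.onnx" is a placeholder path.
benchmark_app -m model.onnx -d GPU
```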
