I'm a beginner, sorry for my poor English.
I got similar results (the NCS2 was much slower than the CPU), but not with the demo models.
I made a deep learning model that predicts MNIST digits, converted the frozen .pb file to IR format (.bin/.xml), and ran inference with OpenVINO as a test.
Does it depend on the complexity of the model (is MNIST so simple that the NCS2 can't accelerate it)?
Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
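For reference, the .pb-to-IR conversion step described above is typically done with OpenVINO's Model Optimizer (`mo`). A minimal sketch is below; the file name `frozen_mnist.pb` and the MNIST input shape are assumptions for illustration, not the poster's actual files:

```shell
# Convert a frozen TensorFlow graph to OpenVINO IR (.xml/.bin).
# "frozen_mnist.pb" and the input shape are placeholders.
mo --input_model frozen_mnist.pb \
   --input_shape "[1,28,28,1]" \
   --output_dir ir_model
```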
Thank you for posting on the Intel® communities.
We would like to inform you that there is a dedicated forum for these specific issues and products, so we are moving your question to the "Intel® Distribution of OpenVINO™ Toolkit" forum where you can get better support for this matter.
Intel Customer Support Technician
Thanks for reaching out to us.
Measuring inference performance involves many variables and is extremely use-case- and application-dependent. Intel uses four parameters for its measurements, which are key elements to consider for a successful deep learning inference application.
As you can see on the performance benchmark page, in terms of throughput the NCS2 performs better than the Intel Atom® x5-E3940 processor, but lower than Intel® Core™ processors.
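One way to compare the two devices directly on your own model is OpenVINO's `benchmark_app`, run once per target device. The IR path below is a placeholder for your converted model:

```shell
# Measure throughput on the CPU, then on the NCS2 (MYRIAD plugin).
benchmark_app -m ir_model/frozen_mnist.xml -d CPU -niter 1000
benchmark_app -m ir_model/frozen_mnist.xml -d MYRIAD -niter 1000
```

Comparing the reported throughput and latency from both runs gives a fairer picture than wall-clock timing of a single inference, since per-inference overhead (such as transfers to the USB device) dominates for very small models like MNIST.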
This thread will no longer be monitored since we have provided references. If you need any additional information from Intel, please submit a new question.