Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

How to run Intel models via Caffe?


In the Intel models' doc pages (e.g. /deployment_tools/intel_models/face-detection-adas-0001/description/face-detection-adas-0001.html), there are FPS performance comparisons between Caffe CPU, IR CPU, IR GPU, etc. I like that; it gives a sense of Caffe vs. IR. I'd like to reproduce those numbers myself under different configurations, but it looks like the Intel models (unlike the Public models) don't come with weights (.caffemodel), just the IR xml+bin and a .prototxt. I'm wondering how the "Caffe CPU" run was done. Is there an argument in the Inference Engine API to disable OpenVINO optimizations or something? Is there a code file in deployment_tools/ for doing these perf comparisons?

TL;DR: how can I run Intel (not Public) models in Caffe mode (not IR mode), since they don't come with Caffe weights (.caffemodel)?


Dear Tyler,

We do not provide the original Caffe weights for the Intel models. The "Caffe CPU" timing was obtained by running inference in the Caffe framework itself. The Model Optimizer does have flags to turn off certain optimizations, but it still converts the model to our Intermediate Representation, so I don't think that is what you want.

To do your own benchmarking, I invite you to take any public Caffe model, run inference through Caffe, then convert it to OpenVINO IR and run inference again.
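For the timing part of such a comparison, a simple FPS-measurement helper works for both backends. This is a minimal sketch, not Intel's actual benchmarking code; `infer_fn` is a hypothetical callable you would wrap around either Caffe's `net.forward()` or OpenVINO's inference call, and the dummy workload below just stands in for a real network:

```python
import time

def measure_fps(infer_fn, n_iters=100, warmup=10):
    """Return inferences per second for a zero-argument inference callable."""
    # Warm-up runs exclude one-time setup cost (lazy allocation, caching)
    for _ in range(warmup):
        infer_fn()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer_fn()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Dummy workload standing in for a real forward pass
def dummy_infer():
    sum(i * i for i in range(10000))

fps = measure_fps(dummy_infer, n_iters=50)
print(f"{fps:.1f} FPS")
```

Running the same helper once with a Caffe-backed `infer_fn` and once with an OpenVINO-backed one gives a like-for-like comparison on identical inputs.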