Hi,
I set up the Intel CV SDK Beta R3 environment under Ubuntu 16.04 and tested it according to the official guide at https://software.intel.com/en-us/inference-engine-devguide-converting-your-caffe-model. However, when I test the example commands from it, the FP32 model conversion passes while the FP16 command line is NOT accepted.
For example, I tried something similar to "./ModelOptimizer -p FP32 -w $MODEL_DIR/bvlc_alexnet.caffemodel -d $MODEL_DIR/deploy.prototxt -i -b 1"; the FP32 model is generated successfully and works in the Inference Engine.
But when I tried the FP16 example, similar to "./ModelOptimizer -* -t train_val_alexnet.prototxt -w bvlc_alexnet.caffemodel -nl 4 -nv 4 -p FP16 -d alexnet_deploy.prototxt -i", the console told me the "4" in "-nl 4" is not a valid input argument. If I removed "-nl 4" from the command, it in turn told me that the learning iterations must be specified.
So, how can I convert AlexNet or a similar model to an FP16 model?
Thanks
Charles
Hi Charles,
Thank you for your interest in the product!
The problem is that the command you used for generating the FP16 IR contains extra parameters.
More specifically, you should use the same command as for FP32 IR generation, with only one difference: specify the other precision and the normalization factor:
./ModelOptimizer -p FP16 -w $MODEL_DIR/bvlc_alexnet.caffemodel -d $MODEL_DIR/deploy.prototxt -i -b 1 -f 1
In the command you used, you specified "-nl, -nv, -t, -*", which are intended for collecting statistics about the model; this is not needed at all for generating the FP16 model if you already know the normalization factor. (That functionality is experimental, and I would not recommend using it until you fully understand how it works.)
To make it clearer, for pure generation you need to provide:
- target precision (-p FP16)
- weights and biases (-w SOMETHING.caffemodel)
- deploy-ready prototxt (not the one used for training) - (-d deploy.prototxt)
- batch size - (-b 1)
- normalization factor (-f 1)
- flag to enable IR generation (-i)
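Putting those options together, the FP16 invocation can be assembled as in the sketch below. The paths are placeholders for your own model files, and the script only echoes the command so it can be inspected before running:

```shell
#!/bin/sh
# Placeholder paths -- point MODEL_DIR at your own Caffe model files.
MODEL_DIR=$HOME/models/alexnet

# Assemble the Model Optimizer invocation from the options listed above:
# precision, weights, deploy prototxt, batch size, normalization factor,
# and the flag that enables IR generation.
CMD="./ModelOptimizer -p FP16 -w $MODEL_DIR/bvlc_alexnet.caffemodel -d $MODEL_DIR/deploy.prototxt -b 1 -f 1 -i"
echo "$CMD"
```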
Alexander
Hi, did you get the FP16 model running? Is it enough to convert it with the command above and then just call it? When running the forward pass, is there any difference in the library functions called between FP32 and FP16?
Hou y. wrote:
Hi, did you get the FP16 model running? Is it enough to convert it with the command above and then just call it? When running the forward pass, is there any difference in the library functions called between FP32 and FP16?
No difference. And my command line succeeded like this:
./ModelOptimizer -p FP16 -w $my.caffemodel -d $deploy.prototxt -i -b 1 -f 1 -o $OUTPUT_DIR
So, I just added -o to specify the output directory for the IR target.
刚 隆. wrote:
Quote:
Hou y. wrote:
Hi, did you get the FP16 model running? Is it enough to convert it with the command above and then just call it? When running the forward pass, is there any difference in the library functions called between FP32 and FP16?
No difference. And my command line succeeded like this:
./ModelOptimizer -p FP16 -w $my.caffemodel -d $deploy.prototxt -i -b 1 -f 1 -o $OUTPUT_DIR
So, I just added -o to specify the output directory for the IR target.
Hi, I converted the model to FP16 with the command you gave and tested it with the multi_output_sample from the samples; it reported the following error:
terminate called after throwing an instance of 'std::logic_error'
what(): memory data type alignment do not match
Aborted (core dumped)
The error occurs when executing infer(). Have you run into this before, or could you give me some advice? Thanks!
Do you run it on the CPU or the GPU? The CPU side only has FP32; the GPU can support FP16.
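Since an FP16 IR needs the GPU plugin, the device has to be selected when running a sample. This is only a sketch, assuming the sample follows the common Inference Engine convention of a -d device switch; the model and image paths are placeholders:

```shell
#!/bin/sh
# An FP16 IR needs the clDNN (GPU) plugin; the CPU plugin expects FP32.
DEVICE=GPU
# Echo the sample invocation so it can be inspected; paths are placeholders.
RUN="./multi_output_sample -m alexnet_fp16.xml -i input.png -d $DEVICE"
echo "$RUN"
```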
Yi G. (Intel) wrote:
Do you run it on the CPU or the GPU? The CPU side only has FP32; the GPU can support FP16.
Yes, I run it on the GPU; loading "clDNNPlugin" means creating the engine with the GPU, is that right?
InferenceEngine:
    API version ............ 1.0
    Build .................. 5852
[ INFO ] Parsing input parameters
[ INFO ] No extensions provided
[ INFO ] Loading plugin
    API version ............ 0.1
    Build .................. prod-02709
    Description ....... clDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Start inference (1 iterations)
[ ERROR ] memory data type alignment do not match
Hi Hou y,
We have a new version of the CVSDK here:
https://software.intel.com/en-us/openvino-toolkit
Please try again with this version and let us know if you still have issues.
Regards,
Peter.
SEUNGHYUK P. (Intel) wrote:
Hi Hou y,
We have a new version of the CVSDK here:
https://software.intel.com/en-us/openvino-toolkit
Please try again with this version and let us know if you still have issues.
Regards,
Peter.
Thanks for your reply! Yes, I have tried it; the new version is cool, and FP16 performance is faster with less hardware resource usage.