Hi,
I tried to use style_transfer_sample, but I ran into some issues.
After downloading the models and running the Model Optimizer on the .caffemodel files, I have IR models that work with classification_sample, for both VGG-16 and VGG-19.
My understanding is that style_transfer_sample should also work with VGG-16 and VGG-19; is that correct?
If so, the sample gives me the following output:
[ INFO ] InferenceEngine:
API version ............ 1.1
Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.1
Build .................. lnx_20180510
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (320, 240) to (224, 224)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] first outputname: prob
[ INFO ] Loading model to the plugin
[ INFO ] Start inference (1 iterations)
Average running time of one iteration: 1250.81 ms
[ INFO ] Output size [N,C,H,W]: 1, 1000, 11683184, 33
[ ERROR ] std::bad_alloc
So the line "Output size [N,C,H,W]: 1, 1000, 11683184, 33" says the channel count is 1000, which must be a wrong value.
Any suggestions?
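To illustrate why that output blob looks wrong: a VGG-16 classifier ends in a 1000-way "prob" blob of shape [1, 1000, 1, 1], while a style-transfer network should produce an image-shaped blob such as [1, 3, H, W]. A small numpy sketch (my own illustration, not code from the sample) of a check that tells the two apart:

```python
import numpy as np

# Illustrative blob shapes (not taken from the sample itself):
# a VGG-16 classifier ends in a 1000-way "prob" blob, while a
# style-transfer network outputs an image-shaped blob.
classification_out = np.zeros((1, 1000, 1, 1), dtype=np.float32)
style_transfer_out = np.zeros((1, 3, 224, 224), dtype=np.float32)

def looks_like_image(blob):
    """Heuristic: an image-shaped [N, C, H, W] blob has 1 or 3
    channels and spatial dimensions larger than 1."""
    n, c, h, w = blob.shape
    return c in (1, 3) and h > 1 and w > 1

print(looks_like_image(classification_out))  # False
print(looks_like_image(style_transfer_out))  # True
```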
Hi Carmine,
VGG models are the correct models to use for style transfer. I reproduced your issue on Ubuntu 16.04.3 and used the following command to make the sample work:
sudo ./style_transfer_sample -i ../../../../demo/car_1.bmp -m ../../../../model_optimizer/vgg16.xml -d CPU
What command line are you using to run the sample?
Kind Regards,
Monique Jones
Hi Monique,
My command line is:
sudo ./style_transfer_sample -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car_1.bmp -m /opt/intel/computer_vision_sdk/deployment_tools/my_ir_model/res_vgg_16.xml -d CPU
[ INFO ] InferenceEngine:
API version ............ 1.1
Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.1
Build .................. lnx_20180510
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (749, 637) to (224, 224)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] first outputname: prob
[ INFO ] Loading model to the plugin
[ INFO ] Start inference (1 iterations)
Average running time of one iteration: 872.015 ms
[ INFO ] Output size [N,C,H,W]: 1, 1000, 0, 33
[ INFO ] Image out1.bmp created!
[ INFO ] Execution successful
The line "[ INFO ] first outputname: prob" was added by me.
My output file out1.bmp is 54 bytes and unreadable.
Thanks for your time.
Greetings,
Carmine Spizuoco
Hi all,
According to https://software.intel.com/en-us/articles/OpenVINO-IE-Samples#neural-transfer, in OpenVINO R3 the sample takes some parameters that are missing here, such as mean_val_r, mean_val_g, mean_val_b, and nt.
The sample does not work for me, and I have several doubts about how it is supposed to work.
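For context, the mean_val_r/mean_val_g/mean_val_b parameters subtract a per-channel mean from the input image before inference. A minimal numpy sketch of that preprocessing, assuming OpenCV-style BGR channel order; the mean values below are the common VGG ImageNet means, used only as an example, not values taken from the sample:

```python
import numpy as np

# Example per-channel means (common VGG ImageNet values; pass the
# means your model was actually trained with).
MEAN_B, MEAN_G, MEAN_R = 103.939, 116.779, 123.68

def subtract_mean(image_bgr):
    """image_bgr: float32 array of shape (H, W, 3) in BGR order.
    Subtracts the per-channel mean, as the mean_val_* flags do."""
    mean = np.array([MEAN_B, MEAN_G, MEAN_R], dtype=np.float32)
    return image_bgr.astype(np.float32) - mean

img = np.full((4, 4, 3), 128.0, dtype=np.float32)  # toy gray image
out = subtract_mean(img)
print(out[0, 0])  # per-channel means removed from every pixel
```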
Looking at style transfer as presented in more than one of the networks shown, there are two input images: the style image, from which the "style" of drawing is learned, and the content image, to which the learned style is applied.
I would be happy if someone could clarify how to do style transfer with OpenVINO.
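For what it's worth, in the original Gatys et al. formulation the "style" is captured by Gram matrices of feature maps computed from the style image, while the content image constrains the activations directly; feed-forward style-transfer networks bake a single fixed style into the trained weights, so only the content image is given at inference time. A toy numpy sketch of a Gram matrix, purely to illustrate what "style" means there:

```python
import numpy as np

def gram_matrix(features):
    """features: (C, H, W) feature maps from one network layer.
    Returns the (C, C) matrix of channel correlations; in the
    Gatys et al. formulation, matching these Gram matrices across
    layers is what transfers the "style" of an image."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # flatten spatial dimensions
    return flat @ flat.T / (c * h * w)  # normalized channel correlations

rng = np.random.default_rng(0)
feats = rng.random((8, 16, 16)).astype(np.float32)
g = gram_matrix(feats)
print(g.shape)  # (8, 8)
```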
Greetings,
Carmine.