[Version: openvino_2020.1.033 on Windows 10]
Hi,
I have a TensorFlow* model. After converting it with "mo_tf.py", I obtained FP32 and FP16 models. I then tried to run inference with both models on different devices (CPU and GPU). The code is as follows:
try {
    InferenceEngine::CNNNetwork network = ie.ReadNetwork(xml_filename, bin_filename);
    network.setBatchSize(Batch);
    // device is "CPU" or "GPU"
    InferenceEngine::ExecutableNetwork exeNetwork = ie.LoadNetwork(network, device, {});
    InferenceEngine::InferRequest::Ptr infer_request = exeNetwork.CreateInferRequestPtr();
} catch (std::exception &e) {
    printf("%s", e.what());
    return FALSE;
}
This works on the "CPU" device. However, when loading either the FP32 or FP16 model on the "GPU" device, I get the error below:
..\inference-engine\thirdparty\clDNN\src\reshape.cpp at line: 92
Error has occured for: reshape:cropped_pos_reshape
Output layout count(=196608) is not equal to: input layout count(=202800)
Output layout of reshape primitive changes size of input
Does anyone know how to solve this, or have any suggestions?
Thank you for your response,
yenfu
Hello lin, yenfu
If it runs on CPU but not on GPU, there is probably something wrong with the OpenCL environment, or, most likely, the GPU is older or unsupported.
What is the output of clinfo on your system, please?
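For reference, assuming clinfo is installed, the output can be captured and narrowed like this (the grep patterns are just one convenient way to check which platforms and devices are visible; attach the full clinfo.txt to the thread):

```shell
# Dump the full OpenCL report to a file for sharing
clinfo > clinfo.txt

# Quick check: which OpenCL platforms and devices does the system expose?
clinfo | grep -E "Platform Name|Device Name"
```

If the Intel GPU does not appear in this list, the Inference Engine GPU plugin will not be able to use it, regardless of the model.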
Cheers,
nikos
Hi Yenfu,
Greetings to you.
From the clinfo.txt file, it appears that your system has two graphics cards, namely Nvidia GeForce GTX 1080 and Intel® UHD Graphics 630.
In light of this, I would like to suggest the following two steps:
- Please confirm that integrated graphics is enabled in BIOS.
- Please also update your graphics driver, referring to the 'Optional: Additional Installation Steps for Intel® Processor Graphics (GPU)' page.
Additionally, could you please share more information about your model: whether it is an object detection or classification model, the layers used if it is a custom model, the command given to Model Optimizer to convert the trained model to Intermediate Representation (IR), and your environment details (versions of Python, TensorFlow, CMake, etc.)?
If possible, please share the trained model files for us to reproduce your issue (files can be shared via Private Message).
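For reference, a typical Model Optimizer invocation for a frozen TensorFlow model looks roughly like this (the file names, input shape, and output directory below are placeholders, not values taken from this thread; please share your actual command):

```shell
# Convert a frozen TensorFlow graph to IR (FP16 shown); adjust paths and shape to your model
python mo_tf.py \
    --input_model frozen_model.pb \
    --input_shape "[1,300,300,3]" \
    --data_type FP16 \
    --output_dir ./ir_fp16
```

The exact flags matter here because a wrong or missing --input_shape can produce an IR whose reshape layers disagree with the real tensor sizes, which is one common source of the count-mismatch error you are seeing.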
Regards,
Munesh