Beginner

openvino problem using GPU

I wrote my own inference code in Visual Studio 2019, with batch_size greater than 1, and I want to try different batch sizes on the GPU. My GPU is an Intel HD Graphics 530. My inference code works fine on the GPU when batch_size is less than or equal to 109, but it throws an exception when batch_size is 110.

The exception is as follows:

Line 71 in file ie_exception_conversion.hpp (I can't paste an image, so I am writing out the location that reports the exception)

Exception thrown at 0x00007FFD119DA839 (in inference.exe): Microsoft C++ exception: cldnn::error, at memory location 0x000000C8812FCDF0.
Exception thrown at 0x00007FFD119DA839 (in inference.exe): Microsoft C++ exception: cldnn::error, at memory location 0x000000C8812FD830.
Exception thrown at 0x00007FFD119DA839 (in inference.exe): Microsoft C++ exception: InferenceEngine::details::InferenceEngineException, at memory location 0x000000C8812FF150.
Unhandled exception at 0x00007FFD119DA839 (in inference.exe): Microsoft C++ exception: InferenceEngine::details::InferenceEngineException, at memory location 0x000000C8812FF150.

In addition, using single-step debugging, the line in main.cpp that fails is

InferRequest infer_request = executable_network.CreateInferRequest();

I get executable_network using ExecutableNetwork executable_network = ie.LoadNetwork(network, "GPU");

I don't know how to tackle it. Please help me. Thank you.

7 Replies

Employee


Dearest rongrong, wang,

Hmmm. That is quite an interesting problem. Can you kindly upgrade to OpenVINO 2019 R2 if you haven't already? It is the latest release of OpenVINO and fixes many issues. After you upgrade to R2, please experiment with the benchmark_app; batch_size is one of the knobs you can tweak.

Let me know how it works for you.

Thanks,

Shubha

Beginner


Dear Shubha, I tried the benchmark sample. It runs correctly when I enter 110 images and the batch is the default value of 1. However, it fails to run when I enter 110 images and the batch is 110. I don't know where my settings are wrong.

The error is as follows:

[ INFO ] Resizing network to batch = 110
[ ERROR ] Failed to infer shapes for FullyConnected layer (1463) with error: New shapes [1,225280] make Kernels(), Channels(225280), Output depth(1000), Groups(1) not matching weights size: 225280000 vs 2048000

Thank you!

Beginner


Dear Shubha,

My command line is as follows:

C:\Users\michael\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release>benchmark_app.exe -d GPU -m C:\Users\michael\Documents\Intel\OpenVINO\inference_engine_samples_build\benchmark_app\resnet50-binary-0001.xml -i E:\visualStudio\inference\images -b 110

Beginner


I found the reason for this problem:

memory allocation failed: exceeded global device memory.

However, how do I solve this problem?

Thank you!

Employee


Dear rongrong, wang,

It seems as if you have simply run into a memory limitation on your particular GPU hardware.

Batch size 110 was too big, and the whole model didn't fit into your GPU device memory.

There are two solutions:

  1. Lower the batch so that the model fits into the device memory
  2. Use a lighter model (fewer layers, fewer weights)

Hope it helps!

Shubha

Beginner


Thank you very much! I understand.

Employee


Dear rongrong, wang,

Of course! Happy to help.

Shubha
