Raghavan_S_
Beginner
159 Views

Problem with certain input resolutions in R2

Hi,

We have a convolutional network that takes a variable input size.  The original network is created for a specific input size (e.g. 192x192), and before loading the network, a reshape is performed to match the new input size.  We found that when loading the network, the program crashes for certain input sizes.

I have attached code and model files that replicate this problem.  The model file simple2.bin is created for an input size of 192x192.  If the input size is changed to 512x600, it crashes when executing the "ie.load_network()" command.

Please note that this does not happen for all sizes.  For example, 512x400 works fine.  This was not a problem prior to the 2019 R2 release (2019 R1 worked fine).

Thanks
Raghavan

9 Replies
Raghavan_S_
Beginner

One of the suggestions was to use a size that is a multiple of 16.  I tried 512x640, and this failed as well (both dimensions are multiples of 128, and hence of 16).
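For what it's worth, the multiple-of-16 suggestion can be checked quickly against the sizes reported in this thread (a plain Python sketch; the sizes and outcomes are taken from the posts above, and the alignment rule is only the hypothesis being tested, not a confirmed requirement):

```python
# Check the "multiple of 16" suggestion against the sizes reported above.
# 512x600 crashes, 512x400 works, and 512x640 also crashes -- so simple
# 16-alignment of both dimensions does not explain the failures.
sizes = {
    (512, 600): "crashes",
    (512, 400): "works",
    (512, 640): "crashes",
}

for (h, w), outcome in sizes.items():
    aligned = (h % 16 == 0) and (w % 16 == 0)
    print(f"{h}x{w}: 16-aligned={aligned}, observed={outcome}")
```

The 512x640 case is the telling one: both dimensions are 16-aligned, yet it still fails, which is consistent with the poster's conclusion that alignment alone is not the cause.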

Shubha_R_Intel
Employee

Dear Raghavan S,

The fact that it used to work and suddenly doesn't anymore in 2019 R2 is definitely suspect. Thanks for attaching your *.zip. Give me a chance to debug this, OK? I promise to report back on this forum, and if there is a bug, I will file one on your behalf.

Thanks for using OpenVINO, and thanks for your patience!

Shubha

Shubha_R_Intel
Employee

Dear Raghavan S,

Your code crashed for me as well. But I have to ask: did you regenerate your IR using the R2 release? It's always important to regenerate your IR with each release. You can't assume that an IR generated by an older release's Model Optimizer will work with the newer release's Inference Engine. Bugs are constantly fixed and features are always being refined and improved, so it's best for both the IR and the IE code to come from the identical release.

You said:

This was not a problem prior to 2019 R2 release (2019 R1 worked fine)

So I want to make sure.

If you are sure that you regenerated your IR in 2019 R2, then this is truly a bug. In order to debug this issue further, I will need your original model as well as your Model Optimizer command. Can you kindly attach a *.zip to this ticket containing your model? If you prefer, I can PM you privately and you can send me the model that way.

You can also just wait for R3 which should be released very soon. The problem may be magically fixed in R3. But please make sure, as I said, to regenerate your IR in R3.

Let me know,

Thanks,

Shubha


Raghavan_S_
Beginner

Yes, I did use 2019 R2.  If you look into the xml file that I sent, you can see a field that says "<MO_version value="2019.2.0-436-gf5827d4"/>".  If I recall correctly, the IR version changed from R1 to R2, so an older IR would not work at all.

Kindly try to get this fixed in R3.  We have releases to make to our customers, and we cannot upgrade beyond R1.  I believe there are improvements for multi-socket and Scalable Xeon processors that are not there in R1.

Let me know your private email address so that I can send the original model (my address is raghavan@1llgovision.com).

Raghavan

Raghavan_S_
Beginner

Apologies for the typo: my address is raghavan@allgovision.com

Shubha_R_Intel
Employee

Dear Raghavan S.,

My mistake - sure thing, the IR xml itself does denote the Model Optimizer version number; you are correct about that.

I have sent you a Syncplicity email which will enable you to share your original model with me. Please do so and I will file a bug on the issue. It may be a little late for R3, though; I'm not sure the fix can land in time, since R3 should be released pretty soon. But I will try. Don't be surprised if the problem is suddenly fixed in R3 anyway - the OpenVINO engineering team is always proactively improving the product.

Also please give me your full Model Optimizer command.

Shubha

Raghavan_S_
Beginner

I've uploaded the original models via Syncplicity. The readme file has the Model Optimizer command.  Hope this helps in finding the issue.

Raghavan

Shubha_R_Intel
Employee

Dear Raghavan,

I received your model via Syncplicity. Thanks! I was also able to reproduce your error, except that, strangely, on the R3 build I couldn't get 512x400 to work either, which differs from your observation on R2. In any case, I did file a bug, the main reason being that this use case used to work in R1 and broke starting with R2.

I will update you here on this forum.

Thanks for your patience!

Shubha

Kratos
Employee

Hi Raghavan,

I am looking into this issue now. Can you also file an IPS ticket (your manager, who is in touch with us, might help you with that)? That is the fastest way to reach the right engineer on our side.

I ran your attached network IR on the R3 release (for the HDDL target) and I didn't see any crash. Please let me know how I can help you further.

Client side:

$ python3.6 test_load_crash.py 
1 1 192 192
1 1 512 600
dict_keys(['xmin_hat'])

Device side:

Load graph success, graphId=1 graphName=AGV_Test_Net

Regards,

Subash