Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

RuntimeError: There should be only one instance of RegistersPool per thread

LiaoXi
Novice

When I use ie.load_network(network=net_2, device_name="CPU") to load my warp model, my program raises an error:

Traceback (most recent call last):
File "F:\python_visual\my_project\project\test1.py", line 185, in <module>
main()
File "F:\python_visual\my_project\project\test1.py", line 114, in main
exec_net_2 = ie2.load_network(network=net_2,device_name="CPU")
File "ie_api.pyx", line 413, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 457, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: There should be only one instance of RegistersPool per thread

But when I change the device_name to GPU, it runs successfully. Can you tell me how to load my model successfully on a CPU? By the way, my CPU is a 10th-generation Intel i7.

11 Replies
LiaoXi
Novice
My execution process is:

ie = IECore()
net_2 = ie.read_network(model=warp_weight_path_xml, weights=warp_weight_path_bin)
exec_net_2 = ie.load_network(network=net_2, device_name="CPU")

 

I am a college student, and this problem has troubled me for a long time. I would appreciate it if you could give me some solutions.

Best wishes. : )

IntelSupport
Community Manager

Hi LiaoXi,

 

Thanks for reaching out.

 

I have tested your model with the OpenVINO Benchmark App and observed the same issue when running the model on CPU, while it succeeds on GPU.

 

From my understanding, Warp is a Python framework for GPU simulation and graphics. This might be the reason for the error. Can you share the source of your model for further investigation?

 

 

Regards,

Aznie

 

LiaoXi
Novice

Hi Aznie,

The model I use is for virtual try-on; here is the address: https://github.com/SenHe/Flow-Style-VTON,

and I use MobileNetV2 to make it smaller.

Thank you for your reply.

IntelSupport
Community Manager

 

Hi LiaoXi,

 

By looking at the SenHe GitHub repository that you shared, I can see that a GPU is used to train the model, and after conversion the model is not able to run inference on CPU. I would advise you to make sure the original model (before converting to IR) is able to run inference on CPU when validating the model.

 

For information on the Warp framework, you may refer to Warp: A High-performance Python Framework for GPU Simulation and Graphics.

 

On another note, the error that you got indicates that only one instance of RegistersPool is allowed per thread; the model is trained and designed to execute on GPU, which supports multi-threading. That is the reason the CPU is not able to perform that inference (the CPU only supports a single layer per thread).

 

Hope this helps.

 

 

Regards,

Aznie


netmaker
Beginner

Dear Aznie,

 

I am working on a project that involves converting a PyTorch model to OpenVINO format. I encountered an issue while trying to convert the model to OpenVINO, despite being able to successfully run the model on the CPU using PyTorch.

 

I have carefully reviewed the code and am still unable to identify the cause of the issue. I have tried changing the ONNX opset version from 9 to 16, but the problem persists. My PyTorch version is 1.8.0 with CUDA 11.0, and I am using the latest version of OpenVINO.

 

Could you please provide any suggestions or insights on how to resolve this issue?

Thank you for your assistance.

 

Best regards,

Wang

IntelSupport
Community Manager

Hi LiaoXi,


This thread will no longer be monitored since we have provided information. If you need any additional information from Intel, please submit a new question.



Regards,

Aznie


MarkStiff
Beginner

Dear Aznie,

I recently had the same problem with OpenVINO during model inference, even though I was able to run inference for the model successfully on the CPU. I'm wondering if there is still a multi-threading problem with the model.

Can you please tell me if this problem has been solved now?
Thanks for your help!

Best regards,

Zhihao Li

netmaker
Beginner

Hello,

I have successfully solved this problem; it is not caused by multi-threaded operation.

I exported an ONNX model from PyTorch and got this error in OpenVINO. After some research, I learned that the main cause of this error is that one of torch's graph constructs or operators is incompatible with OpenVINO.

By looking at the warnings printed during ONNX export, I found that my model contains a grid_sampler operation, which has never been supported in ONNX, and that is what triggers this error. My solution was to replace this operation with the grid_sampler from mmcv, which finally solved the problem. As for how I arrived at this solution: I simply pasted the warning into Baidu.

In summary, you will hit this error when your model contains an operation that ONNX or OpenVINO does not support. If such an operation does exist in your model, pay attention to the warnings and replace the original operation with an equivalent function from a library that can be exported to ONNX.
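For anyone curious why this kind of replacement works: mmcv's bilinear_grid_sample re-expresses grid sampling using only basic indexing and arithmetic, which export cleanly to ONNX. Here is a simplified, single-channel NumPy sketch of the idea (illustration only, not mmcv's actual code; it follows the align_corners=True convention with zero padding):

```python
import numpy as np

def bilinear_grid_sample_np(im, grid):
    """Bilinearly sample im (H, W) at normalized grid coords (N, 2) in [-1, 1].

    Same spirit as torch.nn.functional.grid_sample with mode='bilinear',
    built only from indexing and arithmetic, which export cleanly to ONNX.
    """
    H, W = im.shape
    # Map normalized coords [-1, 1] to pixel coords (align_corners=True).
    x = (grid[:, 0] + 1) * (W - 1) / 2
    y = (grid[:, 1] + 1) * (H - 1) / 2
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    # Bilinear weights for the four neighbouring pixels.
    wa = (x1 - x) * (y1 - y)
    wb = (x1 - x) * (y - y0)
    wc = (x - x0) * (y1 - y)
    wd = (x - x0) * (y - y0)

    def fetch(ix, iy):
        # Zero padding: out-of-range samples contribute 0.
        valid = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H)
        out = np.zeros(len(ix))
        out[valid] = im[iy[valid], ix[valid]]
        return out

    return (wa * fetch(x0, y0) + wb * fetch(x0, y1)
            + wc * fetch(x1, y0) + wd * fetch(x1, y1))

# Sampling the two corners and the centre of a 3x4 image:
im = np.arange(12, dtype=float).reshape(3, 4)
grid = np.array([[-1.0, -1.0], [1.0, 1.0], [0.0, 0.0]])
out = bilinear_grid_sample_np(im, grid)
print(out)  # top-left pixel 0, bottom-right pixel 11, interpolated centre 5.5
```

Because every step is plain gather/multiply/add, the equivalent torch version traces into ONNX ops that OpenVINO understands, which is exactly why swapping it in avoids the unsupported grid_sampler node.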

MarkStiff
Beginner

Hi, I tried to use mmcv.ops.grid_sampler as you did, but it only existed in version 1.x and has been removed in version 2.0. It seems that version 1.x only supports inference on the GPU; is there any way to make it run model inference on the CPU?

Sincerely looking forward to your suggestion!

 

netmaker
Beginner

"""

from mmcv.ops.point_sample import bilinear_grid_sample
import torch.nn.functional as F
# input grid 自己按照自己的任务就可以 和torch中的grid sampler的输入是一致的
img = bilinear_grid_sample(tenInput, grid, align_corners=False)
img_o = F.grid_sample(input=tenInput, grid=g, mode='bilinear', padding_mode='border', align_corners=True)
print(img-img_o)

"""

you can see the example upper.

Hope you succeed this time.

By the way, you got up at 6:00 AM to try this code, which really shocked me!

MarkStiff
Beginner

Thank you very much for your suggestion. I have solved it successfully and was able to compile the deployment model on the CPU; the cause was that I did not use the bilinear_grid_sample function at the beginning.

Best wishes,

Zhihao Li
