Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Anyone managed to run YoloR?

goh__richard
Beginner

As stated in the title.

I tried to convert the .pt weights to .xml but could not find a way.

Has anyone managed to convert or run YoloR?

Thank you

Wan_Intel
Moderator

Hi Goh_Richard,

Thanks for reaching out to us.

 

I have successfully converted YoloR into an ONNX file and then into an Intermediate Representation (IR) using OpenVINO™ 2021.4 on Windows 10. You may refer to the steps below:

 

The steps to convert the .pt weights to an ONNX file are available at the following page:

https://github.com/ttanzhiqiang/onnx_tensorrt_project/tree/main/model/yolor

 

The steps to convert the ONNX file to an IR file are available at the following page:

https://github.com/Chen-MingChang/pytorch_YOLO_OpenVINO_demo#convert-onnx-file-to-ir-file
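
For context, a .pt-to-ONNX export script for YoloR essentially loads the PyTorch model and calls torch.onnx.export. Below is a minimal sketch of that idea; the Darknet loader, the checkpoint layout, and the 1280x1280 input size for yolor_p6 are assumptions based on the YoloR repository, not validated code.

import torch
from models.models import Darknet  # model definition from the YoloR repository (assumption)

cfg, weights, img_size = "cfg/yolor_p6.cfg", "yolor_p6.pt", 1280  # assumed paths and input size

model = Darknet(cfg, img_size)
ckpt = torch.load(weights, map_location="cpu")
state = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
if hasattr(state, "state_dict"):  # some checkpoints store the full module rather than a state dict
    state = state.state_dict()
model.load_state_dict(state, strict=False)
model.eval()

# Export with a dummy input; opset 11 is assumed to be sufficient for the YoloR layers.
dummy = torch.zeros(1, 3, img_size, img_size)
torch.onnx.export(model, dummy, "yolor_p6.onnx", opset_version=11, input_names=["images"])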

 

Disclaimer: Even though YoloR can be converted into Intermediate Representation (IR) using Model Optimizer, YoloR has yet to be validated by the OpenVINO developers. Therefore, we cannot guarantee the inference results.

 

However, the OpenVINO™ toolkit provides a set of Intel and public pre-trained models that you can use for learning and demo purposes or for developing deep learning software. The most recent versions are available in the Open Model Zoo repository.

 

The table of Intel’s Pre-Trained Models Device Support is available at the following page:

https://docs.openvinotoolkit.org/2021.4/omz_models_intel_device_support.html

 

The table of Public Pre-Trained Models Device Support is available at the following page:

https://docs.openvinotoolkit.org/2021.4/omz_models_public_device_support.html

 

 

Regards,

Wan

 

goh__richard
Beginner

Thanks

Unfortunately, I was not able to convert the weights:

i) I was not able to find the .py file used in

              convert_to_onnx.py --weights yolor_csp_x_star.pt --cfg cfg/yolor_csp_x.cfg --output yolo_csp_x_star.onnx

ii) after googling and finding a convert_to_onnx.py file elsewhere, running it produced vgg16.onnx as the output

iii) trying to convert vgg16.onnx resulted in a "wrong format" error

 

and on and on

 

Is it possible to assist in converting this weight file? http://anpr.optasia.com.sg/download.cgi?f=yolor_p6.pt

Thanks

 

Wan_Intel
Moderator

Hi Goh_Richard,

Thanks for your information.

 

You may refer to the steps below to convert YoloR to IR:

 

1.   Clone the YoloR repository from GitHub to obtain yolor_p6.cfg under the "cfg" directory.

 

2.   Download this repository from Google Drive to obtain convert_to_onnx.py under the "code" directory.

 

3.   Convert the .pt weights to an ONNX file with the following command:

python convert_to_onnx.py --weights <path_to_pt>\yolor_p6.pt --cfg <path_to_cfg>\yolor_p6.cfg --output yolor_p6.onnx

 

4.   Convert the ONNX file to an IR file with the following command:

python mo_onnx.py --input_model <path_to_onnx>\yolor_p6.onnx -s 255 --reverse_input_channels --output Conv_509,Conv_599,Conv_689,Conv_779

 

Disclaimer: Even though YoloR can be converted into Intermediate Representation (IR) using Model Optimizer, YoloR has yet to be validated by the OpenVINO developers. Therefore, we cannot guarantee the inference results.

 

On another note, I have attached the YoloR ONNX and IR files at this link.
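
If you would like to sanity-check the converted IR before wiring it into a demo, here is a minimal sketch using the OpenVINO™ 2021.4 Inference Engine Python API; the yolor_p6.xml/.bin names simply follow the commands above, and the zero tensor is only meant to confirm that inference runs on CPU.

import numpy as np
from openvino.inference_engine import IECore  # OpenVINO 2021.4 Python API

ie = IECore()
net = ie.read_network(model="yolor_p6.xml", weights="yolor_p6.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Feed a zero tensor shaped like the model input just to confirm inference runs.
input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape
results = exec_net.infer({input_name: np.zeros((n, c, h, w), dtype=np.float32)})

for name, blob in results.items():
    print(name, blob.shape)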

 

 

Regards,

Wan

 

goh__richard
Beginner

Dear Wan,

 

Thank you very much for your quick reply.

I followed the steps and encountered the following output:

convert_to_onnx.py -weights yolor_p6.pt --cfg cfg/yolor_p6.cfg --output yolor_p6.onnx
' @ error/constitute.c/WriteImage/1037.
' @ error/constitute.c/WriteImage/1037.e
' @ error/constitute.c/WriteImage/1037.
' @ error/constitute.c/WriteImage/1037.
from: can't read /var/mail/onnx
from: can't read /var/mail/models.models
./convert_to_onnx.py: line 7: $'\r': command not found
./convert_to_onnx.py: line 8: $'\r': command not found
./convert_to_onnx.py: line 10: syntax error near unexpected token `('
'/convert_to_onnx.py: line 10: ` parser = argparse.ArgumentParser()

 

 

I have confirmed there is sufficient disk space left.

 

Thanks

 

 

Wan_Intel
Moderator

Hi Goh_Richard,


May I know which Operating System, Compute Platform, and PyTorch version you are using on your local machine?
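
If it helps, here is a minimal sketch (using the standard platform module and torch, both assumed to be installed) that prints those details:

import platform
import torch

print("OS:", platform.platform())
print("Python:", platform.python_version())
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())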


For your information, you have to install the required packages by executing pip install -r requirements.txt from the YoloR repository.


I have successfully converted yolor_p6.pt to IR on Ubuntu 18.04 LTS 64-bit and Microsoft Windows 10 64-bit with the latest PyTorch version and the CPU compute platform.


You may install the PyTorch version based on your Operating System and Compute Platform from here.


Other than that, I have converted yolor_p6.pt to an IR file for you.

You may download the ONNX and IR files from this link.



Best regards,

Wan


goh__richard
Beginner

Thank you very much for your assistance.

I tried to run with the downloaded weights but got a segmentation fault.

====

root@stratton:~/omz_demos_build/intel64/Release# multi_channel_object_detection_demo_yolov3 -m yolor_p6.xml -i "rtsp://192.168.1.209:554/user=admin&password=&channel=1&stream=0.sdp?real_stream--rtp-caching=100"
[ INFO ] InferenceEngine: API version ......... 2.1
Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Parsing input parameters
[ INFO ] Detection model: yolor_p6.xml
[ INFO ] Detection threshold: 0.5
[ INFO ] Utilizing device: CPU
[ INFO ] Batch size: 1
[ INFO ] Number of infer requests: 5
[ INFO ] Model path: yolor_p6.xml
Segmentation fault (core dumped)

 

Wan_Intel
Moderator

Hi Goh_Richard,

 

I encountered the same error as you did when running the YoloR IR with the Multi-Channel Object Detection YOLO V3 C++ Demo.

 

For your information, the YoloR IR was able to work with the Benchmark Python Tool, as shown in the attachment below.
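
If you would like to reproduce that check, the Benchmark Python Tool can be run against the IR with a command along the following lines, executed from the deployment_tools\tools\benchmark_tool directory of an OpenVINO™ 2021.4 installation (the model name follows the earlier conversion steps):

python benchmark_app.py -m yolor_p6.xml -d CPU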

 

However, we regret to inform you that YoloR is not yet supported as it is still under exploration by the development team.

 

In the meantime, I recommend checking out the following pre-trained models provided by OpenVINO for learning and demo purposes or for developing deep learning software:

· Intel's Pre-trained Models

· Public Pre-trained Models

 

 

Best regards,

Wan

 

Wan_Intel
Moderator

Hi Goh_Richard,


This thread will no longer be monitored since this issue has been resolved. 

If you need any additional information from Intel, please submit a new question.


Regards,

Wan

