<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic FP16 model Inference on GPU gives all NaN values in output array in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1386858#M27348</link>
    <description>&lt;P&gt;Hi there,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;I'm working on a project that involves a Token2Token Vision Transformer (T2T-ViT) for a classification task. Information on the model is available &lt;A href="https://github.com/yitu-opensource/T2T-ViT" target="_self"&gt;here&lt;/A&gt;. During conversion from PyTorch weights to IR through ONNX, some layers weren't supported with opset version 9, but I managed to export with opset version 12. The INT8 and FP16 models work without any problem on CPU, but FP16 inference on GPU outputs all NaN values. Is it because of a version incompatibility?&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;I'm using the latest version of OpenVINO, 2022.1. Please see the attached screenshot. Any suggestions would be appreciated.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;Thanks and regards&lt;/P&gt;
&lt;P data-unlink="true"&gt;iJema&lt;/P&gt;</description>
    <pubDate>Tue, 24 May 2022 06:22:30 GMT</pubDate>
    <dc:creator>iJema</dc:creator>
    <dc:date>2022-05-24T06:22:30Z</dc:date>
    <item>
      <title>FP16 model Inference on GPU gives all NaN values in output array</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1386858#M27348</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;I'm working on a project that involves a Token2Token Vision Transformer (T2T-ViT) for a classification task. Information on the model is available &lt;A href="https://github.com/yitu-opensource/T2T-ViT" target="_self"&gt;here&lt;/A&gt;. During conversion from PyTorch weights to IR through ONNX, some layers weren't supported with opset version 9, but I managed to export with opset version 12. The INT8 and FP16 models work without any problem on CPU, but FP16 inference on GPU outputs all NaN values. Is it because of a version incompatibility?&lt;/P&gt;
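&lt;P data-unlink="true"&gt;For reference, the export and conversion steps were roughly as follows (a minimal sketch, not my exact script; the import path, checkpoint path, and input shape are placeholders for my setup):&lt;/P&gt;
&lt;PRE&gt;import torch
from models.t2t_vit import t2t_vit_14  # model definition from the T2T-ViT repo (placeholder import path)

# Rebuild the network and load the trained weights (placeholder checkpoint path)
model = t2t_vit_14()
checkpoint = torch.load("t2t_vit_14.pth", map_location="cpu")
model.load_state_dict(checkpoint, strict=False)
model.eval()

# Export to ONNX; opset 9 rejected some layers, opset 12 exported cleanly
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "t2t_vit_14.onnx", opset_version=12)

# Then convert the ONNX model to IR with Model Optimizer (OpenVINO 2022.1):
#   mo --input_model t2t_vit_14.onnx --data_type FP16&lt;/PRE&gt;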
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;I'm using the latest version of OpenVINO, 2022.1. Please see the attached screenshot. Any suggestions would be appreciated.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;Thanks and regards&lt;/P&gt;
&lt;P data-unlink="true"&gt;iJema&lt;/P&gt;</description>
      <pubDate>Tue, 24 May 2022 06:22:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1386858#M27348</guid>
      <dc:creator>iJema</dc:creator>
      <dc:date>2022-05-24T06:22:30Z</dc:date>
    </item>
    <item>
      <title>Re: FP16 model Inference on GPU gives all NaN values in output array</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1387284#M27363</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;Hi iJema,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;Thanks for reaching out.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;Does the NaN output occur only when running on GPU? From the list of &lt;/SPAN&gt;&lt;A href="https://github.com/yitu-opensource/T2T-ViT#2-t2t-vit-models" rel="noopener noreferrer" target="_blank" style="font-size: 12px;"&gt;T2T-ViT Models&lt;/A&gt;&lt;SPAN style="font-size: 12px;"&gt;, which specific model are you using?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;Please share all the necessary files and inputs so we can reproduce this issue on our end. You can put all the files in Google Drive and share the link here or privately to my email:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="mailto:noor.aznie.syaarriehaahx.binti.baharuddin@intel.com" rel="noopener noreferrer" target="_blank" style="font-size: 12px;"&gt;noor.aznie.syaarriehaahx.binti.baharuddin@intel.com&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 12px;"&gt;Aznie&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 25 May 2022 08:26:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1387284#M27363</guid>
      <dc:creator>IntelSupport</dc:creator>
      <dc:date>2022-05-25T08:26:59Z</dc:date>
    </item>
    <item>
      <title>Re: FP16 model Inference on GPU gives all NaN values in output array</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1389151#M27401</link>
      <description>&lt;P&gt;Hi Aznie,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Sorry for the late reply. Yes, the NaN output occurs only when running on GPU. Also, I specifically tested the T2T-ViT-14 model. Please find the code &lt;A href="https://drive.google.com/drive/folders/14laLK4DAREBO8aHO2OyHUw_CJzdhKGlZ?usp=sharing" target="_self"&gt;here&lt;/A&gt;. The test_vmmr_img.py script has been set up for GPU inference. Just run the file, and you should see the model's output in the console. Please let me know if you face any problems running the code.&lt;/P&gt;
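&lt;P&gt;In case it helps, this is roughly how I compared the two devices (a minimal sketch; the IR path is a placeholder for my actual file):&lt;/P&gt;
&lt;PRE&gt;import numpy as np
from openvino.runtime import Core  # OpenVINO 2022.1 Python API

core = Core()
model = core.read_model("t2t_vit_14_fp16.xml")  # placeholder IR path
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run the same input on CPU and GPU and check whether the output is all NaN
for device in ("CPU", "GPU"):
    compiled = core.compile_model(model, device)
    out = compiled([dummy])[compiled.output(0)]
    print(device, "all NaN:", bool(np.isnan(out).all()))&lt;/PRE&gt;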
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks and regards,&lt;/P&gt;
&lt;P&gt;iJema&lt;/P&gt;</description>
      <pubDate>Wed, 01 Jun 2022 06:53:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1389151#M27401</guid>
      <dc:creator>iJema</dc:creator>
      <dc:date>2022-06-01T06:53:10Z</dc:date>
    </item>
    <item>
      <title>Re: FP16 model Inference on GPU gives all NaN values in output array</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1389725#M27417</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi iJema,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I observed the same output on my end and also encountered an IndexError when running the FP16 precision model on GPU.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/30175iC51CE4888F59FDA0/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" alt="fp16_gpu_LI (2).jpg" /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This behavior of T2T-ViT-14 with FP16 precision on GPU is expected for now: the error encountered is due to missing transposes for MatMul. Our development team is working to rectify this, and the fixes will potentially be available in an upcoming release.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;For the 2022.1 release, I would suggest using the INT8 model with GPU instead.&lt;/SPAN&gt;&lt;/P&gt;
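&lt;P&gt;&lt;SPAN&gt;As a rough illustration (a sketch only; the IR path is a placeholder), switching precision only means pointing the runtime at the INT8 IR; the rest of the inference code stays the same:&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;from openvino.runtime import Core

core = Core()
# Load the INT8 IR instead of the FP16 one (placeholder path)
model = core.read_model("t2t_vit_14_int8.xml")
compiled = core.compile_model(model, "GPU")&lt;/PRE&gt;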
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Aznie&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 03 Jun 2022 02:12:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1389725#M27417</guid>
      <dc:creator>IntelSupport</dc:creator>
      <dc:date>2022-06-03T02:12:00Z</dc:date>
    </item>
    <item>
      <title>Re: FP16 model Inference on GPU gives all NaN values in output array</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1391493#M27485</link>
      <description>&lt;P&gt;Hi iJema,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;This thread will no longer be monitored since we have provided a solution.&amp;nbsp;If you need any additional information from Intel, please submit a new question.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Aznie&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 10 Jun 2022 02:51:17 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1391493#M27485</guid>
      <dc:creator>IntelSupport</dc:creator>
      <dc:date>2022-06-10T02:51:17Z</dc:date>
    </item>
    <item>
      <title>Re: FP16 model Inference on GPU gives all NaN values in output array</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1396691#M27706</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/6"&gt;@IntelSupport&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Sorry for late replying and thanks for the suggestion! I'll try the model again with next release of OpenVino.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&lt;/P&gt;
&lt;P&gt;iJema&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jun 2022 07:07:16 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/FP16-model-Inference-on-GPU-gives-all-Nan-values-in-output-array/m-p/1396691#M27706</guid>
      <dc:creator>iJema</dc:creator>
      <dc:date>2022-06-30T07:07:16Z</dc:date>
    </item>
  </channel>
</rss>

