<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Is there any specific command for Dual Socket Xeon to run any Openvino demo ? in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1442740#M28848</link>
    <description>&lt;P&gt;&lt;SPAN&gt;Hi Aqil-ITXOTIC,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thank you for reaching out to us.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;On my end, I have validated the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/omz_demos_object_detection_demo_python.html" target="_blank" rel="noopener noreferrer"&gt;Object Detection Python Demo&lt;/A&gt;&lt;SPAN&gt; with the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/omz_models_model_vehicle_detection_0200.html" target="_blank" rel="noopener noreferrer"&gt;vehicle-detection-0200&lt;/A&gt;&lt;SPAN&gt; model using the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://github.com/intel-iot-devkit/sample-videos/raw/master/car-detection.mp4" target="_blank" rel="noopener noreferrer"&gt;car-detection.mp4&lt;/A&gt;&lt;SPAN&gt; video from the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://github.com/intel-iot-devkit/sample-videos" target="_blank" rel="noopener noreferrer"&gt;sample videos&lt;/A&gt;&lt;SPAN&gt; page. I ran the demo on both Ubuntu 20.04 and Ubuntu 22.10 using an Intel® Core™ i7-11700K.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I also configured the number of streams &lt;/SPAN&gt;&lt;EM style="font-size: 16px;"&gt;"-nstreams"&lt;/EM&gt;&lt;SPAN&gt;: with auto it was automatically set to 4, and I also set it manually to 10. My results are shared below:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ubuntu 20:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="u20_car-detect.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36660i1819A48E9BBE09DB/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="u20_car-detect.png" alt="u20_car-detect.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ubuntu 20 &lt;/SPAN&gt;&lt;EM style="font-size: 16px;"&gt;"-nstreams 10"&lt;/EM&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="u20_10stream.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36661i252F0E5A75518595/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="u20_10stream.png" alt="u20_10stream.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ubuntu 22:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="u22_car-detect.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36662i3834CB43F7B4D2DA/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="u22_car-detect.png" alt="u22_car-detect.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ubuntu 22 &lt;/SPAN&gt;&lt;EM style="font-size: 16px;"&gt;"-nstreams 10"&lt;/EM&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="u22_car-detect_nstream10.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36663i5C8D6D83069F3510/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="u22_car-detect_nstream10.png" alt="u22_car-detect_nstream10.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;For your information, &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/omz_models_model_vehicle_detection_0200.html" target="_blank" rel="noopener noreferrer"&gt;vehicle-detection-0200&lt;/A&gt;&lt;SPAN&gt; model is based on the MobileNetV2 backbone. From our &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/openvino_docs_performance_benchmarks_openvino.html#benchmark-performance-results" target="_blank" rel="noopener noreferrer"&gt;Benchmark Results&lt;/A&gt;&lt;SPAN&gt;, the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/openvino_docs_performance_benchmarks_openvino.html#ssd-mobilenet-v2-coco-tf-300x300" target="_blank" rel="noopener noreferrer"&gt;ssd_mobilenet_v2_coco_tf&lt;/A&gt;&lt;SPAN&gt; model was validated on Intel® Xeon® Gold 5218T CPU and got 412 FPS for FP32 precision. Do note that the CPU configuration uses Ubuntu 20.04.3 LTS and Kernel 5.4.0-42-generic. You can find more information on the configuration details from &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/openvino_docs_performance_benchmarks_openvino.html#benchmark-setup-information" target="_blank" rel="noopener noreferrer"&gt;Benchmark Setup Information&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;On another note, regarding the FPS performance issue on your CPU, I will contact the Engineering team for further information and will update you once I have obtained feedback from them.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Megat.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 30 Dec 2022 10:39:18 GMT</pubDate>
    <dc:creator>Megat_Intel</dc:creator>
    <dc:date>2022-12-30T10:39:18Z</dc:date>
    <item>
      <title>Is there any specific command for Dual Socket Xeon to run any Openvino demo ?</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1442435#M28845</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;I am trying to run the OpenVINO Object Detection Python demo on my dual-socket Intel Xeon Gold 6230 server. The demo runs perfectly, but the result I get is really bad. I have the impression that the application runs solely on a single core of one Xeon socket.&lt;/P&gt;
&lt;P&gt;Here is the command I enter:&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;python3 /home/itxotic/Openvino/open_model_zoo-master/demos/object_detection_demo/python/object_detection_demo.py -i demo_video/car-detection.mp4 -m /home/itxotic/Openvino/model/intel/vehicle-detection-0200/FP16/vehicle-detection-0200.xml -at ssd --no_show&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Here is the result I get after entering the above command:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2022-12-29 at 3.10.46 PM.png" style="width: 730px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36618i51D7886828806522/image-size/large/is-moderation-mode/true?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="Screenshot 2022-12-29 at 3.10.46 PM.png" alt="Screenshot 2022-12-29 at 3.10.46 PM.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;My server is running Ubuntu 22.10 Kernel 5.19.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;So, my question: is there any specific command I need to specify so that OpenVINO will take advantage of both CPU sockets? I am expecting that my server would be able to get around 1000+ FPS for 10 streams. I appreciate any help you can offer or pointers you can give. Thanks.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;P/S: I did try the "-d CPU" option; same result.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 29 Dec 2022 07:18:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1442435#M28845</guid>
      <dc:creator>Aqil-ITXOTIC</dc:creator>
      <dc:date>2022-12-29T07:18:18Z</dc:date>
    </item>
    <item>
      <title>Re: Is there any specific command for Dual Socket Xeon to run any Openvino demo ?</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1442740#M28848</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi Aqil-ITXOTIC,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thank you for reaching out to us.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;On my end, I have validated the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/omz_demos_object_detection_demo_python.html" target="_blank" rel="noopener noreferrer"&gt;Object Detection Python Demo&lt;/A&gt;&lt;SPAN&gt; with the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/omz_models_model_vehicle_detection_0200.html" target="_blank" rel="noopener noreferrer"&gt;vehicle-detection-0200&lt;/A&gt;&lt;SPAN&gt; model using the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://github.com/intel-iot-devkit/sample-videos/raw/master/car-detection.mp4" target="_blank" rel="noopener noreferrer"&gt;car-detection.mp4&lt;/A&gt;&lt;SPAN&gt; video from the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://github.com/intel-iot-devkit/sample-videos" target="_blank" rel="noopener noreferrer"&gt;sample videos&lt;/A&gt;&lt;SPAN&gt; page. I ran the demo on both Ubuntu 20.04 and Ubuntu 22.10 using an Intel® Core™ i7-11700K.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I also configured the number of streams &lt;/SPAN&gt;&lt;EM style="font-size: 16px;"&gt;"-nstreams"&lt;/EM&gt;&lt;SPAN&gt;: with auto it was automatically set to 4, and I also set it manually to 10. My results are shared below:&lt;/SPAN&gt;&lt;/P&gt;
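For reference, the runs above can be sketched as commands like the following (a sketch only; the demo, model, and video paths are assumptions based on the layout in the original post and will differ on your machine):

```shell
# Sketch of the -nstreams runs described above (paths are assumptions).
DEMO=open_model_zoo/demos/object_detection_demo/python/object_detection_demo.py
MODEL=intel/vehicle-detection-0200/FP16/vehicle-detection-0200.xml

# Default: let the plugin pick the number of streams automatically
# (resolved to 4 on this machine)
python3 "$DEMO" -i car-detection.mp4 -m "$MODEL" -at ssd --no_show

# Manually request 10 inference streams
python3 "$DEMO" -i car-detection.mp4 -m "$MODEL" -at ssd --no_show -nstreams 10
```
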
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ubuntu 20:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="u20_car-detect.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36660i1819A48E9BBE09DB/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="u20_car-detect.png" alt="u20_car-detect.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ubuntu 20 &lt;/SPAN&gt;&lt;EM style="font-size: 16px;"&gt;"-nstreams 10"&lt;/EM&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="u20_10stream.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36661i252F0E5A75518595/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="u20_10stream.png" alt="u20_10stream.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ubuntu 22:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="u22_car-detect.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36662i3834CB43F7B4D2DA/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="u22_car-detect.png" alt="u22_car-detect.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ubuntu 22 &lt;/SPAN&gt;&lt;EM style="font-size: 16px;"&gt;"-nstreams 10"&lt;/EM&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="u22_car-detect_nstream10.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36663i5C8D6D83069F3510/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="u22_car-detect_nstream10.png" alt="u22_car-detect_nstream10.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;For your information, &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/omz_models_model_vehicle_detection_0200.html" target="_blank" rel="noopener noreferrer"&gt;vehicle-detection-0200&lt;/A&gt;&lt;SPAN&gt; model is based on the MobileNetV2 backbone. From our &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/openvino_docs_performance_benchmarks_openvino.html#benchmark-performance-results" target="_blank" rel="noopener noreferrer"&gt;Benchmark Results&lt;/A&gt;&lt;SPAN&gt;, the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/openvino_docs_performance_benchmarks_openvino.html#ssd-mobilenet-v2-coco-tf-300x300" target="_blank" rel="noopener noreferrer"&gt;ssd_mobilenet_v2_coco_tf&lt;/A&gt;&lt;SPAN&gt; model was validated on Intel® Xeon® Gold 5218T CPU and got 412 FPS for FP32 precision. Do note that the CPU configuration uses Ubuntu 20.04.3 LTS and Kernel 5.4.0-42-generic. You can find more information on the configuration details from &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/openvino_docs_performance_benchmarks_openvino.html#benchmark-setup-information" target="_blank" rel="noopener noreferrer"&gt;Benchmark Setup Information&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;On another note, regarding the FPS performance issue on your CPU, I will contact the Engineering team for further information and will update you once I have obtained feedback from them.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Megat.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 30 Dec 2022 10:39:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1442740#M28848</guid>
      <dc:creator>Megat_Intel</dc:creator>
      <dc:date>2022-12-30T10:39:18Z</dc:date>
    </item>
    <item>
      <title>Re: Is there any specific command for Dual Socket Xeon to run any Openvino demo ?</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1443369#M28852</link>
      <description>&lt;P&gt;Hi Megat,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for responding to my question.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I did the same benchmark on the i9-12900K, i9-10980XE, and i7-11700. With these 3 processors, we got the "expected" correct results. I can share my Excel sheet with the benchmark results for these 3 processors if you want.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I decided to run my testbench PCs with Ubuntu 22.10 Kernel 5.19 because Kernel 5.19 and above gives a performance improvement for Intel 10th-gen processors and above. Compared to Ubuntu 22.04 Kernel 5.15, with Kernel 5.19 we gained one more stream to push on these 3 processors in our in-development CCTV surveillance application. For Intel 12th-gen and above, however, there is a small problem with P-core and E-core management within the kernel itself, resulting in about the same performance as the i9-10980XE. I believe this issue lies within the Ubuntu OS and its kernel and how it handles 12th-gen Intel processors. Still, we see an improvement for the i7-11700 and i9-10980XE when running Kernel 5.19 and above.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, we are planning to test our application on our local cloud server, which is a dual-socket Intel Xeon Gold 6230. We want to see how many streams and how much FPS throughput we can push. But the benchmark gives me a very disappointing result, way slower than the i9-10980XE. To me, it doesn't make sense. We are hoping to use its dual-socket CPU to the fullest.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I want to thank you for sharing your benchmark on the i7-11700K. I can confirm that I got similar results to yours.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So here is my further question. Intel has done the benchmark for the Intel Xeon Gold 5218T under FP32 precision. May I know the configuration that Intel used? What is the command that your team entered? Are you running a single-socket configuration or a dual-socket configuration?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;P/S: Regarding my benchmark on my dual-socket Xeon Gold 6230, I tried the NUMA options (which I believe should force benchmark_app to use all threads on the CPU and use both CPU sockets), and they did: I can see all threads being used during inference, but I still get the bad result (as I posted). I also played with other possible option configurations for the command, and I still get a very bad result.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Looking forward to your response. Thank you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Jan 2023 03:18:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1443369#M28852</guid>
      <dc:creator>Aqil-ITXOTIC</dc:creator>
      <dc:date>2023-01-03T03:18:24Z</dc:date>
    </item>
    <item>
      <title>Re:Is there any specific command for Dual Socket Xeon to run any Openvino demo ?</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1443702#M28855</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi Aqil-ITXOTIC,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Thank you for your patience.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;For your information, based on the feedback I got, it is suggested to move to C++ implementation if performance&amp;nbsp;is the priority.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;The C++ implementation is faster than Python for 2 reasons:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN style="font-size: 16px;"&gt;The language itself is faster.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="font-size: 16px;"&gt;There’s GIL in Python which drags performance down for multithreaded scenarios, which is the case for object_detection_demo.&lt;/SPAN&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;The demo needs to perform stuff like image reading from a video, running model pre/post-processing, and drawing bounding boxes on a resulting frame. That keeps 1 core busy (which is 2 threads because of hyperthreading). That core isn’t used for inference.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;This can be verified by comparing different scenarios for the demos. 
Use&amp;nbsp;-nthreads&amp;nbsp;to compare between scenarios.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;If you are experiencing FPS fluctuation which results in poor performance, having -nstreams 12 means that the system generates results for 12 frames almost at the same time. For the demo that means that it generates results for 12 images almost at the same time and after that, there’s a long pause.&amp;nbsp;You need to choose between throughput and latency scenarios.&amp;nbsp;The latency scenario is achieved by setting -nstreams 1. With that, it will reduce FPS fluctuations, but the average FPS will drop.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;It is suggested to reproduce the experiments with different arguments and see if there is any improvement in the demos. You can help to share the results as well.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN style="font-size: 16px;"&gt;Run the demo by&amp;nbsp;increasing the -nstreams and -nthreads values instead of default values.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN style="font-size: 16px;"&gt;Choose between throughput and latency scenarios: -nstreams 1 for latency scenarios.&lt;/SPAN&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;On the other hand, I would suggest you try out our latest OpenVINO™ Toolkit 2022.3 version as this issue might be minimized in the newer version.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regarding the &lt;/SPAN&gt;&lt;A 
href="https://docs.openvino.ai/2022.2/openvino_docs_performance_benchmarks_openvino.html#benchmark-performance-results" rel="noopener noreferrer" target="_blank" style="font-size: 16px;"&gt;Benchmark Results&lt;/A&gt;&lt;SPAN style="font-size: 16px;"&gt; for intel Xeon Gold 5218T under FP32 precision. You can find the listing of all platforms and configurations used for testing in the &lt;/SPAN&gt;&lt;A href="https://docs.openvino.ai/2022.2/_downloads/33ee2a13abf3ae3058381800409edc4a/platform_list_22.2.pdf" rel="noopener noreferrer" target="_blank" style="font-size: 16px;"&gt;HW&amp;nbsp;platforms&amp;nbsp;(pdf)&lt;/A&gt;&lt;SPAN style="font-size: 16px;"&gt; and &lt;/SPAN&gt;&lt;A href="https://docs.openvino.ai/2022.2/_downloads/fdd5a86ab44d348b13bf5be23d8c0dde/OV-2022.2-system-info-detailed.xlsx" rel="noopener noreferrer" target="_blank" style="font-size: 16px;"&gt;Configuration&amp;nbsp;Details&amp;nbsp;(xlsx)&lt;/A&gt;&lt;SPAN style="font-size: 16px;"&gt; documents. Additionally, from the &lt;/SPAN&gt;&lt;A href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_CPU.html#device-name" rel="noopener noreferrer" target="_blank" style="font-size: 16px;"&gt;CPU Devices&lt;/A&gt;&lt;SPAN style="font-size: 16px;"&gt; page it is mentioned that on multi-socket platforms, load balancing and memory usage distribution between NUMA nodes are handled automatically.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Megat&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 04 Jan 2023 02:58:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1443702#M28855</guid>
      <dc:creator>Megat_Intel</dc:creator>
      <dc:date>2023-01-04T02:58:40Z</dc:date>
    </item>
    <item>
      <title>Re: Re:Is there any specific command for Dual Socket Xeon to run any Openvino demo ?</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1443728#M28857</link>
      <description>&lt;P&gt;Hi Megat,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for your feedback.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For now, I will stick to the C++ benchmark_app to benchmark my server performance.&lt;/P&gt;
&lt;P&gt;The FPS fluctuation is still an issue I don't have an answer for. But let's put that aside; I want to ask a new question.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am running benchmark_app using the mobilenet-v2 FP16 model on Ubuntu Live Server 20.04 Kernel 5.4, and this is the result I get:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2023-01-04 at 12.24.36 PM.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36782iFB0AAC42A92E8897/image-size/large/is-moderation-mode/true?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="Screenshot 2023-01-04 at 12.24.36 PM.png" alt="Screenshot 2023-01-04 at 12.24.36 PM.png" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2023-01-04 at 12.22.19 PM.png" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/36783iC324BE0374174A40/image-size/large/is-moderation-mode/true?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="Screenshot 2023-01-04 at 12.22.19 PM.png" alt="Screenshot 2023-01-04 at 12.22.19 PM.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The FPS I got is reasonably good, but if we take a look at the CPU usage, we can see that only a single socket is doing the work while the other socket is idle.&lt;/P&gt;
&lt;P&gt;My question is: how do I get both sockets to run at full load with benchmark_app? Which option/flag should I use in the command?&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jan 2023 05:11:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1443728#M28857</guid>
      <dc:creator>Aqil-ITXOTIC</dc:creator>
      <dc:date>2023-01-04T05:11:25Z</dc:date>
    </item>
    <item>
      <title>Re: Is there any specific command for Dual Socket Xeon to run any Openvino demo ?</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1444052#M28859</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi Aqil-ITXOTIC,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;For your information, there is a similar issue regarding the CPU only using half of the available cores from the GitHub page &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://github.com/openvinotoolkit/openvino/issues/14781" target="_blank" rel="noopener noreferrer"&gt;here&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;To run the benchmark application on full load using all cores on your CPU, you can try setting the number of threads (-nthreads) to 80 (the number of your virtual cores).&lt;/SPAN&gt;&lt;/P&gt;
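As a sketch (the model path here is an assumption), the suggestion above corresponds to an invocation like:

```shell
# Hypothetical benchmark_app run requesting all 80 logical processors
# of the dual-socket Xeon Gold 6230 (model path is an assumption).
benchmark_app -m vehicle-detection-0200.xml -d CPU -nthreads 80
```
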
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;However, please note that running your CPU at full load using all cores might not equate to the best performance. This is expected default behavior in the current design: a single-socket platform will use maximum CPU utilization, including hyper-threading processors, while a dual-socket platform uses 50% CPU utilization, without hyper-threading processors.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Each physical core is configured to support two logical processors. Either of the two logical processors can use all of the resources of the physical core. Most processes do not use all the resources of the physical core. It is possible to use both logical processors on a physical core; however, performance might go down when using both.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;On the other hand, using one logical processor per physical core could be considered to be 100% CPU utilization, but is reported to be 50% CPU utilization. You can reach 50% CPU utilization either by using one logical processor on each physical core or by using two logical processors on one-half of the physical cores. You can find more details regarding CPU usage from the discussion &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/HyperThreading-and-CPU-usage/m-p/1134673" target="_blank" rel="noopener noreferrer"&gt;here&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
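To make the utilization numbers above concrete, here is a small worked sketch for the dual-socket Xeon Gold 6230 discussed in this thread (20 cores per socket is that processor's published core count; treat the exact topology as an assumption for your system):

```python
# Worked example: logical vs. physical processor counts on a
# dual-socket Intel Xeon Gold 6230 (assumed topology: 20 cores per
# socket, hyper-threading enabled).
sockets = 2
cores_per_socket = 20
threads_per_core = 2  # hyper-threading: 2 logical processors per core

logical = sockets * cores_per_socket * threads_per_core   # 80 logical processors
physical = sockets * cores_per_socket                     # 40 physical cores

# Keeping one logical processor per physical core busy uses each core
# fully, yet the OS reports utilization as physical/logical of total:
reported_utilization = physical / logical                 # 0.5 -> "50% CPU"

print(logical, physical, reported_utilization)
```

This matches the "-nthreads 80" suggestion above: 80 is the number of logical processors, while using one thread per physical core shows up as 50% utilization in monitoring tools.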
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Additionally, as mentioned on the &lt;/SPAN&gt;&lt;A style="font-size: 16px;" href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_CPU.html#device-name" target="_blank" rel="noopener noreferrer"&gt;CPU Devices&lt;/A&gt;&lt;SPAN&gt; page, load balancing and memory usage distribution on multi-socket systems are handled automatically. The CPU plugin should optimize the number of parallel and queued inferences for CPU devices based on the number of CPU cores. However, you can try to change different parameters (e.g. -nstreams, -nthreads) to find the optimum configuration.&lt;/SPAN&gt;&lt;/P&gt;
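One way to search for that optimum configuration is a small sweep over the stream count; a minimal sketch (the model path and the candidate values are assumptions for illustration, not recommendations):

```shell
# Minimal sweep over stream counts; note the reported throughput for each.
# Model path and candidate values are assumptions; adjust to your setup.
for NS in 1 2 4 8 16 32; do
  echo "=== -nstreams $NS ==="
  benchmark_app -m vehicle-detection-0200.xml -d CPU -nstreams "$NS" \
    | grep -i "throughput"
done
```
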
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Megat&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Jan 2023 06:21:09 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1444052#M28859</guid>
      <dc:creator>Megat_Intel</dc:creator>
      <dc:date>2023-01-05T06:21:09Z</dc:date>
    </item>
    <item>
      <title>Re:Is there any specific command for Dual Socket Xeon to run any Openvino demo ?</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1446306#M28888</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi Aqil-ITXOTIC,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Thank you for your question. This thread will no longer be monitored since we have provided a suggestion. If you need any additional information from Intel, please submit a new question.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Megat&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 13 Jan 2023 01:47:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Is-there-any-specific-command-for-Dual-Socket-Xeon-to-run-any/m-p/1446306#M28888</guid>
      <dc:creator>Megat_Intel</dc:creator>
      <dc:date>2023-01-13T01:47:03Z</dc:date>
    </item>
  </channel>
</rss>

