Authors: Narasimha Lakamsani, Kinara Pandya, Devang Aggarwal, Sesh Seshagiri, Jian Sun
Are you tired of switching between Windows and Linux environments to perform machine learning (ML) tasks? Do you want to accelerate inference of your ML applications in an effective way?
Fret not. There is a solution! This blog post is intended to serve as a guide to configuring your Windows-based system to get the most out of your Intel® Integrated Graphics Processing Unit (iGPU).
Now let’s see how Intel’s iGPU works with a Linux distribution (such as Ubuntu, openSUSE, Kali, Debian, or Arch Linux) on WSL, and explore the performance benefits of OpenVINO™.
I gave this combination of tools a try, with the demo shown below, and was really amazed by how seamlessly it works, with added acceleration to boot!
WSL2 is indeed a lifesaver for developers and students who want to perform ML tasks on a Windows machine without sacrificing the benefits of Linux tools like bash and grep.
Harnessing the power of Intel’s iGPU and OpenVINO™ with WSL gave me the best of both worlds: access to the Windows file system and tools like PowerShell and Visual Studio Code, all alongside a Linux environment for development.
Having Linux at my fingertips without having to dual-boot was so easy. I loved being able to run Linux and Windows applications at the same time, saving a lot of time in maintenance activities (software updates, file backups) on what would otherwise be two different machines.
I was excited to discover how using WSL in combination with OpenVINO™ on my Intel iGPU opened a whole new world of capabilities that you’ll see in the implementation below.
Side Note: To unleash the full potential of Intel hardware for AI workloads, you can use the OpenVINO™ toolkit natively. The free, open-source version is available here.
WSL2 in Detail
Windows Subsystem for Linux allows you to run a Linux environment on Windows without the burden of a full-blown virtual machine. Instead, it takes a lightweight virtualization approach: it leverages the Windows Hypervisor Platform to run an actual Linux kernel inside an extremely lightweight virtual machine. WSL2 was created primarily to enable developers to use Linux tools on a Windows device.
Let’s dive into the implementation details…
First things first: check your version of Windows
The following versions fully support this workflow, including GUI applications:
- Windows 11 (build 22000.*)
- Windows 11 Insider Preview with builds 21362 or higher
The following versions work as long as the application does not require a GUI:
- Windows 10 version 2004 and higher (build 19041 and higher)
- Windows 11
Follow the next steps to install Ubuntu with WSL2:
- Open Command Prompt or PowerShell with administrator privileges and run the following commands:
- wsl --update
- wsl --shutdown
- wsl --install -d Ubuntu-20.04
- wsl --list -v (verifies that the installation worked; the output should show version “2” for the installed Ubuntu)
- Refer to this link for any more details
Next, ensure the correct drivers are installed for your Intel iGPU
Install the Intel Graphics Drivers:
- Download and install the latest GPU host driver from https://www.intel.com/content/www/us/en/download/19344/intel-graphics-windows-dch-drivers.html
- Make sure the version is 30.0.100.9955 or later
- Run the downloaded executable and reboot the system to finish the installation
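The driver requirement above is a field-by-field comparison of dotted version strings, not a plain string comparison (e.g. “30.0.101.1069” is newer than “30.0.100.9955” even though it sorts earlier alphabetically). A minimal sketch of that check (the helper name is illustrative, not part of any Intel tool):

```python
def driver_at_least(installed, required="30.0.100.9955"):
    """Compare dotted driver version strings numerically, field by field.

    Intel graphics driver versions have four numeric fields; comparing
    them as tuples of ints avoids the pitfalls of string comparison.
    """
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

if __name__ == "__main__":
    print(driver_at_least("30.0.101.1069"))  # True: newer than the minimum
    print(driver_at_least("27.20.100.8681"))  # False: too old
```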
Install OpenCL™ Drivers
- Start Ubuntu on Windows by opening it from Search
- Run the following commands:
- sudo apt-get update
- sudo apt install ocl-icd-opencl-dev
- After finishing those steps, follow the steps described at https://github.com/intel/compute-runtime/releases to install the Intel graphics compute runtime for OpenCL
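The ocl-icd loader installed above discovers vendor drivers through `.icd` files placed in `/etc/OpenCL/vendors`, so a quick way to sanity-check the compute-runtime install is to list what the loader would find. A minimal sketch (the path follows the standard ICD loader convention; the helper function is illustrative):

```python
from pathlib import Path

def list_opencl_icds(vendors_dir="/etc/OpenCL/vendors"):
    """Return the .icd files the OpenCL ICD loader would discover.

    Each installed OpenCL vendor driver registers itself with one .icd
    file in this directory; an Intel entry should appear once the
    compute-runtime packages are installed.
    """
    path = Path(vendors_dir)
    if not path.is_dir():
        return []
    return sorted(p.name for p in path.glob("*.icd"))

if __name__ == "__main__":
    icds = list_opencl_icds()
    if icds:
        print("OpenCL drivers registered:", ", ".join(icds))
    else:
        print("No OpenCL drivers found; re-check the compute-runtime install.")
```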
Follow the steps below within Ubuntu and you should be all set to run the demo with the OpenVINO™ toolkit in the next step.
Set up and activate Python Virtual Environment
- cd ~
- sudo apt-get update
- sudo apt install python3-venv
- python3 -m venv openvino_env
- source openvino_env/bin/activate
Install OpenVINO™ Toolkit (download here) and verify installation
- python -m pip install --upgrade pip
- pip install openvino-dev[tensorflow2,onnx]
- mo -h (verifies that the installation worked; the Model Optimizer help message should appear if the installation finished successfully)
Download and setup Open Model Zoo
- git clone https://github.com/openvinotoolkit/open_model_zoo.git
- pip install open_model_zoo/demos/common/python
Download model and video needed to run the inference
- cd open_model_zoo/demos/human_pose_estimation_demo/python
- wget https://storage.openvinotoolkit.org/data/test_data/videos/face-demographics-walking-and-pause.mp4
- omz_downloader --name human-pose-estimation-0005
The setup was super easy and took me less than 10 minutes.
Now that all the requirements are set, let’s get to the fun part: running the demo!
Run the demo with CPU
- python3 human_pose_estimation_demo.py -m ~/open_model_zoo/demos/human_pose_estimation_demo/python/intel/human-pose-estimation-0005/FP32/human-pose-estimation-0005.xml -at ae -i face-demographics-walking-and-pause.mp4 -d CPU
Run the demo with Intel iGPU
- python3 human_pose_estimation_demo.py -m ~/open_model_zoo/demos/human_pose_estimation_demo/python/intel/human-pose-estimation-0005/FP32/human-pose-estimation-0005.xml -at ae -i face-demographics-walking-and-pause.mp4 -d GPU
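The only difference between the two commands above is the `-d` device switch. In OpenVINO’s Python API, `Core().available_devices` lists the device names the runtime can see (e.g. `["CPU", "GPU"]`), which makes it easy to prefer the iGPU and fall back to the CPU when no GPU driver is exposed. A minimal sketch of that selection logic, assuming the standard device-name strings (`pick_device` is an illustrative helper, not part of the toolkit):

```python
def pick_device(available, preferred=("GPU", "CPU")):
    """Return the first preferred device present in `available`.

    `available` mirrors the list of device-name strings that OpenVINO's
    Core().available_devices returns; the preference order here simply
    tries the iGPU first, then falls back to the CPU.
    """
    for device in preferred:
        if device in available:
            return device
    raise RuntimeError(f"none of {preferred} found in {available}")

if __name__ == "__main__":
    print(pick_device(["CPU", "GPU"]))  # -> GPU (iGPU driver visible)
    print(pick_device(["CPU"]))         # -> CPU (fallback)
```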
Voilà! The demo displays the resulting frames with the predicted pose for every person in the input image or video: a body skeleton consisting of a predefined set of key points and the connections between them. It also reports FPS (frames per second) and latency. You can use both metrics to measure application-level performance.
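To make the two metrics concrete: FPS is a throughput measure (frames completed per second of wall time), while latency is the time each individual frame takes to process. The sketch below computes both from a list of per-frame processing times; it is a simplified, non-pipelined model for intuition, not the demo’s own implementation:

```python
def summarize_metrics(frame_times_ms):
    """Compute application-level FPS and average latency from
    per-frame processing times given in milliseconds.

    FPS = frames / total wall time (throughput); latency = mean
    per-frame time. In a pipelined application the two can diverge,
    but this serial model shows what each metric measures.
    """
    total_ms = sum(frame_times_ms)
    fps = len(frame_times_ms) * 1000.0 / total_ms
    avg_latency_ms = total_ms / len(frame_times_ms)
    return fps, avg_latency_ms

if __name__ == "__main__":
    fps, latency = summarize_metrics([20.0, 25.0, 30.0, 25.0])
    print(f"FPS: {fps:.1f}, latency: {latency:.1f} ms")  # FPS: 40.0, latency: 25.0 ms
```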
Conclusion
A lot of tools that developers might want to use are best implemented or only available on a Linux platform. To solve this problem, OpenVINO™ toolkit works with WSL. As a result, developers and students alike are enabled to learn machine learning or develop deep learning applications on the operating system of their choice. Not only does using OpenVINO™ toolkit on WSL help accelerate your ML workflows with the CPU, but it also enables you to leverage Intel’s iGPU to boost your AI inferencing workloads.
Notices & Disclaimers
Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.
No product or component can be absolutely secure.
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.
Your costs and results may vary.
Intel technologies may require enabled hardware, software or service activation.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
© Intel Corporation. Intel, the Intel logo, OpenVINO and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.