
Harness PyTorch* and TensorFlow* for AI on Intel® Tiber™ Developer Cloud


Ramya Ravi, AI Software Marketing Engineer, Intel | LinkedIn

Sonya Wach, AI/ML Software Product Marketing, Intel | LinkedIn

Deep Learning (DL) is a branch of artificial intelligence that uses neural networks to enable computers to process complex problems and information in a way that mimics human neural processing. PyTorch* and TensorFlow* are the two most popular deep learning frameworks; both are designed to process large volumes of data but differ in how they execute code.

Intel® Tiber™ Developer Cloud allows developers to build and accelerate AI development with Intel-optimized software on the latest Intel hardware. Intel has also optimized major machine learning and deep learning frameworks using oneAPI libraries, delivering high performance across Intel architectures. With these software optimizations, users can achieve significant performance gains over stock implementations of the same frameworks. Intel Tiber Developer Cloud provides access to a variety of hardware, such as Intel® Gaudi® 2 AI accelerators and Intel® Xeon® Scalable processors, to power AI applications and solutions built on ML frameworks. It allows developers to learn, prototype, test, and run workloads on their preferred CPU or GPU, with the option to try out the platform and software optimizations through free-to-use Jupyter notebooks and tutorials.

This article demonstrates how to build and develop deep learning workloads using PyTorch and TensorFlow on Intel Tiber Developer Cloud. Before following the steps provided in this article, we recommend that you read the detailed guide on how to get started with Intel Tiber Developer Cloud.

General Intel® Tiber™ Developer Cloud Usage Instructions:

  • Navigate to cloud.intel.com
  • Sign in or click the Get Started button to choose a service tier and create an account
  • Navigate to SOFTWARE > Training on the left panel
  • Click the Launch JupyterLab button on the top right


Several kernel types are available in JupyterLab, depending on developer needs. Each kernel is a pre-installed Python environment; when a user opens a new notebook, JupyterHub loads the packages installed in the selected environment. In most cases, the Base kernel includes the packages needed to run the code samples below.
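As a quick sanity check before running a sample, a notebook cell like the following (a stdlib-only sketch; the package list is illustrative) can report which framework packages the active kernel provides:

```python
import importlib.util

# Framework packages to probe in the active kernel environment.
packages = ["torch", "tensorflow", "intel_extension_for_pytorch"]

# find_spec() locates a package without importing it, so this is fast
# and has no side effects even for heavy frameworks.
available = {p: importlib.util.find_spec(p) is not None for p in packages}

for pkg, found in available.items():
    print(f"{pkg}: {'installed' if found else 'not installed'}")
```

If a package the sample needs is missing, switching to a different kernel via Kernel > Change Kernel is usually quicker than installing it by hand.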

 

Get Started with Deep Learning on Intel Tiber Developer Cloud

Intel® Extension for PyTorch*: PyTorch is a Python-based open-source library for deep learning and AI. The Intel® Extension for PyTorch* optimizes deep learning training and inference performance on Intel processors by extending PyTorch with up-to-date feature optimizations. The extension also includes specific optimizations for Large Language Models (LLMs) used in a variety of Generative AI (GenAI) applications.

Below is a guide on how to run the Intel Extension for PyTorch Quantization Sample on the Intel Tiber Developer Cloud:

  1. Launch JupyterLab
  2. In the Dashboard, open the raw IntelPytorch_Quantization.ipynb notebook by copying its URL and pasting it into File > Open from URL...
  3. Change the kernel, click Kernel > Change Kernel > Select Kernel > PyTorch
  4. Run all the cells of the sample code and examine the outputs

This code sample showcases quantization of a ResNet50 model using the Intel Extension for PyTorch (IPEX). The model runs inference with FP32 and INT8 precision, covering both static and dynamic INT8 quantization. The sample compares inference times to showcase the speedup from INT8 quantization.
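The core idea the notebook demonstrates can be sketched with stock PyTorch's dynamic quantization API on a toy model (the notebook itself uses IPEX-specific APIs and a full ResNet50; this stand-in only illustrates the FP32-vs-INT8 comparison):

```python
import torch
import torch.nn as nn

# Small stand-in model; the notebook quantizes a full ResNet50.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

x = torch.randn(8, 64)

# FP32 baseline inference.
with torch.no_grad():
    fp32_out = model(x)

# Dynamic INT8 quantization: weights are converted to int8 ahead of time,
# activations are quantized on the fly at inference time.
int8_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
with torch.no_grad():
    int8_out = int8_model(x)

# Both models produce the same output shape; the INT8 model trades a
# small amount of accuracy for lower memory use and faster inference.
print(fp32_out.shape, int8_out.shape)
```

Static quantization, also covered in the notebook, additionally requires a calibration pass over representative data so activation ranges can be fixed ahead of time.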

 


 

Intel® Extension for TensorFlow*: TensorFlow is an open-source framework for creating and deploying deep learning models in a variety of applications. The Intel® Extension for TensorFlow* uses OpenMP to parallelize deep learning execution across CPU cores. The extension also allows users to flexibly plug an XPU into TensorFlow on demand, exposing the computing power inside Intel's hardware.

Follow the steps below to run the Leveraging Intel Extension for TensorFlow with LSTM for Text Generation sample on the Intel Tiber Developer Cloud:

  1. Launch JupyterLab
  2. In the Dashboard, open the raw TextGenerationModelTraining.ipynb notebook by copying its URL and pasting it into File > Open from URL...
  3. Change the kernel, click Kernel > Change Kernel > Select Kernel > TensorFlow GPU
  4. Run all the cells of the sample code and examine the outputs

This code sample demonstrates how to train a text generation model using an LSTM and the Intel Extension for TensorFlow on Intel processors. The model predicts the probability distribution of the next word in a sequence based on the given input (for better results, provide input from a real text sentence). Using the Intel Extension for TensorFlow results in faster training and lower GPU memory consumption.
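The next-word model the sample trains can be sketched with stock Keras APIs on placeholder data (the Intel Extension for TensorFlow plugs in automatically once installed, so the model code itself does not change; the vocabulary size and layer widths here are illustrative, not the sample's actual values):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 50  # toy vocabulary; the sample builds its own from a corpus

# A minimal next-word model: embedding -> LSTM -> softmax over the vocabulary.
model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 16),
    layers.LSTM(32),
    layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random integer-encoded sequences stand in for the sample's training text:
# each row is a 10-token context, each label is the token that follows it.
X = np.random.randint(0, vocab_size, size=(64, 10))
y = np.random.randint(0, vocab_size, size=(64,))
model.fit(X, y, epochs=1, verbose=0)

# Predict a probability distribution over the next word for one context.
probs = model.predict(X[:1], verbose=0)
print(probs.shape)  # one row of per-word probabilities summing to 1
```

In the actual sample, sampling repeatedly from this distribution (feeding each predicted word back in as context) is what generates new text.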

 


 

All the above Intel-optimized frameworks are available as part of the AI Tools. There is also a guide for running popular ML frameworks such as Scikit-learn*, Modin*, and XGBoost on Intel Tiber Developer Cloud.

Check out Intel Tiber Developer Cloud to access the latest silicon hardware and optimized software to help develop and power your next innovative AI project! We encourage you to check out Intel's AI Tools and framework optimizations and learn about the unified, open, standards-based oneAPI programming model that forms the foundation of Intel's AI Software Portfolio. Also discover how our other collaborations with industry-leading independent software vendors (ISVs), system integrators (SIs), original equipment manufacturers (OEMs), and enterprise users accelerate AI adoption.

 

Useful resources

About the Author
Product Marketing Engineer bringing cutting edge AI/ML solutions and tools from Intel to developers.