Artificial Intelligence (AI)

Build and Develop ML workloads on Intel® Tiber™ Developer Cloud

Ramya_Ravi
Employee

Ramya Ravi, AI Software Marketing Engineer, Intel | LinkedIn
Sonya Wach, AI/ML Software Product Marketing, Intel | LinkedIn

Machine Learning (ML) is a field of artificial intelligence that uses statistical algorithms to learn from data and make predictions on unseen data without explicit instructions. Training ML models requires large amounts of data, computing power, and infrastructure. Intel provides a more accessible and flexible solution: Intel® Tiber™ Developer Cloud allows developers to accelerate AI development with Intel-optimized software on the latest Intel hardware.


Intel® has optimized major machine learning frameworks using oneAPI libraries, delivering top performance across Intel architectures. These software optimizations bring significant performance gains over stock implementations of the same frameworks. The Intel® Tiber™ Developer Cloud provides access to a variety of hardware, such as Intel® Gaudi® 2 AI accelerators and Intel® Xeon® Scalable processors, to power AI applications and solutions built on ML frameworks. Intel Tiber Developer Cloud allows developers to learn, prototype, test, and run workloads on their preferred CPU or GPU, with the option to try the platform and software optimizations through free-to-use Jupyter notebooks and tutorials.


This article introduces the best practices to implement and develop ML workloads on Intel Tiber Developer Cloud. Before following the practices in this article, we recommend that you read the detailed guide on how to get started with Intel Tiber Developer Cloud.

General Intel® Tiber™ Developer Cloud Usage Instructions:

  • Navigate to cloud.intel.com
  • Sign in or click the Get Started button to choose a service tier and create an account
  • Navigate to SOFTWARE > Training on the left panel
  • Click the Launch JupyterLab button on the top right


Several kernel types exist in JupyterLab, based on developer needs. Each kernel is a pre-installed Python environment; when a user opens a new notebook, JupyterHub loads the packages installed in the selected environment. In most cases, the Base kernel includes the packages needed to run the sample codes below.

Get Started with Machine Learning on Intel Tiber Developer Cloud

Intel® Extension for Scikit-learn*: Scikit-learn (sklearn) is a simple and efficient Python package for predictive data analysis and machine learning. Intel Extension for Scikit-learn improves the performance of many scikit-learn algorithms on Intel® architectures, attaining the best machine learning performance on both single-node and multi-node Intel systems.


Below is a guide on how to run the Intel Extension for Scikit-learn Performance Sample: SVC for Adult dataset on the Intel Tiber Developer Cloud:

  • Launch JupyterLab
  • In the Dashboard, open the raw Intel_Extension_for_SKLearn_Performance_SVC_Adult.ipynb by copying and pasting its URL into File > Open from URL...
  • Change the kernel, click Kernel > Change Kernel > Select Kernel > Base
  • Run all of the cells of the sample code and examine the outputs

This code sample shows how to train and predict with an SVC algorithm using Intel Extension for Scikit-learn. It also compares the performance of Intel Extension for Scikit-learn against stock scikit-learn, showing that patching scikit-learn yields a significant performance increase over the original implementation.
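The patching workflow can be sketched as follows. This is a minimal example on synthetic data (not the Adult dataset used in the notebook), and the guarded import is an addition for portability: it falls back to stock scikit-learn when the extension is not installed.

```python
# Patch scikit-learn with Intel optimizations when available;
# otherwise fall back to the stock implementation.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # subsequent sklearn imports use Intel-optimized kernels
except ImportError:
    pass  # stock scikit-learn is used instead

from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the Adult dataset
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train and predict exactly as with stock scikit-learn
clf = SVC(kernel="rbf").fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"SVC accuracy: {acc:.3f}")
```

Because patching happens before the `sklearn` imports, the rest of the script is unchanged scikit-learn code, which is what makes the before/after timing comparison in the notebook possible.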


Modin*: Modin is a drop-in replacement for pandas, enabling data scientists to focus on data analysis without having to change API code. This distribution adds optimizations that accelerate processing on Intel hardware, and Intel upstreams all optimizations to open-source Modin.

Follow the below steps on how to run the Modin Getting Started sample on the Intel Tiber Developer Cloud:

  • Launch JupyterLab
  • In the Dashboard, open the raw Modin_GettingStarted.ipynb by copying and pasting its URL into File > Open from URL...
  • Change the kernel, click Kernel > Change Kernel > Select Kernel > Modin
  • Run all the cells of the sample code and examine the outputs

This code sample demonstrates how to use Modin-accelerated pandas functions and compare their performance against stock pandas.
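The drop-in pattern can be sketched as below, on a small synthetic frame rather than the dataset used in the notebook. The guarded import is an addition: it falls back to stock pandas when Modin is not installed, and the analysis code is identical either way.

```python
# Drop-in swap: only the import line changes between Modin and pandas.
try:
    import modin.pandas as pd  # distributes pandas operations across cores
except ImportError:
    import pandas as pd  # fall back to stock pandas

import numpy as np

# Synthetic data frame standing in for a real dataset
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.integers(0, 100, size=(100_000, 4)),
    columns=list("ABCD"),
)

# Ordinary pandas-style analysis; no API changes needed
summary = df.groupby("A")["B"].mean()
print(summary.head())
```

Since Modin mirrors the pandas API, the same `groupby`/`mean` call runs in parallel under Modin, which is the basis of the timing comparison in the Getting Started notebook.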


Gradient Boosting Optimizations from Intel: Gradient boosting is a machine learning ensemble technique that combines predictions from several models to construct a robust prediction model. XGBoost (Extreme Gradient Boosting) is an open-source machine learning library which implements scalable, distributed gradient-boosted decision trees. Users can accelerate gradient boosting inference without sacrificing accuracy by using the fast tree-inference capability in the daal4py library developed by Intel.


Below is a guide on how to run the XGBoost Getting Started sample on the Intel Tiber Developer Cloud:

  • Launch JupyterLab
  • In the Dashboard, open the raw IntelPython_XGBoost_GettingStarted.ipynb by copying and pasting its URL into File > Open from URL...
  • Change the kernel, click Kernel > Change Kernel > Select Kernel > Base
  • Run all of the cells of the sample code and examine the outputs

This code sample demonstrates how to set up and train an XGBoost model, then predict from the features of a dataset, using XGBoost optimizations from Intel.


All the above frameworks optimized by Intel are available as part of AI Tools.


Check out Intel Tiber Developer Cloud to access the latest silicon hardware and optimized software to help develop and power your next innovative AI project! We encourage you to check out Intel's AI Tools and framework optimizations and learn about the unified, open, standards-based oneAPI programming model that forms the foundation of Intel's AI Software Portfolio. Also discover how our other collaborations with industry-leading independent software vendors (ISVs), system integrators (SIs), original equipment manufacturers (OEMs), and enterprise users accelerate AI adoption.

Useful resources

About the Author
Product Marketing Engineer bringing cutting edge AI/ML solutions and tools from Intel to developers.