I learned about oneAPI through my professor, who told me it could be very useful for training neural networks on different hardware. My project is written in Python, but I'm still confused about what oneAPI actually proposes. Can it be used to train my deep learning algorithm on different hardware? How could that be done?
Thanks in advance
- General Support
Modern workload diversity has created a need for architectural diversity: no single architecture is best for every workload. A mix of scalar, vector, matrix, and spatial architectures, deployed as CPU, GPU, and FPGA accelerators, is required to extract the needed performance. Today, coding for CPUs and accelerators requires different languages, libraries, and tools, which means each hardware platform demands a completely separate software investment and offers limited application code reusability across target architectures. The oneAPI programming model simplifies programming of CPUs and accelerators by using modern C++ features to express parallelism in a language called Data Parallel C++ (DPC++). DPC++ enables code reuse for the host (such as a CPU) and accelerators (such as a GPU or FPGA) from a single source language. Mappings within the DPC++ code can then be used to run the application on the hardware, or set of hardware, that best accelerates the workload.
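To make the "single source, many devices" idea concrete, here is a minimal DPC++/SYCL sketch. It assumes you have the Intel oneAPI DPC++ compiler installed (e.g. `icpx -fsycl`); the same code runs on whatever device the runtime selects, whether CPU, GPU, or FPGA emulator:

```cpp
// Minimal DPC++ (SYCL 2020) sketch: one source file, device chosen at runtime.
// Compile with the oneAPI DPC++ compiler, e.g.:  icpx -fsycl vadd.cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // The default selector picks the "best" available device (GPU if present,
    // otherwise CPU); no code changes are needed to retarget the kernel.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
    {
        sycl::buffer bufA{a}, bufB{b}, bufC{c};
        q.submit([&](sycl::handler& h) {
            sycl::accessor A{bufA, h, sycl::read_only};
            sycl::accessor B{bufB, h, sycl::read_only};
            sycl::accessor C{bufC, h, sycl::write_only};
            // The kernel body is ordinary C++, executed on the selected device.
            h.parallel_for(1024, [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffers go out of scope here, so results are copied back to the host
    std::cout << c[0] << "\n";  // element-wise sum: 1.0 + 2.0
}
```

This is only a vector-add toy, not a neural network, but the offload mechanism is the same one a DPC++-ported training loop would use.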
Yes, you can definitely train your DL algorithm, but you would need to rewrite it in DPC++ so that it can be offloaded to different sets of hardware. Alternatively, you can look at the Intel® AI Analytics Toolkit or the Intel® Distribution of OpenVINO™ Toolkit.
Here are some links that will help you get started with oneAPI.
Heterogeneous platform support in oneAPI is currently available only through the DPC++ programming language, so oneAPI does not yet support running code written in Python on different hardware. If you need your Python project to run on different hardware, the currently available solution is to offload the part you wish to accelerate to DPC++ and call it from Python.
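One common way to do that offload is to build the DPC++ kernel as a shared library and call it from Python with `ctypes`. The DPC++ library name would be whatever you build (hypothetical in this sketch); to keep the snippet runnable, it demonstrates the identical mechanism against the standard C math library instead:

```python
# Sketch of the Python-side offload pattern: your compute-heavy code is
# compiled to a native shared library (for oneAPI, a DPC++-built .so),
# and Python calls into it via ctypes. Here we load the C math library
# as a stand-in for a hypothetical DPC++-built "libmykernel.so".
import ctypes
import ctypes.util

lib = ctypes.CDLL(ctypes.util.find_library("m"))  # stand-in for your DPC++ .so

# Declare the C signature of the function you are calling, exactly as you
# would for an exported extern "C" entry point in a DPC++ library.
lib.cos.argtypes = [ctypes.c_double]
lib.cos.restype = ctypes.c_double

print(lib.cos(0.0))  # 1.0
```

The DPC++ side would expose its entry points with `extern "C"` so the symbol names are predictable from Python.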
However, Intel has released the Intel® AI Analytics Toolkit as part of the oneAPI initiative to help with Python deep learning projects by providing Intel®-optimized DL frameworks and tools to optimize workloads on CPUs.
The Intel® AI Analytics Toolkit can be used to:
1) Deliver high-performance training on CPUs and integrate deep learning (DL) inference into your AI applications with Intel®-optimized DL frameworks: TensorFlow* and PyTorch*.
2) Accelerate data science and analytics stages with compute-intensive Python* packages enhanced for Intel® architectures, including NumPy, SciPy, scikit-learn*, and XGBoost*.
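A key point about these optimized packages is that they are drop-in replacements: the API is unchanged, so existing code benefits without edits. The sketch below is plain NumPy, showing the kind of BLAS/LAPACK-heavy work that Intel's oneMKL-backed build accelerates transparently:

```python
# Plain-NumPy sketch; the Intel® Distribution for Python ships NumPy built
# against oneMKL with the identical API, so this exact code runs unchanged
# (just faster) under the optimized build.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 256))

gram = x @ x.T                       # matrix product: dispatched to the BLAS backend
eigvals = np.linalg.eigvalsh(gram)   # symmetric eigensolver: LAPACK-backed

# A Gram matrix is positive semidefinite, so (up to rounding) all
# eigenvalues should be non-negative.
print(bool(eigvals.min() >= -1e-8))
```

The same applies to the optimized scikit-learn and XGBoost: you keep the familiar `fit`/`predict` workflow and the acceleration happens underneath.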
You can read more about the toolkit by visiting the link below.
If you are open to trying DPC++, Intel has also launched the Intel® oneAPI DL Framework Developer Toolkit, which offers optimized building blocks for training deep neural networks through a high-level programming interface.
The DL Framework Developer Toolkit includes the Intel® oneAPI Deep Neural Network Library (oneDNN), which can be used to develop fast neural networks on Intel® CPUs and GPUs from performance-optimized building blocks. To repeat the caveat above, though: these networks must be programmed in DPC++/C++, not Python.
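To give a feel for what "building blocks" means, here is a small oneDNN sketch applying a ReLU primitive on the CPU. This uses the oneDNN 2.x-era C++ API (the exact descriptor classes changed in later versions, so treat it as illustrative rather than definitive), and it requires linking against oneDNN (`-ldnnl`):

```cpp
// Hedged sketch of a oneDNN (2.x API) forward ReLU on CPU.
// Build (assumption): icpx relu.cpp -ldnnl
#include "dnnl.hpp"
#include <vector>
#include <unordered_map>

int main() {
    // An engine abstracts the device (CPU here, could be GPU); a stream
    // orders primitive executions on that engine.
    dnnl::engine eng(dnnl::engine::kind::cpu, 0);
    dnnl::stream strm(eng);

    // Describe a 2x4 float tensor in "nc" (batch, channels) layout.
    dnnl::memory::desc md({2, 4}, dnnl::memory::data_type::f32,
                          dnnl::memory::format_tag::nc);
    std::vector<float> data = {-1.f, 2.f, -3.f, 4.f, -5.f, 6.f, -7.f, 8.f};
    dnnl::memory mem(md, eng, data.data());

    // Create and execute the ReLU primitive in place.
    dnnl::eltwise_forward::desc relu_d(
        dnnl::prop_kind::forward_inference,
        dnnl::algorithm::eltwise_relu, md, /*alpha=*/0.f);
    dnnl::eltwise_forward relu(
        dnnl::eltwise_forward::primitive_desc(relu_d, eng));
    relu.execute(strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
    strm.wait();  // data now holds {0, 2, 0, 4, 0, 6, 0, 8}
}
```

A full network is built by chaining such primitives (convolution, pooling, normalization, and so on), which is exactly what the Intel-optimized TensorFlow and PyTorch builds do under the hood.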
For more information on the DL Framework Developer Toolkit, you can check out the link below.