I have read up on oneAPI. It seems like a great idea to ease development for a heterogeneous system. But I struggle to think of any real application that would need all of CPU, GPU, FPGA and AI accelerator. Intel must have thought about oneAPI's real use cases. Would you be able to share some of those thoughts behind oneAPI's real heterogeneous use cases?
We are very glad that you like the idea of oneAPI.
One thing I want to add here is that the main aim of oneAPI is to provide a single software abstraction that allows us to program a variety of hardware architectures, whether scalar, vector, spatial, or matrix. We know that there is no single architecture that is best for all workloads.
Some applications give their best performance on a CPU, some on a GPU, some on an AI accelerator, and others on an FPGA, depending on the behavior of the application's code.
For specific real-world oneAPI use cases, we will check with the relevant team and get back to you.
Stay Healthy Stay Safe!
Thank you for your reply and information.
I understand what oneAPI is trying to achieve, its programming model, etc. It is trying to unify several vertical programming models into one through the DPC++ language and a set of optimized libraries and tools. And the beauty of it is its single-source model.
Setting aside the challenges for the DPC++ compiler, which I am sure will in time become mature and optimized enough to be competitive with today's standalone ways of programming each architecture (in terms of performance and efficiency), I still struggle to see a real use case where one benefits from such a unified tool set compared with the standalone approach. Today most use cases are around CPU+GPU, CPU+FPGA, or CPU+AI ASIC. Can you give an example of a real system that the oneAPI programming model can target, and show its advantages over other system configurations?
I understand what you are asking, and what you are working on seems quite interesting.
Regarding real-world use cases of oneAPI, we are discussing this with the relevant team and will let you know as soon as possible.
Hi Frank, as you wrote, we expect the vast majority of accelerator configurations to be a CPU plus one or more of the same type of accelerator (e.g. CPU + GPU). It will be rare for a system to include more than one type of accelerator. The idea behind oneAPI is that it offers developers productivity benefits by letting them reuse more code across architectures and leverage their experience using the same language, libraries, and analysis tools across CPU, GPU, FPGA and other future accelerators. For example, a developer may be trying to decide between use of a GPU or an FPGA accelerator for a given project. oneAPI makes it easier to evaluate those alternatives without having to use a different language and set of tools for each implementation. Another example scenario would be where a GPU accelerator was used in a previous project. A developer has decided to use an FPGA for a new project and wants to reuse some of that previous GPU project. oneAPI makes that possible as well.
Hi Kent, thanks for confirming my understanding. Let me break down your various points and discuss further if possible.
1. It will be rare for a system to include more than one type of accelerator.
[Frank's comment]: I can think of a possible use case scenario in cloud service providers such as AWS, Microsoft Azure, Alibaba, Tencent. It's common for CSPs to consolidate hardware into a compute resource pool and offer compute instances to customers through virtualization/container technology. So it seems quite practical for a CSP to put CPUs, GPUs, FPGAs and ASICs into its servers for such a use case. I wonder if this is a genuine intention behind Intel oneAPI? And has any CSP shown real interest in using oneAPI to deploy such a setup? (I see only Tencent has joined oneAPI; no other CSP has done so.)
2. The idea behind oneAPI is that it offers developers productivity benefits by letting them reuse more code across architectures and leverage their experience using the same language, libraries, and analysis tools across CPU, GPU, FPGA and other future accelerators.
[Frank's comment]: This is true at a high level. However, it is arguable whether the libraries would remain the same when switching across different architectures. Libraries have different degrees of association with the underlying hardware depending on which layer of the stack they sit in (e.g., Level Zero is the most tightly coupled to the hardware). Developers who are familiar with CPUs might not have the same familiarity with FPGAs or AI accelerators, so the learning curve is still quite steep when switching architectures. In the workplace, it may be more realistic to have a GPU team working on GPU-related projects and an FPGA team working on FPGA-related projects, because the skill sets are quite different. These hardware targets already have their own mature development flows and tools, and the vendors have been investing heavily in usability and performance for a long time (e.g., Nvidia, Xilinx). How would oneAPI be able to change this?
Great questions. Yes, several CSPs have expressed interest in oneAPI and some are evaluating and providing feedback but we don’t have any public announcements to share at this time.
Regarding the trade-offs between single-architecture developer tools and oneAPI, there will certainly be companies that choose to keep using the tools they know. But feedback from customers is that many prefer the oneAPI approach. Every organization will make its own decision based on its specific requirements.
I appreciate that these are not easily answered given the current development state of oneAPI. However, I am seriously investigating oneAPI to see what I can do with it and whether I should switch and follow its development. As you may know, switching development flows is an expensive thing to do. I just want a fuller understanding and fair confidence before I do so. Therefore, if you could point me to some more insights, it would be much appreciated. Currently the public information stays at a very high level.