Intel® DevCloud
Help for those needing help starting or connecting to the Intel® DevCloud

oneAPI: Access to RAMDISK

s_n
Beginner

Hi,

 

First of all, thank you for providing such an interesting development environment. I am thrilled to learn more about oneAPI and the accompanying technologies.

 

I would like to ask whether using RAM as a disk is possible during a oneAPI interactive session or during scheduled jobs? If so, could you point me to some how-to information?

 

Unix-like operating systems have a special directory (usually /tmp) for temporary files, which is often backed by main memory, so it does not suffer from the slow I/O of an HDD. The same is true for /dev/shm. I saw that /dev/shm is present in my environment; however, in my tests its speed did not seem comparable to RAM.
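One way to check whether a given path is actually RAM-backed, and to get a rough feel for its throughput, is a tmpfs check plus a dd write test (a minimal sketch; the 256 MB size and file names are arbitrary choices):

```shell
# Is /dev/shm RAM-backed? "tmpfs" in the Type column means yes.
df -T /dev/shm

# Rough write-throughput comparison: home directory vs /dev/shm.
# conv=fdatasync forces the home-directory write to reach storage,
# so the two reported rates are comparable.
dd if=/dev/zero of="$HOME/ddtest.bin" bs=1M count=256 conv=fdatasync
dd if=/dev/zero of=/dev/shm/ddtest.bin bs=1M count=256

# Clean up the test files.
rm -f "$HOME/ddtest.bin" /dev/shm/ddtest.bin
```

On a shared node the numbers will vary with other users' load, and note that files in /dev/shm count against available RAM, so very large files there can starve your own job.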

 

Best regards,

s_n

JananiC_Intel
Moderator

Hi,


Thanks for your interest in Intel oneAPI. You can explore further with this link: https://devcloud.intel.com/oneapi/get_started/


Regarding your question about using RAM as a disk, we would like to know more about your experiments or workload.

Could you let us know what you are trying to do in DevCloud?


Regards,

Janani Chandran


s_n
Beginner

Hi!

>Thanks for your interest in Intel OneAPI. You can explore further with this link(https://devcloud.intel.com/oneapi/get_started/)

Thanks! I did some preliminary reading. As my background is more on the business than the development side, may I also ask whether the Julia extension for oneAPI might also work on Intel DevCloud? [https://github.com/JuliaGPU/oneAPI.jl]

I know that Julia is not natively supported; however, I am aware that the language works great here and seems to be of general interest to Intel [Parallel Universe Magazine - Issue 38, August 2019, p. 47 and Issue 29, July 2017, p. 23], hence my question about oneAPI.jl. The oneAPI.jl GitHub page states: "For now, the oneAPI.jl package also depends on the Intel implementation of the oneAPI spec. That means you need compatible hardware; refer to the Intel documentation for more details."

>Regarding your question to use RAM as a disk ,we would like to know more about your experiments or workload.

The experiment is related to analyzing the outputs of supercomputer simulations, which come as a significant number of files (currently a minimum of 3,600, and soon, hopefully, up to 50-100k files per session, with 4 sessions a day). The analysis I am doing is therefore somewhat I/O-bound. In addition, my current understanding is that, for several reasons, it may be better to keep the work within file-system operations rather than using RAM directly in the programming environment, hence my question about a RAMDISK.

>Could you let us know what you are trying to do in devcloud?

Sure. It is a hobby project with particular areas of interest in: i) adjoint and forward sensitivity methods; ii) artificial intelligence and fast radiative transfer models; iii) simulated and real raster imagery and numerical data; iv) traditional and quantum optimization algorithms. On Intel DevCloud I am currently most interested in oneAPI and FPGAs, because a custom traditional-optimization software I currently use is written in OpenCL and heavily utilizes GPUs. Moreover, I am planning to work on artificial intelligence using Intel-supported solutions.

In addition, during my university years at Oracle I worked very closely with Intel, and with about 30 other major IT companies, on a project under the auspices of the Ministry of Economy in my country that turned out to be a success, so I am particularly (though not only) interested in learning about Intel technologies. I have to admit that even though I am over 40 now and, as I mentioned, on the business side, every time I type "ssh devcloud" in my terminal while sipping coffee from a cup with Oracle and Intel logos, I have a big smile on my face.

This is why I am here and what I am doing. I am not sure whether this answers your questions; however, for now I am not able to share much more about my general activities publicly.

With kind regards!
s_n

s_n
Beginner

I did some additional thinking. I just wanted to add that if anything in my writing is not in line with Intel DevCloud policy, please let me know and I will adjust immediately. I am not looking for any trouble - this is a great development environment.

JananiC_Intel
Moderator

Hi,


Thanks for sharing all the requested details.


We do not support Julia on DevCloud. You could give it a try, as Julia takes advantage of oneAPI Level Zero to do device offload. The L0 runtime is installed on the DevCloud, so theoretically oneAPI.jl should work there, but we haven't tested it on our end. Julia can also be installed in your home directory using pip, which doesn't require any special privileges.


Regarding your question about the tmp folder: you can direct your program's results/output or temp files to /tmp.

But remember that others sharing the same device can also use /tmp, which means speeds might not be as expected, and files written to /tmp can be seen by others.

It is best to use it only for temporary file reads and writes.
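Since /tmp on a shared node is readable by other users, one hedge (a minimal sketch; the directory prefix and file names are arbitrary) is to create a private scratch directory and restrict its permissions:

```shell
# Create a private scratch directory under /tmp; mktemp picks a
# unique name so concurrent jobs do not collide.
SCRATCH=$(mktemp -d /tmp/myjob.XXXXXX)
chmod 700 "$SCRATCH"   # only the owner can list or read it

# Direct intermediate files there (placeholder content).
echo "intermediate result" > "$SCRATCH/part-0001.txt"

# Remove the directory when the job finishes so the shared
# space is not left polluted.
rm -rf "$SCRATCH"
```

The same pattern works for /dev/shm if the memory-backed path turns out to be faster for the workload.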


Please find below some useful links which will help you explore oneAPI further.

Intel oneAPI AI Analytics Toolkit -

https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html#gs.5vf5j...

Intel oneAPI Base Toolkit -

https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit.html#gs.5vf7gv

https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/fpga.html#gs.5vfsic


Hope this helps!


Regards,

Janani Chandran


JananiC_Intel
Moderator

Hi,


Has the solution provided helped? Do you have any updates?


Regards,

Janani Chandran


s_n
Beginner

Hi,

Thank you very much, Janani, for the information! I am sorry for the cipher-freak approach during the registration process. I tried to update my profile; however, it does not seem possible in my browser. My name is Szymon.

>We do not support Julia on DevCloud. You could give it a try as Julia is taking advantage of oneAPI Level Zero to do device offload. The L0 runtime is installed on the DevCloud so theoretically oneAPI.jl should work there, but we haven't tested it from our end. Julia can be also installed in your home directory using pip which doesn't require any special privilege.

I usually install Julia with the help of Julia Installer 4 Linux (https://github.com/abelsiqueira/jill), and that was the case here in the base environment (v1.6.1). For a separate conda environment I used a conda package (conda install -c mjohnson541 julia) and installed v1.6.0.

> The L0 runtime is installed on the DevCloud so theoretically oneAPI.jl should work there, but we haven't tested it from our end.

My current understanding is that you are referring to the Intel(R) Graphics Compute Runtime for oneAPI Level Zero and OpenCL(TM) Driver (https://github.com/intel/compute-runtime). I was able to run tests in both environments, including a separate, clean conda environment with Julia 1.6 installed (and almost nothing else). The "Introducing: oneAPI.jl" page, dated Nov 5, 2020 (https://juliagpu.org/post/2020-11-05-oneapi_0.1/index.html), says "no additional drivers required! oneAPI.jl ships its own copy of the Intel Compute Runtime, which works out of the box on any (sufficiently recent) Linux kernel". So I understand that the L0 runtime you mentioned does not need to be installed separately. If possible, please correct me if my understanding is wrong.

I was able to successfully run oneAPI.jl in both environments (base and the newly created conda one) on a Coffee Lake machine (Intel(R) Xeon(R) E-2176G CPU @ 3.70GHz / Intel(R) UHD Graphics P630 [0x3e96]) in interactive mode (qsub -I -l nodes=1:gen9:ppn=2 -d .). According to the Intel(R) Graphics Compute Runtime GitHub page (https://github.com/intel/compute-runtime), the Coffee Lake platform is fully supported, and I have not experienced any problems so far.
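For anyone retracing these steps, a couple of quick checks inside the interactive session can save time before launching Julia (a sketch; sycl-ls ships with the oneAPI Base Toolkit, and the grep patterns are assumptions about its output format):

```shell
# 1) Is the Level Zero loader library visible to the dynamic linker?
ldconfig -p | grep -i ze_loader || echo "libze_loader not found"

# 2) Does sycl-ls (from the oneAPI Base Toolkit) report a Level Zero
#    backend for the GPU on this node?
if command -v sycl-ls >/dev/null 2>&1; then
    sycl-ls | grep -i "level-zero" || echo "no Level Zero devices listed"
else
    echo "sycl-ls not on PATH in this shell"
fi
```

If the loader is missing or sycl-ls shows no Level Zero devices, oneAPI.jl's bundled runtime may still work, but a failure here is a hint that the node itself lacks a supported GPU.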

I also tried to run it on an Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz with Intel(R) Iris(R) Xe MAX Graphics (quad core) (qsub -I -l nodes=1:iris_xe_max:quad_gpu:ppn=2 -d .), a Cascade Lake platform that is not mentioned at https://github.com/intel/compute-runtime, and I encountered some problems. I am providing this as supplementary information; I would like to underline that I understand Cascade Lake is not on the list of supported devices.

I was not able to get the oneAPI.jl package tests to pass. Running `] test oneAPI` from the Julia REPL resulted in severe problems, mostly related to: a) error: double type is not supported on this platform; b) error: backend compiler failed build.

I was also not sure about kernel programming: c) WARNING: both LLVM and ExprTools export "parameters"; uses of it in module oneAPI must be qualified.

The Level Zero wrappers did not return any problems in basic testing; however, please note that this topic is very new to me.

Julia's high-level array abstractions did not support Float64 operations. After changing operations from Float64 to Float32, they seemed to work fine at a basic level. However, mapreduce with Float32 returned:
d) ERROR: InvalidIRError: compiling kernel partial_mapreduce_device(typeof(identity), typeof(max), Float32, Val{16}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, oneDeviceMatrix{Float32, 1}, Base.Broadcast.Broadcasted{oneAPI.oneArrayStyle{1}, Tuple{Base.OneTo{Int64}}, typeof(abs), Tuple{oneDeviceVector{Float32, 1}}}) resulted in invalid LLVM IR
Reason: unsupported dynamic function invocation (call to emit_printf(::Val{fmt}, argspec...) where fmt in oneAPI at /home/uxxxxx/.julia/packages/oneAPI/bEvNc/src/device/opencl/printf.jl:27)

As for this part, I would like to ask additional questions:

- Is there any additional hardware available on DevCloud that supports oneAPI Level Zero, apart from Gen9?
- It seems that the Level Zero specification includes FPGAs. Are FPGAs currently supported?

> Regarding your question about the tmp folder: you can direct your program's results/output or temp files to /tmp. But remember that others sharing the same device can also use /tmp, which means speeds might not be as expected, and files written to /tmp can be seen by others.

Thank you. I ran some tests and confirm it works, as does /dev/shm.

>Please find some useful links which will help you explore more on oneAPI: Intel oneAPI AI Analytics Toolkit, Intel oneAPI Base Toolkit.

Thank you. I will read with interest!

Szymon

JananiC_Intel
Moderator

Hi,


Thanks for such a detailed response; we see two questions in it. Regarding your first question, "Is there any additional hardware available on DevCloud that supports oneAPI Level Zero, apart from Gen9?":


If a device is public, like Gen9 or Intel Iris Xe Max Graphics, then Level Zero is supported.


For your second question, "It seems that the Level Zero specification includes FPGAs. Are FPGAs currently supported?":


No, currently FPGAs are not supported.


Hope we have answered your questions. Please let us know in case of any queries or concerns.


Regards,

Janani Chandran




s_n
Beginner

Hi,

Thanks so much, Janani, for the reply. I also found an interesting article about Level Zero, referenced in the Intel July Dev Products Insights newsletter [https://jjfumero.github.io/posts/2021/09/introduction-to-level-zero/], that provides additional information. I also plan to discuss the subject with a representative of the JuliaGPU team within the next few weeks. Should I have any additional questions, I will allow myself to open this topic again.

Best regards,
SZ

JananiC_Intel
Moderator

Hi,


Thanks for the confirmation. If you need any additional information, please submit a new question as this thread will no longer be monitored.


Regards,

Janani Chandran

