Intel® Tiber Developer Cloud
Help connecting to or getting started on Intel® Tiber Developer Cloud

How to load modules like hdf5, cmake, spack

yehonatan123f
Novice
1,821 Views

Hello everyone,

I'm trying to build a specific branch of the OpenMC application, which requires HDF5, Spack, and CMake. However, when I run the "module avail" command, no modules are listed at all. Is there a fix for this, or is there another way to get these basic dependencies? I'd like to avoid installing them locally. That said, when I try to download the HDF5 tar file (to build it locally), it seems to be too large to upload to the Developer Cloud. Is there a workaround for this?

 

Thanks!

8 Replies
Athirah_Intel
Moderator
1,778 Views

Hi yehonatan123f,

Thank you for reaching out to us.


Please share the following information with us so that we can investigate this issue further:

  • The guide you are following to install the OpenMC application.
  • The commands used to install HDF5, CMake, and Spack.
  • The source from which you downloaded the HDF5 tar file.



Lastly, could you provide us with the Instance Details of your Intel® Developer Cloud as below:

 

Instance ID:

Instance Type:

Start Time:

End Time:


Regards,

Athirah


yehonatan123f
Novice
1,716 Views

Thank you for getting back to me.

I am using the following repo to build OpenMC: https://github.com/jtramm/openmc_offloading_builder.

According to the build_openmc.sh script, these are the dependencies needed:

```

# HDF5 and CMake dependencies
module load spack
module load cmake
module load hdf5

```

Since, as I understand it, no modules are currently available on the system, I was thinking I should start by building HDF5 manually.
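
One alternative I'm considering, since the build script expects Spack anyway: as far as I know, Spack itself can be bootstrapped from a plain git clone in the home directory, without root access or the module system. Something roughly like this (untested on the Developer Cloud; the clone path is just my own choice):

```
# Bootstrap Spack itself in the home directory (no root or modules needed)
git clone --depth=1 https://github.com/spack/spack.git ~/spack
source ~/spack/share/spack/setup-env.sh

# Have Spack build the dependencies in user space
spack install cmake hdf5

# Make them available in the current shell
spack load cmake hdf5
```

If that works, I assume the "module load" lines in build_openmc.sh could be replaced with the equivalent "spack load" commands.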

I'm not sure which version of HDF5 is required, but I tried to download hdf5-1.14.2.tar.gz (19.82 MB) from https://www.hdfgroup.org/downloads/hdf5/source-code.

Actually, I started working on a login node on the Developer Cloud (before spending money on dedicated hardware). When I tried to upload the tar.gz file to my home directory, I got an upload error: "Invalid response: 413 Request Entity Too Large".

 

Thank you again,

Yehonatan

 

Athirah_Intel
Moderator
1,694 Views

Hi yehonatan123f,

Thank you for reaching out to us.


For clarification: you mentioned that you are using a login node. If so, are you currently using the https://devcloud.intel.com/oneapi/home/ platform?


 

Regards,

Athirah


yehonatan123f
Novice
1,678 Views

I'm not sure what I am running on.

In the JupyterLab terminal, the prompt looks like this:

XXX_my_user_name_XXX@idc-beta-batch-pvc-node-14.

 

 

yehonatan123f
Novice
1,672 Views

As I understand it, the problem with uploading the HDF5 files is in the upload GUI.

When I use the 'git clone' command instead, the problem is resolved.

So for now I will just install all my dependencies manually and locally. Still, if there is any solution via the module system, that would be great.
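
For reference, this is roughly what I plan to run for HDF5, now that I can fetch sources directly on the node instead of uploading them through the GUI (untested; the install prefix is my own choice, and the direct download link has to be copied from the HDF Group page):

```
# Download the source directly on the node (link copied from
# https://www.hdfgroup.org/downloads/hdf5/source-code)
wget -O hdf5-1.14.2.tar.gz "<direct link to hdf5-1.14.2.tar.gz>"
tar -xzf hdf5-1.14.2.tar.gz
cd hdf5-1.14.2

# Configure, build, and install into the home directory (no root needed)
./configure --prefix=$HOME/opt/hdf5
make -j"$(nproc)"
make install

# Point later builds (e.g. OpenMC's CMake step) at this installation
export HDF5_ROOT=$HOME/opt/hdf5
```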

 

Thank you

Athirah_Intel
Moderator
1,569 Views

Hi yehonatan123f,

Thank you for sharing the information. 

 

For your information, I also received the same error when uploading the HDF5 files in JupyterLab.

[screenshot: error jupyter 2.png]

However, no issue was observed when using the git clone method instead.

[screenshot: git clone2.png]

We have informed the relevant team about this issue for further investigation and will update you as soon as possible.

 

 

Regards,

Athirah

 

 

Athirah_Intel
Moderator
1,553 Views

Hi yehonatan123f,

We just got an update from the relevant team regarding this issue. 

 

There is currently an issue with uploading files through the GUI in the JupyterLab instances, and it will take some time to fix.

 

For now, the best workaround is to use git clone or to download the file from a direct link. This is only temporary and should be fixed soon.
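
For example, from the JupyterLab terminal, something along these lines avoids the GUI upload entirely (the exact link would need to be copied from the relevant download page):

```
# Option 1: clone the repository directly on the instance
git clone https://github.com/jtramm/openmc_offloading_builder.git

# Option 2: download the archive from a direct link
wget "<direct link to the file>"
```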

 

 

Regards,

Athirah

 

Athirah_Intel
Moderator
1,448 Views

Hi yehonatan123f,

 

This thread will no longer be monitored since we have provided information. If you need any additional information from Intel, please submit a new question. 

 

 

Regards,

Athirah

