I simply want to load an LLM model using CUDA on a free GPU. I've installed transformers, accelerate, huggingface_hub, bitsandbytes, etc., and they have been installed in the local path. When I run !pip list in my Jupyter Notebook, all the modules are listed properly, but when I try to import them, Python raises a ModuleNotFoundError. Even when I check whether CUDA is available, it returns False, even though all the NVIDIA drivers are actually installed. So, how can I install the relevant packages and load an LLM model onto a free GPU?
Note: My account type is Standard
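A common cause of this "listed by pip but not importable" mismatch is that the !pip on the notebook's PATH targets a different Python environment than the kernel itself. A minimal diagnostic sketch (nothing here is specific to ITAC):

```python
import subprocess
import sys

# The interpreter the kernel actually uses for `import ...`
print("kernel interpreter:", sys.executable)

# Run pip through that same interpreter; if `!pip list` in the notebook
# shows packages this invocation does not, the shell's `pip` points at a
# different environment than the kernel.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print("pip for this kernel:", result.stdout.strip())
```

If the two disagree, installing with `%pip install <package>` (rather than `!pip install`) targets the active kernel's environment.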
Hi KishuPro,
Thank you for reaching out to us.
Have you tried restarting the kernel? Kindly try that, and feel free to come back if the issue still persists.
If possible, please share screenshots that might help us troubleshoot.
Regards,
Athirah
Hi @Athirah_Intel, thank you so much for the response!
Yes, I have tried restarting kernels, changing kernels, and even relogging, but nothing works for me. Here are some screenshots that should give you an idea:
it-01.jpg - Shows the result of the !pip list command; the 'accelerate' and 'huggingface-hub' packages are installed.
it-02.jpg - Shows what happens when I try to import from huggingface_hub: it says no module named 'huggingface_hub'.
it-03.jpg - Shows the result of the !pip show huggingface_hub command; the package is installed in my local folder, which I've also tried adding to the system path, but that doesn't work either.
it-04.jpg - Shows that I can successfully import PyTorch, but no CUDA is available, neither in the PyTorch GPU kernel nor in any other kernel.
I want to load up an LLM model on to a free GPU if possible.
Regards,
Kishu
Hi KishuPro,
We have informed the relevant team regarding this issue. We will get back to you once we receive any feedback from them. Thank you for your patience.
Regards,
Athirah
Hello Kishore, I think the issue is caused by the configuration of the huggingface_hub library. According to the documentation, it is recommended to use a virtual Python environment, and only minimal dependencies are installed unless specified otherwise.
Could you please try installing the library once more according to the official guide and let us know whether the import works?
The guide: https://huggingface.co/docs/huggingface_hub/en/installation
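As a sketch of verifying the result after reinstalling, this hypothetical helper checks whether a package resolves in the interpreter the kernel is actually running (the helper name is illustrative, not from the guide):

```python
import importlib.util

def importable(name: str) -> bool:
    """Return True if `name` can be imported by this interpreter."""
    return importlib.util.find_spec(name) is not None

print(importable("json"))             # True: the stdlib is always present
print(importable("huggingface_hub"))  # True only if it landed in this env
```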
Actually, now I can import transformers, huggingface_hub, bitsandbytes, etc. successfully in the PyTorch 2.5 kernel (and only in that one). But it says CUDA is not available and NVIDIA drivers are not installed whenever I try to run any GPU-related code. Please check the following images:
it-05.jpg: Shows that I can successfully import all the relevant packages I need in the PyTorch 2.5 kernel.
it-06.jpg: Shows that CUDA is not available and NVIDIA drivers are not installed (in any of the kernels).
So, do I have to install NVIDIA drivers myself first? What is the process to get a free GPU up and running?
Thanks!
Regards,
Kishore
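For what it's worth, the state described above (CUDA unavailable, drivers absent) can be inspected from a notebook without sudo; a small sketch, assuming nothing about the host:

```python
import importlib.util
import shutil

# An `nvidia-smi` binary on the PATH is a quick proxy for an installed
# NVIDIA driver; no root access is needed to check.
print("nvidia-smi on PATH:", shutil.which("nvidia-smi") is not None)

# Only probe CUDA if PyTorch is importable in this kernel.
if importlib.util.find_spec("torch") is not None:
    import torch
    print("CUDA available:", torch.cuda.is_available())
    print("CUDA build:", torch.version.cuda)  # None on CPU-only builds
else:
    print("torch not installed in this environment")
```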
Hi Kishore,
I would need some details about your setup (OS, GPU model, etc.) to suggest a suitable driver installation.
You can also refer to this general Jupyter Notebook configuration guide: https://saturncloud.io/blog/how-to-run-jupyter-notebook-on-gpus/
Thanks for the link; it's useful in general. But I do know how to install drivers when I have system-level access to, and information about, a machine. My account type is Standard (Free), so I don't have much information about, or system-level access to, the systems here on Intel Tiber, which is why I'm asking you in the first place. Nevertheless, I've tried to gather some info, as shown in the attached it-07.jpg image. I am new here, so I'm sorry if I'm missing something. Thanks!
Regards,
Kishore
Thanks for the additional information. In this case, I recommend starting with the NVIDIA driver installation using the ubuntu-drivers tool:
https://ubuntu.com/server/docs/nvidia-drivers-installation
The next configuration step would be CUDA:
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/
Please let me know if this helps you proceed.
Unfortunately, I don't have sudo access, as far as I understand. I don't know the password for it, and my Intel Tiber password doesn't work there. I just want to try out a free GPU; please guide me. Thanks!
Regards,
Kishore
Hi Kishore,
Would you like me to escalate this case to our business team?
The following is written when I choose 'Home > Get Started' on my cloud console:
For developers, students, and AI/ML researchers - Gain mastery in AI and accelerated computing with Jupyter notebooks running on Intel GPUs and AI accelerators.
As far as I can understand, GPU access should be available in Jupyter Notebooks even on a Standard (Free) account. So, if that is the case, please do escalate this; I'd certainly like to use the facility that Intel Tiber generously provides for learning and research. Otherwise, please let me know if it's not possible. Thanks!
Regards,
Kishore
Hi Kishore,
Sure, let me check with the business team whether you are eligible for this programme. I know that Intel has a university partnership programme, but it is limited to specific schools.
Hi Kishore, here's what I learned from the business team:
Thank you for your interest in Intel Tiber AI Cloud (ITAC). While ITAC provides a range of services for running AI workloads, they all use GPU and AI accelerator hardware produced by Intel. Since CUDA does not support Intel hardware, there is no way to use the two together.
However, the ITAC Learning Catalog does contain a hosted Jupyter notebook named "Gemma Model Fine-tuning using SFT and LoRA", focused on the Gemma model, available at:
https://console.cloud.intel.com/learning/notebooks/detail/99deeb99-b0c6-4d02-a1d5-a46d95344ff3
It is likely not exactly what you are looking for, but please take a look, as it does integrate Gemma with Intel hardware.
Can I support you further?
First of all, I really appreciate your patience and regular responses to my questions, thanks!
For my case, here are some key points:
1. CUDA is not compatible with Intel GPU hardware. This was perhaps my mistake, as I never checked it before.
2. I was able to adapt the notebook resource you provided for my workload very easily!
3. Ultimately, I was able to run my workload on an XPU (Intel(R) Data Center GPU Max 1100) from my standard account.
"Can I support you further?"
Very nice of you to ask! You have already solved my problem, though. If anything, you could provide some tutorial links so that I and others can learn more about the different Intel GPU hardware available here on ITAC.
Another suggestion, if possible: you could change my original topic title from "How do I install and use packages?" to something more relevant like "How do I install packages and use Intel GPU hardware?", so that it's easier to find for others searching for the same thing.
Thanks again!
Regards
Kishore
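A minimal sketch of the device selection Kishore describes in point 3, assuming PyTorch 2.5+ (whose XPU backend is built in), falling back to CPU when no Intel GPU is present:

```python
import importlib.util

if importlib.util.find_spec("torch") is not None:
    import torch

    # Prefer Intel's XPU backend when available; no CUDA involved.
    use_xpu = hasattr(torch, "xpu") and torch.xpu.is_available()
    device = "xpu" if use_xpu else "cpu"

    # Allocate a tensor directly on the chosen device.
    x = torch.randn(2, 3, device=device)
    print("tensor allocated on:", x.device.type)
else:
    device = None
    print("torch not installed in this environment")
```

On an ITAC instance with an Intel(R) Data Center GPU Max 1100, `device` would be "xpu"; the same code runs unchanged on CPU-only machines.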
Hi Kishore, I'm glad your case is resolved. I will close this issue from our side.
I will check with my colleagues whether a topic title change is possible; I can't do it from my account.
You can find more about available GPU instances in the documentation: https://console.cloud.intel.com/docs/reference/gpu_instances.html
