AI Tools from Intel
Find answers to your toolkit installation, configuration, and get-started questions.

Anyone know how to integrate my Intel Arc into Windows or Ubuntu to train and run LLMs on it?

MJY
Beginner
4,069 Views

Hey,

I wanted to ask if you guys know how to use my Intel GPU for AI training and deployment.

I have tried everything, but nothing works: WSL, the Intel Extension for PyTorch, everything.

So, yep.

11 Replies
Vipin_S_Intel
Moderator
3,986 Views

Hi Maximilian, could you please share the details below with us?

 

  • Provide the exact product name and version that you are using or wish to use.
  • Include the operating system and its build version on which you are installing the product.
  • Provide a brief explanation of your query.

 

To assist you further, we would require these details.


MJY
Beginner
3,940 Views

Hey

 

  • I am using Windows 10 (build version 22H2 with the latest updates)

  • Intel Arc A750 with the latest drivers installed

  • WSL (latest)

I am trying to integrate my Intel Arc A750 on Windows 10 in WSL (Windows Subsystem for Linux) to train and run LLMs on it with the oneAPI toolkit, but it never works, even though I follow the Intel guide. So I am asking here for help, in case someone has done this before and can help me.
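
For context, here is the kind of quick check I would expect to pass once everything is set up. It is only a minimal sketch, assuming torch and intel-extension-for-pytorch are installed in the active Python environment inside WSL:

# Minimal sanity-check sketch; assumes torch and intel-extension-for-pytorch
# are installed in the active Python environment inside WSL.
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch

print("torch:", torch.__version__, "ipex:", ipex.__version__)
print("XPU available:", torch.xpu.is_available())
for i in range(torch.xpu.device_count()):
    print(f"[{i}]: {torch.xpu.get_device_properties(i)}")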

Srii
Employee
3,913 Views

Do you have any specific models in mind? Please keep in mind to use an LLM that fits in the memory of the Arc. This link will help you get started: https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/LLM-Finetuning. Additionally, if you are looking at training ResNet-based models, you can refer here too: https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master/examples/gpu/training
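
As a starting point, here is a minimal sketch in the style of the ipex-llm GPU examples linked above; the model id below is only an illustration, so substitute one that fits in the Arc's memory:

# Sketch of low-bit (4-bit) loading with ipex-llm; the model id is a placeholder.
import torch
from ipex_llm.transformers import AutoModelForCausalLM  # low-bit drop-in for the transformers class
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; pick a model that fits in Arc memory
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True).to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_id)

with torch.inference_mode():
    input_ids = tokenizer("What can I run on an Intel Arc GPU?", return_tensors="pt").input_ids.to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))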








MJY
Beginner
3,899 Views

Can I run and finetune them over WSL?

And do you have a post or a guide on how I can finetune and run them in WSL?

MJY
Beginner
3,683 Views

Hey, so I am trying to use the oneAPI toolkit on my Windows 10 machine, with Visual Studio 2022 installed, the IDE integration set up, and the oneAPI toolkit successfully installed.

I am following this guide: "https://www.intel.com/content/www/us/en/docs/oneapi-base-toolkit/get-started-guide-windows/2024-0/run-a-sample-project-using-an-ide.html"

I get to the first project, under this topic:

Create a Project Using Microsoft Visual Studio*

and I get through it successfully, without errors, until this part comes:

  1. In the center area, select Vector Add. Vector Add is a simple test application that will help verify that the tools are set up correctly and can access your system's GPU.
  2. Click OK.
  3. From the Solution Explorer, right-click on vector-add and select Rebuild.
  4. After the program is built, click Debug > Start Without Debugging. The results will display.

I have tried everything, but nothing is working. I do not get the Rebuild option in my Visual Studio. I tried everything with ChatGPT and some of the listed help guides, and nothing is working.

MJY
Beginner
3,636 Views

[Screenshot: MJY_0-1726392224746.png]

See, there is no Rebuild option.

Srii
Employee
3,562 Views

This should help you finetune on the Arc A770: https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/LLM-Finetuning/LoRA#finetuning-llama2-7b-on-single-arc-a770
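
For orientation, that example is LoRA-based; the core of such a setup, written here with Hugging Face peft (the values below are placeholders, not the settings from the linked script), looks roughly like this:

# Rough LoRA-configuration sketch using Hugging Face peft; all values are placeholders.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",  # placeholder model id
                                            torch_dtype=torch.bfloat16)
lora_cfg = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05,  # placeholder hyperparameters
    target_modules=["q_proj", "v_proj"],    # attention projections in Llama-style models
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
# model = model.to("xpu")            # after `import intel_extension_for_pytorch`, to train on the Arc GPU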


And with respect to the Rebuild option not being shown, did you select the "Continue without code" option at the beginning?


MJY
Beginner
3,530 Views

Yep, I selected it, and I selected the oneAPI example, the first one, the one mentioned in the Get Started guide. I did everything exactly as the guide told me.

And I want to use my Intel Arc A750 GPU for finetuning.

MJY
Beginner
3,506 Views

Hey again

 

Today I tried setting it up in Ubuntu, but Intel Extension for PyTorch isn't working for me, neither in WSL nor in Ubuntu, and your docs on how to do it are confusing, because one says one thing and another says something else.

So could you please guide me through the environment setup and show how to verify that Intel Extension for PyTorch is working?

MJY
Beginner
3,502 Views

[Screenshots: MJY_0-1726577724778.png, MJY_1-1726577753100.png, MJY_2-1726577774050.png]

So first, as in the how-to, I go to "Continue without code", then to Extensions, then to Intel, then to "Browse oneAPI samples", and I select Base: Vector Add. I click OK, and then I want to do the Rebuild, but I don't have it.

[Screenshot: MJY_3-1726577931802.png]

 

I go up to the Build menu, where there is an option named Rebuild. I click on it and get this error:

Severity: Error
Code: LNK1104
Description: cannot open file 'libircmt.lib'
Project: D:\Base_Vector_Add1\out\build\x64-Debug\Base_Vector_Add1
File: D:\Base_Vector_Add1\out\build\x64-Debug\LINK
Line: 1

But I get it because I installed oneAPI onto another volume.

Then I go to CMakeLists.txt and put in the path where the lib file is located.

[Screenshot: MJY_6-1726578289410.png]

 

Then I rebuild it again and get this error:

Severity: Error
Description: CMake Error at D:\Base_Vector_Add1\CMakeLists.txt:31 (add_executable):
  Cannot find source file:

    src/main.cpp

  Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .ccm .cxxm .c++m .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
Project: VectorAdd
File: D:\Base_Vector_Add1\CMakeLists.txt
Line: 31

So yeah, I hope that helps a bit with troubleshooting.



 

 

Ying_H_Intel
Moderator
2,750 Views

 

 Hi MJY,

To install IPEX on Windows or Ubuntu, here is the official install guide: https://intel.github.io/intel-extension-for-pytorch/index.html#installation

[Screenshot: Ying_H_Intel_0-1733105081016.png]

To make things simpler, you can enter the commands below in a command line or conda prompt. For example,

in an Ubuntu command line:

install a conda environment:

wget https://github.com/conda-forge/miniforge/releases/download/24.7.1-0/Miniforge3-24.7.1-0-Linux-x86_64.sh

chmod +x Miniforge3-24.7.1-0-Linux-x86_64.sh

bash ./Miniforge3-24.7.1-0-Linux-x86_64.sh

source ~/miniforge3/bin/activate

conda create -n llm python=3.10
conda activate llm
python -m pip install torch==2.3.1+cxx11.abi torchvision==0.18.1+cxx11.abi torchaudio==2.3.1+cxx11.abi intel-extension-for-pytorch==2.3.110+xpu oneccl_bind_pt==2.3.100+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
export OCL_ICD_VENDORS=/etc/OpenCL/vendors
export CCL_ROOT=${CONDA_PREFIX}
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
After the sanity test, you will see that IPEX is installed successfully.

# Then install all the needed modules:
pip install transformers==4.42.0 datasets 
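
Once the sanity test passes, you can exercise one training step on the Arc GPU with a small model before moving to a bigger LLM. The sketch below is only an illustration; the model id and settings are placeholders, not part of the official guide:

# Minimal single-training-step sketch on the XPU; model id and settings are placeholders.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small placeholder model, just to exercise the XPU training path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("xpu").train()

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model, optimizer = ipex.optimize(model, optimizer=optimizer)  # apply IPEX optimizations

batch = tokenizer("Intel Arc can train small models too.", return_tensors="pt").to("xpu")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
print("one training step done, loss =", loss.item())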

 

 
