Hello FIKI
Thank you for posting on the Intel Communities. With the rapid pace of development in the AI space, we know it is important to be able to run language models on your system for learning and productivity.
Our Arc A-Series GPUs include specialized XMX engines to accelerate AI inference and training, so you should be able to run a language model on your A770. However, you will need to verify compatibility with the language model's developer, and for specific instructions on how to run the model you will need to check their documentation.
If you are already trying to run a specific language model and are facing a specific issue, please share more details with us and we can take a look.
Best Regards,
Hugo O.
Intel Customer Support Technician.
Hello Hugo!
Thank you for your reply!!
At present, the most popular open-source large language models are Llama, ChatGLM, and others. These models are built on the PyTorch framework, but stock PyTorch does not support Arc series graphics cards, and porting them to Intel's frameworks is too difficult for ordinary consumers. Could Intel officially provide some adaptations for this?
As far as I know, consumer demand is currently focused mainly on deploying AI image generation and AI language models. The A770 offers excellent value for money, and with the advantage of its large 16 GB of video memory it is well suited to running large models. If Intel started a project to make it easy for consumers with Arc graphics cards to deploy and use these large models, I think it would greatly increase the visibility and sales of the Arc series!
Best wishes!
FIKI.
Hello FIKI
Both of those models, Llama and ChatGLM, are supported through OpenVINO, as documented at the following link:
https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/whats-new.html
Also, PyTorch is supported on our Intel Arc GPUs through OpenVINO and the official Intel® Extension for PyTorch.
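As a rough illustration of what using the Intel® Extension for PyTorch looks like, here is a minimal sketch. It assumes the XPU build of the extension is installed and an Arc GPU with current drivers is present; the tiny `torch.nn.Linear` model is purely illustrative.

```python
# Minimal sketch (untested here): running a PyTorch model on an Arc GPU
# through the Intel Extension for PyTorch. Requires the xpu build of the
# extension and working Arc drivers; the model below is just a placeholder.
import torch
import intel_extension_for_pytorch as ipex  # exposes the "xpu" device

model = torch.nn.Linear(128, 64).eval()
model = model.to("xpu")        # Arc GPUs appear as the "xpu" device
model = ipex.optimize(model)   # apply extension optimizations

x = torch.randn(1, 128, device="xpu")
with torch.no_grad():
    y = model(x)
print(y.shape)
```

The key difference from stock PyTorch is the `"xpu"` device string and the `ipex.optimize` call; the rest of the model code stays ordinary PyTorch.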
Another example of generative AI that runs on Intel Arc is Stable Diffusion; check the following article:
Does Stable Diffusion Work on an Intel® Arc™ GPU?
You would need OpenVINO in order to run an LLM. OpenVINO is outside our scope of support, but feel free to open a new topic on the official OpenVINO forums, or in the developer forums, to get more information.
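For reference, one common way to run such a model through OpenVINO is via the Optimum Intel integration for Hugging Face models. The following is a hedged sketch only: the model ID is an example (some Llama checkpoints require access approval), and it assumes `optimum-intel`, `openvino`, and `transformers` are installed with an Arc GPU available as the OpenVINO `"GPU"` device.

```python
# Hedged sketch (untested here): loading a causal LM through OpenVINO via
# Optimum Intel and running it on the Arc GPU. The model ID is illustrative.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # example only; may need access
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
model.to("GPU")  # OpenVINO device name for the Arc GPU

inputs = tokenizer("What does an XMX engine do?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For details on supported models and device configuration, please refer to the OpenVINO and Optimum Intel documentation rather than this thread.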
Best Regards,
Hugo O.
Intel Customer Support Technician.
Hello FIKI
I wanted to check whether you have further questions about AI language model support on Arc graphics. As explained in our previous post, most language models should run through OpenVINO. Please let us know if you have any other concerns related to this topic.
Best Regards,
Hugo O.
Intel Customer Support Technician.
Hello FIKI
I see there are no further questions related to this topic. I hope the information we provided in this thread about some of the language models supported on our products was useful to you. We will be closing this thread; however, if you need further assistance, feel free to open a new topic.
Best Regards,
Hugo O.
Intel Customer Support Technician.