Hi,
How can we run inference for Stable Diffusion and Llama 2 models at higher batch sizes, and how can we run these models on an Intel GPU?
2 Replies
Hi Shravanthi,
OpenVINO™ offers two main paths for Generative AI use cases:
- Using OpenVINO as a backend for Hugging Face frameworks (transformers, diffusers) through the Optimum Intel extension.
- Using OpenVINO native APIs (Python and C++) with custom pipeline code.
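As a sketch of the first path, the Optimum Intel extension can export a Hugging Face diffusers model to the OpenVINO format and target an Intel GPU, with batching controlled per request. The model ID, device string, and batch size below are illustrative assumptions for this thread, not values prescribed by the reply:

```python
# Hedged sketch: assumes optimum-intel is installed (pip install "optimum[openvino]").
# The model ID, device, and batch size are illustrative placeholders.

def build_sd_pipeline(model_id="runwayml/stable-diffusion-v1-5", device="GPU"):
    """Export a Stable Diffusion model to OpenVINO IR and target `device`."""
    from optimum.intel import OVStableDiffusionPipeline

    # export=True converts the Hugging Face weights to OpenVINO IR on the fly.
    pipe = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
    pipe.to(device)  # "GPU" selects the Intel GPU plugin; "CPU" is the default
    return pipe


def generate(pipe, prompt, batch_size=4):
    """Run a batched text-to-image request.

    num_images_per_prompt is one way to raise the effective batch size;
    passing a list of prompts is another.
    """
    return pipe(prompt, num_images_per_prompt=batch_size).images
```

Calling `build_sd_pipeline()` downloads and converts the model weights, so it is not invoked here; treat this as a starting point rather than a verified pipeline.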
For more information, you can refer to the Optimize and Deploy Generative AI Models guide.
There are also a few Jupyter notebook tutorials for OpenVINO™ on running Generative AI models:
- Create an LLM-powered Chatbot using OpenVINO
- Text-to-Image Generation with Stable Diffusion and OpenVINO™
- Stable Diffusion Text-to-Image Demo
- Stable Diffusion v2.1 using Optimum-Intel OpenVINO
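For the second path, a minimal sketch of the native OpenVINO Python API shows how a converted IR model can be compiled for an Intel GPU; the IR path is a placeholder and `openvino` is assumed to be installed:

```python
# Hedged sketch of the native OpenVINO API path (pip install openvino).
# The IR path passed by the caller is a placeholder, not a file from this thread.

def compile_for_gpu(ir_path, device="GPU"):
    """Load an OpenVINO IR model and compile it for the requested device."""
    import openvino as ov

    core = ov.Core()
    # core.available_devices lists the installed plugins, e.g. ["CPU", "GPU"];
    # "GPU" maps to Intel integrated or discrete graphics.
    model = core.read_model(ir_path)
    # For a larger static batch size, the model can be reshaped before
    # compilation with model.reshape(...) when its inputs allow it.
    return core.compile_model(model, device)
```

This only covers device selection and compilation; a full generative pipeline (tokenization, sampling loop) still has to be written around the compiled model, as the notebooks above demonstrate.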
Regards,
Peh
Hi Shravanthi,
This thread will no longer be monitored since we have provided a suggestion and answer. If you need any additional information from Intel, please submit a new question.
Regards,
Peh