I am trying to use the intel-npu-acceleration-library to run local LLM inference. However, it seems that the library does not support parsing GGUF files?
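For context, a GGUF file can be identified by the 4-byte magic `GGUF` at the start of the file (per the GGUF specification). The snippet below is a minimal, standalone sketch (not part of intel-npu-acceleration-library) showing how a loader can tell a GGUF checkpoint apart from the Hugging Face-style checkpoints the library expects:

```python
import struct

# Per the GGUF spec, every GGUF file begins with these 4 magic bytes.
GGUF_MAGIC = b"GGUF"

def is_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Demo: write a tiny fake GGUF header (magic + little-endian uint32 version)
# and confirm the check recognizes it.
with open("demo.gguf", "wb") as f:
    f.write(GGUF_MAGIC + struct.pack("<I", 3))

print(is_gguf("demo.gguf"))  # True
```

A loader that only walks safetensors/PyTorch weight files would never see this magic, which is consistent with GGUF parsing simply not being implemented.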
1 Reply
Hi Martin_HZK,
As intel/intel-npu-acceleration-library (Intel® NPU Acceleration Library) is not supported here, could you please post your question to the Intel® Distribution of OpenVINO™ Toolkit - Intel Community forum
or to Issues · openvinotoolkit/openvino.
FYI, I also noticed that some developers try to use Ollama with OpenVINO, e.g. zhaohb/ollama_ov (a GenAI backend for Ollama that runs generative AI models using OpenVINO Runtime). You may try that.
Thanks
