I followed the document below to run an Ollama model on the GPU using Intel IPEX-LLM:
https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md
However, I couldn't get any inference from the model. It fails with:
Error: llama runner process has terminated: exit status 0xc0000135
Can anyone help solve this issue?
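For context, `0xc0000135` is a Windows NTSTATUS code (STATUS_DLL_NOT_FOUND), which typically means the llama runner could not locate a required DLL, e.g. a oneAPI runtime library missing from PATH. A minimal sketch decoding the code (the lookup table here covers only this one value and is illustrative, not a full NTSTATUS catalog):

```python
# Map the raw exit status reported by Ollama to a Windows NTSTATUS name.
# 0xC0000135 = STATUS_DLL_NOT_FOUND: a DLL required by the process was not found.
NTSTATUS_NAMES = {
    0xC0000135: "STATUS_DLL_NOT_FOUND",
}

def describe_exit_status(code: int) -> str:
    """Return the NTSTATUS name for a 32-bit exit status, if known."""
    return NTSTATUS_NAMES.get(code & 0xFFFFFFFF, f"unknown (0x{code & 0xFFFFFFFF:08X})")

print(describe_exit_status(0xC0000135))  # STATUS_DLL_NOT_FOUND
```

If this is the cause, re-checking that the environment-setup step from the quickstart was run in the same shell as `ollama serve` is a reasonable first diagnostic.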
1 Solution
Please open an issue on the ipex-llm GitHub page. Here is the link: https://github.com/intel-analytics/ipex-llm/issues