Hi everyone,
I am an Intel Arc B580 GPU owner who has tried to run an LLM via llama.cpp's Vulkan backend on Windows 11.
I am not even going to comment on the fact that the llama.cpp Vulkan build doesn't work at all under WSL2, although that should be fixed as well. Granted, I didn't bother to compile the SYCL version, though I understand that's deprecated.
As I mention in this thread, I have the faint feeling that the GPU reports its own temperature in an odd and inconsistent manner, leading to llama.cpp crashing at around 65 degrees Celsius, for whatever reason.
I have tested it on the latest 32.0.101.8531 driver, and the previous version as well. No difference.
There's a GitHub thread here: https://github.com/ggml-org/llama.cpp/issues/18984