<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: oneAPI error while running ComfyUI in AI Tools from Intel</title>
    <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700749#M993</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp; Sorry for the late reply!&lt;/P&gt;&lt;P&gt;&amp;nbsp; No need to create the issue again. Let me look into this issue first.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Thank you!&lt;/P&gt;</description>
    <pubDate>Tue, 01 Jul 2025 01:59:58 GMT</pubDate>
    <dc:creator>Jianyu_Z_Intel</dc:creator>
    <dc:date>2025-07-01T01:59:58Z</dc:date>
    <item>
      <title>oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1699727#M985</link>
      <description>&lt;P&gt;I have no idea. Please help. I am not a programmer, nor do I have any technical knowledge of coding. I just install whatever is required to run ComfyUI with my B580. I am using Comfy CLI. I loaded the Wan2.1 14B workflow, and these errors always appear; the workflow then slows down dramatically (almost to a halt). Please see the log below.&lt;/P&gt;&lt;LI-CODE lang="bash"&gt;got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: xpu:0, offload device: cpu, dtype: torch.bfloat16
# 😺dzNodes: LayerStyle -&amp;gt; ImageScaleByAspectRatio V2 Processed 1 image(s).
Requested to load CLIPVisionModelProjection
loaded completely 10030.4796875 1208.09814453125 True
Requested to load WanTEModel
loaded completely 9.5367431640625e+25 10835.4765625 True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
model weight dtype torch.float8_e4m3fn, manual cast: torch.float16
model_type FLOW
Requested to load WanTEModel
loaded completely 0.0 10835.4765625 True
loaded completely 0.0 10835.4765625 True
Requested to load WanVAE
0 models unloaded.
loaded completely 0.0 242.02829551696777 True
onednn_verbose,v1,info,oneDNN v3.8.1 (commit df786faad216a0024da083786a5047af6014fe59)
onednn_verbose,v1,info,cpu,runtime:threadpool,nthr:6
onednn_verbose,v1,info,cpu,isa:Intel AVX2 with Intel DL Boost
onednn_verbose,v1,info,gpu,runtime:DPC++
onednn_verbose,v1,info,gpu,engine,sycl gpu device count:1
onednn_verbose,v1,info,gpu,engine,0,backend:Level Zero,name:Intel(R) Arc(TM) B580 Graphics,driver_version:1.6.33511,binary_kernels:enabled
onednn_verbose,v1,info,graph,backend,0:dnnl_backend
onednn_verbose,v1,primitive,info,template:operation,engine,primitive,implementation,prop_kind,memory_descriptors,attributes,auxiliary,problem_desc,exec_time
onednn_verbose,v1,graph,info,template:operation,engine,partition_id,partition_kind,op_names,data_formats,logical_tensors,fpmath_mode,implementation,backend,exec_time
onednn_verbose,v1,common,error,ocl,Error during the build of OpenCL program. Build log:
1:4014:1: error: no matching function for call to 'block2d_load'
DECLARE_2D_TILE_BLOCK2D_OPS(a_tile_type_dst, DST_DATA_T, SUBGROUP_SIZE,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1:3483:3: note: expanded from macro 'DECLARE_2D_TILE_BLOCK2D_OPS'
= block2d_load(ptr, m * e, n, ld * e, offset_r + ii * br, \
  ^~~~~~~~~~~~
1:3081:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(half, ushort, 8, 16, u16_m8k16v1, 16, 8)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
1:3082:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(half, ushort, 8, 16, u16_m4k32v1, 32, 4)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
1:3083:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(half, ushort, 16, 16, u16_m8k32v1, 32, 8)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
1:3084:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 8, 16, u16_m8k16v1, 16, 8)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
1:3085:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 8, 16, u16_m4k32v1, 32, 4)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
1:3086:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 16, 16, u16_m8k32v1, 32, 8)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
1:4014:1: error: no matching function for call to 'block2d_store'
DECLARE_2D_TILE_BLOCK2D_OPS(a_tile_type_dst, DST_DATA_T, SUBGROUP_SIZE,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1:3497:52: note: expanded from macro 'DECLARE_2D_TILE_BLOCK2D_OPS'
_Pragma("unroll") for (int ii = 0; ii &amp;lt; nbr; ii++) block2d_store( \
                                                   ^~~~~~~~~~~~~
1:3084:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private ushort8' (vector of 8 'ushort' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 8, 16, u16_m8k16v1, 16, 8)
^
1:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
1:3085:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private ushort8' (vector of 8 'ushort' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 8, 16, u16_m4k32v1, 32, 4)
^
1:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
1:3086:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private ushort16' (vector of 16 'ushort' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 16, 16, u16_m8k32v1, 32, 8)
^
1:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
1:3081:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private half8' (vector of 8 'half' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(half, ushort, 8, 16, u16_m8k16v1, 16, 8)
^
1:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
1:3082:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private half8' (vector of 8 'half' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(half, ushort, 8, 16, u16_m4k32v1, 32, 4)
^
1:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
1:3083:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private half16' (vector of 16 'half' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(half, ushort, 16, 16, u16_m8k32v1, 32, 8)
^
1:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
,src\gpu\intel\ocl\engine.cpp:166
onednn_verbose,v1,primitive,error,ocl,errcode -11,CL_BUILD_PROGRAM_FAILURE,src\gpu\intel\ocl\engine.cpp:270,src\gpu\intel\ocl\engine.cpp:270
onednn_verbose,v1,common,error,ocl,Error during the build of OpenCL program. Build log:
2:4014:1: error: no matching function for call to 'block2d_load'
DECLARE_2D_TILE_BLOCK2D_OPS(a_tile_type_dst, DST_DATA_T, SUBGROUP_SIZE,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2:3483:3: note: expanded from macro 'DECLARE_2D_TILE_BLOCK2D_OPS'
= block2d_load(ptr, m * e, n, ld * e, offset_r + ii * br, \
  ^~~~~~~~~~~~
2:3081:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(half, ushort, 8, 16, u16_m8k16v1, 16, 8)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
2:3082:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(half, ushort, 8, 16, u16_m4k32v1, 32, 4)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
2:3083:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(half, ushort, 16, 16, u16_m8k32v1, 32, 8)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
2:3084:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 8, 16, u16_m8k16v1, 16, 8)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
2:3085:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 8, 16, u16_m4k32v1, 32, 4)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
2:3086:1: note: candidate disabled: wrong #rows
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 16, 16, u16_m8k32v1, 32, 8)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2:3050:40: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) type##vl block2d_load(const global type *p, \
                                       ^
2:4014:1: error: no matching function for call to 'block2d_store'
DECLARE_2D_TILE_BLOCK2D_OPS(a_tile_type_dst, DST_DATA_T, SUBGROUP_SIZE,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2:3497:52: note: expanded from macro 'DECLARE_2D_TILE_BLOCK2D_OPS'
_Pragma("unroll") for (int ii = 0; ii &amp;lt; nbr; ii++) block2d_store( \
                                                   ^~~~~~~~~~~~~
2:3084:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private ushort8' (vector of 8 'ushort' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 8, 16, u16_m8k16v1, 16, 8)
^
2:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
2:3085:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private ushort8' (vector of 8 'ushort' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 8, 16, u16_m4k32v1, 32, 4)
^
2:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
2:3086:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private ushort16' (vector of 16 'ushort' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(ushort, ushort, 16, 16, u16_m8k32v1, 32, 8)
^
2:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
2:3081:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private half8' (vector of 8 'half' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(half, ushort, 8, 16, u16_m8k16v1, 16, 8)
^
2:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
2:3082:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private half8' (vector of 8 'half' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(half, ushort, 8, 16, u16_m4k32v1, 32, 4)
^
2:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
2:3083:1: note: candidate function not viable: no known conversion from '__private _e_a_tile_type_dst' (vector of 32 'ushort' values) to '__private half16' (vector of 16 'half' values) for 1st argument
DEF_BLOCK2D_LOAD_STORE(half, ushort, 16, 16, u16_m8k32v1, 32, 8)
^
2:3065:36: note: expanded from macro 'DEF_BLOCK2D_LOAD_STORE'
__attribute__((overloadable)) void block2d_store(type##vl v, \
                                   ^
,src\gpu\intel\ocl\engine.cpp:166
onednn_verbose,v1,primitive,error,ocl,errcode -11,CL_BUILD_PROGRAM_FAILURE,src\gpu\intel\ocl\engine.cpp:270,src\gpu\intel\ocl\engine.cpp:270&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 25 Jun 2025 07:45:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1699727#M985</guid>
      <dc:creator>Jackie999</dc:creator>
      <dc:date>2025-06-25T07:45:37Z</dc:date>
    </item>
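The oneDNN log above is the `ONEDNN_VERBOSE` comma-separated format; the lines that matter for this thread are the `engine` line (which names the backend, Level Zero vs. OpenCL) and the `error` lines. As a rough illustration of how to skim such a log, here is a small sketch; `summarize_onednn_log` is a hypothetical helper, not part of oneDNN or ComfyUI.

```python
# Hypothetical helper: skims ONEDNN_VERBOSE output like the log above and
# pulls out the GPU engine backend(s) plus a count of error lines. This is
# where the OpenCL-vs-Level Zero mismatch discussed later becomes visible.

def summarize_onednn_log(lines):
    """Return (gpu_backends, error_count) from onednn_verbose lines."""
    backends = []
    errors = 0
    for line in lines:
        if not line.startswith("onednn_verbose,"):
            continue
        fields = line.split(",")
        if "error" in fields:          # e.g. onednn_verbose,v1,common,error,ocl,...
            errors += 1
        for field in fields:
            if field.startswith("backend:"):   # e.g. backend:Level Zero
                backends.append(field.split(":", 1)[1])
    return backends, errors

# Shortened lines taken from the log in the post above:
sample = [
    "onednn_verbose,v1,info,gpu,runtime:DPC++",
    "onednn_verbose,v1,info,gpu,engine,0,backend:Level Zero,name:Intel(R) Arc(TM) B580 Graphics,driver_version:1.6.33511,binary_kernels:enabled",
    "onednn_verbose,v1,common,error,ocl,Error during the build of OpenCL program. Build log:",
]
print(summarize_onednn_log(sample))  # (['Level Zero'], 1)
```

This is only a log-reading aid; the actual field layout is described by the templates that oneDNN prints at the top of its verbose output.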
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700440#M986</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;SPAN&gt;Jackie999,&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Have you tried a smaller model, such as Wan2.1 1.3B? Does it work?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I will ask the team to help with this issue. If it is convenient, please also submit the issue to the PyTorch GitHub:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://github.com/pytorch/pytorch" target="_blank" rel="noopener"&gt;pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;thanks&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://docs.pytorch.org/docs/main/notes/get_start_xpu.html" target="_blank" rel="noopener"&gt;P.S Getting Started on Intel GPU — PyTorch main documentation&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jun 2025 01:42:32 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700440#M986</guid>
      <dc:creator>Ying_H_Intel</dc:creator>
      <dc:date>2025-06-30T01:42:32Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700471#M988</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have sent the required env output via email. Do you still need me to submit a bug report to pytorch github?&lt;/P&gt;&lt;P&gt;Kindly advise.&lt;/P&gt;&lt;P&gt;Jack&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jun 2025 03:35:26 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700471#M988</guid>
      <dc:creator>Jackie999</dc:creator>
      <dc:date>2025-06-30T03:35:26Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700474#M989</link>
      <description>&lt;P&gt;Both Wan2.1 1.3B and 14B models seem to work OK, but slow. I do not know if the error is relevant or contributes to the (kind of) slow inference process for my settings.&lt;/P&gt;&lt;P&gt;Other than that, the B580 is a great choice. I love it and will stay with it for a long while. Hope the software side of it (both the drivers and the env) will mature in no time.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you, Intel.&lt;/P&gt;&lt;P&gt;Jack&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jun 2025 03:39:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700474#M989</guid>
      <dc:creator>Jackie999</dc:creator>
      <dc:date>2025-06-30T03:39:58Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700475#M990</link>
      <description>&lt;P&gt;My startup command for ComfyUI (via oneAPI command line) is:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;G:\Comfy_CLI\main.py --disable-xformers --disable-cuda-malloc --preview-method auto --use-quad-cross-attention --normalvram --oneapi-device-selector opencl:gpu;level_zero:gpu&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;and the screen reads:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[START] Security scan&lt;BR /&gt;[DONE] Security scan&lt;BR /&gt;## ComfyUI-Manager: installing dependencies done.&lt;BR /&gt;** ComfyUI startup time: 2025-06-30 10:45:09.853&lt;BR /&gt;** Platform: Windows&lt;BR /&gt;** Python version: 3.10.16 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:19:12) [MSC v.1929 64 bit (AMD64)]&lt;BR /&gt;** Python executable: H:\pinokio\bin\miniconda\python.exe&lt;BR /&gt;** ComfyUI Path: G:\Comfy_CLI&lt;BR /&gt;** ComfyUI Base Folder Path: G:\Comfy_CLI&lt;BR /&gt;** User directory: G:\Comfy_CLI\user&lt;BR /&gt;** ComfyUI-Manager config path: G:\Comfy_CLI\user\default\ComfyUI-Manager\config.ini&lt;BR /&gt;** Log path: G:\Comfy_CLI\user\comfyui.log&lt;/P&gt;&lt;P&gt;Prestartup times for custom nodes:&lt;BR /&gt;0.0 seconds: G:\Comfy_CLI\custom_nodes\rgthree-comfy&lt;BR /&gt;0.0 seconds: G:\Comfy_CLI\custom_nodes\comfyui-easy-use&lt;BR /&gt;11.4 seconds: G:\Comfy_CLI\custom_nodes\ComfyUI-Manager&lt;/P&gt;&lt;P&gt;Set oneapi device selector to: opencl:gpu;level_zero:gpu&lt;BR /&gt;Checkpoint files will always be loaded safely.&lt;BR /&gt;Total VRAM 11874 MB, total RAM 65325 MB&lt;BR /&gt;pytorch version: 2.8.0.dev20250619+xpu&lt;BR /&gt;Set vram state to: NORMAL_VRAM&lt;BR /&gt;Device: xpu&lt;BR /&gt;Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention&lt;BR /&gt;Python version: 3.10.16 | packaged by Anaconda, Inc. 
| (main, Dec 11 2024, 16:19:12) [MSC v.1929 64 bit (AMD64)]&lt;BR /&gt;ComfyUI version: 0.3.42&lt;BR /&gt;ComfyUI frontend version: 1.23.4&lt;/P&gt;&lt;P&gt;...&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jun 2025 03:46:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700475#M990</guid>
      <dc:creator>Jackie999</dc:creator>
      <dc:date>2025-06-30T03:46:25Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700749#M993</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp; Sorry for the late reply!&lt;/P&gt;&lt;P&gt;&amp;nbsp; No need to create the issue again. Let me look into this issue first.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Thank you!&lt;/P&gt;</description>
      <pubDate>Tue, 01 Jul 2025 01:59:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700749#M993</guid>
      <dc:creator>Jianyu_Z_Intel</dc:creator>
      <dc:date>2025-07-01T01:59:58Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700812#M996</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp; I tried the Wan example on a B570 (driver 25.22.1502.2) on Windows 11.&lt;/P&gt;&lt;P&gt;&amp;nbsp; It passed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp; Note: you do not need to enable the oneAPI runtime in CMD, i.e. &lt;SPAN&gt;via the oneAPI command line&lt;/SPAN&gt;.&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp; PyTorch for XPU and IPEX install the oneAPI runtime during installation; there is no need for the user to install or enable it separately.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&amp;nbsp; I suspect this issue is because the external oneAPI runtime does not match the version that PyTorch needs.&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Here is the installation process:&lt;/P&gt;&lt;PRE&gt;python -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu --trusted-host download.pytorch.org&lt;BR /&gt;git clone &lt;A href="https://github.com/comfyanonymous/ComfyUI" target="_blank" rel="noopener"&gt;https://github.com/comfyanonymous/ComfyUI&lt;/A&gt;&lt;BR /&gt;cd ComfyUI&lt;BR /&gt;pip install -r requirements.txt&lt;BR /&gt;python main.py --disable-xformers --disable-cuda-malloc --preview-method auto --use-quad-cross-attention --normalvram --oneapi-device-selector opencl:gpu;level_zero:gpu&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I followed the Wan example:&amp;nbsp;&lt;A href="https://comfyanonymous.github.io/ComfyUI_examples/wan/" target="_blank" rel="noopener"&gt;https://comfyanonymous.github.io/ComfyUI_examples/wan/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Download the model files and save them to the "models" folder.&lt;/P&gt;&lt;P&gt;&amp;nbsp; Restart ComfyUI.&lt;/P&gt;&lt;P&gt;&amp;nbsp; Open the workflow JSON file&amp;nbsp;&lt;A href="https://comfyanonymous.github.io/ComfyUI_examples/wan/text_to_video_wan.json" target="_blank" rel="noopener"&gt;https://comfyanonymous.github.io/ComfyUI_examples/wan/text_to_video_wan.json&lt;/A&gt;&amp;nbsp;in the web GUI:&amp;nbsp;&lt;A href="http://127.0.0.1:8188" target="_blank" rel="noopener"&gt;http://127.0.0.1:8188&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After about 300+ seconds, I got the correct result.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="video_res.png" style="width: 503px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/67136i26879831642983DA/image-dimensions/503x395?v=v2&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" width="503" height="395" role="button" title="video_res.png" alt="video_res.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you check your installation steps?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you still have the issue, please share how your setup differs: installation steps, model, and hardware/driver info.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Tue, 01 Jul 2025 06:26:19 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700812#M996</guid>
      <dc:creator>Jianyu_Z_Intel</dc:creator>
      <dc:date>2025-07-01T06:26:19Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700973#M998</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I understand what you are saying. I do not have this problem anymore, since I do not use IPEX. I'd love to, but it is somehow still not stable.&lt;/P&gt;&lt;P&gt;I actually wanted to help report the issues so that the Intel team can look into and fix them.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Anyway, thanks for your info, and I hope the whole Intel team will be able to provide better software support for Arc graphics cards in the near future.&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I encounter further errors, I will post accordingly.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Cheers.&lt;/P&gt;</description>
      <pubDate>Wed, 02 Jul 2025 01:55:35 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700973#M998</guid>
      <dc:creator>Jackie999</dc:creator>
      <dc:date>2025-07-02T01:55:35Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700977#M999</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp; That's great!&lt;/P&gt;&lt;P&gt;&amp;nbsp; Thank you for your feedback!&lt;/P&gt;</description>
      <pubDate>Wed, 02 Jul 2025 02:06:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1700977#M999</guid>
      <dc:creator>Jianyu_Z_Intel</dc:creator>
      <dc:date>2025-07-02T02:06:34Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1701092#M1000</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Even though this is marked as solved, it actually isn't. Please refer to the attached file (wanvideo_T2V_example_02_wanwrapper). When using Kijai's WanVideo wrapper nodes, the error still occurs, as attached. The workflow comes directly from &lt;A href="https://github.com/kijai/ComfyUI-WanVideoWrapper" target="_blank"&gt;https://github.com/kijai/ComfyUI-WanVideoWrapper&lt;/A&gt;, with no modifications (nothing added; I just deleted some unused nodes). When I tested, I found that the "WanVideo Decode" node caused the problem for me. Once I replaced it with a native VAE decoder, it ran fine.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am posting this in case your team or any other team wants to take a look, as I believe these errors should not occur. If I use the native nodes, however, I have no problem at all. It would be great if you could identify the problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Anyway, thanks for your support.&lt;/P&gt;&lt;P&gt;Jack&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 02 Jul 2025 12:15:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1701092#M1000</guid>
      <dc:creator>Jackie999</dc:creator>
      <dc:date>2025-07-02T12:15:00Z</dc:date>
    </item>
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1701225#M1002</link>
      <description>&lt;P&gt;Have you installed PyTorch?&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you share the Python packages in your runtime? For example:&lt;/P&gt;&lt;PRE&gt;pip list&lt;/PRE&gt;&lt;P&gt;Judging by the oneDNN log, it looks like you are using the OpenCL device instead of the SYCL device.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you check with the following method?&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;In CMD&amp;nbsp;&lt;SPAN class=""&gt;via the oneAPI command line, run "sycl-ls".&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class=""&gt;You could also try the following command for the SYCL GPU only:&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;python main.py --disable-xformers --disable-cuda-malloc --preview-method auto --use-quad-cross-attention --normalvram --oneapi-device-selector level_zero:gpu&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class=""&gt;Thank you!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 03 Jul 2025 01:58:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1701225#M1002</guid>
      <dc:creator>Jianyu_Z_Intel</dc:creator>
      <dc:date>2025-07-03T01:58:51Z</dc:date>
    </item>
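The `--oneapi-device-selector` value suggested above follows the `ONEAPI_DEVICE_SELECTOR` syntax: semicolon-separated `backend:device` terms. To make the difference between `opencl:gpu;level_zero:gpu` and `level_zero:gpu` concrete, here is a small sketch; `parse_device_selector` is a hypothetical illustration, not part of ComfyUI or the oneAPI runtime, and it handles only the simple positive-filter form used in this thread.

```python
# Hypothetical helper: splits an ONEAPI_DEVICE_SELECTOR-style value such as
# "opencl:gpu;level_zero:gpu" into (backend, devices) pairs. With both terms
# present, both runtimes expose the GPU; "level_zero:gpu" alone restricts
# the application to the Level Zero path.

def parse_device_selector(value):
    """Parse 'backend:dev1,dev2;backend:dev' into a list of pairs."""
    pairs = []
    for term in value.split(";"):
        backend, _, devices = term.partition(":")
        pairs.append((backend.strip(), [d.strip() for d in devices.split(",")]))
    return pairs

print(parse_device_selector("opencl:gpu;level_zero:gpu"))
# [('opencl', ['gpu']), ('level_zero', ['gpu'])]
print(parse_device_selector("level_zero:gpu"))
# [('level_zero', ['gpu'])]
```

The real selector grammar also supports discard terms (`!backend:dev`) and sub-device indices, which this sketch ignores.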
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1701233#M1003</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Thanks for the follow-up.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here is my&amp;nbsp;&lt;STRONG&gt;&lt;SPAN class=""&gt;sycl-ls&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN class=""&gt; output (in the ComfyUI env):&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;(k:\comfyui_venv_XPU)&lt;/STRONG&gt; C:\Program Files (x86)\Intel\oneAPI&amp;gt;&lt;STRONG&gt;sycl-ls&lt;/STRONG&gt;&lt;BR /&gt;[level_zero:gpu][level_zero:0] Intel(R) oneAPI Unified Runtime over Level-Zero, Intel(R) Arc(TM) B580 Graphics 20.1.0 [1.6.33890]&lt;BR /&gt;[opencl:cpu][opencl:0] Intel(R) OpenCL, 12th Gen Intel(R) Core(TM) i5-12400F OpenCL 3.0 (Build 0) [2025.19.4.0.18_160000.xmain-hotfix]&lt;BR /&gt;[opencl:gpu][opencl:1] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) B580 Graphics OpenCL 3.0 NEO [32.0.101.6913]&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;(k:\comfyui_venv_XPU)&lt;/STRONG&gt; C:\Program Files (x86)\Intel\oneAPI&amp;gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After I removed the &lt;STRONG&gt;opencl:gpu&lt;/STRONG&gt; arg, the error has not appeared again (yet).&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If it comes back (which I believe it will not), I will ask for further support. Thanks so much for your help.&lt;/P&gt;&lt;P&gt;Jack.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;***** (RESOLVED, thanks to team) *****&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 03 Jul 2025 02:39:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1701233#M1003</guid>
      <dc:creator>Jackie999</dc:creator>
      <dc:date>2025-07-03T02:39:39Z</dc:date>
    </item>
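Each line of the `sycl-ls` output in the post above starts with a `[backend:device_type]` tag, which is how one can confirm that the GPU was exposed through both `opencl:gpu` and `level_zero:gpu`. As a small reading aid, here is a sketch; `syclls_backends` is a hypothetical helper, not a real oneAPI tool.

```python
import re

# Hypothetical helper: pulls the leading "[backend:type]" tag from each
# sycl-ls output line, e.g. to confirm which runtimes expose which devices.

def syclls_backends(lines):
    tags = []
    for line in lines:
        m = re.match(r"\[(\w+):(\w+)\]", line)
        if m:
            tags.append((m.group(1), m.group(2)))
    return tags

# Shortened lines taken from the sycl-ls output in the post above:
sample = [
    "[level_zero:gpu][level_zero:0] Intel(R) oneAPI Unified Runtime over Level-Zero, Intel(R) Arc(TM) B580 Graphics 20.1.0 [1.6.33890]",
    "[opencl:cpu][opencl:0] Intel(R) OpenCL, 12th Gen Intel(R) Core(TM) i5-12400F OpenCL 3.0",
    "[opencl:gpu][opencl:1] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) B580 Graphics OpenCL 3.0 NEO",
]
print(syclls_backends(sample))
# [('level_zero', 'gpu'), ('opencl', 'cpu'), ('opencl', 'gpu')]
```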
    <item>
      <title>Re: oneAPI error while running ComfyUI</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1701242#M1004</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp; That's good news!&lt;/P&gt;&lt;P&gt;&amp;nbsp; The&amp;nbsp;&lt;STRONG&gt;opencl:gpu&lt;/STRONG&gt; parameter is the root cause.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Intel GPUs support both the OpenCL and Level Zero runtimes.&lt;/P&gt;&lt;P&gt;&amp;nbsp; PyTorch focuses on Level Zero for optimization; OpenVINO focuses on OpenCL.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; In this case, because of the opencl:gpu parameter, PyTorch had to run on the OpenCL code path.&lt;/P&gt;&lt;P&gt;&amp;nbsp; But some kernel implementations are not supported by the oneDNN OpenCL code, hence the errors in the log above.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Thank you for your cooperation!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 03 Jul 2025 03:15:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/oneAPI-error-while-running-ComfyUI/m-p/1701242#M1004</guid>
      <dc:creator>Jianyu_Z_Intel</dc:creator>
      <dc:date>2025-07-03T03:15:40Z</dc:date>
    </item>
  </channel>
</rss>

