Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Stable Diffusion OpenVINO : "BackendCompilerFailed: openvino_fx..."

Jaz
Beginner

Hello!

I would like to experiment with OpenVINO and Stable Diffusion, but I can't seem to get it to work. Someone else reported the same error, but their thread ended without a solution after the user stopped responding: Stable Diffusion - Deforum - Openvino - Intel Community. Generation fails on my iGPU, dGPU, and CPU whenever I enable the OpenVINO acceleration script in SD. However, if I leave the script disabled and run SD as if OpenVINO were not installed, it works properly on the dGPU. I followed this guide to install the forked version of SD: Installation on Intel Silicon · openvinotoolkit/stable-diffusion-webui Wiki · GitHub.

I really don't understand what the problem is, so any help would be greatly appreciated.

The error I am getting in the web UI is:

BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File "D:\\stable_diffusion\\iGPU\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'} While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) Original traceback: File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward hidden_states = self.norm1(hidden_states) Set torch._dynamo.config.verbose=True for more information You can suppress this exception and fall back to eager by setting: torch._dynamo.config.suppress_errors = True

This is the full terminal output, from starting the web UI to the error:

Version: 1.6.0
Commit hash: 44006297e03a07f28505d54d6ba5fd55e0c1292d
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from D:\stable_diffusion\iGPU\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\stable_diffusion\iGPU\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 4.6s (prepare environment: 0.1s, import torch: 1.5s, import gradio: 0.5s, setup paths: 0.4s, initialize shared: 0.1s, other imports: 0.3s, load scripts: 0.9s, create ui: 0.4s, gradio launch: 0.3s).
Applying attention optimization: InvokeAI... done.
Model loaded in 2.1s (load weights from disk: 0.5s, create model: 0.3s, apply weights to model: 1.2s).
{}
Loading weights [6ce0161689] from D:\stable_diffusion\iGPU\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
OpenVINO Script: created model from config : D:\stable_diffusion\iGPU\stable-diffusion-webui\configs\v1-inference.yaml
0%| | 0/20 [00:00<?, ?it/s][2024-01-09 21:47:09,920] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,198] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,222] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,244] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,313] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,402] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,422] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,526] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,548] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py <function Conv2d.forward at 0x00000224FEBC1360> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,638] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-01-09 21:47:10,669] torch._dynamo.symbolic_convert: [WARNING] D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x00000224FEC5BC70> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
list index out of range
Traceback (most recent call last):
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\scripts\openvino_accelerate.py", line 200, in openvino_fx
compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\scripts\openvino_accelerate.py", line 426, in openvino_compile_cached_model
om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
result = super().run_node(n)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
return submod(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
return originals.GroupNorm_forward(self, input)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
return F.group_norm(
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
return func(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]
0%| | 0/20 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(3uvsx5tt4qf7460)', 'bear', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000225183CE830>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'GPU.0', True, 'Euler a', True, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\scripts\openvino_accelerate.py", line 200, in openvino_fx
compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\scripts\openvino_accelerate.py", line 426, in openvino_compile_cached_model
om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
result = super().run_node(n)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
return submod(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
return originals.GroupNorm_forward(self, input)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
return F.group_norm(
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
return func(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 670, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\debug_utils.py", line 1055, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 107, in wrapper
return fn(model, inputs, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\scripts\openvino_accelerate.py", line 233, in openvino_fx
return compile_fx(subgraph, example_inputs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 415, in compile_fx
model_ = overrides.fuse_fx(model_, example_inputs_)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 96, in fuse_fx
gm = mkldnn_fuse_fx(gm, example_inputs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\mkldnn.py", line 509, in mkldnn_fuse_fx
ShapeProp(gm, fake_mode=fake_mode).propagate(*example_inputs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 185, in propagate
return super().run(*args)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 136, in run
self.env[node] = self.run_node(node)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 152, in run_node
raise RuntimeError(
RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File "D:\\stable_diffusion\\iGPU\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'}

While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
Original traceback:
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
hidden_states = self.norm1(hidden_states)


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\modules\scripts.py", line 601, in run
processed = script.run(p, *script_args)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\scripts\openvino_accelerate.py", line 1228, in run
processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\scripts\openvino_accelerate.py", line 979, in process_images_openvino
output = shared.sd_diffusers_model(
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 840, in __call__
noise_pred = self.unet(
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 924, in forward
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1066, in <graph break in forward>
sample, res_samples = downsample_block(
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1159, in forward
hidden_states = resnet(hidden_states, temb, scale=lora_scale)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 311, in transform
tracer.run()
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1726, in run
super().run()
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 576, in run
and self.step()
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 372, in wrapper
self.output.compile_subgraph(self, reason=reason)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 541, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 588, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 675, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': ' File "D:\\stable_diffusion\\iGPU\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n hidden_states = self.norm1(hidden_states)\n'}

While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
Original traceback:
File "D:\stable_diffusion\iGPU\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
hidden_states = self.norm1(hidden_states)


Set torch._dynamo.config.verbose=True for more information


You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
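
The two hints at the end of the traceback map to PyTorch Dynamo's debug switches. A minimal sketch of setting them before generation (a config fragment only; it surfaces or suppresses the error, it does not fix the underlying shape mismatch):

```python
import torch._dynamo

# Print full graph-capture and backend-compile diagnostics
torch._dynamo.config.verbose = True

# Fall back to eager (uncompiled) execution instead of raising
# BackendCompilerFailed -- this hides the failure rather than fixing it
torch._dynamo.config.suppress_errors = True
```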


---

I have tried uninstalling and reinstalling Python, Git, and Stable Diffusion. I have tried with caching turned on and off, with the OpenVINO script enabled on the CPU, dGPU, and iGPU, and with different SD safetensors models. I have also tried running both inside the venv created at runtime and without it.
 
I just can't seem to get this to work and any help would be wonderful, thank you!
Megat_Intel
Moderator

Hi Jaz,

Thank you for reaching out to us.

 

For your information, I reinstalled Stable Diffusion for the OpenVINO™ toolkit following the guide here. I was able to run txt2img with OpenVINO™ on my iGPU multiple times without encountering any errors.

 

OS: Windows 11

Python: 3.10.6

GPU: Intel(R) UHD Graphics

torch: 2.1.0+cpu

 

(Attachments: sd-cmd.png, sd-webui.png)

 

To investigate further, could you please provide us with the details below:

  • CPU:
  • iGPU:
  • dGPU:
  • Python Version:
  • torch:
  • Screenshot of the WebUI when encountering the error:

Regards,

Megat

Jaz
Beginner

Hello Megat!

Thank you for this reply.

Thanks to your post, I could quickly see what the problem is with my build. I have reinstalled the dependencies multiple times, and I can see that I had the wrong version of torch: 2.0.1+cu118. I am not sure whether the requirements on the GitHub download can be updated so this does not happen to others, but I think it would be helpful. I have installed torch 2.0.1+cpu and can now run the custom OpenVINO script.
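
For anyone hitting the same mismatch, the culprit is visible in the local-version suffix of `torch.__version__` (`+cpu` vs. `+cu118`). A small illustrative helper, assuming PyTorch's suffix convention (the function name is my own, not part of any library):

```python
def is_cpu_build(version: str) -> bool:
    """Return True for a CPU-only torch build, e.g. '2.0.1+cpu'.

    '2.0.1+cu118' is a CUDA 11.8 build and returns False.
    """
    _, _, local = version.partition("+")
    return local == "cpu"

# In the webui venv the actual value would come from:
#   import torch; print(torch.__version__)
print(is_cpu_build("2.0.1+cu118"))  # False -- the broken setup
print(is_cpu_build("2.0.1+cpu"))    # True  -- the working setup
```

The CPU build can usually be installed inside the venv with `pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cpu`.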


Megat_Intel
Moderator

Hi Jaz,

We're glad that you can run Stable Diffusion with OpenVINO™.

 

Thank you for your suggestion; we have informed the relevant team to update the torch version in the requirements. This thread will no longer be monitored since the issue has been resolved. If you need any additional information from Intel, please submit a new question.

Regards,

Megat

