Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Benchmark_app error

KW5
Beginner

Dear Sir or Madam,

Excuse me for my dumb question, as I'm new to the AI domain.

I'm running some simple benchmarks on a few LLM models on my company's hardware. I cloned the models (for example, mistral-7b-instruct-v0.1-int8-ov) from the OpenVINO page on Hugging Face (for example, https://huggingface.co/OpenVINO/mistral-7b-instruct-v0.1-int8-ov).

When I ran benchmark_app on it, it said I need to provide data shapes because "input_ids" is dynamic. The typical error message is as follows.

Exception: Input input_ids is dynamic. Provide data shapes!

I have read some docs online (mainly at https://docs.openvino.ai/) but still could not figure it out.

For an LLM model already in IR format, how can I figure out what data shapes I should provide? For a new LLM model on HF, I tried to convert it to IR format while specifying data shapes, but it always failed. Do you have a step-by-step example showing how to do it?

Your prompt response is appreciated! 

 

Kevin

Aznie_Intel
Moderator

Hi KW5,

 

Thanks for reaching out.

For LLM models, you can specify the shape values using the parameter -shape "input_ids[1,1],attention_mask[1,1],position_ids[1,1],beam_idx[1]".
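For example, the full command would look something like this (only a sketch; the model path below is a placeholder for wherever your IR files are stored):

benchmark_app -m mistral-7b-instruct-v0.1-int8-ov/openvino_model.xml -d GPU -shape "input_ids[1,1],attention_mask[1,1],position_ids[1,1],beam_idx[1]"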

[Attached screenshot: llm model.JPG (benchmark_app run on GPU with the -shape parameter)]

 

 

Regards,

Aznie

 

 

KW5
Beginner

Hi Aznie,

Many thanks for your reply!

Once I applied that input shape, I got the following error message. Could you advise what I should do? Thanks a lot!

 

[Step 7/11] Loading the model to the device
[ ERROR ] Exception from src/inference/src/cpp/core.cpp:104:
Exception from src/inference/src/dev/plugin.cpp:53:
Exception from src/plugins/intel_cpu/src/cpu_memory.cpp:410:
Can not create StaticMemory object. The memory desc is undefined


Traceback (most recent call last):
  File "/home/amd/openvino_env/lib/python3.12/site-packages/openvino/tools/benchmark/main.py", line 408, in main
    compiled_model = benchmark.core.compile_model(model, benchmark.device, device_config)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amd/openvino_env/lib/python3.12/site-packages/openvino/runtime/ie_api.py", line 543, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Exception from src/inference/src/cpp/core.cpp:104:
Exception from src/inference/src/dev/plugin.cpp:53:
Exception from src/plugins/intel_cpu/src/cpu_memory.cpp:410:
Can not create StaticMemory object. The memory desc is undefined

KW5
Beginner

Hi Aznie,

By the way, I just noticed from your screenshot that you ran the LLM on GPU. In my case, I tried to benchmark it on CPU and got that error ("Can not create StaticMemory object. The memory desc is undefined").

Thanks again!

 

Kevin

KW5
Beginner

Hi Aznie,

I searched online, and one post recommends using the llm_bench tool for LLM models instead of benchmark_app. I will try it and let you know. Thanks!
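If I understand that post correctly, the usage is roughly along these lines (the repository layout and options below are my guesses from the description, not something I have verified yet):

git clone https://github.com/openvinotoolkit/openvino.genai.git
cd openvino.genai/tools/llm_bench
pip install -r requirements.txt
python benchmark.py -m /path/to/mistral-7b-instruct-v0.1-int8-ov -d CPU -n 2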

 

-Kevin

Aznie_Intel
Moderator

Hi KW5,

 

How did you download the models and generate the IR files? I didn't observe any error when running on the CPU plugin. Below is the result when I ran on CPU:

[Attached screenshot: huggingfacce.JPG (benchmark_app result on CPU)]

 

 

Regards,

Aznie

 

KW5
Beginner

Hi Aznie,

Which model were you running in the example above?

I downloaded existing OpenVINO models from the https://huggingface.co/OpenVINO page on Hugging Face, using "git clone" to download them.

Could you show me your usual way to download and convert a model? Thanks!

-Kevin

Aznie_Intel
Moderator

Hi KW5,

 

I am using the optimum-cli command to download LLM models; however, you can also use git clone. I observed the same error when running the mistral-7b-instruct-v0.1-int8-ov model with the shapes parameter. I also tried llm_bench and encountered the error below:

 

[Attached screenshot: Screenshot 2024-11-06 152117.png (llm_bench error)]
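For reference, the optimum-cli flow I mentioned looks roughly like this (the model ID and output folder below are only illustrative, not the exact command I used):

optimum-cli export openvino --model mistralai/Mistral-7B-Instruct-v0.1 --weight-format int8 mistral-7b-instruct-v0.1-int8-ov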

 

I will check this with the developer and get back to you soon.

 

 

Regards,

Aznie

 

 

 

Witold_Intel
Employee

Hi Kevin,


Could you give me more details about your setup so I can pick the best machine for reproducing your issue?


  • OpenVINO package version (the commands sketched after this list can help collect these details)
  • compute runtime driver version, if you're using one
  • Ubuntu or other OS version
  • CPU architecture, e.g., Elkhart Lake
  • any other details that you consider important
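A few commands along these lines should cover most of it (assuming a pip-based OpenVINO install on Ubuntu):

python3 -c "from openvino.runtime import get_version; print(get_version())"   # OpenVINO package version
lsb_release -a                                                                # OS version
lscpu | grep -E "Architecture|Model name"                                     # CPU architecture and model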


Thank you in advance,


Witold_Intel
Employee

Hi Kevin,


Can I have your setup details please?


Witold_Intel
Employee

Hi Kevin,


Could you share your setup details with us, please?


Witold_Intel
Employee

Hi Kevin,


Please share your setup details with me. Otherwise, I won't be able to reproduce your case and will have to close it without a response in 3 business days.


Witold_Intel
Employee

This is a reminder that your setup details are needed to reproduce the issue.


KW5
Beginner

Hi Witold_Intel,

Excuse my late response; I was on vacation the last couple of days. My setup is nothing special, just a regular x86 server with Ubuntu 24.04.

Aznie (above) said he could reproduce the issue, and I'm waiting for his further feedback.

In the meantime, I can use the "llm_bench" tool to conduct some LLM-related testing, so I'm good for now.

I have run into some other questions; I will start a new thread since they are new topics.

Witold_Intel
Employee

Thank you for your response. Indeed, I can now see that Aznie has tried to reproduce the issue. In this case, I can open a Jira ticket with the OpenVINO developers to investigate further. Can you post the link to the other topic here for completeness?


Witold_Intel
Employee

Hi Kevin, did you open a new thread or can we continue to support you in the current one?


Witold_Intel
Employee

Hi Kevin, did you open a new thread or can we continue to support you in the current one?


Aznie_Intel
Moderator

Hi Kevin,


Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.



Regards,

Aznie

