Hi,
I am new to the FPGA AI Suite and would appreciate your help in better understanding it.
Referring to the attached Intel pipeline, specifically the path involving Quartus:
When Quartus is involved, is the OpenVINO runtime inference engine still required to run the application?
I assume that IP files are imported into Quartus. Do these files contain the model topology and weights needed to run the application, or is Quartus solely used to configure the FPGA hardware, with the inference handled by the OpenVINO runtime (via FPGA AI Suite)?
If I test the model using a deep learning framework and then use the FPGA AI Suite, how can I effectively collaborate with the FPGA developer?
I hope my questions are clear.
Best regards.
Hi,
- When Quartus is involved, is the OpenVINO runtime inference engine still required to run the application?
Quartus is only used for the FPGA compilation (building the bitstream that contains the FPGA AI Suite IP), while the OpenVINO runtime is what your application uses at runtime to run inference (see the sketch at the end of this reply).
- I assume that IP files are imported into Quartus. Do these files contain the model topology and weights needed to run the application, or is Quartus solely used to configure the FPGA hardware, with the inference handled by the OpenVINO runtime (via FPGA AI Suite)?
The IP contains the architecture of the NPU (the accelerator itself), not the model. The model topology and weights are loaded at runtime, and the IP can be configured at runtime to meet the model's requirements, as long as the IP was built with all the features the model needs.
- If I test the model using a deep learning framework and then use the FPGA AI Suite, how can I effectively collaborate with the FPGA developer?
It will depend on the model used and whether it can run on the FPGA. You may use the FPGA AI Suite to simulate and estimate the model that you plan to run before handing it off to the FPGA developer.
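As a rough sketch of what the runtime side can look like: the application talks only to the standard OpenVINO API, and the FPGA is just another target device. The file names, input shape, and the "HETERO:FPGA,CPU" device string below are placeholders/assumptions that depend on your model and your FPGA AI Suite setup.

```python
import numpy as np
from openvino.runtime import Core

# Load the OpenVINO IR produced from your framework model
# (file name is a placeholder; weights come from the matching .bin file).
core = Core()
model = core.read_model("my_model.xml")

# "HETERO:FPGA,CPU" asks OpenVINO to place supported layers on the FPGA AI Suite IP
# and fall back to the CPU for the rest. The exact device string depends on your setup.
compiled_model = core.compile_model(model, "HETERO:FPGA,CPU")

# Run one inference with dummy data shaped like the model's input (shape is an assumption).
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
output = compiled_model([input_tensor])[compiled_model.output(0)]
print(output.shape)
```

Note that Quartus never sees this code; it only produces the bitstream that the runtime targets.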