Intel® oneAPI DPC++/C++ Compiler
Talk to fellow users of Intel® oneAPI DPC++/C++ Compiler and companion tools like Intel® oneAPI DPC++ Library, Intel® DPC++ Compatibility Tool, and Intel® Distribution for GDB*

Behaviour of multiple queues on a single device

shenuo
Employee

What is the execution order of multiple queues on the same device?

Let's say:

    sycl::gpu_selector gs;

    sycl::queue Q1(gs);
    sycl::queue Q2(gs);
    sycl::queue Q3(gs);

    Q1.submit(/*...*/);
    Q2.submit(/*...*/);
    Q3.submit(/*...*/);

 

Are the submitted kernels executed one by one, or is the order determined by the runtime, i.e., uncertain or even concurrent?

5 Replies
HemanthCH_Intel
Moderator

Hi,


Thanks for posting in Intel Communities.


When multiple queues are created for the same device (sharing the same context), the submitted work executes serially, one kernel after the other. When multiple contexts are created, each with its own queue, the work can execute concurrently.


Thanks & Regards,

Hemanth


shenuo
Employee

I'm sorry, I'm still a little confused.

Does that mean each device has a different context?

Can you give me an example illustrating how to create concurrent queues on the same device?

Thanks!

HemanthCH_Intel
Moderator

Hi,

 

You can refer to the sample vector-add program, in which multiple contexts and multiple queues are created so that work runs concurrently on a single device.

 

Thanks & Regards,

Hemanth.

HemanthCH_Intel
Moderator

Hi,


We haven't heard back from you. Could you please provide an update on your issue?


Thanks & Regards,

Hemanth


HemanthCH_Intel
Moderator

Hi,


We assume that your issue is resolved. If you need any additional information, please post a new question as this thread will no longer be monitored by Intel.


Thanks & Regards,

Hemanth

