When a function_node with the queueing policy receives input faster than it can process it, the next job it executes can come directly from the input even when its internal buffer is not empty.
For example, the attached code processes a list of consecutive numbers in a simple TBB graph:
tbb graph: source -> limiter -> func1 -> terminal
where terminal is a serial function_node with the queueing policy.
Result: the terminal node processes input out of order when multiple TBB threads are assigned.
Say:
- the terminal node is processing input 1
- func1 pushes input 2; 2 is now in the terminal node's internal buffer
- func1 pushes input 3, and at the same time the terminal node finishes processing input 1
- the next job the terminal node processes can be either 2 or 3
Since the function node has the queueing policy, messages are always pushed rather than pulled. So when a job finishes, how does the function node decide where to get the next one? It is not obvious to me from the source code: https://github.com/oneapi-src/oneTBB/blob/2019_U8/include/tbb/internal/_flow_graph_node_impl.h#L250
Questions:
- Is this a bug or expected behaviour?
- If it is expected, does that mean I have to use a sequencer_node to guarantee the order?
Sample Code Result
```
➜ bin ✗ ./functionNodeTester
terminal node actual: 1713, expected:1712
terminal node actual: 1714, expected:1713
terminal node actual: 1712, expected:1714
```
Sample Code
```cpp
#include <iostream>
#include "tbb/flow_graph.h"
#include "tbb/task_scheduler_init.h"

using namespace tbb::flow;

struct TerminalNode_t {
    continue_msg operator()(int v) {
        if (v != counter)
            std::cout << "terminal node actual: " << v
                      << ", expected:" << counter << std::endl;
        counter++;
        return continue_msg();
    }
private:
    int counter = 0;
};

static int const THRESHOLD = 3;
static int const CYCLES = 10000;

int main() {
    int count = 0;
    tbb::task_scheduler_init init(3);
    graph g;
    source_node<int> input(g, [&count](int& output) -> bool {
        if (count < CYCLES) {
            output = count;
            count++;
            return true;
        }
        return false;
    });
    limiter_node<int> l(g, THRESHOLD);
    function_node<int, int> func1(g, serial, [](const int& val) { return val; });
    function_node<int, continue_msg> terminal(g, serial, TerminalNode_t());
    make_edge(l, func1);
    make_edge(func1, terminal);
    make_edge(terminal, l.decrement);
    make_edge(input, l);
    g.wait_for_all();
    return 0;
}
```
Any feedback, Intel?
Is this a bug that will be fixed, or are we forced to use sequencer nodes throughout our pipeline when we need to guarantee serial processing order in part of it?
Have you tried constructing the source node as inactive and then activating it after connecting the edges?
I apologize for the long delay. I have asked the engineering team for help with your question.
This bug still exists. Any updates?
-----
I found this issue: https://github.com/oneapi-src/oneTBB/issues/289. Nevermind.