Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

Feedback about my code

morle
Beginner
Hi,
I have implemented (tried to, at least) a concurrent list as an exercise with mutexes.
I would appreciate your feedback on it: whether I used the mutexes correctly, whether they would be better placed elsewhere, whether I forgot a mutex somewhere, and any other advice you are willing to provide.
I would also like your help with two topics.
The first is the list destructor. I couldn't get it to work (I left the last thing I tried as commented code).
The second is the InsertBefore function, which inserts a new node before an existing one (since I haven't implemented any iterators for the list, I search by node value for the place where the new node will be inserted). It works the first time I call it in main, but the second call segfaults (that call is commented out in main).
Thanks!
[cpp]#include <iostream>
#include <string>
#include <limits>
#include "tbb/atomic.h"
#include "tbb/blocked_range.h"
#include "tbb/parallel_for.h"
#include "tbb/partitioner.h"
#include "tbb/queuing_mutex.h"
#include "tbb/spin_rw_mutex.h"

typedef tbb::spin_rw_mutex ListMutexType;
typedef tbb::spin_rw_mutex NodeMutexType;
typedef tbb::queuing_mutex IOMutexType;

class EmptyListException : public std::exception {
private:
    std::string M;
public:
    EmptyListException(void) : std::exception() { M = "Empty List"; }
    virtual ~EmptyListException(void) throw() {}
    const std::string& What(void) const { return M; }
};

template <typename T>
class Node {
    template <typename U> friend class List;
private:
    T value;
    Node* left;
    Node* right;
    mutable NodeMutexType NodeMutex;
public:
    static const T max_val;
public:
    Node(void) : value(max_val), left(0), right(0) { NodeMutex = NodeMutexType(); }
    Node(const T& v) : value(v), left(0), right(0) { NodeMutex = NodeMutexType(); }
    Node(const T& v, Node* l, Node* r) : value(v), left(l), right(r) { NodeMutex = NodeMutexType(); }
    ~Node(void) {
        NodeMutexType::scoped_lock lock( NodeMutex, true );
        right = 0;
        left = 0;
        delete right;
        delete left;
    }
    int& Value(void) {
        NodeMutexType::scoped_lock lock( NodeMutex, false );
        return value;
    }
    int Value(void) const {
        NodeMutexType::scoped_lock lock( NodeMutex, false );
        return value;
    }
};

template <typename T>
const T Node<T>::max_val = std::numeric_limits<T>::max();

template <typename T>
class List {
private:
    Node<T>* head;
    Node<T>* tail;
    tbb::atomic<unsigned int> size;
    mutable ListMutexType ListMutex;
public:
    List(void) : size() {
        head = new Node<T>();
        tail = new Node<T>();
        head->right = tail;
        tail->left = head;
        ListMutex = ListMutexType();
    }
//    ~List(void) { // THIS IS BEYOND ME
//        ListMutexType::scoped_lock lock( ListMutex, true );
//        if( Empty() ) {
//            delete head;
//            delete tail;
//        }
//        else {
//            Node<T>* aux = head;
//            Node<T>* next = head->right;
//            while( next != tail ) {
//                delete aux;
//                --size;
//                aux = next;
//                ++next;
//            }
//            delete next;
//            delete aux;
//        }
//    }
    bool Empty(void) const {
        ListMutexType::scoped_lock lock( ListMutex, false );
        if( head->right == tail ) return true;
        else return false;
    }
    const tbb::atomic<unsigned int>& Size(void) const { return size; }
    const T& InsertFirst( Node<T>* node ) {
        Node<T>* aux(0);
        {
            NodeMutexType::scoped_lock lockhead( head->NodeMutex, true );
            aux = head->right;
            head->right = node;
            node->left = head;
        } // end lockhead
        {
            NodeMutexType::scoped_lock locknext( aux->NodeMutex, true );
            node->right = aux;
            aux->left = node;
        }
        ++size;
        aux = 0;
        delete aux;
        return node->value;
    }
    const T& InsertBefore( Node<T>* node, const T& val ) {
        if( Empty() ) {
            {
                NodeMutexType::scoped_lock lockhead( head->NodeMutex, true );
                head->right = node;
                node->left = head;
            } // end lockhead
            {
                NodeMutexType::scoped_lock locktail( tail->NodeMutex, true );
                node->right = tail;
                tail->left = node;
            } // end locktail
        }
        else {
            Node<T>* it(0);
            ListMutexType::scoped_lock locklist;
            locklist.acquire( ListMutex, false );
            it = head;
            while( it->value != val && it != tail ) {
                ++it;
            }
            locklist.release();
            Node<T>* aux(0);
            if( it == tail ) {
                aux = tail->left;
                {
                    NodeMutexType::scoped_lock locktail( tail->NodeMutex, true );
                    tail->left = node;
                    node->right = tail;
                }
                {
                    NodeMutexType::scoped_lock lockaux( aux->NodeMutex, true );
                    aux->right = node;
                    node->left = aux;
                }
            }
            else {
                aux = it->left;
                {
                    NodeMutexType::scoped_lock lockit( it->NodeMutex, true );
                    it->left = node;
                    node->right = it;
                }
                {
                    NodeMutexType::scoped_lock lockaux( aux->NodeMutex, true );
                    aux->right = node;
                    node->left = aux;
                }
            } // end else
            it = 0;
            delete it;
            aux = 0;
            delete aux;
        } // end outer else
        ++size;
        return node->value;
    }
    int RemoveLast(void) {
        if( Empty() ) throw EmptyListException();
        T toret( Node<T>::max_val );
        Node<T>* aux(0);
        {
            NodeMutexType::scoped_lock lockend( tail->NodeMutex, true );
            aux = tail->left;
            {
                NodeMutexType::scoped_lock lockprevious( aux->NodeMutex, true );
                toret = aux->value;
                aux->left->right = aux->right;
                tail->left = aux->left;
            }
        }
        delete aux;
        --size;
        return toret;
    }
};

template <typename T>
class ConcurrentBody {
private:
    List<T>* myl;
    static IOMutexType IOMutex;
public:
    ConcurrentBody( List<T>* l ) : myl(l) {}
    ~ConcurrentBody(void) { myl = 0; delete myl; }
    void operator()( const tbb::blocked_range<unsigned int>& r ) const {
        unsigned int val(0);
        for( unsigned int i(r.begin()); i != r.end(); ++i ) {
            if( i % 2 == 0 ) {
                val = myl->InsertFirst( new Node<T>(i) );
                IOMutexType::scoped_lock lock( IOMutex );
                std::cout << "Inserted " << val << "\n";
            }
            else {
                val = myl->RemoveLast();
                IOMutexType::scoped_lock lock( IOMutex );
                std::cout << "Removed " << val << "\n";
            }
        }
    }
};

template <typename T>
IOMutexType ConcurrentBody<T>::IOMutex = IOMutexType();

int main(int argc, char* argv[]) {
    List<int> l;
    std::cout << l.Empty() << std::endl;
    try {
        std::cout << "Inserted " << l.InsertFirst( new Node<int>(7) ) << "\n";
        std::cout << "Is empty: " << l.Empty() << " & size: " << l.Size() << std::endl;
        std::cout << "Inserted before 7: " << l.InsertBefore( new Node<int>(99), 7 ) << " & size: " << l.Size() << std::endl; // THIS GOES WELL
//        std::cout << "Inserted before 99: " << l.InsertBefore( new Node<int>(100), 99 ) << " & size: " << l.Size() << std::endl; // THIS SEGFAULTS
        tbb::parallel_for( tbb::blocked_range<unsigned int>(0, 8), ConcurrentBody<int>(&l), tbb::auto_partitioner() );
        std::cout << "Is empty: " << l.Empty() << " & size: " << l.Size() << std::endl;
    }
    catch( const EmptyListException& e ) {
        std::cerr << "This happened: " << e.What() << "\n";
    }
    catch( const std::exception& e ) {
        std::cerr << "Oops: " << e.what() << std::endl;
    }
    catch( ... ) {
        std::cerr << "Unexpected\n";
    }
    return 0;
}[/cpp]
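[Editorial note] The two problems described above both come from treating node pointers like array iterators (`++it` advances the pointer in memory instead of following the `right` link) and from touching a node after deleting it. Below is a minimal single-threaded sketch of the two fixes, using plain standard C++ (the TBB mutexes are deliberately omitted so only the pointer logic is shown); the class and member names mirror the post but are illustrative only:

```cpp
#include <cassert>

// Minimal single-threaded sketch (no mutexes) of the two fixes:
// 1) traversal must advance with `it = it->right`, never `++it`;
// 2) the destructor must save the next pointer before deleting a node.
template <typename T>
struct Node {
    T value;
    Node* left;
    Node* right;
    explicit Node(const T& v) : value(v), left(nullptr), right(nullptr) {}
};

template <typename T>
class List {
public:
    List() : head(new Node<T>(T())), tail(new Node<T>(T())) {
        head->right = tail;
        tail->left = head;
    }
    ~List() {
        // Iterative teardown: copy the link out of the node, then delete it.
        Node<T>* it = head;
        while (it != nullptr) {
            Node<T>* next = it->right;  // saved BEFORE the delete
            delete it;
            it = next;
        }
    }
    void InsertFirst(const T& v) {
        Node<T>* node = new Node<T>(v);
        node->left = head;
        node->right = head->right;
        head->right->left = node;
        head->right = node;
    }
    // Search follows the links; `++it` would step through raw memory
    // and is exactly the kind of bug that segfaults on the second call.
    bool Contains(const T& v) const {
        for (Node<T>* it = head->right; it != tail; it = it->right)
            if (it->value == v) return true;
        return false;
    }
private:
    Node<T>* head;
    Node<T>* tail;
};
```

The same `it = it->right` pattern applies inside `InsertBefore`'s search loop and to the `next` pointer in the commented-out destructor.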
SergeyKostrov
Valued Contributor II
...
- In cases when lots of elements are inserted into the linked list, you can drop the memory on exit
of the application by setting:
...
lnkList.m_bSkipNodesRelease = RTtrue;
...


The screenshot demonstrates three "abrupt" exits of the test application at the end of the tests:



Note: If a memory-leak detection subsystem is used, then in a Debug configuration all memory
that wasn't released will be reported as memory leaks. When debugging with a large linked
list, I would recommend setting the 'm_bSkipNodesRelease' attribute to 'false'.
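[Editorial note] Sergey's list class is not shown in this thread, so the following is only a hypothetical sketch of the `m_bSkipNodesRelease` idea: when the flag is true, the destructor skips per-node deallocation and lets process teardown reclaim the memory, which speeds up exit for very large lists at the cost of leak-detector noise. Names such as `IntList` and `AddHead` are illustrative assumptions:

```cpp
#include <cassert>

struct IntNode {
    int value;
    IntNode* next;
};

// Hypothetical singly linked list with a "skip release on destruction" flag.
class IntList {
public:
    bool m_bSkipNodesRelease = false;
    IntList() : m_pHead(nullptr), m_iCount(0) {}
    ~IntList() {
        if (m_bSkipNodesRelease)
            return;  // fast exit: the OS reclaims the heap at process teardown
        IntNode* p = m_pHead;
        while (p != nullptr) {
            IntNode* next = p->next;  // save the link before deleting
            delete p;
            p = next;
        }
    }
    void AddHead(int v) {
        m_pHead = new IntNode{v, m_pHead};
        ++m_iCount;
    }
    int Count() const { return m_iCount; }
private:
    IntNode* m_pHead;
    int m_iCount;
};
```

Under a leak detector, every node still alive when such a destructor returns early is reported as a leak, which matches the Debug-configuration caveat above.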

SergeyKostrov
Valued Contributor II

Here are some issues detected in the Test-Case ( already updated in Post #14 ):

>> An update for a 'Step 04' <<

...
// Step 04
lnkList.AddHead( 555 ); // Added
lnkList.AddTail( 777 ); // Added
pHead = lnkList.GetHead();
iValHead = lnkList.GetHeadValue();
pTail = lnkList.GetTail();
iValTail = lnkList.GetTailValue();
lnkList.DeleteAll();
...

>> An update for a 'Step 09' <<

...
// Step 09
lnkList.DeleteAll();

lnkList.InsertBefore( RTnull, 555 );
lnkList.InsertAfter( RTnull, 777 ); // Corrected
...

morle
Beginner
Hi!!

Thanks everybody for your help and comments!

Sergey, I'm speechless.

I will get into it right now.

Thanks!
jimdempseyatthecove
Honored Contributor III
Sergey,

The original code was for a multi-threaded doubly linked list. The code you provided is single-threaded (not multi-thread safe).

Jim Dempsey
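[Editorial note] Jim's point is that the per-node locking the OP attempted has a well-known disciplined form, hand-over-hand locking (lock coupling): a thread always holds the lock on the current node before acquiring the next one, so a concurrent insert or remove cannot slip in between. A minimal sketch, using `std::mutex` in place of the TBB mutexes so it stays standard C++ (names are illustrative):

```cpp
#include <mutex>

struct HNode {
    int value;
    HNode* next;
    std::mutex m;  // one lock per node, as in the OP's design
    HNode(int v, HNode* n) : value(v), next(n) {}
};

class HandOverHandList {
public:
    HandOverHandList() : head(new HNode(0, nullptr)) {}  // sentinel head
    ~HandOverHandList() {
        for (HNode* p = head; p != nullptr; ) {
            HNode* next = p->next;
            delete p;
            p = next;
        }
    }
    // Insert at the front; only the sentinel's lock is needed.
    void PushFront(int v) {
        std::lock_guard<std::mutex> lk(head->m);
        head->next = new HNode(v, head->next);
    }
    // Lock-coupled search: acquire the next node's lock BEFORE
    // releasing the current one, so the link being followed is
    // never modified out from under the traversal.
    bool Contains(int v) {
        head->m.lock();
        HNode* prev = head;
        HNode* cur = head->next;
        while (cur != nullptr) {
            cur->m.lock();
            prev->m.unlock();
            if (cur->value == v) {
                cur->m.unlock();
                return true;
            }
            prev = cur;
            cur = cur->next;
        }
        prev->m.unlock();
        return false;
    }
private:
    HNode* head;
};
```

The OP's `InsertBefore` releases the list lock before re-acquiring node locks, which opens exactly the window that lock coupling closes.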
SergeyKostrov
Valued Contributor II
The original code was for a multi-threaded doubly linked list. The code you provided is single-threaded (not multi-thread safe).

Jim Dempsey


That is why I added a comment in Post #21:
...
- You will need to add synchronization functionality
...

RafSchietekat
Valued Contributor III
"You will need to add a syncronization functionality"
That means the code is nearly finished, doesn't it? :-)

Unfortunately, in reality it means you may well have to start over from scratch, re-examining your assumptions about the concurrency and scalability requirements, and perhaps even adapting the API, much as TBB's built-in container classes do.