Intel® oneAPI Threading Building Blocks

Multiple undefined reference errors while trying to use intel oneTBB

EllipsePolitique


Trying to compile graphnn with oneTBB (i.e. using #include "oneapi/tbb.h" instead of "tbb/tbb.h"), I get the following error with the code as-is (I didn't write the original):

src/tensor/cpu_row_sparse_tensor.cpp:298:8: error: « mutex » is not a member of « tbb »
tbb::mutex ll;
^~~~~

It compiles correctly if I add both of these lines:

#define __TBB_PREVIEW_MUTEXES 1
[...]
#include <tbb/mutex.h>

but then, when trying to compile the MNIST example, I get a very long list of errors for both cpu_dense_tensor and cpu_row_sparse_tensor:


../../build/lib/libgnn.a(cpu_dense_tensor.o) : In the function « tbb::detail::d1::task_arena_function<tbb::detail::d1::graph::wait_for_all()::{lambda()#1}::operator()() const::{lambda()#1}, void>::operator()() const » :
cpu_dense_tensor.cpp:(.text._ZNK3tbb6detail2d119task_arena_functionIZZNS1_5graph12wait_for_allEvENKUlvE_clEvEUlvE_vEclEv[_ZNK3tbb6detail2d119task_arena_functionIZZNS1_5graph12wait_for_allEvENKUlvE_clEvEUlvE_vEclEv]+0x14) : undefined reference to « tbb::detail::r1::wait(tbb::detail::d1::wait_context&, tbb::detail::d1::task_group_context&) »
../../build/lib/libgnn.a(cpu_dense_tensor.o) : In the function « tbb::detail::d1::graph::~graph() » :
cpu_dense_tensor.cpp:(.text._ZN3tbb6detail2d15graphD2Ev[_ZN3tbb6detail2d15graphD5Ev]+0x5b) : undefined reference to « tbb::detail::r1::initialize(tbb::detail::d1::task_arena_base&) »
cpu_dense_tensor.cpp:(.text._ZN3tbb6detail2d15graphD2Ev[_ZN3tbb6detail2d15graphD5Ev]+0x9a) : undefined reference to « tbb::detail::r1::execute(tbb::detail::d1::task_arena_base&, tbb::detail::d1::delegate_base&) »
cpu_dense_tensor.cpp:(.text._ZN3tbb6detail2d15graphD2Ev[_ZN3tbb6detail2d15graphD5Ev]+0xa3) : undefined reference to « tbb::detail::r1::is_group_execution_cancelled(tbb::detail::d1::task_group_context&) »
cpu_dense_tensor.cpp:(.text._ZN3tbb6detail2d15graphD2Ev[_ZN3tbb6detail2d15graphD5Ev]+0xb5) : undefined reference to « tbb::detail::r1::reset(tbb::detail::d1::task_group_context&) »
[...]
../../build/lib/libgnn.a(cpu_row_sparse_tensor.o) : In the function « gnn::TensorTemplate<gnn::CPU, gnn::ROW_SPARSE, float>::RowSparseCopy(gnn::TensorTemplate<gnn::CPU, gnn::DENSE, float>&) » :
cpu_row_sparse_tensor.cpp:(.text._ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE[_ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE]+0xb9) : undefined reference to « tbb::detail::r1::initialize(tbb::detail::d1::task_group_context&) »
cpu_row_sparse_tensor.cpp:(.text._ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE[_ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE]+0xd1) : undefined reference to « tbb::detail::r1::allocate(tbb::detail::d1::small_object_pool*&, unsigned long) »
cpu_row_sparse_tensor.cpp:(.text._ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE[_ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE]+0x12b) : undefined reference to « tbb::detail::r1::max_concurrency(tbb::detail::d1::task_arena_base const*) »
cpu_row_sparse_tensor.cpp:(.text._ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE[_ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE]+0x19d) : undefined reference to « tbb::detail::r1::execute_and_wait(tbb::detail::d1::task&, tbb::detail::d1::task_group_context&, tbb::detail::d1::wait_context&, tbb::detail::d1::task_group_context&) »
cpu_row_sparse_tensor.cpp:(.text._ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE[_ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE]+0x1a5) : undefined reference to « tbb::detail::r1::destroy(tbb::detail::d1::task_group_context&) »
cpu_row_sparse_tensor.cpp:(.text._ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE[_ZN3gnn14TensorTemplateINS_3CPUENS_10ROW_SPARSEEfE13RowSparseCopyERNS0_IS1_NS_5DENSEEfEE]+0x1d8) : undefined reference to « tbb::detail::r1::destroy(tbb::detail::d1::task_group_context&) »
[...]


I can try using more defines, and that seems to reduce the number of errors, but I don't think this is really the right way to use oneTBB, since it is already odd that all these features are disabled.


If instead I replace oneTBB with regular TBB (from 2018), I receive the same error unless I add the mutex.h include, but then I get the following error:

g++ -Wall -O3 -std=c++14 -I/local/java/cuda-11.2//include -I~/intel/oneapi/mkl/latest/include -I/usr/include/tbb -Iinclude -fPIC -DUSE_GPU -MMD -c -o build/objs/cxx/nn/relu.o src/nn/relu.cpp -lm -lmkl_rt -ltbb -L/local/java/cuda-11.2//lib64 -lcudart -lcublas -lcurand -lcusparse
In file included from /usr/include/tbb/mutex.h:32,
from src/tensor/cpu_row_sparse_tensor.cpp:12:
/usr/include/tbb/tbb_stddef.h: At the global level:
/usr/include/tbb/tbb_stddef.h:409:14: error: expected type-specifier before « split »
operator split() const { return split(); }
^~~~~
In file included from src/tensor/cpu_row_sparse_tensor.cpp:12:
/usr/include/tbb/mutex.h:231:1: error: expected constructor, destructor, or type conversion before « } » token
} // namespace tbb
^

 I would prefer to use oneTBB, and I feel like I'm using it wrong, but if using regular TBB works better then I don't really care. Any hints as to what I'm doing wrong?

HemanthCH_Intel
Moderator

Hi,


Thanks for reaching out to us.


Could you please let us know the changes you have made in the Makefile, make_common, and cpu_row_sparse_tensor.cpp files? If possible, please share those files with us so we can investigate the issue from our end.


Could you please let us know the OS version and oneAPI toolkit version you are using?


Thanks & Regards,

Hemanth.


EllipsePolitique

Thanks for your response,

 

You can see all the changes in this commit: https://github.com/s-clerc/graphnn/commit/eea6debfc296222fbe7add36c28a40e35295b1db (it is my personal repository), and you can see all the files there as well. Currently it shows what the code looks like when using regular TBB (not oneTBB), but you can see the code for oneTBB commented out in cpu_row_sparse_tensor.cpp.

 

I'm not sure how to check the toolkit version, but the TBB folder version is 2021.5.0 and the MKL folder version is 2022.0.1. In terms of OS:

NAME="Rocky Linux"
VERSION="8.5 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.5"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.5 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"

 

HemanthCH_Intel
Moderator

Hi,

 

We can run the graphnn application successfully on an Ubuntu 20.04 machine.

 

In the cpu_row_sparse_tensor.cpp file, we added the oneapi/tbb.h header and removed tbb/mutex.h.

In the same file, we added the oneapi namespace and replaced tbb::mutex with std::mutex, since tbb::mutex has been deprecated, as shown in the screenshot below:

HemanthCH_Intel_0-1642676846955.png
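
For reference, here is a minimal sketch of the kind of change we mean; the surrounding function is purely illustrative and is not the actual cpu_row_sparse_tensor.cpp:

#include <mutex>          // std::mutex replaces the deprecated tbb::mutex
#include "oneapi/tbb.h"   // single oneTBB umbrella header; tbb/mutex.h is removed

// was: tbb::mutex ll;
std::mutex ll;

void UpdateRow()          // hypothetical function, shown only for context
{
    // was (assumed locking pattern): tbb::mutex::scoped_lock lock(ll);
    std::lock_guard<std::mutex> lock(ll);
    // ... protected update of the row ...
}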

 

We have attached cpu_row_sparse_tensor.cpp and the output as a screenshot for your reference.

 

Thanks & Regards,

Hemanth.

 

EllipsePolitique

Thank you for your proposal. I have tested it, and unfortunately it does not resolve my issue with running the MNIST example. You can see how I implemented your fix in the commit "Add proposed fix". I also tried reinstalling oneTBB to see if that would resolve the issue, with no success.

Alexei_K_Intel
Employee

The observed errors are link-time errors (not compile-time). It seems libtbb.so.12 is not properly linked to the application. Try adding -L<path_to_tbb>/lib/intel64/gcc4.8/ to the linker command.
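
In the graphnn makefiles that could look something like the following sketch; $(TBB_ROOT) is the variable already defined in make_common, and the exact lib subdirectory depends on how oneTBB was installed (some packages ship the library directly under lib/intel64):

# make_common (sketch): make sure the linker can find libtbb.so.12
LDFLAGS += -L$(TBB_ROOT)/lib/intel64/gcc4.8 -ltbb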

HemanthCH_Intel
Moderator

Hi,

 

Could you please add the relevant TBB path to the make_common file? We have attached our make_common file, and we can observe that TBB_PATH is not included in your screenshot.
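
As a sketch (using your TBB_ROOT variable; the path is illustrative and should match your install), the compiler needs to see the oneTBB headers:

# make_common (sketch): pass the oneTBB headers to the compiler
CXXFLAGS += -I$(TBB_ROOT)/include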

 

Thanks & Regards,

Hemanth.

 

EllipsePolitique

Hello,

 

I'm not sure what you mean, because TBB_ROOT is already in my make_common. Here is the content of my make_common; it has remained unchanged during our entire exchange.

 

dir_guard = @mkdir -p $(@D)

INTEL_ROOT := ~/intel/oneapi
MKL_ROOT = $(INTEL_ROOT)/mkl/latest
TBB_ROOT = $(INTEL_ROOT)/tbb/latest
USE_GPU = 1



FIND := find
CXX := g++
CXXFLAGS += -Wall -O3 -std=c++14
LDFLAGS += -lm -lmkl_rt -ltbb
ifeq ($(USE_GPU), 1)
    #CUDA_HOME := /usr/local/cuda-8.0
    NVCC := $(CUDA_HOME)/bin/nvcc
    NVCCFLAGS += --default-stream per-thread
    LDFLAGS += -L$(CUDA_HOME)/lib64 -lcudart -lcublas -lcurand -lcusparse
endif

CUDA_ARCH := -gencode arch=compute_61,code=sm_61 

 

The Intel root and the corresponding TBB and MKL roots are slightly different from yours due to a different install directory, but it can be verified that they are the correct ones:

 

[... ~]$ cd intel/oneapi/tbb/latest/
[... latest]$ ls
env  include  lib  licensing  modulefiles
[... latest]$ cd ~
[... ~]$ cd intel/oneapi/mkl/latest/
[... latest]$ ls
benchmarks  bin  documentation  env  examples  include  interfaces  lib  licensing  modulefiles  tools

 

Thank you for your time

HemanthCH_Intel
Moderator

Hi,

 

Could you please add "$(TBB_ROOT)/include" to the "graphnn/examples/mnist/Makefile" as shown in the attached screenshot and try running the "make clean && make" command?

 

HemanthCH_Intel_0-1643618910329.png

 

Thanks & Regards,

Hemanth.

 

EllipsePolitique

Hello, I tried that; I also tried adding -ltbb, to no avail:

GNN_HOME=../..

include $(GNN_HOME)/make_common
USE_GPU = 1

ifeq ($(USE_GPU), 1)
	lib_dir := $(GNN_HOME)/build/lib
	CXXFLAGS += -DUSE_GPU
else
	lib_dir := $(GNN_HOME)/build_cpuonly/lib
endif

gnn_lib := $(lib_dir)/libgnn.a

include_dirs := $(CUDA_HOME)/include $(MKL_ROOT)/include $(TBB_ROOT)/include $(GNN_HOME)/include include
CXXFLAGS += $(addprefix -I,$(include_dirs))

all: build/mnist

build/mnist: mnist.cpp $(gnn_lib)
	$(dir_guard)
	$(CXX) $(CXXFLAGS) -o $@ $^ -L$(lib_dir) -lgnn $(LDFLAGS) -ltbb

clean:
	rm -rf build
HemanthCH_Intel
Moderator

Hi,


Thanks for your update.

We are looking into your issue and will get back to you soon.


Thanks & Regards,

Hemanth.


HemanthCH_Intel
Moderator

Hi,

 

As Alexei_K_Intel said, "It seems libtbb.so.12 is not properly linked to the application." So, try adding -L<path_to_tbb>/lib/intel64/gcc4.8/ to the linker command. If the issue still persists, please get back to us.

 

Thanks & Regards,

Hemanth.

 

EllipsePolitique

Hello, I tried this to no avail. I also attempted reversing the order of the LDFLAGS, which also didn't help.

 

You can see how I implemented the fix in the commit "Add proposed linker fix", in case I did it wrong.

 

Thanks, sorry for the delay.

HemanthCH_Intel
Moderator

Hi,

 

We have successfully built the graphnn application using oneAPI TBB 2021.5 on a Rocky Linux machine with CUDA 11.2. Could you please try with those versions and let us know if it works?

 

Thanks & Regards,

Hemanth. 

 

EllipsePolitique

Hello, 

 

Those were the versions I was using, but it still didn't work.

 

I've given up on trying to run it on my institution's local machines and just rented a VPS where it works.

 

Thanks for your help, it's a shame it didn't work.

HemanthCH_Intel
Moderator

Hi,


Thanks for accepting the solution. Can we go ahead and close this thread?


Thanks & Regards,

Hemanth.


HemanthCH_Intel
Moderator

Hi,


We haven't heard back from you. If you need any additional information, please post a new question as this thread will no longer be monitored by Intel.


Thanks & Regards,

Hemanth.

