Beginner

Layout of the dnnLayout_t structure in MKL DNN routines

Good evening, 

I am starting to use the deep neural network routines in MKL. The definition of each node (weights, etc.) is stored in a variable of type dnnLayout_t, which is opaque. But how can I save or load a trained network if I cannot extract the weights? I need either the internal layout of that type or a way to access its contents.

Thanks in advance,

Jean

In the include file, the type is defined as follows:

#if defined(__cplusplus_cli)
struct _uniPrimitive_s {};
struct _dnnLayout_s {};
#endif

typedef struct _uniPrimitive_s* dnnPrimitive_t;
typedef struct _dnnLayout_s* dnnLayout_t;
typedef void* dnnPrimitiveAttributes_t;
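Because the struct bodies are empty, the handle can only be manipulated through the API, never through its members. A minimal sketch, assuming the single-precision (`_F32`) helpers declared in `mkl_dnn.h` (error handling mostly omitted):

```c
#include <stdio.h>
#include "mkl_dnn.h"

int main(void) {
    /* Describe a 4-D tensor; MKL DNN orders sizes and strides from the
       fastest-varying dimension to the slowest. */
    size_t size[4]    = {32, 32, 3, 1};
    size_t strides[4] = {1, 32, 32 * 32, 32 * 32 * 3};
    dnnLayout_t layout = NULL;

    /* The layout is created, queried, and destroyed only via API calls;
       the struct itself stays opaque. */
    if (dnnLayoutCreate_F32(&layout, 4, size, strides) != E_SUCCESS)
        return 1;
    printf("buffer size: %zu bytes\n", dnnLayoutGetMemorySize_F32(layout));
    dnnLayoutDelete_F32(layout);
    return 0;
}
```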

 

5 Replies
Beginner

Good evening,

Perhaps I am missing something, but how do I feed the network with inputs, for example an image? Are the input values also stored in a dnnLayout_t?

Thanks in advance,

Jean

 

Employee

Hi Jean,

MKL DNN implements the low-level convolution computation and a few other algorithms; a dnnLayout_t describes the data buffer used each time you run the convolution processing. It is not the layout of the neural network itself. I believe you may be thinking of the network layout kept by a framework such as Theano or Caffe. In supervised training, the convolution weights are adjusted through backpropagation, so you may not be able to implement a complete network training with MKL DNN alone. You could refer to the Caffe framework, which is already optimized with MKL.
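To sketch this point about inputs: the layout describes a data buffer such as an image, and that buffer is what gets passed to a primitive. A hedged example, assuming a single 3×227×227 RGB input and the `_F32` API (sizes and strides run from the fastest-varying dimension, W, to the slowest, N):

```c
#include "mkl_dnn.h"

int main(void) {
    /* One 227x227 RGB image in MKL DNN's plain layout. */
    size_t size[4]    = {227, 227, 3, 1};
    size_t strides[4] = {1, 227, 227 * 227, 227 * 227 * 3};
    dnnLayout_t img_layout = NULL;
    float *img = NULL;

    if (dnnLayoutCreate_F32(&img_layout, 4, size, strides) != E_SUCCESS)
        return 1;
    /* Allocate a buffer matching the layout; in real code you would fill
       it with pixel data and pass it as the dnnResourceSrc resource of
       the forward primitive when calling dnnExecute_F32. */
    dnnAllocateBuffer_F32((void **)&img, img_layout);

    dnnReleaseBuffer_F32(img);
    dnnLayoutDelete_F32(img_layout);
    return 0;
}
```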

Best regards,
Fiona

Beginner

Good morning,

The problem is that the Caffe and Theano frameworks optimized with MKL cannot be built for Windows: I tried to build them and got several errors, and even the C++ code failed to compile with Intel C++. That is why I need to access the weights. Does that mean a separate procedure is needed to adjust the weights during backpropagation?

Also, I notice that there is no optimizer in the DNN routines to find the optimal weights when training the network. Is there an optimizer available in MKL that could be used, or does the optimization routine have to be programmed by hand?

Thanks in advance,

Jean

Employee

Hi Jean,

For DNN training, you can refer to an example provided under the MKL installation path: $MKLROOT/examples/examples_core_c\examples_core_c\dnnc\source\s_train_sample.c

During training, you run the forward pass first to produce the expected result. You bind layouts for the input data, filter (weights), bias, and output to the forward primitive; then you create layouts for the backward-pass data from the backward primitive. Execute the forward primitive first, then the backward one. After that, you can extract the weights using the dnnConversionExecute function, which converts the primitive's data into a plain float/double buffer: the weights are stored in the filter resource, and you read them through the pointer to the converted floating-point data.

For more information, please follow the sample. Thanks.
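The extraction step described above can be sketched as follows. This is a hedged sketch, not the sample's exact code: `conv_fwd` (the trained forward convolution primitive), `filter_buf` (the filter resource buffer used during training), and the filter dimensions are assumed to exist in the caller; error handling is omitted.

```c
#include "mkl_dnn.h"

/* Sketch: copy trained weights out of the primitive's internal filter
   layout into a plain, user-allocated float array. */
void extract_weights(dnnPrimitive_t conv_fwd, void *filter_buf,
                     const size_t filt_size[4],
                     const size_t filt_strides[4],
                     float *user_weights) {
    dnnLayout_t internal = NULL, user = NULL;
    dnnPrimitive_t cv = NULL;

    /* Layout the primitive actually uses for its filter resource. */
    dnnLayoutCreateFromPrimitive_F32(&internal, conv_fwd, dnnResourceFilter);
    /* Plain layout we want the weights in. */
    dnnLayoutCreate_F32(&user, 4, filt_size, filt_strides);

    /* Reorder internal -> plain layout; in real code you would first
       check dnnLayoutCompare_F32 and skip the conversion if the two
       layouts already match. */
    dnnConversionCreate_F32(&cv, internal, user);
    dnnConversionExecute_F32(cv, filter_buf, user_weights);

    dnnDelete_F32(cv);
    dnnLayoutDelete_F32(internal);
    dnnLayoutDelete_F32(user);
}
```

After this call, `user_weights` holds the trained weights in a plain layout and can be written to disk; loading is the same conversion in the opposite direction.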

 

Beginner

Thanks a lot!

I had indeed found the example code by chance last week and was able to start working on my project. However, I suggest that the documentation supplied with MKL mention it explicitly (as is the case for other routines).

Regards,

Jean

 
