Intel® oneAPI Data Analytics Library
Community support for building compute-intensive applications that run fast on Intel® architecture.

Autoencoder formulation

Matt_P_1
Beginner
188 Views

Hi,

I'm attempting to build an autoencoder in DAAL using the Python API, and I have a question about how to formulate the loss layer. The network is architected to output a vector of the same length as the input vector, and the loss function I would like to use is the sum of component-wise squared errors. The "Loss Forward" layer appears to be what I need, but I cannot figure out how to set the loss function for this layer. The documentation (specifically https://software.intel.com/en-us/daal-programming-guide-loss-forward-layer) seems to imply this is possible, but does not explain how to actually do it. Is this possible, and if so, does anyone know how?

Thanks,

Matt

Ruslan_I_Intel
Employee

Hi Matt,

The present version of Intel DAAL for Python does not support custom algorithms, including custom loss layers for neural networks. However, you can implement your own loss layer on the C++ side. I've attached code samples that demonstrate how to do that:

  • daal_custom_loss_layer.h contains an implementation of the MSE loss function, suitable for computing element-wise squared error.
  • autoencoder.cpp shows how to build a simple autoencoder with fully connected layers and the MSE loss function.
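A custom loss layer has to provide both a forward pass (the loss value) and a backward pass (the gradient with respect to the network output), since the backward result is what gets propagated into the rest of the network during training. As a minimal sketch of just the math involved, independent of DAAL's actual layer interfaces (which the attached header would need to implement), the MSE loss Matt describes could look like:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Forward pass: sum of component-wise squared errors between the network
// output y and the target x (for an autoencoder, the original input),
// scaled by 1/n to match the conventional mean-squared-error definition.
double mseForward(const std::vector<double>& y, const std::vector<double>& x) {
    assert(y.size() == x.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < y.size(); ++i) {
        const double d = y[i] - x[i];
        sum += d * d;
    }
    return sum / static_cast<double>(y.size());
}

// Backward pass: gradient of the loss with respect to each output
// component, dE/dy_i = 2 * (y_i - x_i) / n. This is the tensor a custom
// loss layer's backward step must hand to the preceding layer.
std::vector<double> mseBackward(const std::vector<double>& y, const std::vector<double>& x) {
    assert(y.size() == x.size());
    std::vector<double> grad(y.size());
    for (std::size_t i = 0; i < y.size(); ++i) {
        grad[i] = 2.0 * (y[i] - x[i]) / static_cast<double>(y.size());
    }
    return grad;
}
```

Inside a DAAL loss layer these computations would operate on the library's tensor types rather than `std::vector`; the free functions here are only meant to pin down what the forward and backward kernels compute.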

Please let me know if this addresses your request.
