Hi,
I'm attempting to build an autoencoder in DAAL using the Python API, and I have a question about how to formulate the loss layer. The network is architected to output a vector of the same length as the input vector, and the loss function I would like to use is the sum of component-wise squared errors. From what I can see, the "Loss Forward" layer is what I need, but I cannot figure out how to set the loss function for this layer. The documentation (specifically https://software.intel.com/en-us/daal-programming-guide-loss-forward-layer) seems to imply this is possible, but does not explain how to actually do it. Is this possible, and if so, does anyone know how?
Thanks,
Matt
Hi Matt,
The present version of Intel DAAL for Python does not support custom algorithms, including custom loss layers for neural networks. However, you can implement your own loss layer on the C++ side. I've attached code samples that demonstrate how to do that:
- daal_custom_loss_layer.h contains an implementation of an MSE loss function suitable for computing element-wise squared error.
- autoencoder.cpp shows how to build a simple autoencoder with fully connected layers and the MSE loss function.
Please let me know if this addresses your request.