I have converted some neural net code from MATLAB which involves adding/subtracting very small probabilities and is of the form log( sum( Array ) ). This may be affected by underflow. There is a common workaround on the internet called the log-sum-exp trick, which involves shifting the values by maxval(Array) before exponentiating and shifting back afterwards; see http://machineintelligence.tumblr.com/post/4998477107/the-log-sum-exp-trick for example. I could replicate this in Fortran, but before I do I thought I would ask: is there an MKL function that computes log( sum( Array ) ) with minimal underflow/overflow, before I reinvent the wheel? Here is the MATLAB code (repmat is similar to Fortran's spread(); ones creates a matrix of 1s):
Alternatively, are there any Fortran-specific tricks for handling very small numbers accurately?
if(length(xx(:))==1), ls=xx; return; end
xdims = size(xx);
if(nargin<2), dim = find(xdims>1); end
alpha = max(xx,[],dim) - log(realmax)/2;
repdims = ones(size(xdims));
repdims(dim) = xdims(dim);
ls = alpha + log(sum(exp(xx - repmat(alpha, repdims)), dim));
MKL does not provide a function that specifically minimizes underflow/overflow for this computation. The trick still applies when you call MKL functions to compute log-sum-exp: shift by the maximum, exponentiate, sum, take the log, and add the shift back.