Before you tell me to post on the MKL forum: I did, for the early problems, but I struck an interesting issue along the way. I was asked to look at the damping measurements. I had been putting it off, but now is as good a time as any.
It helps to have some experimental data with observable movement. A mall in Houston has a second floor that moves, and you can feel it. It is a bit like a small intraplate earthquake passing, just without any noise.
I wanted to use the MKL fast Fourier transform routines. They are quite easy to set up.
The manual for the FFT in MKL references Brigham, who wrote good texts on the FFT in the 1970s and 80s. Brigham published code in BASIC, which I translated to Fortran in the early 90s. When I got to Uni, I could use the Numerical Recipes FORTRAN 77 routines.
In his book, Brigham provides a nice sample problem with a sine wave. It is on page 176; the book is available online.
I had to provide the calibration factor for the f(t) <=> F(w) transform. I remember spending some time on this in the early 90s to make sure I had a reasonable value.
The standard Brigham problem has an input cosine wave with a time-domain amplitude of 1, and the peak output for a 32-point problem is 16. Brigham divides the 16 by N, i.e. 32. He uses a time step of 1 second, which removes the time scaling from the calibration. He calls the input value the amplitude and the frequency-domain value the magnitude. That is another interesting problem in itself, although for simplicity, for the engineers who use this stuff, I just leave the magnitude labelled acceleration (g). We are not really interested in the absolute amplitude, only the relative values, and it gives them a feel for the data.
So I have spent an enjoyable few days reading a lot of material on the calibration of the FFT. The differing opinions are interesting. Some want a unit sine wave to come out of the FFT with unit amplitude. There is a lot written on the three alternative scalings (divide by N, divide by sqrt(N), or no scaling at all).
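Brigham's 32-point check is easy to reproduce with a plain O(N^2) DFT. This is my own sketch in Python, not MKL or NR code, showing the raw peak of N/2 = 16 and what dividing by N does to it:

```python
import math, cmath

def dft(x):
    """Plain O(N^2) DFT with no scaling, for checking calibration conventions."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 32
f = 4  # any bin except 0 and N/2 behaves the same way
x = [math.cos(2 * math.pi * f * n / N) for n in range(N)]  # unit-amplitude cosine

X = dft(x)
print(round(abs(X[f]), 6))      # 16.0 -> Brigham's raw peak, N/2
print(round(abs(X[f]) / N, 6))  # 0.5  -> after dividing by N
```

Dividing by N leaves half of the unit amplitude in bin f and the other half in bin N-f, which is why some people prefer a one-sided spectrum scaled by 2/N instead.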
MKL is silent on the calibration, but the interesting thing is that they give Brigham as one of their three standard references. Digging into the Numerical Recipes book, they also list Brigham as their first reference.
As a final check, I dragged out the F90 code for Numerical Recipes and used it to look at today's problems. In the early 90s I had used the F77 code, so it was fun to work out how the authors put the F90 code together.
A few things had me thinking for a while. The first is:
SUBROUTINE fourrow(data,isign)
  USE nrtype; USE nrutil, ONLY : assert,swap
  IMPLICIT NONE
  COMPLEX(DPC), DIMENSION(:,:), INTENT(INOUT) :: data
  INTEGER(I4B), INTENT(IN) :: isign
data is created in the calling program as a real array and then assumed to be complex in the subroutine. They load the real parts into elements 1, 3, 5, ... and the imaginary parts into 2, 4, 6, ...
Is that correct? Is a complex array just a vector of reals?
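As far as I can tell, yes: Fortran defines the storage sequence of a COMPLEX value as its real part followed immediately by its imaginary part, so a COMPLEX array of length N occupies the same storage as a REAL array of length 2N, which is what the old trick relies on. A byte-level sketch of the same idea in Python (my own illustration, not the NR code):

```python
import struct

# Six doubles laid out as re0, im0, re1, im1, re2, im2 -- the "real vector" view.
reals = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
buf = struct.pack("6d", *reals)

# Reinterpret the same bytes as three (real, imaginary) pairs -- the "complex" view.
pairs = struct.unpack("6d", buf)
as_complex = [complex(pairs[i], pairs[i + 1]) for i in range(0, len(pairs), 2)]
print(as_complex)  # [(1+2j), (3+4j), (5+6j)]
```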
INTERFACE arth
  MODULE PROCEDURE arth_r, arth_d, arth_i
END INTERFACE
They use one interface to define three routines that take different data types.
FUNCTION arth_r(first,increment,n)
  REAL(SP), INTENT(IN) :: first,increment
  INTEGER(I4B), INTENT(IN) :: n
  REAL(SP), DIMENSION(n) :: arth_r
  INTEGER(I4B) :: k,k2
  REAL(SP) :: temp
  if (n > 0) arth_r(1)=first
  if (n <= NPAR_ARTH) then
    do k=2,n
      arth_r(k)=arth_r(k-1)+increment
    end do
  else
    do k=2,NPAR2_ARTH
      arth_r(k)=arth_r(k-1)+increment
    end do
    temp=increment*NPAR2_ARTH
    k=NPAR2_ARTH
    do
      if (k >= n) exit
      k2=k+k
      arth_r(k+1:min(k2,n))=temp+arth_r(1:min(k,n-k))
      temp=temp+temp
      k=k2
    end do
  end if
END FUNCTION arth_r

FUNCTION arth_d(first,increment,n)
  REAL(DP), INTENT(IN) :: first,increment
  INTEGER(I4B), INTENT(IN) :: n
  REAL(DP), DIMENSION(n) :: arth_d
  INTEGER(I4B) :: k,k2
  REAL(DP) :: temp
  if (n > 0) arth_d(1)=first
  if (n <= NPAR_ARTH) then
    do k=2,n
      arth_d(k)=arth_d(k-1)+increment
    end do
  else
    do k=2,NPAR2_ARTH
      arth_d(k)=arth_d(k-1)+increment
    end do
    temp=increment*NPAR2_ARTH
    k=NPAR2_ARTH
    do
      if (k >= n) exit
      k2=k+k
      arth_d(k+1:min(k2,n))=temp+arth_d(1:min(k,n-k))
      temp=temp+temp
      k=k2
    end do
  end if
END FUNCTION arth_d

FUNCTION arth_i(first,increment,n)
  INTEGER(I4B), INTENT(IN) :: first,increment,n
  INTEGER(I4B), DIMENSION(n) :: arth_i
  INTEGER(I4B) :: k,k2,temp
  if (n > 0) arth_i(1)=first
  if (n <= NPAR_ARTH) then
    do k=2,n
      arth_i(k)=arth_i(k-1)+increment
    end do
  else
    do k=2,NPAR2_ARTH
      arth_i(k)=arth_i(k-1)+increment
    end do
    temp=increment*NPAR2_ARTH
    k=NPAR2_ARTH
    do
      if (k >= n) exit
      k2=k+k
      arth_i(k+1:min(k2,n))=temp+arth_i(1:min(k,n-k))
      temp=temp+temp
      k=k2
    end do
  end if
END FUNCTION arth_i
I had not used this type of overloading in Fortran before, but it is neat.
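The same idea, one public name dispatching to type-specific implementations, exists in most languages. A rough Python analogue using functools.singledispatch (the names here are mine, purely illustrative):

```python
from functools import singledispatch

@singledispatch
def arth(first, increment, n):
    """Arithmetic progression of length n; dispatches on the type of 'first'."""
    raise TypeError(f"unsupported type: {type(first).__name__}")

@arth.register
def _(first: int, increment: int, n: int):
    # integer version, in the spirit of arth_i
    return [first + k * increment for k in range(n)]

@arth.register
def _(first: float, increment: float, n: int):
    # floating-point version, in the spirit of arth_r / arth_d
    return [first + k * increment for k in range(n)]

print(arth(1, 2, 4))       # [1, 3, 5, 7]
print(arth(0.5, 0.25, 3))  # [0.5, 0.75, 1.0]
```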
The final thing is how they loaded the real data into the complex array before the FFT.
So the copy counts through tmp one step at a time and through dataA two steps at a time.
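That stride pattern can be sketched in a line of Python (my own illustration with hypothetical names matching the post, not the NR source):

```python
# dataA holds 2*N reals; tmp receives N complex values. Each iteration takes
# one step through tmp while consuming two elements of dataA.
dataA = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
tmp = [complex(dataA[2 * j], dataA[2 * j + 1]) for j in range(len(dataA) // 2)]
print(tmp)  # [(1+2j), (3+4j), (5+6j), (7+8j)]
```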
Finally, they called the main array data; I changed it to dataA so I could turn off the blue Fortran keyword colouring in VS.
I enclose the running Numerical Recipes solution file. It is interesting to look at the changes in coding style from the 70s to the 90s to now.
The MKL solution, graphed using the Brigham convention for the calibration, is shown for the shaky floor in Houston.
The red circle is the floor shaking; the two yellow circles are the power supply frequency interacting with the building. Houston power is not of high quality in terms of frequency; we see this all over the world. Electric motors, and hard drives if we are measuring with an old computer attached, can also cause problems.
The acceleration data is shown - this floor carries a big load. The record is 2**19 time steps.
It has been fun.
This forum is slowly turning into a wiki-lite, with its growing sets of methods.