Hello there everyone! I'm new to this forum and I hope I can get along with everyone! :) I'm currently doing a project that uses independent component analysis (ICA) to separate two audio sources in MATLAB. I have successfully developed and tested the algorithm in MATLAB using two source signals in .wav format. Now I'm trying to implement the algorithm on Altera's DE2 board using its Nios II soft processor. I plan to use the on-board audio codec (Wolfson WM8731) to record two source signals into on-chip RAM, which will store the samples at either 16 or 32 bits each. My questions are: how does the processor understand that the 16-bit or 32-bit samples in the RAM actually represent an amplitude (as in the quantization process)? Note that the algorithm will be converted to C to run on the Nios II processor. And generally, how is the FPGA going to read the 16-bit or 32-bit samples? Do I need to perform any conversion to floating point, etc.? Any advice or help on this matter is greatly appreciated; I've been trying for weeks with no success in finding a solution. Any ideas or advice?
The C language unfortunately doesn't standardize integer sizes. Usually "short" and "unsigned short" are 16 bits, while "long" and "unsigned long" are 32 bits. "int" and "unsigned int" can be 16 or 32 bits depending on the architecture (maybe even 64 bits on some architectures). If you want to be sure that your code uses the correct sizes (and make it easier to port to a different architecture one day, if needed), you can include <alt_types.h> and use the types defined in the Altera HAL: alt_8, alt_u8, alt_16, alt_u16, alt_32 and alt_u32.
--- Quote Start --- How is the FPGA going to read the 16-bit or 32-bit samples? --- Quote End --- You'll need to write HDL code to operate the audio codec and transfer the samples to a PIO of the Nios II design. Some DE2 demonstration code is available on the DE2 system CD, and other code has been posted on the forum.
First of all, thank you so much (both Daixiwen and FvM) for the replies! It's a great honour to have your aid and explanation! :) I will try to write the HDL code to fetch the samples from the audio codec into the Nios II as suggested. One ambiguity remains: if I have developed my algorithm in 32 bits, do I need to change my sampling width to 32 bits as well? Many thanks again! :)
The CPU uses a 32-bit ALU, so you won't suffer any penalty by doing all your calculations in 32 bits. In fact, if you store all your samples as 16-bit integers, you could lose a few clock cycles due to the bit shifting the CPU needs to do. On the other hand, 32-bit samples obviously use twice the memory, so that could be a problem if you need to store a lot of samples and are a bit short on memory. I think the easiest way is to use 32-bit samples. Your hardware part can present 32-bit values directly to the CPU by padding the incoming sample with zeroes.
Hi Daixiwen, thanks so much for the explanation, it cleared up my ambiguities haha. I'm currently constructing a Simulink model and will use DSP Builder to generate code from it. By the way, from your experience, which is better in terms of performance: 1) using SOPC Builder to construct a system along with Eclipse, or 2) raw Verilog/VHDL to implement the algorithm, which requires a lot of computation?
It depends on your experience... If you know a bit about software and embedded systems, then SOPC Builder is the way to go in my opinion. Use DSP Builder for the signal-processing chain itself, and next to that a Nios CPU with some software to control and configure the operation. If you have more experience in VHDL you can write everything in VHDL, but it will probably take longer to develop, and the control part will be less flexible.