Intel® Quartus® Prime Software

Fixed Point Multiplication in Verilog or Quartus II

Altera_Forum
Honored Contributor II

Hello, 

 

I plan to implement a controller digitally in my FPGA & it involves numerous fixed-point additions, multiplications & divisions. 

 

Therefore here is a generic question: 

 

Suppose I have to multiply two 16-bit signed fixed-point numbers with non-matching binary points, say 

A = 1 (sign bit) + I1 (integer bits) + F1 (fraction bits) = 16 bits

B = 1 (sign bit) + I2 (integer bits) + F2 (fraction bits) = 16 bits

 

Depending on the range of values A & B will take, I have decided on the position of the binary point in the result, let us say:

result_AxB = 1 (sign bit) + I3 (integer bits) + F3 (fraction bits) = 16 bits

(I1, I2, I3, F1, F2, F3 are all known)

 

So, how do I implement this? (as Verilog code or in Quartus as a block diagram ... anything would help)
Altera_Forum
Honored Contributor II

I would say: 

If A and B are signed, just multiply them (C <= A*B) and slice C appropriately, for example:

C[31] = sign

C[30:18] = integer bits

C[17:0] = fraction bits.
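A minimal Verilog sketch of that idea (the module and signal names and the parameter values are only illustrative; F1/F2/F3 are the fraction widths from the first post, and there is no overflow handling here):

module fxp_mult
#(
    parameter F1 = 9,   // fraction bits of a (example value)
    parameter F2 = 9,   // fraction bits of b (example value)
    parameter F3 = 10   // fraction bits wanted in the result (example value)
)
(
    input  signed [15:0] a,
    input  signed [15:0] b,
    output signed [15:0] result
);
    // full-precision signed product: F1+F2 fraction bits (and a redundant sign bit)
    wire signed [31:0] c = a * b;

    // drop (F1 + F2 - F3) LSBs and keep the product MSB as the sign
    localparam SH = F1 + F2 - F3;
    assign result = {c[31], c[SH+14 : SH]};
endmodule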
Altera_Forum
Honored Contributor II

In a signed multiply you get two sign bits, not one. But you can't simply discard one of them; you need to handle the product of the two most negative numbers explicitly.

Altera_Forum
Honored Contributor II

@amilcar : Thanks. The slicing idea seems nice ! 

 

@FvM: I don't quite understand the problem you're pointing out. 

 

Right now, what I've done is the following: 

 

As far as multiplication of fixed-point fractions is concerned, I've changed the product of fixed-point numbers A*B into the form: 

 

a*b = k*l*(m/n)

where k & l are unknown integers; m & n are known integers.

(I've also chosen m & n such that n is a power of 2 (n = 2^p), to replace the division-by-n with dropping the p LSBs, since this will save time.)

 

So now, I'm only concerned about integer multiplication. 

For that I'm using the 'lpm_mult' megafunction in Quartus II. It generates (by default) a 32-bit result for two 16-bit signed input multiplicands. I store this result in mult_32out. 

 

Now, I'm assuming that this "lpm_mult" computes its sign bit correctly (i.e. deals with whatever problem you have specified).

So I just take mult_32out[31] as the correct result sign bit and proceed.

In the end, I have my required 16-bit result as:

result_axb = {mult_32out[31], mult_32out[p + 14 : p]};

Its simulation yields correct results, so my assumption seems to be correct. What do you say?
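As a worked numeric check of that decomposition (values purely illustrative, reading m/n as the combined scale factor, with p = F1 + F2 - F3):

Say F1 = F2 = 9 and F3 = 10, so p = 8.
A = 1.5   -> k = 1.5 * 2^9 = 768
B = -2.25 -> l = -2.25 * 2^9 = -1152
k * l = -884736 (fits comfortably in 32 bits)
Drop the p = 8 LSBs: -884736 / 2^8 = -3456
Read the result with F3 = 10 fraction bits: -3456 / 2^10 = -3.375 = 1.5 * -2.25, as expected.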
Altera_Forum
Honored Contributor II

The method is correct if you can be sure that no overflow occurs. Generally you can't, I think.

Altera_Forum
Honored Contributor II

What FvM is talking about is the problem that occurs when you multiply the two most negative numbers, i.e.

A is 2 bits (1 sign bit + 1 bit)

B is 2 bits (1 sign bit + 1 bit)

Both A and B can represent decimal numbers in the range [-2 .. 1].

But if A = -2 and B = -2, the product is +4, which overflows if you keep only a single sign bit, and you need to take care of that.

FvM, correct me if I'm wrong here.
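To spell that out in binary for the 2-bit case (purely illustrative):

A = 2'b10 (-2), B = 2'b10 (-2)
Full 4-bit product: 4'b0100 = +4
For every other input pair the top two product bits are equal (a redundant copy of the sign), e.g. -2 * 1 = 4'b1110 = -2.
Only for (-2) * (-2) do they differ, so dropping the top bit and keeping 3'b100 would wrongly read as -4.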
Altera_Forum
Honored Contributor II

Yes. Actually I was thinking of the case I3 = I1 + I2, where no overflow occurs except for the product of the two most negative numbers. In the case I3 < I1 + I2, overflow generally has to be considered.

In my opinion, saturation logic should be used to handle both cases. It can be implemented by comparing mult_32out[31] and mult_32out[30:P+15] (following the above notation).

If any bit of mult_32out[30:P+15] differs from mult_32out[31], overflow has occurred and the result has to be replaced by the most positive or most negative number, respectively.
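A sketch of that saturation check in Verilog (P and the signal names follow the notation above; the parameter value is only an example):

module fxp_mult_sat
#(
    parameter P = 8   // number of LSBs stripped off (example value)
)
(
    input  signed [15:0] a,
    input  signed [15:0] b,
    output signed [15:0] result_axb
);
    wire signed [31:0] mult_32out = a * b;

    // overflow if any bit of [30:P+15] differs from the sign bit [31]
    wire overflow = (mult_32out[30 : P+15] != {(16-P){mult_32out[31]}});

    // saturate to the most negative / most positive 16-bit value
    wire signed [15:0] sat_val = mult_32out[31] ? 16'sh8000 : 16'sh7FFF;

    assign result_axb = overflow ? sat_val
                                 : {mult_32out[31], mult_32out[P+14 : P]};
endmodule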
Altera_Forum
Honored Contributor II

Thanks a lot FvM, for such an enlightening post. 

 

@amilcar: It's not that I couldn't understand how an overflow would occur when multiplying the two most negative numbers; what confused me was the term "two sign bits". 

 

But now I completely understand what FvM had to say. I also cross-checked my logic, and here's my elaborate proof: 

 

Consider this: all we need is to multiply two numbers A & B (which are 16-bit signed). 

 

We assume that (we have checked that) whatever range of values A & B will take (fractional or integer), the integer part of our "result_axb" is small enough to be represented in 15 bits or fewer

(i.e. -32768 <= result_axb <= 32767).

Which means that we have already checked that no overflow will occur with our result_axb ('overflow' w.r.t. the 16-bit signed representation).

 

Now, in 32-bit signed representation:

Any positive number up to +32767 has all of its 17 MSBs [31:15] equal to 0 (= the sign bit).

Any negative number down to -32768 has all of its 17 MSBs [31:15] equal to 1 (= the sign bit).

 

I am attempting to generate a 32-bit signed result: mult_32out = k*l*m

(refer to my 2nd post: a*b = k*l*(m/n)). This product (k*l*m) may exceed the 16-bit signed range.

But since I'm sure that, later on, dividing it by n (i.e. stripping off its p LSBs) will bring the result back into the 16-bit signed range, in that resulting form all of its MSBs [31-p:15] will be equal.

Therefore, before stripping off, these same bits sit at positions [31:p+15], and hence they are necessarily equal.

(We can collectively call all these equal bits 'the sign bit'!)

So, summing up:

I can rest assured that, as long as I've checked the 16-bit overflow condition at the start, the bits mult_32out[31] and mult_32out[30:p+15] are all equal.

 

Moreover, 

result_axb = {mult_32out[31], mult_32out[p + 14 : p]}; 

will give me the correct 16-bit signed result, in the end ! 

 

Hence proved ! :-) 

 

P.S.: Thanks again FvM, for helping me arrive at this proof, and also for giving an overflow-condition check in case we wish to make a more generic module!
Altera_Forum
Honored Contributor II

 

--- Quote Start ---

Yes. Actually I was thinking of the case I3 = I1 + I2, where no overflow occurs except for the product of the two most negative numbers. In the case I3 < I1 + I2, overflow generally has to be considered.

--- Quote End ---

I3 = I2 + I1

If the number of bits of I3 is I2 + I1, it won't cause overflow. Let's say -2 * -2 = -4 (mistake: it is 4 actually). That is 100 in two's complement. How can it cause overflow when I3 has 2 + 2 = 4 bits?
Altera_Forum
Honored Contributor II

-2 * -2 = 4, not -4 :)

Altera_Forum
Honored Contributor II

To all: 

 

Since I am new to Altera, can someone explain to me what one would even use this for, and kind of break down the basics in layman's terms? 

 

Lew
Altera_Forum
Honored Contributor II

A fixed-point matrix multiplication in Verilog is available here: 

http://www.fpga4student.com/2016/12/fixed-point-matrix-multiplication-in-verilog.html 

Hope it helps.