Intel® Fortran Compiler

Bit problem

Valued Contributor II

A new device I am looking at says it has a 15 g range -- 

The device has a 27 bit output -- 

In Fortran, what is the likely resolution of the output number?

If I divide the range of 30 by 2^27 I get 2.23517E-07, which implies a resolution of roughly 0.2 micro-g.
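That arithmetic checks out. A quick sanity check (sketched in Python purely as a calculator, since the quantities are exact powers of two and fit a double exactly):

```python
# Full-scale span of the accelerometer: -15 g to +15 g
span_g = 30.0
codes = 2 ** 27          # a 27-bit output gives 134,217,728 distinct codes

resolution_g = span_g / codes
print(resolution_g)        # 2.2351741790771484e-07 g per code
print(resolution_g * 1e6)  # ~0.22 micro-g per code
```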

Do I have to allow for a ± sign bit, etc.?

Before I buy I would like to check; I am sure I am missing something.

Black Belt Retired Employee

I am not sure what you are asking, but I will throw some things out and maybe it will help.

Single-precision IEEE float, which is the default REAL in Intel Fortran, has 24 fraction bits, but values are normalized so that the most significant bit is always 1 and is not stored (only 23 bits are kept). The exponent takes 8 bits, and there is one sign bit. Double precision has 53 fraction bits (52 stored), 11 exponent bits, and one sign bit.
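Those bit counts are easy to verify. A small check (in Python, whose native float is an IEEE double; the standard `struct` module is used to round-trip through a 32-bit float): with 52+1 bits a double distinguishes 2^52 + 1 from 2^52 but not 2^53 + 1 from 2^53, and with 23+1 bits a single cannot distinguish 2^24 + 1 from 2^24.

```python
import struct

def to_f32(x):
    """Round-trip x through an IEEE 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Double precision: 52 stored + 1 hidden bit => integers up to 2**53 are exact
assert 2.0**52 + 1 != 2.0**52    # still distinguishable
assert 2.0**53 + 1 == 2.0**53    # 2**53 + 1 is not representable, rounds back

# Single precision: 23 stored + 1 hidden bit => integers up to 2**24 are exact
assert to_f32(2.0**24) == 2.0**24
assert to_f32(2.0**24 + 1) == 2.0**24   # rounds back down
print("bit counts confirmed")
```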

Valued Contributor II

Thank you -- OK, so this is before the number is translated to an IEEE number.

1. The voltage is measured over a range that is translated to ±15 g, so there is a 30 g span.

2. The voltage is then divided into unit steps that represent a differential voltage -- so if you look at data files there is always a minimum step size in digital data; in our temperature data it is 0.125 degrees C.


The number of bits in the A/D converter determines the resolution of the system. System resolution is determined by the channel(s) having the widest dynamic range and/or the channel(s) that require measurement of the smallest data increment. For example, assume a channel that measures pressure has a dynamic range of 4000 psi that must be measured to the nearest pound. This will require an A/D converter with a minimum resolution of 4000 digital codes. A 12-bit A/D converter will provide a resolution of 2^12, or 4096, codes -- adequate for this requirement. The actual resolution of this channel will be 4000/4096, or 0.977 psi. The A/D converter can resolve this measurement to within ±0.488 psi (±1/2 LSB).
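Working that pressure example through numerically (a quick check in Python, used here only as a calculator; the figures come straight from the paragraph above):

```python
span_psi = 4000.0
codes = 2 ** 12            # a 12-bit A/D gives 4096 codes

lsb = span_psi / codes     # one digital code in engineering units
print(lsb)                 # 0.9765625 psi per code (~0.977 psi)
print(lsb / 2)             # 0.48828125 psi, the +/- 1/2 LSB bound
```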


We have 27 bits -- there must be a chip that does this.

Thanks again

Black Belt

Your A/D outputs 27 bits. These bits represent integers between -2^26 and +2^26 - 1, if no special bit patterns are used to represent error conditions. If you multiply these signed integers by 15 × 2^-26, you obtain real numbers in the range -15.0 to just under +15.0. If you want 26-bit precision, you can use IEEE 64-bit floating-point numbers, which have 52+1 bits of precision -- roughly twice what you need. You cannot use IEEE 32-bit floating-point numbers, which have only 23+1 bits and cannot hold the measured values at 26-bit precision.
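That last point is easy to demonstrate: a 27-bit signed sample reaches magnitudes near 2^26, but single precision carries only 24 significant bits, so neighboring codes collapse to the same stored value. A quick illustration (in Python, round-tripping through a 32-bit float via the standard `struct` module):

```python
import struct

def to_f32(x):
    """Round-trip x through an IEEE 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

top_code = 2**26 - 1             # largest positive 27-bit signed sample

# Double precision (Python's native float) holds the code exactly...
assert float(top_code) == top_code

# ...but single precision rounds it to the nearest representable value
assert to_f32(top_code) != top_code
assert to_f32(top_code) == 2.0**26   # 67108863 rounds up to 67108864
print("27-bit codes do not survive a 32-bit float")
```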

Black Belt

>>we have 27 bits - there must be a chip that does this 

Texas Instruments lists the ADS1262/3 as a 32-bit A/D. Factoring out noise, TI reports 27-bit ENOB at 2.5 samples per second.

Check slide ADS1262/3 in

INTEGER :: S ! sample from device, assumed signed 27-bit, to be converted to the +/-15 g range
! **** ADS1262/3 may return a signed 32-bit value in which the 27 msbs are the precise value and the 5 lsbs are noise
! **** the following assumes the internal 32-bit value >> 5 is reported out as the signed result
REAL(8) :: G

G = REAL(S,8) * 15.0_8 / (2.0_8**26)   ! when the signed value is in the 27 lsbs
! G = REAL(S,8) * 15.0_8 / (2.0_8**31) ! when the signed value is in the 27 msbs

Jim Dempsey


Valued Contributor II

Thanks to all -- I understand now --