I am updating code which reads from an older device (an SLC500 PLC) that uses 16-bit reals, delivered raw as two bytes, possibly in IEEE format. This used to work directly with 16-bit MS Fortran.
IVF does not seem to have back-compatible support for REAL*2. I would be very appreciative if anyone could provide/link to code which translates 2-byte reals to 4-byte reals.
TIA
Paul Curtis
It's not going to be anything like IEEE format. There is a 16-bit floating-point format used by graphics processors, but I don't think there's a standard. All the 16-bit Fortrans I knew used 32-bit reals; there aren't enough bits in a 16-bit type to do anything useful for most scientific and engineering applications.
So far, I've been unable to find any documentation on a float format the SLC500 may have used. Do you have any? Or do you have sample values in hex and decimal?
Steve, thanks for the immediate reply.
My recollection is that in previous incarnations the two data bytes were simply equivalenced (directly, or maybe in swapped order) to a 16-bit REAL and the value magically appeared, but I could be mistaken. A-B says their REAL value range is 10**38, which seems right for a 16-bit REAL format. My post was submitted after my own fruitless search for A-B's format details, but finding any info from that company is always a major Easter-egg hunt. I will continue searching for A-B documentation.
Thanks again.
If we had some hex values to look at, with known decimal equivalents, that would help. That the range is 10**38 tells you that the exponent is 8 bits, but it doesn't tell you what the exponent bias is nor where the implied radix point is in the fraction. There are several variations of this so you can't assume anything. It could even use hex normalization like the IBM 360, but I doubt it. You also need to know the byte ordering.
Most likely, it is similar to an IEEE single but with only 7 fraction bits. That would make some sense for a PLC.
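If it does turn out to be an IEEE single with the low 16 fraction bits dropped, the conversion is trivial: the 16 bits are just the upper half of the 32-bit pattern. Here is a minimal sketch under that assumption (1 sign bit, 8 exponent bits with bias 127, 7 fraction bits); the sample hex value and the routine name are made up for illustration, so check it against known values before trusting it:

program plc16_demo
    use, intrinsic :: iso_fortran_env, only : int16, int32, real32
    implicit none
    integer(int16) :: sample

    ! Hypothetical 16-bit pattern: under the assumed format, z'3FC0' would be 1.5
    sample = int (z'3FC0', int16)
    print *, plc16_to_real (sample)

contains

    ! Convert a 16-bit PLC word to a 32-bit real, assuming the word is
    ! the upper half of an IEEE single (sign, 8-bit exponent, 7 fraction bits).
    function plc16_to_real (word16) result (r)
        integer(int16), intent(in) :: word16
        real(real32) :: r
        integer(int32) :: bits32

        ! Mask off any sign extension, shift the pattern into the top
        ! 16 bits, and reinterpret it as a real.
        bits32 = ishft (iand (int (word16, int32), 65535), 16)
        r = transfer (bits32, r)
    end function plc16_to_real

end program plc16_demo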
A 16-bit real sounds mighty fishy!
If it has an exponent range of 10**38 plus or minus, that uses up 8 bits, leaving only 8 bits for significant digits. And just barely digits, as a byte only gives you a smidgen over two significant decimal digits.
They're probably 32-bit reals. Peer at the code some more and see if that might be what's happening.
7 fraction bits. You forgot the sign bit.
I have heard of such things, mainly in graphics processing. I could believe that they could be used in some devices that don't need a whole lot of precision.
> fraction bits. You forgot the sign bit.
Yep, I did. And you forgot the (possibly) assumed most-significant bit that's always a "1" in the newer FP formats. So we're back to eight lousy bits of significand. Not exactly high-precision, but perhaps good enough for some video or process-control applications.
But as I look up "Allen-Bradley reals", they talk of reals taking up two registers, or a DWORD, which sounds more like 32 bits.
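If that's what the old code was actually reading, then the problem reduces to packing two consecutive 16-bit registers into one 32-bit IEEE single, and the only open question is word order. A minimal sketch, assuming a standard IEEE single; the register contents shown are made up, so try both orders against values you know:

program slc_dword_demo
    use, intrinsic :: iso_fortran_env, only : int16, int32, real32
    implicit none
    integer(int16) :: reg_a, reg_b

    ! Hypothetical register contents: z'3FC00000' is 1.5 as an IEEE single.
    reg_a = int (z'3FC0', int16)
    reg_b = 0_int16
    print *, 'reg_a high, reg_b low: ', regs_to_real (reg_a, reg_b)
    print *, 'reg_b high, reg_a low: ', regs_to_real (reg_b, reg_a)

contains

    ! Pack two 16-bit registers into one 32-bit word and reinterpret it
    ! as an IEEE single.  Which register is the high word is the caller's guess.
    function regs_to_real (upper, lower) result (r)
        integer(int16), intent(in) :: upper, lower
        real(real32) :: r
        integer(int32) :: bits32

        ! Mask off sign extension from each 16-bit value before combining.
        bits32 = ior (ishft (iand (int (upper, int32), 65535), 16), &
                      iand (int (lower, int32), 65535))
        r = transfer (bits32, r)
    end function regs_to_real

end program slc_dword_demo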