Wikipedia:Reference desk/Archives/Computing/2019 March 14
March 14
Integer-plus-floating-point representation
Do any notable programs represent numbers as an integer plus a floating-point number, so that for example values close to 1 could be as precise as values close to zero? (I'm aware that `log1p` and `expm1` help with specific cases of that.) NeonMerlin 22:37, 14 March 2019 (UTC)
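A quick Python sketch of the precision gap the question describes (illustrative values only): doubles are far more closely spaced near 0 than near 1, which is exactly what `log1p` and `expm1` exploit.

```python
import math  # math.ulp requires Python 3.9+

# Adjacent doubles are ~2.2e-16 apart near 1.0 but vastly closer near 0,
# so a value conceptually equal to "1 + tiny" keeps more precision if only
# "tiny" is stored.
tiny = 1e-17
print(1.0 + tiny == 1.0)      # True: tiny is absorbed and lost
print(math.ulp(1.0))          # ~2.22e-16, the gap between doubles at 1.0
print(math.ulp(tiny))         # ~1.5e-33, the gap near 1e-17

# log1p/expm1 accept the offset-from-1 form directly:
print(math.log(1.0 + tiny))   # 0.0 -- the information is already gone
print(math.log1p(tiny))       # ~1e-17, correct to full precision
```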
- No notable programs that you have ever heard of, but in the embedded systems world it is done all of the time. For example, a while back I was working on the design of a precision servo system. The feedback signal was always very close to ten volts (if it wasn't, the system worked to bring it back to ten volts). We programmed it with a fixed ten-volt offset plus (or minus) a floating-point number. This was for the exact reason you mentioned: we didn't need precision close to zero volts -- we needed it close to ten volts. I have also seen cases where a floating-point constant was used as an offset for a floating-point variable. --Guy Macon (talk) 22:49, 14 March 2019 (UTC)
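A minimal sketch of that offset scheme (names and values are illustrative, not the original firmware; it assumes the quantity stored is the deviation from the setpoint):

```python
# Offset representation: keep only the deviation from a known setpoint, so
# the float's precision is spent near 10 V instead of near 0 V.
OFFSET_VOLTS = 10  # exact integer constant, analogous to a value held in ROM

def store(volts: float) -> float:
    """Keep only the small deviation from the setpoint."""
    return volts - OFFSET_VOLTS

def recover(delta: float) -> float:
    """Reconstruct the physical value when needed."""
    return OFFSET_VOLTS + delta

delta = store(10.0000003)  # a feedback reading very close to 10 V
print(delta)               # ~3e-07: the deviation itself, kept at full scale
print(recover(delta))      # 10.0000003
```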
- If I understand you right, you shifted the origin of the coordinate system by 10, which in this case gives you about 3.3 bits (log2 of 10) of extra precision. On embedded systems, where you probably use 32-bit floats, that may be significant. But if you actually need to store both the integer and the float, for a total of 64 bits, I have a hard time imagining a situation where that would be better than a 64-bit double float. And it would complicate the code quite a lot. However, the OP might be interested in the fact that many LISP/Scheme systems have exact rational arithmetic, storing values as normalised fractions of arbitrary-precision integers. --Stephan Schulz (talk) 20:31, 16 March 2019 (UTC)
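The trade-off is easy to see with 32-bit floats, e.g. via NumPy (a sketch assuming a deviation of a few hundred nanovolts from the 10 V setpoint):

```python
import numpy as np

x = 3e-7                            # deviation from the 10 V setpoint
plain = np.float32(10.0 + x)        # one float32 holding the whole value
offset = 10 + float(np.float32(x))  # integer offset plus a float32 deviation

print(float(plain) - 10.0)   # 0.0: adjacent float32s near 10.0 are ~9.5e-7
                             # apart, so the 3e-7 deviation rounds away
print(offset - 10.0)         # ~3e-07: the deviation survives intact
```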
- Try designing a servo system around an 8-bit Microchip ATtiny with 4 KB of ROM, 256 bytes of RAM, a 10-bit ADC, PWM for the DAC, and a dual op-amp to scale it all to the +/-10 volts needed at the output. The system we were controlling was very slow to respond, so we were able to get another 4 bits or so of real-world resolution from the DAC/ADC by averaging, and we used a 16-bit signed integer constant in ROM and a 16-bit float in RAM. All of this was in hand-coded assembly language. See Half-precision floating-point format. In our particular application, it worked a lot better if we adjusted the constant so that the float spent a lot of time in the denormal number region (see the sketch below).
- That was good info about the LISP/Scheme systems. Thanks! --Guy Macon (talk) 21:48, 16 March 2019 (UTC)
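A sketch of why parking the 16-bit float in the denormal range helps, using NumPy's IEEE `float16` as a stand-in for the hand-rolled format described above:

```python
import numpy as np

# Below the smallest normal half-precision number (2**-14), values are
# denormal and uniformly spaced 2**-24 apart -- effectively fixed-point
# behaviour with a constant, predictable step size.
print(float(np.spacing(np.float16(2.0 ** -15))))  # ~5.96e-08 == 2**-24
print(float(np.spacing(np.float16(0.25))))        # ~2.44e-04: normal range,
print(float(np.spacing(np.float16(1.0))))         # ~9.77e-04: step grows
                                                  # with magnitude
```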
- Python also has exact rationals (see the docs for the `fractions` module). But in general, to avoid precision losses like the one in the OP's question, you'll want to do different things depending on the problem. That's why numerical analysis (which is basically the study of what to do in those situations) is a big area of mathematics rather than a few tricks like "represent a real as an integer plus a fraction". 173.228.123.166 (talk) 07:23, 17 March 2019 (UTC)
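A short illustration of the exact-rational behaviour the `fractions` module provides:

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding at any magnitude, at the cost of
# growing numerators/denominators and slower arithmetic.
print(Fraction(1, 10) + Fraction(2, 10))                      # 3/10, exactly
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
print(0.1 + 0.2 == 0.3)                                       # False with
                                                              # binary doubles
```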