# Interval package


## Revision as of 09:20, 14 October 2014

The interval package provides data types and fundamental operations for real-valued interval arithmetic based on the common floating-point format “binary64,” a. k. a. double precision. **Interval arithmetic** produces mathematically proven numerical results. The package aims to be compliant with the (upcoming) IEEE 1788 standard and therefore implements the *set-based* interval arithmetic flavor.

Warning: The package has not yet been released.


## Motivation

> Give a digital computer a problem in arithmetic, and it will grind away methodically, tirelessly, at gigahertz speed, until ultimately it produces the wrong answer. … An interval computation yields a pair of numbers, an upper and a lower bound, which are guaranteed to enclose the exact answer. Maybe you still don’t know the truth, but at least you know how much you don’t know.
>
> — Brian Hayes, DOI: 10.1511/2003.6.484

| Standard floating point arithmetic | Interval arithmetic |
|---|---|
| `octave:1> 19 * 0.1 - 2 + 0.1`<br>`ans = 1.3878e-16` | `octave:1> x = infsup ("0.1");`<br>`octave:2> 19 * x - 2 + x`<br>`ans = [-3.1918911957973251e-16, +1.3877787807814457e-16]` |

Floating point arithmetic, as specified by IEEE 754, is available in almost every computer system today. It is widespread, implemented in common hardware, and an integral part of programming languages. For example, the double-precision format is the default numeric data type in GNU Octave. The benefits are obvious: arithmetic operations are well-defined and highly efficient, and results are comparable between different systems.

However, floating point arithmetic has some downsides in practice, which will eventually produce errors in computations:

- Floating point arithmetic is often used mindlessly by developers. [1]
- The binary data types are categorically not suitable for financial computations. Representational errors are very often introduced when “real world” decimal numbers are converted to binary.
- Even a proficient developer is constrained by most development environments / technologies, which expose only a very limited subset of IEEE 754: only one or two data types, no rounding modes, …
- Results are hardly predictable. All operations produce the best possible accuracy *at runtime*; this is how floating point works. Contrariwise, financial computer systems typically use a fixed-point arithmetic (COBOL, PL/I, …), where overflow and rounding can be precisely predicted *at compile-time*.
- If you do not know the technical details (cf. the first bullet), you ignore the fact that the computer lies to you in many situations. For example, when looking at numerical output and the computer says “`ans = 0.1`,” this is not absolutely correct. In fact, the value is only *close enough* to the value 0.1.
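The representational error behind “`ans = 0.1`” can be demonstrated in any language that uses binary64; the following small Python sketch (Python’s `float` is the same binary64 format as Octave’s default type) makes the stored value explicit:

```python
from fractions import Fraction

# The binary64 double closest to the decimal number 0.1 is a nearby
# dyadic rational, not exactly 1/10.
stored = Fraction(0.1)            # exact value of the stored double
print(stored)                     # 3602879701896397/36028797018963968
print(stored == Fraction(1, 10))  # False

# The error accumulates: mathematically, 19*0.1 - 2 + 0.1 equals 0,
# but in binary64 the result is a tiny nonzero number.
print(19 * 0.1 - 2 + 0.1)
```

Note that the interval result shown in the table above is honest about exactly this: it encloses both the mathematically exact answer 0 and the rounding error.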

Interval arithmetic addresses the above problems in its very own way and introduces new possibilities for algorithms. For example, the [interval Newton method](http://en.wikipedia.org/wiki/Interval_arithmetic#Interval_Newton_method) is able to find *all* zeros of a particular function.
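To illustrate the idea (this is a sketch, not the package’s implementation), here is a minimal interval Newton method in Python. It omits the outward (directed) rounding a rigorous implementation needs, and the interval extensions `F` and `dF` for the sample function f(x) = x² − 2 are hand-written for this example:

```python
def interval_newton(f, F, dF, lo, hi, tol=1e-10, zeros=None):
    """Enclose all zeros of f on [lo, hi].

    F(lo, hi)  -- interval enclosure of f  over [lo, hi]
    dF(lo, hi) -- interval enclosure of f' over [lo, hi]
    """
    if zeros is None:
        zeros = []
    flo, fhi = F(lo, hi)
    if flo > 0.0 or fhi < 0.0:
        return zeros              # 0 is not in F(X): no zero in X
    if hi - lo < tol:
        zeros.append((lo, hi))    # tiny box that may contain a zero
        return zeros
    m = 0.5 * (lo + hi)
    dlo, dhi = dF(lo, hi)
    if dlo <= 0.0 <= dhi:
        # derivative enclosure contains 0: cannot divide, bisect instead
        interval_newton(f, F, dF, lo, m, tol, zeros)
        interval_newton(f, F, dF, m, hi, tol, zeros)
        return zeros
    fm = f(m)
    qlo, qhi = sorted((fm / dlo, fm / dhi))
    # Newton operator N(X) = m - f(m)/dF(X), intersected with X
    nlo, nhi = max(lo, m - qhi), min(hi, m - qlo)
    if nlo > nhi:
        return zeros              # empty intersection: no zero in X
    if (nlo, nhi) == (lo, hi):
        # no contraction: bisect to guarantee progress
        interval_newton(f, F, dF, lo, m, tol, zeros)
        interval_newton(f, F, dF, m, hi, tol, zeros)
        return zeros
    return interval_newton(f, F, dF, nlo, nhi, tol, zeros)

# Example: f(x) = x^2 - 2 on [-3, 3] has the two zeros -sqrt(2), +sqrt(2).
def sq(lo, hi):                   # enclosure of x^2 over [lo, hi]
    if lo >= 0.0:
        return lo * lo, hi * hi
    if hi <= 0.0:
        return hi * hi, lo * lo
    return 0.0, max(lo * lo, hi * hi)

F = lambda lo, hi: (sq(lo, hi)[0] - 2.0, sq(lo, hi)[1] - 2.0)
dF = lambda lo, hi: (2.0 * lo, 2.0 * hi)
for lo, hi in interval_newton(lambda x: x * x - 2.0, F, dF, -3.0, 3.0):
    print([lo, hi])               # two enclosures, around ±1.41421356...
```

Where an ordinary Newton iteration converges to one zero (or diverges), the interval version bisects whenever the derivative enclosure contains 0, so both square roots are found and subintervals provably free of zeros are discarded.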