# Interval package

The interval package provides data types and fundamental operations for real-valued interval arithmetic based on the common floating-point format “binary64”, a.k.a. double-precision. **Interval arithmetic** produces mathematically rigorous numerical results: computed enclosures are guaranteed to contain the exact result. The package aims to be compliant with the (upcoming) IEEE 1788 standard and therefore implements the *set-based* interval arithmetic flavor.

Warning: The package has not yet been released.

## Motivation

> Give a digital computer a problem in arithmetic, and it will grind away methodically, tirelessly, at gigahertz speed, until ultimately it produces the wrong answer. … An interval computation yields a pair of numbers, an upper and a lower bound, which are guaranteed to enclose the exact answer. Maybe you still don’t know the truth, but at least you know how much you don’t know.
>
> — Brian Hayes, DOI: 10.1511/2003.6.484

| Standard floating-point arithmetic | Interval arithmetic |
| --- | --- |
| `octave:1> 19 * 0.1 - 2 + 0.1`<br>`ans = 1.3878e-16` | `octave:1> x = infsup ("0.1");`<br>`octave:2> 19 * x - 2 + x`<br>`ans = [-3.1918911957973251e-16, +1.3877787807814457e-16]` |

Floating-point arithmetic, as specified by IEEE 754, is available in almost every computer system today. It is widespread, implemented in common hardware, and an integral part of programming languages. For example, the double-precision binary64 format is the default numeric data type in GNU Octave. The benefits are obvious: the behavior of arithmetic operations is well-defined, the operations are highly efficient, and results are comparable between different systems.

However, floating-point arithmetic has some downsides in practice, which can eventually produce errors in computations.

- Floating-point arithmetic is often used mindlessly by developers. [1]
- The binary data types are categorically unsuitable for financial computations. Representational errors are very often introduced when “real world” decimal numbers are used.
- Even if the developer is proficient, most development environments / technologies limit floating-point arithmetic capabilities to a very limited subset of IEEE 754: only one or two data types, no rounding modes, …
- Results are hardly predictable. All operations produce the best possible accuracy *at runtime*; this is how floating point works. Contrariwise, financial computer systems typically use fixed-point arithmetic (COBOL, PL/I, …), where overflow and rounding can be precisely predicted *at compile-time*.
- If you do not know the technical details (cf. the first bullet), you ignore the fact that the computer lies to you in many situations. For example, when the computer prints `ans = 0.1` as numerical output, this is not absolutely correct; the stored value is only *close enough* to 0.1, as the demonstration after this list shows.
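
One can see this in plain Octave by printing the stored value with more significant digits; the decimal literal `0.1` is rounded to the nearest binary64 number:

```octave
## The literal 0.1 cannot be represented exactly in binary64; the
## stored value is slightly greater than one tenth.
printf ("%.17g\n", 0.1)   # prints 0.10000000000000001
```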

Interval arithmetic addresses the above problems in its own way and introduces new possibilities for algorithms. For example, the interval Newton method is able to find *all* zeros of a particular function.
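
The following is a minimal, illustrative sketch of such an interval Newton iteration, built from the package's `infsup` constructor together with `mid` and `intersect`; the function `f (x) = x^2 - 2` and the starting box are assumptions chosen for demonstration, not the package's built-in root finder.

```octave
pkg load interval

## Illustrative interval Newton iteration for f (x) = x^2 - 2,
## f' (x) = 2 x, starting from the box X = [1, 2].
X = infsup (1, 2);
for i = 1 : 5
  m = infsup (mid (X));              # midpoint of X as a degenerate interval
  N = m - (m .^ 2 - 2) ./ (2 .* X);  # Newton operator N (X) = m - f (m) / f' (X)
  X = intersect (X, N);              # any zero of f in X stays enclosed
endfor
X                                    # tight enclosure of sqrt (2)
```

Each step shrinks the box while the enclosure property guarantees that no zero is lost; if the intersection becomes empty, the box provably contains no zero.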

## Theory

### Moore's fundamental theorem of interval arithmetic

Let $\mathbf{y} = f(\mathbf{x})$ be the result of interval-evaluation of $f$ over a box $\mathbf{x} = (x_1, \ldots, x_n)$ using any interval versions of its component library functions. Then

- In all cases, $\mathbf{y}$ contains the range of $f$ over $\mathbf{x}$, that is, the set of values of $f$ at points of $\mathbf{x}$ where it is defined: $\mathbf{y} \supseteq \operatorname{Rge}(f \mid \mathbf{x}) = \{f(x) \mid x \in \mathbf{x} \cap \operatorname{Dom}(f)\}$.
- If also each library operation in $f$ is everywhere defined on its inputs, while evaluating $\mathbf{y}$, then $f$ is everywhere defined on $\mathbf{x}$, that is $\operatorname{Dom}(f) \supseteq \mathbf{x}$.
- If in addition, each library operation in $f$ is everywhere continuous on its inputs, while evaluating $\mathbf{y}$, then $f$ is everywhere continuous on $\mathbf{x}$.
- If some library operation in $f$ is nowhere defined on its inputs, while evaluating $\mathbf{y}$, then $f$ is nowhere defined on $\mathbf{x}$, that is $\operatorname{Dom}(f) \cap \mathbf{x} = \emptyset$.
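
As a small illustration of the first claim, evaluating `f (x) = x^2 - x` over the box $[0, 1]$ yields a guaranteed enclosure of the exact range $[-1/4, 0]$; the overestimation is caused by the dependency problem, but the enclosure itself is what the theorem asserts. The function here is an assumption chosen for demonstration.

```octave
pkg load interval

## f (x) = x^2 - x over x = [0, 1]; the exact range is [-0.25, 0].
x = infsup (0, 1);
y = x .^ 2 - x   # yields [-1, +1], which encloses [-0.25, 0]
```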