Machine Precision or Machine Epsilon Formula


I'm reading Heath's Scientific Computing.

The machine epsilon is denoted by $\epsilon$. With rounding to nearest, the machine epsilon can be determined by the following formula:

$\epsilon = \frac{1}{2} \beta^{1-p}$

where $\beta$ is the base and $p$ is the precision (i.e. number of significant digits).

For $\beta = 10$ and $p = 3$ since $1 + 0.005 = 1.01 > 1$, $\epsilon = 0.005 = \frac{1}{2} 10^{1-3}= \frac{1}{2} \beta^{1-p}$

For $\beta = 2$ and $p = 3$ since $(1 + 0.001)_2 = (1.01)_2 > 1$, $\epsilon = (0.001)_2 = \frac{1}{2} 2^{1-3} = \frac{1}{2} \beta^{1-p}$

For $\beta = 3$ and $p = 3$ since $(1 + 0.002)_3 = (1.01)_3 > 1$, $\epsilon = (0.002)_3 = \frac{2}{27}$ which does not agree with $\frac{1}{2}3^{1-3}= \frac{1}{18}$

Is the formula only applicable to bases that can represent $\frac{1}{2}$ exactly or did I misunderstand how the formula is to be applied?
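For concreteness, here is the arithmetic from the three examples above redone with exact fractions (a quick sketch; the base-conversion values are taken directly from the examples):

```python
from fractions import Fraction

# The formula (1/2) * beta**(1-p) for the three cases above, in exact arithmetic:
for beta, p in [(10, 3), (2, 3), (3, 3)]:
    eps = Fraction(1, 2) * Fraction(beta) ** (1 - p)
    print(f"beta={beta}, p={p}: eps = {eps}")

# (0.002)_3 = 2 * 3**-3 = 2/27, which indeed differs from 1/18:
print(Fraction(2, 27) == Fraction(1, 18))   # False
```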


2 Answers


Machine epsilon $\epsilon$ is the distance between 1 and the next floating point number.

Machine precision $u$ is the accuracy of the basic arithmetic operations. This number is also known as the unit roundoff.

When the precision is $p$ and the radix is $\beta$ we have $$ \epsilon = \beta^{1-p}.$$ To see this, simply add a $1$ in the last digit of $1$. If we round to nearest, then $$ u = \frac{1}{2} \beta^{1-p}.$$
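For a concrete instance of both formulas: IEEE double precision has $\beta = 2$ and $p = 53$, so $\epsilon = 2^{-52}$ and $u = 2^{-53}$. A small sketch checking this in Python (whose `sys.float_info.epsilon` and `math.ulp` report the gap $\epsilon$):

```python
import math
import sys

# IEEE double: beta = 2, p = 53, so epsilon = 2**(1 - 53) = 2**-52
eps = 2.0 ** -52
assert sys.float_info.epsilon == eps   # the gap between 1 and the next float
assert math.ulp(1.0) == eps            # same quantity via math.ulp (Python 3.9+)

# Round to nearest: increments at or below u = eps/2 are absorbed
assert 1.0 + eps > 1.0        # the full gap is visible
assert 1.0 + eps / 2 == 1.0   # exactly halfway: the tie rounds to even, back to 1.0
assert 1.0 + eps / 4 == 1.0   # below the unit roundoff u, rounds back to 1.0
```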

The wrong formula gave you the right result for $\beta \in \{2,10\}$ because your ties rounded up, i.e. you applied round-half-up, the default rounding rule for human calculators.

To be fair, the literature is not in agreement and different terms abound. I recommend that you follow Higham's book: "Accuracy and stability of numerical algorithms".


You are confusing real numbers with their representations.

You write $(1 + 0.002)_3 \stackrel?= (1.01)_3,$ which is an abuse of notation to begin with; the left side is not equal to the right, it rounds to the right-hand side when $p = 3.$

The question Heath appears to be trying to answer is, "What is the largest relative error due to a single rounding off?" For $\beta = 10$ and $p = 3$ we can use $1.005$ as a test case because that is the very smallest number between $1.00$ and $1.01$ that rounds up instead of down.

But when $\beta = 3$ and $p = 3,$ lots of other numbers that are less than $1.002_3$ also round off to $1.01_3$, for example $(1 + 0.001112)_3.$

In fact, the very smallest number between $1$ and $1.01_3$ that rounds up to $1.01_3$ is $$1.001111\ldots_3 = 1 + \frac1{18}.$$
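This threshold can be checked with exact rational arithmetic (a sketch; the `Fraction` values below encode the base-3 expansions discussed in this answer):

```python
from fractions import Fraction

# Gap between 1 and the next base-3 float 1.01_3 (p = 3) is 3**(1-3) = 1/9
gap = Fraction(3) ** (1 - 3)
midpoint = 1 + gap / 2
assert midpoint == 1 + Fraction(1, 18)

# 1.001111..._3 = 1 + sum of 3**-k for k >= 3, a geometric series summing to 1/18
tail = Fraction(1, 27) / (1 - Fraction(1, 3))
assert tail == Fraction(1, 18)

# The asker's candidate (0.002)_3 = 2/27 is strictly larger than this threshold
assert Fraction(1, 18) < Fraction(2, 27)
```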

