Blunders: Sometimes a test run with known results is worthwhile, but it is no guarantee of freedom from foolish error.
Truncation Error: arises when an infinite process is cut short, e.g., when a series is approximated by its terms up to the cubic power. Approximating with the cubic gives an inexact answer; the error is due to truncating the series. Wherever we choose to cut the series expansion, we must be satisfied with an approximation to the exact analytical answer.
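As an illustration (not from the notes), the MATLAB sketch below truncates the Taylor series of e^x after the cubic term and compares it with the exact value; the choice of e^x and of the sample point are assumptions made here.
x = 0.5;                          % sample point (arbitrary choice)
approx = 1 + x + x^2/2 + x^3/6;   % series truncated after the cubic term
exact  = exp(x);                  % analytical value
exact - approx                    % the truncation error (about 2.9e-3)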
Propagated Error: more subtle than the other kinds. By propagated we mean an error in the succeeding steps of a process due to the occurrence of an earlier error. Propagation is of critical importance: in stable numerical methods, errors made at early points die out as the method continues; in an unstable numerical method, they do not die out.
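A classic illustration of propagation (not taken from these notes) is the recurrence I_n = 1/n - 5*I_(n-1), which is exact for the integrals I_n = integral from 0 to 1 of x^n/(x+5) dx; the tiny round-off error in I_0 is multiplied by -5 at every step, so the forward recurrence is unstable:
I = log(6/5);          % I_0 = ln(6/5), stored with a tiny round-off error
for n = 1:25
    I = 1/n - 5*I;     % each step multiplies the inherited error by -5
end
I                      % nonsense; the true I_25 is small and positive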
Round-off Error: All computing devices represent numbers, except for integers and some fractions, with some imprecision. They use floating-point numbers of fixed word length, and the true values are usually not expressed exactly by such representations. If the numbers are rounded when stored as floating-point numbers, the round-off error is smaller than if the trailing digits were simply chopped off.
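A small MATLAB sketch (my addition) of both points: the decimal fraction 0.1 is not stored exactly, and rounding a value to a fixed number of digits leaves a smaller error than chopping it:
fprintf('%.20f\n', 0.1)                % the stored double is not exactly one tenth
x = 2/3;
rounded = round(x, 4);                 % keep four decimal digits by rounding
chopped = fix(x*1e4)/1e4;              % keep four decimal digits by chopping
[abs(x - rounded), abs(x - chopped)]   % rounding leaves the smaller error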
Absolute vs Relative Error: The accuracy of a result can be measured by the absolute error (the magnitude of the difference between the true and approximate values) or by the relative error (the absolute error divided by the magnitude of the true value). The distinction is of great importance: a given size of error is usually more serious when the magnitude of the true value is small.
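The MATLAB sketch below (values chosen only for illustration) shows why: the same absolute error of 0.001 is a 50% relative error for a small true value but a negligible one for a large true value.
abs_err = abs(0.002 - 0.001)                    % absolute error for a small true value
rel_err = abs_err / abs(0.002)                  % relative error = 0.5, i.e. 50%
rel_big = abs(1000.002 - 1000.001) / 1000.002   % about 1e-6: harmless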
Floating-Point Arithmetic: Performing an arithmetic operation gives no exact answers unless only integers or exact powers of 2 are involved. Computers work with floating-point numbers (real numbers), not just integers; the representation resembles scientific notation.
Table 1: Floating-point numbers and their normalized forms.
  Floating     Normalized (shifting the decimal point)
  13.524       0.13524 x 10^2
  -0.0442      -0.442 x 10^-1
The IEEE standard for storing floating-point numbers (see Table 1.4) divides the bits among three fields: the sign, the fraction part (called the mantissa), and the exponent part. There are three levels of precision (see Fig. 3).
Figure 3: Levels of precision.
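As a sketch of the three fields (my addition, using standard MATLAB bit functions and the IEEE convention of an implied leading 1 in the mantissa), a single-precision value can be unpacked like this:
x     = single(13.524);
bits  = typecast(x, 'uint32');                             % raw 32-bit pattern
sbit  = bitshift(bits, -31);                               % 1 sign bit
ebits = double(bitand(bitshift(bits, -23), uint32(255)));  % 8 biased exponent bits
fbits = double(bitand(bits, uint32(2^23 - 1))) / 2^23;     % 23 fraction bits
(-1)^double(sbit) * (1 + fbits) * 2^(ebits - 127)          % reconstructs 13.524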
Rather than use one of the bits for the sign of the exponent, exponents are biased. For single precision the 8 exponent bits give 2^8 = 256 stored values, from 0 (binary 00000000) to 255 (binary 11111111); these correspond to actual exponents from -127 to 128, so an exponent of -127 is stored as 0 and an exponent of 128 is stored as 255 (the bias is 127). With the exponent biased in this way and the mantissa having 1 as its maximum, the extreme single-precision values are:
Largest: 3.40282E+38; Smallest: 2.93873E-39
For double and extended precision the bias values are 1023 and 16383, respectively.
Certain mathematical operations are undefined, such as division by zero.
EPS: short for epsilon, used to represent the smallest machine value that can be added to 1.0 to give a result distinguishable from 1.0.
In MATLAB:
>> eps
ans = 2.2204e-016
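A brief check of the definition (my addition): adding eps to 1.0 is distinguishable from 1.0, adding half of eps is not, and single precision has its own, larger epsilon.
(1 + eps)   > 1      % true: distinguishable from 1.0
(1 + eps/2) > 1      % false: eps/2 is lost when added to 1.0
eps('single')        % about 1.1921e-07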
Round-off Error vs Truncation Error: Round-off occurs, even when the procedure is exact, due to the imperfect precision of the computer. Analytically, the derivative is defined by a limit,
  dy/dx = lim (h -> 0) [f(x + h) - f(x)] / h ;
the numerical procedure computes an approximate value for dy/dx with a small but nonzero value for h. As h is made smaller, the result is closer to the true value and the truncation error is reduced, but at some point (depending on the precision of the computer) round-off errors will dominate and the result becomes less exact. There is a point where the total computational error is least.
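The MATLAB sketch below illustrates this with a forward-difference approximation to a derivative; the test function f(x) = sin(x) and the point x = 1 are assumptions made here, not part of the notes.
f = @(x) sin(x);  x0 = 1;                 % true derivative is cos(1)
for h = 10.^(-1:-2:-15)                   % h = 1e-1, 1e-3, ..., 1e-15
    d = (f(x0 + h) - f(x0)) / h;          % forward-difference approximation
    fprintf('h = %8.1e   error = %9.2e\n', h, abs(d - cos(x0)));
end
% the error first shrinks (truncation) and then grows again (round-off)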
Well-posed and well-conditioned problems: The accuracy of a result depends not only on the computer's accuracy. A problem is well-posed if a solution exists, is unique, and changes continuously as the parameters of the problem vary. In practice the problem we actually solve is often a modification of the original: a nonlinear problem may be replaced by a linear problem, an infinite one by one that is large but finite, a complicated one by a simplified one. A well-conditioned problem is not sensitive to changes in the values of the parameters (small changes to the input do not cause large changes in the output). In modelling and simulation the model may not be a really good one, but if the problem is well-conditioned, the model still gives useful results in spite of small inaccuracies in the parameters.
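As an illustration of the opposite, ill-conditioned case (my own example; the Hilbert matrix is a standard test case, not taken from these notes), a tiny relative change in the input produces a much larger relative change in the output:
A  = hilb(10);  b = A * ones(10, 1);        % exact solution is a vector of ones
x1 = A \ b;                                  % solve the original problem
x2 = A \ (b .* (1 + 1e-10*randn(10, 1)));    % solve with the input slightly perturbed
norm(x2 - x1) / norm(x1)                     % much larger than 1e-10: ill-conditioned
For a well-conditioned problem the same experiment would give a ratio comparable to the size of the input perturbation.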
Forward and Backward Error Analysis: In forward error analysis we estimate the difference between the computed result and y, where y is the value we would get if the computational error were absent; in backward error analysis we ask how small a change in the input data would make the exact computation reproduce the computed result.
Example: only two digits were kept in the computation, giving a relative error of 0.3 by one measure and a relative error of 0.15 by the other.
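A tiny sketch of the two viewpoints (my own example, not the one from the notes), taking 1.4 as an approximation to sqrt(2):
y_hat = 1.4;              % approximate value of sqrt(2)
abs(y_hat - sqrt(2))      % forward error: the answer is off by about 0.014
abs(y_hat^2 - 2)          % backward error: 1.4 is the exact sqrt of 1.96, an input off by 0.04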
Examples of Computer Numbers: Say we have a six-bit representation (not single or double precision) (see Fig. 4).
Figure 4: Computer numbers with a six-bit representation.
The representable positive values fall into one range and the negative values into another; there is even a discontinuity at the point zero, since zero is not in either range.
Figure 5: Upper: the number line in the hypothetical system; Lower: the IEEE standard.
In this very simple computer arithmetic system the gaps between stored values are very apparent, and many values cannot be stored exactly. For example, 0.601 will be stored as if it were 0.6250, the closest representable value, an error of 0.024. In the IEEE system the gaps are much smaller, but they are still present (see Fig. 5).
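In MATLAB the size of these gaps can be queried directly with eps(x), which returns the distance from x to the next larger representable number (a quick check, my addition):
eps(1)              % gap near 1 in double precision, about 2.2e-16
eps(1000)           % gap near 1000, about 1.1e-13: gaps widen with magnitude
eps(single(1))      % gap near 1 in single precision, about 1.2e-07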
Anomalies with Floating-Point Arithmetic:
Identities that hold in exact arithmetic can fail in floating-point arithmetic: for some combinations of values, these statements are not true. For example, adding 0.0001 ten thousand times should equal 1.0 exactly, but this is not true in single precision.
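A direct check in MATLAB (single precision, summing 0.0001 ten thousand times):
s = single(0);
for k = 1:10000
    s = s + single(0.0001);   % 0.0001 is not exactly representable in binary
end
s                             % not exactly 1.0
s - 1                         % the accumulated round-off error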