Testing for Floating-point Errors

Prerequisite

To make sense of the tests and test values on this page, it is important to understand the underlying computer-science fundamentals of floating-point arithmetic.

Patterns to test

For testing, choose fractions whose denominators are not powers of 2 → 0.1, 0.2, 0.3, 0.15, 0.05, etc.

Here is what you can do to try to reveal rounding or similar errors.

1. Small fractions

Why they cause errors:

  • Fractions whose denominators are not powers of 2 cannot be represented exactly in binary.

  • Adding/subtracting them accumulates rounding errors.
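A minimal Python sketch (illustrative, not from the original page) showing how the error accumulates when a non-representable fraction is added repeatedly:

```python
# Summing 0.1 ten times accumulates rounding error: the result is not
# exactly 1.0, because 0.1 has no finite binary expansion.
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)  # False
print(repr(total))   # 0.9999999999999999
```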

| Operation | What can go wrong |
| --- | --- |
| 0.1 + 0.2 | 0.30000000000000004 instead of 0.3 |
| 0.05 + 0.1 | Might yield 0.15000000000000002 |
| 0.15 - 0.1 | Might yield 0.04999999999999999 |
| 4.0 + 0.3 | Might yield 4.2999999999999998 |
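These rows can be reproduced directly; a short Python sketch (Python's float is an IEEE 754 float64):

```python
# Decimal fractions like 0.1 have no exact binary representation,
# so simple arithmetic differs from the exact decimal answer.
print(repr(0.1 + 0.2))      # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False
print(0.15 - 0.1 == 0.05)   # False
# Even when the short repr looks clean, the stored value is inexact;
# printing 17 significant digits reveals it:
print(f"{4.0 + 0.3:.17g}")  # 4.2999999999999998
```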

2. Subtraction of numbers very close together (catastrophic cancellation)

See Catastrophic Cancellation on Wikipedia.

Example with simplified rounding for illustration, no floating-point arithmetic yet:

  • Two lengths: L1 = 253.51 cm and L2 = 252.49 cm

    • The difference is L1 - L2 = 1.02 cm

  • Their rounded approximations are 254 and 252.

    • The difference is now 254 - 252 = 2 cm

As you can see, the new difference (2 cm) is almost double the original (1.02 cm).
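The length example above can be sketched in a few lines of Python (illustrative only; the rounding here is explicit, exactly as in the text):

```python
# Rounding to whole centimetres *before* subtracting almost doubles
# the difference, even though each rounded value is off by < 0.5 cm.
l1, l2 = 253.51, 252.49

exact_diff = l1 - l2                   # 1.02 cm (up to float noise)
rounded_diff = round(l1) - round(l2)   # 254 - 252 = 2 cm

rel_error = abs(rounded_diff - exact_diff) / exact_diff
print(rounded_diff)         # 2
print(round(rel_error, 2))  # 0.96 -> about 96% relative error
```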


Isolated rounding may be acceptable, but the subtraction result and the relative difference may be misleading or unacceptable, depending on context.

The above may happen if developers round too early. It may happen with sums as well.

Catastrophic Cancellation is not a property of floating-point arithmetic itself but of subtracting approximate values whose difference is small compared to their magnitude.

It applies just as much to large inputs as to small ones. What matters is only how small the difference is relative to the error in the inputs.

Exactly the same error would arise by subtracting 2.00052 km from 2.00054 km as approximations to 2.0005249 km and 2.0005351 km.

In floating-point arithmetic, values are already approximations due to limited precision. As a result, the same amplification of error can occur without any explicit rounding by the programmer. Subtracting two floating-point numbers that are close in value can therefore expose large relative errors, even when each value appears reasonable on its own.
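A small Python sketch of this effect (illustrative values, not from the original page): no explicit rounding happens, yet the subtraction exposes a large relative error.

```python
# The inputs are already approximations because of float64's limited
# precision. Subtracting two nearby values exposes that representation
# error as a large *relative* error in the result.
a = 1.0 + 1e-15   # nearest double is 1 + 1.1102230246251565e-15
b = 1.0
diff = a - b
print(diff)                 # 1.1102230246251565e-15, not 1e-15
rel_error = abs(diff - 1e-15) / 1e-15
print(round(rel_error, 2))  # 0.11 -> about 11% relative error
```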

How to pick test data

  • Use input pairs that are very close (e.g. 1.0000001 and 1.0000000)

  • Prefer large + small and nearly equal combinations

  • Compare results using:

    • tolerances (epsilon)

    • higher-precision references (e.g., Java's BigDecimal)

  • Watch for:

    • unexpected zeros

    • loss of small deltas
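The comparison strategies above can be sketched in Python; `Decimal` stands in for a higher-precision reference such as Java's BigDecimal:

```python
import math
from decimal import Decimal

result = 0.1 + 0.2

# Exact equality fails:
print(result == 0.3)                            # False
# Tolerance-based (epsilon) comparison passes:
print(math.isclose(result, 0.3, rel_tol=1e-9))  # True
# A higher-precision view shows what was actually stored:
print(Decimal(result))  # 0.3000000000000000444089209850062616...
```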

| Input expression | Might yield | Rationale for testing |
| --- | --- | --- |
| 1.0000001 - 1.0 | 0.000000100000000058 | Tests representation error. 1.0000001 is imprecise in binary; this reveals how the system handles values that cannot be stored exactly. The test distinguishes between how many digits a system displays versus how many it stores. It may not crash the system, but it can at least reveal a display bug: a value handled correctly in the backend may still display poorly in the UI. |
| (1e16 + 1) - 1e16 | 0.0 | Tests absorption (see below the table). Since 10^16 exceeds the 2^53 limit of significand precision in float64, it demonstrates the "dead zone" where small increments are ignored. Subtraction cannot recover them. |
| (x + y) - x where x = 1e20, y = 3.14 | 0.0 | Running totals: in accounting software, if you add small transactions (y) to a very large account balance (x), the balance may stop increasing entirely. |
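The two absorption rows from the table can be checked directly in Python (float64):

```python
# Absorption: adding a small value to a huge one can change nothing,
# and the later subtraction cannot recover what was never stored.
x, y = 1e20, 3.14

print((1e16 + 1) - 1e16)  # 0.0
print((x + y) - x)        # 0.0
print(1e16 + 1 == 1e16)   # True: the +1 was absorbed
```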

Once you reach 10^16, the "step size" (the smallest possible change the float can represent) becomes 2.0.

  • If you have 10^16 and try to add 1, the 1 is "too small to matter" and the result is rounded back down. The 1 is absorbed and vanishes, hence the "dead zone".
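The step size can be inspected directly with `math.ulp` (Python 3.9+):

```python
import math

# Near 1e16 the distance between adjacent doubles (the ulp) is 2.0:
# no float64 exists between 1e16 and 1e16 + 2, so adding 1 is rounded
# away entirely.
print(math.ulp(1e16))  # 2.0
print(1e16 + 1)        # 1e+16: the +1 vanished
print(1e16 + 2)        # the next representable value
```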

Where it may matter:

  • Large Counters: If you have a global counter for "Total Bytes Sent" or "Database Rows" and it hits 10^16, adding 1 to it will do nothing. The counter will appear frozen forever.

  • Timestamp Precision: High-precision timestamps (nanoseconds since epoch) quickly approach this limit. Adding a few nanoseconds to a large timestamp can result in zero change, causing infinite loops in logic that waits for the time to "increase."

  • Unique IDs: If a JavaScript frontend treats 64-bit integer IDs as Numbers (which are 64-bit floats), it will round the last few digits, effectively corrupting the ID and pointing to the wrong record.
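The ID-corruption effect starts right above 2^53 and can be reproduced in Python by forcing an integer ID through a float, just as JavaScript's Number would:

```python
# 2**53 + 1 is the first integer a float64 cannot represent exactly.
big_id = 9007199254740993  # 2**53 + 1

print(int(float(big_id)))  # 9007199254740992 -> points to the wrong record
# Two distinct IDs collide once they pass through a float64:
print(float(big_id) == float(big_id - 1))  # True
```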
