Deterministic Computing

Round-to-Nearest-Even: The Rounding Mode That Makes Determinism Possible

Why banker's rounding matters for bit-identical machine learning

Published: January 20, 2026, 19:00 · Reading time: 6 min
[Figure: Round-to-Nearest-Even decision tree showing how halfway cases round to even]

When you round 2.5 to an integer, what do you get?

If you said 3, you’re thinking like most programmers. If you said 2, you’re thinking like someone who needs deterministic systems. Both answers are mathematically valid — and that’s exactly the problem.

The Halfway Problem

Most rounding methods agree on the easy cases: 2.3 rounds to 2, 2.7 rounds to 3. The disagreement happens at the exact midpoint: 2.5.

The “round half up” rule (taught in schools) says 2.5 → 3. Simple, consistent, and subtly biased. Over millions of operations, this bias accumulates. Values drift upward. In safety-critical systems, drift is the enemy.

Round-to-Nearest-Even (RNE), also called banker’s rounding, takes a different approach: when the value is exactly halfway, round to the nearest even number.

  • 1.5 → 2 (rounds up to even)
  • 2.5 → 2 (rounds down to even)
  • 3.5 → 4 (rounds up to even)
  • 4.5 → 4 (rounds down to even)

The bias cancels out. On halfway cases you round up half the time and down half the time, so over millions of operations the expected error is zero rather than a steady drift.
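You can see the rule in action with standard C before reading the fixed-point version: rint() rounds according to the current IEEE-754 rounding mode, which defaults to round-to-nearest-even. A minimal sketch, assuming the default FE_TONEAREST mode is in effect:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* rint() honours the current rounding mode; the C default,
     * FE_TONEAREST, is round-to-nearest-even. */
    double xs[] = { 1.5, 2.5, 3.5, 4.5 };
    for (int i = 0; i < 4; i++) {
        printf("%.1f -> %.0f\n", xs[i], rint(xs[i]));
    }
    return 0;
}

/* Prints: 1.5 -> 2, 2.5 -> 2, 3.5 -> 4, 4.5 -> 4 */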

Why This Matters for Machine Learning

Neural network training involves billions of arithmetic operations. Each multiplication, each accumulation, each gradient update requires a rounding decision when you’re working in fixed-point arithmetic.

Consider a single training step with 1 million weight updates. If each update has even a tiny systematic bias, you’re introducing 1 million small errors — all in the same direction. After 10,000 training steps, that’s 10 billion biased operations.

With RNE, the errors are unbiased. They still exist (quantisation error is unavoidable), but they don’t accumulate in one direction. The trained model converges to the same place regardless of whether you’re running on x86, ARM, or RISC-V.
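The cancellation is easy to demonstrate with a toy simulation (illustrative only, not library code): feed a stream of exact halfway values through round-half-up and through RNE and compare the accumulated error.

#include <stdio.h>

int main(void)
{
    /* Push one million exact halfway values through round-half-up and
     * round-to-nearest-even, accumulating each scheme's signed error. */
    double drift_half_up = 0.0;
    double drift_rne = 0.0;

    for (long k = 0; k < 1000000; k++) {
        double x   = k + 0.5;                         /* exactly halfway */
        double up  = (double)(k + 1);                 /* always rounds up */
        double rne = (k % 2 == 0) ? (double)k         /* k even: stay    */
                                  : (double)(k + 1);  /* k odd: go to even */
        drift_half_up += up - x;                      /* +0.5 every time  */
        drift_rne     += rne - x;                     /* +0.5/-0.5 alternate */
    }

    printf("half-up drift: %.1f\n", drift_half_up);   /* 500000.0 */
    printf("RNE drift:     %.1f\n", drift_rne);       /* 0.0      */
    return 0;
}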

The Implementation

Here’s how certifiable-training implements RNE in pure C:

int32_t dvm_round_shift_rne(int64_t x, uint32_t shift,
                            ct_fault_flags_t *faults)
{
    if (shift > 62) {
        faults->domain = 1;
        return 0;
    }
    if (shift == 0) {
        return dvm_clamp32(x, faults);
    }

    int64_t half = 1LL << (shift - 1);
    int64_t mask = (1LL << shift) - 1;
    int64_t frac = x & mask;        /* remainder in [0, 2^shift), both signs */
    int64_t truncated = x >> shift; /* arithmetic shift: floor(x / 2^shift)  */

    /* Because the shift floors and the masked fraction is non-negative,
     * one code path handles positive and negative inputs alike. */
    if (frac > half) {
        truncated += 1;
    } else if (frac == half) {
        /* Exactly halfway: exactly one neighbour is even; step up if odd */
        if (truncated & 1) {
            truncated += 1;
        }
    }

    return dvm_clamp32(truncated, faults);
}

The key insight is that the arithmetic right shift computes floor(x / 2^shift) for negative values too, so frac is always the non-negative distance above the floor and a single code path covers both signs. The truncated & 1 check then settles the halfway case: if the integer part is already even, we leave it alone; if it’s odd, adding one lands on the even neighbour.
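Tracing two of the test vectors below shows the mechanics, one positive and one negative (Q16.16 values, shift = 16):

/* x = 0x00028000 (2.5):
 *   frac      = x & 0xFFFF = 0x8000   -> exactly halfway
 *   truncated = x >> 16    = 2        -> already even, left alone
 *   result    = 2
 *
 * x = -0x28000 (-2.5):
 *   frac      = x & 0xFFFF = 0x8000   -> exactly halfway
 *   truncated = x >> 16    = -3       -> floor of -2.5; odd, so +1
 *   result    = -2
 */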

Test Vectors

For any RNE implementation to be certifiable, it must pass these exact test vectors (from CT-MATH-001 §8):

  Input (Q16.16)       Shift   Expected   Reasoning
  0x00018000 (1.5)     16       2         Halfway, 2 is even
  0x00028000 (2.5)     16       2         Halfway, 2 is even
  0x00038000 (3.5)     16       4         Halfway, 4 is even
  0x00048000 (4.5)     16       4         Halfway, 4 is even
  0x00058000 (5.5)     16       6         Halfway, 6 is even
  -0x18000 (-1.5)      16      -2         Halfway, -2 is even
  -0x28000 (-2.5)      16      -2         Halfway, -2 is even
  -0x38000 (-3.5)      16      -4         Halfway, -4 is even

If your implementation produces different results for any of these inputs, it’s not RNE-compliant — and it won’t produce bit-identical results with other compliant implementations.
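A self-checking harness for these vectors might look like the sketch below. The ct_fault_flags_t layout and the dvm_clamp32 stand-in are illustrative assumptions rather than the real certifiable-training definitions; the dvm_round_shift_rne listing from above goes where the comment indicates.

#include <stdint.h>
#include <stdio.h>

typedef struct { int domain; int overflow; } ct_fault_flags_t; /* assumed shape */

static int32_t dvm_clamp32(int64_t v, ct_fault_flags_t *faults)
{
    /* Stand-in saturating clamp; the real library's behaviour may differ. */
    if (v > INT32_MAX) { faults->overflow = 1; return INT32_MAX; }
    if (v < INT32_MIN) { faults->overflow = 1; return INT32_MIN; }
    return (int32_t)v;
}

/* ... paste dvm_round_shift_rne from above here ... */

int main(void)
{
    static const struct { int64_t x; int32_t expected; } vectors[] = {
        { 0x00018000,  2 }, { 0x00028000,  2 }, { 0x00038000,  4 },
        { 0x00048000,  4 }, { 0x00058000,  6 },
        { -0x18000,   -2 }, { -0x28000,   -2 }, { -0x38000,   -4 },
    };
    int failures = 0;

    for (size_t i = 0; i < sizeof vectors / sizeof vectors[0]; i++) {
        ct_fault_flags_t faults = {0};
        int32_t got = dvm_round_shift_rne(vectors[i].x, 16, &faults);
        if (got != vectors[i].expected) {
            printf("FAIL: input %lld: got %d, expected %d\n",
                   (long long)vectors[i].x, got, vectors[i].expected);
            failures++;
        }
    }

    if (failures == 0) {
        printf("all vectors pass\n");
    } else {
        printf("%d failure(s)\n", failures);
    }
    return failures != 0;
}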

The Practical Impact

We’ve verified bit-identity across platforms using these test vectors. The certifiable-harness runs the full pipeline — data loading, training, quantisation, inference — and produces identical hashes on:

  • Google Cloud Debian VM (x86_64)
  • 11-year-old MacBook (x86_64)
  • RISC-V validation (in progress)

The same seed, the same data, the same hyperparameters → the same trained model, bit-for-bit. Not “close enough.” Not “within tolerance.” Identical.

Why Not Just Use Floating Point?

IEEE-754 floating point actually mandates RNE as the default rounding mode. So why not use floats?

Because “default” doesn’t mean “guaranteed.” Different compilers, different optimisation levels, different FPU implementations can produce different results. The x87 FPU uses 80-bit extended precision internally. Fused multiply-add operations change the rounding sequence. -ffast-math throws all guarantees out the window.
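The fused multiply-add point is easy to demonstrate. In the sketch below, a and b are chosen so their product differs from 1.0 by less than one ulp: rounding the product before the subtraction loses that difference, while fma() rounds only once and keeps it. This assumes the compiler does not itself contract the separate expression into an FMA (e.g. build with -ffp-contract=off on GCC or Clang):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0 + 0x1p-52;           /* 1 + one ulp */
    double b = 1.0 - 0x1p-52;           /* 1 - one ulp */
    /* a * b is exactly 1 - 2^-104; rounding the product first loses it. */
    double separate = a * b - 1.0;      /* two roundings: prints 0x0p+0    */
    double fused    = fma(a, b, -1.0);  /* one rounding:  prints -0x1p-104 */
    printf("separate: %a\nfused:    %a\n", separate, fused);
    return 0;
}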

Fixed-point with explicit RNE removes the ambiguity. Every operation is defined. Every intermediate result is specified. There’s no hidden precision, no compiler freedom, no platform variance.

The Trade-off

RNE adds complexity. The halfway check requires extra logic. The negative number handling is subtle (and easy to get wrong). It’s slower than simple truncation.

For systems where “close enough” is acceptable, this overhead isn’t justified. For systems where reproducibility is mandatory — aerospace, medical devices, autonomous vehicles — the overhead is negligible compared to the cost of non-determinism.

Conclusion

Round-to-Nearest-Even is a small detail with large consequences. It’s the difference between “our model training is reproducible” and “our model training is provably reproducible.”

The certifiable-* ecosystem uses RNE throughout: in matrix multiplication, in gradient computation, in activation functions, in loss calculation. Every rounding decision follows the same rule, producing the same result, on every platform.

For safety-critical machine learning, that consistency isn’t a nice-to-have. It’s a requirement.

As with any architectural approach, suitability depends on system requirements, risk classification, and regulatory context. For systems that must be certifiable, RNE is the foundation.


The certifiable-* ecosystem is open source. Explore the implementation or read the CT-MATH-001 specification for the complete mathematical foundation.

About the Author

William Murray is a Regenerative Systems Architect with 30 years of UNIX infrastructure experience, specializing in deterministic computing for safety-critical systems. Based in the Scottish Highlands, he operates SpeyTech and maintains several open-source projects including C-Sentinel and c-from-scratch.

Discuss This Perspective

For technical discussions or acquisition inquiries, contact SpeyTech directly.
