
Certifiable Inference

Deterministic, bit-perfect neural network inference for safety-critical systems

Published: January 15, 2026, 10:00 · Reading time: 2 min
Fixed-point inference pipeline showing input through convolution, activation, and pooling with bit-identical outputs

Most AI infrastructure is built for research, where “mostly reproducible” is good enough. In aerospace, medical devices, and autonomous vehicles, non-determinism isn’t just a bug — it’s a liability.

Modern ML inference engines like TensorFlow Lite, ONNX Runtime, and PyTorch Mobile are fundamentally non-deterministic. Floating-point operations vary across platforms. Hash table iteration is unpredictable. Memory allocation affects behaviour.

For safety-critical systems, you cannot prove correctness if behaviour varies.

The Three Pillars

certifiable-inference replaces the “black box” of modern ML with a deterministic pipeline:

1. Exact Math

Fixed-point arithmetic (Q16.16) ensures Input A + Model B = Output C across all platforms — x86, ARM, RISC-V — forever. No floating-point drift. No platform-dependent rounding.

typedef int32_t fixed_t;

#define FIXED_SHIFT 16
#define FIXED_ONE   (1 << FIXED_SHIFT)  // 65536

With 16 integer bits and 16 fractional bits, we get:

  • Range: -32768.0 to +32767.99998
  • Precision: 0.0000152588 (1/65536)
  • Determinism: Bit-identical across any compliant platform
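The core Q16.16 operations reduce to plain integer arithmetic, which is why they behave identically everywhere. A minimal sketch of how such operations could look (the names fx_from_int and fx_mul are illustrative, not necessarily the library's actual API):

```c
#include <stdint.h>

typedef int32_t fixed_t;

#define FIXED_SHIFT 16
#define FIXED_ONE   (1 << FIXED_SHIFT)  /* 65536 */

/* Convert an integer to Q16.16. */
fixed_t fx_from_int(int32_t n) {
    return n << FIXED_SHIFT;
}

/* Q16.16 multiply: widen to 64 bits, add half an LSB for
 * deterministic rounding, then shift back down.
 * Every platform computes the exact same bits. */
fixed_t fx_mul(fixed_t a, fixed_t b) {
    int64_t wide = (int64_t)a * (int64_t)b;
    wide += (int64_t)1 << (FIXED_SHIFT - 1);
    return (fixed_t)(wide >> FIXED_SHIFT);
}
```

Addition and subtraction need no special handling at all: two Q16.16 values in the same format add as ordinary 32-bit integers.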

2. Bounded Resources

No malloc() after initialisation. For a given model, every inference pass touches a fixed, statically known amount of memory and executes a fixed sequence of operations. All buffers are pre-allocated by the caller.

// All buffers provided by caller — no hidden allocation
fixed_t input_buf[256];
fixed_t kernel_buf[9];
fixed_t output_buf[196];

fx_matrix_t input, kernel, output;
fx_matrix_init(&input, input_buf, 16, 16);
fx_matrix_init(&kernel, kernel_buf, 3, 3);
fx_matrix_init(&output, output_buf, 14, 14);

This enables static memory analysis required for DO-178C Level A certification. The certifier can prove exactly how much memory the system will use — no runtime surprises.

3. Traceable Logic

Pure C99 implementation with zero hidden dependencies. No external libraries. No compiler intrinsics. Every operation is explicit and auditable.

Q16.16 fixed-point · Zero malloc() calls · <5% P95 jitter · C99 standard

What’s Implemented

The engine provides the core building blocks for convolutional neural networks:

  • Fixed-point arithmetic — Q16.16 with deterministic rounding
  • Matrix operations — Multiply, transpose, element-wise ops
  • 2D Convolution — Zero dynamic allocation, O(OH×OW×KH×KW)
  • Activation functions — ReLU, deterministic thresholding
  • Max Pooling — 2×2 stride-2 dimension reduction
  • Timing verification — Proven <5% jitter at 95th percentile
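To illustrate what "deterministic thresholding" means in practice, here is one plausible shape for the ReLU pass. The fx_matrix_t layout below is assumed for the sketch; the library's actual struct may differ, and the function is named fx_relu_sketch to make clear it is not the real fx_relu:

```c
#include <stdint.h>
#include <stddef.h>

typedef int32_t fixed_t;

/* Assumed matrix layout for this sketch. */
typedef struct {
    fixed_t *data;
    size_t rows;
    size_t cols;
} fx_matrix_t;

/* Deterministic ReLU: one integer compare per element.
 * No branches on floating-point state, no library calls,
 * so the work per element is identical on every platform. */
void fx_relu_sketch(const fx_matrix_t *in, fx_matrix_t *out) {
    size_t n = in->rows * in->cols;
    for (size_t i = 0; i < n; i++) {
        out->data[i] = (in->data[i] > 0) ? in->data[i] : 0;
    }
}
```

In-place operation falls out for free: passing the same matrix as input and output, as the usage example below does, is safe because each element is read before it is written.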

Usage Example

#include "matrix.h"
#include "convolution.h"
#include "activation.h"
#include "pooling.h"

// Pre-allocated buffers (no malloc)
fixed_t input_buf[256];    // 16×16 input
fixed_t kernel_buf[9];     // 3×3 kernel
fixed_t conv_buf[196];     // 14×14 after conv
fixed_t pool_buf[49];      // 7×7 after pooling

// Initialize matrices
fx_matrix_t input, kernel, conv_out, pool_out;
fx_matrix_init(&input, input_buf, 16, 16);
fx_matrix_init(&kernel, kernel_buf, 3, 3);
fx_matrix_init(&conv_out, conv_buf, 14, 14);
fx_matrix_init(&pool_out, pool_buf, 7, 7);

// Inference pipeline
fx_conv2d(&input, &kernel, &conv_out);    // Convolution
fx_relu(&conv_out, &conv_out);            // Activation
fx_maxpool_2x2(&conv_out, &pool_out);     // Pooling

// Result: 7×7 feature map, bit-perfect across all platforms

Why It Matters

Medical Devices

A pacemaker must deliver a signal within a 10 ms window. Non-deterministic timing risks life-threatening delays. With fixed-point arithmetic and bounded execution time, the timing budget is provable.

Autonomous Vehicles

ISO 26262 ASIL-D requires provable worst-case execution time (WCET). Floating-point variance undermines this: the same operation can take different amounts of time depending on its input values (subnormal operands, for example, are dramatically slower on many CPUs). Fixed-point integer operations take the same time regardless of operand values.

Aerospace

DO-178C Level A demands complete requirements traceability. “The model runs inference” is not certifiable. “The model executes exactly these operations in exactly this order with exactly this memory footprint” is certifiable.

The Certifiable Pipeline

certifiable-inference is part of a complete deterministic ML pipeline:

  • certifiable-data: Deterministic data loading and augmentation (✅ Released)
  • certifiable-training: Deterministic training with Merkle audit trail (✅ Released)
  • certifiable-inference: Deterministic inference (✅ Released)

Together, these provide end-to-end determinism: from data loading through training to deployment. The same seed produces the same model produces the same predictions, forever.

Certification Support

The implementation is designed to support certification under:

  • DO-178C — Aerospace software (Level A capable)
  • IEC 62304 — Medical device software (Class C capable)
  • ISO 26262 — Automotive functional safety (ASIL-D capable)
  • IEC 61508 — Industrial safety systems (SIL-4 capable)

Documentation includes Software Requirements Specifications (SRS), verification methods, and traceability matrices.

Getting Started

git clone https://github.com/williamofai/certifiable-inference
cd certifiable-inference
mkdir build && cd build
cmake ..
make
make test

Roadmap

  • Model loader — ONNX import for pre-trained models
  • Quantisation tools — FP32 to Q16.16 conversion with error bounds
  • Additional layers — Batch normalisation, additional pooling modes
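The planned FP32-to-Q16.16 conversion is, at its core, a scale by 2^16 and a round, with quantisation error bounded by half an LSB (1/131072 ≈ 7.6 × 10⁻⁶). A minimal sketch of such a converter, assuming round-half-away-from-zero (float_to_fixed is a hypothetical helper, not part of the current library):

```c
#include <stdint.h>

typedef int32_t fixed_t;

#define FIXED_ONE 65536

/* Hypothetical FP32/FP64 -> Q16.16 converter.
 * Rounds half away from zero; the result is within half an LSB
 * (1/131072) of the input. Inputs outside [-32768, +32768) would
 * overflow the Q16.16 range and must be rejected or saturated
 * by the caller. */
fixed_t float_to_fixed(double x) {
    double scaled = x * (double)FIXED_ONE;
    return (fixed_t)(scaled >= 0.0 ? scaled + 0.5 : scaled - 0.5);
}
```

A production quantiser would also report the per-weight rounding error so the error bound of the whole network can be stated in the certification evidence.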

Patent Protection

This implementation is built on the Murray Deterministic Computing Platform (MDCP), protected by UK Patent GB2521625.0. Commercial licensing available for proprietary use in safety-critical systems.


Same input. Same output. Always. GPL-3.0 licensed.

About the Author

William Murray is a Regenerative Systems Architect with 30 years of UNIX infrastructure experience, specializing in deterministic computing for safety-critical systems. Based in the Scottish Highlands, he operates SpeyTech and maintains several open-source projects including C-Sentinel and c-from-scratch.

Questions or Contributions?

Open an issue on GitHub or get in touch directly.
