Every C programmer’s journey starts the same way:
#include <stdio.h>
int main(void) {
printf("Hello, World!\n");
return 0;
}
Compile. Run. Celebrate. You’re a C programmer now.
Except you’re not. You’ve just learned that C is a language where you type things and hope they work. That lesson - hope as a development strategy - will follow you through pointers, memory allocation, and eventually into production systems where hope gets people killed.
This isn’t hyperbole. The Therac-25 killed patients because of race conditions that “usually worked.” Knight Capital lost $440 million in 45 minutes because of code that “worked in testing.” The traditional C tutorial path doesn’t just fail to prepare engineers for safety-critical work - it actively teaches habits that certification auditors will reject.
The Problem with printf
The first line of most C tutorials introduces a function that violates nearly every principle of safety-critical software:
printf uses dynamic memory. The format string parsing, buffer management, and output handling all involve allocation patterns that cannot be statically analysed for worst-case behaviour.
printf has unbounded execution time. Depending on the format string, arguments, and output device, the same printf call can take microseconds or seconds. In a system with real-time constraints, this is unacceptable.
printf can fail silently. Most tutorials ignore the return value. In safety-critical systems, every function call must have its return value checked and its failure modes handled.
printf teaches output-first thinking. The programmer’s first instinct becomes “print something to see if it worked” rather than “prove it works before running it.”
None of this means printf is evil. It’s a useful debugging tool and perfectly appropriate for user-facing applications. The problem is that it’s the first thing we teach, and first lessons shape everything that follows.
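For contrast, even “Hello, World” can be written with its return value checked - a small sketch of the habit, not a certifiable program:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* printf returns a negative value on an output error - check it. */
    if (printf("Hello, World!\n") < 0) {
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
Checking one printf call doesn’t make a program certifiable, but it is the habit the rest of this article builds on.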
The Malloc Mindset
After “Hello World,” tutorials progress to dynamic memory:
int *numbers = malloc(n * sizeof(int));
if (numbers == NULL) {
// handle error... somehow
}
// use numbers
free(numbers);
This teaches several dangerous patterns:
Memory size determined at runtime. The program cannot know its memory requirements until execution. Static analysis cannot prove memory bounds. The certifier cannot verify worst-case resource usage.
Allocation can fail. And the tutorial’s “handle error… somehow” comment reveals that nobody really knows what to do when it does. In safety-critical systems, “somehow” isn’t a specification.
Manual lifetime management. The programmer must remember to free memory, in the right order, exactly once. Forget and you leak. Free twice and you corrupt. Use after free and you get undefined behaviour.
Heap fragmentation is invisible. A program can run successfully for months, then fail when fragmentation finally prevents allocation. No amount of testing catches this reliably.
For safety-critical systems governed by DO-178C, IEC 62304, or ISO 26262, dynamic memory allocation is either prohibited entirely or requires extensive justification and analysis. Yet it’s the second thing we teach new C programmers.
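What the prohibition pushes you towards instead is a fixed, statically allocated buffer whose worst case is visible in the source. A small sketch - MAX_SAMPLES is an illustrative bound, not a number from any standard:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_SAMPLES 64U   /* worst-case bound, justified against the requirements */

typedef struct {
    int32_t samples[MAX_SAMPLES];  /* storage exists for the program's lifetime */
    size_t  count;                 /* how many slots are currently in use */
} sample_buffer_t;

/* Refuses the value instead of overflowing when the proven bound is reached. */
static bool sample_buffer_push(sample_buffer_t *buf, int32_t value) {
    if (buf == NULL || buf->count >= MAX_SAMPLES) {
        return false;
    }
    buf->samples[buf->count] = value;
    buf->count++;
    return true;
}
The memory requirement is now a compile-time fact, and the failure mode is a boolean the caller must handle, not a heap that quietly fragments.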
The Undefined Behaviour Trap
By lesson three or four, most tutorials have introduced undefined behaviour without naming it:
int array[10];
for (int i = 0; i <= 10; i++) { // Off-by-one: accesses array[10]
array[i] = i;
}
The tutorial might mention this is “a bug,” but it doesn’t explain that this isn’t just wrong - it’s undefined. The C standard permits the compiler to do literally anything: crash, corrupt memory, send your browsing history to your employer, or appear to work perfectly until production.
“Appears to work” is the most dangerous outcome. The off-by-one error might never manifest in testing because the memory after the array happens to be unused. Then in production, with different memory layout, the system fails in ways that cannot be reproduced or diagnosed.
Traditional tutorials treat undefined behaviour as an edge case to avoid. Safety-critical development treats it as an existential threat to be eliminated by design.
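Eliminating the bug by design can be as simple as deriving the loop bound from the array itself instead of retyping a magic number - a minimal sketch (fill_array is an illustrative name):
#include <stddef.h>

static void fill_array(void) {
    int array[10];
    /* The bound is computed from the array's own declaration, so it cannot
       drift out of sync with the size and cannot run past the end. */
    for (size_t i = 0; i < sizeof(array) / sizeof(array[0]); i++) {
        array[i] = (int)i;
    }
}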
What Certification Actually Requires
Engineers who transition to safety-critical domains discover that everything they learned needs to be unlearned. DO-178C Level A (for flight-critical avionics software) requires:
100% structural coverage. Every statement and every decision must be exercised by requirements-based tests, and at Level A every condition must satisfy MC/DC (modified condition/decision coverage). printf debugging doesn’t generate coverage data.
Traceability. Every line of code must trace to a requirement, and every requirement must trace to tests that verify it. “It seemed like a good idea” isn’t traceable.
Static analysis. Tools must verify that code cannot exhibit undefined behaviour, exceed resource bounds, or enter unanalysable states. Malloc defeats most static analysers.
Deterministic execution. Given the same inputs, the system must produce the same outputs. Always. Dynamic memory allocation introduces non-determinism through fragmentation and allocation timing.
Bounded resource usage. Memory, stack depth, and execution time must all have provable upper bounds. Recursion, dynamic allocation, and unbounded loops are prohibited or severely restricted.
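Some of those bounds can be made machine-checkable at compile time. A small sketch, assuming C11 and illustrative numbers for the instance count and memory budget:
#include <assert.h>   /* static_assert (C11) */
#include <stdint.h>

#define MONITOR_COUNT        8U     /* illustrative: fixed number of monitor instances */
#define MEMORY_BUDGET_BYTES  4096U  /* illustrative: budget from the resource analysis */

typedef struct {
    uint32_t state;
    uint64_t last_event_ns;
} monitor_t;

static monitor_t monitors[MONITOR_COUNT];

/* If the table ever outgrows the budget, the build fails - no test required. */
static_assert(sizeof(monitors) <= MEMORY_BUDGET_BYTES,
              "monitor table exceeds the allocated memory budget");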
The traditional tutorial path - printf, malloc, hope - is precisely what certification prohibits.
The Alternative: Math First
The approach taught in c-from-scratch inverts the traditional progression:
Problem → Math → Proof → Structs → Code → Tests
Instead of starting with syntax and hoping correctness emerges, this methodology starts with correctness and derives syntax.
Step 1: Define the Problem
Before writing any code, define what problem you’re solving and what failure looks like. Not “I want to track a heartbeat” but “I need to detect when a periodic signal stops arriving within a bounded time, with no false positives during normal jitter and guaranteed detection of actual failures.”
Step 2: Express It Mathematically
A heartbeat monitor can be expressed as a finite state machine with three states:
- AWAITING: No pulse received yet
- ALIVE: Last pulse received within timeout window
- DEAD: Timeout exceeded without pulse
The state transitions are:
AWAITING + pulse → ALIVE
AWAITING + timeout → AWAITING (nothing to time out yet)
ALIVE + pulse → ALIVE (reset timer)
ALIVE + timeout → DEAD
DEAD + pulse → ALIVE
DEAD + timeout → DEAD (stays failed until a pulse arrives)
This isn’t pseudocode. It’s a mathematical specification that can be analysed for completeness (all state/input combinations handled), determinism (each combination has exactly one outcome), and termination (no infinite loops possible).
Step 3: Prove Properties Before Implementation
From the state machine, we can prove:
Totality: Every state has a defined transition for every input. There are no undefined cases.
Convergence: From any state, the system eventually reaches either ALIVE (if pulses arrive) or DEAD (if they don’t).
Bounded memory: The state machine requires exactly one state variable, one timestamp, and one configured timeout. O(1) memory, provable statically.
Bounded time: Each transition is a constant-time operation. O(1) execution, provable statically.
These proofs exist before any C code is written.
Step 4: Design Structs That Enforce the Math
The struct is not a convenient data container - it’s a physical encoding of the mathematical model:
#include <stdint.h>    /* uint64_t timestamps */
#include <stdbool.h>   /* bool, used by pulse_update below */

typedef enum {
PULSE_AWAITING,
PULSE_ALIVE,
PULSE_DEAD
} pulse_state_t;
typedef struct {
pulse_state_t state;
uint64_t last_pulse_time;
uint64_t timeout_ns;
} pulse_monitor_t;
Every field exists because the math requires it. There are no “might be useful later” fields. The struct is the state machine.
Step 5: Code as Transcription
Implementation becomes mechanical transcription of the mathematical specification:
int pulse_update(pulse_monitor_t *ctx, uint64_t current_time, bool pulse_received) {
if (ctx == NULL) return PULSE_ERR_NULL;
if (pulse_received) {
ctx->last_pulse_time = current_time;
ctx->state = PULSE_ALIVE;
return PULSE_OK;
}
// No pulse - check timeout
if (ctx->state == PULSE_ALIVE) {
uint64_t elapsed = current_time - ctx->last_pulse_time;
if (elapsed > ctx->timeout_ns) {
ctx->state = PULSE_DEAD;
}
}
return PULSE_OK;
}
This code contains no creativity. It’s a direct translation of the state machine. The creativity happened in the mathematical design, where it could be analysed and proven correct.
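The tests in the next step also call pulse_init and pulse_status, which aren’t shown above. A minimal sketch consistent with the struct and update function - the PULSE_OK and PULSE_ERR_NULL values are assumed conventions, not taken from the repository:
#define PULSE_OK        0   /* assumed convention: 0 on success */
#define PULSE_ERR_NULL -1   /* assumed convention: negative on a NULL argument */

int pulse_init(pulse_monitor_t *ctx, uint64_t timeout_ns) {
    if (ctx == NULL) return PULSE_ERR_NULL;
    ctx->state = PULSE_AWAITING;
    ctx->last_pulse_time = 0U;
    ctx->timeout_ns = timeout_ns;
    return PULSE_OK;
}

pulse_state_t pulse_status(const pulse_monitor_t *ctx) {
    /* Pure query: no state change, safe to call at any time. */
    return (ctx != NULL) ? ctx->state : PULSE_DEAD;
}
Returning PULSE_DEAD for a NULL context is one fail-safe choice; the repository may make a different one.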
Step 6: Test the Proofs
Tests don’t verify that the code “seems to work.” They verify that the mathematical contracts hold:
#include <assert.h>

// CONTRACT: Pulse within timeout maintains ALIVE state
void test_alive_maintained(void) {
pulse_monitor_t mon;
pulse_init(&mon, TIMEOUT_NS);
pulse_update(&mon, 1000, true); // First pulse
assert(pulse_status(&mon) == PULSE_ALIVE);
pulse_update(&mon, 1000 + TIMEOUT_NS - 1, true); // Just before timeout
assert(pulse_status(&mon) == PULSE_ALIVE);
}
// CONTRACT: Missing pulse beyond timeout transitions to DEAD
void test_timeout_detected(void) {
pulse_monitor_t mon;
pulse_init(&mon, TIMEOUT_NS);
pulse_update(&mon, 1000, true); // First pulse
pulse_update(&mon, 1000 + TIMEOUT_NS + 1, false); // Beyond timeout
assert(pulse_status(&mon) == PULSE_DEAD);
}
Each test corresponds to a mathematical property. If the test fails, either the implementation is wrong or the proof was flawed. Either way, we know exactly what to fix.
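The implementation above uses a strict comparison (elapsed > timeout_ns), so an elapsed time exactly equal to the timeout still counts as alive. That boundary deserves its own contract test - a sketch:
// CONTRACT: Elapsed time exactly equal to the timeout does not yet report DEAD
void test_timeout_boundary(void) {
    pulse_monitor_t mon;
    pulse_init(&mon, TIMEOUT_NS);
    pulse_update(&mon, 1000, true);                    // First pulse
    pulse_update(&mon, 1000 + TIMEOUT_NS, false);      // Exactly at the timeout
    assert(pulse_status(&mon) == PULSE_ALIVE);
    pulse_update(&mon, 1000 + TIMEOUT_NS + 1, false);  // One tick past the timeout
    assert(pulse_status(&mon) == PULSE_DEAD);
}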
The Init-Update-Status-Reset Pattern
The c-from-scratch framework standardises on a four-function interface for all safety monitors:
int module_init(module_t *ctx, const module_config_t *cfg);
int module_update(module_t *ctx, input_t input, uint64_t timestamp);
module_state_t module_status(const module_t *ctx);
void module_reset(module_t *ctx);
This pattern enables:
Static allocation. The caller provides memory. The module never allocates.
Deterministic state transitions. All state changes happen in update(). Status queries are pure functions.
Composability. Modules with identical interfaces can be combined, chained, and verified independently.
Testability. Each function has clear preconditions and postconditions that tests can verify.
This isn’t an arbitrary convention. It’s the minimal interface that supports formal verification while remaining practical for real systems.
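Applied to the pulse monitor, a caller might wire the pattern up like this - a sketch, where system_startup, control_loop_step, and the 500 ms timeout are illustrative names and values, not taken from the repository:
static pulse_monitor_t heartbeat;   /* caller-owned storage - the module never allocates */

void system_startup(void) {
    if (pulse_init(&heartbeat, 500U * 1000U * 1000U) != PULSE_OK) {
        /* configuration error: enter the system's defined safe state */
    }
}

void control_loop_step(uint64_t now_ns, bool pulse_seen) {
    if (pulse_update(&heartbeat, now_ns, pulse_seen) != PULSE_OK) {
        /* contract violation: enter the system's defined safe state */
    }
    if (pulse_status(&heartbeat) == PULSE_DEAD) {   /* pure query, no state change */
        /* heartbeat lost: enter the system's defined safe state */
    }
}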
Why This Matters Beyond Aerospace
The math-first approach isn’t just for flight software. Consider:
Medical devices: IEC 62304 Class C software (which can cause death or serious injury) has requirements similar to DO-178C. The same methodology applies.
Automotive: ISO 26262 ASIL-D (highest automotive safety level) requires similar rigour. Self-driving systems need provable behaviour.
Financial systems: Knight Capital’s $440 million loss came from code that “worked” until it didn’t. Deterministic, provable systems prevent this class of failure.
Infrastructure: Power grids, water treatment, and industrial control systems increasingly require formal methods. The traditional tutorial path doesn’t prepare engineers for this work.
Even in non-safety-critical domains, the discipline pays off. Code designed mathematically is easier to test, easier to maintain, and easier to reason about. The techniques scale down gracefully.
Unlearning Hello World
The first step is recognising that traditional C education optimises for the wrong thing. It optimises for “getting something working quickly” when it should optimise for “knowing why it works.”
The second step is accepting that correctness requires more upfront effort. Defining invariants before writing code feels slow. It is slow - initially. The time is recovered many times over during testing, debugging, and maintenance.
The third step is building new habits:
- Before writing a function, write its contract (preconditions, postconditions, invariants)
- Before designing a struct, draw the state machine it represents
- Before implementing a loop, prove it terminates
- Before allocating memory, prove you know the maximum
These habits feel awkward at first. Like all habits, they become automatic with practice.
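Writing the contract first needs no special tooling - a comment block that the tests later mirror is enough. A sketch for pulse_update:
/*
 * pulse_update
 *
 * Preconditions:
 *   - ctx is non-NULL and was initialised by pulse_init
 *   - current_time is monotonic and uses the same unit as timeout_ns
 *
 * Postconditions:
 *   - returns PULSE_ERR_NULL and changes nothing if ctx is NULL
 *   - if pulse_received, state == PULSE_ALIVE and last_pulse_time == current_time
 *   - otherwise, state becomes PULSE_DEAD only when the timeout is exceeded
 *
 * Invariants:
 *   - state is always one of the three enumerated values
 *   - timeout_ns is never modified by this function
 */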
Getting Started
The c-from-scratch repository provides a complete framework for learning this approach. Each module includes:
- Mathematical specification with proofs
- Struct design with field-by-field justification
- Implementation as literal math transcription
- Contract-based tests with 100k+ input fuzzing
Start with the Pulse module. It’s roughly 200 lines of code, but those 200 lines demonstrate every principle: closed state, total functions, bounded resources, deterministic execution.
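The repository’s fuzz harness isn’t reproduced here, but the idea is simple: feed a large number of random inputs and check the contracts after every single step, not just at the end. A sketch, with rand() standing in for whatever generator the repository actually uses:
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

void fuzz_pulse_monitor(void) {
    pulse_monitor_t mon;
    pulse_init(&mon, TIMEOUT_NS);

    uint64_t now = 0;
    for (int i = 0; i < 100000; i++) {
        now += (uint64_t)(rand() % 1000);   /* time only moves forward */
        bool pulse = (rand() % 2) == 0;
        pulse_update(&mon, now, pulse);

        /* Invariant: the state is always one of the three enumerated values. */
        pulse_state_t s = pulse_status(&mon);
        assert(s == PULSE_AWAITING || s == PULSE_ALIVE || s == PULSE_DEAD);

        /* Contract: a pulse always leaves the monitor ALIVE. */
        if (pulse) {
            assert(s == PULSE_ALIVE);
        }
    }
}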
For a structured learning path, “C From Scratch: Learn Safety-Critical C the Right Way” walks through the methodology from first principles, building toward a complete capstone project.
Conclusion
“Hello World” isn’t wrong. It’s just the wrong starting point for engineers who will work on systems that matter.
The traditional C tutorial path teaches hope-based development: write code, run it, hope it works, debug when it doesn’t. This approach cannot produce software that meets safety certification requirements, and it builds habits that must be painfully unlearned.
The alternative - proving correctness before implementation - requires more discipline but produces code that can be trusted. When that code controls an aircraft, monitors a pacemaker, or manages a power grid, “can be trusted” isn’t a nice-to-have. It’s the entire point.
As with any methodology, suitability depends on context. Not every program needs formal verification. But every programmer benefits from understanding what formal verification requires - because it clarifies what “correct” actually means.
Hope is not a certification strategy. Mathematics is.