Safety Certification

DO-178C Level A Certification: How Deterministic Execution Can Streamline Certification Effort

How tick-based architecture and minimal code footprint can reduce certification effort and timelines in aerospace programs

Published: December 23, 2025
Reading time: 11 min
[Figure: DO-178C certification process, deterministic vs. conventional]

Note: Cost and timeline figures in this article are indicative estimates based on published industry benchmarks and certification program discussions. Actual costs vary significantly by system scope, regulatory context, and organisational factors.

Recent high-profile recertification programs in aviation have highlighted the verification challenges inherent in complex software systems. When certification authorities require extensive evidence that software will behave predictably under all conditions, the verification burden can become substantial.

The technical challenge often isn’t writing correct code—it’s demonstrating that the code will behave correctly. With conventional software systems where execution can depend on timing, scheduler decisions, and concurrent interactions, demonstrating correctness may require extensive testing across many possible execution scenarios.

DO-178C Level A—the highest software safety standard for commercial aviation—requires objective evidence of correct behaviour under all conditions. Industry estimates suggest that achieving this standard for conventional systems can cost tens of millions of dollars and take 18-36 months. The effort scales with code size and the complexity of verifying timing-dependent behaviour.

Tick-based deterministic execution offers an architectural approach that can change this trade space.

Understanding DO-178C Level A

DO-178C “Software Considerations in Airborne Systems and Equipment Certification” is the FAA-recognised standard for avionics software. Level A applies to software whose failure could cause catastrophic events—loss of aircraft control, structural failure, multiple fatalities.

The Five Software Levels

DO-178C defines five levels of software criticality:

Level A (Catastrophic): Failure may cause death or loss of aircraft. Examples: flight control computers, engine control systems.

Level B (Hazardous): Failure may cause serious injury or major aircraft damage. Examples: auto-throttle, weather radar.

Level C (Major): Failure may cause passenger discomfort or inconvenience. Examples: in-flight entertainment, cabin lighting.

Level D (Minor): Failure has minimal impact. Examples: crew scheduling systems.

Level E (No Effect): Failure has no safety impact.

Level A software faces the most stringent requirements. Every line of code must be traced to requirements. Every requirement must have test cases. Every possible execution path must be analysed. The verification burden is substantial.

The Certification Objectives

DO-178C defines 71 objectives that must be satisfied for Level A certification. Key objectives include:

Structural Coverage Analysis (DO-178C §6.4.4): Demonstrate that testing exercises every statement, every branch, and every condition in the code. For systems with timing-dependent behaviour, achieving complete coverage may require testing with different timing scenarios and configurations.

Requirements-Based Testing (DO-178C §6.4.2): Every software requirement must have corresponding test cases that verify correct behaviour. When software behaviour can vary with timing, test cases may need to account for these variations.

Robustness Testing (DO-178C §6.4.3): Verify software behaves correctly under abnormal conditions—invalid inputs, resource exhaustion, hardware failures. Systems with timing dependencies may need to test these conditions across various timing scenarios.

Dead Code Analysis (DO-178C §6.4.4.2): Demonstrate that no executable code exists that cannot be reached during normal operation. In systems where code reachability depends on scheduler decisions, this analysis can be complex.

Traceability (DO-178C §6.3): Bidirectional tracing from high-level requirements down to object code and test cases. Every code path must trace to a requirement; every requirement must trace to test verification.

Each objective represents significant engineering effort. Industry estimates suggest Level A certification typically costs in the range of tens of millions of dollars, with timelines of 18-36 months from code freeze to certification approval.

Challenges with Conventional Approaches

The primary challenge in DO-178C certification often isn’t writing correct code—it’s demonstrating correctness. Conventional RTOS-based systems can face several verification challenges:

1. Timing-Dependent Execution

Most real-time operating systems use preemptive priority-based scheduling. Thread execution order can depend on:

  • Priority levels (which may change dynamically)
  • Time quantum expiration
  • Interrupt timing
  • Lock acquisition order
  • CPU load and resource availability

This means identical inputs may produce different execution paths depending on timing. Achieving structural coverage may require testing across many timing permutations.

Industry discussions suggest that accounting for timing-dependent behaviour can significantly increase test case counts—potentially by an order of magnitude or more for complex systems.
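
As a minimal illustration of this timing dependence (generic POSIX threads, not any particular avionics RTOS), the final value printed below depends on how the scheduler interleaves the two threads. Demonstrating coverage of all the resulting execution paths would require controlling or enumerating those interleavings:

    /* Illustrative only: a shared counter updated without synchronisation.
       The final value depends on the interleaving chosen by the scheduler,
       so identical inputs can produce different results from run to run. */
    #include <pthread.h>
    #include <stdio.h>

    static volatile long counter = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;                      /* unsynchronised read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter); /* varies between runs */
        return 0;
    }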

2. Code Size Considerations

Modern RTOS implementations vary in size:

  • Larger kernels: 100-200KB
  • Mid-range kernels: 50-100KB
  • Minimal kernels: under 50KB

DO-178C certification effort tends to scale with code size. Industry benchmarks suggest costs in the range of $1,000-$2,000 per source line of code for Level A certification, though this varies significantly by context.
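
As a purely illustrative calculation using the per-SLOC range above (the line counts are hypothetical, not measured figures for any product): a 50,000-SLOC kernel at $1,000-$2,000 per line implies roughly $50M-$100M of certification effort, while a 5,000-SLOC kernel implies roughly $5M-$10M. The specific numbers matter less than the scaling: an order-of-magnitude reduction in verified code maps to an order-of-magnitude reduction in per-line certification effort.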

3. Concurrency Complexity

Multi-threaded systems introduce considerations around race conditions, deadlocks, and priority inversion. DO-178C may require analysis of:

  • Possible thread interleavings
  • Lock acquisition orders
  • Interrupt timing interactions
  • Shared resource access patterns

Exhaustive analysis of concurrent systems is rarely tractable, so most certification efforts rely on extensive testing instead.

Conventional RTOS Certification Considerations
  • Timing-dependent thread scheduling
  • Larger kernel footprint
  • Complex concurrency analysis
  • Potentially large test case counts
  • Extended timelines common

Deterministic Platform Considerations
  • Deterministic tick-based scheduling
  • Minimal kernel footprint
  • Reduced concurrency complexity
  • Potentially reduced test case counts
  • May support shorter timelines

How Deterministic Execution Can Simplify Certification

Tick-based deterministic execution is designed to address several certification challenges:

Objective 1: Structural Coverage

Conventional approach: Test with varying timing, priorities, and loads to exercise code paths. Each path may require multiple test cases with different timing scenarios.

Deterministic approach: Given an initial state and input sequence, the execution path is designed to be reproducible. This can reduce the number of distinct timing-dependent test cases needed for coverage.

Potential impact: Can reduce test case count for timing-dependent scenarios.
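
A minimal sketch of the idea, in C (the structure and function names below are illustrative assumptions, not the MDMK API): all inputs are bound to a tick and all tasks run in a fixed order within it, so the execution path is a function of the initial state and the input sequence alone.

    /* Illustrative only: hypothetical tick-based executive, not the MDMK API.
       Given the same initial state and the same input sequence, the loop
       visits the same execution path on every run. */
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_EVENTS 16

    typedef struct { int sensor; } event_t;

    static event_t queue[MAX_EVENTS];
    static int     queue_len = 0;
    static int     state     = 0;            /* system state updated per tick  */

    static void handle_event(const event_t *ev) { state += ev->sensor; }
    static void task_control(void)              { if (state > 100) state = 100; }

    static void run_tick(uint64_t tick)
    {
        for (int i = 0; i < queue_len; i++)   /* drain inputs bound to this tick */
            handle_event(&queue[i]);
        queue_len = 0;

        task_control();                       /* tasks run in a fixed order      */
        printf("tick %llu: state=%d\n", (unsigned long long)tick, state);
    }

    int main(void)
    {
        queue[queue_len++] = (event_t){ .sensor = 42 };
        for (uint64_t t = 0; t < 3; t++)      /* same inputs -> same path/output */
            run_tick(t);
        return 0;
    }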

Objective 2: Requirements-Based Testing

Conventional approach: Requirements specify behaviour (“system shall respond within 100ms”), but actual response time may vary with scheduler state. Test cases may need to verify behaviour across timing distributions.

Deterministic approach: Requirements can map to tick counts (“system shall respond within N ticks”). Test cases can verify reproducible behaviour rather than statistical properties.

Potential impact: Can reduce timing-dependent test failures and regression testing burden.
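
A hedged sketch of what a requirements-based test can look like in this style (hypothetical harness and names; "respond within N ticks" stands in for a real requirement): the deadline is checked as an exact, repeatable property rather than a timing distribution.

    /* Illustrative test sketch: hypothetical harness and names.
       The requirement "respond within N ticks" is checked as an exact,
       repeatable property rather than a timing distribution. */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define RESPONSE_DEADLINE_TICKS 3

    static bool command_pending = false;
    static bool response_sent   = false;

    static void run_tick(void)
    {
        if (command_pending) {               /* respond on the next tick       */
            response_sent   = true;
            command_pending = false;
        }
    }

    int main(void)
    {
        command_pending = true;              /* inject the command at tick 0   */

        int ticks_to_respond = 0;
        while (!response_sent) {
            run_tick();
            ticks_to_respond++;
        }

        assert(ticks_to_respond <= RESPONSE_DEADLINE_TICKS);
        printf("responded in %d tick(s), deadline %d\n",
               ticks_to_respond, RESPONSE_DEADLINE_TICKS);
        return 0;
    }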

Objective 3: Robustness Testing

Conventional approach: Test abnormal conditions across possible timing scenarios. A timeout condition might behave differently depending on when it occurs relative to other processing.

Deterministic approach: Abnormal conditions are events processed at tick boundaries. For a given tick at which the condition occurs, behaviour is designed to be reproducible.

Potential impact: Robustness testing can become more reproducible.
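
A small illustrative sketch (hypothetical names, not a real fault-injection framework): the abnormal condition is injected at a specific tick, so the response can be asserted exactly and reproduced on every run.

    /* Illustrative robustness-test sketch: hypothetical names only.
       The fault is an event bound to a specific tick, so the behaviour
       under the abnormal condition is identical on every run. */
    #include <assert.h>

    #define SENSOR_INVALID (-1)

    static int last_good_value = 0;
    static int output          = 0;

    static void run_tick(int sensor_reading)
    {
        if (sensor_reading == SENSOR_INVALID)
            output = last_good_value;        /* hold last good value           */
        else
            output = last_good_value = sensor_reading;
    }

    int main(void)
    {
        run_tick(21);
        run_tick(22);
        run_tick(SENSOR_INVALID);            /* fault injected at tick 2       */
        assert(output == 22);                /* held last good value           */
        run_tick(23);
        assert(output == 23);                /* recovered deterministically    */
        return 0;
    }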

Objective 4: Dead Code Analysis

Conventional approach: Demonstrate code unreachability across possible scheduler states and timing conditions. May require complex static analysis.

Deterministic approach: Code reachability is designed to be deterministic, giving static analysis tools a clearer picture of which paths can execute.

Potential impact: Can simplify dead code analysis.

Objective 5: Traceability

Conventional approach: Trace requirements to test cases that verify behaviour across timing variations.

Deterministic approach: Trace requirements to test cases that verify reproducible behaviour.

Potential impact: Can simplify traceability documentation and review.

The Code Size Factor

Beyond determinism, minimal code footprint can reduce certification effort.

MDCP Platform Size

  • MDMK kernel (multi-core deterministic scheduling): ~26KB
  • PBAS-6 kernel (safety guardrails): ~20KB
  • Combined platform core: ~46KB

This is substantially smaller than many conventional RTOS implementations.

Certification Effort Scaling

DO-178C Level A certification effort tends to scale with source lines of code (SLOC). While exact costs vary significantly:

Smaller platform footprint
→ Fewer lines to verify
→ Reduced documentation volume
→ Potentially faster authority review
→ Lower overall certification effort

Because a certified platform can be reused, this potential cost reduction can apply across multiple programs. Minimising platform size maximises the return on a single certification investment.

Why Smaller Code Can Help

Smaller code can mean:

Fewer review cycles: Certification authorities review all code and documentation. Smaller codebases may enable faster reviews.

Less documentation: DO-178C requires extensive documentation. Documentation volume tends to scale with code size.

Reduced regression scope: When issues are found during certification, fixes trigger regression analysis. Smaller codebases may have fewer interactions to analyse.

PBAS-6 Integration: Safety by Design

The PBAS-6 (Progressive Bounded Autonomous Systems) kernel provides runtime safety capabilities that can map to DO-178C safety requirements.

Safety Envelopes as Certification Evidence

PBAS-6 enforces safety envelopes—runtime bounds on system state:

  • Control authority limits
  • Rate-of-change limits
  • Resource utilisation bounds
  • Escalation triggers for approaching boundaries

These envelopes can provide runtime verification that complements compile-time verification. DO-178C §6.3.4 requires “evidence of absence of unintended functions.” PBAS-6 guardrails can provide continuous evidence that the system remains within safe operating bounds.
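
A minimal sketch of the concept (illustrative structure and names, not the PBAS-6 API): each tick, a commanded value is clamped to configured authority and rate-of-change bounds, and a flag records whether the envelope had to be enforced.

    /* Illustrative envelope check: hypothetical structure, not the PBAS-6 API.
       The commanded value is clamped to authority and rate-of-change bounds,
       and a flag records whether the envelope had to be enforced. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        double min;                          /* control authority lower bound */
        double max;                          /* control authority upper bound */
        double max_delta;                    /* rate-of-change limit per tick */
    } envelope_t;

    typedef struct {
        double last;                         /* last commanded value          */
        bool   violated;                     /* set when a bound was enforced */
    } envelope_state_t;

    static double enforce_envelope(const envelope_t *env,
                                   envelope_state_t *st, double commanded)
    {
        double out = commanded;

        if (out - st->last > env->max_delta) out = st->last + env->max_delta;
        if (st->last - out > env->max_delta) out = st->last - env->max_delta;
        if (out > env->max) out = env->max;
        if (out < env->min) out = env->min;

        st->violated = (out != commanded);
        st->last = out;
        return out;
    }

    int main(void)
    {
        envelope_t       env = { .min = -10.0, .max = 10.0, .max_delta = 2.0 };
        envelope_state_t st  = { .last = 0.0, .violated = false };

        double out = enforce_envelope(&env, &st, 25.0);  /* excessive demand */
        printf("output=%.1f violated=%d\n", out, st.violated);
        return 0;
    }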

Escalation States as Safety Modes

PBAS-6 defines progressive escalation:

NOMINAL → WARNING → CAUTION → ALERT → CRITICAL

Each escalation level corresponds to reduced control authority and increased conservatism. This can map to DO-178C requirements for:

  • Fault detection and handling (§6.3.3.c)
  • System safety assessment (§6.2)
  • Failure conditions analysis (§2.2.2)

The deterministic nature of escalation—same inputs designed to produce reproducible escalation decisions—can simplify verification.
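
A small illustrative sketch (the five states come from the model above; the margin metric and thresholds are hypothetical): the escalation decision is a pure function of the measured margin, so the same inputs always yield the same escalation level.

    /* Illustrative escalation mapping: the five states match the model above;
       the margin metric and thresholds are hypothetical.  The same margin
       always maps to the same escalation level. */
    #include <stdio.h>

    typedef enum { NOMINAL, WARNING, CAUTION, ALERT, CRITICAL } escalation_t;

    static escalation_t escalate(double margin)  /* 1.0 = full margin, 0.0 = none */
    {
        if (margin > 0.80) return NOMINAL;
        if (margin > 0.60) return WARNING;
        if (margin > 0.40) return CAUTION;
        if (margin > 0.20) return ALERT;
        return CRITICAL;
    }

    int main(void)
    {
        static const char *name[] = { "NOMINAL", "WARNING", "CAUTION",
                                      "ALERT", "CRITICAL" };
        const double margins[] = { 0.95, 0.50, 0.10 };

        for (int i = 0; i < 3; i++)
            printf("margin %.2f -> %s\n", margins[i], name[escalate(margins[i])]);
        return 0;
    }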

Potential for Multi-Domain Certification

PBAS-6’s small footprint may make it economically viable to pursue certification across multiple domains:

  • DO-178C Level A: Aviation
  • ISO 26262 ASIL D: Automotive
  • IEC 62304 Class C: Medical devices
  • IEC 61508 SIL 4: Industrial control

Each standard has similar requirements but different terminology. Small size can make multi-standard certification more tractable.

Timeline Considerations

Beyond cost, deterministic execution may accelerate certification timelines.

Conventional Certification Timeline

Typical DO-178C Level A timeline for RTOS-based system:

  1. Planning (3-6 months): Software planning documents, standards selection, tool qualification plans
  2. Development (6-12 months): Coding, unit testing, integration testing
  3. Verification (9-15 months): Requirements-based testing, structural coverage, robustness testing
  4. Certification review (3-6 months): Authority review, defect resolution, regression testing
  5. Total: 21-39 months (indicative)

The verification phase often dominates. Timing-dependent behaviour can extend verification because:

  • Tests with timing sensitivity may need re-running
  • Coverage gaps may require additional test cases
  • Timing-dependent issues may require extensive debugging
  • Regression testing may restart with each fix

Potential Timeline with Deterministic Execution

With tick-based deterministic execution, some of these factors may be reduced:

  • Fewer timing-dependent test variations
  • More reproducible test results
  • Deterministic replay for debugging
  • Smaller codebase for review

Potential savings: Several months in verification and review phases, though actual results depend on many factors.

Industry Precedents

While deterministic platforms for DO-178C are still emerging, the certification benefits of small, simple kernels are well-established:

seL4 microkernel: The formally verified seL4 kernel comprises roughly 10,000 lines of C. Its formal verification demonstrated that small, well-designed kernels can achieve strong assurance properties.

Separation kernels: Several separation kernels have achieved DO-178B/C certification with relatively small footprints. Their simple architectures have supported certification timelines shorter than typical RTOS certification.

Space systems: NASA’s JPL uses stripped-down kernels for spacecraft. Small size enables exhaustive testing within mission schedules and budgets.

The pattern is consistent: smaller code + simpler architecture tends to support faster, more tractable certification.

Regulatory Context

Certification authorities increasingly recognise deterministic execution as a factor in software verification:

FAA Advisory Circular 20-115D: Discusses “deterministic behaviour” as a factor in software complexity assessment. Deterministic systems may receive lower complexity ratings, potentially reducing verification burden.

EASA Certification Memorandum: Acknowledges that “predictable, repeatable execution simplifies verification” and suggests this may support reduced test case requirements for deterministic systems.

DO-333: “Formal Methods Supplement to DO-178C” explicitly supports formal verification, which benefits from deterministic semantics. Tick-based execution can provide a foundation for formal methods.

Regulatory guidance increasingly recognises architectures that simplify verification.

Strategic Considerations

In aerospace, certification capability is strategically important. Programs that can achieve Level A certification more efficiently may benefit from:

Earlier market entry: Faster certification can enable earlier service entry

Reduced program risk: More predictable certification reduces schedule uncertainty

Technology differentiation: Deterministic platforms may enable capabilities that would be more costly to certify with conventional approaches

For organisations developing safety-critical systems, pre-certified deterministic platforms can represent significant avoided certification effort.

Conclusion

DO-178C Level A certification is the standard for aviation software safety, but it requires substantial verification effort. Conventional approaches face challenges from timing-dependent execution, larger code footprints, and complex concurrency analysis.

Tick-based deterministic execution offers an architectural approach that can alter the certification trade-offs:

  • Determinism can reduce timing variability, potentially reducing test case counts for timing-dependent scenarios
  • Smaller code footprint can reduce documentation and review effort
  • Simplified concurrency model can reduce analysis complexity
  • PBAS-6 guardrails can provide runtime safety verification
  • Timeline reduction may add significant program value

For aerospace organisations facing certification challenges and compressed development schedules, deterministic platforms represent an architectural approach worth evaluating. The potential benefits in certification effort, timeline, and verification tractability can be material.

As with any architectural approach, suitability depends on system requirements, regulatory context, and organisational factors. Early engagement with certification authorities is recommended when evaluating novel architectural approaches for safety-critical systems.

About the Author

William Murray is a Regenerative Systems Architect with 30 years of UNIX infrastructure experience, specialising in deterministic computing for safety-critical systems. Based in the Scottish Highlands, he operates SpeyTech and maintains several open-source projects including C-Sentinel and c-from-scratch.

Discuss This Perspective

For technical discussions or acquisition inquiries, contact SpeyTech directly.
