Why Static Analysis—and Why Now?

Static analysis has become a cornerstone of safety‑critical software development, especially in automotive, industrial, and aerospace systems built with C and C++. Static analysis examines source code without executing it. It complements dynamic analysis (testing/debugging) by catching issues early: division‑by‑zero risks, type problems, guideline violations, and complexity hot spots. In safety‑critical domains, this early detection reduces downstream risks and costs.

It can be used to:

  • Enforce coding guidelines (MISRA C/C++)

  • Detect programming errors

  • Spot undefined behavior before it causes portability or security problems

  • Identify dangerous race conditions and potential safety violations

[Figure: Static vs. dynamic verification methods and formal techniques used in static analysis]

The State Space Explosion Problem

As systems grow, the number of possible program states grows exponentially, and exploring them exhaustively becomes infeasible. Practical approaches—abstraction, compositional reasoning, and symbolic representations—help tame the explosion so analysis can scale. Approaches that parallelize well can additionally exploit multi-core hardware, producing useful results where sequential analyses run out of time or memory.

Standards That Shape Practice: ISO 26262 & MISRA C/C++

ISO 26262 recommends control‑flow/data‑flow analysis and encourages formal verification where applicable. It also expects teams to apply robust programming guidelines.

MISRA Rule 1.3: Do not rely on unspecified or undefined behavior. C and C++ leave certain behaviors undefined or unspecified (e.g., the evaluation order of operands), which can lead to divergent results across compilers and platforms: each compiler is free to choose its own order and its own optimizations, so the same program can behave differently in different environments. Avoiding these pitfalls is therefore essential for safety and portability.
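
A minimal sketch of the pitfall (the helper function report is hypothetical): C and C++ do not fix the evaluation order of function arguments, so the program below may legitimately print different results under different compilers:

    #include <cstdio>

    // Before C++17, the two unsequenced increments below are undefined
    // behavior; since C++17 they are indeterminately sequenced, but the
    // order remains unspecified, so compilers may disagree on the output.
    static void report(int a, int b) {
        std::printf("a=%d b=%d\n", a, b);
    }

    int main() {
        int i = 0;
        report(++i, ++i);  // may print "a=1 b=2" or "a=2 b=1"
        return 0;
    }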

[Figure: Requirements from ISO 26262]

Code Coverage: What It Is and Why Safety Requires It

Code coverage measures which parts of software are exercised by your tests. Safety standards tie required coverage levels to risk (ASIL/SIL): the higher the risk, the more rigorous the metric and the more tests are required (up to MC/DC for the most demanding systems).

Coverage tools instrument the source, run test suites, and report parts executed vs. missed—identifying coverage gaps that need targeted tests. Closing those gaps responsibly often requires solver-guided inputs, always backed by solid oracles (expected results) rather than “blind” auto-generated outputs.
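
To make MC/DC concrete, here is a small sketch (the function and its conditions are invented for illustration): for a decision with two conditions, each condition must be shown to independently flip the outcome, which requires three tests rather than all four combinations:

    #include <cassert>

    // Hypothetical decision with two conditions.
    bool brakeAssistActive(bool pedalPressed, bool speedAboveThreshold) {
        return pedalPressed && speedAboveThreshold;
    }

    int main() {
        // Minimal MC/DC test set for "A && B":
        assert(brakeAssistActive(true,  true)  == true);   // baseline
        assert(brakeAssistActive(false, true)  == false);  // A alone flips it
        assert(brakeAssistActive(true,  false) == false);  // B alone flips it
        return 0;
    }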

Methods That Matter: Abstract Interpretation & Symbolic Execution

  • Abstract Interpretation approximates values and states using abstract domains (e.g., value ranges) and flags potential problems as alarms. It mitigates the state space explosion problem through abstraction, at the cost of precision: some findings are only “maybe” alarms, which the user may need to triage manually to determine whether they are false positives (a minimal interval-domain sketch follows this list). Abstract interpretation is not intended to close code coverage gaps.

  • Symbolic Execution explores program paths by treating inputs as symbolic values and forking at conditional statements. It is effective for constructing tests that reach deep paths and for checking assertions via constraint solvers—though it suffers from state space explosion and benefits from pruning and parallelization.
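
To make the interval idea concrete, here is a minimal sketch (the Interval type and helpers are ours, not any particular tool's API): the analyzer tracks each variable as a range, merges ranges at control-flow joins, and raises a “maybe” alarm when a divisor's range contains zero:

    #include <algorithm>
    #include <cstdio>

    // Minimal interval abstract domain: a value is tracked as [lo, hi].
    struct Interval {
        int lo, hi;
    };

    // Abstract addition: [a,b] + [c,d] = [a+c, b+d] (overflow ignored here).
    Interval add(Interval a, Interval b) {
        return {a.lo + b.lo, a.hi + b.hi};
    }

    // Join (least upper bound): merges states where control-flow paths meet.
    Interval join(Interval a, Interval b) {
        return {std::min(a.lo, b.lo), std::max(a.hi, b.hi)};
    }

    // "Maybe" alarm: the range contains zero, but the analysis cannot tell
    // whether zero actually occurs on a feasible path.
    bool mayBeZero(Interval v) {
        return v.lo <= 0 && 0 <= v.hi;
    }

    int main() {
        Interval x = {-5, 5};               // input known to lie in [-5, 5]
        Interval divisor = add(x, {1, 1});  // x + 1 lies in [-4, 6]
        if (mayBeZero(divisor))
            std::printf("alarm: possible division by zero\n");
        return 0;
    }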

Practical Examples

  1. Division by Zero
    A function with 10 / (x + i) shows how analysis surfaces a concrete witness for potential division by zero, while correctly recognizing non‑issues when preconditions guarantee x + i != 0.
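
    A condensed sketch of code that would trigger such a report (names are illustrative):

        // Analysis reports a witness, e.g. x = -3 and i = 3 make x + i == 0.
        int scale(int x, int i) {
            return 10 / (x + i);  // divides by zero when x == -i
        }

        // With a guard establishing x + i != 0, the same expression is safe
        // and a sound analysis raises no alarm.
        int scaleChecked(int x, int i) {
            if (x + i == 0) return 0;
            return 10 / (x + i);
        }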

  2. Assertion‑Driven Debugging in Data Structures
    When inserting a node into a tree, we expect nodes_after = nodes_before + 1. Symbolic checking produced a counterexample: equal‑value handling was missing, exposing an implementation error.
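
    A reconstruction of the kind of defect described (the tree code is our sketch, not the original implementation): an insert that silently drops equal values violates the node-count assertion:

        #include <cassert>
        #include <cstddef>

        struct Node {
            int value;
            Node* left = nullptr;
            Node* right = nullptr;
        };

        // Buggy insert: an equal value matches neither branch and is dropped.
        Node* insert(Node* root, int v) {
            if (!root) return new Node{v};
            if (v < root->value)      root->left  = insert(root->left, v);
            else if (v > root->value) root->right = insert(root->right, v);
            // BUG: the v == root->value case is silently ignored
            return root;
        }

        std::size_t count(const Node* n) {
            return n ? 1 + count(n->left) + count(n->right) : 0;
        }

        int main() {
            Node* root = insert(nullptr, 42);
            std::size_t before = count(root);
            root = insert(root, 42);            // the counterexample: a duplicate
            assert(count(root) == before + 1);  // fails, exposing the missing case
            return 0;
        }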

  3. Closing Coverage in C++ STL sort
    During qualification, the team found stubborn coverage gaps at specific lines. Solver‑guided analysis produced a target input (length‑13 sequence with extreme values) that drove execution through previously untested branches. Heavy compute demands made parallelization key.
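
    A plain illustration of how such a solver-found input becomes a regression test; the concrete values below are invented, since only the shape (13 elements, extreme values) is known:

        #include <algorithm>
        #include <cassert>
        #include <climits>
        #include <vector>

        int main() {
            // Hypothetical solver-generated input: 13 elements mixing extreme
            // values, crafted to steer std::sort through untested branches.
            std::vector<int> v = {INT_MAX, INT_MIN, 0, INT_MAX, -1, 1, INT_MIN,
                                  2, -2, INT_MAX, INT_MIN, 3, -3};
            std::sort(v.begin(), v.end());
            assert(std::is_sorted(v.begin(), v.end()));  // the oracle
            return 0;
        }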

  4. Race Conditions in Multithreaded Code
    In a simple stack with push/pop, analysis flagged hazardous areas where the shared stackPointer is written and then read—creating windows where an interleaving can corrupt state. The remedy is to add proper synchronization around the non-atomic update, eliminating race conditions whose absence the analysis cannot otherwise prove.
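
    A reduced sketch of the hazard and one conventional fix (stackPointer follows the article; the mutex is our choice of synchronization):

        #include <mutex>

        constexpr int kCapacity = 64;
        int stack[kCapacity];
        int stackPointer = 0;  // shared, non-atomic state
        std::mutex stackMutex;

        // Hazard: two threads can observe the same stackPointer, both write
        // the same slot, and both increment, corrupting the stack.
        void pushUnsafe(int v) {
            stack[stackPointer] = v;  // write to the shared slot ...
            ++stackPointer;           // ... then a racy read-modify-write
        }

        // Fix: hold a lock across the non-atomic update so the write and
        // increment appear atomic to other threads.
        void pushSafe(int v) {
            std::lock_guard<std::mutex> lock(stackMutex);
            stack[stackPointer] = v;
            ++stackPointer;
        }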

  5. Casting int to enum: Out‑of‑Range Checks
    Int values from a data bus are cast to an enum {OK, REPEAT, ERROR}. Analysis identified out‑of‑range casts and produced witness inputs (e.g., 3 or very large unsigned values), guiding teams to add robust range checks.
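
    A sketch of the robust conversion such findings motivate (the enum follows the article; the optional-based interface is our choice):

        #include <cstdint>
        #include <optional>

        enum Status { OK = 0, REPEAT = 1, ERROR = 2 };

        // Raw bus values are plain integers; casting blindly (e.g.,
        // static_cast<Status>(3) or a huge unsigned value) produces a value
        // outside the enumeration. A range check rejects such inputs.
        std::optional<Status> toStatus(std::uint32_t raw) {
            if (raw > ERROR) return std::nullopt;  // witnesses like 3 land here
            return static_cast<Status>(raw);
        }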

What Today’s Tools Get Wrong—and What We Need Next

From ongoing tool comparisons:

  • Abstract interpretation doesn’t provide coverage and can generate “maybe” alarms that require manual triage.

  • Undefined behavior remains hard to detect automatically; many rules still rely on review.

  • Real‑world use demands configuration effort, stubs for low‑level libraries, and support for recursion, dynamic memory allocation, and advanced C++ features (ad‑hoc polymorphism, complex overload‑resolution rules).

  • Toolchains may support Clang but not GCC, complicating pipelines.

What we’re building toward:

  • Automated coverage‑problem solving with definitive verdicts (fewer “maybe” alarms).

  • Better detection of undefined behavior.

  • Smarter, more automated configuration.

  • Full C++ support at production scale.

  • Parallelizable approaches for better scalability.

Static Analysis Meets AI

We’re researching ways to integrate static analysis with AI‑generated assertions: let AI propose invariants; use formal methods to prove or refute them. This hybrid approach could raise the quality of AI‑assisted programming while respecting safety constraints.
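
As a toy illustration of that loop (entirely hypothetical): an assistant proposes a loop invariant as an assertion, and a formal checker then proves it or returns a counterexample. A prover might even refute the invariant below for large n because of signed overflow, which is exactly the kind of feedback this hybrid is meant to provide:

    #include <cassert>

    // Computes 0 + 1 + ... + n.
    int sumUpTo(int n) {
        int sum = 0;
        for (int i = 0; i <= n; ++i) {
            sum += i;
            // AI-proposed invariant, handed to a formal checker to prove
            // or refute (it fails once sum overflows for large n).
            assert(sum == i * (i + 1) / 2);
        }
        return sum;
    }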