Most ATC training units run simulator programmes that work. ATCOs qualify, safety records hold, instructors know their job. When an ICAO USOAP audit finds a training-related non-conformity, it’s rarely because the training was inadequate.

The finding is almost always about evidence. An inspector asks: how do you know trainee A and trainee B received equivalent coverage of the same competencies, assessed by the same criteria, against the same performance objectives?
The answer, in most organisations, is “we believe they did.”

That answer doesn’t survive scrutiny.

The Universal Safety Oversight Audit Programme assesses member states against ICAO SARPs, including Annex 1 and the procedures in Doc 9868 and Doc 10056. In the training domain, auditors look for explicit traceability: each simulator exercise mapped to specific performance objectives, PCS distribution intentional and documented, KSA (knowledge, skills, attitudes) assessments consistent across instructors, evidence structured well enough to be inspected without a three-week preparation sprint.

Few organisations have built that infrastructure yet.

Why the methodology isn’t enough

Adopting CBTA as a methodology (restructuring training around competency units, performance criteria, and behavioural indicators) is necessary. It isn’t sufficient.

ICAO Doc 9868 specifies not just what a competency-based programme should contain, but how its effectiveness should be documented. Doc 10056 addresses the assessment dimension. Together, they require traceability, consistency, and longitudinal records.

Traceability means the link from competency objective to PCS to simulator exercise is explicit, versioned, and visible to an auditor. Consistency means two trainees completing the same rating receive equivalent training demonstrably, not just intuitively. Longitudinal records mean KSA assessment data is captured continuously throughout training, not reconstructed from memory before an inspection.

For most training units, exercise design is still experience-driven. Instructors build good programmes based on years of operational knowledge, but the logic lives in their heads, not in a queryable database. Assessment forms exist but aren’t standardised across instructors. Evidence is scattered across paper binders, local spreadsheets, and email threads.

None of that survives serious audit scrutiny.

The multi-site problem

For ANSPs managing training across multiple control centres (common in Southeast Asia, the Gulf states, and Sub-Saharan Africa), USOAP compliance adds a harder layer.

Inspectors look at whether a controller completing TWR training at one site received training equivalent to one completing it 500 kilometres away. Without centralised management of exercises, assessments, and progression data, that comparison can’t be made objectively. Harmonised outcomes across sites aren’t a nice-to-have. They’re a compliance requirement.

The organisations that handle this best don't treat it as a documentation problem. They treat it as an infrastructure problem. The solution isn't more paperwork; it's a system that generates consistent exercises and captures consistent evidence by design.

What audit-ready infrastructure actually requires

Four components need to be in place before a USOAP audit becomes a reporting exercise rather than a reconstruction one.

First, a documented mapping from performance objectives to PCS to simulator exercises. This is the evidentiary backbone. Without it, no other piece of evidence holds.
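To make "explicit, versioned, and visible to an auditor" concrete, here is a minimal sketch of what such a mapping can look like as a queryable structure. All identifiers (objective, PCS, and exercise IDs) are hypothetical placeholders, not taken from any real syllabus.

```python
# Hypothetical sketch of a versioned traceability map:
# performance objective -> PCS item -> simulator exercise.
from dataclasses import dataclass

@dataclass(frozen=True)
class TraceLink:
    objective_id: str   # performance objective (illustrative ID)
    pcs_id: str         # PCS item through which it is trained
    exercise_id: str    # simulator exercise delivering it
    version: str        # mapping version, so history is inspectable

TRACE_MAP = [
    TraceLink("PO-TWR-04", "PCS-12", "EX-2024-031", "v3"),
    TraceLink("PO-TWR-04", "PCS-15", "EX-2024-044", "v3"),
    TraceLink("PO-APP-09", "PCS-12", "EX-2024-031", "v3"),
]

def exercises_covering(objective_id: str) -> set[str]:
    """The auditor's question: which exercises cover this objective?"""
    return {l.exercise_id for l in TRACE_MAP if l.objective_id == objective_id}
```

The point is not the data format but the property it buys: the coverage question becomes a query rather than an interview with whichever instructor designed the exercise.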

Second, structured exercise generation sets built according to explicit constraints around PCS frequency, control load balance, and competency coverage. The generation logic needs to be auditable itself, not just the outputs.
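"Auditable generation logic" can be as simple as encoding the constraints and recording which ones a generated set violates. A sketch, with illustrative thresholds and field names:

```python
# Hypothetical sketch: validating a generated exercise set against an
# explicit PCS-frequency constraint, so the generation logic itself
# leaves an auditable trail. Thresholds and IDs are illustrative.
from collections import Counter

def validate_exercise_set(exercises, required_pcs, min_occurrences=2):
    """Return human-readable constraint violations for a generated set."""
    freq = Counter(pcs for ex in exercises for pcs in ex["pcs_items"])
    violations = []
    for pcs in required_pcs:
        if freq[pcs] < min_occurrences:
            violations.append(f"{pcs}: seen {freq[pcs]}x, need {min_occurrences}")
    return violations

batch = [
    {"id": "EX-01", "pcs_items": ["PCS-12", "PCS-15"]},
    {"id": "EX-02", "pcs_items": ["PCS-12"]},
]
report = validate_exercise_set(batch, ["PCS-12", "PCS-15", "PCS-20"])
```

Storing these validation reports alongside the exercise sets is what turns the outputs from "we believe the coverage is right" into evidence.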

Third, standardised KSA assessment forms applied consistently across all instructors. When different instructors apply different criteria, the data loses comparability and its evidentiary value collapses.
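Standardisation here mostly means a single fixed schema that every instructor's form must satisfy. A minimal sketch, assuming an illustrative 1-to-5 scale and field names:

```python
# Hypothetical sketch: one shared assessment schema so scores from
# different instructors stay comparable. Scale and criteria names
# are illustrative, not taken from any real form.
KSA_CRITERIA = ("knowledge", "skills", "attitudes")
SCALE = range(1, 6)  # 1-5, identical for every instructor and site

def record_assessment(trainee: str, instructor: str, scores: dict) -> dict:
    """Reject any form that deviates from the shared schema."""
    missing = [c for c in KSA_CRITERIA if c not in scores]
    off_scale = [c for c, v in scores.items() if v not in SCALE]
    if missing or off_scale:
        raise ValueError(f"missing={missing}, off_scale={off_scale}")
    return {"trainee": trainee, "instructor": instructor, **scores}

entry = record_assessment("Trainee A", "Instructor 7",
                          {"knowledge": 4, "skills": 3, "attitudes": 5})
```

Rejecting malformed forms at capture time is what keeps the longitudinal dataset comparable; cleaning it retroactively before an audit is exactly the reconstruction work this infrastructure is meant to eliminate.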

Fourth, historical pedagogical data storage. All training events (exercise completions, assessment scores, progression decisions) stored as structured, timestamped records in a Learning Record Store that can be queried across cohorts, sites, and time periods.
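The natural record format for an LRS is an xAPI statement: actor, verb, object, result, timestamp. A sketch of what one exercise completion looks like as structured data (the actor, exercise URL, and score are illustrative):

```python
# Hedged sketch: an xAPI-style statement for one exercise completion,
# the kind of timestamped record an LRS stores and can later query
# across cohorts and sites. Actor, object, and score are illustrative.
from datetime import datetime, timezone
import json

statement = {
    "actor": {"name": "Trainee A", "mbox": "mailto:trainee.a@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.org/exercises/EX-2024-031",
               "definition": {"name": {"en-US": "TWR mixed-traffic exercise"}}},
    "result": {"score": {"scaled": 0.82}},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
serialized = json.dumps(statement, indent=2)
```

Because every event shares this shape, "compare this cohort's progression at site A against site B over the last two years" becomes a query over the store rather than an archaeology project.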

When these four components are in place, audit preparation changes character. The evidence already exists. Producing it becomes a matter of hours.

A note on timing

USOAP audits don’t follow a predictable calendar. Notification arrives with limited lead time. For training units relying on manual processes and fragmented documentation, that triggers a scramble that consumes instructor bandwidth and creates genuine risk.

The organisations best positioned for USOAP scrutiny treat audit-readiness as an operational baseline rather than a periodic exercise. That posture requires infrastructure, and the earlier it's built, the more data it captures.

How EDS addresses this

EDS is a cloud-based platform built specifically for CBTA-compliant ATCO simulator training, aligned with ICAO Doc 9868, Doc 10056 and EUROCONTROL's Common Core Content.

It handles the four components of audit-ready infrastructure natively: competency-objective mapping, constraint-based exercise generation, standardised KSA assessment, and xAPI/LRS storage with multi-site synchronisation.

At the Base aéronavale de Lann-Bihoué (Marine Nationale, France), EDS has reduced exercise creation time by a factor of three while producing fully traceable, audit-ready training records since August 2024.