Most ATCO training units have experience with simulator-based training. Far fewer have designed a complete exercise programme that satisfies the traceability and homogeneity requirements of CBTA as defined in ICAO Doc 9868 and Doc 10056. The two problems look similar on the surface but require fundamentally different processes.
This article walks through the practical steps for designing CBTA-compliant pre-OJT simulator exercises from competency model to audit-ready evidence.
Why pre-OJT is the hardest phase to get right under CBTA
Pre-OJT (the simulator phase before controllers move to live operations) sits at the intersection of two difficult requirements. Exercises must be pedagogically coherent (building complexity progressively, exposing trainees to the right situations at the right moment) and statistically balanced (ensuring every trainee receives equivalent coverage of competencies across the programme).
Under a traditional, experience-based approach, instructors design exercises using institutional knowledge and intuition. The results are often good pedagogically, but they are inherently subjective, difficult to audit, and inconsistent across sites or cohorts. When a regulator asks “how do you know trainee A and trainee B received equivalent training?”, the honest answer is frequently: we don’t, with certainty.
CBTA demands a different answer: one backed by structured data.
Step 1 — Build from the competency model, not from the scenario
The most common design error is starting with scenarios (“what traffic situations do we want trainees to handle?”) rather than with competencies (“what must trainees be able to demonstrate by the end of training?”).
ICAO Doc 9868 structures ATCO competencies around a defined set of units — Air Traffic Control Clearances, Communication, Workload Management, and so on, each with associated performance criteria and behavioural indicators. Eurocontrol’s Common Core Content provides the corresponding catalogue of performance objectives for each rating (TWR, APP, En Route).
The correct starting point is to:
- Select the applicable performance objectives for your rating and operational context.
- Map each objective to the Particular Control Situations (PCS) that provide evidence of its attainment.
- Assign a weighting to each PCS based on its difficulty and its contribution to the overall competency load.
Only once this mapping is in place does exercise design become a tractable problem.
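The mapping itself can be held in a small, versionable data structure. A minimal sketch in Python follows; the PCS codes, objective reference, and weights are invented for illustration, not taken from any official catalogue:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PCS:
    code: str      # e.g. "PCS-APP-07" (invented identifier)
    weight: float  # contribution to overall competency load

@dataclass
class PerformanceObjective:
    ref: str              # catalogue reference, e.g. "PO-3.2" (invented)
    competency_unit: str  # e.g. "Workload Management"
    evidence: List[PCS] = field(default_factory=list)  # PCS evidencing attainment

# One objective mapped to the PCS that provide evidence of it
obj = PerformanceObjective("PO-3.2", "Workload Management")
obj.evidence.append(PCS("PCS-APP-07", weight=2.5))
obj.evidence.append(PCS("PCS-APP-11", weight=1.0))

total_load = sum(p.weight for p in obj.evidence)
```

Storing the mapping in this form (rather than in instructors' heads) is what makes the later steps auditable: every exercise can point back to the objectives it serves.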
Step 2 — Apply constraints to exercise generation
A typical pre-OJT programme for a single rating involves 12 to 20 exercises. Each exercise should contain a limited number of PCS. The challenge is distributing PCS across exercises so that:
- Each PCS appears several times across the programme (ICAO recommends at least three occurrences to allow meaningful progression assessment).
- Control load (the mental effort) is balanced across sessions.
- No exercise is dominated by a single competency area, which would create assessment bias.
Solving this manually for a programme of 15 exercises and 40+ PCS is feasible, but it takes weeks and produces results that are difficult to verify or replicate. When you extend to three ratings (TWR, APP, En Route) and multiple cohorts per year, the combinatorial complexity becomes unmanageable by hand.
This is precisely the problem that EDS solves. EDS’ engine can generate a set of exercises satisfying all constraints in seconds, with full transparency on the distribution and a reproducible result for every cohort.
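To see the shape of the problem, here is a deliberately simplified greedy balancer in Python. It is not EDS’ algorithm (a production engine would solve the full constraint set jointly, for instance as an integer linear programme), and the PCS codes and weights are invented; it only illustrates the occurrence and load-balancing constraints described above:

```python
import heapq

def distribute(pcs_weights, n_exercises, min_occurrences=3):
    """Toy greedy balancer: place each required PCS occurrence into the
    currently lightest exercise. Guarantees every PCS appears at least
    min_occurrences times; keeps per-exercise load roughly even."""
    heap = [(0.0, i) for i in range(n_exercises)]  # (current load, exercise index)
    heapq.heapify(heap)
    assignment = {i: [] for i in range(n_exercises)}
    # heaviest PCS first within each pass, so loads stay balanced
    passes = sorted(pcs_weights.items(), key=lambda kv: -kv[1]) * min_occurrences
    for code, weight in passes:
        load, i = heapq.heappop(heap)
        assignment[i].append(code)
        heapq.heappush(heap, (load + weight, i))
    return assignment

plan = distribute({"PCS-A": 2.0, "PCS-B": 1.0, "PCS-C": 1.0}, n_exercises=4)
```

Even this toy version makes the distribution inspectable: the occurrence counts and per-exercise loads can be printed, checked, and archived, which is exactly the property a manual design cannot offer.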
Step 3 — Design the roadbook and simulator scenario
Once exercises are defined at the PCS level, each exercise needs a concrete scenario: flight plans, aircraft performance profiles, simulation configuration, timing, and ancillary tasks. This is where the pedagogical intent is translated into something the simulator can execute.
Key considerations at this stage:
- Timing accuracy matters. Aircraft separation, conflict geometries, and workload peaks need to occur at the right moment within the session. Automated timing calculation based on aircraft performance and route coordinates reduces errors and cuts scenario design time significantly.
- The transition from exercise design to simulator execution should be as frictionless as possible. A workflow that exports directly to the simulator’s format, rather than requiring manual re-entry of flight plans and logs, eliminates a significant source of delay and error.
- Visual verification of the 2D traffic picture before committing to a session is not a luxury. An exercise that looks balanced on paper can reveal conflicts or dead zones in the traffic flow only when you see the aircraft positions and timings together.
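As an illustration of automated timing, the sketch below computes elapsed time along a route from great-circle distances. The constant ground speed is a stated simplification; real tools derive timing from per-type performance profiles (climb, descent, speed schedules), and the coordinates are arbitrary examples:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) waypoints, in NM."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_NM * asin(sqrt(a))

def seconds_over_route(route, ground_speed_kt):
    """Elapsed time along a route of (lat, lon) waypoints at a constant
    ground speed. Simplification: no climb/descent or speed schedule."""
    dist = sum(haversine_nm(*route[i], *route[i + 1]) for i in range(len(route) - 1))
    return dist / ground_speed_kt * 3600.0

# One degree of latitude is about 60 NM; at 360 kt that is roughly 600 s
t = seconds_over_route([(48.0, -3.0), (49.0, -3.0)], ground_speed_kt=360)
```

Chaining this over every leg of a flight plan is what lets a designer predict, before the session, when two aircraft will meet at a conflict point.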
Step 4 — Structure the assessment to produce audit-ready evidence
CBTA assessment is formative as well as summative. Instructors need to evaluate KSA performance (Knowledge, Skills and Attitudes) during simulator sessions against standardised criteria, not at the end of the programme but continuously, building a longitudinal record of each trainee’s progression.
This has two practical implications. First, assessment forms need to be standardised across all instructors. When different instructors apply different criteria or rating scales, the data loses comparability and its evidentiary value collapses. Second, every assessment event needs to be stored in a way that is searchable, aggregatable, and exportable for oversight purposes.
The xAPI standard (Experience API, standardised by IEEE), paired with a Learning Record Store (LRS), is the appropriate technical infrastructure for this. It captures every training interaction as a structured, timestamped record that can be queried across cohorts, sites, and time periods.
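A single KSA assessment event maps naturally onto one xAPI statement. The sketch below builds such a statement in Python; the activity IRI, trainee address, and exercise identifier are invented placeholders, and a real deployment would define its own IRIs and extensions:

```python
import json
from datetime import datetime, timezone

def ksa_statement(trainee_email, exercise_id, criterion, score, max_score):
    """Minimal xAPI statement for one KSA assessment event.
    The activity IRI below is a placeholder for illustration."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{trainee_email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/scored",
            "display": {"en": "scored"},
        },
        "object": {
            "objectType": "Activity",
            "id": f"https://example.org/exercises/{exercise_id}",  # invented IRI
            "definition": {"name": {"en": criterion}},
        },
        "result": {"score": {"raw": score, "max": max_score,
                             "scaled": score / max_score}},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = ksa_statement("trainee@example.org", "APP-EX-07", "Workload Management", 3, 4)
payload = json.dumps(stmt)  # ready to POST to the LRS statements endpoint
```

Because every statement carries an actor, a timestamped verb, an activity, and a result, queries such as “all Workload Management scores for this trainee across the programme” become straightforward LRS lookups rather than manual file searches.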
| Evidence required by regulators | What it requires operationally |
|---|---|
| Alignment between objectives, exercises and competencies | Documented PCS-to-objective mapping, stored and versioned |
| Equitable training across trainees | Mathematical exercise generation with auditable PCS distribution |
| Consistent assessment criteria | Standardised KSA forms, same criteria across all instructors |
| Longitudinal trainee progression records | xAPI-compliant LRS with per-trainee and cohort dashboards |
| Multi-site harmonisation | Centralised synchronisation of exercises and assessment data |
The audit preparation problem
Under traditional approaches, preparing for a regulatory audit of simulator training typically takes three to four weeks. That time is spent collecting exercise records from disparate sources, reconstructing the logic behind training decisions, and assembling documentation that was never designed with audit review in mind.
With a CBTA-compliant digital workflow where the mapping from objective to exercise to assessment is captured natively, audit preparation becomes a reporting task rather than a reconstruction task. The evidence already exists in structured form. Preparation time drops to a matter of days.
“The use of EDS has been particularly beneficial for aligning our training objectives with ICAO, EASA and DIRCAM recommendations, and structuring our programming methodology in a clear, rigorous and efficient way.”
— Major David Le Luyer, Head of ATC Training Unit, Base aéronavale de Lann-Bihoué, Marine Nationale
Where to start
If your training unit is beginning its CBTA implementation, the highest-leverage first step is a competency gap analysis: map your existing exercises against the PCS catalogue for your ratings, and identify where coverage is missing, unverified, or inconsistent across instructors. That mapping will define the scope of the redesign needed, and it will be the foundation of everything that follows.
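That first pass can be sketched in a few lines of Python. The exercise and PCS identifiers below are invented, and the three-occurrence threshold follows the recommendation cited earlier in this article:

```python
from collections import Counter

def gap_analysis(exercise_pcs, catalogue, min_occurrences=3):
    """Compare existing exercise coverage against the PCS catalogue.
    Returns PCS absent from every exercise, and PCS that are present
    but appear fewer times than the programme requires."""
    counts = Counter(code for codes in exercise_pcs.values() for code in codes)
    missing = sorted(set(catalogue) - set(counts))
    under = sorted(c for c in catalogue if 0 < counts[c] < min_occurrences)
    return missing, under

missing, under = gap_analysis(
    {"EX-01": ["PCS-A", "PCS-B"], "EX-02": ["PCS-A"], "EX-03": ["PCS-A"]},
    catalogue=["PCS-A", "PCS-B", "PCS-C"],
)
```

The output of this analysis is, in effect, your redesign backlog: missing PCS need new exercises, and under-represented PCS need additional occurrences before progression can be assessed meaningfully.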
EDS handles every step described in this article, from competency-objective mapping and ILP-based exercise generation to KSA assessment, xAPI/LRS storage, and multi-site synchronisation. It is designed specifically for ATCO simulator training and aligned with ICAO Doc 9868, Doc 10056 and EU Regulation 2025/2143.
