Part 3: Information

Civil Aviation Authority: Certification and surveillance functions.

3.1
The CAA’s regulatory role involves making decisions on, for example:

  • when an operator has satisfied the requirements for certification;
  • the depth and frequency of surveillance required to ensure that operators are complying with the Act and the CARs; and
  • at what point an operator has sufficient non-compliance and/or non-conformance with the Act and the CARs to warrant regulatory sanctions.

3.2
To be effective in its regulatory role, the CAA must make sound decisions. Good decision-making depends on good information, and on good analysis of that information, so that it can, if necessary, lead to action.

3.3
In this Part, we report on:

  • the sources of information available to the CAA;
  • the CAA’s analysis of industry information; and
  • the CAA’s analysis of operator information.

Sources of information

3.4
The CAA collects a large quantity of data in its Aviation Safety Management System, most of it reported by participants in the civil aviation system (pilots-in-command, owners, operators, air traffic controllers, and others). For example:

  • aircraft owners are required to provide their aircraft’s flying hours annually (for private owners) or quarterly (for commercial owners);
  • pilots-in-command (or, if they are unable to, operators) must notify the CAA of the details of all accidents, as well as any aircraft and airspace incidents they have been involved in;
  • operators must get approval from the Director for changes in key personnel or the scope of their operations, including any changes of aircraft;
  • air traffic controllers must advise the CAA of any aircraft or airspace incidents they have been involved in (for example, misidentification of an aircraft by a radar operator) or are aware of (for example, undershooting, over-running, or running off the edges of runways);
  • the Transport Accident Investigation Commission reports on the results of its investigations;
  • the Aviation Security Service reports on security incidents; and
  • members of the public and the industry may also lodge complaints against operators (Aviation Related Concerns).

3.5
The CAA also gathers a large amount of information from its certification and surveillance functions, and from its own investigations of accidents and incidents and Aviation Related Concerns reported to it. However, in order for this information to be useful, it has to be analysed so it can lead to action if necessary. Action may involve changes to the CARs, education programmes that target high-risk areas of the industry, or additional audits/inspections of individual operators.

Analysis of industry information

Aviation Safety Report

3.6
The CAA provides information about the civil aviation industry in its Aviation Safety Report (6-monthly) and Aviation Safety Summary Report (quarterly). These reports are produced from data in the CAA’s Aviation Safety Management System, and provide a snapshot of the size, shape, and activities of the civil aviation industry in New Zealand. They also allow the safety performance of each STG to be measured against the safety targets.

3.7
The Aviation Safety Report is the more detailed of the 2 reports, and contains:

  • industry activity statistics – for example, the number of registered aircraft, the number and type of licences, the number of movements at aerodromes (including takeoffs, landings, and missed approaches), the number of air transport flights, and total hours flown;
  • trends over time – for example, in aircraft accidents, airspace and defect incidents, and how these compare to the safety targets, including a brief description of serious and significant events; and
  • where the factors causing accidents have been assigned, an analysis of them by aircraft group and by aircraft flight operations.

3.8
The Aviation Safety Report informs CAA managers of the outcomes of the CAA’s safety programme. Concerns over the reliability of the data that operators report to the CAA have been discussed earlier (see paragraphs 2.39-2.43).

3.9
The Aviation Safety Report could be improved by:

  • Including more interpretative analysis of the information in it, making it a basis for future action. CAA Safety and Analysis staff agree that the reports would be more useful if they contained recommendations based on an analysis of that information.
  • Improving its timeliness. In the past, the information has been up to 12 months old before being received by CAA managers. For example, the report for the 6 months to 31 December 2002 was not published until November 2003. We noted an improvement, in that the report for the 6 months to 31 December 2003 was produced in June 2004. However, to be useful, this time lag needs to be further reduced.

Analysis of accidents and incidents

3.10
In our view, the CAA needs to improve its analysis of accident and incident data. For the period 1 July 2002 to 31 December 2003, for example, causal factors were assigned to only 37% of air accidents. We consider that this figure is low, given that accidents are “failures” of the safety system. We believe that the causes of these failures should be investigated to determine whether the CARs need to be changed, or surveillance tailored to address identified risk areas in relation to particular types of operators or functions.

3.11
At its April 2004 meeting, the Authority questioned why only 33% of accidents investigated for the period 1 October 2002 to 30 September 2003 had causes assigned.14 CAA staff responded that this was a “fairly typical” figure, and that it was largely a result of investigator workload.

3.12
The other main reasons given for causal factors not being assigned to accidents were:

  • 30% were sport-related (including hang gliders and parachutes). These accidents were assigned a lower priority for investigation. Basic information was “captured” but no causal factors were assigned.
  • 15% were not investigated in sufficient depth to determine causal factors. These involved a management judgement call, ensuring that resources were not deflected from cases that had greater potential for safety improvements.
  • 10% were still under investigation when the Aviation Safety Report was produced.

3.13
The Authority was also concerned that accident and investigation findings were not being fed back into the surveillance process. In the case of a fatal accident report, for example, the Authority asked:

  • Should the CAA establish a more rigorous checking process for organisations undertaking single pilot IFR operations?
  • How could the CAA deal constructively with anecdotal concerns relating to organisations and individual operators?
  • What can be done to raise the levels of experience in the sector?

3.14
We share the Authority’s concerns. The CAA advised us that these matters are included in the training courses it conducts. However, we consider that they should also be used to better target routine audits in areas of higher risk.

Analysis of operator information

3.15
In addition to requiring operators to submit information and statistics about their operations, the CAA collects information on individual operators, primarily through its certification and surveillance functions. To enable a risk profile to be established for each operator, the CAA has developed 3 tools:

  • Client Risk Assessments;
  • the Non-Compliance Index; and
  • the Quality Index.

Client Risk Assessments

3.16
Client Risk Assessments are produced by the CAA’s Safety and Analysis Unit. Each assessment considers 9 factors that the CAA has identified as affecting the potential risk inherent in an operator’s business and operational environment. These factors are:

  • Operator profile – the risk inherent in what an operator does (for example, a single pilot flying on instruments is a higher risk than a light twin-engine aircraft operating in visual flight conditions).
  • Operator type – looks at the type of operation (for example, unscheduled would be higher risk than scheduled).
  • Operator management – one person covering more than one senior position is considered potentially higher risk.
  • Management stability – weights how long the management of the operation has been in place (for example, a change in management is considered to increase risk, if only in the short term).
  • Operational stability – weights how long the operator has been doing the job with the current equipment (for example, the introduction of new or a different type of aircraft or opening a new base of operation is considered to increase risk).
  • Occurrence evaluation – looks at the number of incidents and accidents an operator has had (for example, the higher the number of occurrences, the higher the risk).
  • Financial status – scored if the operator has not paid money owing to the CAA within the required time (money owing indicates higher risk).
  • The latest Quality Index score – the lower the score, the higher the risk.
  • The current Non-Compliance Index score – the higher the score, the higher the risk.

3.17
The risk assessment results in a score that indicates whether the operator is a “low”, “moderate”, “high” or “very high” risk.
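As an illustration only, paragraphs 3.16 and 3.17 can be read as a simple scoring scheme: each of the 9 factors contributes to an overall score, which is then banded. The sketch below is a hypothetical rendering of that idea; the factor scales, weights, and band thresholds are invented for illustration and are not the CAA’s actual values.

```python
# Hypothetical sketch of a Client Risk Assessment-style scoring scheme.
# The nine factor names come from paragraph 3.16; every scale, weight,
# and threshold below is an assumption made for illustration only.

FACTORS = [
    "operator_profile", "operator_type", "operator_management",
    "management_stability", "operational_stability", "occurrence_evaluation",
    "financial_status", "quality_index", "non_compliance_index",
]

def risk_band(factor_scores: dict) -> str:
    """Sum nine factor scores (assumed here to each be 0-10) and map the
    total to the bands described in paragraph 3.17."""
    total = sum(factor_scores[name] for name in FACTORS)
    if total < 25:          # illustrative thresholds only
        return "low"
    if total < 45:
        return "moderate"
    if total < 65:
        return "high"
    return "very high"

# Example: a mostly stable operator that has introduced a new aircraft type.
example = {name: 2 for name in FACTORS}
example["operational_stability"] = 7   # new aircraft type increases risk
example["occurrence_evaluation"] = 5
print(risk_band(example))              # "moderate"
```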

3.18
Client Risk Assessments are generated either when inspectors request them, or when there is a change to any of the 9 key items (for example, a change in the operation, or any new accidents, incidents, or other occurrences reported by or about the operator) that increases the risk rating to a “moderate” or higher grading. The reason for the increased risk is then reviewed and appropriate action taken if the inspectors consider it necessary.

3.19
During the trial phase in mid-2000, an operator criticised the process, noting that some changes (such as replacing a senior staff member with a better skilled and more experienced person) might actually reduce risk, whereas the current system resulted in an increased risk rating. The then Director replied that he recognised the relative lack of precision, the difficulty in weighing and balancing some of the factors, and the fact that some – perhaps many – changes are likely to prove positive after a short period of instability or disruption. He went on to say that the risk assessment scores are not intended to be acted on in their “raw” form, but to be a simple “flag” for operators that may require further attention.

3.20
This view still prevails today. CAA staff we interviewed found the assessments to be of limited use, and most thought the system too unsophisticated to measure risk effectively. The General Manager of the Airline Group thought the present system of risk assessment could be substantially improved by the addition of data provided by the client airline. This additional data, which could include information such as financial and “on time” performance, would give the CAA a more focused and immediate assessment of operator risk.

3.21
To improve the quality of the assessments and increase staff confidence in them, we consider that they should be used to highlight operational changes, but that the details should then be given to the inspectors (who have a more detailed knowledge of the operator) so that they can assess what impact the changes have had on operator risk. Rather than the system calculating risk, it would be the inspectors’ responsibility to assign an overall risk score, which would then be recorded in the CAA’s Management Information System.

3.22
Client Risk Assessments should also better reflect the operator’s financial condition. Currently, financial risk is based on whether the operator has paid the CAA’s fees (including any CAA surveillance fees), but cash-flow shortages increase the risk that discretionary costs (for example, maintenance, training, and replacing or upgrading aircraft) will be deferred. Potentially, cash-flow shortages also increase the pressure for operators/pilots to fly in marginal weather conditions, or at the limit of, or beyond, their capability.

3.23
Financial risk should be assessed as part of the certification and surveillance functions. Both of these functions should include a discussion with an organisation’s Chief Executive about:

  • intended/planned expansion or retrenchment in the organisation;
  • the organisation’s financial position at the end of the previous year;
  • the cash budget for the current year, and how the organisation is currently performing against that budget;
  • any strategies in place to improve cash flow within the organisation, and the likelihood of their success; and
  • any other business risks facing the organisation – for example, competitors coming into the locality, changes to the scale of competitor operations, or the availability of a qualified maintenance engineer.

3.24
In 2004, the CAA commenced a review of its Surveillance Policy and related processes, and has advised us that more emphasis will be placed on the Client Risk Assessments in the future, to determine the type and extent of surveillance. This tool is seen to be the most comprehensive of the 3 tools, in that it includes the current Non-Compliance Index score and the latest Quality Index score, as well as the 7 other factors (set out in paragraph 3.16) which are continually monitored.

Non-Compliance Index

3.25
The Non-Compliance Index (NCI)15 weights instances of non-compliance identified by either the operators or the CAA over a 12-month period. The combined weights are then divided by the latest number of CAA routine audit hours16 completed for the particular operation. The index is designed to rank operators in their respective sector groups.
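Read with footnotes 15 and 16, the calculation can be sketched as follows. This is an illustration only, using the severity weights from footnote 15; any further scaling that the CAA may apply to produce its published index values is not described in this report.

```python
# Minimal sketch of the Non-Compliance Index described in paragraph 3.25.
# Severity weights are those in footnote 15; normalising by routine audit
# hours follows footnote 16. Any additional scaling factor the CAA applies
# to its published figures is unknown and not modelled here.

SEVERITY_WEIGHTS = {"critical": 30, "major": 2, "minor": 1}

def non_compliance_index(findings, routine_audit_hours):
    """Weight the instances of non-compliance recorded over 12 months and
    divide by the routine audit hours completed for the operation."""
    if routine_audit_hours <= 0:
        raise ValueError("routine audit hours must be positive")
    weighted_total = sum(SEVERITY_WEIGHTS[severity] for severity in findings)
    return weighted_total / routine_audit_hours

# Hypothetical comparison echoing the effect discussed in paragraph 3.27:
# the operator with more major findings can still receive the lower index
# if it has accumulated more routine audit hours.
club_a = non_compliance_index(["major"] * 6 + ["minor"] * 3, routine_audit_hours=4.0)
club_b = non_compliance_index(["major"] * 10 + ["minor"] * 3, routine_audit_hours=19.75)
print(round(club_a, 2), round(club_b, 2))   # 3.75 1.16
```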

3.26
We consider that the under-reporting by inspectors of instances of non-compliance and their routine audit hours (see paragraphs 6.2-6.16 and 7.3), and the under-reporting by operators of instances of non-compliance (see paragraphs 2.39-2.43), affect the accuracy of this tool and therefore reduce its effectiveness.

3.27
We noted from the 31 December 2003 Aviation Safety Report that 2 aero clubs of similar size received significantly different NCI scores. One scored 780 and the other 226.5. The club that scored 226.5 had 4 more instances of “major non-compliance” than the other club, but its NCI score was lower because it had 15.75 more routine audit hours. So the aero club that appeared to be the higher risk (because a larger number of major non-compliances were identified) actually had a lower NCI score because the routine audit hours skewed the results.

3.28
Staff from the Safety and Analysis Unit have already recognised this concern, and acknowledge that a better measure is needed to reflect the size of an organisation for the purposes of the NCI.

3.29
The number of instances of non-compliance is also likely to be understated, because the CAA:

  • Relies on operators to advise it of all instances of non-compliance. During our audit, CAA staff advised us that, although operators are required and encouraged to report non-compliance, this does not necessarily happen.
  • Requires its inspectors to identify and report instances of non-compliance. During our audit, we detected instances where this did not occur (see paragraphs 6.2-6.16), and our conclusion is supported by the CAA’s internal auditors’ findings (see paragraph 6.17).

Quality Index

3.30
The Quality Index (QI) was introduced in response to the 1998 Ministerial Inquiry, which recommended that inspectors record a “level of confidence” in a certificate-holder’s adherence to the CARs and its exposition.

3.31
The QI score is a qualitative rating based on the audit work done and observations made during the audit. The QI requires inspectors, as part of their routine audit, to assess and rate the organisational culture and internal functioning17 of each part of the organisation.

3.32
For operators in the General Aviation sector, the QI score can be determined for both flight operations and maintenance, which would result in 2 QI scores. For the Airline sector, where operators are audited under Customised Audit Programmes (see paragraph 5.43), a QI score is determined for each module.

3.33
The CAA’s Quality Index Policy requires that the information in the report to the Director on the results of the audit (the audit report) must support the QI score. A copy of the audit report to the Director, together with the QI scores, is also given to operators in the General Aviation sector. Operators in the Airline sector are given QI scores only on request, as the CAA considers that the independent audit modules make an overall QI score difficult to calculate.

3.34
The CAA is concerned that QI scores have become a quasi-performance measure for operators in relation to staff performance, and that they are also being used in promotional material. The scores were never intended to be used for either purpose.

3.35
Under the Surveillance Policy, the QI score should influence both the depth and frequency of surveillance. For example, for a QI score of 30 or less, the organisation is to be referred directly to exit control for further investigation with a view to certification action. For QI scores of less than 65, inspectors are to consider special purpose audits, more frequent audits, intensive spot checks, and Chief Executive interviews. Scores of 65-80 indicate that the organisation is at a satisfactory level of compliance and is likely to remain so, and therefore needs to be subject only to routine audits and spot checks. For scores of 80 or more, inspectors can consider reducing the depth and frequency of future routine audits.
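A minimal sketch of those bands is set out below. The thresholds are the ones quoted in paragraph 3.35; how a score that falls exactly on a boundary is treated is an assumption, as the policy wording does not make it explicit.

```python
# Sketch of the surveillance responses summarised in paragraph 3.35.
# Band boundaries follow the paragraph; exact boundary handling (for
# example, a score of exactly 65 or 80) is an assumption.

def surveillance_response(qi_score: float) -> str:
    """Map a Quality Index score to the surveillance action suggested by
    the CAA's Surveillance Policy, as summarised in this report."""
    if qi_score <= 30:
        return "refer directly to exit control with a view to certification action"
    if qi_score < 65:
        return ("consider special purpose audits, more frequent audits, "
                "intensive spot checks, and Chief Executive interviews")
    if qi_score < 80:
        return "satisfactory compliance: routine audits and spot checks only"
    return "consider reducing the depth and frequency of future routine audits"

print(surveillance_response(72))   # routine audits and spot checks only
```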

3.36
We reviewed 36 QI scores and audit reports prepared by General Aviation Group inspectors. We noted that the Group’s Policy and Procedures document required the QI score to be incorporated within the audit report, and to be supported by information contained in that report.

3.37
We found that neither of these procedures had been followed. The QI scores were included in the letter to the document-holder, along with a copy of the audit report, but were not incorporated in the audit report itself. As a result, there is a risk that the Director, for whom the formal audit report is prepared, does not receive the QI scores and therefore does not have access to the inspector’s assessment of the document-holder’s organisational culture (that is, the likelihood that an organisation will remain compliant with the CARs).

3.38
More importantly, 35 of our sample of 36 audit reports did not contain sufficient information to support the QI scores. This lack of analysis and support may contribute significantly to the inconsistency of QI scores (both for the same operator over time and between different operators). Providing the required support and analysis will not necessarily increase the length of the report; for example, the report we identified as providing the best linkage was no longer than the average report.

3.39
In one example of inconsistency, an operator’s scores went from 62% to 71% for flight operations and from 64% to 76% for maintenance in one year. The audit report, however, did not explain the increase. Another operator advised us that, although his practices had not changed, his QI score had increased by 11 percentage points over the year. We reviewed the respective audit reports for this operator and found that they did not explain the increase.

3.40
Inconsistency in scoring, and the lack of explanation of the scores in audit reports, have reduced the effectiveness of the QI. Overall, the operators we spoke to did not consider the QI score helpful. In fact, one operator commented that it did not make him any safer. However, operators acknowledged that some sort of ranking was needed – as long as it was supported by feedback on how they could do better.

Recommendation 2
We recommend that the CAA improve its analysis of industry information by:
  • including more analysis of the information in the Aviation Safety Report and the Aviation Safety Summary Report to support further action, and improving the timeliness of these reports; and
  • improving its analysis of accident and incident data – for example, by identifying further opportunities (such as the CAA’s joint study of pilot-caused and controller-caused airspace incidents18) from which the CAA can draft recommendations for safety intervention mechanisms.
Recommendation 3
We recommend that the CAA further develop the tools it uses to assess the risks associated with individual operators. For example:
  • For the Non-Compliance Index to be more effective, CAA inspectors need to correctly record all instances of non-compliance, as well as the actual audit hours spent with each operator. Operators need to be further encouraged to advise the CAA of instances of non-compliance.
  • For the Quality Index score to be more consistent, it should be supported by the information in the routine audit report, and reasons for significant changes should be explained.
  • For Client Risk Assessments to be more useful to the surveillance process, the CAA needs to re-assess their function. These assessments identify changes to a company’s operation, but not necessarily changes to risk. We recommend that this tool be used to highlight any changes in the company’s operations for inspectors, who would then be responsible for assessing the effect of those changes on the risk of an individual operator.
Recommendation 4
We recommend that the CAA use better indicators of the financial status of operators when assessing operator risk, both at certification and during surveillance.

14: As reported in the October-December 2003 Aviation Safety Summary Report.

15: Each instance of non-compliance is scored for relative severity as critical (30 points), major (2 points) and minor (1 point).

16: Routine audit hours are used to “normalise” the data so that different-sized organisations can be compared, on the basis that the number of audit hours is directly related to the size of the organisation.

17: The following 10 areas are assessed:

  • management and staff attitude towards safety;
  • clarity of quality management system;
  • documentation;
  • facility suitability and upkeep;
  • tools/equipment/materials;
  • adherence to standards and specifications;
  • personnel skills, knowledge and numbers;
  • control/management system effectiveness;
  • corrective and preventative actions; and
  • inspector assessment.

Each area is marked using a scale of 1 to 10, with 10 being exemplary. When 1 of the 10 areas is ‘not observed’, and is therefore not scored, the total raw score is scaled to achieve a final score out of 100.
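A minimal sketch of that scaling rule, assuming an unobserved area is simply left out of the raw total, might look like this:

```python
# Sketch of the Quality Index scaling described in footnote 17: ten areas
# are each marked from 1 to 10, and the raw total is scaled up to a score
# out of 100 when an area is 'not observed'. Representing an unobserved
# area as None is an assumption made for this illustration.

def quality_index(area_scores):
    """area_scores: ten entries, each 1-10 or None for 'not observed'."""
    scored = [s for s in area_scores if s is not None]
    if not scored:
        raise ValueError("at least one area must be observed")
    raw_total = sum(scored)
    # Scale so the maximum achievable remains 100 despite unscored areas.
    return raw_total * 100 / (10 * len(scored))

# Example: nine areas observed, one not observed.
print(round(quality_index([8, 7, 9, 6, 8, 7, None, 8, 9, 7]), 1))   # 76.7
```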

18: A joint study was undertaken with the Centre for Transport Studies, Imperial College London. A report on this study was published in The Aeronautical Journal (Royal Aeronautical Society), May 2004, entitled Airspace safety in New Zealand: A causal analysis of controller caused airspace incidents between 1994-2002.
