Part 6: Measuring and Reporting Performance


Introduction

6.1
In this part we examine:

  • performance against the Purchase Agreement;
  • the measurement of discrepancies attributed to taxpayer audit;
  • the effect of performance targets on case selection;
  • the extent of taxpayer compliance;
  • internal reporting; and
  • the Quality Measurement System.

Performance Against the Purchase Agreement

6.2
We examined performance in terms of the numbers of audits and audit hours, and the rates of return for taxpayer audits, for the three years to June 2002. Under-performance in some years and activities was matched by over-performance in others. For example, in the year to June 2002, audit hours delivered by the Corporates Division matched planned activity. In the previous three years there had been some under-delivery (the largest shortfall, 11.1%, occurring in 2000), matched by over-delivered hours for compliance risk analysis.

6.3
Rates of return per hour have been running at a much higher level in 2002-03 than in the previous year. Some of the difference is likely to reflect the difficulty of predicting the year in which returns will fall for audits of aggressive tax issues and for audits by the Corporates Division.

6.4
Overall, the IRD’s taxpayer audit has delivered what was agreed with the Government, and its reporting against output measures was in line with the Purchase Agreement. However, these measures provide only a weak indicator of whether the IRD is meeting its legislative obligation to collect ‘the highest net revenue over time’.

6.5
The IRD is currently identifying the changes required to move towards a focus on the delivery of outcomes, and this work will include an examination of the performance targets for taxpayer audit and how these might be improved.

Measuring Discrepancies Attributed to Taxpayer Audit

6.6
We identified a number of examples in which the discrepancies attributed to taxpayer audit included amounts where there was little, if any, likelihood of the additional tax revenue actually being collected or of a claimed refund being paid out. All related to results that have been counted in the IRD’s reports of performance.

6.7
The examples included:

  • Reassessment of carried-forward losses where the taxpayer was unlikely in any event to have a sufficient future surplus to make use of the losses.
  • Imputation credit account adjustments where the likelihood of the taxpayer being able to declare a dividend was minimal.
  • Recording discrepancies that were valid under previous legislation but are no longer valid.
  • Counting the full adjustment when the discrepancy is only a matter of timing – where the taxpayer is in default because they have claimed tax deductions in the wrong (earlier) period. (Unless differing tax rates apply in the two periods, the real value of the discrepancy is only the time value of the monetary advantage that the taxpayer would otherwise have enjoyed – see the sketch after this list.)
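
To illustrate the timing point, the following sketch compares the full adjustment that is currently counted with the much smaller time value that represents the real discrepancy. The deduction amount, tax rate, and interest rate are hypothetical figures chosen purely for illustration:

    # Hypothetical illustration: a $100,000 deduction claimed one year
    # too early, with a 33% tax rate applying in both periods.
    deduction = 100_000
    tax_rate = 0.33
    interest_rate = 0.07                          # assumed one-year cost of money

    full_adjustment = deduction * tax_rate        # what is counted: $33,000
    # The real value is only the one-year monetary advantage on that tax.
    time_value = full_adjustment * interest_rate  # about $2,310

    print(f"Full adjustment counted:  ${full_adjustment:,.0f}")
    print(f"Time value (real effect): ${time_value:,.0f}")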

6.8
In addition, the IRD has a policy of recognising the gross amount of voluntary disclosures as discrepancies. For example, if a taxpayer makes a voluntary disclosure of $100 but, after an audit to verify the accuracy of the disclosure, the correct amount of tax owing is $120, the IRD claims $120 as the audit discrepancy. In our view, only the adjustment to the amount of the voluntary disclosure arising from audit activity (in this case, $20) should be counted as contributing towards audit targets.
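
A minimal sketch of the counting rule we favour, using the figures from the example above (the amounts are illustrative only):

    # Figures from the worked example in paragraph 6.8 (illustrative only).
    voluntary_disclosure = 100      # amount the taxpayer disclosed
    audited_amount = 120            # correct amount established by the audit

    # Current IRD policy counts the gross audited amount as the discrepancy.
    counted_under_current_policy = audited_amount                       # $120
    # In our view, only the adjustment arising from audit activity counts.
    counted_under_our_approach = audited_amount - voluntary_disclosure  # $20

    print(f"Counted now: ${counted_under_current_policy}; "
          f"proposed: ${counted_under_our_approach}")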

6.9
The IRD acknowledges that the definition of a discrepancy needs to be tightened, and is preparing guidelines to assist audit staff with more meaningful reporting of the value of discrepancies identified in the course of their work. In our view, the IRD’s reports to Parliament should distinguish between the different types of discrepancies identified by taxpayer audit to provide a more transparent view of the value of additional tax assessed.

Effect of Performance Targets on Case Selection

6.10
The IRD sets each service centre a target for the number and type of audits to be undertaken. Each team leader is also set a revenue target for the value of discrepancies identified. Standard times are assigned for each type of audit task.

6.11
It is entirely appropriate that these targets should drive the selection of audits to some extent, and in practice they do. However, we identified some examples where the targets were having unintended effects:

  • At the beginning of each year, investigators are given a set of tasks they must complete. Their focus tends to be on completing the tasks in the required numbers rather than on ensuring that the particular tasks they select take account of relative risk to the tax base.
  • Some investigators tend to select cases that will easily achieve the targets, without sufficient consideration of other important factors. For example, because the targets do not measure tax actually collected as a result of audits, an investigator may prefer a case with a likelihood of a large assessable but uncollectable discrepancy over a case where a smaller discrepancy, if identified, is ultimately likely to be collected.

6.12
We noted a tendency among some investigators to focus on the easier cases. A number of team leaders we interviewed felt there was a risk that complex cases, which could nevertheless present a greater long-term risk to the tax base, would not be investigated. There is also a consequential risk that investigators will not maintain or increase their capability to undertake the kind of work demanded by more complex cases.

6.13
A current project to improve audit risk identification and analysis involves the redesign of audit tasks undertaken by investigators. The project is building risk analysis and case planning tools to help investigators to focus on compliance risks rather than on specific audit tasks. The need for this project was identified at a workshop run by the Design and Monitoring Group in August 2000 and is being delivered in phases:

  • various Best Practice Standards were issued on 7 July 2003;
  • risk analysis and case planning tools will be finalised in December 2003; and
  • revamped Investigations Manuals will be available in June 2004.

Extent of Taxpayer Compliance

6.14
An ideal outcome is that every taxpayer pays the correct amount of tax due. However, complete compliance is unlikely to be achieved. Nor would it be possible to demonstrate that complete compliance had been achieved, because there is no internationally agreed methodology for measuring the size of a country’s cash economy (sometimes referred to as “the black economy”).

6.15
Measuring the extent of overall compliance is similarly difficult. The IRD is looking to its Industry Partnerships initiative and the further development of the Data Warehouse (see Part Five on pages 65-66) to support monitoring and reporting on the level of compliance achieved. These long-term developments are as important for assessing and reporting on the performance of other parts of the IRD as they are for taxpayer audit.

6.16
The IRD is investigating whether it is possible to create an econometric model to estimate the improvement in compliance attributable to Industry Partnerships. This work is at an early stage and there are challenges ahead – including establishing whether current data is sufficiently complete and reliable for the model to be applied successfully. A sketch of the general kind of model involved follows.
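
Purely for illustration, the sketch below shows the general shape of such a model: a regression of a compliance indicator on a participation flag and a control variable. The data is simulated and every variable name is an assumption of ours, not the IRD’s design:

    import numpy as np

    # Simulated illustration only - not the IRD's model or data.
    rng = np.random.default_rng(0)
    n = 500
    partnership = rng.integers(0, 2, n)   # 1 = industry is in the initiative
    firm_size = rng.normal(0, 1, n)       # standardised control variable
    compliance = (0.70 + 0.05 * partnership + 0.02 * firm_size
                  + rng.normal(0, 0.05, n))   # simulated compliance indicator

    # Ordinary least squares via numpy's least-squares solver.
    X = np.column_stack([np.ones(n), partnership, firm_size])
    beta, *_ = np.linalg.lstsq(X, compliance, rcond=None)
    print(f"Estimated compliance uplift from partnerships: {beta[1]:.3f}")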

Internal Reporting

6.17
Each month the IRD produces a Taxpayer Audit Output Class Report, showing performance against both external output targets and internal targets set at the start of each year and published in its Performance Management and Monitoring Document.

6.18
All reporting is of actual performance against targets. Performance reporting is analysed monthly at national, service centre, and (within service centres) team leader level, and in the Corporates Division. The executive summary of the Taxpayer Audit Output Class Report contains analysis of reasons for variances and indicates areas of focus that are necessary to meet targets.

6.19
The IRD has started work on establishing outcome measures, in order to bring its performance reporting into line with its strategic focus on improving taxpayer compliance, and to measure the extent to which it is meeting its key long-term priority of protecting the tax base.

6.20
The work will not be straightforward, because effective measurement will not be achievable solely through the kinds of output measures currently in use. The performance reporting model will have to cope with apparent contradictions for taxpayer auditors. For instance, in future, audits that detect discrepancies will continue to be seen as positive outcomes, as will those that detect none, where a clean result demonstrates that the compliance model is working.

Quality Measurement System

6.21
To provide a consistent and accurate measure of audit quality, and to lift the overall standard through monitoring and learning, the IRD introduced the Quality Measurement System (QMS), which was piloted from 1 October 2001 to 30 June 2002. Since 1 July 2002, the QMS has been used to measure and report on the quality of audits. Currently, about 150 completed cases are selected each month for a quality review.

6.22
The objective of the QMS was to enable review and promotion of improvements in a range of areas of audit work, including:

  • adequate case planning;
  • appropriate documentation of planning and the audit;
  • clear feedback from team leaders during the audit and at case closure; and
  • consistent application of practices relating to obtaining agreement to a discrepancy.

6.23
In October 2001, the Design and Monitoring Group informed Field Delivery of the following areas of concern:

  • Size and composition of the quality review panels – that panels should comprise no fewer than five members and be a mix of team leaders, experienced investigators, and Technical and Legal Support Group staff, with team leaders predominating.
  • A lack of universal acceptance of the process (and the option of reviews being undertaken between, rather than just within, service centres was suggested).
  • Inconsistent staff awareness of the new process – staff in one service centre did not have information that should have been made available through local management.
  • The level of quality achieved – in order to achieve quantitative targets, audit staff had over the years taken shortcuts that reduced the quality of their work. Work practices also needed to be reviewed.

6.24
In the results reported in February 2003 (covering the period January to October 2002), the IRD indicated some improvement in the areas of case planning, risk assessment, and cost-benefit analysis. However, when we examined reviews of QMS cases in service centres in September 2002, we found little evidence of best practice – there were many more failures than passes.

6.25
We also noted that many of the reviews were completed in less than the time allowed, and that in some service centres there were only isolated examples of comprehensive reviews. We formed the view that the limited time spent on some reviews made it unlikely that their potential learning value was being realised.

6.26
We found a better standard of practice in respect of the cases and reviews undertaken in the Corporates Division.

6.27
We identified two further problems with the implementation of the QMS:

  • Delays in providing feedback to investigators on reviews of their work – routinely three months after the work was performed – reduce opportunities for effective learning. We received a generally negative message from service centre staff about the value of the reviews.
  • Though membership of the review panels has changed since the 2001 workshops, no further training on how to perform reviews has been provided for new panel members.

6.28
A National Quality Committee – made up of team leaders, senior investigators (from Corporates Division), audit design representatives, and an audit area manager – first met in January 2003. Its role is to demonstrate management’s commitment to the QMS by ensuring a high level of national consistency in its application and by requiring area managers to take greater responsibility for addressing areas of concern raised through the QMS.

6.29
At this first meeting, the common issues raised included the unsuitability of the review questions for mass-marketed aggressive tax issue cases, the nonrandom method of selecting cases for review, and delays in making those selections. Case planning and the quality of case plans were also discussed. A review of the QMS is in progress, looking at initiatives for improvement, and a sketch of one possible response to the selection concerns follows.
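
For the selection concerns, one simple remedy would be to draw the monthly sample at random as soon as cases close. The sketch below assumes a hypothetical list of completed case identifiers; the monthly quota of about 150 comes from paragraph 6.21, but the function and case list are our own illustrations, not IRD systems:

    import random

    # Hypothetical sketch: draw the monthly QMS sample at random so that
    # selection is neither discretionary nor delayed.
    def select_for_review(completed_cases, quota=150, seed=None):
        rng = random.Random(seed)
        return rng.sample(completed_cases, min(quota, len(completed_cases)))

    cases = [f"case-{i:04d}" for i in range(1, 1201)]   # simulated case IDs
    print(select_for_review(cases, seed=42)[:5])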

6.30
The IRD’s Statement of Intent for 2002-03 includes quality measures related to the time taken to complete audits and the percentage of audits that meet internal quality assurance standards, which will rely on measurement through the QMS. It will be important to have evidence of substantial improvements in the application of the QMS – including adoption of the initiatives outlined above – in order to have confidence in the data used to report on these quality measures in future Annual Reports.
