6054 Testing automated controls
Sep-2022

Testing automated controls

CAS Guidance

Because of the inherent consistency of IT processing, it may not be necessary to increase the extent of testing of an automated control. An automated control can be expected to function consistently unless the IT application (including the tables, files, or other permanent data used by the IT application) is changed. Once the auditor determines that an automated control is functioning as intended (which could be done at the time the control is initially implemented or at some other date), the auditor may consider performing tests to determine that the control continues to function effectively. Such tests may include testing the general IT controls related to the IT application (CAS 330.A29).

Similarly, the auditor may perform tests of controls that address risks of material misstatement related to the integrity of the entity’s data, or the completeness and accuracy of the entity’s system-generated reports, or to address risks of material misstatement for which substantive procedures alone cannot provide sufficient appropriate audit evidence. These tests of controls may include tests of general IT controls that address the matters in paragraph 10(a). When this is the case, the auditor may not need to perform any further testing to obtain audit evidence about the matters in paragraph 10(a) (CAS 330.A30).

When the auditor determines that a general IT control is deficient, the auditor may consider the nature of the related risk(s) arising from the use of IT that were identified in accordance with CAS 315 to provide the basis for the design of the auditor’s additional procedures to address the assessed risk of material misstatement. Such procedures may address determining whether (CAS 330.A31):

  • The related risk(s) arising from IT has occurred. For example, if users have unauthorized access to an IT application (but cannot access or modify the system logs that track access), the auditor may inspect the system logs to obtain audit evidence that those users did not access the IT application during the period.

  • There are any alternate or redundant general IT controls, or any other controls, that address the related risk(s) arising from the use of IT. If so, the auditor may identify such controls (if not already identified) and therefore evaluate their design, determine that they have been implemented and perform tests of their operating effectiveness. For example, if a general IT control related to user access is deficient, the entity may have an alternate control whereby IT management reviews end user access reports on a timely basis. Circumstances when an application control may address a risk arising from the use of IT may include when the information that may be affected by the general IT control deficiency can be reconciled to external sources (e.g., a bank statement) or internal sources not affected by the general IT control deficiency (e.g., a separate IT application or data source).

In some circumstances, it may be necessary to obtain audit evidence supporting the effective operation of indirect controls (e.g., general IT controls). As explained in paragraphs A29 to A31, general IT controls may have been identified in accordance with CAS 315 because of their support of the operating effectiveness of automated controls or due to their support in maintaining the integrity of information used in the entity’s financial reporting, including system-generated reports. The requirement in paragraph 10(b) acknowledges that the auditor may have already tested certain indirect controls to address the matters in paragraph 10(a) (CAS 330.A32).

OAG Guidance

Applying a risk-based approach, develop an appropriate strategy for testing automated information processing controls, IT general controls (ITGCs) and manual controls, since there are a number of interdependencies among them. As discussed in OAG Audit 5035.2, the ability to rely on the proper and consistent operation of automated controls usually depends on the effective operation of related ITGCs; that section provides further information on the linkage of ITGCs to automated controls and how ITGCs contribute evidence to the audit. There are a number of factors to consider before we determine the nature, timing and extent of testing for automated information processing controls and ITGCs, as follows:

  • The quality and effectiveness of the IT control environment and ELCs over IT.

  • Knowledge gained from past audits and any significant known or anticipated changes to people, processes, applications, technologies, operations or business conditions that could impact our audit.

  • High-level controls executed by IT management in the normal course of business to monitor controls.

Example:

Our client has a quarterly monitoring control whereby each division controller obtains a listing of all users who have access to key financial applications in their division. The division controller reviews the listing to confirm that only authorized users have been provided access and that access rights are appropriate and consistent with the company’s restricted access and segregation of duties objectives. Exceptions or anomalies are not common, but they are investigated and corrected as soon as detected. Depending on the risks and complexity associated with the entity’s administration of application security and the importance of restricted access to Internal Control over Financial Reporting (ICFR), we may be able to test this quarterly monitoring control (which would include assessing the reliability of the user listings used in the control) and significantly limit or eliminate testing of the detailed controls in the process for adding, deleting and changing user access rights.

  • The risks associated with the automated information processing controls and the ITGCs. The risks associated with automated information processing controls include: (a) the inherent risk of material misstatement in the underlying accounts, and (b) risk that the automated information processing controls will fail to prevent or detect material misstatement in the accounts after considering the effectiveness of other controls, such as direct ELCs. Those same risks apply to the ITGCs upon which the automated information processing controls depend, along with the risk that the ITGCs will fail to support the ongoing effectiveness of the automated information processing controls.

  • Use of a benchmarking strategy for automated information processing controls.

  • Alternative sources of evidence that might be available to determine the continued operation of key automated information processing controls.

Nature and extent of testing automated information processing controls when ITGC evidence is obtained

OAG Guidance

Audit evidence obtained about the implementation of an automated control may provide some level of evidence regarding the operating effectiveness of the automated control. When this evidence is considered in combination with evidence regarding the operating effectiveness of the Information Technology General Controls (from the date of implementation of the control to the current audit period), it may also provide substantial evidence about the automated control’s operating effectiveness during the relevant period.

Consequently, for an automated control the number of items required to be tested is generally minimal, assuming we have previously tested the control. This is because, where we rely on automated controls or automated calculations, we normally test ITGCs to be satisfied that the automated controls or calculations continue to function properly. Where we do not have effective ITGCs, it may be more efficient to test automated controls using alternative techniques, as described in the block below, Automated controls approach with no ITGC evidence.

Regardless of whether we have effective ITGCs, we need to design the test prudently, checking that it addresses all critical iterations of the control. An iteration of a control occurs when an automated control is programmed to operate differently depending on the input. See the following examples of controls with multiple iterations:

Example 1:

An entity runs an online auction where it collects commissions based on the final auction price as follows:

Auction sales price        Commission
$0.01 to $10.00            $1.00
Greater than $10.00        10% of the final auction price

The commission is automatically calculated by the application; however, we test both scenarios to obtain evidence over the accuracy of the commission calculation.
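
For illustration only, the following is a minimal Python sketch of how a test covering both commission iterations could be documented. The expected_commission helper and the sampled figures are hypothetical assumptions; in practice, the system commissions would be taken from the client’s application output.

  # A minimal sketch (illustration only) of a test covering both
  # commission iterations. The expected_commission helper and the
  # sampled figures are hypothetical.

  def expected_commission(sale_price):
      # Reperform the fee schedule: flat $1.00 up to $10.00,
      # otherwise 10% of the final auction price.
      if 0.01 <= sale_price <= 10.00:
          return 1.00
      return round(sale_price * 0.10, 2)

  # One selection from each iteration of the control.
  samples = [
      {"sale_price": 7.50, "system_commission": 1.00},   # flat-fee tier
      {"sale_price": 42.00, "system_commission": 4.20},  # percentage tier
  ]

  for s in samples:
      assert expected_commission(s["sale_price"]) == s["system_commission"], s
  print("Both commission iterations agree with our reperformance.")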

Example 2:

Price differences between an invoice and purchase order that are in excess of the established tolerance limits are blocked from payment processing. Tolerance limits are 10% or $100. We design the test to address both types of tolerance limits.
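
A minimal sketch of the same idea for the tolerance control, assuming (hypothetically) that a breach of either limit blocks the invoice; the should_block helper and the sample amounts are illustrative only.

  # A minimal sketch covering both tolerance iterations, assuming
  # (hypothetically) that breaching either limit blocks the invoice.

  PCT_LIMIT = 0.10    # 10% of the purchase order amount
  ABS_LIMIT = 100.00  # $100 absolute difference

  def should_block(po_amount, invoice_amount):
      # Reperform the blocking rule for a given document pair.
      diff = abs(invoice_amount - po_amount)
      return diff > PCT_LIMIT * po_amount or diff > ABS_LIMIT

  # Iteration 1: breaches the 10% limit only ($60 difference, 12%).
  assert should_block(po_amount=500.00, invoice_amount=560.00)
  # Iteration 2: breaches the $100 limit only ($150 difference, 7.5%).
  assert should_block(po_amount=2000.00, invoice_amount=2150.00)
  # Within both limits: should pass to payment processing.
  assert not should_block(po_amount=1000.00, invoice_amount=1050.00)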

Depending on the nature of the control and its risks, evidence of the operation of the automated control may be obtainable through sufficient inquiry, observation, examination and/or reperformance procedures performed during a design evaluation of the related transaction process. It may be necessary to test more than one transaction to see how all important aspects of the control operate. Also, it may be more efficient, more effective, or both to test certain automated information processing controls (particularly more complex controls, calculations or reports) “through the application,” such as by directly examining a configuration setting in the application.

When deciding our testing approach for automated controls, we should consider whether:

  • we might obtain indirect assurance about the effectiveness of newly implemented controls through our tests of program change or program development controls (i.e., management’s process for testing of the operation of the control as part of their application implementation process);

  • there is opportunity to use dual purpose testing (e.g., substantive tests in the audit of financial statements that might also provide point‑in‑time evidence about the controls over completeness and accuracy of a key report used in a manual control); and

  • specialized skills are needed (e.g., to directly examine a configurable ERP setting or to evaluate the client’s approach to testing the control during application implementation).

The design of a test for an automated control considers the most relevant functionalities and attributes of the control, in order to assess the operating effectiveness of the automated control in all significant aspects, based on the information processing objectives that the control was designed to accomplish. The “test-of-one” concept is therefore not limited to testing a single transaction handled by the automated control. For example, consider the following:

  • the data fields the application is programmed to process,
  • the programmed calculation logic,
  • whether the application has tolerance levels for automated instructions,
  • the manner in which corrections are approved and processed,
  • the manner in which exceptions are dealt with, and
  • whether and how manual or systematic overrides are permitted.

Once we have gathered this information and performed enough work to evaluate the design of controls (including an understanding of the various control attributes and transaction paths), we then need to judge whether that work has also provided a sufficient level of assurance about whether the control is operating as designed. If not, additional procedures may need to be performed using one or more of the testing techniques described above. For example, it might be more efficient or effective to directly examine the configuration settings for the three‑way match in the application.

An example of where we might need to do more work to check that an automated control is operating as designed would be in testing the accuracy of an automated report used to generate a significant accounting estimate (e.g., an excess and obsolete inventory report supporting a reserve estimate). We could evaluate and possibly be satisfied with how the report fits into the design of controls by making inquiries of accounting and IT personnel regarding the nature of the data in the report, how it is accumulated, and how it is used in the accounting and estimation process. If the design of controls is such that the accuracy of the accounting estimate depends on the accuracy of the report and the accuracy of the report depends on proper programming in the application, then we would perform procedures designed to assess whether the report is complete and accurate according to its intended design.
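
As an illustration, the following minimal Python sketch reperforms a hypothetical excess and obsolete (E&O) report from source data. The field names, the 365‑day obsolescence rule and the sample records are assumptions for illustration, not the client’s actual design.

  # A minimal sketch of reperforming a system-generated excess and
  # obsolete (E&O) inventory report from source data. Field names,
  # the 365-day rule and the records are hypothetical.

  from datetime import date

  inventory = [  # drawn from the inventory master file
      {"sku": "A-100", "last_movement": date(2021, 3, 1), "cost": 5000.00},
      {"sku": "B-200", "last_movement": date(2022, 6, 15), "cost": 1200.00},
  ]
  report_date = date(2022, 9, 30)

  # Reperform the report logic: flag items with no movement in 365 days.
  reperformed = {
      item["sku"]: item["cost"]
      for item in inventory
      if (report_date - item["last_movement"]).days > 365
  }

  system_report = {"A-100": 5000.00}  # as produced by the application
  assert reperformed == system_report, "Differences require investigation"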

Complex Automated Controls

For some complex automated controls, it might be impractical or even impossible to evaluate the design or the operating effectiveness of controls through typical “manual” procedures alone, such as understanding the process by observing the flow of a transaction. An example might be an application that processes medical insurance claims. The automated process might use input data, edit checks, and attribute tests to direct transactions down alternative processing paths that are then subject to a variety of complex calculations to determine the actual claim value. The application might consider factors such as:

  • Is the claim within the effective policy period?
  • Is the medical service covered by the individual’s plan?
  • Is the service provided in or out of network?
  • Has the deductible been met?
  • Is there a co‑payment required?
  • At what location was the service performed?

Unlike a simple clerical process that can be easily re‑performed (such as the processing of sales invoice amounts as price times quantity), the variations and complexity of some processes (like this medical claims example) could require more sophisticated techniques for understanding and evaluating the transaction flow, the risks of misstatement, and the design and operation of controls.

Application-level security and access conditions

OAG Guidance

When evaluating application-level security and access conditions (that is, determining “who has access to what”):

  • Refrain from using sophisticated segregation of duties analysis tools prior to performing a risk-based evaluation of the specific security and access conditions that are important to the audit. Avoid generating and analyzing volumes of data on existing access rights and potential “conflicts.” Rather, first understand the risk of material misstatement of the financial statements associated with unauthorized or sensitive access rights (whether through fraud or inadvertent error), which is determined as part of our procedures to understand a process by observing the flow of a transaction or other procedures to understand what could go wrong and to identify related controls, and then focus our efforts specifically on those risks.

  • Use the knowledge and information gained from past audits and from the work of the internal auditors to limit the amount of testing of security conditions in the current period, particularly when the ITGCs are effective and the environment is stable and not subject to significant change.

  • Consider likelihood, not just magnitude, when evaluating the risks associated with privileged access (e.g., super-user / DBA access, or access to application utilities that could change financial data or records). Do not default to a fully substantive audit approach in the presence of IT deficiencies or exceptions without first considering the impact of these deficiencies on our audit approach (see OAG Audit 4028.4 for additional guidance on the impact of control deficiencies on our audit approach). The existence of privileged access is not a “de facto” deficiency in internal control. Reasonable professional judgment is needed to evaluate the risks associated with privileged access (both fraud risks and the risks of inadvertent error or data corruption), and the nature and extent of controls needed to mitigate those risks. See OAG Audit 5035.2.

Considerations for recurring audits

OAG Guidance

In recurring audits we incorporate knowledge obtained during past audits into the decision-making process for determining the nature, timing and extent of testing necessary. It is important to consider whether there have been changes in a control or in the process in which it operates. A benchmarking strategy can be an efficient way to obtain evidence about the continued operating effectiveness of automated information processing controls. That is, if ITGCs are effective and continue to be tested, and we can verify that an automated information processing control has not changed since it was last “baseline” tested, we can conclude that the information processing control continues to be effective. After a period of time, the baseline of the operation of an automated control would need to be re-established. The length of time is a matter of professional judgment, considering factors such as:

  • the effectiveness of the controls in the IT control environment, including controls over application and system software acquisition and maintenance, access controls and computer operations;

  • the auditor’s understanding of the nature of changes, if any, to the specific programs that contain the controls;

  • the nature and timing of other related tests;

  • the consequences of errors associated with the information processing control that was benchmarked; and

  • whether the control is sensitive to other business factors that may have changed. For example, an automated control may have been designed with the assumption that only positive amounts will exist in a file. Such a control would no longer be effective if negative amounts (credits) begin to be posted to the account, as illustrated in the sketch below.
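
A minimal sketch of revalidating such a baseline assumption, using hypothetical posted amounts:

  # A minimal sketch of revalidating a baseline assumption that only
  # positive amounts exist in the file; the amounts are hypothetical.

  posted_amounts = [125.00, 310.50, -42.75, 88.00]  # the period's postings

  credits = [a for a in posted_amounts if a < 0]
  if credits:
      # The assumption no longer holds; the benchmarked conclusion
      # cannot simply be carried forward.
      print(f"{len(credits)} negative amount(s) found; re-establish the baseline.")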

Automated controls testing techniques with ITGC evidence

OAG Guidance

There are a variety of techniques for testing whether an automated control operates effectively. They include:

  1. Obtaining evidence that the automated control is operating effectively through sufficient inquiry, observation, examination, and/or reperformance procedures during our understanding of a process by observing the flow of the related transactions.

  2. Running sample transactions (“test deck” or “integrated test facility”) through the application program or routine and comparing the output to expectations.

  3. Replicating the output by running our own independent queries or programs on the actual source data.

  4. Evaluating the logic of the application program / routine by

    • inspecting application system configurations,
    • inspecting vendor system documentation,
    • interviewing program developers (note that inquiry is not enough by itself),
    • inspecting the source code.

  5. Testing the logic indirectly by performing substantive testing of the output to source documents or by reconciling it to independent, reliable sources (e.g., testing the accuracy of an aging report by comparing to circularization results or by tracing back to sales invoices).
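
To illustrate technique 3 (and the aging example mentioned in technique 5), the following minimal Python sketch replicates a system‑generated accounts receivable aging by running our own logic over the actual invoice data and comparing it to the application’s report. The bucket boundaries, field names and figures are hypothetical.

  # A minimal sketch of technique 3: replicating a system-generated
  # accounts receivable aging with our own logic over the actual
  # invoice data. Buckets, field names and figures are hypothetical.

  from datetime import date

  invoices = [
      {"number": "INV-001", "date": date(2022, 9, 10), "balance": 800.00},
      {"number": "INV-002", "date": date(2022, 7, 2), "balance": 450.00},
      {"number": "INV-003", "date": date(2022, 4, 20), "balance": 300.00},
  ]
  as_of = date(2022, 9, 30)

  def bucket(days_outstanding):
      # Assign an invoice to the aging bucket used by the report.
      if days_outstanding <= 30:
          return "0-30"
      if days_outstanding <= 90:
          return "31-90"
      return "over 90"

  replicated = {}
  for inv in invoices:
      b = bucket((as_of - inv["date"]).days)
      replicated[b] = replicated.get(b, 0.0) + inv["balance"]

  system_aging = {"0-30": 800.00, "31-90": 450.00, "over 90": 300.00}
  assert replicated == system_aging, f"Differences to investigate: {replicated}"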

When testing the operating effectiveness of an automated control, consider the risks that: 1) the logic or rules programmed or configured into the application do not accurately achieve the desired outcome, and/or 2) the application draws upon inaccurate or unreliable data, through either its sources or its timing of execution. For example, if we desire evidence that a systematic inventory costing calculation is properly valuing inventory in accordance with the entity’s stated costing convention (e.g., FIFO), our testing approach needs to provide evidence not only that the calculation is functioning as intended, but also that it is drawing from appropriately updated price and quantity data files at the right time in the processing cycle.
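
The following minimal sketch illustrates reperforming a FIFO ending‑inventory valuation from purchase‑layer data. The layers, the on‑hand quantity and the system’s figure are hypothetical, and the sketch only addresses the calculation logic; the data files the calculation draws on would also need to be shown to be complete, accurate and current.

  # A minimal sketch of reperforming a FIFO ending-inventory valuation
  # from purchase-layer data; all figures are hypothetical.

  purchase_layers = [  # oldest first, as FIFO consumes them
      {"qty": 100, "unit_cost": 10.00},
      {"qty": 150, "unit_cost": 11.00},
      {"qty": 200, "unit_cost": 12.50},
  ]
  on_hand = 275  # units remaining at period end

  def fifo_value(layers, remaining):
      # Under FIFO, ending inventory is valued from the newest layers.
      value = 0.0
      for layer in reversed(layers):  # newest purchases first
          used = min(remaining, layer["qty"])
          value += used * layer["unit_cost"]
          remaining -= used
          if remaining == 0:
              break
      return value

  system_valuation = 3325.00  # as computed by the application
  assert fifo_value(purchase_layers, on_hand) == system_valuation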

Automated controls approach with no ITGC evidence

As discussed in OAG Audit 5035.2, the ability to rely on the proper and consistent operation of automated controls usually depends on the effective operation of related ITGCs. While effective ITGCs are often the primary basis for evaluating the continued operation of automated controls, other types of evidence might be available that, when considered alone or in combination with limited ITGC testing, could provide a more effective and efficient audit approach for one or more information processing controls. For example, we might consider

  • obtaining evidence that the automated information processing control has not been changed since the last time it was tested. Such evidence might come from application change information such as the “last modified date” (if such information is obtainable and reliable);

  • running CAAT procedures that efficiently reperform a high frequency of the automated information processing control’s operation or that perhaps compare program code at frequent intervals during the audit period;

  • assessing inherent and control risk factors, such as a low volume and complexity of program and data changes (e.g., an off‑the‑shelf application with no customization or no source code maintained at the client); and

  • testing the automated control more frequently throughout the audit period via testing methods 1–5 noted in the block above Automated controls testing techniques with ITGC evidence.

While any of the testing techniques noted above can be applied throughout the audit period, the most practical and efficient method is often to obtain the last modified date associated with the automated control to determine that it has not changed (each variation of the control would still need to be tested once within the audit period). If this information is not obtainable from the application, we consider one of the other testing techniques.
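
A minimal sketch of one way to evidence “no change” in a program object between the baseline test and a later date. The file path is hypothetical, and the approach is only persuasive where ITGC or other evidence supports the reliability of the program object and its metadata.

  # A minimal sketch of evidencing "no change" in a program object
  # between the baseline test and a later date; the path is
  # hypothetical.

  import hashlib
  from pathlib import Path

  def fingerprint(path):
      # Hash the program object so any modification is detectable.
      return hashlib.sha256(Path(path).read_bytes()).hexdigest()

  baseline_hash = fingerprint("controls/three_way_match.cfg")  # at baseline
  # ... later in the audit period ...
  current_hash = fingerprint("controls/three_way_match.cfg")

  if baseline_hash == current_hash:
      print("Program object unchanged since the baseline test.")
  else:
      print("Change detected; retest the automated control.")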

As discussed above, when testing automated controls we need to consider all iterations of the control and design the test accordingly. This also applies to the method of testing automated controls when ITGCs are not effective. Check that the population used to make testing selections includes each possible iteration of the control. See the block above Nature and extent of testing automated information processing controls when ITGC evidence is obtained for additional information on circumstances when ITGCs are effective.

Automated controls and suggested tests of control examples

OAG Guidance

Examples of automated controls and suggested tests of control:

Control example: A new in‑house developed payroll system calculates individual tax on employees’ salary and wages.
Test of control: Use CAATs to recalculate payroll tax using the tax rates required by local law. This is known as the Simulator technique (see the sketch after the table).

Control example: The purchasing module of a moderately customized ERP system generates electronic approval requisitions to the appropriate personnel based on preset approval limits.
Test of control: Observe entity personnel input at least one purchase order, for each approval limit, into the system, and examine each instance of the system‑generated approval requests to the designated personnel. This is known as the Data Tracing technique.

Control example: A recently modified in‑house developed accounting system generates accounts receivable aging reports automatically based on invoice dates and outstanding balances.
Test of control: Apply all relevant scenarios using fictitious data (i.e., various invoice dates in the different preset ranges of days outstanding) to a replica of the system in a testing (not production) environment. Compare the results with the expected values, which may be recalculated by us in parallel. This is known as the Integrated Testing Facility technique. Note: before starting to input our fictitious data, verify that the independent environment reproduces, in all relevant aspects, the production environment.

Control example: The three‑way match feature (i.e., matching data on the vendor invoice, receiving document and purchase order prior to cash disbursement) is used in a popular standard ERP system. Management has no capability to modify the ERP system.
Test of control: Review the three‑way match system parameters of the ERP system and select one transaction to understand the control through observation or inspection. This is known as the System Configurations Review technique.

Control example: Management uses a database query to pull inventory acquisition dates, quantities and unit cost data that are readily available from the standard unmodified ERP system to prepare a monthly inventory obsolescence reserve analysis for senior management review.
Test of control: Review the query’s underlying programming code (*) for the appropriateness of the logic of the inventory obsolescence report generation process. Additionally, observe a rerun of the query to compare the report to the one that management generated. This is known as the System Configurations (Source Code) Review technique.

(*) Assumption: we have access to competent specialists who understand the database language used to write the query. In addition, proper ITGC audit procedures have been performed to obtain evidence over data integrity and reliability.
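
To illustrate the Simulator technique from the first example above, the following minimal Python sketch independently recalculates payroll tax and compares it to the system’s figures. The bracket table and the payroll records are hypothetical assumptions; actual rates would come from local law.

  # A minimal sketch of the Simulator technique for the payroll
  # example; the bracket table and payroll records are hypothetical.

  TAX_BRACKETS = [  # (upper limit of bracket, marginal rate)
      (50000.00, 0.15),
      (100000.00, 0.25),
      (float("inf"), 0.33),
  ]

  def simulated_tax(gross):
      # Apply the statutory marginal rates to gross salary and wages.
      tax, lower = 0.0, 0.0
      for upper, rate in TAX_BRACKETS:
          if gross > lower:
              tax += (min(gross, upper) - lower) * rate
          lower = upper
      return round(tax, 2)

  payroll_extract = [  # from the payroll master file
      {"employee": "E-01", "gross": 48000.00, "system_tax": 7200.00},
      {"employee": "E-02", "gross": 120000.00, "system_tax": 26600.00},
  ]

  for rec in payroll_extract:
      assert simulated_tax(rec["gross"]) == rec["system_tax"], rec["employee"]
  print("Simulator recalculation agrees with the payroll system.")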