7063 Performing test counts at annual physical inventory counting
Jun-2019

Overview

This topic explains:

  • How to ascertain the existence of inventory through inspection
  • The process for performing test counts, including how to determine the number of test counts
  • How to evaluate count differences identified during a physical inventory observation
  • Documenting test counts
  • Tag control
  • Cut-off testing

Inspect the inventory and perform test counts

CAS Requirement

If inventory is material to the financial statements, the auditor shall obtain sufficient appropriate audit evidence regarding the existence and condition of inventory by:

(a) Attendance at physical inventory counting, unless impracticable, to:

(iii). Inspect the inventory (CAS 501.4(a)(iii))

(iv). Perform test counts (CAS 501.4(a)(iv))

CAS Guidance

Inspecting inventory when attending physical inventory counting assists the auditor in ascertaining the existence of the inventory (though not necessarily its ownership), and in identifying, for example, obsolete, damaged or aging inventory (CAS 501.A6).

Performing test counts, for example by tracing items selected from management’s count records to the physical inventory and tracing items selected from the physical inventory to management’s count records, provides audit evidence about the completeness and the accuracy of those records (CAS 501.A7).

In addition to recording the auditor’s test counts, obtaining copies of management’s completed physical inventory count records assists the auditor in performing subsequent audit procedures to determine whether the entity’s final inventory records accurately reflect actual inventory count results (CAS 501.A8).

OAG Guidance

The accuracy with which the entity determines inventory quantities for individual items is based on counting, weighing, measuring or estimating. Select some of the entity’s recorded counts at each location visited and recount the inventory (“tag to floor”). Testing from tag to floor provides us with audit evidence supporting the entity’s assertions of existence of the inventory and accuracy of the counts. Additionally, select some inventory items at each location visited and independently count and compare our counts with quantities recorded by the entity (“floor to tag”). Testing from floor to tag provides evidence that the counted inventory is included in the entity’s records and allows us to conclude whether the detailed listing of inventory is complete, to assess the accuracy of management’s counts and to evaluate the condition of the inventory.
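The two tracing directions described above can be sketched as simple record comparisons. The data structures, field names and function names below are hypothetical illustrations, not part of any audit system:

```python
# Hypothetical sketch of the two tracing directions ("tag to floor" and
# "floor to tag"). All names and layouts here are illustrative assumptions.

def tag_to_floor(selected_tags, floor_counts):
    """Trace selected entries from management's count records to the floor.

    Returns tag IDs where our recount differs from the recorded quantity,
    supporting the existence and accuracy assertions."""
    return [t["tag_id"] for t in selected_tags
            if floor_counts.get(t["tag_id"]) != t["quantity"]]

def floor_to_tag(selected_items, count_records):
    """Trace items we selected on the floor back to management's records.

    Returns item IDs missing from (or misstated in) the records,
    supporting the completeness assertion."""
    return [item_id for item_id, qty in selected_items.items()
            if count_records.get(item_id) != qty]

# Example: our recount of tag T-102 differs from the recorded quantity.
tags = [{"tag_id": "T-101", "quantity": 50}, {"tag_id": "T-102", "quantity": 30}]
floor = {"T-101": 50, "T-102": 28}
print(tag_to_floor(tags, floor))    # ['T-102']
```

In practice the selections are made on site from physical tags and floor items; the sketch only illustrates the direction of each trace and the assertion it supports.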

The audit evidence obtained from observing physical inventories comes not just from our test counts, but also from our other audit procedures performed at the time of the physical count. For example, our observation of the organization of the inventory warehouse and the condition of the inventory factor into our conclusion about the effectiveness of the entity’s inventory procedures. Our tests of the entity’s tag or count sheet control procedures, if effective, provide evidence as to the existence and completeness of the inventory counted. Similarly, our evaluation of the quality of management’s instructions, the competence of the counters, the supervision of counters, and the entity’s test count procedures provide additional evidence as to the existence of inventory and the completeness and accuracy of the count process.

We perform test counts throughout the course of our observation. When executing our test counts, we may either accompany the counters during their counts or obtain management’s counts and then perform our own independent counts.

When recording our test counts (including recounts), we reperform management’s counts to obtain evidence that the counters identified the appropriate inventory to count and accurately recorded quantities, SKUs/locations and stage of completion, where applicable.

During the observation, we may also make a selection of inventory that the entity or we have identified as obsolete, excess, damaged or aged to be used in our testing of the reserves for such inventory in connection with our other audit procedures performed after the observation. We also make note of such inventory we see throughout the facility in order to test the completeness of the information and data used in the entity’s determination and calculation of valuation provisions. We make inquiries of inventory supervisors about the recoverability of the obsolete, excess, damaged or aged inventory during the observation. We consider the results of these procedures in our assessment of whether the inventory is recoverable in our post-observation procedures.

Determine the number of test counts

OAG Guidance

Apply professional judgment in determining how many test counts to perform in total and with regard to specific areas of the inventory. Generally, plan to conduct between 60 and 120 test counts at each location where we observe the physical inventory counting, assuming a population of greater than 200 counts performed by management:

  • These counts cover both “tag to floor” and “floor to tag,” which would ordinarily be split evenly across both, e.g., 45 “tag to floor” and 45 “floor to tag” for a total of 90 test counts.

  • Generally, apply these counts separately with regard to each location where we observe a physical inventory and not across locations on a combined basis (i.e., determine the number of items to count at each location individually).

  • In certain cases, we may decide that fewer than 60 test counts are appropriate, e.g., where the total number of different inventory items is small (e.g., fewer than 200 inventory items), or where targeted test counting of a limited number of higher value items provides sufficient coverage with regard to the total inventory at the location.

  • We may also decide that performing more than 120 test counts is appropriate.
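The suggested range and even split above can be sketched as a simple planning helper. The function name, interface and small-population handling below are illustrative assumptions; only the 60 to 120 range, the even split and the 200-count threshold come from the guidance above:

```python
# Hypothetical planning helper reflecting the suggested 60-120 range and the
# even "tag to floor" / "floor to tag" split. The interface and the
# small-population cap are illustrative assumptions, not prescribed rules.

def plan_test_counts(management_counts, planned_total=90):
    """Return a planned split of test counts for one location.

    management_counts: number of counts management performs at the location.
    planned_total: judgmental total within the 60-120 range (default 90).
    """
    if management_counts <= 200:
        # Small population: fewer than 60 counts may be appropriate; here we
        # simply cap at the population size as one possible illustration.
        planned_total = min(planned_total, management_counts)
    half = planned_total // 2
    return {"tag_to_floor": half, "floor_to_tag": planned_total - half}

print(plan_test_counts(5000))   # {'tag_to_floor': 45, 'floor_to_tag': 45}
print(plan_test_counts(80))     # {'tag_to_floor': 40, 'floor_to_tag': 40}
```

The actual total remains a matter of professional judgment; the helper only makes the arithmetic of the even split explicit.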

In determining the appropriate number of test counts, as well as the appropriate distribution of such test counts among different categories of inventory and count teams, consider the level of evidence desired from our test counts. As we desire more evidence from test counts, move higher in, or above, the suggested range. Factors to consider, individually and collectively, when evaluating necessary evidence and testing levels include:

  • our assessment of the entity’s procedures and controls (including, where appropriate, our validation of controls), objectivity and competence of count teams, and supervision based on our knowledge and past experience;

  • monetary significance of the inventory at the location being observed relative to the entity’s total inventory and overall materiality for the entity;

  • whether inventory quantities for accounting purposes are determined solely on the basis of the physical count or, alternatively, are based primarily on the effectiveness of inventory controls and processes (e.g., cycle counts) as indicated by a history of minimal book to physical adjustments;

  • number of inventory line items or different types of inventory with different characteristics (e.g., a large number of inventory line items might suggest the need for a higher number of test counts);

  • number of count teams (we expect to perform test counts covering all count teams);

  • susceptibility of the inventory to theft;

  • complexity in determining quantities by count/weighing/measurement or in determining stage of completion; and

  • past experience (positive or negative) with the conduct of physical inventories for the location.

An annual physical inventory count may occur across multiple days at one location if there is a large volume of inventory to count. In this situation, it may be appropriate to spread our sample of test counts across all days during which management counts (we apply judgment in determining how many days we will attend). However, we consider whether we need to increase the number of test counts above the minimum level to address the increased risk resulting from the multiple days of counts.

Evaluate all differences resulting from our test counts for significance and document an explanation for each difference. As explained further in the ‘Evaluating count differences during a physical inventory observation’ block below, issues that arise during the course of the physical inventory may cause us to expand the number of test counts from the number originally planned.

It is not appropriate for us to predefine a tolerable difference between the entity’s counts and our test counts, including in situations where inventory is weighed or measured. We evaluate all differences between our counts and the entity’s counts for significance and to determine if the difference represents an error. For example, when the volume of inventory is measured by a meter and the measurement of an item may fluctuate during the day based on the temperature, we record the difference between our measurement and the entity’s measurement. We then evaluate all differences between our measurements and the entity’s measurements individually and in the aggregate and determine if the inconsequential differences are explainable by the nature of the inventory (e.g., measurements may differ during the course of the day by an inconsequential amount due to temperature changes). In this case, we may conclude that additional test counts are not necessary and that we will not require the entity to recount and adjust the items where we identified these differences because the differences are not a result of inaccurate counting by the entity.

When selecting items for our test counts, we may focus some of our counts on higher value items in the inventory population (which when counted will give us greater coverage of the inventory balance) and on items with specific risk characteristics (assuming it is practical for us to identify such items during our planning for the inventory observation). Conversely, we generally avoid performing test counts on trivial items. However, we might end up counting low value items when the inventory population is made up of a large number of low value items.

Extent of tests of inventories in different sites / locations

OAG Guidance

Apply professional judgment when attempting to combine multiple physical inventory count locations as one count. Several conditions would ordinarily be in place if our test counts are going to be applied across different sites / locations. These conditions are:

  • The entity’s count teams are using the same guidance and are counting under the same conditions.

  • The entity’s systems and controls in place are the same.

  • The entity’s control environment and the results of controls testing are the same for the sites/locations.

  • The entity has assigned common count teams and has in place common supervision and review of count teams in the sites/locations.

  • The nature of the inventory in each site / location is similar.

While the individual counters may not be exactly the same across locations, we may reach the conclusion that we have met this condition if all counters have similar levels of experience and qualifications and are supervised during the execution of their counting procedures by the same individual(s).

The farther the distance between locations, the less likely they would be considered a single count location, because it is increasingly difficult to demonstrate all of the conditions above have been met (particularly the requirement for common supervision and review of count teams).

If all the above conditions are not met, treat each location as a separate population with appropriate test count quantities applied. In circumstances where multiple locations are considered one population with test selection quantities spread among locations (when all the above conditions are met), consider if we need to increase the number of test counts above the minimum level due to the higher inherent risk resulting from the multiple sites/locations. The increased number will be a matter of professional judgment and depend on the engagement circumstances.

Audit sampling

OAG Guidance

CAS 501 requires us to observe the performance of management’s count procedures, unless impracticable. Therefore, only in limited circumstances may it be necessary to perform counts ourselves. For example, we may consider it necessary to obtain additional evidence over the existence of inventory beyond procedures already performed. We evaluate the facts and circumstances, including the level of any evidence already obtained and the inventory count procedures performed by management. We do not simply default to making test counts ourselves.

In situations where we determine it is necessary to make counts ourselves, we may use sampling methodologies (e.g., targeted testing, non-statistical sampling) to determine the extent of counts to perform.

Some examples of situations where we may consider it necessary to make counts to obtain evidence over the existence of inventory and use audit sampling, include the following:

  • Significant count differences are expected (i.e., based on prior period experience), it is impractical or infeasible for the entity to recount, and we want to be able to formally project and quantify our test results.

  • When we have tested the entity’s count procedures through observation as of an interim date and choose to extend this evidence through period-end via independent counts. In this situation, we may make additional counts of inventory at period-end (i.e., in lieu of performing substantive analytics or other tests of details in the intervening period).

  • When we have identified a control deficiency in management’s count procedures during our observation of management’s count at a count location, and as a result, additional evidence over the existence of inventory at that count location is desired.

In these limited circumstances, we are not testing the count teams' procedures, but are actually testing inventory quantities directly. As such, we are obtaining substantive evidence over the existence of inventory only and we still need to obtain sufficient appropriate evidence for other relevant assertions, as appropriate.

Evaluating count differences during a physical inventory observation

OAG Guidance

Differences may occur between our test counts and the quantities counted and recorded by the entity during a physical inventory count (or cycle count). Determining the impact of differences in observed counts is a matter of professional judgment and needs to take into account the nature and underlying cause of each difference. The types of counting errors that may be encountered need to be discussed during the inventory count planning meeting involving the staff performing the observation, as well as more experienced engagement team members. During the physical inventory observation, issues need to be discussed with designated experienced engagement team members and resolved and documented on a timely basis, as applicable.

If our test counts disclose an unacceptable number of differences in a particular location, we generally require the entity to recount the inventory in that area for that location. We, in turn, perform additional test counts of the recounted area. Where differences appear to be more widespread and systemic, e.g., an error in the instructions being followed by all counters in all areas, the entity’s recounts and our subsequent test counts may be increased substantially.

Count issues/differences can encompass several areas, including

  • differences in counting, weighing, measuring or estimating which are determined by the engagement team;

  • failure to follow instructions;

  • errors in counting, weighing, or measuring;

  • inadequate identification and description of the inventory counted;

  • inadequate methods to determine that no items are omitted or duplicated;

  • poor supervision of counters and the physical inventory process overall;

  • inadequate response when issues are identified, e.g., poor recount procedures when differences are identified by supervisors or us;

  • inadequate cut-off procedures; and

  • problems with control of count documents, and how individual areas or departments are controlled and cleared and released back into production or movement of the inventory.

When issues or testing differences are identified in the course of our inventory observation:

  • Understand, evaluate and document the nature and cause of each issue or difference. Factors to consider include:

    • Is a test count difference related to a particular count team or a particular area within the location?

    • Is a difference likely to be more fundamental and widespread (systemic), e.g., an error in the instructions being followed by all counters in all areas?

    • Is there any indication that the count differences or other issues may be evidence of fraud?

  • Evaluate the potential magnitude of the issue relative to total inventory at the location. We need to consider the nature and extent of count differences and whether such differences could result in a material misstatement. Use judgment in evaluating the differences and consider the result as a whole. Items to consider include

    • relatively few minor count differences when counting large numbers of small value items would ordinarily not be considered significant;

    • individually large and/or frequent count differences would normally be considered significant; and

    • any trends in count differences (e.g., our counts consistently exceeding the entity’s counts).

  • Determine the appropriate course of action:

    • Document the cause and disposition of the differences between the entity’s counts and our test counts.

    • Based on earlier team discussions in planning the physical inventory observation, determine whether the issue requires discussion with the team manager or engagement leader, whether during the course of the physical inventory observation (so that issues can be resolved during the observation) or at a later stage.

    • Obtain additional evidence, as needed. The amount of additional audit evidence derived from additional test counts will be based on judgment, but will depend heavily on the nature and extent of the issue or count differences identified. For example, it is not generally necessary to obtain additional evidence where there are a relatively few minor count differences. As the number of count differences or significance of count differences increases, the greater the likelihood that we perform additional test counts.

    • If our test counts result in large and/or frequent count differences in a particular area or location or are confined to a count team in a particular area, request that the entity recount the area or location and perform additional test counts of the recounted area or location. Do not perform additional test counts in other areas where no differences have been identified through testing.

  • Inform relevant entity personnel on site so that they can immediately review / correct the situation:

    • Reconcile count differences with entity personnel so that the final inventory is adjusted to the actual count prior to the completion of the physical inventory counting.

    • Evaluate the adequacy of the entity’s planned remediation. For example, where our test counts identify differences isolated to one area and/or to one count team, have the entity recount the items counted by the ineffective count team or within the area of the entity’s facility where multiple differences were noted.

    • Perform additional test counts to evaluate the effectiveness of the entity’s remediation efforts.
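The evaluation steps above can be sketched as a triage of documented differences by area. The thresholds (a maximum number of minor differences and a monetary value threshold) are hypothetical judgment parameters, not prescribed values:

```python
from collections import defaultdict

# Hypothetical triage of test-count differences by area. Flagged areas are
# those with individually large differences or more than a handful of minor
# ones; both thresholds are illustrative judgment parameters.

def areas_to_recount(differences, value_threshold=1000, max_minor_diffs=2):
    """differences: list of dicts with 'area' and 'value' (monetary impact).

    Returns the areas where we would ask the entity to recount and then
    perform additional test counts of the recounted area."""
    by_area = defaultdict(list)
    for d in differences:
        by_area[d["area"]].append(d["value"])
    flagged = set()
    for area, values in by_area.items():
        if len(values) > max_minor_diffs or any(v >= value_threshold for v in values):
            flagged.add(area)
    return sorted(flagged)

diffs = [
    {"area": "A", "value": 40}, {"area": "A", "value": 25},       # few, minor
    {"area": "B", "value": 5000},                                 # individually large
    {"area": "C", "value": 10}, {"area": "C", "value": 15},
    {"area": "C", "value": 8},                                    # frequent
]
print(areas_to_recount(diffs))   # ['B', 'C']
```

The sketch mirrors the guidance that recounts are requested for the affected area or count team, not for areas where testing identified no differences.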

Document test counts

OAG Guidance

Document all test counts conducted during the physical inventory counting, including the “tag to floor” and “floor to tag” counts determined during planning, any additional counts we might make while touring the inventory site, and any recounts performed. Document in our workpapers the resolution of differences resulting from such test counts, including any remedial actions taken by the entity and any subsequent testing by us.

Tag control

OAG Guidance

The objective of using inventory tags or count sheets (collectively referred to as "Tag Control") is to determine that all items are counted and to secure the counts against alteration. Entities may use tags, count sheets or an alternative method (e.g., bar codes with scanning equipment) to perform and record counts to achieve this objective. Effective controls over Tag Control typically include:

  • Using pre-numbered tags or count sheets;

  • Accounting for all such tags/count sheets (including used, unused and voided) at the end of the count; and

  • Segregating these duties from those of counting and inputting completed inventory tags or count sheets to inventory records.

These controls limit the opportunity to misappropriate inventory or conceal intentional misstatements of inventory in the financial statements. Additionally, when tags are used to record counts, unused or voided tags are identified separately from those tags that are the record of the counts, by tracking used, unused and voided tags.

When tags are used, we test Tag Control by accounting for all used, unused and voided tags. If this is not practical due to the number of tags, we may use accept-reject testing. We select individual physical tags from the floor to trace into the tag control log for proper inclusion and classification (e.g., used, unused or voided). In a separate test, we select tag numbers from the tag control log and inspect the physical tags for proper classification. Use the appropriate tests of details form (e.g., targeted testing, accept-reject) to document these tests.
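Accounting for all used, unused and voided tags amounts to checking that every number in the issued sequence appears in the control log with a valid classification. A minimal sketch, assuming a hypothetical log layout:

```python
# Hypothetical sketch of accounting for pre-numbered tags: every number in
# the issued sequence must appear in the tag control log as used, unused or
# voided. The log format is an illustrative assumption.

def unaccounted_tags(first, last, control_log):
    """control_log maps tag number -> status ('used', 'unused', 'voided').

    Returns tag numbers in the issued range that are missing from the log
    or carry an invalid classification."""
    valid = {"used", "unused", "voided"}
    return [n for n in range(first, last + 1)
            if control_log.get(n) not in valid]

log = {1: "used", 2: "used", 3: "voided", 5: "unused"}   # tag 4 is missing
print(unaccounted_tags(1, 5, log))    # [4]
```

When the number of tags makes a full reconciliation impractical, the two-direction selections described above (floor tags to log, and log entries to physical tags) are applied instead.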

When count sheets are used, we test the procedures for determining that all count sheets are completed and returned after the count, in a manner similar to tags (i.e., we test a number of count sheets for proper classification, as we would for tags, as explained above).

We generally maintain a copy of the tag or count sheet control listing and may retain copies of some or all inventory tags or count sheets or inventory records. If we do not retain copies of the tags, we document our sample of tags for testing Tag Control, and based on inspection of the physical tags, whether they were used, unused or voided. Our copies of the inventory tags or count sheets from the time of the physical inventory counting can help us determine if items were changed or unused lines were filled in after the final inventory listing was completed.

Entities may also use bar code scanning technology to take and record counts, and thus do not use tags or count sheets. However, the concepts of Tag Control are still relevant when scanners are used. We test the procedures for determining that all counts are included in the final inventory report. At the end of the inventory count, we ask for an updated inventory report that shows the quantity (and monetary value, if available) of all inventory as of the end of the inventory count. We use this report after the observation to trace counts to the final inventory listing to determine if there were any additions or adjustments to the inventory after the count.

Similarly, where a third-party count service is used that employs scanning technology to record counts and accumulate results, we perform procedures to determine that all counts are included in the final inventory report.

Cut-off testing

OAG Guidance

To develop our plan to test cut-off as of the inventory count date(s), we obtain an understanding of the process to achieve proper cut-off. We understand the process to address the shipment and receipt of inventory before and after the count date (regardless of whether or not shipping and receiving are stopped during the counting). In order to design our test count procedures effectively, we also understand the process to control internal movement of inventory between separate locations or areas within a location (i.e., production activities, transfers and stock picking) to prevent omission or double counting. Many entities suspend movement (receiving, production, stock picking, shipping, etc.) or segregate certain inventory during the count to reduce the risk of improper cut-off.

For example, testing the last five receipts and shipments recorded prior to the count and the first five receipts and shipments recorded after the count (i.e., not a full sample based on audit sampling methodologies) can provide sufficient evidence over the cut-off of inventory as of the inventory count date(s) as the objective of this test is to determine what inventory is physically at the location, and therefore, subject to our count procedures. The extent of testing would depend on the engagement circumstances and is a matter of judgment. We may test more receipts and shipments immediately prior to and after the count when pre-numbered receiving or shipping tickets are not used. When the client uses pre-numbered receiving or shipping tickets, we note the last number used and any unused numbers leading up to and immediately after the physical inventory. For the last receipts and shipments selected, we may obtain supporting documentation (e.g., bills of lading, invoice) during the count. Subsequent to the count, we review the supporting documentation and trace the receipts and shipments to accounting records to determine whether the inventory was added to or relieved from the inventory records in the appropriate period.
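Selecting the last five and first five movements around the count date can be sketched as follows; the record layout and function name are illustrative assumptions:

```python
from datetime import date

# Hypothetical sketch of the cut-off selection described above: the last n
# receipts or shipments recorded before the count date and the first n
# recorded after it. The (date, document number) layout is illustrative.

def cutoff_selection(movements, count_date, n=5):
    """movements: list of (date, document_number) tuples for receipts or
    shipments at the location. Returns (last n on or before the count date,
    first n after it)."""
    ordered = sorted(movements)
    before = [m for m in ordered if m[0] <= count_date]
    after = [m for m in ordered if m[0] > count_date]
    return before[-n:], after[:n]

# Ten daily movements straddling a 24 June 2019 count date.
moves = [(date(2019, 6, d), f"DOC-{d:02d}") for d in range(20, 30)]
last5, first5 = cutoff_selection(moves, date(2019, 6, 24))
print([doc for _, doc in last5])
# ['DOC-20', 'DOC-21', 'DOC-22', 'DOC-23', 'DOC-24']
```

For each selected movement, the supporting documentation is then traced to the accounting records to determine that the inventory was added or relieved in the appropriate period, as described above.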

In addition to testing cut-off at the time of the inventory count, we also obtain substantive evidence at period-end to test for proper cut-off, taking into account the ownership terms and arrangements between entities.