Exercise: Building a RACM with AI
In this exercise, we will move beyond single prompts and use a series of chained, structured prompts to build and analyze a Risk and Control Matrix (RACM) for a common business process. You will learn to use AI as a partner in risk identification, control analysis, and test script generation.
Objective
To construct a detailed RACM from a narrative process description by chaining multiple prompts, each designed for a specific analytical task, and outputting the final product in a machine-readable format.
Scenario
Your client has provided you with a narrative description of their Procure-to-Pay (P2P) process. For the purpose of this exercise, imagine you have attached this document to your prompt. Your task is to use AI to systematically build a RACM. The steps that follow were run with Claude Sonnet 4.
Step 1: Risk Identification
The first step is to read the process narrative thoroughly and identify the key risks before thinking about controls. A Chain-of-Thought approach encourages the AI to perform a more comprehensive analysis.
Prompt:
You are a seasoned risk management professional specializing in the Procure-to-Pay process. Given
the attached process narrative, identify the key inherent risks. For each
risk, explain the potential business impact. Use a step-by-step reasoning process, addressing the
procurement, receiving, and payment stages separately to ensure complete coverage. Output the
result as a numbered list. Ensure your output is terse.
Step 2: Control Mapping
With a list of risks identified, the next step is to link them to the specific control activities mentioned in the narrative. This prompt will use the output from Step 1 as part of its context.
Prompt:
Excellent. Now, using the same process narrative and the list of identified risks from our
previous step:
1. Inadequate Vendor Due Diligence Risk
2. Vendor Master Data Integrity Risk
3. Inadequate Approval Controls Risk
4. Vendor Master File Maintenance Risk
5. Service Receipt Verification Weakness
6. Goods Receipt Control Gap
7. Receiving Documentation Risk
8. Invoice Processing Control Weakness
9. Inadequate Segregation of Duties
10. Three-Way Match Control Gap
11. Payment Authorization Risk
12. Threshold-Based Control Inconsistency
13. Email-Based Service Verification Risk
14. Batch Payment Processing Risk
identify the specific control activities mentioned in the document that are designed to mitigate each risk. Create a simple table in Markdown with two columns: "Risk" and "Control Description".
Step 3: Generating a Structured RACM
This is where we transition from human-readable tables to machine-readable data. The goal is to have the AI synthesize the information and structure it as JSON. This format can be directly ingested into GRC tools, databases, or analytics software.
Prompt:
Now, convert our previous risk-control mapping into a structured RACM. For each control, analyze
its description to infer its attributes. Specifically, determine the control type
(Preventive/Detective) and nature (Manual/Automated) and suggest a unique Control ID (e.g.,
P2P-C01). Output the entire RACM as a single JSON object. The object should contain a single key,
"racm", which is an array of control objects. Each object in the array must have the following
keys: "control_id", "risk_description", "control_description", "control_type", "control_nature".
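To make the target shape concrete, here is a minimal Python sketch that parses and validates a RACM in the requested structure before handing it to downstream tooling. The control values in the fragment are illustrative placeholders, not actual model output:

```python
import json

# Illustrative RACM fragment in the shape the prompt requests
# (field values here are examples, not real model output).
racm_json = """
{
  "racm": [
    {
      "control_id": "P2P-C01",
      "risk_description": "Inadequate vendor due diligence",
      "control_description": "New vendors require Procurement Manager approval prior to activation.",
      "control_type": "Preventive",
      "control_nature": "Manual"
    }
  ]
}
"""

REQUIRED_KEYS = {"control_id", "risk_description", "control_description",
                 "control_type", "control_nature"}

def validate_racm(raw: str) -> list[dict]:
    """Parse the model's JSON output and confirm every control object
    carries the required keys before ingesting it into a GRC tool."""
    data = json.loads(raw)
    controls = data["racm"]
    for control in controls:
        missing = REQUIRED_KEYS - control.keys()
        if missing:
            raise ValueError(f"{control.get('control_id', '?')} missing {missing}")
    return controls

controls = validate_racm(racm_json)
print(len(controls), controls[0]["control_id"])  # → 1 P2P-C01
```

A validation step like this is worth keeping between the model and any downstream system, since LLM output can drift from the requested schema.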
Step 4: Designing Audit Test Procedures
Finally, with a structured RACM that you have reviewed, you can use the AI to draft specific audit procedures for a given control. This requires a persona shift to an audit practitioner.
Prompt:
Select the first control from our generated JSON RACM, P2P-C01. Now, you are an IT Audit Senior.
Draft detailed audit test procedures for this specific control. Provide separate procedures for:
a) Test of Design (ToD) to assess its appropriateness.
b) Test of Operating Effectiveness (ToE) to
verify it is working as intended. Be specific in your test steps, including potential sample sizes
and evidence to be gathered.
Here is what the final report looks like with Claude Sonnet 4. We could make it prettier, but c'est la vie.
Audit Test Procedures for Control P2P-C01
Control ID: P2P-C01
Control Description: New vendors require Procurement Manager approval, including documented business justification, prior to activation in the SureSpend system.
Risk Addressed: Inadequate vendor due diligence, increasing risk of fraud or engagement with unqualified vendors.
A) Test of Design (ToD)
Objective: Determine whether the control is designed to prevent onboarding of vendors without proper review and justification.
Test Steps:
- Review Policy: Obtain the vendor onboarding policy. Confirm that it requires Procurement Manager approval and documented justification for all new vendors.
- System Workflow Review: Inspect SureSpend workflow configuration. Confirm that approval is enforced through automated routing and that vendor records cannot be activated without prior approval.
- Approval Criteria: Review required fields in the vendor onboarding form or system template. Confirm that justification is mandatory and includes sufficient business rationale.
- Access Control Review: Obtain role-based access matrix. Confirm that only Procurement Managers have approval rights and that initiators cannot self-approve.
- Design Evaluation: Conclude whether the control design is adequate based on existence of: (a) documented policy, (b) system enforcement, (c) required justification, and (d) proper segregation of duties.
Sample Size: N/A (Design testing is based on documentation and configuration review)
Conclusion: Document whether the control, as designed, mitigates the stated risk and meets control objectives.
B) Test of Operating Effectiveness (ToE)
Objective: Verify that vendor approvals occurred consistently and in accordance with the control design during the audit period.
Test Steps:
- Define Population: Obtain a full list of vendors added in SureSpend during the 12-month audit period. Validate completeness using audit logs or vendor master change reports.
- Select Sample: Select 30 vendors randomly from the population. If high volume or risk, consider stratified or judgmental sampling.
- Verify Approval: For each sample, obtain system records showing Procurement Manager approval (e.g., workflow logs, timestamps, approver ID). Confirm approval occurred before activation.
- Validate Approver Authority: Cross-check approvers against current access rights. Ensure all approvals were made by authorized Procurement Managers during the relevant timeframe.
- Review Justification: Confirm that business justification was present for each sample and met minimum content standards (e.g., vendor purpose, alignment to business need).
- Check Completeness: Confirm required vendor fields were completed before approval (e.g., tax ID, banking details). Verify system prevents approval if fields are missing.
- Timeliness: Calculate time from request to approval. Flag delays exceeding 5 business days unless documented.
- Exceptions: Inquire about and review rejected or escalated vendors. Confirm appropriate handling and documentation.
- Confirm Set-Up Accuracy: Verify that final vendor records in SureSpend match approved application details.
- Check ITGC Dependency: Confirm that general IT controls over SureSpend (access and change management) have been tested and are effective. Note elevated risk if not.
Sample Size: 30 vendors
Audit Period: 1 Jan – 31 Dec 2024
Tolerable Exception Rate: 5% (given manual components)
Conclusion: Conclude on control effectiveness based on exceptions, their severity, and any compensating controls. Note if control failure is isolated or systemic.
This exercise demonstrates principles that form the foundation of advanced AI-driven audit work:
- Prompt Chaining: The output of one analytical step serves as the input for the next, creating a logical and powerful workflow that builds on itself.
- Structured I/O: Moving from simple text or Markdown to JSON output is the key to automation. Structured data allows AI systems to seamlessly interact with other audit technologies.
- Dynamic Personas: Shifting the AI's role from a "risk manager" to an "IT audit senior" allows you to leverage the model's breadth of knowledge for different stages of the audit lifecycle.
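The chaining pattern in the first bullet can be sketched in a few lines of Python. Here `call_model` is a stand-in for whatever LLM API you actually use (it returns a canned string so the sketch runs offline); the point is that each step's output becomes part of the next step's prompt:

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a canned string
    so this sketch runs without a network connection."""
    return f"[model output for prompt beginning: {prompt[:40]!r}]"

def run_chain(steps: list[str]) -> str:
    """Run prompts in sequence, feeding each step's output into the
    next step's prompt, as in Steps 1-3 of the exercise
    (risks -> control mapping -> JSON RACM)."""
    context = ""
    for step in steps:
        prompt = f"{context}\n\n{step}".strip() if context else step
        context = call_model(prompt)  # output becomes the next step's input
    return context

final = run_chain([
    "Identify the key inherent risks in the attached P2P narrative.",
    "Map each identified risk to the controls in the narrative.",
    "Convert the risk-control mapping into a structured JSON RACM.",
])
print(final.startswith("[model output"))  # → True
```

In practice you would swap `call_model` for your provider's chat endpoint and keep the intermediate outputs for review, since an error in an early step propagates through every later one.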
By mastering these techniques, you move from simply asking questions to designing automated analytical processes.