AI Risk Management: New Risks for the CFO

Introduction

Artificial Intelligence (AI) carries many benefits, but it also carries a spectrum of risks, many of which are common to all forms of technology, such as data breaches and system and data reliability issues. However, as AI becomes more embedded in financial operations, CFOs must navigate a unique set of AI risks. This article does not seek to provide an exhaustive list but instead focuses on several emerging GenAI risks.

Confidentiality and Access Management

Confidentiality and data access in the age of AI require careful management. Not only must we exercise precise control over which individuals can access sensitive information, but we must also oversee the AI tools that access and use confidential data. This dual responsibility underscores the complexity of safeguarding information. These AI tools introduce new considerations related to confidentiality and access management, including:

  • What happens to your data after it is entered into your AI technology?
  • Where is the data stored?
  • Who inside and outside of the company has access to this data?
  • Is your data aggregated and used by the AI technology to respond to other users?
  • Is data that is entered and used anonymized?

While various GenAI setups and configurations are designed to maintain the security and separation of company data, it is important to highlight that GenAI tools, such as chatbots, can pose inherent risks to data confidentiality.

Let’s consider the scenario of a company employing GenAI chatbots for self-service data access, like allowing users to retrieve crucial financial information such as revenue and expenditure data. This situation can give rise to additional access considerations:

  • In traditional applications, access control is managed through roles and permissions.
  • However, when employing a GenAI chatbot that interacts with extensive financial datasets, it becomes imperative to ensure that comparable restrictions are enforced when the chatbot formulates a response.

Failure to do so could potentially result in the inadvertent disclosure of confidential information to unauthorized users.
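As a minimal sketch of the point above (all roles, record names, and data are hypothetical, not any specific product's API), comparable restrictions can be enforced by filtering records against the requesting user's role before anything reaches the chatbot's underlying model:

```python
# Hypothetical sketch: enforce role-based access before the chatbot sees data.

RECORD_PERMISSIONS = {
    "revenue_summary": {"cfo", "controller", "analyst"},
    "payroll_detail": {"cfo", "controller"},  # more sensitive: narrower access
}

FINANCIAL_RECORDS = {
    "revenue_summary": "Q3 revenue: $12.4M",
    "payroll_detail": "Payroll by employee: ...",
}

def retrieve_for_user(role: str, requested: list[str]) -> list[str]:
    """Return only the records this role is permitted to see.

    Filtering happens *before* records are passed to the GenAI model,
    mirroring the role/permission checks of a traditional application.
    """
    allowed = []
    for name in requested:
        if role in RECORD_PERMISSIONS.get(name, set()):
            allowed.append(FINANCIAL_RECORDS[name])
    return allowed

# An analyst asking about payroll gets nothing back; a controller does.
print(retrieve_for_user("analyst", ["revenue_summary", "payroll_detail"]))
```

The key design choice is that the permission check sits outside the model: the chatbot can only summarize what the retrieval layer hands it, so a cleverly worded prompt cannot talk it into revealing data it never received.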

Data Quality and Control

GenAI is recognized for its tendency to embellish or present inaccurate information, commonly called hallucination. Issues of relevance and inaccurate responses can be addressed in part through improved data quality. A common and powerful approach to improving the relevance of responses is retrieval-augmented generation (RAG): the GenAI tool first searches one or more selected databases for relevant documents and data, then uses that content to help generate an accurate and relevant answer. It's like looking up reference material before writing an essay.
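The two RAG steps above can be sketched as follows. This is a deliberately simplified illustration: the documents are invented, and a simple keyword overlap stands in for the vector search a real RAG system would use.

```python
# Minimal RAG sketch (hypothetical data; keyword overlap stands in for a
# real vector database): retrieve relevant documents, then hand only that
# content to the model as context for its answer.

DOCUMENTS = [
    "FY2023 revenue was $48M, up 12% year over year.",
    "Travel expenses are reimbursed within 30 days.",
    "FY2023 operating expenses totaled $31M.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they contain."""
    words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str) -> str:
    """Assemble the prompt the GenAI model would receive."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What was FY2023 revenue?"))
```

Because the model is instructed to answer from the retrieved context, the quality of its answer is bounded by the quality of the database, which is exactly why the risks below matter.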

In these cases, there are crucial risks surrounding the RAG approach and database, such as:

  • Is the data loaded accurate and complete?
  • Has the information loaded become out of date?
  • How do you deal with conflicting data that may be loaded?

These complexities and risks are key reasons why users, even those taking advantage of RAG, must continue to use AI with a copilot approach, ensuring there is continued human review and validation, especially of highly complex or critical matters.
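One of the risks above, stale data, lends itself to an automated guardrail. As a hedged illustration (the documents and cutoff date are invented), tagging each loaded document with a load date lets you screen out-of-date content before it ever reaches the model:

```python
# Hypothetical sketch: a freshness check on a RAG knowledge base, so that
# documents loaded before a cutoff date never reach the model.

from datetime import date

knowledge_base = [
    {"title": "Expense policy v1", "loaded": date(2022, 1, 10)},
    {"title": "Expense policy v2", "loaded": date(2024, 3, 5)},
    {"title": "Revenue recognition memo", "loaded": date(2024, 6, 1)},
]

def fresh_documents(docs: list[dict], cutoff: date) -> list[dict]:
    """Keep only documents loaded on or after the cutoff date."""
    return [d for d in docs if d["loaded"] >= cutoff]

# "Expense policy v1" predates the cutoff and is excluded.
current = fresh_documents(knowledge_base, date(2023, 1, 1))
print([d["title"] for d in current])
```

A date filter like this handles staleness but not conflicts or inaccuracy, which is why the human copilot review described above remains essential.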

Review and Validation of AI Outcomes

Review and validation have continued to be key methods of instilling effective controls in the finance and accounting space. This process often involves some aspect of reperforming the tasks performed by others to ensure the appropriate logic was applied and outcomes achieved. For example, this occurs in the review of account reconciliations, vendor invoices, and explanations of period-over-period variances.

In some cases, AI's complexity can obscure the logic behind its outputs, making traditional process review and validation challenging. Therefore, finance leaders will need to re-evaluate their review and validation activities to determine how they continue to get comfort around financial records and results managed by AI. Methods to address this could include:

  • Regular system testing to validate that GenAI tools are operating as expected. This may include processing test data with known inputs and outputs and confirming the GenAI tool produces accurate results.
  • Leveraging explainable AI (XAI) to shed light on AI decision-making. XAI is a type of AI that focuses on making the workings and decisions of machine learning models transparent and understandable to humans, with the goal of enhancing trust in and manageability of AI systems. By incorporating XAI, CFOs may be able to align validation processes with AI models, ensuring outcomes are consistent and justifiable.
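The first method above, testing with known inputs and outputs, can be sketched as a simple regression harness. Everything here is hypothetical: a real harness would call the GenAI tool itself, whereas the stand-in function below only illustrates the known-input/known-output pattern.

```python
# Hypothetical regression-test sketch for a GenAI-assisted reconciliation
# step: process test data with known inputs and outputs and confirm the
# tool's result matches before relying on it in production.

def genai_reconcile(ledger_total: float, bank_total: float) -> str:
    """Stand-in for the GenAI tool; a real harness would invoke the tool."""
    diff = round(ledger_total - bank_total, 2)
    return "matched" if diff == 0 else f"variance of {diff}"

# Test cases with known inputs and expected outputs.
TEST_CASES = [
    ((1000.00, 1000.00), "matched"),
    ((1250.50, 1200.50), "variance of 50.0"),
]

def run_validation() -> bool:
    """Return True only if every known case produces the expected answer."""
    return all(genai_reconcile(*inputs) == expected
               for inputs, expected in TEST_CASES)

print(run_validation())
```

Run on a recurring schedule, a harness like this gives finance leaders documented evidence that the tool still behaves as expected after model or configuration changes.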

Security and Financial Safeguards

The evolution of AI also advances the methods of cyberattacks, including more sophisticated forms of hacking, phishing, and malware. This creates increased exposure, especially as it relates to safeguarding the company’s financial assets (e.g., cash). CFOs and finance leaders must strengthen cybersecurity practices and seriously re-examine their processes and controls. They should evaluate whether these common controls are strong and precise enough to identify AI-generated cybersecurity threats, which can appear significantly more “real”:

  • Payment review processes to ensure fictitious invoices and vendors are identified.
  • Security around communication with financial institutions or sharing information about financial assets (e.g., bank account numbers, login details, balance details, etc.).
  • Methods of validating the true identity of those they are sharing information with.

Conclusion

As CFOs and finance leaders adopt AI, it will be crucial that risk, controls, and compliance requirements are re-examined in a timely manner and that appropriate actions are taken to address new areas of exposure. Furthermore, as this technology is evolving rapidly, this should become a recurring, standard practice as all parties learn to navigate this exciting new technology.

At Connor Group, our AI subject matter experts stand ready to collaborate with you, providing practical AI solutions tailored to your organization's needs. By leveraging their insights and expertise, you can not only stay ahead of the AI curve but also build a powerful technology strategy that maximizes automation and value. As AI continues to evolve, let's embark on this exciting journey together, adapting and thriving in a world of boundless possibilities.

If you're seeking to adopt AI and do so in a controlled and effective way, contact Connor Group. Leveraging both our control and technology expertise, we'll share our experiences and help you create practical AI solutions that are well-controlled.

Authors

Jason Pikoos

Managing Partner, Client Experience

Lauren Bowe

Automation and Analytics Leader