Insights and Analysis

Mitigating AI-powered compliance risks: Lessons from The Matrix

""
""

The latest revisions to the Evaluation of Corporate Compliance Programs (ECCP) guidance show that the Department of Justice (DOJ) is wary of the potential misuse of Artificial Intelligence (AI). Equally captivated by AI are unscrupulous individuals who might use widely available generative AI tools to engage in fraud and corruption and to circumvent internal controls. More importantly, the vigilant compliance community can take a page out of humankind’s playbook in The Matrix and use analogue and organizational methods to mitigate AI-powered compliance risks.

AI’s meteoric rise in popularity and applications has been the defining story across a wide range of industries over the past few years, and DOJ has clearly been paying attention. Back in March 2024, Deputy Attorney General Lisa Monaco previewed the Department’s intention to scrutinize companies’ mitigation of AI-related risk in her keynote address at the American Bar Association’s National Institute on White Collar Crime.1 Six months later in front of the Society of Corporate Compliance and Ethics, Principal Deputy Assistant Attorney General (PDAAG) Nicole Argentieri provided the details.2

In that speech, PDAAG Argentieri unveiled updates to the ECCP surrounding the use and assessment of risks associated with emerging technologies.3 The 2024 changes draw prosecutors’ attention to the “deliberate or reckless misuse” of new and emerging technologies (especially AI), the “potential impact of new technologies . . . on [a company’s] ability to comply with criminal laws . . . [and AI risk integration in the] broader enterprise risk management (ERM) strategies.”4

We anticipate that bad actors may use the generative AI tools widely available today to fabricate, more credibly and more quickly, the documents and records needed to circumvent internal controls and engage in misconduct. Based on our experience, we outline possible abuses of generative AI in corporate settings, not to inspire any digital hijinks, but to flag potential perils and highlight possible solutions. The solutions we outline assume a certain level of existing compliance systems; risk-based adaptations can be developed for systems at different stages of their evolution and can prioritize higher-risk transactions, employees, or geographies.

Fabrication of documents

“The games you can play with evidence creation are extraordinary in this environment where creating content is so cheap,” said Matt Galvin, the DOJ Fraud Section’s top data expert.5 Forged documents that enable a transfer of value to be embezzled or diverted to an illicit purpose have been a recurring headache for compliance teams. Such documents range from invoices and receipts to travel bookings, itineraries, and charitable donation transfers. AI tools can generate convincing records that 1) look more credible than their manually created counterparts, 2) are therefore harder to spot, and 3) can be produced at previously unimaginable speeds.

With all of these “benefits,” bad actors might ask themselves: Why stop at mere transactions? AI tools can produce fraudulent corporate records, incorporation documents, and financial statements in furtherance of a fictitious or “ghost” vendor or charity scheme. Once a vendor has been onboarded in a company’s systems, it can be used to siphon funds out of the company. Taking things further, AI tools can manipulate due diligence reports and author fake backgrounds and corporate or financial histories – or at least gloss over the scandalous parts.

Requiring deliverables before payments to vendors are processed hasn’t thwarted industrious fraudsters, who have long assembled low-quality final work product from open-source materials and stock footage. Now, generative AI tools enable scammers to generate slick slide decks and dozen-page reports in an instant.

Fabrication of photographic evidence

Companies, especially in life sciences, often promote their products and services by sponsoring training events and industry conferences. Photos from these events are a typical form of proof that the funds were used as intended. Although submitting stock photos to satisfy such a requirement is not a new practice, AI image generation tools can now create realistic, customized photos of “events” that never actually occurred. In combination with AI-fabricated sponsorship request documentation, these images allow unscrupulous employees to funnel funds out of a company’s accounts under the guise of sponsorships and event organization.

Voice impersonation and deepfakes

Deepfakes – synthetic videos created using AI – imitate a person’s appearance and voice. Given the current state of widely available deepfake tools, the average employee in a corporate setting could likely spot a deepfake of a manager or a colleague authorizing or requesting the execution of a transaction on a video call.

If, however, someone impersonated that colleague or manager using an AI-altered voice on a phone line, the impersonation could prove more deceptive, because phone lines frequently distort voices. The fraudster can also easily spoof his or her caller ID to display the colleague’s or supervisor’s name, leading the unsuspecting employee to believe that the call is legitimate.

With relatively little effort, a fraudster could use AI-generated documents to onboard a fictitious conference organizer and process a sponsorship request for a non-existent training. AI-generated photos of the attendees at this conference as well as its plenary session could then be uploaded as proof of implementation along with fabricated training materials. And if an employee were to try raising questions, it’s easy to picture them receiving a “reassuring” phone call from the fraudster impersonating a regional compliance manager.

Protecting against the misuse of AI 

Widely available technical solutions that can, on their own, reliably detect AI-synthesized records are still at a nascent stage of development. Some AI detection tools use linguistic and metadata analysis, as well as image artifacts, to identify signs of AI-generated content. But perpetual technological evolution is needed to keep pace (or at least catch up) with the lightspeed advancements in AI.

Like humankind in The Matrix, compliance teams should base their strategy for mitigating AI-powered fraud and corruption risks on organizational, analogue, and traditional measures . . . at least for now. As technical, reliability, access, and budgetary barriers are surmounted, compliance teams can leverage market-tested AI-powered tools while continuing to “keep the human in the loop” when decisions are made.

For the time being, compliance teams should focus on AI governance, risk management, training, and verification. Company-wide AI governance frameworks should be adopted. These frameworks should define clear accountability and oversight mechanisms, align on AI initiatives and acceptable uses, and foster trust and transparency among internal and external stakeholders.6

AI should also be integrated into ERM frameworks. AI can perform real-time analysis of large datasets, identifying patterns and anomalies before they emerge as enterprise risks, and AI-driven automation can make standard ERM tasks, such as risk assessment, compliance monitoring, and reporting, more efficient. This integration can help proactively identify, assess, and mitigate related risks and reduce siloed information (and DOJ will ask about AI integration into the ERM system if things go awry).
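
To make this concrete, the short Python sketch below illustrates one elementary form of anomaly flagging: scoring a new payment against a vendor’s historical pattern. The field values and the threshold are illustrative assumptions rather than a production ERM integration.

    from statistics import mean, stdev

    def flag_outlier_payment(history: list, new_amount: float, z_threshold: float = 3.0) -> bool:
        """Flag a payment that deviates sharply from the vendor's historical pattern."""
        if len(history) < 2:
            return False  # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return new_amount != mu
        return abs(new_amount - mu) / sigma > z_threshold

    # Example: a vendor normally invoicing around 10,000 suddenly submits 95,000.
    print(flag_outlier_payment([9800.0, 10100.0, 10050.0, 9900.0], 95000.0))  # True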

Robust instruction for compliance teams across company operations and geographies on spotting the signs of generative AI use is critical. Fostering a healthy degree of professional skepticism is now more warranted than ever.

Enhancing internal controls to verify manually entered data is crucial, because manual data can be manipulated. Certain measures can be scaled up or down; others should be reserved for high-risk situations. In other words, the old “one size does not fit all” compliance adage still reigns supreme.

Beyond standard gap analyses, “Red Team” testing should be conducted to assess an organization’s response and resilience against generative AI risks. Analogous to a cyber penetration testing team, the “Red Team,” acting as the fraudster, can attempt to identify system, organizational, and human vulnerabilities and propose improvements.


Authenticate records

“Trust (your employees to do the right thing) but verify (with independent sources that they actually do it)” is another old adage that applies to preventing AI-created records from diverting company funds to illicit purposes. 

For example, expense management and vendor onboarding systems should verify the information entered by employees. Not only should expenses be paid for with corporate cards, but the card issuer should send the transaction data directly to the expense management system. Reasonable accommodations for cash payments or payments with personal cards should be made, but such transactions should be closely scrutinized. If corporate cards have not been adopted, developing a timeline for adoption and piloting them at specific business units and functions based on risk can be an appropriate mitigation strategy. 

Information on travel expenses such as airfare and lodging should be pulled from the company’s travel portal, and incidental expenses such as ground transportation and meals should be cross-referenced against the traveler’s location based on the itinerary.
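
By way of illustration, the Python sketch below shows how such a cross-reference might work, assuming simplified itinerary records from the travel portal and expense records from the card issuer’s feed; the data structures and field names are hypothetical.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ItinerarySegment:          # hypothetical record pulled from the travel portal
        city: str
        start: date
        end: date

    @dataclass
    class Expense:                   # hypothetical record from the card issuer's feed
        merchant_city: str
        txn_date: date
        amount: float

    def flag_location_mismatches(expenses, itinerary):
        """Return expenses charged somewhere the itinerary says the traveler was not."""
        flagged = []
        for exp in expenses:
            matches_itinerary = any(
                seg.city.lower() == exp.merchant_city.lower()
                and seg.start <= exp.txn_date <= seg.end
                for seg in itinerary
            )
            if not matches_itinerary:
                flagged.append(exp)
        return flagged

    # Example: a meal charged in a city the itinerary never covers gets flagged for review.
    itinerary = [ItinerarySegment("Chicago", date(2025, 3, 3), date(2025, 3, 6))]
    expenses = [Expense("Chicago", date(2025, 3, 4), 42.10),
                Expense("Miami", date(2025, 3, 5), 310.00)]
    print(flag_location_mismatches(expenses, itinerary))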

The corporate records of new vendors should be confirmed with corporate registration data aggregators. Ideally, the vendor onboarding system would be programmatically connected with the aggregators and the compliance team’s only remaining task would be to scrutinize inconsistencies.
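
A minimal sketch of that reconciliation step follows, assuming a hypothetical aggregator response and field names; only the fields where the onboarding submission and the official record disagree are surfaced for compliance review.

    # Fields to compare between the employee's onboarding submission and the
    # registry aggregator's record; names are hypothetical.
    FIELDS_TO_COMPARE = ["legal_name", "registration_number", "registered_address", "incorporation_date"]

    def reconcile_vendor(onboarding_record: dict, registry_record: dict) -> dict:
        """Return only the fields where the two sources disagree, for compliance review."""
        mismatches = {}
        for field in FIELDS_TO_COMPARE:
            submitted = str(onboarding_record.get(field, "")).strip().lower()
            official = str(registry_record.get(field, "")).strip().lower()
            if submitted != official:
                mismatches[field] = {"submitted": onboarding_record.get(field),
                                     "registry": registry_record.get(field)}
        return mismatches

    # Example: a mismatched registration number is surfaced; matching fields are not.
    print(reconcile_vendor({"legal_name": "Acme Ltd", "registration_number": "12345"},
                           {"legal_name": "Acme Ltd", "registration_number": "99999"}))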

Due diligence providers and data aggregators should give their reports to the compliance team – not the requesting employee – to reduce the risk of falsification using AI. Once the due diligence report is final, compliance teams and requesting employees should work together to mitigate any identified risks. 

Detecting fabricated deliverables can be a more challenging undertaking. Absent reliable AI identification tools, manual checks may be necessary. Depending on the type of work product, requesting underlying data, sources, and drafts with their associated metadata can support an authentication analysis (for example, human authors typically exhibit erratic typing and editing patterns, but AI tools create documents in a single “block” or at unnatural speeds). 
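
As a rough illustration, the Python sketch below applies one such metadata heuristic to a Word document: a file whose creation and last-modified timestamps are nearly identical and whose revision counter sits at one was likely produced in a single “block.” The thresholds are illustrative assumptions, not established forensic standards.

    import xml.etree.ElementTree as ET
    import zipfile
    from datetime import datetime

    NS = {"cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
          "dcterms": "http://purl.org/dc/terms/"}

    def single_block_indicators(docx_path: str, min_editing_minutes: int = 10) -> list:
        """Return red flags derived from a .docx file's core properties."""
        with zipfile.ZipFile(docx_path) as zf:
            root = ET.fromstring(zf.read("docProps/core.xml"))
        created = root.findtext("dcterms:created", default="", namespaces=NS)
        modified = root.findtext("dcterms:modified", default="", namespaces=NS)
        revision = root.findtext("cp:revision", default="", namespaces=NS)

        flags = []
        if created and modified:
            delta = (datetime.fromisoformat(modified.rstrip("Z"))
                     - datetime.fromisoformat(created.rstrip("Z")))
            if delta.total_seconds() < min_editing_minutes * 60:
                flags.append(f"only {delta} elapsed between creation and last save")
        if revision in ("", "1"):
            flags.append("revision counter indicates a single save")
        return flags

    # Example (hypothetical file name): print(single_block_indicators("vendor_report.docx"))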

Requiring deliverables to be authenticated by at least two authors can also mitigate this risk. Designating previously verified authors can further strengthen this measure. Particular circumstances may warrant subjecting deliverables to linguistic analysis tools that forensically compare an author’s writing to authenticated writing samples. 
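
The toy example below conveys the intuition behind such linguistic comparison: build simple function-word frequency profiles from an authenticated writing sample and from the deliverable in question, then measure their similarity. Real forensic tools use far richer features; the word list and scoring here are illustrative only.

    import math
    import re

    # A small set of function words; real stylometric tools use far richer features.
    FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "is", "was",
                      "for", "with", "as", "but", "not", "on", "at", "by", "this"]

    def profile(text: str) -> list:
        """Relative frequency of each function word in the text."""
        words = re.findall(r"[a-z']+", text.lower())
        total = len(words) or 1
        return [words.count(w) / total for w in FUNCTION_WORDS]

    def cosine_similarity(a, b) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # A markedly low similarity between an authenticated sample and the deliverable
    # is a cue for closer review, not proof of AI authorship.
    # score = cosine_similarity(profile(known_sample_text), profile(deliverable_text))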

The most sensitive records can also be hashed and tracked using blockchain. 
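
The hashing step itself is straightforward; the sketch below computes a SHA-256 fingerprint of a record at the time it is accepted (the file name is hypothetical). Anchoring that fingerprint to a ledger is omitted here, but any later alteration of the file would change the hash and reveal tampering.

    import hashlib

    def fingerprint(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Store fingerprint("sponsorship_invoice.pdf") alongside the record when it is
    # accepted; recompute and compare before relying on the document later.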


Define parameters for photographic authentication

Developing specific guidelines on what event photos should capture can dissuade those who seek to use AI-generated photos to process a payment, or at least assist those tasked with preventing them.

Current AI image generation tools are known to struggle with details and consistency. The event photo guidelines should require multiple photos from each event. The guidelines should require that photos capture event attendees along with company branding for each specific event, such as banners and literature with logos, event name, location, and dates. Known attendees, such as keynote speakers and individuals sponsored by the company to attend, should clearly appear in some of the photos.

The guidelines should further require that photo metadata be captured and recorded in the company’s compliance systems. Most smartphones conveniently display metadata such as location, date, and time simply by “swiping up” on a photo. AI-generated photos typically do not include such metadata.
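
For teams that want to automate this check, the Python sketch below uses the Pillow library to flag uploaded event photos that carry no capture timestamp, GPS data, or camera model in their EXIF fields. The absence of metadata is a triage signal rather than proof of fabrication, since metadata can also be stripped or forged.

    from PIL import ExifTags, Image

    def missing_capture_metadata(photo_path: str) -> list:
        """Return EXIF fields expected of a genuine event photo that were not found."""
        exif = Image.open(photo_path).getexif()
        named = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
        missing = []
        if "DateTime" not in named:
            missing.append("capture date/time")
        if "GPSInfo" not in named:
            missing.append("GPS location")
        if "Model" not in named:
            missing.append("camera/phone model")
        return missing

    # Example (hypothetical file name): print(missing_capture_metadata("conference_photo_01.jpg"))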


Independently confirm urgent payment requests

Training on avoiding scams, ranging from robocalls and fake texts to spear-phishing, voice impersonation, and deepfakes, should be offered to all employees for the sake of both corporate and personal security. As an extra layer of protection, employees engaged in sensitive functions such as compliance, accounting, and human resources should receive enhanced training.

A voice impersonation scheme often includes a request from a person of authority purporting to urgently need a transaction processed via an irregular-yet-plausible channel. For example, in the final hours of a financial quarter, a “senior manager” pressures a recently hired accounting analyst to place a transaction ahead of the quarter closing. 

Another type of scheme involves fraudsters purposefully sharing erroneous transaction information with employees in sensitive positions expecting that the latter will eagerly correct them.

To counter such schemes, companies should designate approved communication channels for transaction requests. Requests received outside of these channels should be questioned thoroughly. Employees most likely to receive such requests should be trained to resist the purported urgency or the natural eagerness to correct erroneous information, and to verify the request by contacting the requestor through a secure channel.

For certain types of transactions, adopting a two-factor authentication tool could further reduce risk. The party requesting the transaction would be asked to provide a one-time code or approve a one-time prompt on their mobile device. This technology is widely used in personal banking and can be scaled up or down in the corporate setting for financial and non-financial approvals. An interim, low-cost alternative is to ask the party requesting the transaction to find time on the other party’s calendar and send an invitation for a follow-up call to discuss the transaction; an outsider would be unable to do so.
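
One possible implementation of the one-time code, sketched below, uses the pyotp library and a standard time-based code: the requester’s device is enrolled once through a trusted channel, and the employee processing the transaction verifies the spoken code before acting. The setup shown is an assumption for illustration, not a prescribed product.

    import pyotp

    # Enrollment, done once through a trusted channel (e.g., in person or via the
    # company's identity provider): the requester's device stores the shared secret.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # At transaction time, the requester reads the current six-digit code from
    # their device; a voice impersonator on a spoofed caller ID cannot produce it.
    code_given_over_the_phone = totp.now()         # stand-in for the spoken code
    print(totp.verify(code_given_over_the_phone))  # process the request only if True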

The journey ahead

Although AI represents an exciting new technological frontier, when prosecutors come knocking, the virtual assistants won’t be the ones facing the heat. The main takeaway from DOJ’s updated guidance is that excitement over new technological possibilities must be tempered by human risk mitigation. Generative AI’s undeniable productivity benefits should be balanced against its risks, and compliance professionals have long experience balancing competing priorities.

PDAAG Argentieri has acknowledged that “generative AI makes it easier for criminals to commit crimes and harder for all of us – law enforcement and civilians alike – to know what is real and what is not.”7 This is why DOJ is urging companies to integrate the management of AI-related risk into broader ERM strategies and to train staff in the proper and responsible use of such emerging technologies.8 Companies should take care to update their handbooks and codes of conduct to preclude the unethical use of AI and tailor trainings to the specific duties of different employee groups.

Although we’ve suggested some technological approaches to tackling the fraudulent use of AI, at the end of the day – at least for the foreseeable future – there’s no better safety net than human diligence. Companies that don’t want to run afoul of DOJ prosecutors eager to keep up with the times must double down on their commitment to corporate ethics and implement robust guardrails. Companies should consult counsel with not only legal but also technological and business acumen to navigate this quickly shifting landscape and develop the top-tier compliance program necessary to keep their business on the straight and narrow, come what technological changes may.

 

 

Authored by Peter Spivack, Shelita Stewart, Nikolaos Doukellis, Toni Cross, and ELTEMATE’s Jeremy Burdge and Alexes Anderson.

 

 

1 Deputy Attorney General Lisa Monaco Delivers Keynote Remarks at the American Bar Association’s 39th National Institute on White Collar Crime, available at: https://www.justice.gov/opa/speech/deputy-attorney-general-lisa-monaco-delivers-keynote-remarks-american-bar-associations; see also Stephanie Yonekura et al., Key insights from the 2024 American Bar Association White Collar Crime Conference, available at: https://www.hoganlovells.com/en/publications/key-insights-from-the-2024-american-bar-association-white-collar-crime-conference.

2 Principal Deputy Assistant Attorney General Nicole M. Argentieri Delivers Remarks at the Society of Corporate Compliance and Ethics 23rd Annual Compliance & Ethics Institute, available at: https://www.justice.gov/opa/speech/principal-deputy-assistant-attorney-general-nicole-m-argentieri-delivers-remarks-society.

3 Peter Spivack et al., DOJ updates guidance on Corporate Compliance Programs, available at: https://www.hoganlovells.com/en/publications/doj-updates-guidance-on-corporate-compliance-programs.

4 The Evaluation of Corporate Compliance Programs is the roadmap DOJ’s Criminal Division uses to assess a company’s compliance programs during the resolution stage of a criminal investigation. It is available at: https://www.justice.gov/criminal/criminal-fraud/page/file/937501/dl (2024 ECCP).

5 Gaspard Le Dem, DOJ data expert: analytics shouldn’t be “siloed” in compliance function, Global Investigations Review, available at: https://globalinvestigationsreview.com/just-anti-corruption/article/doj-data-expert-analytics-shouldnt-be-siloed-in-compliance-function.

6 See Hogan Lovells, 2024-2025 AI Trends Guide, pp. 4-7, available at: https://digital-client-solutions.hoganlovells.com/resources/ai-hub.

7 Principal Deputy Assistant Attorney General Nicole M. Argentieri Delivers Remarks at the Computer Crime and Intellectual Property Section’s Symposium on Artificial Intelligence in the Justice Department at Center for Strategic and International Studies, available at: https://www.justice.gov/opa/speech/principal-deputy-assistant-attorney-general-nicole-m-argentieri-delivers-remarks-0.

8 2024 ECCP, pp. 4-5.
