The U.S. Department of the Treasury warns that the widespread use of artificial intelligence in fraud detection may leave smaller financial institutions behind larger banks. Larger firms have more resources and more historical data with which to experiment with AI, creating a capability gap. The department suggests it may help narrow this divide by contributing its own data to a shared data lake of fraud data available for training AI models.
While the financial sector has been an early adopter of AI for fraud detection, industry players rarely collaborate on sharing fraud data. This keeps smaller institutions from accessing the wide-ranging data sets needed to build accurate AI-based detection tools. The report cites a large firm that saw a 50% reduction in fraud after training AI on its historical data, demonstrating the effectiveness of AI-fueled detection methods.
The Department of the Treasury itself has used AI to mitigate check fraud in near real time, recovering $375 million. The department believes AI gives cybercriminals an advantage in the short term, as fraudsters are already using generative AI to create sophisticated phishing emails and deepfakes for identity impersonation. The report calls on financial institutions to be vigilant about new third-party risks introduced by AI, whether systems are developed in-house or acquired through vendors.
Financial institutions have so far mostly limited generative AI to use cases where detailed explanations of decisions are not necessary. Concerns over fairness, bias, privacy, and consumer protection have put a focus on the explainability of AI models, and Treasury calls for additional research and development to address these concerns and keep AI models from becoming ungovernable black boxes. The report urges the private sector to provide transparency by affixing the digital equivalent of a nutrition label to vendor AI systems and third-party data.
