An AI Love Letter from NY DFS
Execs and infosec personnel at entities regulated by the New York State Department of Financial Services received an AI-focused industry letter.
Much of it may seem obvious to cybersecurity professionals, but here are a few noteworthy aspects:
Official guidance: This is one of the first pieces of official regulatory guidance specifically addressing AI-related cybersecurity risks for financial institutions. It shows regulators are taking AI risks seriously.
AI-enhanced attacks: The document highlights how AI is being used to enhance existing cyberattack techniques, particularly social engineering and deepfakes aimed at gaining access to Non-Public Information (NPI).
AI-driven social engineering attacks have led to employees taking unauthorized actions, such as wiring substantial amounts of funds to fraudulent accounts.
Supply chain focus: There’s a strong emphasis on third-party and vendor management related to AI risks, reflecting growing concern about supply chain vulnerabilities.
Biometric data risks: The guidance specifically mentions risks associated with storing biometric data for AI authentication systems, an emerging area of concern.
Covered Entities should consider using authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks by avoiding authentication via SMS text, voice, or video, and using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys. Similarly, instead of using a traditional fingerprint or other biometric authentication system, Covered Entities should consider using an authentication factor that employs technology with liveness detection or texture analysis to verify that a print or other biometric factor comes from a live person.
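On the phishing-resistant end of that spectrum, a control can be as simple as a policy gate that refuses to enable the factors the letter calls out as spoofable. Here is a minimal Python sketch of the idea; the factor names and risk classifications are illustrative assumptions on my part, not definitions from the letter:

```python
# Illustrative policy check: reject authentication factors vulnerable to
# AI-driven impersonation (SMS, voice, video) and require at least one
# phishing-resistant factor. Factor names here are assumptions for the
# sketch, not terms defined by the DFS letter.

DEEPFAKE_VULNERABLE = {"sms_otp", "voice_verification", "video_verification"}
PHISHING_RESISTANT = {"hardware_security_key", "digital_certificate",
                      "biometric_with_liveness_detection"}

def validate_mfa_policy(factors: set[str]) -> list[str]:
    """Return policy findings for a proposed set of authentication factors."""
    findings = []
    for factor in factors & DEEPFAKE_VULNERABLE:
        findings.append(f"'{factor}' can be spoofed by AI-generated audio/video")
    if not factors & PHISHING_RESISTANT:
        findings.append("no phishing-resistant factor (security key or certificate) present")
    return findings

if __name__ == "__main__":
    proposed = {"password", "sms_otp"}
    for finding in validate_mfa_policy(proposed):
        print("POLICY VIOLATION:", finding)
```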
AI-specific controls: While many controls mentioned are standard, some are tailored to AI risks, like monitoring for unusual AI query behaviors that might indicate attempts to extract sensitive data.
Covered Entities that use AI-enabled products or services, or allow personnel to use AI applications such as ChatGPT, should also consider monitoring for unusual query behaviors that might indicate an attempt to extract NPI and blocking queries from personnel that might expose NPI to a public AI product or system.
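In practice, that kind of control often looks like a DLP-style screen on outbound prompts. Below is a minimal Python sketch of the concept; the regex patterns and block-on-any-hit policy are assumptions for illustration, and a production control would use a full DLP engine and log every hit for the monitoring side of the guidance:

```python
import re

# Scan outbound prompts for NPI patterns before they reach a public AI
# service. Patterns below are simplified examples, not a complete NPI
# taxonomy.
NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "account_number": re.compile(r"\b(?:account|acct)\s*#?\s*\d{6,}\b", re.I),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_pattern_names) for an outbound AI query."""
    hits = [name for name, pattern in NPI_PATTERNS.items() if pattern.search(prompt)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt("Summarize the dispute for SSN 123-45-6789")
if not allowed:
    print(f"Blocked outbound AI query; possible NPI detected: {hits}")
```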
Regulatory expectations: This guidance sets expectations for how financial institutions should address AI risks within their cybersecurity frameworks.
Opportunity, not just risks
The letter concludes…
In addition to AI-related risks, organizations should explore the substantial cybersecurity benefits that can be gained by integrating AI into cybersecurity tools, controls, and strategies. AI’s ability to analyze vast amounts of data quickly and accurately is tremendously valuable for: automating routine repetitive tasks, such as reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats; efficiently identifying assets, vulnerabilities, and threats; responding quickly once a threat is detected; and expediting recovery of normal operations.
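To make the anomaly-detection benefit concrete, here is a short, self-contained Python sketch that trains an unsupervised IsolationForest on features derived from security logs and flags outliers for analyst review. The features and data are synthetic, invented purely for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic per-user log features: logins/hour, failed-login ratio, MB transferred
normal = rng.normal(loc=[5, 0.05, 20], scale=[2, 0.02, 5], size=(1000, 3))
suspicious = np.array([[40, 0.9, 500]])  # burst of failures plus a large transfer
events = np.vstack([normal, suspicious])

# Fit an unsupervised outlier detector; contamination sets the expected
# anomaly fraction and would be tuned against real alert volumes.
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)  # -1 = anomaly, 1 = normal

for idx in np.where(flags == -1)[0]:
    print(f"Anomalous event {idx}: {events[idx].round(2)}")
```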
The fact that a regulatory body is formalizing these expectations is significant: AI-related cybersecurity measures will likely be a focus of future audits and examinations of financial institutions.