Aaron Momin
Chief Information Security Officer, New York
Artificial Intelligence
It reads like science fiction: An employee joins a routine video call with his chief financial officer and several other recognizable colleagues to discuss urgent budget matters. Unbeknownst to the worker, every single person on the call is an AI deepfake. Worse still, the employee then wires $25 million to this group of impostors.
It’s easy to think, “How could this possibly happen?” But this is a real story. As GenAI and deepfake technologies mature and spread, it’s becoming more likely that your organization and your employees will be targeted by a scam like this one.

Defending against AI deepfakes and advanced phishing campaigns demands ongoing investment in both cybersecurity measures and employee education. To help, we’ve put together guidance that combines insights from leading industry bodies, such as OWASP, with the expertise of Synechron’s cybersecurity team:
Start by establishing operational protections and safeguards against sophisticated phishing attacks. Organizations should standardize on an enterprise set of Unified Communication Channels (UCC) and focus security investments on protecting those channels. When employees know which tools are sanctioned, they can more readily recognize attack attempts launched from untrusted, non-standard communication tools. For email systems, configure Domain-based Message Authentication, Reporting, and Conformance (DMARC), which lets receiving servers verify that messages genuinely originate from your domain and reduces the risk of domain spoofing. Multi-Factor Authentication (MFA) adds a further layer of security by requiring users to verify their identity through multiple methods. Finally, Secure Email Gateways (SEGs) can filter out malicious emails before they reach inboxes, while advanced filtering techniques and external-source labeling help users spot and avoid potential phishing attempts.
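To illustrate, a DMARC policy is published as a DNS TXT record on the sending domain. The domain, policy, and reporting address below are hypothetical examples, not a recommended production configuration:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

Here `p=quarantine` asks receiving servers to treat mail that fails authentication as suspicious, and `rua` specifies where aggregate authentication reports should be sent, giving the domain owner visibility into spoofing attempts.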
It’s not enough to have compliance policies and preventative safeguards; you also need to foster a culture of skepticism within your teams. Encourage users to question the authenticity of images and videos by looking for telltale signs of deepfake manipulation: odd depictions of hands, rough facial edges, inconsistent skin textures, blurred sections, unusual lighting or distortions, unnatural eye movements, lip movements that don’t sync with the audio, and requests arriving through atypical lines of communication. Routinely educate your users on common phishing and deepfake tactics so they know what to look out for. Challenge your employees to think about normative behavior, ensure they understand the typical channels of communication, and reinforce accepted practices and processes, using realistic scenarios and testing exercises, so they can more readily recognize anomalies and suspicious behavior.
Incident response plans are a critical tool for any organization, providing the playbook needed to navigate realistic cyber-risk scenarios. Like a football coach bringing a playbook onto the field, incident response plans serve two key functions. First, they define roles, responsibilities, and accountabilities so that everyone remains vigilant, knows their part in handling incidents, and is empowered to respond effectively. Second, they provide orchestrated ‘plays’ for what to do when a cyber incident or crisis event is triggered. These playbooks should be regularly tested through drills and tabletop exercises to verify the efficacy of response actions and improve your organization’s probability of recovery. Lastly, remember that incident response plans need to be continuously reviewed, updated, and expanded so that modern scenarios, including deepfake-enabled fraud, are well covered by existing protocols and playbooks.
Although GenAI can facilitate deepfake attacks, it can also play an important role in preventing them. By flagging anomalies, leveraging external threat intelligence, and identifying unusual activities, such as unauthorized access patterns and behaviors, AI can help pinpoint, in real time, potential threats and compromises that traditional monitoring tools and approaches may miss. Implementing these advanced detection and correlation methods can significantly enhance an organization's ability to identify and mitigate risks before they escalate.
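At its simplest, anomaly detection compares current activity against a historical baseline. The sketch below is illustrative only, with hypothetical data and thresholds; production systems use far richer behavioral models, but the core idea of flagging statistical outliers is the same:

```python
import statistics

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Return observed values that sit more than z_threshold
    standard deviations above the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return []
    return [x for x in observed if (x - mean) / stdev > z_threshold]

# Hypothetical hourly login counts for one account vs. today's activity
history = [4, 5, 6, 5, 4, 6, 5, 5]
today = [5, 6, 48]  # 48 logins in one hour is far outside the baseline
print(flag_anomalies(history, today))  # → [48]
```

In practice, the same pattern extends to access times, source locations, and data-transfer volumes, with flagged outliers routed to analysts or automated response workflows for triage.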
For the foreseeable future, successful cybersecurity strategies must acknowledge that we live in a world where deceptive disinformation increasingly coexists with the legitimate. The advent of convincing, sophisticated, and easy-to-generate deepfake technologies makes it essential to prepare your organization and employees, taking a continuous approach to educating and upskilling your workforce to recognize the characteristics of these threats. By emphasizing continuous, proactive adaptation and learning, you can help safeguard your organization’s security and keep it resilient against new and emerging GenAI-powered threats.