
Transform Your Security Approach with Cutting-Edge AI Frameworks

Aaron Momin

Chief Information Security Officer, New York


AI regulation is evolving rapidly, with countries taking markedly different approaches to governing the use of AI in cybersecurity, where it is deployed to support business productivity, reduce costs, and gain efficiencies. In the European Union, for example, the EU AI Act establishes strict requirements for AI systems, including those used in critical areas such as security. In the United States, by contrast, AI regulation is still in its early stages: industry-specific guidelines are slowly emerging, but there is no comprehensive federal framework.

Globally, compliance with standards and guidance such as the NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001, OWASP's AI security guidance, and the NIST Cybersecurity Framework can help organizations align their AI-driven systems with best practices for data and risk management.

ISO/IEC 42001 in particular is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system within an organization. For organizations developing or deploying AI-based services, it covers usage policies, a comprehensive risk assessment methodology, ethical and responsible AI practices, and guidance on AI-related cybersecurity concerns. Frameworks such as this give companies a practical starting point for integrating AI tools into their workflows correctly.

Many of these frameworks and guidelines recommend similar practices for keeping new AI systems compliant. Some of the most common are as follows:

Implement Robust AI Governance

Robust governance is one of the most essential elements of ensuring compliance around this fast-moving technology. A key starting point is to establish a clear framework that defines how AI systems are developed, deployed, and monitored within your organization, with a strong emphasis on transparency and risk management. Comprehensive policies on how data is collected, stored, and used by AI systems will help you comply with relevant privacy laws and industry standards, with the added benefit of proactively safeguarding your data against accidental internal leaks. A sketch of how such a policy can be expressed in code appears below.
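
As a loose illustration, data-handling policies like these can be enforced as code. Below is a minimal Python sketch of a data classification gate; the classification levels, use-case names, and the authorize helper are illustrative assumptions, not taken from any specific standard:

```python
from dataclasses import dataclass

# Which classifications each AI use case may consume, per a
# hypothetical internal policy table.
ALLOWED_CLASSIFICATIONS = {
    "external_llm": {"public"},
    "internal_model_training": {"public", "internal"},
    "incident_response_analytics": {"public", "internal", "confidential"},
}

@dataclass
class Dataset:
    name: str
    classification: str  # "public", "internal", "confidential", "restricted"

def authorize(dataset: Dataset, use_case: str) -> bool:
    """Return True only if policy permits this dataset for this AI use case."""
    allowed = ALLOWED_CLASSIFICATIONS.get(use_case, set())
    return dataset.classification in allowed

# Example: restricted data never reaches an external LLM.
hr_records = Dataset(name="hr_records", classification="restricted")
assert not authorize(hr_records, "external_llm")
```

Expressing the policy as a data table rather than scattered conditionals keeps the rules auditable and easy to update as regulations change.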

Secure Your LLM Infrastructure

Organizations are increasingly turning to external Large Language Models (LLMs) to enhance their workflows. By leveraging pre-trained models through API integrations, companies can quickly implement natural language processing for tasks such as threat intelligence analysis and incident response. The trade-off is that this approach demands careful attention to data privacy and security. Robust data sanitization, combined with filtering of both input prompts and model outputs, is crucial when using external LLMs to prevent inadvertent exposure of sensitive information. Companies must also enforce proper access controls when setting up these systems and commit to regular assessments of their vendors to keep these new integrations secure.
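
The sketch below shows what input and output filtering might look like in practice. It uses only Python's standard library; the redaction patterns and the send_to_llm() stub are illustrative assumptions rather than any specific vendor's API:

```python
import re

# Simple patterns for common sensitive tokens. Real deployments would pair
# these with a dedicated PII/secrets detection service, not regexes alone.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def sanitize(text: str) -> str:
    """Replace sensitive substrings with placeholder tags."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

def send_to_llm(prompt: str) -> str:
    """Stub standing in for the external vendor call (e.g., an HTTPS request)."""
    return f"model response to: {prompt}"

def query_llm(prompt: str) -> str:
    clean_prompt = sanitize(prompt)       # filter what leaves the organization
    response = send_to_llm(clean_prompt)  # hypothetical external call
    return sanitize(response)             # filter what comes back

print(query_llm("Summarize the ticket from jane.doe@example.com"))
```

Filtering both directions matters: prompts can leak data to the vendor, and responses can echo sensitive content back into downstream systems and logs.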

Penetration Testing

Penetration testing for AI models focuses on finding and fixing vulnerabilities unique to AI systems, spanning the machine learning models themselves, their datasets, and their decision-making algorithms. The process involves simulating attacks such as adversarial examples that manipulate input data, model inversion attacks that reconstruct training data, and model extraction (stealing) attacks that replicate a model's parameters or behavior. There are some great tools out there to help with this. Protect AI's Guardian helps secure AI models through the entire development lifecycle, and IBM's Adversarial Robustness Toolbox (ART) is another strong option, offering a framework that allows developers to test and strengthen machine learning models against these types of attacks, as sketched below.
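
For a concrete feel of what such a test looks like, here is a minimal sketch using ART's fast gradient method attack against a toy PyTorch model. The model, the random data, and the epsilon value are stand-in assumptions; a real engagement would target your production model and representative held-out samples:

```python
import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Toy classifier standing in for a production model (assumed: 10-class
# logits over 28x28 single-channel inputs).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

# Random stand-in inputs; a real test would use actual samples.
x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)

# FGSM nudges each input in the direction that most increases the loss.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

# Compare predictions on clean vs. adversarial inputs to gauge robustness.
clean_preds = classifier.predict(x_test).argmax(axis=1)
adv_preds = classifier.predict(x_adv).argmax(axis=1)
print(f"Predictions flipped by the attack: {(clean_preds != adv_preds).sum()}/8")
```

A high flip rate under small perturbations signals a brittle model and motivates defenses such as adversarial training.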

Regular Audits and Assessments

These evaluations are crucial for identifying vulnerabilities in your AI systems and ensuring they adapt to evolving threats. Regular audits provide insight into whether your tools are functioning as intended and help verify that they comply with organizational policies and privacy regulations such as the EU's GDPR and California's CCPA. They also reveal how well deployed AI models are keeping pace with new cyber threats, enabling you to make the adjustments needed to maintain resilience in a landscape that can change in a matter of weeks. One audit check that is easy to automate, sketched below, is testing whether a model's outputs have drifted from an audited baseline.
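
The sketch compares the score distribution a model produces today against the distribution recorded at the last audit, using a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 threshold are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_check(baseline_scores, current_scores, alpha=0.05):
    """Flag drift when the two score distributions differ significantly."""
    statistic, p_value = ks_2samp(baseline_scores, current_scores)
    return {"statistic": statistic, "p_value": p_value, "drifted": p_value < alpha}

# Stand-in data: scores captured at audit time vs. scores observed now.
rng = np.random.default_rng(0)
baseline = rng.normal(0.70, 0.10, size=1_000)
current = rng.normal(0.60, 0.15, size=1_000)  # distribution has shifted
print(drift_check(baseline, current))
```

A flagged drift result does not prove a problem by itself, but it tells auditors where to look first.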

Employee Training and Awareness

While AI systems can detect and respond to threats, the human element remains the most critical line of defense. Think about it: how often do you hear of coworkers following practices that run counter to compliance regulations? Regular training programs should educate employees on the risks associated with AI technology, including how cybercriminals may exploit AI vulnerabilities or use AI to craft more sophisticated attacks, such as more realistic phishing schemes and deepfake social engineering. Employees should also be trained in the ethical use of AI, data privacy regulations, and how to recognize unusual behavior in AI-driven processes that could indicate a breach. Building a culture of cybersecurity awareness keeps employees current on relevant policies and makes them active contributors to safeguarding the organization's AI systems and sensitive data.

Integrating AI into cybersecurity can enhance a company's security measures and help it stay ahead of ever-changing cyber threats. It also brings its own challenge: ensuring that these new AI tools and systems are properly maintained and used. Leaders who commit to comprehensive frameworks can instill confidence in AI deployment and ensure that these technologies provide robust protection while enhancing the overall cybersecurity strategy.

The Author

Aaron Momin

Chief Information Security Officer

Aaron is Synechron's Chief Information Security Officer, overseeing the execution of Synechron's worldwide information security strategy and program. He brings nearly three decades of experience in cyber risk, IT risk, information security, and business continuity planning. He most recently served as Chief Information Security Officer at Certinia, and has held senior positions at global consulting firms, including Managing Director at PwC and security management roles at Ernst & Young and Accenture.
