3 Emerging AI Security Practices for Proactive Cyber Defense
Implementing AI can lead to significant new risks for organizations. Two Toptal information security leaders explain how improving governance, loss prevention, and monitoring can help CISOs strengthen their defenses.
Authors
Michael is Toptal’s Information Security Practice Lead. He holds a bachelor’s degree in brain and cognitive sciences from the Massachusetts Institute of Technology and a master’s degree in high-tech crime investigations from George Washington University. Before joining Toptal, Michael served as executive director of the Advanced Cyber Security Center, and held other roles in the field, including consultant, principal investigator, advisor to government officials, and chief information security officer.
Sharon is a security and compliance professional with decades of experience leading and delivering security and audit projects on a global scale. She holds a bachelor’s degree in business administration from Eastern Michigan University and a master’s degree in high-tech crime investigations from George Washington University. Sharon has served as a vCISO, risk manager, program manager, and trusted advisor to companies of all sizes.
AI is popping up everywhere business leaders look. Employees are using it to write emails. Vendors are integrating it into their products (or claiming it was there all along). Bad actors are using it to carry out malicious attacks. Each new AI use case, even a seemingly innocuous one, significantly shifts the threat landscape and expands an organization’s attack surface. Executives can be forgiven for feeling overwhelmed by the disruption.
As leaders in Toptal’s Information Security practice, we’ve noted that one of the most significant challenges chief information security officers (CISOs) face is the rapid and boundaryless propagation of AI throughout the business operating environment. Three-quarters of knowledge workers use Gen AI at work—and of those users, 78% utilize tools that have not been vetted or sanctioned by their companies, according to a 2024 global report by Microsoft and LinkedIn. Whether your organization has actively chosen to leverage AI or not, the ease with which employees can incorporate AI into daily work means cybersecurity leaders must assume responsibility for defining and promoting proper use.
Although security executives often prefer the comfort of following well-established standards and guidelines, our experience working with clients on their AI security roadmaps suggests that CISOs will need to embrace discomfort and hack together reasonable defenses in the short term. In the absence of universally applicable protocols for addressing the security risks of AI, we have identified key emerging practices for mitigating those risks through our client engagements and collaboration with some of Toptal’s world-class information security and data science experts.
Employee Use of Publicly Available AI Tools
Before we delve into more complex AI security use cases and risks, let’s quickly address the most common scenario: At many companies across the globe, staff members are using Gen AI chatbots built on publicly available large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama to perform basic or repetitive tasks more efficiently. As younger generations who are already familiar with and interested in AI enter the workforce, usage will no doubt continue to increase.
Though some CISOs may attempt to block the use of these tools, history suggests that any attempted restrictions will ultimately fail, just as they did with the shadow usage of past innovations such as social media, personal devices, and hosted services. Instead, we agree with CISOs who believe organizations should establish guardrails that allow employees to experiment with AI safely, or even create a dedicated sandbox for that experimentation. Chasing internal scofflaws wastes resources, and resisting AI could lead to competitive disadvantages.
Gen AI usage by employees is generally transactional and direct, with the primary risk being data leakage through staffers sharing proprietary information in prompts. CISOs can mitigate this risk in the following ways:
- Define the business cases that constitute acceptable use of public models.
- Educate staff on how the developers of publicly available chatbots can reuse or otherwise release information disclosed in prompts.
- Consider categorizing the types of information employees can include in prompts to promote healthy exploration and experimentation. A minimal screening sketch follows this list.
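To make the categorization idea more concrete, here is a minimal sketch of a prompt-screening check that could run before a request ever reaches a public chatbot. The category names, patterns, and the screen_prompt helper are illustrative assumptions on our part, not a feature of any particular tool, and a production filter would need far more robust detection than simple regular expressions.

```python
import re

# Hypothetical prompt-screening sketch: maps data categories from an
# acceptable-use policy to simple detection patterns. A real deployment
# would lean on a proper DLP engine rather than regexes alone.
BLOCKED_PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|password|secret)\b\s*[:=]"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for a prompt bound for a public LLM."""
    hits = [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]
    return (len(hits) == 0, hits)

if __name__ == "__main__":
    allowed, hits = screen_prompt("Summarize this internal only roadmap for me.")
    print("allowed" if allowed else "blocked: " + ", ".join(hits))
```

Even a rough filter like this gives the security team one reviewable place to encode the acceptable-use categories it defines with the business.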
Companies that intend to integrate AI into externally facing applications, whether fine-tuned LLMs or proprietary models, will face substantially more complex monitoring challenges as usage extends beyond individuals to include integrated automation of business processes and application features.
Governance in the Face of Emerging AI Security Risks
Integrating publicly available LLMs into a company’s own software solutions is an increasingly popular practice. It allows organizations to take advantage of powerful AI capabilities without having to custom-build their own models—but it also greatly increases security risks.
For example, a business may create a better support chatbot by fine-tuning an LLM and training it on product documentation, help desk records, and other proprietary information. Data leakage risks include the potential disclosure of proprietary, protected, or otherwise sensitive information that the model owner could store and process indefinitely. In some cases, the model owner may also use that information (intentionally or inadvertently) in ways that could expose it to users outside of the organization.
Security leaders can address governance needs in several ways. For example:
- Implement a new change-control policy that accounts for the use of third-party data processors, and adds them to existing vendor assessment procedures.
- Review the LLM API license and other usage agreements against business risk alignment and allowances.
- Consider and address the risk of improper outputs from authorized model use, such as inadvertent access to sensitive data, hallucination, and model bias. These novel risks require innovative approaches for validating inputs and outputs, developed in collaboration between the data owners and the security team; a minimal validation sketch follows this list.
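One way such collaboration might take shape is a thin validation layer around the model call. The sketch below is only a rough illustration: the call_llm placeholder, the rejection messages, and the specific checks are assumptions for this example, and the real rules would come from the data owners and the approved business use case.

```python
import re

# Hypothetical validation wrapper around an LLM-backed support chatbot.
# call_llm() stands in for whatever API or fine-tuned endpoint the product uses.
SENSITIVE_OUTPUT = re.compile(r"(?i)\b(ssn|social security|salary|home address)\b")

def call_llm(prompt: str) -> str:
    # Placeholder for the real model invocation.
    return f"[model response to: {prompt[:40]}...]"

def validated_completion(prompt: str) -> str:
    # Input validation: reject prompts that fall outside the approved use case.
    if len(prompt) > 4000 or "ignore previous instructions" in prompt.lower():
        return "Request rejected by input policy."
    response = call_llm(prompt)
    # Output validation: screen for information the chatbot should never disclose.
    if SENSITIVE_OUTPUT.search(response):
        return "Response withheld pending review by the data owner."
    return response

print(validated_completion("How do I reset my account password?"))
```

The value of a wrapper like this lies less in the specific checks than in keeping validation logic in one place that the security team and data owners can evolve together.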
Companies that develop custom AI models—for example, hyperpersonalizing customer engagement by combining customer history, their product and services catalog, web analytics, and external marketing data to train a proprietary model—should also consider implementing additional controls.
From a security perspective, this use case is similar to other product development efforts, and the CISO can govern it accordingly through strong, direct engagement across the development life cycle. Two key points to keep in mind:
- Custom AI development activities tend to aggregate data from multiple sources. CISOs should consider the resulting models a raw amalgamation of proprietary information decoupled from any prior control context.
- To effectively govern the associated data leakage risks, the CISO should assess how privilege management will change across the data flow architecture and define new policies and procedures for enforcing data access control. A minimal sketch of that kind of control follows this list.
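One simple way to reason about that decoupling is to treat a trained model as inheriting the highest sensitivity of any of its training sources and to grant access accordingly. The classification labels, source names, and helper functions below are invented for illustration; actual levels and rules would come from your data governance program.

```python
# Hypothetical sketch: a trained model inherits the highest sensitivity of its
# training sources, because labels and row-level permissions do not survive training.
CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

TRAINING_SOURCES = {  # illustrative source-to-classification mapping
    "product_docs": "public",
    "help_desk_tickets": "confidential",
    "customer_history": "restricted",
}

def model_classification(sources: dict[str, str]) -> str:
    """Effective classification of a model trained on the given sources."""
    return max(sources.values(), key=CLASSIFICATION_ORDER.index)

def may_query(user_clearance: str, sources: dict[str, str]) -> bool:
    """Allow access only if the caller is cleared for the model's effective level."""
    needed = CLASSIFICATION_ORDER.index(model_classification(sources))
    return CLASSIFICATION_ORDER.index(user_clearance) >= needed

print(model_classification(TRAINING_SOURCES))   # restricted
print(may_query("internal", TRAINING_SOURCES))  # False
```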
Data Loss Prevention in the Age of AI
Data loss prevention in terms of data structure, labeling, and access controls is generally well understood and supported by mature data management practices and solutions. But those controls begin to break down when data is incorporated into AI model training and fine-tuning.
Even under the best circumstances, CISOs should expect that any included data will become unstructured, lose its labeling context, and become broadly available through any model interface regardless of predetermined permissions. Preexisting data-level technical controls will no longer be effective once the new model is trained.
When an organization incorporates internal data into model training, the security team must treat the result as an entirely new data source that sits outside legacy security conventions. The primary novel risk characteristics are aggregated data and model usage:
- Aggregated Data: Security professionals who work in government and national security environments are well informed about the intelligence risks associated with aggregated data. As the US-CERT noted in 2005, “aggregated data undergoes a constant transformation … yielding information and intelligence.” What’s more, aggregated data at one sensitivity level could lead to the discovery of more sensitive or classified information. Per the US Department of Commerce Office of Security, “The new material may aggregate, or bring together, pieces of information that are unclassified, or have one classification level, but when you present them together it either renders the new information classified or increases its classification level.” By working closely with data owners, your organization’s security team can determine the model-specific risk profile and devise protection requirements against improper disclosure.
- Model Usage: An organization’s CISO should expect to reimplement against model usage any access control regime applied at the source data level. That will likely shift the security control into a product development context, implemented within the model interface logic rather than at the system or database level. Defining the intended model use profile helps establish proper guardrails and prevents unauthorized expansion of model usage into other projects. A rough sketch of an interface-level guardrail follows this list.
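As a rough illustration of moving access control into the model interface, the sketch below gates each request against a declared use profile before anything reaches the model. The profile fields, role names, and the forward_to_model placeholder are assumptions for this example only.

```python
# Hypothetical model-interface guardrail: enforce the intended use profile
# (allowed roles and purposes) in application logic, since database-level
# permissions no longer apply once data is embedded in the model.
INTENDED_USE = {
    "purpose": "customer_support",
    "allowed_roles": {"support_agent", "support_manager"},
}

def forward_to_model(prompt: str) -> str:
    # Placeholder for the actual model invocation.
    return f"[answer to: {prompt[:40]}...]"

def gated_query(user_role: str, purpose: str, prompt: str) -> str:
    if user_role not in INTENDED_USE["allowed_roles"]:
        return "Denied: role is not authorized for this model."
    if purpose != INTENDED_USE["purpose"]:
        return "Denied: request falls outside the model's approved use profile."
    return forward_to_model(prompt)

print(gated_query("support_agent", "customer_support", "Where is my order?"))
print(gated_query("marketing_analyst", "lead_scoring", "Score this customer list."))
```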
Effective Monitoring Strategies for AI Implementation
Companies that stick with the internal use of off-the-shelf or fine-tuned public LLMs should be able to monitor usage with existing controls. There may be additional challenges if the models inadvertently expand access to sensitive data, especially by exposing company financial information or employee personal information beyond authorized users. But appropriately restricting model access based on the most sensitive data used to develop the model should keep risks at acceptable levels.
Organizations intending to integrate AI into externally facing applications—either fine-tuned LLMs or custom models—will face substantially more complex monitoring challenges. Implementing external AI use cases safely will require:
- New policies, procedures, and rule sets to extend secure application development to AI.
- New practices and techniques for monitoring outputs.
- Building output-oriented sensors and working with security information and event management (SIEM) and other security operations management vendors to develop AI-aware detection contexts. A minimal sensor sketch follows this list.
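An output-oriented sensor might start out as simply as the sketch below: inspect each model response and emit a structured event that an existing SIEM or log pipeline can ingest. The detection rules and event fields here are illustrative assumptions, not an existing SIEM integration.

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical output sensor: flag suspicious model responses and emit
# JSON events that a SIEM or log pipeline could consume.
DETECTIONS = {
    "possible_pii": re.compile(r"(?i)\b(ssn|passport|date of birth)\b"),
    "possible_secret": re.compile(r"(?i)\b(api[_-]?key|private key)\b"),
}

def emit_event(rule: str, response: str) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "ai-output-sensor",
        "rule": rule,
        "excerpt": response[:80],
    }
    print(json.dumps(event))  # in practice, ship this to the SIEM ingestion endpoint

def inspect_output(response: str) -> None:
    for rule, pattern in DETECTIONS.items():
        if pattern.search(response):
            emit_event(rule, response)

inspect_output("Your api_key is stored under the profile settings page.")
```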
Integrated AI models are best characterized as black boxes that may produce damaging results even under the best of circumstances. They represent a whole new class of inherited vulnerabilities, including intentional misuse and unpredictable data loss.
The Disruption Continues
CISOs should expect AI to continue to be an incredibly disruptive influence on their short- and long-term strategic plans. In our experience, CISOs who successfully mitigate AI security risks give employees space to experiment safely and leverage external security practitioners with specialized expertise to iterate security defenses and rapidly adapt to new discoveries. There are no well-established best practices to lean on, only procedures that have shown effectiveness within specific contexts. As early adopters at the leading edge of AI, organizations must ultimately identify effective practices for themselves until the emerging AI security landscape matures.
Have a question for Michael, Sharon, or their Information Security team? Get in touch.