Etay Maor is Chief Security Strategist for Cato Networks, a leader of advanced cloud-native cybersecurity technologies.
AI is no longer hype. It has taken over the day-to-day operations of most organizations. According to McKinsey research, 78% of surveyed organizations applied AI in at least one business process. As AI increasingly pervades internal processes, so too do threat actors. For example, GenAI tools amplify the scale and impact of polymorphic malware that can evade existing detection systems, and phishing campaigns using AI-led personalization are extremely convincing with a high success rate.
Our own threat research labs (CTRL) revealed five trends defining how bad actors will further exploit AI to advance their malicious intentions.
1. AI agents will become prime targets to breach access.
As autonomous AI agents become more popular, we’re starting to see vulnerabilities that could allow them to be exploited and weaponized. Agentic AI stands alone from all previous AI models, as it can autonomously make decisions as well as learn from and adapt to new scenarios to achieve specific business goals. An October 2024 Gartner Inc. report predicted that 33% of all enterprise software applications would incorporate agentic AI by 2028, up from less than 1% in 2024, and that 15% of routine work decisions would be made autonomously.
AI agents are often integrated with critical systems and operate with privileged permissions to gather or process sensitive information. Potential security risks arise when autonomous large language models (LLMs) are provided with unrestricted access and permissions to interact with other systems and data sources, calling attention to the dangers of excessive agency.
Compromising these AI agents can provide bad actors with direct access to high-risk system resources. Abusing the agencies of AI agents by contaminating their algorithms and training data through prompt injection attacks or data poisoning can allow adversaries to manipulate the results.
2. Threat actors will focus on designing prompts to evade security controls in AI modules.
Threat actors use prompt injection attacks to craft input provided to the AI model, thereby controlling the model’s output or behavior. Bad prompts deceive AI into incorrectly interpreting or answering user inputs, producing malicious results like evading security controls, exposing sensitive data or producing unsafe content.
Attackers can design prompts that insert malicious instructions through seemingly innocent prompts, manipulate the model to reset its understanding of its role or subtly influence the model’s output by feeding misleading information or context. As these attacks do not require direct access to the AI model itself or its training data, attackers can exploit these tools relatively easier.
3. AI-driven scams will become increasingly common.
As AI continues to permeate our personal and professional lives, we expect AI-based scams to become the most prominent type of cybercrime in the near future. A few ways in which threat actors are exploiting AI include:
• Adaptable malware that learns from existing threat detection systems and adapts in real time to launch attacks that security systems fail to detect.
• Scalable attacks that exploit AI’s capability to automate repetitive processes to launch large-scale attacks, such as DDoS and phishing campaigns.
• Synthetic identities that combine real and fake personal data of victims and appear legitimate enough to trick fraud detection systems.
• Highly personalized phishing campaigns that scan and analyze victims’ personal data available online and through breached data repositories to create convincing phishing emails with higher success rates.
• Vishing attacks that employ AI-powered voice synthesis and deepfakes to impersonate trusted entities like a company’s IT support to manipulate victims into revealing sensitive personal information or leaking corporate data.
4. AI lowers the bar for threat actors to proliferate.
AI-powered automation will give rise to the “zero-knowledge threat actor.” Threat actors will leverage AI tools and techniques to adapt their tactics dynamically based on gathered information about their targets, automating attacks without needing much prior knowledge or manual intervention.
5. Shadow AI will emerge to the forefront of security risks.
Shadow AI is using AI technologies beyond an organization’s IT-sanctioned governance. This includes employees using GenAI tools to look for quick productivity gains and conduct an endless number of routine tasks such as drafting emails or reviewing code. Shadow AI introduces the potential risk of violating data privacy laws and noncompliance with industry regulations.
Using an unapproved LLM can lead to operational risks, as the AI model trained outside of the organization’s policy framework may provide incorrect advice to the business due to inadequate training or unexpected hallucinations, resulting in process inefficiencies and disrupted workflows. This could lead to bad business decisions, efficiency loss and workflow disruptions.
How can organizations prevent AI-based threats?
Organizations can adopt some key mitigation strategies to help deal with the above AI security risks.
Protection From Excessive Agency
Organizations can adopt a zero-trust security model to ensure AI agents verify identity and authorization before executing actions. Restrict AI agents from accessing critical systems unless explicitly authorized using granular access controls. Enforce human review for high-risk AI activities.
Mitigating Adversarial AI Manipulation
Enforce stringent input validation using predefined rules that identify and block adversarial inputs. Enhance the resilience of the AI model by exposing it to data from previous adversarial attacks or simulated attacks. Avoid at-scale exploitation of AI threats by limiting the quantity of AI-generated actions within a fixed time period. Use AI-led detection tools to identify network and user anomalies, react to threats autonomously, and recognize security policy and control vulnerabilities.
Managing Shadow AI Risks
AI-powered discovery and visibility tools can automatically scan and detect unauthorized AI tools and applications within the system. Establish a formal AI use policy, and set clear employee expectations for using sensitive data with AI models. Train employees on AI risks and best practices, and promote a secure AI culture.
As AI systems become increasingly powerful and autonomous, organizations must adjust their security framework to match the advancing capabilities of agentic AI. The emergence of these fledgling AI-driven threats raises the need for a thorough examination of technologies such as SASE, ZTNA and MDR/XDR, which we expect to shape the future cybersecurity market.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?