Rupesh Chokshi is Senior Vice President and General Manager of Akamai‘s Application Security Portfolio.
Imagine this scenario. You’ve launched a shiny, new AI assistant to help serve your customers. A user goes to your website and makes some seemingly innocent requests to the assistant, which cheerfully delivers a response that includes some sensitive information. Now imagine this user isn’t a customer at all, but an attacker skillfully manipulating your AI to reveal that sensitive data—or, worse yet, tricking it to grant access to an internal system or database.
This isn’t hypothetical. With the rush to deploy applications and services based on generative AI (GenAI) and large language models (LLMs), some companies are leaving themselves open to this kind of attack. Cyber criminals are busily at work uncovering and exploiting LLM vulnerabilities to gain access to your critical systems and data.
Tricking AI Into Breaking The Rules
Case in point: Late last year, threat actors discovered a vulnerability in ChatGPT-Next-Web (Note: This is now known as NextChat and isn’t an OpenAI product) that allowed them to bypass access controls and gain deeper access to internal systems. Once inside, attackers could penetrate deeper into an organization’s infrastructure. This attack strategy can be scaled and automated to target thousands of environments, putting any unpatched system at risk.
This example highlights a key vulnerability of AI systems: their need to interact with multiple APIs to access data and services to do their job. These APIs are a ripe target for attackers.
The Unseen Risk
Compounding the risk is the rapid adoption of GenAI and LLMs by departments across the enterprise. This is driving a proliferation of APIs outside normal IT processes, creating a growing problem of shadow APIs that lack proper visibility, documentation or safeguards.
It doesn’t stop there. AI systems can now generate their own APIs dynamically, without human involvement. This takes the shadow API threat to a whole new level, creating potential vulnerabilities you may not even know about.
Why LLM APIs Are A Different Animal
To understand how LLM adoption is reshaping the risk landscape, it’s important to recognize that APIs serving GenAI and LLMs differ from “regular” APIs in significant ways:
• Purpose And Behavior: Traditional APIs are deterministic, producing predictable results. AI and LLM APIs are non-deterministic, with the ability to generate different outputs, making testing and defense more difficult.
• Data Usage: Traditional APIs are usually linked to specific transactions, business logic or integrations. AI and LLM APIs may have the ability to consume and generate sensitive data, creating potential privacy and security risks.
• Novel Security Risks: Traditional APIs are vulnerable to lapses in configuration, authentication and other well-understood issues. AI and LLM APIs present new avenues of attack, including prompt injection, model abuse and jailbreaks, and hallucination. Skilled attackers can exploit these vulnerabilities to gain access to sensitive systems and data.
• Governance: Traditional API security focuses on well-established strategies, such as rate limiting, data validation and authentication. AI and LLM APIs require additional safeguards and monitoring, including deeper traffic inspection, classification of content and more granular policy enforcement to govern prompts and response handling. Strategies like rate limiting aren’t enough because AI attacks can “fly under the radar” undetected.
Don’t assume a firewall alone will protect your LLM APIs. Relying on static allow lists to control access to systems and data is insufficient. And don’t assume that your DevOps teams will flag risks because they may not even be aware of them.
What You Can Do: Update Your API Security Strategy
To make sure your LLMs—and the applications they serve—are protected, it’s time to rethink your API security. The first step is being fully aware of the risks. The OWASP Top 10 LLM Applications 2025 is a valuable resource for understanding new vulnerabilities and mitigation strategies.
As a baseline, it’s important to make sure you’re covering the most important “blocking and tackling” in your LLM security strategy. That means:
• Full API discovery, tracking and testing so you know what’s out there in your environment at all times, including detecting and documenting dynamically generated APIs and API calls.
• Implementing security policies that support real-time assessment of new APIs, not just those already registered.
• Ensuring all APIs include proper access restrictions to prevent data leakage.
• Implementing microsegmentation to prevent lateral movement within your environment should an attacker gain access via an AI or LLM API.
Making The Business Case
Deploying GenAI and LLM applications offers exciting new opportunities for gaining a competitive advantage. But it also presents security challenges that have budgetary implications. How can you implement the security needed to protect against threats targeting LLMs without negating the economic advantages driving your AI strategy?
Security leaders need to bring these risks to the business early, before an incident does it for them. This is essential to gain the support and sponsorship needed to fund the security improvements required to safely implement your AI and LLM strategy.
It’s worth noting that reservations to invest in security usually evaporate quickly after an organization has experienced a breach. Highlighting the upside of avoiding a breach—and the associated financial and reputational costs—should be part of any corporate GenAI and LLM adoption strategy.
Making sure your security strategy is LLM-ready should be the first step to making your business LLM-ready. Because in today’s AI era, it only takes one missed API to become tomorrow’s headline.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?