Protecting Your Business When Working With Companies That Leverage AI

Protecting Your Business When Working With Companies That Leverage AI

Will Sweeney is the Founder and Managing Partner of Zaviant, a data security and privacy consulting firm based in Philadelphia.

Many organizations now use artificial intelligence (AI) tools to help streamline processes, solve problems and enhance decision-making. In fact, a survey from PwC found that nearly three-quarters of U.S. businesses currently use or plan to use both traditional and generative AI.

Whether your business uses AI yet or not, many of your vendors likely do​​—and unfortunately, there are risks involved. So, how do you protect your organization when your trusted partners utilize AI? What goes into developing an AI-aware third-party risk management (TPRM) program? Here are a few things to keep in mind.

Understand The Risks

For most businesses, the primary risk that comes from working with vendors who use AI systems involves handling corporate and other sensitive data. Inputting regulated or highly confidential information—such as health records, intellectual property, trade secrets or customer personal information—could result in model training processes scraping this data, potentially exposing it to unauthorized parties outside your organization.

Additionally, it’s crucial to understand that AI bias can manifest differently from what’s commonly understood. Sometimes AI models produce biased outputs based on correlations they’ve identified. For instance, a model might offer different interest rates or financial products based on characteristics like race or credit score, even when using factually correct data. These inadvertent discriminatory outputs could lead to serious legal and compliance issues and infringe on the privacy rights of individuals.

Vendors who don’t grasp these risks might mishandle sensitive information or deploy systems that produce discriminatory or damaging outcomes. Protecting your business from data loss and compliance issues means ensuring your contractors are appropriately educated about the tools they’re using.

Vet Your Third-Party Vendors (And The AI Systems They Use)

Before engaging vendors, be sure to conduct proper due diligence. Some third parties may not disclose their use of AI outright, so it’s essential to include specific requirements in contracts that outline acceptable usage. You should also confirm they will not comingle your data with that of other clients, and learn how they plan to maintain security standards to ensure compliance while working with your organization.

You should require transparency regarding the AI tools each vendor is utilizing. With hundreds of AI models available, it’s crucial to work with contractors who utilize secure systems that provide clear explanations and insights into how they develop datasets and generate conclusions.

Additionally, establish clear data usage agreements and consider anonymizing data before sharing it to reduce risks. Defining precise terms with third parties, outlining data restrictions and specifying the AI model’s role within your organization is also important. Ensure that the AI’s access to sensitive data is limited or restricted—especially information that, if mishandled, could harm your business or customers.

It’s All In The Contract

After thoroughly vetting third parties, the next layer of protection is your contract. Clearly outline who is responsible for any damages caused by AI systems, including an identification clause that specifies whether the AI developer or contractor would be liable, and under what conditions. It’s also wise to define scenarios where shared liability would apply.

Consider including specific language about data confidentiality, usage limitations and security protocols. Your contract should also explicitly prohibit using your proprietary data for training AI models without express permission and address intellectual property rights for any outputs generated by AI systems using your data. Finally, you should include exit provisions that detail handling your data upon contract termination, with verification requirements to ensure all relevant information has been properly removed from the vendor’s systems.

Make Maintenance A Requirement

Last but not least, you should require performance and accuracy guarantees for the AI models your vendors are using. Consider requesting the ability to conduct regular audits or testing, such as assessing for bias, or requiring vendors to run these audits and provide reports. You should also run red teaming exercises that test the AI systems with adversarial inputs designed to provoke biased outputs, probe for potential vulnerabilities (such as ways to bypass safety measures), attempt to “jailbreak” the system to make it violate its guidelines or intended use cases and identify blind spots or failure modes in the AI’s design.

Onboard New Vendors With Confidence

In today’s technology-driven world, protecting your organization requires vigilance and strategic planning. Understanding the potential vulnerabilities associated with working with third parties that leverage AI, thoroughly vetting them, establishing clear contractual protections and requiring ongoing maintenance and security protocols allows you to engage new vendors with greater peace of mind.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Leave a Reply

Your email address will not be published. Required fields are marked *