Shai Gabay is cofounder and CEO of Trustmi, a pioneer in business payments security.
Organizations invest heavily in technology, yet today’s most costly breaches are increasingly slipping through traditional defenses. The reason? Attackers target human psychology, now supercharged by AI.
Using tools like deepfakes, cybercriminals can impersonate high-ranking executives with alarming precision, manipulating trust to override judgment. These bad actors tap into human nature—fear, trust and authority—with striking effectiveness. In one case, a finance employee at a multinational firm paid $25 million to scammers after a deepfake video conference call with someone posing as the CFO.
What makes this threat so effective is how AI amplifies psychological tactics—gathering intelligence at scale and using it to exploit human trust with speed and precision.
The following are five examples of the psychological tactics at the heart of social engineering attacks that I’ve found most alarming, along with some of my best practices for addressing them.
1. Authority Exploitation
One of social engineers’ most potent tactics is using AI to impersonate someone with authority, often the CEO or CFO. In most organizations, there’s a natural deference to authority, so when the CEO or CFO makes a request, that communication gets attention.
Scam artists understand that employees are less likely to question directives from authority, especially if they are time-sensitive or confidential.
2. Pressure And Urgency
The “urgent action required” ploy is a cornerstone of social engineering, exploiting our natural tendency to prioritize immediate threats. Scammers fabricate time-sensitive scenarios, such as a critical deal on the brink of collapse, to compel targets to bypass crucial verification steps.
In high-pressure environments, employees are more likely to sidestep standard protocols. The scammer’s arsenal includes phrases like “don’t let this slip” or “the board is counting on you,” appealing to an individual’s sense of organizational duty.
3. Payment Diversions
Many AI-powered social engineering attacks involve redirecting payments or changing vendor banking details. Finance departments process numerous payments daily and rely on established workflows. Therefore, an email referencing a familiar vendor name or a nearly identical invoice template blends seamlessly into daily transactions.
Criminals exploit this reliance on routine, counting on just one slip to funnel funds. My company’s 2024 survey found that 50% of respondents said their companies “experienced business payment fraud resulting from human error” in the past year.
4. Information Gathering
Attackers do their homework—or rather, AI does it for them. Today’s cybercriminals use generative AI to sift through LinkedIn profiles, press releases, breached inboxes and other public data to build tailored, believable messages.
By referencing real details—like a colleague’s name, a recent project or your boss’s travel schedule—they gain instant credibility. The more specific the reference, the more genuine it feels. And when something feels familiar, we’re less likely to question it.
5. The Human Element
Even robust technical defenses can crumble if one employee is swayed by emotional manipulation that convinces them to break protocol, and the results can be devastating. Attackers seek to manipulate personal connections. No software can patch the innate drive to help a boss. That’s why training and straightforward process checks are critical.
These tactics work because they tap into our emotional instincts—urgency, loyalty, fear and trust. That’s what makes them so effective. But that insight also points to a solution: security strategies that directly address those human factors.
Confronting Emotional Triggers
Understanding these triggers is the foundation of effective security:
1. Engage in ongoing training. Conduct regular, interactive drills and quizzes highlighting new social engineering tactics to help employees recognize red flags in actual scenarios.
2. Foster a culture of verification. Encourage teams to question out-of-the-ordinary requests, no matter who appears to be sending them. For example, at my company, every new hire gets a message that looks like it’s from me—an email and a text—asking them to send money. Of course, it’s not real. It’s a simulation and part of how we train employees to recognize socially engineered fraud. I tell every new hire, “Even if it looks and sounds like me asking for something, if the request seems just a little off, verify it.” Trust is important, but verification is how we protect it.
3. Implement formal approval workflows. Make it a standard policy that all significant payments require at least two forms of verification.
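To make the dual-approval idea concrete, here is a minimal sketch of what such a rule can look like in code. This is an illustrative example only—the class, field names and threshold are assumptions for demonstration, not any real payment system’s API. The key property is that a large payment cannot be released until two *different* people have signed off, so a single manipulated employee cannot move the money alone.

```python
from dataclasses import dataclass, field

# Illustrative assumption: payments at or above this amount need two sign-offs.
APPROVAL_THRESHOLD = 10_000

@dataclass
class Payment:
    payment_id: str
    vendor: str
    amount: float
    approvers: set = field(default_factory=set)  # a set deduplicates approvers

    def approve(self, employee_id: str) -> None:
        self.approvers.add(employee_id)

    def can_release(self) -> bool:
        if self.amount < APPROVAL_THRESHOLD:
            return len(self.approvers) >= 1
        # Large payments: at least two distinct people must approve.
        return len(self.approvers) >= 2

pay = Payment("INV-1042", "Acme Corp", 25_000.0)
pay.approve("cfo_assistant")
print(pay.can_release())      # one approver is not enough for a large payment
pay.approve("cfo_assistant")  # the same person approving twice doesn't count
print(pay.can_release())
pay.approve("controller")
print(pay.can_release())      # two distinct approvers: payment can go out
```

Because approvers are stored in a set, a scammer who pressures one employee into clicking “approve” repeatedly still cannot satisfy the rule—the second approval must come from a second person.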
4. Make security a shared responsibility. Practitioners learn quickly: It doesn’t matter how good you are—it depends on how good the team is, especially when it comes to socially engineered attacks. Even with the best tools and strongest individual talent, all it takes is one person to open the door. And increasingly, that door is in finance. At my company, we’re seeing attackers zero in on payment processes, vendor workflows and finance teams who hold the keys to high-value transactions. That’s why security can’t live in a silo. It must integrate with the systems and processes that finance depends on, so that both teams have a shared vantage point. When employees feel they are active guardians of the organization—and when finance and security are aligned—they’re more likely to spot and stop social engineering attempts before they turn into financial loss.
5. Use technology that flags anomalies. Tools powered by behavioral AI can detect unusual sending patterns, suspicious login locations or sudden changes in transaction habits—especially in the context of financial workflows. At my company, we drink our own vintage of behavioral AI champagne. It flags vendor and payment anomalies that traditional safeguards—busy teams and basic bank account validations—often miss. By looking across the full procurement and payment process—not just email, but ERP and finance platforms—it’s like having a fraud analyst with perfect memory and zero fatigue. And in this social engineering landscape, that’s not a luxury; it’s a necessity.
Social engineering is a profoundly human phenomenon rooted in psychology and now amplified by AI. By understanding these triggers and building a proactive culture of verification, organizations can reduce the vulnerabilities attackers love to exploit. Technology plays a role, but scammers target our innate impulses. Recognize that truth and address it head-on; then you can be one step ahead in the fight against social engineering.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?