Pawan Anand is a global technology leader at Ascendion, where he drives generative AI and agentic AI innovation.
In just a few years, generative AI (GenAI) has evolved from a novel concept to a boardroom mandate. McKinsey recently found that 71% of global enterprises now use GenAI in at least one business function—more than double the share just two years earlier.
However, according to another McKinsey report, only about 11% of companies report achieving scale with these deployments. Why the stall?
In my work advising telecom, media and technology customers, I see the same pattern: Dazzling proofs of concept get bogged down when they confront security reviews, unclear ROI or ethical concerns. The four reality checks below can help senior executives convert experimentation into bankable earnings instead of adding GenAI to the corporate “innovation theater” archive.
A ‘value backlog’ beats a tech backlog.
Here’s a hard truth: Pilots that begin with “we should try GPT-4” rarely survive the CFO’s second question: How does this move the profit and loss (P&L) statement? Novelty without metrics quickly becomes discretionary spend and is eventually shut down.
To overcome this hurdle, flip the sequence. Start with money, not technology. Build a value backlog that ranks use cases by clear financial levers: revenue lift, faster time to market, lower product development costs and improved working-capital efficiency. Tie every idea to measurable KPIs—average handle time, upsell‑conversion rate, defect‑rework hours—before engineering begins.
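The ranking logic above can be made concrete. The sketch below is a minimal, hypothetical illustration of a value backlog as a data structure; every use case name, KPI and dollar figure is invented for the example, not drawn from any client engagement.

```python
from dataclasses import dataclass

# Hypothetical illustration: rank GenAI use cases by estimated net P&L impact.
# All names, KPIs and figures below are invented examples.

@dataclass
class UseCase:
    name: str
    kpi: str                     # the metric the pilot must move
    est_annual_value_usd: float  # estimated revenue lift or cost savings
    est_annual_cost_usd: float   # estimated run-rate cost (tokens, GPUs, storage)

    @property
    def net_value(self) -> float:
        return self.est_annual_value_usd - self.est_annual_cost_usd

backlog = [
    UseCase("Contact-center copilot", "average handle time", 1_200_000, 300_000),
    UseCase("Upsell recommender", "upsell-conversion rate", 800_000, 150_000),
    UseCase("Code-review assistant", "defect-rework hours", 400_000, 90_000),
]

# The value backlog is the list sorted by net financial impact, with
# anything below a funding threshold parked rather than built.
backlog.sort(key=lambda u: u.net_value, reverse=True)
funded = [u for u in backlog if u.net_value > 250_000]
for u in funded:
    print(f"{u.name}: moves {u.kpi}, net ${u.net_value:,.0f}/yr")
```

The point is not the tooling but the discipline: a use case enters the backlog only with a KPI and a defensible dollar estimate attached.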
A large enterprise that I worked with cut 47 proposed experiments to nine and unlocked extra budget because the business case was clear from day one. If a use case can’t earn a seat on your quarterly operating scorecard, it should be parked.
Your data guardians need product mindsets.
Compliance teams excel at guarding data but can unintentionally throttle innovation; product teams sprint fast yet may overlook policy nuance. The resulting tension drags release cycles and drives “shadow AI.”
That’s why organizations should establish a cross‑functional AI working group, or “AI product guild.” Pair every product manager with a data‑privacy or security lead; give the duo a shared objective and key result (OKR): “time‑to‑first‑model in secure production.” Embed risk specialists directly in agile squads—so they’re part of the build process, not stuck in downstream ticket queues.
I’ve seen a large enterprise organization accelerate product engineering cycles and reduce time to market significantly by embedding risk and compliance experts directly within agile teams. When guardians own velocity metrics, governance becomes an accelerator, not a barrier.
Master the unit economics.
Early pilots run on promotional credits, but production workloads do not. Token usage, GPU leases and retrieval‑augmented‑generation (RAG) storage fees compound quickly, often blindsiding finance teams that budget like traditional SaaS.
That’s why it’s crucial to treat every inference call as a metered resource. Here are a few ways to do that:
• Create per‑use‑case token budgets linked to customer or process value.
• Expose real‑time cost dashboards to product owners, down to pennies per prompt.
• Re‑architect prompts to minimize context size, cache deterministic outputs and offload non‑differentiating tasks to smaller open‑weight models.
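The first two practices can be sketched in a few lines. The example below is a minimal metering sketch, not a production billing system; the per-token prices, budget figure and token counts are illustrative assumptions, not any provider's actual rates.

```python
# Minimal sketch of treating each inference call as a metered resource:
# a per-use-case token budget with per-prompt cost visibility.
# Prices and budgets are illustrative assumptions only.

PRICE_PER_1K_TOKENS_USD = {"input": 0.003, "output": 0.006}  # assumed rates

class TokenBudget:
    def __init__(self, use_case: str, monthly_budget_usd: float):
        self.use_case = use_case
        self.monthly_budget_usd = monthly_budget_usd
        self.spent_usd = 0.0

    def record_call(self, input_tokens: int, output_tokens: int) -> float:
        """Meter one inference call; return its cost in dollars."""
        cost = (input_tokens / 1000) * PRICE_PER_1K_TOKENS_USD["input"] \
             + (output_tokens / 1000) * PRICE_PER_1K_TOKENS_USD["output"]
        self.spent_usd += cost
        return cost

    @property
    def remaining_usd(self) -> float:
        return self.monthly_budget_usd - self.spent_usd

budget = TokenBudget("contact-center copilot", monthly_budget_usd=5000.0)
cost = budget.record_call(input_tokens=1200, output_tokens=400)
print(f"cost per prompt: ${cost:.4f}, remaining: ${budget.remaining_usd:,.2f}")
```

Feeding these per-prompt numbers into a dashboard is what turns GenAI spend from a surprise invoice into a controllable utility.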
At another large enterprise, the CXOs’ confidence—and willingness to fund new use cases—grew once GenAI spend looked like a controllable utility instead of an unpredictable experiment. For many leaders, the ability to determine ROI and plan for scale is the key to success.
Ethics cannot be bolted on.
Retrospective ethics reviews add schedule risk and invite rework. Worse, they signal to regulators that fairness is an afterthought.
Before deploying AI, build guardrail kits—pre‑approved policy components your teams can drop into any GenAI workflow. Think reusable content filters, red‑team prompt libraries, audit‑ready logging, automatic watermarking and opt‑out mechanisms for user data. Codify these kits as APIs so developers can integrate compliance directly into workflows—“compliance as code”—instead of reinventing safeguards for every project.
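The "compliance as code" idea can be illustrated with a small sketch: a reusable guardrail that any squad wraps around a model call, combining a pre-approved content filter with audit-ready logging. The PII patterns, logger name and stand-in model function are assumptions for illustration, not a complete policy.

```python
import logging
import re

# Hedged sketch of a guardrail kit component: filter input, call the model,
# log for audit, filter output. Patterns and names are illustrative only.

audit_log = logging.getLogger("genai.audit")

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like identifier
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def redact(text: str) -> str:
    """Pre-approved content filter: mask PII before it reaches the model."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def guarded_call(model_fn, prompt: str) -> str:
    """Drop-in guardrail: redact input, call the model, log, redact output."""
    safe_prompt = redact(prompt)
    response = model_fn(safe_prompt)
    audit_log.info("prompt=%r response=%r", safe_prompt, response)
    return redact(response)

# Usage with a stand-in model function:
echo_model = lambda p: f"Received: {p}"
print(guarded_call(echo_model, "Reach me at jane@example.com"))
```

Because the guardrail is an ordinary function, each squad integrates the same trusted safeguards instead of reinventing them per project.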
In my experience, enterprises that roll out guardrail kits see model reuse grow and legal escalations plunge because each squad starts with the same trusted foundation. Proactive safeguards buy goodwill—both with customers and with regulators drafting the next wave of AI rules.
Conclusion
Generative AI is already redrawing competitive maps, but only organizations that pass these four reality checks will capture the upside. Start with a value‑driven backlog that satisfies the CXOs, fuse product velocity with governance discipline, master the new unit economics and embed ethics at design time.
Leaders who operationalize these principles graduate from pilot purgatory to production advantage—joining the rare 11% already converting GenAI experimentation into sustainable returns while everyone else is still chasing demos and proofs of concept.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.