GenAI adoption outpaces governance, Ernst & Young finds
Generative AI (genAI) adoption is outpacing corporate governance of the technology: 75% of companies use genAI, but only a third have responsible AI controls in place, according to consulting firm Ernst & Young (EY).
Though executives see the technology’s potential, about half admit their governance frameworks lag behind current and future AI needs. Still, 50% are making major investments to close those gaps, EY’s “pulse survey” showed.
The survey of 975 C-level executives across 21 countries included CEOs, CIOs, CTOs, CFOs, chief human resources officers, chief marketing officers and chief risk officers. Most C-suite leaders plan to adopt emerging genAI technologies within a year, but risk awareness lags, EY said. For example, 76% plan to use agentic AI, yet only 56% understand the risks; 88% use synthetic data, but just 55% know the risks, the consultancy found.
“Consumer concerns about AI responsibility impact brand trust, placing CEOs at the forefront of these discussions,” said Raj Sharma, EY Global Managing Partner for growth and innovation. “Executives must address these issues by developing responsible strategies to mitigate AI risks and being transparent about their organization’s use and protection of AI.”
While CEOs express more concern than their C-suite colleagues, overall awareness of genAI risks remains low; on average, C-suite executives are only half as concerned as consumers about adherence to responsible AI principles, EY said. Only 18% of CEOs say their organizations have strong “fairness” controls, and just 14% believe their systems comply with regulations.
Open AI models can unintentionally reinforce bias and raise regulatory risks, while poor data management threatens privacy. Strong governance is essential as AI regulations evolve, according to Joe Depa, EY’s global chief innovation officer. “Without governance frameworks, organizations will encounter significant challenges including bias, security vulnerabilities and regulatory non-compliance,” Depa said. “These threats not only affect their overall AI systems, but they also have serious ramifications across their whole business.”
Of the leaders surveyed by EY, 72% said AI has been “integrated and scaled” across nearly all initiatives, and 99% are working toward that goal, yet only a third follow what EY calls a “responsible framework.”
“While on an individual basis, most firms have responsible AI principles in place, on average, organizations only have strong controls in three out of nine facets, which includes accountability, compliance and security,” EY said.
While 63% of executives believe they’re aligned with public views, consumers are more than twice as worried about issues such as accountability (58% vs. 23%) and policy compliance (52% vs. 23%), the survey found. Though both executives and consumers see value in genAI tools for routine and technical tasks, the disconnect on key concerns highlights a serious challenge for leaders, according to EY.
Responsible AI requires clear governance, defined roles, and core principles like accountability, transparency, and data protection — supported by human oversight at every stage, Depa said.
Training is also key. More than 300,000 EY employees have completed foundational AI training, for example, and 100,000 are advancing further through specialized programs. To succeed, organizations must guide and encourage safe AI use, with clear guardrails and room to experiment.
“However, it’s not just about saying how and why AI can be used, businesses should encourage — and even incentivize — their people to experiment with AI in a safe environment through setting transparent guardrails,” Depa said.