How two companies are moving AI prototypes to production
With many AI projects failing, there’s no one-size-fits-all formula for advancing AI proofs of concept to real-world use in the corporate world. But two companies, Ernst & Young (EY) and Lumen, have had success — though they’ve tackled the issue in dramatically different ways.
EY, which operates in the regulated world of finance and tax, has embraced what it sees as a measured, responsible approach, managing the risks that come with rolling out new technology. Lumen has been more aggressive, working to create an AI culture by giving all employees AI tools from day one.
“There’s become a bifurcation [in approaches]…, some experimentation is innovation theater…, but you’re now starting to get to tangible use cases,” said Joe Depa, global chief innovation officer at EY.
At EY, responsible AI projects with risk management
EY, the global tax and advisory firm, develops portable frameworks to help clients navigate AI adoption. The company has 30 million documented processes and 41,000 agents in production internally, and draws on that experience to steer clients to success.
With agentic AI becoming ubiquitous, even more foundational technologies are on the way that will further change enterprise IT operations. “The speed of technology evolution is only getting faster,” Depa said. “We’re moving from generative AI to agentic AI to physical AI. We’ve got quantum right behind it.”
Organizations now find themselves implementing new AI processes while replacing legacy infrastructure “that you still haven’t caught up with the last technology life cycle,” Depa said.
A critical part of success with AI is ensuring a solid data foundation, he said — otherwise, prototypes will likely fail before getting off the ground. (An EY client survey in late 2024 found that 83% of organizations at the time lacked the proper data foundation to take advantage of AI.)
“Whether we’re talking about generative AI or physical AI or quantum, your underlying data set is…a lifeblood in some cases, but also an inhibitor,” Depa said. He argued that governance and responsible AI frameworks are what make scaled deployment possible.
“What we found is that clients that have implemented responsible AI frameworks…into their workflows and processes and the way they train employees” reduced their compliance risk, Depa said. “But then they also saw greater growth and value out of AI.”
Responsible AI guardrails are important because they allow teams to experiment more freely. “They now feel comfortable experimenting in a safe sandbox,” Depa explained.
When clients struggle with AI rollouts, Depa asks about their training approach. “I’ve never heard any client say they’ve over-invested in training still,” he said, adding that a successful deployment often means abandoning traditional training methods.
“You have to train employees at the point of their application of AI solutions, so they truly learn on the spot,” Depa said.
He pointed to robotic surgery as an example. The technology can perform surgeries “at or better than human surgery” with laser-like precision, helping address physician shortages and improving health outcomes.
“But if I can’t get the hospitals, the doctors, to adopt this new technology, it doesn’t really matter,” Depa said. “It’s less of a technology challenge, more of a change management, people-process challenge.”
At Lumen, ‘culture eats strategy’
Lumen, which is expanding its network backbone to meet AI demands, has made AI adoption a board-level strategic commitment. The company uses what Sean Alexander, senior vice president of connected ecosystems, calls a “tops down, bottoms-up” approach.
“I’m a big believer that culture eats strategy for breakfast, and that’s even more important in the AI space,” Alexander said.
CEO Kate Johnson uses AI tools daily, and that interest in the technology flows down through the workforce, with new employees getting AI tools on day one.
“We turn on Copilot Studio and Copilot Enterprise for everybody,” Alexander said. “For onboarding new employees, this is taking the traditional six months to realize your potential down to about four months.”
Alexander is also developing a “Copilot Studio in a day” program where teams spend half a day in training, then move into “hacking” to build confidence.
“We’ve installed a governance model focused on responsible adoption of AI that encourages a maker culture in terms of taking agency and solving problems, but making sure we’re starting off with a specific measurable metric that we want to move and then working back from that,” he said.
One sales leader records his weekly one-on-one conversations with direct reports, then feeds those transcripts into a large language model (LLM) agent he built with Copilot Studio. This allows him to identify “specific points of friction, areas of opportunity” and “drift in strategic planning,” Alexander said.
He offered a number of examples involving AI in production. The company, for example, built a migration buddy agent to help customers move from legacy products to strategic portfolios. The agent performs customer lookups, product validation, offer validation, compliance checks, and contract reviews.
“There’s a human in the middle taking a look at it, but [it] provides output to the sales agent, which is significantly reducing time involved in increasing responsiveness and customer satisfaction,” Alexander said.
“We put teams together and identify a specific problem, then both business leaders and technical leaders build up the agent and test before deployment,” he said.
Testing follows a careful rollout process with groups of about a dozen customers. “There’s a lot of A/B testing and controlled rollout to ensure we’re meeting the quality bar,” Alexander said.
For customer service, time-to-resolution is Lumen’s most important metric for network outages. One of Alexander’s peers converted what started as a “hack” into an LLM-based feature called “Ask Greg.”
The system solves network issues by reasoning over problems and providing resolution steps, pulling health monitoring, telemetry, and geospatial data from dozens of systems.
“We have about four million customer service requests per year,” Alexander said. “Our estimate is that this hack, which started as a pilot, is saving us about $10 million in cost per year.”
Lumen also takes advantage of a knowledge graph based on Microsoft 365 data. The company organized its SharePoint data by department and security level. Copilot can augment conversations with understanding of Lumen’s products, services, and operations in near real-time.
“We’re changing the company. We’re transforming it daily,” Alexander said.