AI agents make mistakes; Rubrik has figured out a way to reverse them


Agentic AI has the potential to transform (or even fully take over) workflows and fundamentally change the way we work. But although the use of the technology is skyrocketing, it is still immature; AI agents can cut corners, struggle with multi-step tasks, become disoriented, lie, and attempt to cover their tracks when they mess up.

Data management and security vendor Rubrik says it can now undo AI’s mistakes.

The new Agent Rewind tool in Rubrik Security Cloud gives users the ability to pinpoint the exact moment an AI agent tripped up, then roll back its actions to a set point in time. Rewind is powered by technology from fine-tuning company Predibase, which Rubrik acquired this year.

“When organizations invest in AI, they often overlook the potential mistakes AI agents can make,” Rubrik’s chief product officer (CPO) and head of AI, Anneka Gupta, told Computerworld. “Agentic AI introduces the concept of ‘non-human error,’ highlighting the need for organizations to implement solutions that can address potentially serious errors that can lead to business downtime.”

Correcting AI when it veers off course

Agent Rewind, which will be generally available in the next few months, is designed to integrate with a variety of platforms, APIs, and agent builders, including Salesforce’s Agentforce, Microsoft Copilot Studio, and Amazon Bedrock Agents, as well as custom AI agents.

The platform provides what Rubrik calls “context-enriched visibility,” mapping an agent’s behavior, tool use, and impact. Each action is connected back to its root cause, whether it was a prompt, plan, or tool. The feature is combined with Rubrik Security Cloud to “rewind what changed,” including alterations to files, databases, configurations, or repositories, Gupta explained.

“This capability allows for precise recovery if something goes wrong,” she said.

The user interface (UI) features dashboards and agent maps where users can visualize agents in their environments, categorized as high, medium, or low risk. In a demo provided by the company, an interactive dashboard lists active agents, displaying those at highest risk, their high-impact actions, and rewind stats. Clicking on a specific agent reveals its autonomous actions — for example, showing that the agent updated a resolution-date field type, deleted 3,500 duplicate tickets, executed “DROP TABLE customer_temp_orders,” and cleared finance staging test data.

Going a level deeper, the dashboard provides a summary and ‘rewind plan.’ From there, a user can open a ticket to initiate recovery and select a recovery point for deleted data (either the latest good snapshot or a previous snapshot), then proceed with the recovery workflow.

“Agent Rewind makes AI actions transparent and auditable, creating an audit trail and immutable snapshots that enable safe rollbacks,” said Gupta.
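Rubrik has not published the internals of Agent Rewind, but the pattern Gupta describes — an audit trail of agent actions paired with immutable snapshots that enable rollback — can be illustrated with a minimal sketch. Everything below (the `RewindableStore` class, its method names, and the toy ticket data) is hypothetical and assumed for illustration only, not Rubrik’s actual API:

```python
import copy
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Snapshot:
    """Immutable point-in-time copy of the store's state."""
    taken_at: datetime
    state: dict


class RewindableStore:
    """Toy data store illustrating the audit-trail + snapshot pattern:
    every agent action is logged, and a snapshot is captured *before*
    the action runs, so any action can later be undone."""

    def __init__(self, state=None):
        self.state = dict(state or {})
        self.audit_log = []   # (timestamp, agent_id, description)
        self.snapshots = []   # snapshots[i] = state before action i

    def apply_action(self, agent_id, description, mutate):
        now = datetime.now(timezone.utc)
        # Capture an immutable snapshot before the change is applied.
        self.snapshots.append(Snapshot(now, copy.deepcopy(self.state)))
        mutate(self.state)
        self.audit_log.append((now, agent_id, description))

    def rewind(self, action_index):
        """Roll back to the state just before action `action_index` ran.
        The audit log is kept intact so the history stays auditable."""
        self.state = copy.deepcopy(self.snapshots[action_index].state)
```

For instance, after an agent wrongly clears a ticket queue, `rewind(0)` restores the state captured before that action, while the audit log still records what the agent did and when. A production system would of course persist snapshots out-of-band and scope them to the affected files, tables, or configurations rather than copying whole state in memory.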

Analysts and early users call it a novel tool, with BioIVT CISO Chad Pallett saying it is “the answer I’ve been waiting for” in a market requiring true observability and remediation.

And Johnny Yu, research manager at IDC, told Computerworld, “Agent Rewind is the first offering I’m aware of from any vendor that closely links visibility of AI agent actions (through Predibase) with the ability to undo those actions (through Rubrik Security Cloud).”

Enterprises lack ‘safety nets’

Traditionally, it’s been difficult to undo agent mistakes because of the “autonomous and unpredictable” nature of their actions, said Gupta.

“Unlike a chatbot that simply retrieves information, an agent can perform work on behalf of an individual or organization, so when it makes a mistake, these actions have real consequences that can quickly cause damage,” she said. These consequences can include technical malfunctions, legal challenges, or even “catastrophic events” such as the deletion of entire production databases.

Until now, when mistakes have occurred, enterprises’ recourse lay in activating data protection tools, Gupta explained. This meant reverting to an earlier state via snapshot, or recovering and reconstructing the earlier state from backup copies of data.

“Current observability tools can show what happened and provide visibility into errors, but they do not provide information on why it happened or how to reverse high-risk actions,” said Gupta. They have “trouble pinpointing the exact moment when an AI agent made a mistake, which delays and complicates recovery.”

IDC’s Yu noted that part of the problem is how young the technology is, and how fast enterprises are moving. Organizations are “pressured to bring the technology to production as soon as possible, with little regard for implementing support systems and safety nets for when things go wrong,” he said.

The same was true at the advent of cloud and containers: a large number of organizations repatriated their newly migrated applications within the first year after running into unexpected costs, added complexity, or incompatibility with data security tools, he pointed out.

“Organizations don’t want to risk losing data, overexposing it, or having it stolen by malicious actors, and it’s better to put those safety nets in place first rather than after an incident,” said Yu.

The benefits of Agent Rewind lie in its ability to capture and fix AI agent mistakes accurately and at scale, he said. This means the platform’s usefulness is proportional to how costly an AI agent’s mistakes could be. Enterprises looking at the tool should consider where they are on their AI journey; if they’re still training and testing, and not deploying agentic AI in critical production environments, the benefits are diminished.

On the other hand, “any organization that aspires to someday implement AI to the point where an AI agent making a bad decision will have a significant impact on the business will want to consider Agent Rewind,” said Yu.