Microsoft adds Claude to Copilot, but cross-cloud AI could raise new governance challenges


Microsoft has broadened the set of AI foundation models inside its Microsoft 365 Copilot suite by adding Anthropic’s Claude Sonnet 4 and Claude Opus 4.1 alongside OpenAI’s GPT family. With this update, users can switch between OpenAI and Anthropic models in the Researcher agent or when building agents in Microsoft Copilot Studio.

“Copilot will continue to be powered by OpenAI’s latest models, and now our customers will have the flexibility to use Anthropic models too — starting in Researcher or when building agents in Microsoft Copilot Studio,” Charles Lamanna, president, Business & Industry Copilot, said in a company blog post. “Claude in Researcher is rolling out today through the Frontier Program to Microsoft 365 Copilot-licensed customers who opt in. And to build agents, users can also opt in to try Claude in Copilot Studio.”

Researcher and Copilot Studio gain Anthropic support

Microsoft described Researcher as a first-of-its-kind reasoning agent that can now be powered by either OpenAI’s deep reasoning models or Anthropic’s Claude Opus 4.1.

Researcher is designed to help users build detailed go-to-market strategies, analyze emerging product trends, or create comprehensive quarterly reports. It can tackle complex, multistep research by reasoning across the web, trusted third-party data, and an organization’s internal work content across emails, chats, meetings, and files.

In Copilot Studio, Claude Sonnet 4 and Claude Opus 4.1 enable users to create and customize enterprise-grade agents. Enterprises can build, orchestrate, and manage agents powered by Anthropic models for deep reasoning, workflow automation, and flexible agentic tasks, the company said. With multi-agent systems and prompt tools in Copilot Studio, users will have the option to mix models from Anthropic, OpenAI, and others in the Azure AI Model Catalog for specialized tasks.

Microsoft is presenting Claude not as a replacement for GPT models, but as a complementary option.

“In Microsoft’s pilots, Claude produced more polished decks and financial models; GPT delivered speed and fluency in drafting. Claude Opus now anchors Copilot’s Researcher agent for long-context reasoning, while GPT models remain the default generator,” said Sanchit Vir Gogia, CEO and chief analyst at Greyhound Research.

Instead of asking which model is better, enterprises should focus on which model is better for which kinds of workloads, Gogia said.

Gogia highlighted visible trade-offs, too. Claude is slower and can be pricier, but wins trust. On the other hand, GPT models are faster but more cavalier with sourcing. CIOs should design rubrics that match workloads to the model best suited for the job, he said.
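That kind of rubric can be captured in something as simple as a lookup table. The sketch below is purely illustrative, assuming hypothetical model tags and a pick_model helper; none of it is a Copilot or Copilot Studio API.

```python
# Illustrative only: match workload types to the model family best suited to them.
# Model tags and pick_model are hypothetical placeholders, not real Copilot APIs.

WORKLOAD_RUBRIC = {
    "financial_model": "claude-opus-4.1",      # slower and pricier, but favored for polish and sourcing
    "deck_generation": "claude-sonnet-4",
    "quick_draft": "gpt-default",              # faster, fluent first drafts
    "long_context_research": "claude-opus-4.1",
}

def pick_model(workload_type: str) -> str:
    """Return the model tag for a workload, defaulting to the fast drafting model."""
    return WORKLOAD_RUBRIC.get(workload_type, "gpt-default")

if __name__ == "__main__":
    print(pick_model("financial_model"))  # claude-opus-4.1
    print(pick_model("status_update"))    # gpt-default
```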

Redundancy as resilience

For years, enterprises equated Copilot with OpenAI, creating an unwanted dependency. The September ChatGPT outage left users without access to GPT models while Copilot and Claude continued to work; a similar incident occurred in June last year. These outages helped enterprises understand the risk of relying on a single model and showed how important resilience is in AI deployments.

The introduction of Anthropic models alongside OpenAI’s highlights Microsoft’s shift toward a multi-model strategy. It also provides redundancy: if one system goes down, another can take over.
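At its simplest, that kind of redundancy is ordered retry logic. The sketch below assumes a hypothetical call_model function standing in for whatever provider SDK an organization actually uses; it is not a Microsoft or Anthropic API.

```python
# Minimal failover sketch: try the primary model, fall back to an alternate if it is unavailable.
# call_model is a hypothetical stand-in for a real provider SDK call.

class ModelUnavailable(Exception):
    """Raised when a model endpoint cannot be reached."""

def call_model(model: str, prompt: str) -> str:
    # Placeholder: a real implementation would call the provider's SDK here.
    raise ModelUnavailable(f"{model} is down")

def ask_with_fallback(prompt: str, models=("gpt-default", "claude-sonnet-4")) -> str:
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except ModelUnavailable as err:
            last_error = err  # record the outage and try the next provider
    raise RuntimeError(f"all models failed, last error: {last_error}")
```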

“As Microsoft 365 Copilot and ChatGPT Enterprise emerge as the preferred AI assistants for organizations deploying enterprise AI, Microsoft is responding to market dynamics by expanding its options and strengthening its position,” said Max Goss, senior director analyst at Gartner.

Goss added that Microsoft’s move to add Anthropic’s AI models to Microsoft 365 Copilot signals a strategic effort to diversify its AI partnerships beyond OpenAI, likely driven by growing competition between the vendors and Microsoft’s desire to avoid full reliance on OpenAI models. It is also a recognition that no single AI model (or family of models) will be sufficient for all of an enterprise’s needs.

Cross-cloud complications

Unlike OpenAI’s GPT models, which run on Azure, Anthropic’s Claude runs on AWS. Microsoft has warned customers that Anthropic models are hosted outside Microsoft-managed environments and are subject to Anthropic’s Terms of Service. Every time Claude is used, requests cross cloud boundaries, bringing governance challenges, new egress charges, and added latency.

“Deterministic routing is the issue that matters. Enterprises must catalogue where models are used, enforce guardrails for Graph data, and bind every request to a user, region, and model tag. Cross-cloud traffic exposes weak seams — DNS, firewalls, CASB,” added Gogia.

He added that enterprises should not assume multi-model is plug-and-play, but should treat it as multi-cloud. They should expect latency bumps when traffic leaves Azure and prepare for compliance teams to question why it did. Best practice is to pin Anthropic usage to the nearest AWS region, cache repeated context to save cycles, and pre-clear firewall rules before users ever click Try Claude. Risk officers will demand proof that Graph data remains bounded, so CIOs should build logging and monitoring before adoption, not after the first escalation.
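One way to make that binding concrete is to attach a small audit record to every request before it leaves Azure. The sketch below is a generic illustration, assuming hypothetical field names and a route_request helper; it is not part of Copilot, Microsoft Graph, or any vendor SDK.

```python
# Illustrative audit-tagging sketch: bind each request to a user, region, and model tag
# and log it before routing. Field names and route_request are hypothetical.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-router")

def route_request(user_id: str, region: str, model: str, prompt: str) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "region": region,                              # e.g., pin Anthropic traffic to the nearest AWS region
        "model": model,
        "crosses_cloud": model.startswith("claude"),   # flag traffic that leaves Azure
    }
    log.info("routing request: %s", json.dumps(record))
    # A real router would enforce guardrails and then forward the prompt to the chosen provider here.
    return record

if __name__ == "__main__":
    route_request("alice@example.com", "eu-west-1", "claude-opus-4.1", "Summarize Q3 results")
```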