As part of our commitment to secure and responsible use of AI tools across the university, we want to highlight a recent advisory from Jisc. Jisc provides us with expert support on digital infrastructure and cybersecurity, and their latest guidance is clear: do not enable Anthropic's Claude models in Microsoft Copilot 365.
Microsoft has introduced Anthropic's Claude models into Copilot 365, specifically within the Researcher agent and Copilot Studio. While this may seem like a welcome expansion of AI capabilities, enabling Anthropic comes with significant data protection and compliance risks.
What You Lose When You Enable Anthropic
According to Jisc, turning on Anthropic means your data:
- Leaves Microsoft's secure environment
- Is no longer covered by Microsoft's audit and compliance controls
- Does not benefit from Microsoft's data residency guarantees
- Is excluded from Microsoft's Customer Copyright Commitment
- Is not protected by Microsoft's service level agreements (SLAs)
Instead, your data is governed by Anthropic's own commercial terms and data processing agreements, which Jisc has not yet fully analysed for risk.
Why This Matters
Microsoft has built trust in Copilot by ensuring data is processed securely within its enterprise environment. Enabling Anthropic undermines that trust and introduces ambiguity into our messaging around AI safety. As Jisc puts it:
"Copilot 365 can still be safe and secure, as long as you do not enable the Anthropic option."
This is especially critical for us in higher education, where data governance, compliance with the ICO's 72-hour breach notification rule, and protection of personal data are non-negotiable.