
MCP—the Model Context Protocol introduced by Anthropic in November 2024—is an open standard for connecting AI assistants to data sources and development environments. It’s built for a future where every AI assistant is wired directly into your environment, where the model knows what files you have open, what text is selected, what you just typed, and what you’ve been working on.
And that’s where the security risks begin.
AI is driven by context, and that’s exactly what MCP provides. It gives AI assistants like GitHub Copilot everything they might need to help you: open files, code snippets, even what’s selected in the editor. When you use MCP-enabled tools that transmit data to remote servers, all of it gets sent over the wire. That might be fine for most developers. But if you work at a financial firm, a hospital, or any organization with regulatory constraints where you need to be extremely careful about what leaves your network, MCP makes it remarkably easy to lose control of exactly that.
Let’s say you’re working in Visual Studio Code on a healthcare app, and you select a few lines of code to debug a query—a routine moment in your day. That snippet might include connection strings, test data with real patient info, and part of your schema. You ask Copilot to help and approve an MCP tool that connects to a remote server—and all of it gets sent to external servers. That’s not just risky. It could be a compliance violation under HIPAA, SOX, or PCI-DSS, depending on what gets transmitted.
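To make that concrete, here’s a rough sketch of what such a request can look like on the wire. MCP is built on JSON-RPC 2.0, and tool invocations travel as `tools/call` messages; the tool name and the code snippet below are hypothetical, invented purely for illustration.

```typescript
// Hypothetical sketch of a single MCP tool invocation as it crosses the wire.
// MCP uses JSON-RPC 2.0 and a "tools/call" method; the tool name and argument
// shape here are invented for this example.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 42,
  method: "tools/call",
  params: {
    name: "analyze_query", // hypothetical tool exposed by a remote MCP server
    arguments: {
      // The client forwards your editor selection verbatim inside the payload:
      selection: [
        'const conn = "Server=db.internal.example;User=app;Password=s3cr3t!";',
        "SELECT name, dob, ssn FROM patients WHERE id = 1001;",
      ].join("\n"),
    },
  },
};

// Everything in this object, credentials and test data included, is what
// leaves your machine.
console.log(JSON.stringify(toolCallRequest, null, 2));
```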
These are the kinds of things developers accidentally send every day:
- Internal URLs and system identifiers
- Passwords or tokens in local config files
- Network details or VPN information
- Local test data that includes real user info, SSNs, or other sensitive values
With MCP, developers on your team could be approving tools that send all of those things to servers outside your network without realizing it, and there’s often no easy way to know what’s been sent.
But this isn’t just an MCP problem; it’s part of a larger shift where AI tools are becoming more context-aware across the board. Browser extensions that read your tabs, AI coding assistants that scan your entire codebase, productivity tools that analyze your documents—they’re all collecting more information to provide better assistance. With MCP, the stakes are just more visible because the data pipeline is formalized.
Many enterprises are now facing a choice between AI productivity gains and regulatory compliance. Some organizations are building air-gapped development environments for sensitive projects, though achieving true isolation with AI tools can be complex since many still require external connectivity. Others lean on network-level monitoring and data loss prevention solutions that can detect when code or configuration files are being transmitted externally. And a few are going deeper and building custom MCP implementations that sanitize data before transmission, stripping out anything that looks like credentials or sensitive identifiers.
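As a rough illustration of that last approach, here’s a minimal sketch of the kind of redaction pass a custom implementation might run before anything is handed off for transmission. The patterns are assumptions chosen for the example; a real filter would need far broader coverage, context-aware detection, and audit logging.

```typescript
// Illustrative pre-transmission sanitizer (a sketch, not a production filter).
// Each pattern below is an assumption chosen for this example.
const REDACTION_PATTERNS: Array<[RegExp, string]> = [
  [/password\s*=\s*[^;"'\s]+/gi, "password=[REDACTED]"],             // connection-string passwords
  [/\b(?:ghp|gho|ghu|ghs)_[A-Za-z0-9]{36}\b/g, "[REDACTED_TOKEN]"],  // GitHub-style tokens
  [/\bAKIA[0-9A-Z]{16}\b/g, "[REDACTED_AWS_KEY]"],                   // AWS access key IDs
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]"],                      // SSN-shaped values
  [/\b(?:10|192\.168|172\.(?:1[6-9]|2\d|3[01]))(?:\.\d{1,3}){2,3}\b/g, "[REDACTED_INTERNAL_IP]"],
];

// Strip anything that looks like a credential or sensitive identifier before
// the text is passed along to a remote MCP server.
function sanitizeOutbound(text: string): string {
  return REDACTION_PATTERNS.reduce(
    (redacted, [pattern, replacement]) => redacted.replace(pattern, replacement),
    text,
  );
}

// Example: a code selection that would otherwise leave the network as-is.
const selection =
  'const conn = "Server=10.0.4.17;User=app;Password=s3cr3t!";\n' +
  "// test fixture: 123-45-6789";
console.log(sanitizeOutbound(selection));
```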
One thing that can help is organizational controls in development tools like VS Code. Security-conscious organizations can centrally disable MCP support or control which servers are available through group policies and GitHub Copilot enterprise settings. But that’s where it gets tricky, because MCP doesn’t just receive responses. It sends data upstream, potentially to a server outside your organization, which means every request carries risk.
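Centrally managed editor settings are still the most direct lever, though. As a rough sketch, a managed VS Code settings.json fragment that turns MCP support off might look like the following; the setting names reflect VS Code’s MCP support at the time of writing and should be treated as assumptions to verify against current documentation and your policy tooling.

```jsonc
// Illustrative settings.json fragment (assumed setting names; verify against
// your VS Code version and your organization's device-management policies).
{
  // Disable MCP support in VS Code entirely.
  "chat.mcp.enabled": false,

  // Or, if MCP stays enabled, stop VS Code from auto-discovering MCP servers
  // configured by other tools on the same machine.
  "chat.mcp.discovery.enabled": false
}
```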
Security vendors are starting to catch up. Some are building MCP-aware monitoring tools that can flag potentially sensitive data before it leaves the network. Others are developing hybrid deployment models where the AI reasoning happens on-premises but can still access external knowledge when needed.
Our industry is going to have to come up with better enterprise solutions for securing MCP if we want to meet the needs of all organizations. The tension between AI capability and data security will likely drive innovation in privacy-preserving AI techniques, federated learning approaches, and hybrid deployment models that keep sensitive context local while still providing intelligent assistance.
Until then, deeply integrated AI assistants come with a cost: Sensitive context can slip through—and there’s no easy way to know it has happened.