Nov 26, 2025

MCP security: Separating workload from workforce (and why it matters)

Authors
Casey Bleeker
CEO & Co-Founder

In a recent high‑profile case, a Chinese state‑sponsored threat group used Claude Code and off‑the‑shelf MCP tools to run an attack campaign. The attacker spent only a few minutes prompting an agent, yet the agent spent days carrying out the work. That time imbalance highlights a major shift: AI is now performing the bulk of operational effort.

And this is where things get more concerning because the same mechanics can play out inside organizations.

When an AI agent acts using a user’s identity

In the external case, investigators could attribute and reconstruct the attack. But if the same workflow had been initiated by someone inside the company with valid credentials and access to critical systems, it would have been far harder to detect. MCP allows actions to be executed under the user’s identity, even when the user never explicitly triggered each stage. This is where the line between workload automation and workforce identity breaks down.

MCP is not just for developers 

As MCP adoption grows, use cases shift: business teams increasingly rely on MCP servers embedded in AI assistants. For example, one of our customers had a finance leader who used MCP servers within Claude to pull ERP data for reporting and analysis through a simple prompt. The intent was purely operational, but because the agent acted with the leader’s high‑privilege credentials, it gained access to systems normally restricted to technical users. The productivity gains are significant, but non‑technical users may not recognize how actions can compound through chained tool use, creating a new class of organizational risk.

The hidden risk: tool‑to‑tool execution

MCP allows autonomous chaining of tools, meaning the user might approve one action while the agent performs five more to fulfill the task. Move that same workflow from a developer identity to a production identity, and the impact changes instantly. Yet everything looks “normal”: the credentials are valid, the integrations are legitimate, and nothing is overtly malicious.
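As a minimal illustration of this chaining dynamic (the tool names and identities here are hypothetical, not a real MCP client or server), a single approved request can fan out into several tool calls the user never individually reviewed, all executed under the user’s own credentials:

```python
# Hypothetical sketch: one user-approved task fans out into several
# chained tool calls. Tool names are illustrative only.

def run_agent_task(user_identity: str, approved_task: str) -> list[str]:
    """Simulate an agent chaining tools under a single user approval."""
    audit_log = []

    def call_tool(tool: str, detail: str) -> None:
        # Every call carries the *user's* identity, not the agent's.
        audit_log.append(f"{user_identity} -> {tool}: {detail}")

    # The user explicitly approved only this high-level task...
    audit_log.append(f"{user_identity} APPROVED: {approved_task}")

    # ...but the agent autonomously chains further tools to fulfill it.
    call_tool("erp.query", "export quarterly revenue table")
    call_tool("sheets.write", "paste revenue table into report")
    call_tool("email.send", "share draft report with finance team")

    return audit_log

log = run_agent_task("cfo@example.com", "summarize Q3 revenue")
print(f"{len(log) - 1} tool calls under 1 explicit approval")
```

Note that a credential-based audit trail shows four legitimate-looking entries for the same valid identity; nothing in the log distinguishes the one action the user approved from the three the agent decided on itself.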

Why traditional security controls can’t see this

Most security tooling was designed for infrastructure, networks, or IAM, not for the internal decision-making of AI agents. Traditional controls fail because:

  1. Actions are executed with valid identities
  2. Requests look legitimate
  3. MCP appears properly configured
  4. Logs don’t show intent, only outcomes

In the threat-actor case, analysts only caught the operation because they had visibility into prompts, decisions, and agent action chains. Without that layer, organizations remain blind to how and why agents act.

This is why separating workforce identity from workload execution is becoming critical.

How SurePath AI mitigates risk without slowing innovation

Rather than blocking MCP, which would push users toward shadow AI, SurePath AI enables safe adoption by providing:

  • Workload and workforce visibility – capturing which tools agents can use and how they’re being used.
  • Identity‑aware enforcement – preventing privilege escalation and sensitive actions before execution.
  • Runtime guardrails – applying controls at agent speed, without requiring users to change their workflows.
  • Governance for non-technical teams – letting security, compliance, and AI program leaders set policies without modifying MCP code or infrastructure.
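To make identity‑aware enforcement concrete, here is a deliberately simplified sketch of a deny‑by‑default policy gate evaluated before each agent tool call. The tool and role names are invented for illustration; this is not SurePath AI’s actual enforcement engine.

```python
# Hypothetical sketch of an identity-aware policy gate that runs
# *before* a tool call executes. Names are illustrative only.

POLICY = {
    # tool name -> workforce roles allowed to reach it through an agent
    "erp.read_report": {"finance", "data-engineer"},
    "erp.export_all": {"data-engineer"},
    "db.drop_table": set(),  # never allowed via an agent
}

def authorize(role: str, tool: str) -> bool:
    """Deny by default: unknown tools and unlisted roles are blocked."""
    return role in POLICY.get(tool, set())

# A finance leader's prompt can read reports, but a chained call that
# escalates to a bulk export is stopped before it executes.
print(authorize("finance", "erp.read_report"))  # allowed
print(authorize("finance", "erp.export_all"))   # blocked pre-execution
```

The key design choice is that the check keys on the human role behind the request, not just on whether the credentials are valid, which is what separates workforce identity from workload execution.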

Accelerate safely, not slowly

AI agents now operate with more endurance, more integration, and more autonomy than the humans they support. The question is no longer whether teams will use MCP. The question is: Can you see and govern what an agent is about to do before it does it?

Want to see these risks and safeguards in action?

Watch here: MCP Security: Separating workload from workforce (and why it matters)