Anthropic just came out with a new article about code execution with MCP which is pretty extraordinary.
It's nuanced, but it sure seems like they just threw massive shade at MCPs, basically demoting them to something like service directories.
Look at the subtitle of their post:
Direct tool calls consume context for each definition and result. Agents scale better by writing code to call tools instead. (Anthropic Engineering)
Dayum.
They go on to throw more rocks.
Every intermediate result must pass through the model. In this example, the full call transcript flows through twice. For a 2-hour sales meeting, that could mean processing an additional 50,000 tokens. Even larger documents may exceed context window limits, breaking the workflow. With large documents or complex data structures, models may be more likely to make mistakes when copying data between tool calls. (Anthropic Engineering)
Then they say you can do something like this instead:
With code execution environments becoming more common for agents, a solution is to present MCP servers as code APIs rather than direct tool calls. The agent can then write code to interact with MCP servers. This approach addresses both challenges: agents can load only the tools they need and process data in the execution environment before passing results back to the model. (Anthropic Engineering)
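To make that concrete, here's a minimal sketch of the pattern (names are mine, and the transcript fetch is mocked, since the real thing would go through an MCP-backed call): the agent's code pulls the full transcript into the sandbox, filters it there, and only the tiny slice it actually needs ever enters the model's context.

```typescript
// Hypothetical sketch of "process data in the execution environment."
// getTranscript stands in for a real MCP-backed tool call.
type Line = { speaker: string; text: string };

async function getTranscript(meetingId: string): Promise<Line[]> {
  // Mocked here; in a real setup this would hit the MCP server.
  return [
    { speaker: "Alice", text: "Let's review the Q3 numbers." },
    { speaker: "Bob", text: "Action item: send the revised quote." },
    { speaker: "Alice", text: "Agreed. Next topic." },
  ];
}

// The full transcript stays in the sandbox; only matching lines
// go back into the model's context.
async function extractActionItems(meetingId: string): Promise<string[]> {
  const lines = await getTranscript(meetingId);
  return lines
    .filter((l) => l.text.toLowerCase().includes("action item"))
    .map((l) => `${l.speaker}: ${l.text}`);
}

extractActionItems("mtg-123").then((items) => console.log(items));
```

The 50,000-token transcript never round-trips through the model; the model only sees the one-line result.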
To me this is treating the MCP like a directory of things you can call, and then having your agents write your own code for calling them.
In other words, not calling them anymore using the MCP itself.
They give an example of turning each tool exposed by an MCP server into a TypeScript file that agents can read and invoke in code. And they specifically call out the advantage of agents being able to process those results with plain code, without routing everything through the model.
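Roughly, I'd expect each generated tool file to look something like this (the callMCPTool helper and the exact shapes are my guesses at the pattern, not Anthropic's actual code, and the bridge is mocked so the sketch runs):

```typescript
// servers/google-drive/getDocument.ts — hypothetical generated wrapper.
interface GetDocumentInput {
  documentId: string;
}

interface GetDocumentResponse {
  content: string;
}

// Stand-in for the real MCP bridge; a real one would forward the call
// to the MCP client over the wire.
async function callMCPTool<T>(name: string, input: unknown): Promise<T> {
  const { documentId } = input as GetDocumentInput;
  return { content: `contents of ${documentId}` } as T;
}

export async function getDocument(
  input: GetDocumentInput
): Promise<GetDocumentResponse> {
  return callMCPTool<GetDocumentResponse>("google_drive__get_document", input);
}
```

The point is that the agent reads this file like any other source code: the typed signature is the tool definition, and calling it is just calling a function.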
Look at the advantage they say this gives:
The agent discovers tools by exploring the filesystem: listing the ./servers/ directory to find available servers (like google-drive and salesforce), then reading the specific tool files it needs (like getDocument.ts and updateRecord.ts) to understand each tool's interface. This lets the agent load only the definitions it needs for the current task. This reduces the token usage from 150,000 tokens to 2,000 tokens—a time and cost saving of 98.7%. (Anthropic, from the same article)
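The discovery step is just ordinary filesystem reads. Here's a sketch (the ./servers/ layout is from their example, but this demo builds a throwaway copy of it in a temp directory so it runs anywhere):

```typescript
// Sketch of filesystem-based tool discovery: list servers first,
// then read only the one tool definition the task needs.
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// Build a throwaway ./servers/-style tree for the demo.
const root = fs.mkdtempSync(path.join(os.tmpdir(), "servers-"));
for (const [server, tool] of [
  ["google-drive", "getDocument.ts"],
  ["salesforce", "updateRecord.ts"],
]) {
  fs.mkdirSync(path.join(root, server));
  fs.writeFileSync(path.join(root, server, tool), "// tool interface here\n");
}

// Step 1: discover servers without loading any tool definitions.
const servers = fs.readdirSync(root).sort();
console.log(servers); // [ 'google-drive', 'salesforce' ]

// Step 2: read only the single definition needed for this task.
const def = fs.readFileSync(
  path.join(root, "google-drive", "getDocument.ts"),
  "utf8"
);
console.log(def);
```

No 150,000-token catalog up front: the model's context only ever holds the directory listing plus the one file it chose to read.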
I'm so in love with this. It's more filesystem-based structure, which I'm already all-in on for my context management. Now tool-calling is becoming file-system based too.
So they're heading in a direction I was already headed anyway, which is to just write direct API calls, but they're doing it in a much cooler way with these composable files that can be shared.
I think they might have just turned MCP tool calls into Skills.
I guess in this world MCPs are still powerful, but more as a directory of what's possible, with the actual work done directly in code rather than through the MCP call mechanism itself.
Unbelievably cool.
I'm so very much migrating immediately.