
MCP Weekly: Google and Microsoft Standardize MCP, OpenAI Optimizes Agents, Docker Secures Execution
Welcome to the latest installment of MCP Weekly, covering major developments from March 13th to March 20th. In this issue, we look at enterprise platform launches, major security investments, infrastructure upgrades, and new agent tooling across the ecosystem.
TL;DR
Google launched the Colab MCP Server, letting any MCP-compatible agent control Google Colab notebooks directly in the cloud. Microsoft added an Azure DevOps Remote MCP Server to Foundry, connecting AI agents to Boards, Repos, and Wikis without local proxies. OpenAI released GPT-5.4 mini and nano, two fast and affordable models built for subagent delegation in multi-agent systems, and acquired Astral, the team behind uv and Ruff, to deepen Codex into a full Python development environment.
On the infrastructure side, Docker and NanoCo integrated NanoClaw with Docker Sandboxes, running every agent in a disposable MicroVM with no access to the host machine. Snowflake Ventures backed Bedrock Data to bring automated data governance and agent access monitoring into the Snowflake platform. NVIDIA launched NemoClaw, an open-source security stack for running autonomous agents locally on RTX and DGX hardware.
Major Updates of the Week
Google Colab MCP Server
Google released an open-source MCP server that gives any compatible agent direct control over Google Colab. Agents can now create notebooks, inject code, install dependencies, and run Python in a secure cloud environment without copying anything back and forth manually. This moves Colab from a user interface into a programmable execution layer that agents can operate end to end.
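To make the "programmable execution layer" idea concrete, here is a toy sketch of the create/inject/run loop an agent might drive against a notebook. This is purely illustrative: the class, method names, and session shape are assumptions, not the Colab MCP server's actual tool surface.

```python
# Illustrative sketch only: a toy in-memory notebook that mimics the kind of
# create/inject/run loop an MCP agent could drive against Colab. Tool and
# method names here are hypothetical, not from the announcement.

class ToyNotebook:
    """Minimal stand-in for a cloud notebook an agent can operate."""

    def __init__(self, name):
        self.name = name
        self.cells = []          # injected code cells, in order
        self.namespace = {}      # execution state shared across cells

    def inject_code(self, source):
        """Append a code cell, as an agent's 'add cell' tool call would."""
        self.cells.append(source)

    def run_all(self):
        """Execute every cell in order against the shared namespace."""
        for source in self.cells:
            exec(source, self.namespace)
        return self.namespace

# An agent's session might look like:
nb = ToyNotebook("analysis")
nb.inject_code("data = [3, 1, 2]")
nb.inject_code("result = sorted(data)")
state = nb.run_all()
print(state["result"])  # [1, 2, 3]
```

The point of the pattern is that the notebook is operated entirely through calls, with no manual copy-paste between the agent and the execution environment.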
Microsoft Foundry MCP Server
Microsoft launched a public preview of an Azure DevOps Remote MCP Server inside Microsoft Foundry. Agents can now connect directly to Azure Boards, Repos, and Wikis through the Foundry Tool Catalog, with administrators able to restrict which specific tools each agent can access. This removes the need for developers to run local proxy servers just to give their agents access to project data.
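The per-agent restriction described above can be sketched as an admin-maintained allowlist consulted before any tool call is dispatched. The catalog layout and tool identifiers below are illustrative assumptions, not Foundry's actual API.

```python
# Sketch of per-agent tool restriction: an admin-maintained allowlist is
# checked before a tool call runs. Tool ids and agent names are hypothetical.

CATALOG = {"boards.read", "boards.write", "repos.read", "wikis.read"}

AGENT_ALLOWLISTS = {
    "triage-bot": {"boards.read", "boards.write"},
    "docs-bot": {"wikis.read"},
}

def dispatch(agent, tool, call):
    """Run a tool call only if the admin allowlist permits it."""
    if tool not in CATALOG:
        raise KeyError(f"unknown tool: {tool}")
    if tool not in AGENT_ALLOWLISTS.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
    return call()

# triage-bot can read boards, but not wikis:
print(dispatch("triage-bot", "boards.read", lambda: "ok"))  # ok
try:
    dispatch("triage-bot", "wikis.read", lambda: "ok")
except PermissionError as e:
    print(e)  # triage-bot may not call wikis.read
```

Centralizing this check in the platform, rather than in each agent, is what removes the need for per-developer proxy servers.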
Snowflake and Bedrock Data
Snowflake Ventures backed Bedrock Data in a strategic investment to bring automated data governance into the Snowflake platform. The integration extends Snowflake Horizon Catalog with upstream data lineage and enriched metadata, and adds centralized monitoring of Cortex AI agents to ensure they only access authorized data. The partnership targets the core enterprise concern of knowing exactly what data an AI agent touched and why.
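The governance question the paragraph ends on, knowing what data an agent touched and why, reduces to an append-only access trail keyed by agent. The record shape below is an assumption for illustration, not the Bedrock Data or Horizon Catalog schema.

```python
# Sketch of an agent data-access audit trail. The record fields are an
# illustrative assumption, not the actual integration's schema.

from datetime import datetime, timezone

AUDIT_LOG = []

def record_access(agent, dataset, purpose):
    """Append one access record per dataset an agent reads."""
    AUDIT_LOG.append({
        "agent": agent,
        "dataset": dataset,
        "purpose": purpose,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def accesses_by(agent):
    """Answer the core governance question: what did this agent touch?"""
    return [r["dataset"] for r in AUDIT_LOG if r["agent"] == agent]

record_access("forecast-agent", "sales.q1_orders", "revenue forecast")
record_access("forecast-agent", "sales.q2_orders", "revenue forecast")
record_access("support-agent", "tickets.open", "ticket triage")

print(accesses_by("forecast-agent"))  # ['sales.q1_orders', 'sales.q2_orders']
```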
OpenAI Updates
OpenAI released GPT-5.4 mini and nano, two fast and affordable models built specifically for subagent delegation in multi-agent systems. The company also acquired Astral, the team behind uv and Ruff, to deepen Codex into a full Python development environment.
Docker and NanoClaw
Docker partnered with NanoClaw to run every NanoClaw agent instance inside a disposable MicroVM-based Docker Sandbox. Agents can install packages, modify files, and run terminal commands without any of those actions reaching the host machine. The NanoClaw codebase consists of only 15 core source files, making the entire stack straightforward for security teams to inspect and verify.
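The disposable-sandbox pattern behind this integration can be sketched without Docker at all: give each agent run a throwaway workspace and destroy it afterward, so nothing the agent writes survives the run. This is a conceptual illustration of the pattern, not Docker's Sandboxes API.

```python
# Sketch of the disposable-sandbox pattern (not Docker's actual API): each
# agent run gets a throwaway workspace, and nothing it writes survives.

import os
import tempfile

def run_in_disposable_workspace(agent_actions):
    """Execute agent actions inside a temp dir that is destroyed afterward."""
    with tempfile.TemporaryDirectory() as workspace:
        agent_actions(workspace)
        leftovers = os.listdir(workspace)  # visible only during the run
        path = workspace
    # the context manager has deleted the whole workspace by this point
    return leftovers, os.path.exists(path)

def agent_actions(workspace):
    # the "agent" installs files and modifies state inside its sandbox
    with open(os.path.join(workspace, "scratch.txt"), "w") as f:
        f.write("temporary agent state")

created, still_exists = run_in_disposable_workspace(agent_actions)
print(created)       # ['scratch.txt']
print(still_exists)  # False
```

A MicroVM takes this further than a temp directory by isolating the kernel and network as well, but the lifecycle is the same: create, run, discard.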
Other Updates
NVIDIA launched NemoClaw, an open-source security stack for running autonomous agents locally on RTX and DGX hardware, keeping agent execution on machines the operator controls rather than in a third-party cloud.
My Thoughts: The Rise of a Two-layer Agent Stack
Two structural shifts are starting to lock in.
Google and Microsoft shipping MCP integrations directly into Colab and Azure DevOps signals that MCP is moving from developer tooling into core platform infrastructure. Once that happens, it defines how agents are expected to connect by default. Any system that sits outside that standard starts to carry integration overhead immediately.
At the same time, the security layer is consolidating into something much more concrete. Docker isolating agents in MicroVMs, Okta assigning them identity and lifecycle controls, and Snowflake tracking data access and lineage are all solving different parts of the same operational requirement. Enterprises need deterministic control over what agents can access, what they can execute, and how to intervene in real time.
What’s emerging is a clearer separation of concerns. MCP defines how agents connect. The security and identity layer defines what they are allowed to do once connected.
That boundary is quickly becoming the deciding factor for whether an agent system is usable in production. Teams that treat it as a first-class architectural layer will move faster with fewer constraints. Everyone else will spend their time rebuilding guardrails after the fact.
Customized Plans for Real Enterprise Needs
Gentoro makes it easier to operationalize AI across your enterprise. Get in touch to explore deployment options, scale requirements, and the right pricing model for your team.


