Three capabilities that set this platform apart — and why enterprises building production agents choose MongoDB.
One cluster for all agent memory — conversations, knowledge embeddings, entity graphs, checkpoints, and audit logs. No sync pipelines. No consistency bugs.
Bring any LLM via Model Proxy. Support for LangGraph, CrewAI, Google ADK, and custom frameworks. Designed to be truly multi-cloud.
5-layer tenant isolation (compute, data, network, secrets, control plane), CSFLE encryption, private networking, RBAC, and per-execution resource limits.
Everything your agents need to run in production — orchestration, execution, memory, observability, and security. All managed by MongoDB. Click any component to explore.
Click any component to see how it works and what MongoDB handles for you.
The central proxy that routes calls between agents, tools, LLMs, and other connections, enforcing the authorization rules and policies established at the Organization and Project levels.
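As a minimal sketch of that cascading enforcement model (the function name and policy shape are illustrative assumptions, not the platform's actual API), a Project-level policy can override an Organization-level one, with deny as the default:

```python
# Illustrative sketch only: the policy shape and resolution order are assumptions,
# not the platform's actual API. Project settings override Organization settings.
def resolve_policy(org_policies: dict, project_policies: dict, action: str) -> bool:
    if action in project_policies:          # Project-level setting wins when present
        return project_policies[action]
    return org_policies.get(action, False)  # fall back to Organization; deny by default

# Example: the Organization allows model calls, but one Project disables a tool
org = {"call_model": True, "invoke_tool:lookup_policy": True}
proj = {"invoke_tool:lookup_policy": False}
```

The deny-by-default fallback mirrors the platform's posture of refusing anything not explicitly granted at either level.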
MongoDB Role
Relays Executor Checkpoints to the Customer Data Store for durable execution. Execution Logs and Traces flow through the Orchestrator to the Runtime Data Store. All routing decisions are persisted for audit compliance.
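A checkpoint persisted this way might look like the following document. The field names are a hypothetical shape for illustration, not the platform's actual schema:

```python
# Hypothetical checkpoint document shape -- field names are illustrative assumptions,
# not the platform's actual schema. Stored in the Customer Data Store so an
# interrupted execution can resume from its last completed step.
from datetime import datetime, timezone

def make_checkpoint(agent_id: str, execution_id: str, step: int, state: dict) -> dict:
    return {
        "agent_id": agent_id,          # identifies the owning agent
        "execution_id": execution_id,  # groups all checkpoints of one durable run
        "step": step,                  # monotonically increasing within a run
        "state": state,                # serialized framework state (e.g. LangGraph state)
        "created_at": datetime.now(timezone.utc),
    }

cp = make_checkpoint("claims-agent", "exec-001", 3, {"messages": []})
```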
Watch a real agent request flow through the platform — from API ingress through orchestration, execution, memory access, and observability. Step through or auto-run.
Every component below — API Gateway, Orchestrator, Executors, Memory, and Observability — is hosted, secured, and scaled by MongoDB. You deploy your agent code. We run everything else.
Agents are organized in a three-level hierarchy mirroring Atlas. Click each level to see what it contains and how isolation boundaries work.
Top-level container and tenant isolation boundary. Data, credentials, and resources never cross Organization boundaries. Customers pay for the platform at this level. Supports federated identity providers and defines policies that cascade to all child Projects.
Organization: State Farm
├── Project: Claims Agent Dev
│ ├── Workspace: Web Chat
│ └── Workspace: Phone
├── Project: Claims Agent Prod
└── Project: Underwriting Agent

From local development to production deployment — the complete agent engineering lifecycle using the Magenta SDK, CLI, and platform tools.
magenta dev — local runtime emulation
magenta build — container images to registry
magenta deploy — multi-tenant cloud

Define agents using the Magenta Python SDK with your framework of choice — LangGraph, CrewAI, Google ADK, or custom code. The SDK handles tool isolation, model access, memory, and durable execution via checkpointing.
from magenta import Agent, Tool
from langgraph.graph import StateGraph

# Define tools with the Magenta SDK
@Tool("lookup_policy")
def lookup_policy(policy_id: str):
    """Look up an insurance policy by ID."""
    return db.policies.find_one({"_id": policy_id})  # db: Customer Data Store handle

# Build the agent with your preferred framework
graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("tools", tool_node)

agent = Agent(
    graph=graph.compile(),
    model="claude-sonnet",
    memory="project",  # uses Project-level memory
)
magenta dev, magenta build, magenta deploy — the full lifecycle from the terminal.
Infrastructure as code with magenta_project and magenta_workspace resources for GitOps workflows.
Agent Playground, execution traces, cost monitoring, and policy management in the browser.
The same platform that runs your first agent runs thousands. Here's what Magenta handles at every stage of your journey.
Memory collections are sharded by agent_id. Atlas auto-balances chunks across nodes as the agent count grows. No application-level sharding logic is needed.
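Concretely, this corresponds to a shardCollection admin command with a hashed key on agent_id. The sketch below only builds the command document (the database and collection names are illustrative; actually running it requires a sharded cluster, e.g. via pymongo's client.admin.command):

```python
# Sketch: the admin command that enables hashed sharding by agent_id.
# Names are illustrative assumptions. Against a live sharded cluster you would run:
#   client.admin.command(shard_memory_command("agent_memory", "conversations"))
def shard_memory_command(db_name: str, coll: str) -> dict:
    return {
        "shardCollection": f"{db_name}.{coll}",
        "key": {"agent_id": "hashed"},  # hashed key spreads each agent's writes evenly
    }

cmd = shard_memory_command("agent_memory", "conversations")
```

A hashed (rather than ranged) key avoids hot chunks when many documents share one agent_id prefix pattern.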
Real-time event propagation between agents without polling. CDC pipelines keep enterprise data from external sources synchronized into the Customer Data Store automatically.
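Polling-free propagation maps naturally onto MongoDB change streams. A hedged sketch of an aggregation pipeline that scopes events to a single agent (the field names are assumptions; with a live replica set you would pass it to collection.watch):

```python
# Sketch: a change-stream pipeline that forwards only one agent's events.
# Field names are illustrative assumptions. With a live replica set:
#   for event in collection.watch(agent_event_pipeline("claims-agent")): ...
def agent_event_pipeline(agent_id: str) -> list:
    return [
        {"$match": {
            "operationType": {"$in": ["insert", "update", "replace"]},
            "fullDocument.agent_id": agent_id,  # scope the stream to this agent
        }}
    ]

pipeline = agent_event_pipeline("claims-agent")
```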
Five-layer isolation: compute (per-execution sandboxes), data (scoped to the Organization), network (cross-tenant traffic denied by default), secrets (CSFLE), and control plane (RBAC). No single misconfiguration can leak data across tenants.