# Execbox MCP Adapters And Protocol
This page covers two related but separate parts of the execbox architecture:
- MCP adapters in `@execbox/core`
- transport-safe execution plumbing in `@execbox/protocol`
Use this page as the overview. For the remote execution control flow, read `execbox-remote-workflow.md`. For the message-level protocol contract, read `execbox-protocol-reference.md`. For the normative runner specification, read `execbox-runner-specification.md`.
## MCP Adapter Responsibilities
The MCP adapter layer lets execbox sit on either side of an MCP tool catalog:
- `createMcpToolProvider({ client })` wraps caller-owned MCP client connections as execbox providers
- `openMcpToolProvider({ client | server })` opens a provider handle and is required when execbox owns a local `{ server }` connection
- `codeMcpServer()` exposes execbox execution back out as MCP tools such as `mcp_execute_code`, `mcp_search_tools`, and `mcp_code`
## Ownership Model
- `{ client }` sources stay caller-owned. `createMcpToolProvider()` is the ergonomic helper for that path.
- `{ server }` sources are execbox-owned. Callers use `openMcpToolProvider()` and close the returned handle.
- `codeMcpServer()` uses the same handle path internally and closes any resources it owns with the wrapper server.
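The two ownership paths can be sketched as follows. The function names come from this page, but the stub shapes and signatures below are illustrative assumptions, not the real `@execbox/core` API:

```typescript
// Hypothetical sketch only: the function names are from this page, but the
// stub shapes and signatures are assumptions, not the real @execbox/core API.
interface ToolProvider {
  listTools(): string[];
}
interface ProviderHandle extends ToolProvider {
  close(): Promise<void>;
}

// Caller-owned { client }: execbox wraps the connection but never closes it.
function createMcpToolProvider(opts: { client: { tools: string[] } }): ToolProvider {
  return { listTools: () => opts.client.tools };
}

// Execbox-owned { server }: the returned handle owns the connection, so the
// caller is responsible for calling close() when done.
function openMcpToolProvider(opts: {
  server: { tools: string[]; shutdown(): void };
}): ProviderHandle {
  let closed = false;
  return {
    listTools: () => opts.server.tools,
    close: async () => {
      if (!closed) {
        closed = true;
        opts.server.shutdown();
      }
    },
  };
}
```

The key design point is that only the `{ server }` path hands the caller something with a `close()`: whoever opens a resource owns its shutdown.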
## Protocol Responsibilities
`@execbox/protocol` is not a sandbox runtime. It provides the transport-safe layer that lets a trusted host and a runtime exchange execution messages without sharing host closures.
It owns:
- the `execute`, `cancel`, `started`, `tool_call`, `tool_result`, and `done` message types
- the shared host transport session used by `@execbox/process`, `@execbox/worker`, and `@execbox/remote`
- Node transport bootstrap helpers for worker and child-process execution
- the reusable async resource pool used by pooled process and worker shells
- transport-facing access to the shared manifest and dispatcher model defined in `@execbox/core`
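The six message types split naturally into two small discriminated unions, one per direction. A hedged TypeScript sketch of their shape (every field other than the `type` tag is an assumption here; the normative contract lives in `execbox-protocol-reference.md`):

```typescript
// Illustrative message shapes only: the six type tags are from this page,
// but every other field is an assumption, not the normative wire format.
type HostToRunner =
  | { type: "execute"; code: string; options?: Record<string, unknown> }
  | { type: "cancel" }
  | { type: "tool_result"; callId: number; result: unknown };

type RunnerToHost =
  | { type: "started" }
  | { type: "tool_call"; callId: number; tool: string; input: unknown }
  | { type: "done"; value?: unknown; error?: string };

// Example: a runner asking the trusted host to invoke a tool on its behalf.
const call: RunnerToHost = {
  type: "tool_call",
  callId: 1,
  tool: "search",
  input: { q: "docs" },
};
```

Discriminating on a single `type` tag keeps both endpoints able to exhaustively switch over incoming messages without sharing any host closures.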
The architecture split is:

- `@execbox/core` owns provider resolution, manifest extraction, and host-side tool dispatch semantics
- `@execbox/protocol` owns wire messages, host-session lifecycle, and host-side shell pooling utilities around those semantics
## How The Packages Fit Together
- `QuickJsExecutor` uses the shared runner semantics from `@execbox/core` directly.
- `IsolatedVmExecutor` uses the same core runner semantics, but keeps a direct `isolated-vm` bridge instead of transport messages.
- `ProcessExecutor` and `WorkerExecutor` use the shared host session from `@execbox/protocol` plus the shared QuickJS protocol endpoint inside the child or worker.
- `RemoteExecutor` uses that same host session across an app-owned transport boundary.
- Pooled process and worker execution reuse only the outer host shell. Each `execute()` call still starts a fresh QuickJS runtime through the shared protocol endpoint.
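The pooled-shell point above can be illustrated with a minimal async resource pool. This is a generic sketch of the reuse pattern only, not the actual pool shipped in `@execbox/protocol`:

```typescript
// Generic sketch of an async resource pool; the real pool in
// @execbox/protocol is not shown on this page, so this only illustrates
// the shell-reuse pattern described above.
class AsyncPool<T> {
  private idle: T[] = [];

  constructor(private create: () => Promise<T>) {}

  // Hand out an idle resource if one exists, otherwise create a new one.
  async acquire(): Promise<T> {
    return this.idle.pop() ?? this.create();
  }

  // Return a resource to the pool for reuse by the next acquire().
  release(resource: T): void {
    this.idle.push(resource);
  }
}

// Pooled executors reuse the expensive outer host shell this way, while
// each execute() call still starts a fresh QuickJS runtime inside it.
```

Pooling the shell amortizes process or worker startup cost; starting a fresh runtime per call is what keeps executions isolated from each other.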
## Transport-Backed Execution Flow
The same host-session model is used for process, worker, and remote execution.
Important implications:
- tool closures and secrets stay on the host side
- runners only receive code, runtime options, provider metadata, and JSON-safe tool inputs and results
- the host session is responsible for timeout backstops, caller cancellation, transport failure handling, and tool-call correlation
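The tool-call correlation mentioned above comes down to matching each outgoing `tool_call` to its `tool_result` by id. A hedged sketch of that bookkeeping (names here are illustrative assumptions, not the `@execbox/protocol` API):

```typescript
// Sketch of tool-call correlation: every outgoing tool_call gets a fresh id,
// and the matching tool_result resolves the pending promise. Names are
// illustrative assumptions, not the @execbox/protocol API.
class ToolCallBroker {
  private nextId = 0;
  private pending = new Map<number, (result: unknown) => void>();

  // Register a pending call, then emit a tool_call message carrying its id.
  request(send: (msg: { type: "tool_call"; callId: number }) => void): Promise<unknown> {
    const callId = this.nextId++;
    const result = new Promise<unknown>((resolve) => this.pending.set(callId, resolve));
    send({ type: "tool_call", callId });
    return result;
  }

  // Resolve the pending call whose id matches the incoming tool_result.
  onToolResult(callId: number, result: unknown): void {
    const waiter = this.pending.get(callId);
    this.pending.delete(callId);
    waiter?.(result);
  }
}
```

Because results are matched by id rather than by arrival order, multiple tool calls can be in flight over one transport without ambiguity.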
## Boundaries And Trust
- MCP adapters decide how tool catalogs are discovered, wrapped, and re-exposed.
- The protocol decides how a transport-backed runtime asks the trusted host to invoke those tools.
- The provider surface remains the real capability boundary.
- Transport-backed execution changes where guest code runs, not who owns host capabilities.