1ru: Track requested runtime, sandbox, and operator UX extensions#14

Draft
penso wants to merge 2 commits into main from beads-polyphony-1ru
Conversation

@penso penso commented Mar 16, 2026

Automated handoff for 1ru.

Issue:
Base branch: main
Head branch: beads-polyphony-1ru
Commit: be6ee9f

penso commented Mar 16, 2026

Summary

This branch adds runtime backend selection, sandbox backend selection, and a Docker sandbox launcher. The main risk is that several new config paths parse cleanly and are documented, but still fail or over-share state on the actual execution path.

Risks

  • sandbox.backend = "docker" is not validated against the agent transport, so an openai_chat agent accepts the setting even though OpenAiRuntime ignores agent.command entirely. In that configuration the run still executes on the host, not in Docker. Relevant paths: crates/workflow/src/service.rs, crates/agents/src/docker_sandbox.rs, crates/agent-openai/src/lib.rs.
  • The Docker sandbox passes the full parent process environment into the container via current_env(), not just the prepared agent env. That leaks unrelated host secrets such as tracker or provider tokens into the sandbox and weakens the isolation this feature is meant to add. Relevant path: crates/agents/src/docker_sandbox.rs.
  • runtime.backend = "llama_cpp" is accepted by config parsing and validation, but the runtime registry never registers a LlamaCpp backend. The config therefore loads successfully and only fails later at dispatch/model-discovery time with “no runtime backend registered”. Relevant paths: crates/workflow/src/render.rs, crates/workflow/src/service.rs, crates/agents/src/lib.rs.
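
The second risk above comes down to building the container environment from the host's full environment instead of an explicit allow-list. A minimal sketch of the safer direction, assuming a hypothetical `sandbox_env` helper (the function name and the `POLYPHONY_` prefix convention are taken from this review, not from the crate's actual API):

```rust
use std::collections::HashMap;

/// Hypothetical helper: build the env for a Docker-sandboxed agent from an
/// allow-list instead of forwarding the whole parent environment. Only
/// POLYPHONY_* variables from the host plus the prepared agent env survive;
/// unrelated host secrets (tracker/provider tokens, etc.) are dropped.
fn sandbox_env(
    host_env: impl Iterator<Item = (String, String)>,
    agent_env: &HashMap<String, String>,
) -> Vec<(String, String)> {
    let mut out: Vec<(String, String)> = host_env
        .filter(|(k, _)| k.starts_with("POLYPHONY_"))
        .collect();
    // Prepared agent variables take precedence over inherited ones.
    out.retain(|(k, _)| !agent_env.contains_key(k));
    out.extend(agent_env.iter().map(|(k, v)| (k.clone(), v.clone())));
    out
}

fn main() {
    let host = vec![
        ("POLYPHONY_RUN_ID".to_string(), "1ru".to_string()),
        ("GITHUB_TOKEN".to_string(), "host-secret".to_string()),
    ];
    let mut agent = HashMap::new();
    agent.insert("OPENAI_API_KEY".to_string(), "agent-scoped".to_string());

    let env = sandbox_env(host.into_iter(), &agent);
    assert!(env.iter().any(|(k, _)| k == "POLYPHONY_RUN_ID"));
    assert!(env.iter().any(|(k, _)| k == "OPENAI_API_KEY"));
    // The host token never reaches the container.
    assert!(!env.iter().any(|(k, _)| k == "GITHUB_TOKEN"));
}
```

The resulting pairs would then be passed to the Docker launcher (e.g. via `Command::env_clear().envs(...)` or `docker run -e` flags) instead of `current_env()`.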

Recommended human checks

  • Smoke-test an openai_chat agent configured with sandbox.backend = "docker" and verify the request really runs inside Docker. The current code path suggests it will not.
  • Run one Docker-sandboxed task and inspect the container environment to confirm only intended POLYPHONY_* and profile-specific variables are present.
  • Decide whether llama_cpp should be blocked during validation now, or fully implemented before keeping it in the accepted runtime enum and docs.
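
If the decision is to block the bad combinations during validation, the shape of the check is small. A sketch under stated assumptions: the enum and field names below are illustrative only, not the actual types in crates/workflow/src/render.rs or crates/workflow/src/service.rs.

```rust
/// Hypothetical load-time validation for the two config traps flagged in
/// this review: docker sandboxing on a transport that ignores it, and a
/// runtime value that parses but is never registered.
#[derive(Debug, PartialEq)]
enum Transport { OpenAiChat, Command }
#[derive(Debug, PartialEq)]
enum SandboxBackend { None, Docker }
#[derive(Debug, PartialEq)]
enum RuntimeBackend { OpenAi, LlamaCpp }

fn validate(
    transport: &Transport,
    sandbox: &SandboxBackend,
    runtime: &RuntimeBackend,
    registered: &[RuntimeBackend],
) -> Result<(), String> {
    // sandbox.backend = "docker" only has effect when the agent is launched
    // via agent.command; OpenAiRuntime ignores it and runs on the host.
    if *sandbox == SandboxBackend::Docker && *transport != Transport::Command {
        return Err("sandbox.backend = \"docker\" requires a command transport".into());
    }
    // Fail at load time rather than at dispatch/model discovery when the
    // configured runtime was never registered (e.g. llama_cpp today).
    if !registered.contains(runtime) {
        return Err(format!("runtime backend {runtime:?} is not registered"));
    }
    Ok(())
}

fn main() {
    let registered = [RuntimeBackend::OpenAi];
    // docker + openai_chat: rejected up front instead of silently running on the host.
    assert!(validate(&Transport::OpenAiChat, &SandboxBackend::Docker,
                     &RuntimeBackend::OpenAi, &registered).is_err());
    // docker + command transport: fine.
    assert!(validate(&Transport::Command, &SandboxBackend::Docker,
                     &RuntimeBackend::OpenAi, &registered).is_ok());
    // llama_cpp parses but has no registered backend: rejected at load time.
    assert!(validate(&Transport::Command, &SandboxBackend::None,
                     &RuntimeBackend::LlamaCpp, &registered).is_err());
}
```

Wiring this into config validation would turn both late, confusing dispatch-time failures into immediate load errors with actionable messages.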
