From 7605dbde3b9a280b3a31b3126a8b0b413cf0a369 Mon Sep 17 00:00:00 2001
From: ParthSareen
Date: Mon, 16 Mar 2026 11:03:17 -0700
Subject: [PATCH 1/2] docs(ollama): add ollama to community sandboxes catalog
 and supported agents

---
 docs/about/supported-agents.md        | 1 +
 docs/inference/configure.md           | 2 +-
 docs/sandboxes/community-sandboxes.md | 1 +
 docs/tutorials/index.md               | 6 +++---
 4 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/about/supported-agents.md b/docs/about/supported-agents.md
index c21335a8..6fd313dc 100644
--- a/docs/about/supported-agents.md
+++ b/docs/about/supported-agents.md
@@ -8,6 +8,7 @@ The following table summarizes the agents that run in OpenShell sandboxes. All a
 | [OpenCode](https://opencode.ai/) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Partial coverage | Pre-installed. Add `opencode.ai` endpoint and OpenCode binary paths to the policy for full functionality. |
 | [Codex](https://developers.openai.com/codex) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | No coverage | Pre-installed. Requires a custom policy with OpenAI endpoints and Codex binary paths. Requires `OPENAI_API_KEY`. |
 | [OpenClaw](https://openclaw.ai/) | [`openclaw`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/openclaw) | Bundled | Agent orchestration layer. Launch with `openshell sandbox create --from openclaw`. |
+| [Ollama](https://ollama.com/) | [`ollama`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/ollama) | Bundled | Run cloud and local models. Includes Claude Code, Codex, and OpenClaw. Launch with `openshell sandbox create --from ollama`. |
 
 More community agent sandboxes are available in the {doc}`../sandboxes/community-sandboxes` catalog.
diff --git a/docs/inference/configure.md b/docs/inference/configure.md
index 24dbb8f1..4c86dce5 100644
--- a/docs/inference/configure.md
+++ b/docs/inference/configure.md
@@ -137,7 +137,7 @@ Use this endpoint when inference should stay local to the host for privacy and s
 
 When the upstream runs on the same machine as the gateway, bind it to `0.0.0.0` and point the provider at `host.openshell.internal` or the host's LAN IP. `127.0.0.1` and `localhost` usually fail because the request originates from the gateway or sandbox runtime, not from your shell.
 
-If the gateway runs on a remote host or behind a cloud deployment, `host.openshell.internal` points to that remote machine, not to your laptop. A laptop-local Ollama or vLLM process is not reachable from a remote gateway unless you add your own tunnel or shared network path.
+If the gateway runs on a remote host or behind a cloud deployment, `host.openshell.internal` points to that remote machine, not to your laptop. A locally running Ollama or vLLM process is not reachable from a remote gateway unless you add your own tunnel or shared network path. Ollama also supports cloud-hosted models that do not require local hardware.
 
 ### Verify the Endpoint from a Sandbox
 
diff --git a/docs/sandboxes/community-sandboxes.md b/docs/sandboxes/community-sandboxes.md
index 3bcb2d27..d2924657 100644
--- a/docs/sandboxes/community-sandboxes.md
+++ b/docs/sandboxes/community-sandboxes.md
@@ -43,6 +43,7 @@ The following community sandboxes are available in the catalog.
 | Sandbox | Description |
 |---|---|
 | `base` | Foundational image with system tools and dev environment |
+| `ollama` | Ollama with cloud and local model support, Claude Code, Codex, and OpenClaw pre-installed |
 | `openclaw` | Open agent manipulation and control |
 | `sdg` | Synthetic data generation workflows |
 
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 6b7539b9..c06126e3 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -44,11 +44,11 @@ Launch Claude Code in a sandbox, diagnose a policy denial, and iterate on a cust
 {bdg-secondary}`Tutorial`
 :::
 
-:::{grid-item-card} Local Inference with Ollama
+:::{grid-item-card} Inference with Ollama
 :link: local-inference-ollama
 :link-type: doc
 
-Route inference to a local Ollama server, verify it from a sandbox, and reuse the same pattern for other OpenAI-compatible engines.
+Route inference through Ollama using cloud-hosted or local models, and verify it from a sandbox.
 
 +++
 {bdg-secondary}`Tutorial`
 :::
@@ -68,6 +68,6 @@ Route inference to a local LM Studio server via the OpenAI or Anthropic compatib
 
 First Network Policy
 GitHub Push Access
-Local Inference with Ollama
+Inference with Ollama
 Local Inference with LM Studio
 ```

From bb86f37d45c8110bdb6f167a793f3978714cc60e Mon Sep 17 00:00:00 2001
From: ParthSareen
Date: Mon, 16 Mar 2026 14:04:21 -0700
Subject: [PATCH 2/2] update readme

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a2cef119..1800a468 100644
--- a/README.md
+++ b/README.md
@@ -36,7 +36,7 @@ uv tool install -U openshell
 ### Create a sandbox
 
 ```bash
-openshell sandbox create -- claude # or opencode, codex
+openshell sandbox create -- claude # or opencode, codex, ollama
 ```
 
 A gateway is created automatically on first use. To deploy on a remote host instead, pass `--remote user@host` to the create command.
 
@@ -137,6 +137,7 @@ The CLI auto-bootstraps a GPU-enabled gateway on first use.
 GPU intent is also i
 | [OpenCode](https://opencode.ai/) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Works out of the box. Provider uses `OPENAI_API_KEY` or `OPENROUTER_API_KEY`. |
 | [Codex](https://developers.openai.com/codex) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Works out of the box. Provider uses `OPENAI_API_KEY`. |
 | [OpenClaw](https://openclaw.ai/) | [Community](https://github.com/NVIDIA/OpenShell-Community) | Launch with `openshell sandbox create --from openclaw`. |
+| [Ollama](https://ollama.com/) | [Community](https://github.com/NVIDIA/OpenShell-Community) | Launch with `openshell sandbox create --from ollama`. |
 
 ## Key Commands