Commit 88812df

sandbox: add ollama sandbox with openclaw support (#42)
* sandbox: add ollama sandbox with support for openclaw and claude
* Update policy to be *.r2.cloudflarestorage.com to use wildcard matching
1 parent e705056 commit 88812df

File tree

5 files changed (+284 -1 lines changed)


README.md

Lines changed: 16 additions & 1 deletion
@@ -23,6 +23,7 @@ This repo is the community ecosystem around OpenShell -- a hub for contributed s
 | Sandbox | Description |
 | ----------------------- | ------------------------------------------------------------ |
 | `sandboxes/base/` | Foundational image with system tools, users, and dev environment |
+| `sandboxes/ollama/` | Ollama for local and cloud LLMs with Claude Code, Codex, OpenCode pre-installed |
 | `sandboxes/sdg/` | Synthetic data generation workflows |
 | `sandboxes/openclaw/` | OpenClaw -- open agent manipulation and control |

@@ -51,7 +52,21 @@ After the Brev instance is ready, access the Welcome UI to inject provider keys
 openshell sandbox create --from openclaw
 ```

-The `--from` flag accepts any sandbox defined under `sandboxes/` (e.g., `openclaw`, `sdg`), a local path, or a container image reference.
+The `--from` flag accepts any sandbox defined under `sandboxes/` (e.g., `openclaw`, `ollama`, `sdg`), a local path, or a container image reference.
+
+### Ollama Sandbox
+
+The Ollama sandbox provides Ollama for running local LLMs and routing to cloud models, with Claude Code and Codex pre-installed.
+
+**Quick start:**
+
+```bash
+openshell sandbox create --from ollama
+
+curl http://127.0.0.1:11434/api/tags
+```
+
+See the [Ollama sandbox README](sandboxes/ollama/README.md) for full details.

 ## Contributing
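Right after `sandbox create`, the quick start's `curl` check can race Ollama's startup (the in-sandbox entrypoint does its own waiting, but the host-side check does not). A small retry wrapper makes the check robust; `poll_until` is a hypothetical helper, not part of this commit:

```shell
#!/usr/bin/env bash
# poll_until: retry a command until it succeeds, up to a given number of
# attempts, sleeping one second between tries. Hypothetical helper, not
# part of the repo.
poll_until() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" >/dev/null 2>&1 && return 0
    sleep 1
  done
  return 1
}

# Example: wait up to 60s for the Ollama API to answer.
# poll_until 60 curl -fsS http://127.0.0.1:11434/api/tags
```

The real call is left commented out since it needs a running sandbox; the helper itself is plain bash.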

sandboxes/ollama/Dockerfile

Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
+# syntax=docker/dockerfile:1.4
+
+# SPDX-License-Identifier: Apache-2.0
+
+# Ollama sandbox image for OpenShell
+#
+# Builds on the community base sandbox (has Node.js, Claude, Codex pre-installed).
+# Build: docker build -t openshell-ollama --build-arg BASE_IMAGE=openshell-base .
+# Run:   openshell sandbox create --from ollama --forward 11434
+
+ARG BASE_IMAGE=ghcr.io/nvidia/openshell-community/sandboxes/base:latest
+FROM ${BASE_IMAGE}
+
+USER root
+
+# Install zstd (required by Ollama install script)
+RUN apt-get update && apt-get install -y --no-install-recommends zstd \
+    && rm -rf /var/lib/apt/lists/*
+
+# Install Ollama
+RUN curl -fsSL https://ollama.com/install.sh | sh
+
+# Copy sandbox policy
+COPY policy.yaml /etc/openshell/policy.yaml
+
+# Copy entrypoint script
+COPY entrypoint.sh /usr/local/bin/entrypoint
+RUN chmod +x /usr/local/bin/entrypoint
+
+# Set environment variables for OpenShell provider discovery
+ENV OLLAMA_HOST=http://127.0.0.1:11434 \
+    NPM_CONFIG_PREFIX=/sandbox/.npm-global \
+    PATH="/sandbox/.npm-global/bin:/sandbox/.venv/bin:/usr/local/bin:/usr/bin:/bin"
+
+# Configure npm to install globals into a writable directory
+# (the sandbox policy makes /usr read-only, so the default /usr/lib/node_modules fails)
+RUN mkdir -p /sandbox/.npm-global && \
+    chown sandbox:sandbox /sandbox/.npm-global
+
+# Add environment variables to .bashrc for interactive shells
+RUN echo 'export OLLAMA_HOST=http://127.0.0.1:11434' >> /sandbox/.bashrc && \
+    echo 'export NPM_CONFIG_PREFIX=/sandbox/.npm-global' >> /sandbox/.bashrc && \
+    echo 'export PATH="/sandbox/.npm-global/bin:$PATH"' >> /sandbox/.bashrc && \
+    chown sandbox:sandbox /sandbox/.bashrc
+
+USER sandbox
+
+ENTRYPOINT ["/usr/local/bin/entrypoint"]
+CMD ["/bin/bash", "-l"]
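A minimal sketch of what the `ENV` block above buys: with the prefix's `bin` directory first on `PATH`, globally installed npm binaries resolve from the writable `/sandbox` tree instead of the read-only `/usr`. The path values are copied from the Dockerfile; the lowercase variable names are illustrative (chosen so the sketch does not clobber a live shell's `PATH`):

```shell
# Reproduce the image's PATH ordering from the Dockerfile's ENV line.
npm_prefix=/sandbox/.npm-global
sandbox_path="$npm_prefix/bin:/sandbox/.venv/bin:/usr/local/bin:/usr/bin:/bin"

# The first PATH entry is the writable npm global bin dir, so binaries
# from `npm install -g` win lookup over system copies.
echo "${sandbox_path%%:*}"   # /sandbox/.npm-global/bin
```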

sandboxes/ollama/README.md

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
+# Ollama Sandbox
+
+OpenShell sandbox image pre-configured with [Ollama](https://ollama.com) for running local LLMs.
+
+## What's Included
+
+- **Ollama** — Runs cloud and local models and connects them to tools like Claude Code, Codex, OpenCode, and more.
+- **Auto-start** — Ollama server starts automatically when the sandbox starts
+- **Pre-configured** — `OLLAMA_HOST` is set for OpenShell provider discovery
+- **Claude Code** — Pre-installed (`claude` command)
+- **Codex** — Pre-installed (`@openai/codex` npm package)
+- **Node.js 22** — Runtime for npm-based tools
+- **npm global** — Configured to install to user directory (works with read-only `/usr`)
+
+## Build
+
+```bash
+docker build -t openshell-ollama .
+```
+
+## Usage
+
+### Create a sandbox
+
+```bash
+openshell sandbox create --from ollama
+```

sandboxes/ollama/entrypoint.sh

Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
+#!/usr/bin/env bash
+
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
+# Entrypoint for Ollama sandbox — auto-starts Ollama server
+set -euo pipefail
+
+# Export OLLAMA_HOST for OpenShell provider discovery
+export OLLAMA_HOST="${OLLAMA_HOST:-http://127.0.0.1:11434}"
+
+# Start Ollama server in background
+echo "[ollama] Starting Ollama server..."
+nohup ollama serve > /tmp/ollama.log 2>&1 &
+OLLAMA_PID=$!
+
+# Wait for server to be ready
+echo "[ollama] Waiting for server to be ready..."
+for i in {1..60}; do
+  if curl -fsSL http://127.0.0.1:11434/api/tags > /dev/null 2>&1; then
+    echo "[ollama] Server ready at http://127.0.0.1:11434"
+    break
+  fi
+  if ! kill -0 $OLLAMA_PID 2>/dev/null; then
+    echo "[ollama] Server failed to start. Check /tmp/ollama.log"
+    exit 1
+  fi
+  sleep 1
+done
+
+# Pull default model if specified and not already present
+if [ -n "${OLLAMA_DEFAULT_MODEL:-}" ]; then
+  if ! ollama list | grep -q "^${OLLAMA_DEFAULT_MODEL}"; then
+    echo "[ollama] Pulling model: ${OLLAMA_DEFAULT_MODEL}"
+    ollama pull "${OLLAMA_DEFAULT_MODEL}"
+    echo "[ollama] Model ${OLLAMA_DEFAULT_MODEL} ready"
+  fi
+fi
+
+# Print connection info
+echo ""
+echo "========================================"
+echo "Ollama sandbox ready!"
+echo "  API:   http://127.0.0.1:11434"
+echo "  Logs:  /tmp/ollama.log"
+echo "  PID:   ${OLLAMA_PID}"
+if [ -n "${OLLAMA_DEFAULT_MODEL:-}" ]; then
+  echo "  Model: ${OLLAMA_DEFAULT_MODEL}"
+fi
+echo "========================================"
+echo ""
+
+# Execute the provided command or start an interactive shell
+if [ $# -eq 0 ]; then
+  exec /bin/bash -l
+else
+  exec "$@"
+fi
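The entrypoint's "already present" test is a grep for a line of `ollama list` output starting with the model name. A self-contained sketch of that behavior — the list output below is fabricated for illustration — also shows its looseness: since grep treats the pattern as a regex, `.` matches any character, and the prefix match means `llama3.2` would also hit a hypothetical `llama3.20` tag:

```shell
# Fabricated `ollama list` output (assumption: model name is the first column).
list_output=$'NAME               ID        SIZE\nllama3.2:latest    a80c4f17  2.0 GB'

# Mirror the entrypoint's check: does any listed line start with the model name?
model_present() {
  printf '%s\n' "$list_output" | grep -q "^$1"
}

model_present "llama3.2" && echo present   # prefix-matches "llama3.2:latest"
model_present "qwen2.5" || echo missing    # not in the list
```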

sandboxes/ollama/policy.yaml

Lines changed: 133 additions & 0 deletions
@@ -0,0 +1,133 @@
+# SPDX-License-Identifier: Apache-2.0
+
+version: 1
+
+# --- Sandbox setup configuration (queried once at startup) ---
+
+filesystem_policy:
+  include_workdir: true
+  read_only:
+    - /usr
+    - /lib
+    - /proc
+    - /dev/urandom
+    - /app
+    - /etc
+    - /var/log
+  read_write:
+    - /sandbox
+    - /tmp
+    - /dev/null
+
+landlock:
+  compatibility: best_effort
+
+process:
+  run_as_user: sandbox
+  run_as_group: sandbox
+
+# --- Network policies (queried per-CONNECT request) ---
+#
+# Each named policy maps a set of allowed (binary, endpoint) pairs.
+# Binary identity is resolved via /proc/net/tcp inode lookup + /proc/{pid}/exe.
+# Ancestors (/proc/{pid}/status PPid walk) and cmdline paths are also matched.
+# SHA256 integrity is enforced in Rust via trust-on-first-use, not here.
+
+network_policies:
+  ollama:
+    name: ollama
+    endpoints:
+      - { host: ollama.com, port: 443 }
+      - { host: www.ollama.com, port: 443 }
+      - { host: registry.ollama.com, port: 443 }
+      - { host: registry.ollama.ai, port: 443 }
+      - { host: "*.r2.cloudflarestorage.com", port: 443 }
+      - { host: github.com, port: 443 }
+      - { host: objects.githubusercontent.com, port: 443 }
+      - { host: raw.githubusercontent.com, port: 443 }
+    binaries:
+      - { path: /usr/bin/curl }
+      - { path: /bin/bash }
+      - { path: /usr/bin/sh }
+      - { path: /usr/local/bin/ollama }
+      - { path: /usr/bin/ollama }
+
+  claude_code:
+    name: claude_code
+    endpoints:
+      - { host: api.anthropic.com, port: 443, protocol: rest, enforcement: enforce, access: full, tls: terminate }
+      - { host: statsig.anthropic.com, port: 443 }
+      - { host: sentry.io, port: 443 }
+      - { host: raw.githubusercontent.com, port: 443 }
+      - { host: platform.claude.com, port: 443 }
+    binaries:
+      - { path: /usr/local/bin/claude }
+      - { path: /usr/bin/node }
+
+  npm:
+    name: npm
+    endpoints:
+      - { host: registry.npmjs.org, port: 443 }
+      - { host: npmjs.org, port: 443 }
+    binaries:
+      - { path: /usr/bin/npm }
+      - { path: /usr/bin/node }
+      - { path: /bin/bash }
+      - { path: /usr/bin/curl }
+
+  github:
+    name: github
+    endpoints:
+      - host: github.com
+        port: 443
+        protocol: rest
+        tls: terminate
+        enforcement: enforce
+        rules:
+          # Git Smart HTTP read-only: allow clone, fetch, pull
+          - allow:
+              method: GET
+              path: "/**/info/refs*"
+          # Data transfer for reads
+          - allow:
+              method: POST
+              path: "/**/git-upload-pack"
+    binaries:
+      - { path: /usr/bin/git }
+
+  github_rest_api:
+    name: github-rest-api
+    endpoints:
+      - host: api.github.com
+        port: 443
+        protocol: rest
+        tls: terminate
+        enforcement: enforce
+        rules:
+          - allow:
+              method: GET
+              path: "/**/"
+          - allow:
+              method: HEAD
+              path: "/**/"
+          - allow:
+              method: OPTIONS
+              path: "/**/"
+    binaries:
+      - { path: /usr/bin/gh }
+
+  nvidia:
+    name: nvidia
+    endpoints:
+      - { host: integrate.api.nvidia.com, port: 443 }
+    binaries:
+      - { path: /usr/bin/curl }
+      - { path: /bin/bash }
+      - { path: /usr/local/bin/opencode }
+  nvidia_web:
+    name: nvidia_web
+    endpoints:
+      - { host: nvidia.com, port: 443 }
+      - { host: www.nvidia.com, port: 443 }
+    binaries:
+      - { path: /usr/bin/curl }
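The commit's headline policy change is the wildcard endpoint `*.r2.cloudflarestorage.com`. If OpenShell's matcher follows fnmatch-style glob semantics — an assumption about the implementation, not confirmed by this diff — a shell `case` statement reproduces the behavior, including the edge case that the bare apex domain does not match:

```shell
# Glob-style host matching sketch. Whether OpenShell's matcher uses exactly
# these semantics (e.g. whether "*" can span multiple subdomain labels, or
# whether the apex domain is also allowed) is an assumption.
match_host() {
  case "$1" in
    *.r2.cloudflarestorage.com) echo allow ;;
    *) echo deny ;;
  esac
}

match_host "bucket-123.r2.cloudflarestorage.com"   # allow
match_host "r2.cloudflarestorage.com"              # deny (no subdomain label)
match_host "example.com"                           # deny
```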
