README.md (16 additions, 1 deletion)

@@ -23,6 +23,7 @@ This repo is the community ecosystem around OpenShell -- a hub for contributed s
| Sandbox | Description |
| ----------------------- | ------------------------------------------------------------ |
| `sandboxes/base/` | Foundational image with system tools, users, and dev environment |
| `sandboxes/ollama/` | Ollama for local and cloud LLMs with Claude Code, Codex, OpenCode pre-installed |
| `sandboxes/sdg/` | Synthetic data generation workflows |
| `sandboxes/openclaw/` | OpenClaw -- open agent manipulation and control |

@@ -51,7 +52,21 @@ After the Brev instance is ready, access the Welcome UI to inject provider keys
openshell sandbox create --from openclaw
```

-The `--from` flag accepts any sandbox defined under `sandboxes/` (e.g., `openclaw`, `sdg`), a local path, or a container image reference.
+The `--from` flag accepts any sandbox defined under `sandboxes/` (e.g., `openclaw`, `ollama`, `sdg`), a local path, or a container image reference.

### Ollama Sandbox

The Ollama sandbox provides Ollama for running local LLMs and routing to cloud models, with Claude Code and Codex pre-installed.

**Quick start:**

```bash
# Create the sandbox
openshell sandbox create --from ollama

# Verify the Ollama API is reachable
curl http://127.0.0.1:11434/api/tags
```

See the [Ollama sandbox README](sandboxes/ollama/README.md) for full details.
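The same `/api/tags` check can be done programmatically. A minimal Python sketch, assuming the default `OLLAMA_HOST` the sandbox sets; `tags_url` and `list_models` are illustrative helpers, not part of OpenShell:

```python
import json
import os
import urllib.request
from typing import List, Optional


def tags_url(host: Optional[str] = None) -> str:
    """Build the /api/tags URL from an explicit host or the OLLAMA_HOST env var."""
    base = (host or os.environ.get("OLLAMA_HOST") or "http://127.0.0.1:11434").rstrip("/")
    return base + "/api/tags"


def list_models(host: Optional[str] = None) -> List[str]:
    """Return the names of models the local Ollama server reports as available."""
    with urllib.request.urlopen(tags_url(host), timeout=5) as resp:
        payload = json.load(resp)
    return [m["name"] for m in payload.get("models", [])]
```

Calling `list_models()` inside the sandbox should mirror the `curl` output above, raising if the server is not yet up.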

## Contributing

sandboxes/ollama/Dockerfile (new file, 49 additions)

@@ -0,0 +1,49 @@
# syntax=docker/dockerfile:1.4

# SPDX-License-Identifier: Apache-2.0

# Ollama sandbox image for OpenShell
#
# Builds on the community base sandbox (has Node.js, Claude, Codex pre-installed).
# Build: docker build -t openshell-ollama --build-arg BASE_IMAGE=openshell-base .
# Run: openshell sandbox create --from ollama --forward 11434

ARG BASE_IMAGE=ghcr.io/nvidia/openshell-community/sandboxes/base:latest
FROM ${BASE_IMAGE}

USER root

# Install zstd (required by Ollama install script)
RUN apt-get update && apt-get install -y --no-install-recommends zstd \
    && rm -rf /var/lib/apt/lists/*

# Install Ollama
RUN curl -fsSL https://ollama.com/install.sh | sh

# Copy sandbox policy
COPY policy.yaml /etc/openshell/policy.yaml

# Copy entrypoint script
COPY entrypoint.sh /usr/local/bin/entrypoint
RUN chmod +x /usr/local/bin/entrypoint

# Set environment variables for OpenShell provider discovery
ENV OLLAMA_HOST=http://127.0.0.1:11434 \
    NPM_CONFIG_PREFIX=/sandbox/.npm-global \
    PATH="/sandbox/.npm-global/bin:/sandbox/.venv/bin:/usr/local/bin:/usr/bin:/bin"

# Configure npm to install globals into a writable directory
# (the sandbox policy makes /usr read-only, so the default /usr/lib/node_modules fails)
RUN mkdir -p /sandbox/.npm-global && \
    chown sandbox:sandbox /sandbox/.npm-global

# Add environment variables to .bashrc for interactive shells
RUN echo 'export OLLAMA_HOST=http://127.0.0.1:11434' >> /sandbox/.bashrc && \
    echo 'export NPM_CONFIG_PREFIX=/sandbox/.npm-global' >> /sandbox/.bashrc && \
    echo 'export PATH="/sandbox/.npm-global/bin:$PATH"' >> /sandbox/.bashrc && \
    chown sandbox:sandbox /sandbox/.bashrc

USER sandbox

ENTRYPOINT ["/usr/local/bin/entrypoint"]
CMD ["/bin/bash", "-l"]
sandboxes/ollama/README.md (new file, 28 additions)

@@ -0,0 +1,28 @@
# Ollama Sandbox

OpenShell sandbox image pre-configured with [Ollama](https://ollama.com) for running local LLMs.

## What's Included

- **Ollama** — Runs local and cloud models and connects them to tools like Claude Code, Codex, OpenCode, and more
- **Auto-start** — Ollama server starts automatically when the sandbox starts
- **Pre-configured** — `OLLAMA_HOST` is set for OpenShell provider discovery
- **Claude Code** — Pre-installed (`claude` command)
- **Codex** — Pre-installed (`@openai/codex` npm package)
- **Node.js 22** — Runtime for npm-based tools
- **npm global** — Configured to install to user directory (works with read-only `/usr`)
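The list above can be sanity-checked from inside a running sandbox. A small Python sketch; `check_sandbox_env` is a hypothetical helper, not shipped with the image:

```python
import os


def check_sandbox_env() -> dict:
    """Report the env vars the image is expected to set and whether the npm
    global prefix is actually writable (it must be, since /usr is read-only)."""
    prefix = os.environ.get("NPM_CONFIG_PREFIX", "")
    return {
        "ollama_host": os.environ.get("OLLAMA_HOST", ""),
        "npm_prefix": prefix,
        "npm_prefix_writable": bool(prefix) and os.access(prefix, os.W_OK),
    }
```

Inside the sandbox, `ollama_host` should report `http://127.0.0.1:11434` and `npm_prefix_writable` should be `True`; anything else suggests the entrypoint or Dockerfile environment did not apply.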

## Build

```bash
docker build -t openshell-ollama .
```

## Usage

### Create a sandbox

```bash
openshell sandbox create --from ollama
```

sandboxes/ollama/entrypoint.sh (new file, 58 additions)

@@ -0,0 +1,58 @@
#!/usr/bin/env bash

# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0

# Entrypoint for Ollama sandbox — auto-starts Ollama server
set -euo pipefail

# Export OLLAMA_HOST for OpenShell provider discovery
export OLLAMA_HOST="${OLLAMA_HOST:-http://127.0.0.1:11434}"

# Start Ollama server in background
echo "[ollama] Starting Ollama server..."
nohup ollama serve > /tmp/ollama.log 2>&1 &
OLLAMA_PID=$!

# Wait for server to be ready
echo "[ollama] Waiting for server to be ready..."
for _ in {1..60}; do
  if curl -fsSL http://127.0.0.1:11434/api/tags > /dev/null 2>&1; then
    echo "[ollama] Server ready at http://127.0.0.1:11434"
    break
  fi
  if ! kill -0 "${OLLAMA_PID}" 2>/dev/null; then
    echo "[ollama] Server failed to start. Check /tmp/ollama.log"
    exit 1
  fi
  sleep 1
done

# Fail fast if the server never became ready within the timeout
# (without this check, the loop above falls through silently after 60s)
if ! curl -fsSL http://127.0.0.1:11434/api/tags > /dev/null 2>&1; then
  echo "[ollama] Server did not become ready within 60s. Check /tmp/ollama.log"
  exit 1
fi

# Pull default model if specified and not already present
if [ -n "${OLLAMA_DEFAULT_MODEL:-}" ]; then
  if ! ollama list | grep -q "^${OLLAMA_DEFAULT_MODEL}"; then
    echo "[ollama] Pulling model: ${OLLAMA_DEFAULT_MODEL}"
    ollama pull "${OLLAMA_DEFAULT_MODEL}"
    echo "[ollama] Model ${OLLAMA_DEFAULT_MODEL} ready"
  fi
fi

# Print connection info
echo ""
echo "========================================"
echo "Ollama sandbox ready!"
echo " API: http://127.0.0.1:11434"
echo " Logs: /tmp/ollama.log"
echo " PID: ${OLLAMA_PID}"
if [ -n "${OLLAMA_DEFAULT_MODEL:-}" ]; then
  echo " Model: ${OLLAMA_DEFAULT_MODEL}"
fi
echo "========================================"
echo ""

# Execute the provided command or start an interactive shell
if [ $# -eq 0 ]; then
  exec /bin/bash -l
else
  exec "$@"
fi
sandboxes/ollama/policy.yaml (new file, 133 additions)

@@ -0,0 +1,133 @@
# SPDX-License-Identifier: Apache-2.0

version: 1

# --- Sandbox setup configuration (queried once at startup) ---

filesystem_policy:
  include_workdir: true
  read_only:
    - /usr
    - /lib
    - /proc
    - /dev/urandom
    - /app
    - /etc
    - /var/log
  read_write:
    - /sandbox
    - /tmp
    - /dev/null

landlock:
  compatibility: best_effort

process:
  run_as_user: sandbox
  run_as_group: sandbox

# --- Network policies (queried per-CONNECT request) ---
#
# Each named policy maps a set of allowed (binary, endpoint) pairs.
# Binary identity is resolved via /proc/net/tcp inode lookup + /proc/{pid}/exe.
# Ancestors (/proc/{pid}/status PPid walk) and cmdline paths are also matched.
# SHA256 integrity is enforced in Rust via trust-on-first-use, not here.

network_policies:
  ollama:
    name: ollama
    endpoints:
      - { host: ollama.com, port: 443 }
      - { host: www.ollama.com, port: 443 }
      - { host: registry.ollama.com, port: 443 }
      - { host: registry.ollama.ai, port: 443 }
      - { host: "*.r2.cloudflarestorage.com", port: 443 }
      - { host: github.com, port: 443 }
      - { host: objects.githubusercontent.com, port: 443 }
      - { host: raw.githubusercontent.com, port: 443 }
    binaries:
      - { path: /usr/bin/curl }
      - { path: /bin/bash }
      - { path: /usr/bin/sh }
      - { path: /usr/local/bin/ollama }
      - { path: /usr/bin/ollama }

  claude_code:
    name: claude_code
    endpoints:
      - { host: api.anthropic.com, port: 443, protocol: rest, enforcement: enforce, access: full, tls: terminate }
      - { host: statsig.anthropic.com, port: 443 }
      - { host: sentry.io, port: 443 }
      - { host: raw.githubusercontent.com, port: 443 }
      - { host: platform.claude.com, port: 443 }
    binaries:
      - { path: /usr/local/bin/claude }
      - { path: /usr/bin/node }

  npm:
    name: npm
    endpoints:
      - { host: registry.npmjs.org, port: 443 }
      - { host: npmjs.org, port: 443 }
    binaries:
      - { path: /usr/bin/npm }
      - { path: /usr/bin/node }
      - { path: /bin/bash }
      - { path: /usr/bin/curl }

  github:
    name: github
    endpoints:
      - host: github.com
        port: 443
        protocol: rest
        tls: terminate
        enforcement: enforce
        rules:
          # Git Smart HTTP read-only: allow clone, fetch, pull
          - allow:
              method: GET
              path: "/**/info/refs*"
          # Data transfer for reads
          - allow:
              method: POST
              path: "/**/git-upload-pack"
    binaries:
      - { path: /usr/bin/git }

  github_rest_api:
    name: github-rest-api
    endpoints:
      - host: api.github.com
        port: 443
        protocol: rest
        tls: terminate
        enforcement: enforce
        rules:
          - allow:
              method: GET
              path: "/**/"
          - allow:
              method: HEAD
              path: "/**/"
          - allow:
              method: OPTIONS
              path: "/**/"
    binaries:
      - { path: /usr/bin/gh }

  nvidia:
    name: nvidia
    endpoints:
      - { host: integrate.api.nvidia.com, port: 443 }
    binaries:
      - { path: /usr/bin/curl }
      - { path: /bin/bash }
      - { path: /usr/local/bin/opencode }

  nvidia_web:
    name: nvidia_web
    endpoints:
      - { host: nvidia.com, port: 443 }
      - { host: www.nvidia.com, port: 443 }
    binaries:
      - { path: /usr/bin/curl }
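The binary-matching scheme described in the `policy.yaml` comments (resolve the connecting process via `/proc/{pid}/exe`, then walk the `PPid` chain in `/proc/{pid}/status`) can be sketched in a few lines of Python. This is an illustrative sketch assuming a Linux `/proc`; `exe_path` and `ancestor_pids` are hypothetical helpers, not OpenShell code:

```python
import os


def exe_path(pid: int) -> str:
    """Resolve the binary behind a PID via the /proc/{pid}/exe symlink
    (requires permission to read the symlink, e.g. own process or root)."""
    return os.readlink(f"/proc/{pid}/exe")


def ancestor_pids(pid: int) -> list:
    """Walk the PPid chain from /proc/{pid}/status until the chain
    terminates (PPid 0 at the root of the PID namespace)."""
    chain = []
    while True:
        with open(f"/proc/{pid}/status") as f:
            status = dict(
                line.rstrip("\n").split(":\t", 1) for line in f if ":\t" in line
            )
        ppid = int(status["PPid"])
        if ppid == 0:
            break
        chain.append(ppid)
        pid = ppid
    return chain
```

A policy engine following the comments would check both the direct `exe_path` and every entry of `ancestor_pids` against the `binaries` lists above before admitting a CONNECT.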