diff --git a/docs/AuthoringApps.md b/docs/AuthoringApps.md new file mode 100644 index 00000000..082d8316 --- /dev/null +++ b/docs/AuthoringApps.md @@ -0,0 +1,375 @@ +# Authoring Apps for Fileglancer + +Fileglancer can discover and run apps from GitHub repositories. An app is defined by a `runnables.yaml` manifest file that describes one or more commands (called **runnables**) that users can launch as cluster jobs through the Fileglancer UI. + +## Quick Start + +1. Create a `runnables.yaml` file in your GitHub repository +2. Define your runnables with their commands and parameters +3. Add the repo URL in Fileglancer's Apps page + +Minimal example: + +```yaml +name: My Tool +runnables: + - id: run + name: Run My Tool + command: python main.py + parameters: [] +``` + +## Manifest Discovery + +When a user adds a GitHub repository, Fileglancer clones it and walks the directory tree looking for `runnables.yaml` files. + +### Multi-App Repositories + +A single repository can contain multiple apps by placing manifest files in subdirectories: + +``` +my-repo/ +├── tool1/ +│ ├── runnables.yaml # App: "Image Converter" +│ └── convert.py +├── tool2/ +│ ├── runnables.yaml # App: "Data Analyzer" +│ └── analyze.py +└── README.md +``` + +Each manifest is discovered and registered as a separate app. When a job runs, the working directory is set to the subdirectory containing the manifest, so relative paths in commands resolve correctly. + +The following directories are skipped during discovery: `.git`, `node_modules`, `__pycache__`, `.pixi`, `.venv`, `venv`. 
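The discovery walk can be sketched in Python. This is a simplified illustration of the behavior described above (the server's actual implementation also parses and validates each manifest):

```python
import os
from pathlib import Path

# Directories pruned during discovery, as listed above
SKIP_DIRS = {".git", "node_modules", "__pycache__", ".pixi", ".venv", "venv"}

def find_manifest_dirs(repo_dir: str) -> list[str]:
    """Return repo-relative directories that contain a runnables.yaml."""
    found = []
    for dirpath, dirnames, filenames in os.walk(repo_dir, topdown=True):
        # Prune skipped directories in place so os.walk never descends into them
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        if "runnables.yaml" in filenames:
            rel = Path(dirpath).relative_to(repo_dir)
            found.append("" if str(rel) == "." else str(rel))
    return found
```

Each returned directory becomes a separate app, and also serves as the working directory when that app's jobs run.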
+ +## Manifest Reference + +### Top-Level Fields + +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `name` | string | yes | Display name shown in the Fileglancer UI | +| `description` | string | no | Short description of the app | +| `version` | string | no | Version string (for display only) | +| `repo_url` | string | no | GitHub URL of a separate repository containing the tool code (see [Separate Tool Repo](#separate-tool-repo)) | +| `requirements` | list of strings | no | Tools that must be available on the server (see [Requirements](#requirements)) | +| `runnables` | list of objects | yes | One or more runnable definitions (see [Runnables](#runnables)) | + +### Requirements + +The `requirements` field lists tools that must be installed on the server before the job can run. Each entry is a tool name with an optional version constraint. + +```yaml +requirements: + - "pixi>=0.40" + - npm + - "maven>=3.9" +``` + +**Supported tools:** `pixi`, `npm`, `maven` + +**Supported version operators:** `>=`, `<=`, `!=`, `==`, `>`, `<` + +If a requirement is not met (tool missing or version too old), job submission fails with a descriptive error message. If `requirements` is omitted or empty, no checks are performed. + +### Runnables + +Each runnable defines a single command that users can launch. If the manifest has multiple runnables, the user selects which one to run. 
+ +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `id` | string | yes | Unique identifier (used in CLI flags and URLs, should be URL-safe) | +| `name` | string | yes | Display name shown in the UI | +| `description` | string | no | Longer description of what this runnable does | +| `command` | string | yes | Base shell command to execute (see [Command Building](#command-building)) | +| `parameters` | list of objects | no | Parameter definitions (see [Parameters](#parameters)) | +| `resources` | object | no | Default cluster resource requests (see [Resources](#resources)) | +| `env` | object | no | Default environment variables to export (see [Environment Variables](#environment-variables)) | +| `pre_run` | string | no | Shell script to run before the main command (see [Pre/Post-Run Scripts](#prepost-run-scripts)) | +| `post_run` | string | no | Shell script to run after the main command (see [Pre/Post-Run Scripts](#prepost-run-scripts)) | + +### Parameters + +Parameters define the inputs that users fill in through the Fileglancer UI. Each parameter with a `flag` field becomes a CLI flag appended to the base command. Parameters without a `flag` are emitted as positional arguments. + +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `flag` | string | no | CLI flag syntax (e.g. `--outdir`, `-n`). Omit for positional arguments. Must start with `-` | +| `name` | string | yes | Display label in the UI | +| `type` | string | yes | Data type (see [Parameter Types](#parameter-types)) | +| `description` | string | no | Help text shown below the input field | +| `required` | boolean | no | Whether the user must provide a value. Default: `false` | +| `default` | any | no | Pre-filled default value. 
Type must match the parameter type | +| `options` | list of strings | no | Allowed values (only for `enum` type) | +| `min` | number | no | Minimum value (only for `integer` and `number` types) | +| `max` | number | no | Maximum value (only for `integer` and `number` types) | +| `pattern` | string | no | Regex validation pattern (only for `string` type, uses full match) | + +### Parameter Sections + +Parameters can be grouped into collapsible sections in the UI. A section is an item in the `parameters` list that has a `section` key instead of `name`/`type`. Sections contain their own nested `parameters` list (one level deep only). Top-level parameters and sections can be interleaved freely. + +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `section` | string | yes | Section title displayed in the UI | +| `description` | string | no | Help text shown next to the section title | +| `collapsed` | boolean | no | Whether the section starts collapsed. Default: `false` | +| `parameters` | list of objects | no | Parameter definitions within this section (same schema as top-level parameters) | + +```yaml +parameters: + # Top-level parameter (always visible) + - flag: --input + name: Input Path + type: file + required: true + + # Collapsible section + - section: Advanced Options + description: Optional tuning parameters + collapsed: true + parameters: + - flag: --chunk_size + name: Chunk Size + type: string + default: "128,128,128" + - flag: --verbose + name: Verbose + type: boolean + default: false +``` + +When a section has `collapsed: true`, it renders as a closed accordion in the UI. Users can click to expand it and see the parameters inside. Sections without `collapsed` (or with `collapsed: false`) start expanded. + +On form validation, any section containing a parameter with an error is automatically expanded so the user can see and fix the problem. 
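For command building, sections are transparent: the server flattens them into a single ordered parameter list before emitting arguments. A minimal sketch of that flattening (illustrative helper, not the actual implementation):

```python
def flatten_parameters(parameters: list[dict]) -> list[dict]:
    """Flatten one level of sections into a single ordered parameter list.

    Items with a "section" key contribute their nested parameters in place;
    plain parameters pass through unchanged.
    """
    flat = []
    for item in parameters:
        if "section" in item:
            flat.extend(item.get("parameters", []))  # sections nest one level only
        else:
            flat.append(item)
    return flat
```

Because flattening preserves order, a parameter inside a collapsed section still occupies its declared position in the generated command.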
+ +### Flag Forms + +Parameters support three flag styles: + +- **Double-dash flags** (most common): `flag: --outdir` emits `--outdir '/path'` +- **Single-dash flags**: `flag: -n` emits `-n 5` +- **Positional arguments**: Omit `flag` entirely. The value is emitted as a bare argument (no flag prefix) + +An internal `key` is auto-generated from the flag: `--outdir` becomes key `outdir`, `-n` becomes key `n`. Positional parameters get keys `_arg0`, `_arg1`, etc. Keys must be unique within a runnable. + +### Parameter Types + +| Type | UI Control | CLI Output (flagged) | CLI Output (positional) | Validation | +|------|-----------|---------------------|------------------------|------------| +| `string` | Text input | `--flag 'value'` | `'value'` | Optional `pattern` regex (full match) | +| `integer` | Number input (step=1) | `--flag 42` | `42` | Must be a whole number. Optional `min`/`max` bounds | +| `number` | Number input | `--flag 3.14` | `3.14` | Must be numeric. Optional `min`/`max` bounds | +| `boolean` | Checkbox | `--flag` (if true, omitted if false) | N/A | Must be true/false | +| `file` | Text input + file browser | `--flag '/path/to/file'` | `'/path/to/file'` | Must be an absolute path. Path must exist and be readable on the server | +| `directory` | Text input + directory browser | `--flag '/path/to/dir'` | `'/path/to/dir'` | Must be an absolute path. Path must exist and be readable on the server | +| `enum` | Dropdown select | `--flag 'chosen_value'` | `'chosen_value'` | Value must be one of the `options` list | + +**Notes on `file` and `directory` types:** +- The Fileglancer UI provides a file browser button alongside the text input +- Paths are validated server-side before job submission (must exist and be accessible) +- Both absolute paths (`/data/images`) and home-relative paths (`~/output`) are accepted +- Shell metacharacters (`;`, `&`, `|`, `` ` ``, `$`, `(`, `)`, etc.) 
are rejected for safety + +### Resources + +Default resource requests for the cluster scheduler. Users can override these in the UI before submitting. + +| Field | Type | Description | +|-------|------|-------------| +| `cpus` | integer | Number of CPUs to request | +| `memory` | string | Memory allocation, e.g. `"16 GB"` | +| `walltime` | string | Wall clock time limit, e.g. `"04:00"` (hours:minutes) | + +If omitted, the server's global defaults are used. User overrides take highest priority, followed by the runnable's defaults, then the server defaults. + +### Environment Variables + +The `env` field defines default environment variables that are exported before the main command runs. Each entry is a key-value pair where the key is the variable name and the value is the default string value. + +```yaml +runnables: + - id: convert + name: Convert to OME-Zarr + command: nextflow run main.nf + env: + JAVA_HOME: /opt/java + NXF_SINGULARITY_CACHEDIR: /scratch/singularity +``` + +Users can override or extend these in the Fileglancer UI before submitting a job. Variable names must match `[A-Za-z_][A-Za-z0-9_]*` and values are shell-quoted with `shlex.quote()` for safety. + +### Pre/Post-Run Scripts + +The `pre_run` and `post_run` fields allow you to specify shell commands that run before and after the main command, respectively. These are useful for loading modules, setting up the environment, or performing cleanup. + +```yaml +runnables: + - id: convert + name: Convert to OME-Zarr + command: nextflow run main.nf + pre_run: | + module load java/21 + post_run: | + echo "Conversion complete" +``` + +Users can override these in the UI. If a user provides their own pre/post-run script, it replaces the manifest default entirely. 

The generated job script has the following structure:

```bash
unset PIXI_PROJECT_MANIFEST
cd /path/to/repo

# Environment variables
export JAVA_HOME='/opt/java'
export NXF_SINGULARITY_CACHEDIR='/scratch/singularity'

# Pre-run script
module load java/21

# Main command
nextflow run main.nf \
    --input '/data/input' \
    --outdir '/data/output'

# Post-run script
echo "Conversion complete"
```

## Command Building

When a job is submitted, Fileglancer constructs the full shell command from the runnable's `command` field and the user-provided parameter values using a two-pass approach:

1. Start with the base `command` string
2. Merge user-provided values with defaults for any parameters the user didn't set
3. **Pass 1 — Positional arguments**: Emit values for parameters without a `flag`, in declaration order, as bare shell-quoted values
4. **Pass 2 — Flagged arguments**: Emit values for parameters with a `flag`, in declaration order:
   - Boolean `true` → append the flag (e.g. `--verbose`)
   - Boolean `false` → omit entirely
   - All other types → append `{flag} {shell_quoted_value}`
5. Join all parts with line-continuation (`\`) for readability

For example, given this runnable:

```yaml
command: pixi run python demo.py
parameters:
  - flag: --message
    name: Message
    type: string
    required: true
  - flag: --repeat
    name: Repeat Count
    type: integer
    default: 3
  - flag: --verbose
    name: Verbose
    type: boolean
    default: false
```

If the user provides `message: "Hello"`, `verbose: true`, and leaves `repeat` at its default, the resulting command is:

```bash
pixi run python demo.py \
  --message 'Hello' \
  --repeat '3' \
  --verbose
```

Note that `--repeat` precedes `--verbose` because flags are emitted in declaration order.

All string values are shell-quoted using `shlex.quote()` to prevent injection.

## Separate Tool Repo

By default, the job runs inside the cloned repository that contains the manifest.
If your tool code lives in a different repository, use the `repo_url` field: + +```yaml +name: My Pipeline +repo_url: https://github.com/org/pipeline-code +runnables: + - id: run + name: Run Pipeline + command: nextflow run main.nf + parameters: [] +``` + +When `repo_url` is set: +- The discovery repo (containing `runnables.yaml`) is used only for manifest metadata +- The tool repo (`repo_url`) is cloned separately and used as the working directory for the job +- The user can opt to "pull latest" before each run to get the newest code from both repos + +## Job Execution + +When a user submits a job: + +1. The manifest is re-fetched from the cached clone +2. Requirements are verified on the server +3. The command is built with validated parameters +4. A working directory is created at `~/.fileglancer/jobs/{id}-{app}-{runnable}/` +5. The repository is symlinked into the working directory +6. The command runs on the cluster with `stdout.log` and `stderr.log` captured +7. Job status is monitored and updated in real time (PENDING → RUNNING → DONE/FAILED/KILLED) + +Users can view logs, relaunch with the same parameters, or cancel running jobs from the Fileglancer UI. 
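The two-pass build described earlier under Command Building can be sketched as follows (simplified: parameter validation is omitted, and the parameter representation here is illustrative):

```python
import shlex

def build_command(base: str, params: list[dict], values: dict) -> str:
    """Sketch of the two-pass command builder (validation omitted)."""
    parts = [base]
    # Merge user-provided values with declared defaults
    effective = {}
    for p in params:
        if p["key"] in values:
            effective[p["key"]] = values[p["key"]]
        elif "default" in p:
            effective[p["key"]] = p["default"]
    # Pass 1: positional parameters (no flag), in declaration order
    for p in params:
        if p.get("flag") is None and p["key"] in effective:
            parts.append(shlex.quote(str(effective[p["key"]])))
    # Pass 2: flagged parameters, in declaration order
    for p in params:
        flag = p.get("flag")
        if flag is None or p["key"] not in effective:
            continue
        value = effective[p["key"]]
        if p["type"] == "boolean":
            if value is True:
                parts.append(flag)  # bare flag when true, omitted when false
        else:
            parts.append(f"{flag} {shlex.quote(str(value))}")
    return " \\\n  ".join(parts)
```

Note that `shlex.quote` only adds quotes when a value contains characters that need escaping, so simple values such as `Hello` or `3` appear unquoted in the actual output.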
+ +## Full Example + +```yaml +name: OME-Zarr Converter +description: Convert Bio-Formats-compatible images to OME-Zarr using bioformats2raw +version: "1.0" + +runnables: + - id: convert + name: Convert to OME-Zarr + description: Convert image files or directories to OME-Zarr format + command: nextflow run JaneliaSciComp/nf-omezarr -profile singularity + parameters: + - flag: --input + name: Input Path + type: file + description: Path to input image file or directory containing image files + required: true + + - flag: --outdir + name: Output Directory + type: directory + description: Directory where converted OME-Zarr outputs will be saved + required: true + + - flag: --chunk_size + name: Chunk Size + type: string + description: Zarr chunk size in X,Y,Z order + default: "128,128,128" + + - flag: --compression + name: Compression + type: enum + description: Block compression algorithm + options: + - blosc + - zlib + default: blosc + + - flag: --overwrite + name: Overwrite Existing + type: boolean + description: Overwrite images in the output directory if they exist + default: false + + - flag: --cpus + name: CPUs per Task + type: integer + description: Number of cores to allocate for each bioformats2raw task + default: 10 + min: 1 + max: 500 + + resources: + cpus: 4 + memory: "16 GB" + walltime: "24:00" +``` diff --git a/docs/Development.md b/docs/Development.md index 0ca21579..3030360e 100644 --- a/docs/Development.md +++ b/docs/Development.md @@ -157,6 +157,22 @@ https://fileglancer-dev.int.janelia.org/ **Important:** Remember to remove or comment out this entry from `/etc/hosts` when you're done testing, especially if the domain is used in production. +### Cluster Configuration (Apps Feature) + +The Apps feature requires a `cluster` section in `config.yaml` to submit jobs. See `config.yaml.template` for all available options. 
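A minimal `config.yaml` fragment for local development might look like this (values are illustrative; see the template for the full set of fields):

```yaml
cluster:
  executor: local      # run jobs as local processes; use "lsf" on the cluster
  job_name_prefix: fg  # required for job reconnection after restarts
  cpus: 1
  memory: "8 GB"
  walltime: "04:00"
```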
+ +#### Job Reconnection + +If the Fileglancer server restarts while jobs are running on the cluster, it automatically attempts to reconnect to those jobs on startup. This requires `job_name_prefix` to be set in the cluster configuration — the prefix is used to identify jobs belonging to this Fileglancer instance when querying the cluster scheduler (e.g. `bjobs` for LSF). + +```yaml +cluster: + executor: lsf + job_name_prefix: fg # required for reconnection +``` + +Without `job_name_prefix`, reconnection is silently skipped and active jobs will not be re-tracked. They will eventually be marked FAILED after the `zombie_timeout_minutes` period (default: 30 minutes). + ### Troubleshooting If you run into any build issues, the first thing to try is to clear the build directories and start from scratch: diff --git a/docs/config.yaml.template b/docs/config.yaml.template index 1a82cf02..5f44863f 100644 --- a/docs/config.yaml.template +++ b/docs/config.yaml.template @@ -123,3 +123,29 @@ session_cookie_name: fg_session # Set to true for production with valid HTTPS certificates # session_cookie_secure: true + +# +# Cluster configuration for Apps feature +# Mirrors py-cluster-api ClusterConfig - all fields are optional +# See: https://github.com/JaneliaSciComp/py-cluster-api +# +# cluster: +# executor: local # "local" or "lsf" +# job_name_prefix: fg # Prefix for cluster job names. REQUIRED for job +# # reconnection after server restarts. Without this, +# # active jobs will not be re-tracked and will +# # eventually be marked FAILED by the zombie timeout. 
+# memory: "8 GB" # Default memory allocation +# walltime: "04:00" # Default walltime (HH:MM) +# cpus: 1 # Default CPU count +# queue: normal # LSF queue name +# poll_interval: 10.0 # Job status polling interval (seconds) +# lsf_units: MB # Memory units for LSF (KB, MB, GB) +# suppress_job_email: true # Suppress LSF job email notifications +# extra_directives: # Additional scheduler directives (prefix auto-added) +# - "-P your_account" +# extra_args: # Extra CLI args appended to submit command +# - "-P" +# - "your_project" +# script_prologue: # Commands to run before each job +# - "module load java/11" diff --git a/fileglancer/alembic/versions/a3f7c2e19d04_add_jobs_table.py b/fileglancer/alembic/versions/a3f7c2e19d04_add_jobs_table.py new file mode 100644 index 00000000..b8b95320 --- /dev/null +++ b/fileglancer/alembic/versions/a3f7c2e19d04_add_jobs_table.py @@ -0,0 +1,49 @@ +"""add jobs table + +Revision ID: a3f7c2e19d04 +Revises: 2d1f0e6b8c91 +Create Date: 2026-02-08 00:00:00.000000 + +""" +from alembic import op +import sqlalchemy as sa + + +# revision identifiers, used by Alembic. 
+revision = 'a3f7c2e19d04' +down_revision = '2d1f0e6b8c91' +branch_labels = None +depends_on = None + + +def upgrade() -> None: + op.create_table( + 'jobs', + sa.Column('id', sa.Integer(), primary_key=True, autoincrement=True), + sa.Column('username', sa.String(), nullable=False), + sa.Column('cluster_job_id', sa.String(), nullable=True), + sa.Column('app_url', sa.String(), nullable=False), + sa.Column('app_name', sa.String(), nullable=False), + sa.Column('manifest_path', sa.String(), nullable=False, server_default=''), + sa.Column('entry_point_id', sa.String(), nullable=False), + sa.Column('entry_point_name', sa.String(), nullable=False), + sa.Column('parameters', sa.JSON(), nullable=False), + sa.Column('status', sa.String(), nullable=False), + sa.Column('exit_code', sa.Integer(), nullable=True), + sa.Column('resources', sa.JSON(), nullable=True), + sa.Column('env', sa.JSON(), nullable=True), + sa.Column('pre_run', sa.String(), nullable=True), + sa.Column('post_run', sa.String(), nullable=True), + sa.Column('pull_latest', sa.Boolean(), nullable=False, server_default='0'), + sa.Column('created_at', sa.DateTime(), nullable=False), + sa.Column('started_at', sa.DateTime(), nullable=True), + sa.Column('finished_at', sa.DateTime(), nullable=True), + ) + op.create_index('ix_jobs_username', 'jobs', ['username']) + op.create_index('ix_jobs_cluster_job_id', 'jobs', ['cluster_job_id']) + + +def downgrade() -> None: + op.drop_index('ix_jobs_cluster_job_id', table_name='jobs') + op.drop_index('ix_jobs_username', table_name='jobs') + op.drop_table('jobs') diff --git a/fileglancer/apps.py b/fileglancer/apps.py new file mode 100644 index 00000000..cf05a9be --- /dev/null +++ b/fileglancer/apps.py @@ -0,0 +1,857 @@ +"""Apps module for fetching manifests, building commands, and managing cluster jobs.""" + +import asyncio +import os +import re +import shlex +import shutil +import subprocess +from pathlib import Path +from datetime import datetime, UTC +from typing import Optional + 
+import yaml +from loguru import logger +from packaging.specifiers import SpecifierSet +from packaging.version import Version + +from cluster_api import create_executor, ResourceSpec, JobMonitor +from cluster_api._types import JobStatus + +from fileglancer import database as db +from fileglancer.model import AppManifest, AppEntryPoint, AppParameter +from fileglancer.settings import get_settings + + +_MANIFEST_FILENAME = "runnables.yaml" + +_REPO_CACHE_BASE = Path(os.path.expanduser("~/.fileglancer/apps")) +_repo_locks: dict[str, asyncio.Lock] = {} + + +def _get_repo_lock(owner: str, repo: str, branch: str) -> asyncio.Lock: + """Get or create an asyncio lock for a specific repo+branch.""" + key = f"{owner}/{repo}/{branch}" + if key not in _repo_locks: + _repo_locks[key] = asyncio.Lock() + return _repo_locks[key] + + +def _parse_github_url(url: str) -> tuple[str, str, str]: + """Parse a GitHub repo URL into (owner, repo, branch). + + Raises ValueError if not a valid GitHub repo URL. + """ + pattern = r"https?://github\.com/([^/]+)/([^/]+?)(?:\.git)?(?:/tree/([^/]+))?/?$" + match = re.match(pattern, url) + if not match: + raise ValueError( + f"Invalid app URL: '{url}'. Only GitHub repository URLs are supported " + f"(e.g., https://github.com/owner/repo)." + ) + owner, repo, branch = match.groups() + branch = branch or "main" + + # Validate segments to prevent path traversal + for name, value in [("owner", owner), ("repo", repo), ("branch", branch)]: + if ".." in value or "\x00" in value: + raise ValueError( + f"Invalid app URL: {name} '{value}' contains invalid characters" + ) + + return owner, repo, branch + + +async def _run_git(args: list[str], timeout: int = 60): + """Run a git command asynchronously. + + Raises ValueError with a readable message on failure. 
+ """ + try: + proc = await asyncio.wait_for( + asyncio.create_subprocess_exec( + *args, + stdout=asyncio.subprocess.PIPE, + stderr=asyncio.subprocess.PIPE, + ), + timeout=timeout, + ) + stdout, stderr = await proc.communicate() + except asyncio.TimeoutError: + raise ValueError(f"Git command timed out after {timeout}s: {' '.join(args)}") + + if proc.returncode != 0: + err = stderr.decode().strip() if stderr else "unknown error" + raise ValueError(f"Git command failed: {err}") + + +async def _ensure_repo_cache(url: str, pull: bool = False) -> Path: + """Clone or update the GitHub repo in per-user cache. Returns repo path. + + Cache is keyed by owner/repo/branch to avoid checkout races between branches. + An asyncio lock serializes git operations for the same repo+branch. + """ + owner, repo, branch = _parse_github_url(url) + repo_dir = (_REPO_CACHE_BASE / owner / repo / branch).resolve() + repo_dir.relative_to(_REPO_CACHE_BASE.resolve()) + lock = _get_repo_lock(owner, repo, branch) + + async with lock: + if repo_dir.exists(): + logger.debug(f"Repo cache hit: {owner}/{repo} ({branch})") + if pull: + logger.info(f"Pulling latest for {owner}/{repo} ({branch})") + await _run_git(["git", "-C", str(repo_dir), "pull", "origin", branch]) + else: + logger.info(f"Cloning {owner}/{repo} ({branch}) into {repo_dir}") + repo_dir.parent.mkdir(parents=True, exist_ok=True) + clone_url = f"https://github.com/{owner}/{repo}.git" + await _run_git( + ["git", "clone", "--branch", branch, clone_url, str(repo_dir)], + timeout=120, + ) + + return repo_dir + + +_SKIP_DIRS = {'.git', 'node_modules', '__pycache__', '.pixi', '.venv', 'venv'} + + +def _read_manifest_file(manifest_dir: Path) -> AppManifest: + """Read and validate a runnables.yaml file from the given directory. + + Raises ValueError if the file is not found. + """ + filepath = manifest_dir / _MANIFEST_FILENAME + if not filepath.is_file(): + raise ValueError( + f"No {_MANIFEST_FILENAME} found in {manifest_dir}." 
+ ) + data = yaml.safe_load(filepath.read_text()) + return AppManifest(**data) + + +def _find_manifests_in_repo(repo_dir: Path) -> list[tuple[str, AppManifest]]: + """Walk the cloned repo and discover all manifest files. + + Returns a list of (relative_dir_path, AppManifest) tuples. + Uses "" for root-level manifests. + """ + results: list[tuple[str, AppManifest]] = [] + + for dirpath, dirnames, filenames in os.walk(repo_dir, topdown=True): + # Prune directories we should skip + dirnames[:] = [d for d in dirnames if d not in _SKIP_DIRS] + + if _MANIFEST_FILENAME not in filenames: + continue + + current = Path(dirpath) + try: + manifest = _read_manifest_file(current) + except (ValueError, Exception) as e: + logger.warning(f"Skipping invalid manifest in {dirpath}: {e}") + continue + + # Compute relative path from repo root + rel = current.relative_to(repo_dir) + rel_str = str(rel) if str(rel) != "." else "" + results.append((rel_str, manifest)) + + return results + + +async def fetch_app_manifest(url: str, manifest_path: str = "") -> AppManifest: + """Fetch and validate an app manifest from a cloned repo. + + Clones the repo if needed, then reads the manifest from disk. + """ + repo_dir = await _ensure_repo_cache(url) + target_dir = repo_dir / manifest_path if manifest_path else repo_dir + return _read_manifest_file(target_dir) + + +# --- Requirement Verification --- + +_TOOL_REGISTRY = { + "pixi": { + "version_args": ["pixi", "--version"], + "version_pattern": r"pixi (\S+)", + }, + "npm": { + "version_args": ["npm", "--version"], + "version_pattern": r"^(\S+)$", + }, + "maven": { + "version_args": ["mvn", "--version"], + "version_pattern": r"Apache Maven (\S+)", + }, +} + +_REQ_PATTERN = re.compile(r"^([a-zA-Z][a-zA-Z0-9_-]*)\s*((?:>=|<=|!=|==|>|<)\s*\S+)?$") + + +def verify_requirements(requirements: list[str]): + """Verify that all required tools are available and meet version constraints. + + Raises ValueError with a message listing all unmet requirements. 
+ """ + if not requirements: + return + + errors = [] + + for req in requirements: + match = _REQ_PATTERN.match(req.strip()) + if not match: + errors.append(f"Invalid requirement format: '{req}'") + continue + + tool = match.group(1) + version_spec = match.group(2) + + # Check tool exists on PATH + if shutil.which(tool) is None: + # For maven, the binary is 'mvn' not 'maven' + registry_entry = _TOOL_REGISTRY.get(tool) + binary = registry_entry["version_args"][0] if registry_entry else tool + if binary != tool and shutil.which(binary) is not None: + pass # binary found under alternate name + else: + errors.append(f"Required tool '{tool}' is not installed or not on PATH") + continue + + if version_spec: + registry_entry = _TOOL_REGISTRY.get(tool) + if not registry_entry: + errors.append(f"Cannot check version for '{tool}': no version command configured") + continue + + try: + result = subprocess.run( + registry_entry["version_args"], + capture_output=True, text=True, timeout=10, + ) + output = result.stdout.strip() or result.stderr.strip() + ver_match = re.search(registry_entry["version_pattern"], output) + if not ver_match: + errors.append( + f"Could not parse version for '{tool}' from output: {output!r}" + ) + continue + + installed = Version(ver_match.group(1)) + specifier = SpecifierSet(version_spec.strip()) + if not specifier.contains(installed): + errors.append( + f"'{tool}' version {installed} does not satisfy {version_spec.strip()}" + ) + except FileNotFoundError: + errors.append(f"Required tool '{tool}' is not installed or not on PATH") + except subprocess.TimeoutExpired: + errors.append(f"Timed out checking version for '{tool}'") + + if errors: + raise ValueError("Unmet requirements:\n - " + "\n - ".join(errors)) + + +# --- Command Building --- + +# Characters that are dangerous in shell commands +_SHELL_METACHAR_PATTERN = re.compile(r'[;&|`$(){}!<>\n\r]') + +# Valid environment variable name +_ENV_VAR_NAME_PATTERN = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*$') 
+ + +def _validate_parameter_value(param: AppParameter, value) -> str: + """Validate a single parameter value against its schema and return the string representation. + + Raises ValueError if validation fails. + """ + if param.type == "boolean": + if not isinstance(value, bool): + raise ValueError(f"Parameter '{param.name}' must be a boolean") + return str(value) + + if param.type == "integer": + try: + int_val = int(value) + except (TypeError, ValueError): + raise ValueError(f"Parameter '{param.name}' must be an integer") + if param.min is not None and int_val < param.min: + raise ValueError(f"Parameter '{param.name}' must be >= {param.min}") + if param.max is not None and int_val > param.max: + raise ValueError(f"Parameter '{param.name}' must be <= {param.max}") + return str(int_val) + + if param.type == "number": + try: + num_val = float(value) + except (TypeError, ValueError): + raise ValueError(f"Parameter '{param.name}' must be a number") + if param.min is not None and num_val < param.min: + raise ValueError(f"Parameter '{param.name}' must be >= {param.min}") + if param.max is not None and num_val > param.max: + raise ValueError(f"Parameter '{param.name}' must be <= {param.max}") + return str(num_val) + + if param.type == "enum": + str_val = str(value) + if param.options and str_val not in param.options: + raise ValueError(f"Parameter '{param.name}' must be one of {param.options}") + return str_val + + # string, file, directory + str_val = str(value) + + if param.type in ("file", "directory"): + # Normalize backslashes to forward slashes + str_val = str_val.replace("\\", "/") + + # Validate path characters + if _SHELL_METACHAR_PATTERN.search(str_val): + raise ValueError(f"Parameter '{param.name}' contains invalid characters") + + # Require absolute path + if not str_val.startswith("/") and not str_val.startswith("~"): + raise ValueError(f"Parameter '{param.name}' must be an absolute path (starting with / or ~)") + + # Verify path exists + expanded = 
os.path.expanduser(str_val) + if not os.path.exists(expanded): + raise ValueError(f"Parameter '{param.name}': path does not exist: {str_val}") + if not os.access(expanded, os.R_OK): + raise ValueError(f"Parameter '{param.name}': path is not accessible: {str_val}") + + if param.type == "string" and param.pattern: + if not re.fullmatch(param.pattern, str_val): + raise ValueError(f"Parameter '{param.name}' does not match required pattern") + + return str_val + + +def build_command(entry_point: AppEntryPoint, parameters: dict) -> str: + """Build a shell command from an entry point and parameter values. + + All parameter values are validated and shell-escaped. + Positional parameters (no flag) are emitted first in declaration order, + then flagged parameters in declaration order. + Raises ValueError for invalid parameters. + """ + # Build a lookup of parameter definitions by key + flat_params = entry_point.flat_parameters() + param_defs = {p.key: p for p in flat_params} + + # Validate required parameters + for param in flat_params: + if param.required and param.key not in parameters: + if param.default is None: + raise ValueError(f"Required parameter '{param.name}' is missing") + + # Check for unknown parameters + for param_key in parameters: + if param_key not in param_defs: + raise ValueError(f"Unknown parameter '{param_key}'") + + # Compute effective values: user-provided merged with defaults + effective: dict[str, tuple[AppParameter, any]] = {} + for param in flat_params: + if param.key in parameters: + effective[param.key] = (param, parameters[param.key]) + elif param.default is not None: + effective[param.key] = (param, param.default) + + # Start with the base command + parts = [entry_point.command] + + # Pass 1: Positional args in declaration order + for param in flat_params: + if param.flag is not None: + continue + if param.key not in effective: + continue + p, value = effective[param.key] + validated = _validate_parameter_value(p, value) + 
parts.append(shlex.quote(validated)) + + # Pass 2: Flagged args in declaration order + for param in flat_params: + if param.flag is None: + continue + if param.key not in effective: + continue + p, value = effective[param.key] + validated = _validate_parameter_value(p, value) + if p.type == "boolean": + if value is True: + parts.append(p.flag) + else: + parts.append(f"{p.flag} {shlex.quote(validated)}") + + return (" \\\n ").join(parts) + + +# --- Executor Management --- + +_executor = None +_monitor = None +_monitor_task = None + + +async def get_executor(): + """Get or create the cluster executor singleton.""" + global _executor + if _executor is None: + settings = get_settings() + config = settings.cluster.model_dump(exclude_none=True) + _executor = create_executor(**config) + return _executor + + +async def start_job_monitor(): + """Start the background job monitoring loop.""" + global _monitor, _monitor_task + + settings = get_settings() + executor = await get_executor() + + # Reconnect to any previously submitted jobs (e.g. 
after server restart) + try: + reconnected = await executor.reconnect() + if reconnected: + for record in reconnected: + record.on_exit(_on_job_exit) + logger.info(f"Reconnected to {len(reconnected)} existing cluster jobs") + except Exception as e: + logger.debug(f"Job reconnection skipped: {e}") + + _monitor = JobMonitor(executor, poll_interval=settings.cluster.poll_interval) + await _monitor.start() + + # Start reconciliation loop + _monitor_task = asyncio.create_task(_reconcile_loop(settings)) + logger.info("Job monitor started") + + +async def stop_job_monitor(): + """Stop the background job monitoring loop.""" + global _monitor, _monitor_task + + if _monitor_task: + _monitor_task.cancel() + try: + await _monitor_task + except asyncio.CancelledError: + pass + _monitor_task = None + + if _monitor: + await _monitor.stop() + _monitor = None + + logger.info("Job monitor stopped") + + +async def _reconcile_loop(settings): + """Periodically reconcile DB job statuses with cluster state.""" + while True: + try: + await _reconcile_jobs(settings) + except Exception: + logger.exception("Error in job reconciliation loop") + + await asyncio.sleep(settings.cluster.poll_interval) + + +async def _reconcile_jobs(settings): + """Reconcile DB job statuses with the executor's tracked jobs.""" + executor = await get_executor() + + with db.get_db_session(settings.db_url) as session: + active_jobs = db.get_active_jobs(session) + + for db_job in active_jobs: + if not db_job.cluster_job_id: + # Job never got a cluster_job_id - submission didn't complete. + # Mark FAILED if it's been stuck longer than zombie_timeout. 
+                created = db_job.created_at.replace(tzinfo=None) if db_job.created_at.tzinfo else db_job.created_at
+                age_minutes = (datetime.now(UTC).replace(tzinfo=None) - created).total_seconds() / 60
+                if age_minutes > settings.cluster.zombie_timeout_minutes:
+                    db.update_job_status(session, db_job.id, "FAILED", finished_at=datetime.now(UTC))
+                    logger.warning(
+                        f"Job {db_job.id} has no cluster_job_id after "
+                        f"{age_minutes:.0f} minutes, marked FAILED"
+                    )
+                continue
+
+            # Check if executor is tracking this job
+            tracked = executor.jobs.get(db_job.cluster_job_id)
+            if tracked is None:
+                # Job was purged from executor tracking. Terminal status
+                # updates are handled by the on_exit callback, so this
+                # means either the callback already fired or the job was
+                # lost (e.g. server restart without reconnection).
+                # Skip it here — the zombie timeout above will catch
+                # truly stuck jobs that never got a cluster_job_id.
+                continue
+
+            # Sync non-terminal status changes (e.g. PENDING -> RUNNING).
+            # Terminal transitions are handled by the on_exit callback.
+            new_status = _map_status(tracked.status)
+            if new_status != db_job.status:
+                db.update_job_status(
+                    session, db_job.id, new_status,
+                    exit_code=tracked.exit_code,
+                    started_at=tracked.start_time,
+                    finished_at=tracked.finish_time,
+                )
+                logger.info(f"Job {db_job.id} status updated: {db_job.status} -> {new_status}")
+
+
+def _map_status(status: JobStatus) -> str:
+    """Map py-cluster-api JobStatus to our string status."""
+    mapping = {
+        JobStatus.PENDING: "PENDING",
+        JobStatus.RUNNING: "RUNNING",
+        JobStatus.DONE: "DONE",
+        JobStatus.FAILED: "FAILED",
+        JobStatus.KILLED: "KILLED",
+        JobStatus.UNKNOWN: "FAILED",
+    }
+    return mapping.get(status, "FAILED")
+
+
+def _on_job_exit(record):
+    """Callback fired by JobMonitor when a job reaches terminal state.
+
+    This runs inside the monitor's poll loop, before completed jobs are
+    purged, so we are guaranteed to capture the final status.
+    """
+    settings = get_settings()
+    new_status = _map_status(record.status)
+
+    with db.get_db_session(settings.db_url) as session:
+        db_job = db.get_job_by_cluster_id(session, record.job_id)
+        if db_job is None:
+            logger.warning(f"No DB job found for cluster job {record.job_id}")
+            return
+        if db_job.status == new_status:
+            return
+        db.update_job_status(
+            session, db_job.id, new_status,
+            exit_code=record.exit_code,
+            started_at=record.start_time,
+            finished_at=record.finish_time,
+        )
+        logger.info(f"Job {db_job.id} reached terminal state: {db_job.status} -> {new_status}")
+
+
+# --- Job Submission ---
+
+def _sanitize_for_path(s: str) -> str:
+    """Sanitize a string for use in a directory name."""
+    return re.sub(r'[^a-zA-Z0-9._-]', '_', s)
+
+
+def _build_work_dir(job_id: int, app_name: str, entry_point_id: str) -> Path:
+    """Build a working directory path under ~/.fileglancer/jobs/."""
+    safe_app = _sanitize_for_path(app_name)
+    safe_ep = _sanitize_for_path(entry_point_id)
+    return Path(os.path.expanduser(f"~/.fileglancer/jobs/{job_id}-{safe_app}-{safe_ep}"))
+
+
+async def submit_job(
+    username: str,
+    app_url: str,
+    entry_point_id: str,
+    parameters: dict,
+    resources: Optional[dict] = None,
+    pull_latest: bool = False,
+    manifest_path: str = "",
+    env: Optional[dict] = None,
+    pre_run: Optional[str] = None,
+    post_run: Optional[str] = None,
+) -> db.JobDB:
+    """Submit a new job to the cluster.
+
+    Fetches the manifest, validates parameters, builds the command,
+    submits to the executor, and creates a DB record.
+    Each job runs in its own directory under ~/.fileglancer/jobs/.
+ """ + settings = get_settings() + + # Fetch and validate manifest + manifest = await fetch_app_manifest(app_url, manifest_path) + + # Find entry point + entry_point = None + for ep in manifest.runnables: + if ep.id == entry_point_id: + entry_point = ep + break + if entry_point is None: + raise ValueError(f"Entry point '{entry_point_id}' not found in manifest") + + # Verify requirements before proceeding + verify_requirements(manifest.requirements) + + # Build command + command = build_command(entry_point, parameters) + + # Build resource spec + resource_spec = _build_resource_spec(entry_point, resources, settings) + + # Merge env/pre_run/post_run: manifest defaults overridden by user values + merged_env = dict(entry_point.env or {}) + if env: + merged_env.update(env) + effective_pre_run = pre_run if pre_run is not None else (entry_point.pre_run or None) + effective_post_run = post_run if post_run is not None else (entry_point.post_run or None) + + # Create DB record first to get job ID for the work directory + resources_dict = None + if resource_spec: + resources_dict = { + "cpus": resource_spec.cpus, + "memory": resource_spec.memory, + "walltime": resource_spec.walltime, + "queue": resource_spec.queue, + } + + with db.get_db_session(settings.db_url) as session: + db_job = db.create_job( + session=session, + username=username, + app_url=app_url, + app_name=manifest.name, + entry_point_id=entry_point.id, + entry_point_name=entry_point.name, + parameters=parameters, + resources=resources_dict, + manifest_path=manifest_path, + env=merged_env or None, + pre_run=effective_pre_run, + post_run=effective_post_run, + pull_latest=pull_latest, + ) + job_id = db_job.id + + # Create work directory + work_dir = _build_work_dir(job_id, manifest.name, entry_point.id) + work_dir.mkdir(parents=True, exist_ok=True) + + # Determine which repo to symlink and where to cd + if manifest.repo_url: + # Tool code lives in a separate repo — clone it and cd to its root + tool_repo_dir = 
await _ensure_repo_cache(manifest.repo_url, pull=pull_latest) + repo_link = work_dir / "repo" + repo_link.symlink_to(tool_repo_dir) + cd_target = repo_link + else: + # Tool code is in the discovery repo — cd into manifest's subdirectory + repo_dir = await _ensure_repo_cache(app_url, pull=pull_latest) + repo_link = work_dir / "repo" + repo_link.symlink_to(repo_dir) + if manifest_path: + cd_target = repo_link / manifest_path + else: + cd_target = repo_link + + # Build environment variable export lines + env_lines = "" + if merged_env: + parts = [] + for var_name, var_value in merged_env.items(): + if not _ENV_VAR_NAME_PATTERN.match(var_name): + raise ValueError(f"Invalid environment variable name: '{var_name}'") + parts.append(f"export {var_name}={shlex.quote(var_value)}") + env_lines = "\n".join(parts) + "\n" + + # Wrap command with cd into the repo symlink + # Unset PIXI_PROJECT_MANIFEST so pixi uses the repo's own manifest + # instead of inheriting fileglancer's from the dev server environment + script_parts = [f"unset PIXI_PROJECT_MANIFEST\ncd {cd_target}"] + if env_lines: + script_parts.append(env_lines.rstrip()) + if effective_pre_run: + script_parts.append(effective_pre_run.rstrip()) + script_parts.append(command) + if effective_post_run: + script_parts.append(effective_post_run.rstrip()) + full_command = "\n\n".join(script_parts) + + # Set work_dir and log paths on resource spec + resource_spec.work_dir = str(work_dir) + resource_spec.stdout_path = str(work_dir / "stdout.log") + resource_spec.stderr_path = str(work_dir / "stderr.log") + + # Submit to executor + executor = await get_executor() + job_name = f"fg-{manifest.name}-{entry_point.id}" + cluster_job = await executor.submit( + command=full_command, + name=job_name, + resources=resource_spec, + ) + + # Register callback to update DB when job reaches terminal state + cluster_job.on_exit(_on_job_exit) + + # Update DB with cluster job ID and return fresh object + with db.get_db_session(settings.db_url) as 
session: + db.update_job_status( + session, job_id, "PENDING", + cluster_job_id=cluster_job.job_id, + ) + db_job = db.get_job(session, job_id, username) + session.expunge(db_job) + + logger.info(f"Job {db_job.id} submitted for user {username} in {work_dir}: {command}") + return db_job + + +def _build_resource_spec(entry_point: AppEntryPoint, overrides: Optional[dict], settings) -> ResourceSpec: + """Build a ResourceSpec from entry point defaults, user overrides, and global defaults.""" + cpus = settings.cluster.cpus + memory = settings.cluster.memory + walltime = settings.cluster.walltime + queue = settings.cluster.queue + + # Apply entry point defaults + if entry_point.resources: + if entry_point.resources.cpus is not None: + cpus = entry_point.resources.cpus + if entry_point.resources.memory is not None: + memory = entry_point.resources.memory + if entry_point.resources.walltime is not None: + walltime = entry_point.resources.walltime + + # Apply user overrides + if overrides: + if overrides.get("cpus") is not None: + cpus = overrides["cpus"] + if overrides.get("memory") is not None: + memory = overrides["memory"] + if overrides.get("walltime") is not None: + walltime = overrides["walltime"] + + return ResourceSpec( + cpus=cpus, + memory=memory, + walltime=walltime, + queue=queue, + ) + + +async def cancel_job(job_id: int, username: str) -> db.JobDB: + """Cancel a running or pending job.""" + settings = get_settings() + + with db.get_db_session(settings.db_url) as session: + db_job = db.get_job(session, job_id, username) + if db_job is None: + raise ValueError(f"Job {job_id} not found") + if db_job.status not in ("PENDING", "RUNNING"): + raise ValueError(f"Job {job_id} is not cancellable (status: {db_job.status})") + + # Cancel on cluster + if db_job.cluster_job_id: + executor = await get_executor() + await executor.cancel(db_job.cluster_job_id) + + # Update DB + now = datetime.now(UTC) + db.update_job_status(session, db_job.id, "KILLED", finished_at=now) + 
db_job = db.get_job(session, db_job.id, username) + session.expunge(db_job) + + logger.info(f"Job {job_id} cancelled by user {username}") + return db_job + + +# --- Job File Access --- + +def _resolve_work_dir(db_job: db.JobDB) -> Path: + """Resolve a job's work directory to an absolute path.""" + return _build_work_dir(db_job.id, db_job.app_name, db_job.entry_point_id) + + +def _resolve_browse_path(abs_path: str) -> tuple[str | None, str | None]: + """Resolve an absolute path to an FSP name and subpath for browse links.""" + settings = get_settings() + with db.get_db_session(settings.db_url) as session: + result = db.find_fsp_from_absolute_path(session, abs_path) + if result: + return result[0].name, result[1] + return None, None + + +def _make_file_info(file_path: str, exists: bool) -> dict: + """Create a file info dict with browse link resolution.""" + fsp_name, subpath = _resolve_browse_path(file_path) if exists else (None, None) + return { + "path": file_path, + "exists": exists, + "fsp_name": fsp_name, + "subpath": subpath, + } + + +def get_job_file_paths(db_job: db.JobDB) -> dict[str, dict]: + """Return file path info for a job's files (script, stdout, stderr). + + Returns a dict keyed by file type with path and existence info. + """ + work_dir = _resolve_work_dir(db_job) + + # Find script file + scripts = sorted(work_dir.glob("*.sh")) if work_dir.exists() else [] + script_path = str(scripts[0]) if scripts else str(work_dir / "script.sh") + + stdout_path = work_dir / "stdout.log" + stderr_path = work_dir / "stderr.log" + + return { + "script": _make_file_info(script_path, len(scripts) > 0), + "stdout": _make_file_info(str(stdout_path), stdout_path.is_file()), + "stderr": _make_file_info(str(stderr_path), stderr_path.is_file()), + } + + +async def get_job_file_content(job_id: int, username: str, file_type: str) -> Optional[str]: + """Read the content of a job file (script, stdout, or stderr). 
+ + All job files live in the job's work directory: + - *.sh — the generated script (written by cluster-api) + - stdout.log — captured standard output + - stderr.log — captured standard error + + Returns the file content as a string, or None if the file doesn't exist. + """ + settings = get_settings() + + with db.get_db_session(settings.db_url) as session: + db_job = db.get_job(session, job_id, username) + if db_job is None: + raise ValueError(f"Job {job_id} not found") + session.expunge(db_job) + + work_dir = _resolve_work_dir(db_job) + + if file_type == "script": + # Find the script generated by cluster-api (e.g. jobname.1.sh) + scripts = sorted(work_dir.glob("*.sh")) + if scripts: + return scripts[0].read_text() + return None + elif file_type == "stdout": + path = work_dir / "stdout.log" + elif file_type == "stderr": + path = work_dir / "stderr.log" + else: + raise ValueError(f"Unknown file type: {file_type}") + + if path.is_file(): + return path.read_text() + return None diff --git a/fileglancer/cli.py b/fileglancer/cli.py index d62ac720..141b59e3 100644 --- a/fileglancer/cli.py +++ b/fileglancer/cli.py @@ -178,7 +178,7 @@ def start(host, port, reload, workers, ssl_keyfile, ssl_certfile, # Build uvicorn config config_kwargs = { - 'app': 'fileglancer.app:app', + 'app': 'fileglancer.server:app', 'host': host, 'port': port, 'access_log': False, diff --git a/fileglancer/database.py b/fileglancer/database.py index 21616aac..5feb0c92 100644 --- a/fileglancer/database.py +++ b/fileglancer/database.py @@ -1,10 +1,10 @@ import secrets import hashlib -from datetime import datetime, UTC +from datetime import datetime, timedelta, UTC import os from functools import lru_cache -from sqlalchemy import create_engine, Column, String, Integer, DateTime, JSON, UniqueConstraint +from sqlalchemy import create_engine, Column, String, Integer, Boolean, DateTime, JSON, UniqueConstraint from sqlalchemy.orm import sessionmaker, declarative_base, Session from sqlalchemy.engine.url import 
make_url from sqlalchemy.pool import StaticPool @@ -136,6 +136,31 @@ class TicketDB(Base): # ) +class JobDB(Base): + """Database model for storing cluster jobs""" + __tablename__ = 'jobs' + + id = Column(Integer, primary_key=True, autoincrement=True) + username = Column(String, nullable=False, index=True) + cluster_job_id = Column(String, nullable=True, index=True) + app_url = Column(String, nullable=False) + app_name = Column(String, nullable=False) + manifest_path = Column(String, nullable=False, server_default="") + entry_point_id = Column(String, nullable=False) + entry_point_name = Column(String, nullable=False) + parameters = Column(JSON, nullable=False) + status = Column(String, nullable=False, default="PENDING") + exit_code = Column(Integer, nullable=True) + resources = Column(JSON, nullable=True) + env = Column(JSON, nullable=True) + pre_run = Column(String, nullable=True) + post_run = Column(String, nullable=True) + pull_latest = Column(Boolean, nullable=False, default=False) + created_at = Column(DateTime, nullable=False, default=lambda: datetime.now(UTC)) + started_at = Column(DateTime, nullable=True) + finished_at = Column(DateTime, nullable=True) + + class SessionDB(Base): """Database model for storing user sessions""" __tablename__ = 'sessions' @@ -796,3 +821,98 @@ def delete_expired_sessions(session: Session): deleted = session.query(SessionDB).filter(SessionDB.expires_at < now).delete() session.commit() return deleted + + +# --- Job database functions --- + +def create_job(session: Session, username: str, app_url: str, app_name: str, + entry_point_id: str, entry_point_name: str, parameters: Dict, + resources: Optional[Dict] = None, manifest_path: str = "", + env: Optional[Dict] = None, pre_run: Optional[str] = None, + post_run: Optional[str] = None, pull_latest: bool = False) -> JobDB: + """Create a new job record""" + now = datetime.now(UTC) + job = JobDB( + username=username, + app_url=app_url, + app_name=app_name, + manifest_path=manifest_path, 
+ entry_point_id=entry_point_id, + entry_point_name=entry_point_name, + parameters=parameters, + resources=resources, + env=env, + pre_run=pre_run, + post_run=post_run, + pull_latest=pull_latest, + status="PENDING", + created_at=now + ) + session.add(job) + session.commit() + return job + + +def get_jobs_by_username(session: Session, username: str, status: Optional[str] = None) -> List[JobDB]: + """Get all jobs for a user, newest first""" + query = session.query(JobDB).filter_by(username=username) + if status: + query = query.filter_by(status=status) + return query.order_by(JobDB.created_at.desc()).all() + + +def get_job(session: Session, job_id: int, username: str) -> Optional[JobDB]: + """Get a single job by ID and username""" + return session.query(JobDB).filter_by(id=job_id, username=username).first() + + +def get_active_jobs(session: Session) -> List[JobDB]: + """Get all jobs with PENDING or RUNNING status""" + return session.query(JobDB).filter( + JobDB.status.in_(["PENDING", "RUNNING"]) + ).all() + + +def get_job_by_cluster_id(session: Session, cluster_job_id: str) -> Optional[JobDB]: + """Get a single job by its cluster job ID""" + return session.query(JobDB).filter_by(cluster_job_id=cluster_job_id).first() + + +def update_job_status(session: Session, job_id: int, status: str, + exit_code: Optional[int] = None, + cluster_job_id: Optional[str] = None, + started_at: Optional[datetime] = None, + finished_at: Optional[datetime] = None) -> Optional[JobDB]: + """Update a job's status and related fields""" + job = session.query(JobDB).filter_by(id=job_id).first() + if not job: + return None + job.status = status + if exit_code is not None: + job.exit_code = exit_code + if cluster_job_id is not None: + job.cluster_job_id = cluster_job_id + if started_at is not None: + job.started_at = started_at + if finished_at is not None: + job.finished_at = finished_at + session.commit() + return job + + +def delete_job(session: Session, job_id: int, username: str) -> bool: + 
"""Delete a single job record. Returns True if deleted, False if not found.""" + deleted = session.query(JobDB).filter_by(id=job_id, username=username).delete() + session.commit() + return deleted > 0 + + +def delete_old_jobs(session: Session, days: int = 30) -> int: + """Delete completed/failed jobs older than the specified number of days""" + cutoff = datetime.now(UTC) - timedelta(days=days) + deleted = session.query(JobDB).filter( + JobDB.status.in_(["DONE", "FAILED", "KILLED"]), + JobDB.created_at < cutoff + ).delete(synchronize_session='fetch') + session.commit() + return deleted diff --git a/fileglancer/dev_launch.py b/fileglancer/dev_launch.py index 4d1b08d0..555ac05a 100755 --- a/fileglancer/dev_launch.py +++ b/fileglancer/dev_launch.py @@ -28,7 +28,7 @@ def main(): # Launch uvicorn with the certificates print("Starting uvicorn server with HTTPS...") uvicorn_cmd = [ - 'uvicorn', 'fileglancer.app:app', + 'uvicorn', 'fileglancer.server:app', '--host', '0.0.0.0', '--port', '443', '--reload', diff --git a/fileglancer/model.py b/fileglancer/model.py index 5c0018b6..7cf897a9 100644 --- a/fileglancer/model.py +++ b/fileglancer/model.py @@ -1,7 +1,8 @@ +import re from datetime import datetime -from typing import List, Optional, Dict +from typing import Annotated, Any, List, Literal, Optional, Dict, Union -from pydantic import BaseModel, Field, HttpUrl +from pydantic import BaseModel, Discriminator, Field, HttpUrl, Tag, field_validator, model_validator class FileSharePath(BaseModel): @@ -309,3 +310,226 @@ class NeuroglancerShortLinkResponse(BaseModel): links: List[NeuroglancerShortLink] = Field( description="A list of stored Neuroglancer short links" ) + + +# --- App Manifest Models --- + +class AppParameter(BaseModel): + """A parameter definition for an app entry point""" + flag: Optional[str] = Field( + description="CLI flag syntax (e.g. '--outdir', '-n'). 
Omit for positional arguments.", + default=None, + ) + key: str = Field( + description="Internal key for this parameter, auto-generated from flag or positional index", + default="", + ) + name: str = Field(description="Display name of the parameter") + type: Literal["string", "integer", "number", "boolean", "file", "directory", "enum"] = Field( + description="The data type of the parameter" + ) + description: Optional[str] = Field(description="Description of the parameter", default=None) + required: bool = Field(description="Whether the parameter is required", default=False) + default: Optional[Any] = Field(description="Default value for the parameter", default=None) + options: Optional[List[str]] = Field(description="Allowed values for enum type", default=None) + min: Optional[float] = Field(description="Minimum value for numeric types", default=None) + max: Optional[float] = Field(description="Maximum value for numeric types", default=None) + pattern: Optional[str] = Field(description="Regex validation pattern for string types", default=None) + + @field_validator("flag") + @classmethod + def validate_flag(cls, v): + if v is not None: + if not v.startswith("-"): + raise ValueError(f"Flag must start with '-', got '{v}'") + stripped = v.lstrip("-") + if not stripped: + raise ValueError("Flag must have content after dashes") + return v + + +class AppParameterSection(BaseModel): + """A collapsible section that groups parameters in the UI""" + section: str = Field(description="Section title") + description: Optional[str] = Field(default=None) + collapsed: bool = Field(default=False) + parameters: List[AppParameter] = Field(default=[]) + + +def _param_item_discriminator(v): + if isinstance(v, dict): + return 'section' if 'section' in v else 'parameter' + return 'section' if isinstance(v, AppParameterSection) else 'parameter' + + +AppParameterItem = Annotated[ + Union[ + Annotated[AppParameter, Tag('parameter')], + Annotated[AppParameterSection, Tag('section')], + ], + 
Discriminator(_param_item_discriminator), +] + + +class AppResourceDefaults(BaseModel): + """Resource defaults for an app entry point""" + cpus: Optional[int] = Field(description="Number of CPUs", default=None) + memory: Optional[str] = Field(description="Memory allocation (e.g. '16 GB')", default=None) + walltime: Optional[str] = Field(description="Wall time limit (e.g. '04:00')", default=None) + + +class AppEntryPoint(BaseModel): + """An entry point (command) within an app""" + id: str = Field(description="Unique identifier for the entry point") + name: str = Field(description="Display name of the entry point") + description: Optional[str] = Field(description="Description of the entry point", default=None) + command: str = Field(description="The base CLI command to execute") + parameters: List[AppParameterItem] = Field(description="Parameters for this entry point", default=[]) + resources: Optional[AppResourceDefaults] = Field(description="Default resource requirements", default=None) + env: Optional[Dict[str, str]] = Field(description="Default environment variables", default=None) + pre_run: Optional[str] = Field(description="Script to run before the main command", default=None) + post_run: Optional[str] = Field(description="Script to run after the main command", default=None) + + def flat_parameters(self) -> List[AppParameter]: + """Return a flat list of all parameters, traversing sections.""" + result = [] + for item in self.parameters: + if isinstance(item, AppParameterSection): + result.extend(item.parameters) + else: + result.append(item) + return result + + @model_validator(mode='after') + def generate_parameter_keys(self): + positional_index = 0 + keys_seen: dict[str, str] = {} + for param in self.flat_parameters(): + if param.flag is not None: + param.key = param.flag.lstrip("-") + else: + param.key = f"_arg{positional_index}" + positional_index += 1 + if param.key in keys_seen: + raise ValueError( + f"Duplicate parameter key '{param.key}' " + f"(from 
'{param.name}' and '{keys_seen[param.key]}')" + ) + keys_seen[param.key] = param.name + return self + + +SUPPORTED_TOOLS = {"pixi", "npm", "maven"} + + +class AppManifest(BaseModel): + """Top-level app manifest (runnables.yaml)""" + name: str = Field(description="Display name of the app") + description: Optional[str] = Field(description="Description of the app", default=None) + version: Optional[str] = Field(description="Version of the app", default=None) + repo_url: Optional[str] = Field( + description="GitHub repo URL where the tool code lives. If absent, uses the repo containing this manifest.", + default=None, + ) + requirements: List[str] = Field( + description="Required tools, e.g. ['pixi>=0.40', 'npm']", + default=[], + ) + runnables: List[AppEntryPoint] = Field(description="Available entry points for this app") + + @field_validator("requirements") + @classmethod + def validate_requirements(cls, v): + for req in v: + tool = re.split(r"[><=!]", req)[0].strip() + if tool not in SUPPORTED_TOOLS: + raise ValueError(f"Unsupported tool: '{tool}'. 
Supported: {SUPPORTED_TOOLS}") + return v + + +class UserApp(BaseModel): + """A user's saved app reference""" + url: str = Field(description="URL to the app manifest") + manifest_path: str = Field(description="Relative directory path to the manifest within the repo", default="") + name: str = Field(description="App name from manifest") + description: Optional[str] = Field(description="App description from manifest", default=None) + added_at: datetime = Field(description="When the app was added") + manifest: Optional[AppManifest] = Field(description="Cached manifest data", default=None) + + +class ManifestFetchRequest(BaseModel): + """Request to fetch an app manifest""" + url: str = Field(description="URL to the app manifest or GitHub repo") + manifest_path: str = Field(description="Relative directory path to the manifest within the repo", default="") + + +class AppAddRequest(BaseModel): + """Request to add an app""" + url: str = Field(description="URL to the app manifest or GitHub repo") + + +class AppRemoveRequest(BaseModel): + """Request to remove an app""" + url: str = Field(description="URL of the app to remove") + + +class JobFileInfo(BaseModel): + """Information about a job file""" + path: str = Field(description="Absolute path to the file") + exists: bool = Field(description="Whether the file exists on disk") + fsp_name: Optional[str] = Field(description="File share path name for browse link", default=None) + subpath: Optional[str] = Field(description="Subpath within the FSP for browse link", default=None) + + +class Job(BaseModel): + """A job record""" + id: int = Field(description="Unique job identifier") + app_url: str = Field(description="URL of the app manifest") + app_name: str = Field(description="Name of the app") + manifest_path: str = Field(description="Relative manifest path within the app repo", default="") + entry_point_id: str = Field(description="Entry point that was executed") + entry_point_name: str = Field(description="Display name of the 
entry point")
+    parameters: Dict = Field(description="Parameters used for the job")
+    status: str = Field(description="Job status (PENDING, RUNNING, DONE, FAILED, KILLED)")
+    exit_code: Optional[int] = Field(description="Exit code of the job", default=None)
+    resources: Optional[Dict] = Field(description="Requested resources", default=None)
+    env: Optional[Dict[str, str]] = Field(description="Environment variables used for the job", default=None)
+    pre_run: Optional[str] = Field(description="Script run before the main command", default=None)
+    post_run: Optional[str] = Field(description="Script run after the main command", default=None)
+    pull_latest: bool = Field(description="Whether pull latest was enabled", default=False)
+    cluster_job_id: Optional[str] = Field(description="Cluster-assigned job ID", default=None)
+    created_at: datetime = Field(description="When the job was created")
+    started_at: Optional[datetime] = Field(description="When the job started running", default=None)
+    finished_at: Optional[datetime] = Field(description="When the job finished", default=None)
+    files: Optional[Dict[str, JobFileInfo]] = Field(description="Job file paths and existence", default=None)
+
+
+class JobSubmitRequest(BaseModel):
+    """Request to submit a new job"""
+    app_url: str = Field(description="URL of the app manifest")
+    manifest_path: str = Field(description="Relative manifest path within the app repo", default="")
+    entry_point_id: str = Field(description="Entry point to execute")
+    parameters: Dict = Field(description="Parameter values keyed by parameter key")
+    resources: Optional[AppResourceDefaults] = Field(description="Resource overrides", default=None)
+    pull_latest: bool = Field(
+        description="Pull latest code from GitHub before running",
+        default=False,
+    )
+    env: Optional[Dict[str, str]] = Field(description="Environment variables to export", default=None)
+    pre_run: Optional[str] = Field(description="Script to run before the main command", default=None)
+    post_run: Optional[str] = Field(description="Script to run after the main command", default=None)
+
+
+class PathValidationRequest(BaseModel):
+    """Request to validate file/directory paths"""
+    paths: Dict[str, str] = Field(description="Map of parameter key to path value")
+
+
+class PathValidationResponse(BaseModel):
+    """Response with path validation results"""
+    errors: Dict[str, str] = Field(description="Map of parameter key to error message (empty if all valid)")
+
+
+class JobResponse(BaseModel):
+    """Response containing a list of jobs"""
+    jobs: List[Job] = Field(description="A list of jobs")
diff --git a/fileglancer/app.py b/fileglancer/server.py
similarity index 84%
rename from fileglancer/app.py
rename to fileglancer/server.py
index 97f9a30f..0b771f94 100644
--- a/fileglancer/app.py
+++ b/fileglancer/server.py
@@ -28,6 +28,7 @@
 from fileglancer import database as db
 from fileglancer import auth
+from fileglancer import apps as apps_module
 from fileglancer.model import *
 from fileglancer.settings import get_settings
 from fileglancer.issues import create_jira_ticket, get_jira_ticket_details, delete_jira_ticket
@@ -261,10 +262,21 @@ def mask_password(url: str) -> str:
     else:
         logger.debug(f"No notifications file found at {notifications_file}")
 
+    # Start cluster job monitor
+    try:
+        await apps_module.start_job_monitor()
+        logger.info("Cluster job monitor started")
+    except Exception as e:
+        logger.warning(f"Failed to start cluster job monitor: {e}")
+
     logger.info(f"Server ready")
     yield
-    # Cleanup (if needed)
-    pass
+
+    # Cleanup: stop job monitor
+    try:
+        await apps_module.stop_job_monitor()
+    except Exception as e:
+        logger.warning(f"Error stopping cluster job monitor: {e}")
 
 
 app = FastAPI(lifespan=lifespan)
@@ -1450,6 +1462,287 @@ async def delete_file_or_dir(fsp_name: str,
     return JSONResponse(status_code=200, content={"message": "Item deleted"})
 
+
+# --- Apps & Jobs API ---
+
+@app.post("/api/apps/manifest", response_model=AppManifest,
+          description="Fetch and validate an app manifest from a URL")
+async def fetch_manifest(body: ManifestFetchRequest,
+                         username: str = Depends(get_current_user)):
+    try:
+        logger.info(f"Fetching manifest for URL: '{body.url}' path: '{body.manifest_path}'")
+        manifest = await apps_module.fetch_app_manifest(body.url, body.manifest_path)
+        return manifest
+    except ValueError as e:
+        raise HTTPException(status_code=404, detail=str(e))
+    except Exception as e:
+        raise HTTPException(status_code=400, detail=f"Invalid manifest: {str(e)}")
+
+@app.get("/api/apps", response_model=list[UserApp],
+         description="Get the user's configured apps with their manifests")
+async def get_user_apps(username: str = Depends(get_current_user)):
+    with db.get_db_session(settings.db_url) as session:
+        pref = db.get_user_preference(session, username, "apps")
+
+    app_list = pref.get("apps", []) if pref else []
+    result = []
+    for app_entry in app_list:
+        user_app = UserApp(
+            url=app_entry["url"],
+            manifest_path=app_entry.get("manifest_path", ""),
+            name=app_entry.get("name", "Unknown"),
+            description=app_entry.get("description"),
+            added_at=app_entry.get("added_at", datetime.now(UTC).isoformat()),
+        )
+        # Try to fetch manifest from local clone
+        try:
+            user_app.manifest = await apps_module.fetch_app_manifest(
+                app_entry["url"], app_entry.get("manifest_path", "")
+            )
+            # Update name/description from manifest
+            user_app.name = user_app.manifest.name
+            user_app.description = user_app.manifest.description
+        except Exception as e:
+            logger.warning(f"Failed to fetch manifest for {app_entry['url']}: {e}")
+        result.append(user_app)
+    return result
+
+@app.post("/api/apps", response_model=list[UserApp],
+          description="Add an app by URL (discovers all manifests in the repo)")
+async def add_user_app(body: AppAddRequest,
+                       username: str = Depends(get_current_user)):
+    # Clone the repo and discover all manifests
+    try:
+        repo_dir = await apps_module._ensure_repo_cache(body.url, pull=True)
+        discovered = apps_module._find_manifests_in_repo(repo_dir)
+    except ValueError as e:
+        raise HTTPException(status_code=404, detail=str(e))
+    except Exception as e:
+        raise HTTPException(status_code=400, detail=f"Failed to clone or scan repo: {str(e)}")
+
+    if not discovered:
+        filenames = ", ".join(apps_module._MANIFEST_FILENAMES)
+        raise HTTPException(
+            status_code=404,
+            detail=f"No manifest files found ({filenames}). "
+                   f"Make sure a manifest exists in the repository.",
+        )
+
+    now = datetime.now(UTC)
+
+    with db.get_db_session(settings.db_url) as session:
+        pref = db.get_user_preference(session, username, "apps")
+        app_list = pref.get("apps", []) if pref else []
+
+        # Build set of existing (url, manifest_path) for dedup
+        existing_keys = {
+            (a["url"], a.get("manifest_path", "")) for a in app_list
+        }
+
+        new_apps: list[UserApp] = []
+        for manifest_path, manifest in discovered:
+            if (body.url, manifest_path) in existing_keys:
+                continue  # silently skip duplicates
+
+            new_entry = {
+                "url": body.url,
+                "manifest_path": manifest_path,
+                "name": manifest.name,
+                "description": manifest.description,
+                "added_at": now.isoformat(),
+            }
+            app_list.append(new_entry)
+            new_apps.append(UserApp(
+                url=body.url,
+                manifest_path=manifest_path,
+                name=manifest.name,
+                description=manifest.description,
+                added_at=now,
+                manifest=manifest,
+            ))
+
+        if not new_apps:
+            raise HTTPException(
+                status_code=409,
+                detail="All apps in this repository have already been added.",
+            )
+
+        db.set_user_preference(session, username, "apps", {"apps": app_list})
+
+    return new_apps
+
+@app.delete("/api/apps",
+            description="Remove an app by URL and manifest path")
+async def remove_user_app(url: str = Query(..., description="URL of the app to remove"),
+                          manifest_path: str = Query("", description="Manifest path within the repo"),
+                          username: str = Depends(get_current_user)):
+    with db.get_db_session(settings.db_url) as session:
+        pref = db.get_user_preference(session, username, "apps")
+        app_list = pref.get("apps", []) if pref else []
+
+        new_list = [
+            a for a in app_list
+            if not (a["url"] == url and a.get("manifest_path", "") == manifest_path)
+        ]
+        if len(new_list) == len(app_list):
+            raise HTTPException(status_code=404, detail="App not found")
+
+        db.set_user_preference(session, username, "apps", {"apps": new_list})
+
+    return {"message": "App removed"}
+
+@app.post("/api/apps/update", response_model=AppManifest,
+          description="Pull latest code and re-read the manifest for an app")
+async def update_user_app(body: ManifestFetchRequest,
+                          username: str = Depends(get_current_user)):
+    try:
+        await apps_module._ensure_repo_cache(body.url, pull=True)
+        manifest = await apps_module.fetch_app_manifest(body.url, body.manifest_path)
+    except ValueError as e:
+        raise HTTPException(status_code=404, detail=str(e))
+    except Exception as e:
+        raise HTTPException(status_code=400, detail=f"Failed to update app: {str(e)}")
+
+    # Update stored name/description from refreshed manifest
+    with db.get_db_session(settings.db_url) as session:
+        pref = db.get_user_preference(session, username, "apps")
+        app_list = pref.get("apps", []) if pref else []
+        for entry in app_list:
+            if entry["url"] == body.url and entry.get("manifest_path", "") == body.manifest_path:
+                entry["name"] = manifest.name
+                entry["description"] = manifest.description
+                break
+        db.set_user_preference(session, username, "apps", {"apps": app_list})
+
+    return manifest
+
+@app.post("/api/apps/validate-paths", response_model=PathValidationResponse,
+          description="Validate file/directory paths for app parameters")
+async def validate_paths(body: PathValidationRequest,
+                         username: str = Depends(get_current_user)):
+    errors = {}
+    for param_key, path_value in body.paths.items():
+        # Normalize backslashes to forward slashes
+        normalized = path_value.replace("\\", "/")
+        # Require absolute path
+        if not normalized.startswith("/") and not normalized.startswith("~"):
+            errors[param_key] = f"Must be an absolute path (starting with / or ~)"
+            continue
+        expanded = os.path.normpath(os.path.expanduser(normalized))
+        if not os.path.exists(expanded):
+            errors[param_key] = f"Path does not exist: {normalized}"
+        elif not os.access(expanded, os.R_OK):
+            errors[param_key] = f"Path is not accessible: {normalized}"
+    return PathValidationResponse(errors=errors)
+
+@app.post("/api/jobs", response_model=Job,
+          description="Submit a new job")
+async def submit_job(body: JobSubmitRequest,
+                     username: str = Depends(get_current_user)):
+    try:
+        resources_dict = None
+        if body.resources:
+            resources_dict = body.resources.model_dump(exclude_none=True)
+
+        db_job = await apps_module.submit_job(
+            username=username,
+            app_url=body.app_url,
+            entry_point_id=body.entry_point_id,
+            parameters=body.parameters,
+            resources=resources_dict,
+            pull_latest=body.pull_latest,
+            manifest_path=body.manifest_path,
+            env=body.env,
+            pre_run=body.pre_run,
+            post_run=body.post_run,
+        )
+        return _convert_job(db_job)
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except Exception as e:
+        logger.exception(f"Error submitting job: {e}")
+        raise HTTPException(status_code=500, detail=str(e))
+
+@app.get("/api/jobs", response_model=JobResponse,
+         description="List the user's jobs")
+async def get_jobs(status: Optional[str] = Query(None, description="Filter by status"),
+                   username: str = Depends(get_current_user)):
+    with db.get_db_session(settings.db_url) as session:
+        db_jobs = db.get_jobs_by_username(session, username, status)
+        jobs = [_convert_job(j) for j in db_jobs]
+        return JobResponse(jobs=jobs)
+
+@app.get("/api/jobs/{job_id}", response_model=Job,
+         description="Get a single job by ID")
+async def get_job(job_id: int,
+                  username: str = Depends(get_current_user)):
+    with db.get_db_session(settings.db_url) as session:
+        db_job = db.get_job(session, job_id, username)
+        if db_job is None:
+            raise HTTPException(status_code=404, detail="Job not found")
+        return _convert_job(db_job, include_files=True)
+
+@app.post("/api/jobs/{job_id}/cancel",
+          description="Cancel a running job")
+async def cancel_job(job_id: int,
+                     username: str = Depends(get_current_user)):
+    try:
+        db_job = await apps_module.cancel_job(job_id, username)
+        return _convert_job(db_job)
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
+
+@app.delete("/api/jobs/{job_id}",
+            description="Delete a job record")
+async def delete_job(job_id: int,
+                     username: str = Depends(get_current_user)):
+    with db.get_db_session(settings.db_url) as session:
+        deleted = db.delete_job(session, job_id, username)
+        if not deleted:
+            raise HTTPException(status_code=404, detail="Job not found")
+        return {"message": "Job deleted"}
+
+@app.get("/api/jobs/{job_id}/files/{file_type}",
+         description="Get job file content (script, stdout, or stderr)")
+async def get_job_file(job_id: int,
+                       file_type: str = Path(..., description="File type: script, stdout, or stderr"),
+                       username: str = Depends(get_current_user)):
+    if file_type not in ("script", "stdout", "stderr"):
+        raise HTTPException(status_code=400, detail="file_type must be script, stdout, or stderr")
+    try:
+        content = await apps_module.get_job_file_content(job_id, username, file_type)
+        if content is None:
+            raise HTTPException(status_code=404, detail=f"File not found: {file_type}")
+        return PlainTextResponse(content)
+    except ValueError as e:
+        raise HTTPException(status_code=404, detail=str(e))
+
+def _convert_job(db_job: db.JobDB, include_files: bool = False) -> Job:
+    """Convert a database JobDB to a Pydantic Job model."""
+    files = None
+    if include_files:
+        files = apps_module.get_job_file_paths(db_job)
+    return Job(
+        id=db_job.id,
+        app_url=db_job.app_url,
+        app_name=db_job.app_name,
+        manifest_path=db_job.manifest_path,
+        entry_point_id=db_job.entry_point_id,
+        entry_point_name=db_job.entry_point_name,
+        parameters=db_job.parameters,
+        status=db_job.status,
+        exit_code=db_job.exit_code,
+        resources=db_job.resources,
+        env=db_job.env,
+        pre_run=db_job.pre_run,
+        post_run=db_job.post_run,
+        pull_latest=db_job.pull_latest,
+        cluster_job_id=db_job.cluster_job_id,
+        created_at=db_job.created_at,
+        started_at=db_job.started_at,
+        finished_at=db_job.finished_at,
+        files=files,
+    )
+
 @app.post("/api/auth/simple-login", include_in_schema=not settings.enable_okta_auth)
 async def simple_login_handler(request: Request, body: dict = Body(...)):
     """Handle simple login JSON submission"""
diff --git a/fileglancer/settings.py b/fileglancer/settings.py
index 3cddb73e..149a42d3 100644
--- a/fileglancer/settings.py
+++ b/fileglancer/settings.py
@@ -2,7 +2,7 @@
 from functools import cache
 import sys
 
-from pydantic import HttpUrl, ValidationError, field_validator, model_validator
+from pydantic import BaseModel, HttpUrl, ValidationError, field_validator, model_validator
 from pydantic_settings import (
     BaseSettings,
     PydanticBaseSettingsSource,
@@ -11,6 +11,29 @@
 )
 
 
+class ClusterSettings(BaseModel):
+    """Cluster configuration matching py-cluster-api's ClusterConfig."""
+    executor: str = 'local'
+    cpus: Optional[int] = None
+    gpus: Optional[int] = None
+    memory: Optional[str] = None
+    walltime: Optional[str] = None
+    queue: Optional[str] = None
+    poll_interval: float = 10.0
+    shebang: str = "#!/bin/bash"
+    script_prologue: List[str] = []
+    script_epilogue: List[str] = []
+    extra_directives: List[str] = []
+    extra_args: List[str] = []
+    directives_skip: List[str] = []
+    lsf_units: str = "MB"
+    job_name_prefix: Optional[str] = None
+    zombie_timeout_minutes: float = 30.0
+    completed_retention_minutes: float = 10.0
+    command_timeout: float = 100.0
+    suppress_job_email: bool = True
+
+
 class Settings(BaseSettings):
     """
     Settings can be read from a settings.yaml file, or from the environment, with environment variables prepended
@@ -67,6 +90,9 @@ class Settings(BaseSettings):
     # CLI mode - enables auto-login endpoint for standalone CLI usage
     cli_mode:
bool = False + # Cluster / Apps settings (mirrors py-cluster-api ClusterConfig) + cluster: ClusterSettings = ClusterSettings() + model_config = SettingsConfigDict( yaml_file="config.yaml", env_file='.env', diff --git a/frontend/src/App.tsx b/frontend/src/App.tsx index 4dc7ee4d..09626922 100644 --- a/frontend/src/App.tsx +++ b/frontend/src/App.tsx @@ -8,6 +8,10 @@ import { MainLayout } from './layouts/MainLayout'; import { BrowsePageLayout } from './layouts/BrowseLayout'; import { OtherPagesLayout } from './layouts/OtherPagesLayout'; import Login from '@/components/Login'; +import Apps from '@/components/Apps'; +import AppJobs from '@/components/AppJobs'; +import AppLaunch from '@/components/AppLaunch'; +import JobDetail from '@/components/JobDetail'; import Browse from '@/components/Browse'; import Help from '@/components/Help'; import Jobs from '@/components/Jobs'; @@ -112,6 +116,54 @@ const AppComponent = () => { } path="nglinks" /> + + + + } + path="apps" + /> + + + + } + path="apps/jobs" + /> + + + + } + path="apps/launch/:owner/:repo/:branch/:entryPointId" + /> + + + + } + path="apps/launch/:owner/:repo/:branch" + /> + + + + } + path="apps/relaunch/:owner/:repo/:branch/:entryPointId" + /> + + + + } + path="apps/jobs/:jobId" + /> {tasksEnabled ? ( { + navigate(`/apps/jobs/${jobId}`); + }; + + const handleRelaunch = (job: Job) => { + const { owner, repo, branch } = parseGithubUrl(job.app_url); + const path = buildRelaunchPath( + owner, + repo, + branch, + job.entry_point_id, + job.manifest_path || undefined + ); + navigate(path, { + state: { + parameters: job.parameters, + resources: job.resources, + env: job.env, + pre_run: job.pre_run, + post_run: job.post_run, + pull_latest: job.pull_latest + } + }); + }; + + const handleCancelJob = async (jobId: number) => { + try { + await cancelJobMutation.mutateAsync(jobId); + toast.success('Job cancelled'); + } catch (error) { + const message = + error instanceof Error ? 
error.message : 'Failed to cancel job'; + toast.error(message); + } + }; + + const handleDeleteJob = async (jobId: number) => { + try { + await deleteJobMutation.mutateAsync(jobId); + toast.success('Job deleted'); + } catch (error) { + const message = + error instanceof Error ? error.message : 'Failed to delete job'; + toast.error(message); + } + }; + + const jobsColumns = useMemo( + () => + createAppsJobsColumns({ + onViewDetail: handleViewJobDetail, + onRelaunch: handleRelaunch, + onCancel: handleCancelJob, + onDelete: handleDeleteJob + }), + // eslint-disable-next-line react-hooks/exhaustive-deps + [] + ); + + return ( +
+ + Jobs + + + +
+ ); +} diff --git a/frontend/src/components/AppLaunch.tsx b/frontend/src/components/AppLaunch.tsx new file mode 100644 index 00000000..7aac1214 --- /dev/null +++ b/frontend/src/components/AppLaunch.tsx @@ -0,0 +1,274 @@ +import { useEffect, useState } from 'react'; +import { + useLocation, + useNavigate, + useParams, + useSearchParams +} from 'react-router'; + +import { Button, Typography } from '@material-tailwind/react'; +import { + HiOutlineArrowLeft, + HiOutlineDownload, + HiOutlinePlay +} from 'react-icons/hi'; +import toast from 'react-hot-toast'; + +import AppLaunchForm from '@/components/ui/AppsPage/AppLaunchForm'; +import { buildGithubUrl } from '@/utils'; +import { + useAppsQuery, + useAddAppMutation, + useManifestPreviewMutation +} from '@/queries/appsQueries'; +import { useSubmitJobMutation } from '@/queries/jobsQueries'; +import type { AppEntryPoint, AppResourceDefaults } from '@/shared.types'; + +export default function AppLaunch() { + const { owner, repo, branch, entryPointId } = useParams<{ + owner: string; + repo: string; + branch: string; + entryPointId?: string; + }>(); + const [searchParams] = useSearchParams(); + const navigate = useNavigate(); + const location = useLocation(); + const manifestMutation = useManifestPreviewMutation(); + const submitJobMutation = useSubmitJobMutation(); + const appsQuery = useAppsQuery(); + const addAppMutation = useAddAppMutation(); + const [selectedEntryPoint, setSelectedEntryPoint] = + useState(null); + + const manifestPath = searchParams.get('path') || ''; + const appUrl = buildGithubUrl(owner!, repo!, branch!); + const isRelaunch = location.pathname.startsWith('/apps/relaunch/'); + const relaunchState = isRelaunch + ? 
(location.state as { + parameters?: Record; + resources?: Record; + env?: Record; + pre_run?: string; + post_run?: string; + pull_latest?: boolean; + } | null) + : null; + const relaunchParameters = relaunchState?.parameters; + const relaunchResources = relaunchState?.resources as + | AppResourceDefaults + | undefined; + const relaunchEnv = relaunchState?.env; + const relaunchPreRun = relaunchState?.pre_run; + const relaunchPostRun = relaunchState?.post_run; + const relaunchPullLatest = relaunchState?.pull_latest; + + // Check if app is in user's library + const isInstalled = appsQuery.data?.some( + a => a.url === appUrl && a.manifest_path === manifestPath + ); + + useEffect(() => { + if (appUrl) { + manifestMutation.mutate({ url: appUrl, manifest_path: manifestPath }); + } + // Only fetch on mount + // eslint-disable-next-line react-hooks/exhaustive-deps + }, [appUrl]); + + const manifest = manifestMutation.data; + + // Auto-select entry point from URL param, or if there's only one + useEffect(() => { + if (!manifest) { + return; + } + if (entryPointId) { + const ep = manifest.runnables.find(e => e.id === entryPointId); + if (ep) { + setSelectedEntryPoint(ep); + return; + } + } + if (manifest.runnables.length === 1) { + setSelectedEntryPoint(manifest.runnables[0]); + } + // eslint-disable-next-line react-hooks/exhaustive-deps + }, [manifest]); + + const handleSubmit = async ( + parameters: Record, + resources?: AppResourceDefaults, + pullLatest?: boolean, + env?: Record, + preRun?: string, + postRun?: string + ) => { + if (!selectedEntryPoint) { + return; + } + try { + await submitJobMutation.mutateAsync({ + app_url: appUrl, + manifest_path: manifestPath, + entry_point_id: selectedEntryPoint.id, + parameters, + resources, + pull_latest: pullLatest, + env, + pre_run: preRun, + post_run: postRun + }); + toast.success('Job submitted'); + navigate('/apps/jobs'); + } catch (error) { + const message = + error instanceof Error ? 
error.message : 'Failed to submit job'; + toast.error(message); + } + }; + + const handleInstall = async () => { + try { + const apps = await addAppMutation.mutateAsync(appUrl); + const count = apps.length; + toast.success(`${count} app${count !== 1 ? 's' : ''} added`); + } catch (error) { + const message = + error instanceof Error ? error.message : 'Failed to install app'; + toast.error(message); + } + }; + + return ( +
+ + + {/* Not-installed banner */} + {!appsQuery.isPending && !isInstalled ? ( +
+ + This app is not in your library. Install it for quick access from + the Apps page. + + +
+ ) : null} + + {manifestMutation.isPending ? ( +
+ {/* Title + subtitle */} +
+
+
+
+ {/* Tab bar skeleton */} +
+
+
+
+ {/* Parameter fields */} +
+ {[1, 2, 3].map(i => ( +
+
+
+
+ ))} +
+ {/* Submit button */} +
+
+ ) : manifestMutation.isError ? ( +
+ Failed to load app manifest:{' '} + {manifestMutation.error?.message || 'Unknown error'} +
+ ) : manifest && selectedEntryPoint ? ( + <> + {manifest.runnables.length > 1 ? ( + + ) : null} + + + ) : manifest ? ( +
+ + {manifest.name} + + {manifest.description ? ( + + {manifest.description} + + ) : null} + + Select an entry point: + +
+ {manifest.runnables.map(ep => ( +
+
+ + {ep.name} + + {ep.description ? ( + + {ep.description} + + ) : null} +
+ +
+ ))} +
+
+ ) : null} +
+ ); +} diff --git a/frontend/src/components/Apps.tsx b/frontend/src/components/Apps.tsx new file mode 100644 index 00000000..2e1712b1 --- /dev/null +++ b/frontend/src/components/Apps.tsx @@ -0,0 +1,119 @@ +import { useState } from 'react'; + +import { Button, Typography } from '@material-tailwind/react'; +import { HiOutlinePlus } from 'react-icons/hi'; +import toast from 'react-hot-toast'; + +import AppCard from '@/components/ui/AppsPage/AppCard'; +import AddAppDialog from '@/components/ui/AppsPage/AddAppDialog'; +import { + useAppsQuery, + useAddAppMutation, + useUpdateAppMutation, + useRemoveAppMutation +} from '@/queries/appsQueries'; + +export default function Apps() { + const [showAddDialog, setShowAddDialog] = useState(false); + + const appsQuery = useAppsQuery(); + const addAppMutation = useAddAppMutation(); + const updateAppMutation = useUpdateAppMutation(); + const removeAppMutation = useRemoveAppMutation(); + + const handleAddApp = async (url: string) => { + const apps = await addAppMutation.mutateAsync(url); + const count = apps.length; + toast.success(`${count} app${count !== 1 ? 's' : ''} added`); + setShowAddDialog(false); + }; + + const handleRemoveApp = async ({ + url, + manifest_path + }: { + url: string; + manifest_path: string; + }) => { + try { + await removeAppMutation.mutateAsync({ url, manifest_path }); + toast.success('App removed'); + } catch (error) { + const message = + error instanceof Error ? error.message : 'Failed to remove app'; + toast.error(message); + } + }; + + const handleUpdateApp = async ({ + url, + manifest_path + }: { + url: string; + manifest_path: string; + }) => { + try { + await updateAppMutation.mutateAsync({ url, manifest_path }); + toast.success('App updated'); + } catch (error) { + const message = + error instanceof Error ? error.message : 'Failed to update app'; + toast.error(message); + } + }; + + return ( +
+ + Apps + + + Run command-line tools on the compute cluster. Add apps by URL to get + started. + + +
+ +
+ + {appsQuery.isPending ? ( + + Loading apps... + + ) : appsQuery.isError ? ( +
+ Failed to load apps: {appsQuery.error?.message || 'Unknown error'} +
+ ) : appsQuery.data?.length ? ( +
+ {appsQuery.data.map(app => ( + + ))} +
+ ) : ( +
+ + No apps configured. Click "Add App" to get started. + +
+ )} + + setShowAddDialog(false)} + open={showAddDialog} + /> +
+ ); +} diff --git a/frontend/src/components/JobDetail.tsx b/frontend/src/components/JobDetail.tsx new file mode 100644 index 00000000..06cae847 --- /dev/null +++ b/frontend/src/components/JobDetail.tsx @@ -0,0 +1,397 @@ +import { useEffect, useState } from 'react'; +import { Link, useNavigate, useParams } from 'react-router'; + +import { Button, Tabs, Typography } from '@material-tailwind/react'; +import { + HiOutlineArrowLeft, + HiOutlineDownload, + HiOutlineRefresh +} from 'react-icons/hi'; +import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter'; +import { + materialDark, + coy +} from 'react-syntax-highlighter/dist/esm/styles/prism'; + +import type { JobFileInfo, FileSharePath } from '@/shared.types'; +import JobStatusBadge from '@/components/ui/AppsPage/JobStatusBadge'; +import { formatDateString, buildRelaunchPath, parseGithubUrl } from '@/utils'; +import { + getPreferredPathForDisplay, + makeBrowseLink +} from '@/utils/pathHandling'; +import { usePreferencesContext } from '@/contexts/PreferencesContext'; +import { useZoneAndFspMapContext } from '@/contexts/ZonesAndFspMapContext'; +import { useJobQuery, useJobFileQuery } from '@/queries/jobsQueries'; + +function FilePreview({ + content, + language, + isDarkMode +}: { + readonly content: string | null | undefined; + readonly language: string; + readonly isDarkMode: boolean; +}) { + if (content === undefined) { + return ( + + Loading... + + ); + } + + if (content === null) { + return ( + + File not available + + ); + } + + const theme = isDarkMode ? materialDark : coy; + const themeCodeStyles = theme['code[class*="language-"]'] || {}; + + return ( +
+ + {content} + +
+ ); +} + +function FilePathLink({ + fileInfo, + pathPreference, + zonesAndFspMap +}: { + readonly fileInfo: JobFileInfo | undefined; + readonly pathPreference: ['linux_path' | 'windows_path' | 'mac_path']; + readonly zonesAndFspMap: Record; +}) { + if (!fileInfo?.fsp_name || !fileInfo.subpath) { + return null; + } + + // Find the FSP in the zones map to get platform-specific paths + let fsp: FileSharePath | null = null; + for (const value of Object.values(zonesAndFspMap)) { + if ( + value && + typeof value === 'object' && + 'name' in value && + (value as FileSharePath).name === fileInfo.fsp_name + ) { + fsp = value as FileSharePath; + break; + } + } + + const displayPath = fsp + ? getPreferredPathForDisplay(pathPreference, fsp, fileInfo.subpath) + : fileInfo.path; + + const browseUrl = makeBrowseLink(fileInfo.fsp_name, fileInfo.subpath); + + return ( + + {displayPath} + + ); +} + +export default function JobDetail() { + const { jobId } = useParams<{ jobId: string }>(); + const navigate = useNavigate(); + const [isDarkMode, setIsDarkMode] = useState(false); + const [activeTab, setActiveTab] = useState('parameters'); + + const { pathPreference } = usePreferencesContext(); + const { zonesAndFspQuery } = useZoneAndFspMapContext(); + + const id = jobId ? 
parseInt(jobId) : 0; + const jobQuery = useJobQuery(id); + const scriptQuery = useJobFileQuery(id, 'script'); + const stdoutQuery = useJobFileQuery(id, 'stdout'); + const stderrQuery = useJobFileQuery(id, 'stderr'); + + useEffect(() => { + const checkDarkMode = () => { + setIsDarkMode(document.documentElement.classList.contains('dark')); + }; + checkDarkMode(); + const observer = new MutationObserver(checkDarkMode); + observer.observe(document.documentElement, { + attributes: true, + attributeFilter: ['class'] + }); + return () => observer.disconnect(); + }, []); + + const job = jobQuery.data; + + const handleDownload = (content: string, filename: string) => { + const blob = new Blob([content], { type: 'text/plain' }); + const url = URL.createObjectURL(blob); + const a = document.createElement('a'); + a.href = url; + a.download = filename; + a.click(); + URL.revokeObjectURL(url); + }; + + const handleRelaunch = () => { + if (!job) { + return; + } + const { owner, repo, branch } = parseGithubUrl(job.app_url); + const path = buildRelaunchPath( + owner, + repo, + branch, + job.entry_point_id, + job.manifest_path || undefined + ); + navigate(path, { + state: { + parameters: job.parameters, + resources: job.resources, + env: job.env, + pre_run: job.pre_run, + post_run: job.post_run, + pull_latest: job.pull_latest + } + }); + }; + + return ( +
+ + + {jobQuery.isPending ? ( +
+ {/* Title skeleton */} +
+
+
+
+
+
+
+
+ {/* Tab bar skeleton */} +
+
+
+
+
+
+ {/* Content area skeleton */} +
+
+
+
+
+
+ ) : jobQuery.isError ? ( +
+ Failed to load job: {jobQuery.error?.message || 'Unknown error'} +
+ ) : job ? ( +
+ {/* Job Info Header */} +
+ + {job.app_name} — {job.entry_point_name} + +
+ + + Submitted: {formatDateString(job.created_at)} + + {job.started_at ? ( + + Started: {formatDateString(job.started_at)} + + ) : null} + {job.finished_at ? ( + + Finished: {formatDateString(job.finished_at)} + + ) : null} + {job.exit_code !== null && job.exit_code !== undefined ? ( + + Exit code: {job.exit_code} + + ) : null} +
+
+ + {/* Tabs */} + + + + Parameters + + + Script + + + Output Log + + + Error Log + + + + + + {Object.keys(job.parameters).length > 0 ? ( +
+ {Object.entries(job.parameters).map(([key, value]) => ( +
+ + {key}: + + + {String(value)} + +
+ ))} +
+ ) : ( + + No parameters + + )} + +
+ + +
+ +
+ +
+ + +
+ + {stdoutQuery.data !== undefined && stdoutQuery.data !== null ? ( + + ) : null} +
+ +
+ + +
+ + {stderrQuery.data !== undefined && stderrQuery.data !== null ? ( + + ) : null} +
+ +
+
+
+ ) : null} +
+ ); +} diff --git a/frontend/src/components/Jobs.tsx b/frontend/src/components/Jobs.tsx index b1693604..f81b2a81 100644 --- a/frontend/src/components/Jobs.tsx +++ b/frontend/src/components/Jobs.tsx @@ -1,4 +1,5 @@ import { Typography } from '@material-tailwind/react'; +import { Link } from 'react-router'; import { useTicketContext } from '@/contexts/TicketsContext'; import { TableCard } from './ui/Table/TableCard'; @@ -12,10 +13,12 @@ export default function Jobs() { Tasks - A task is created when you request a file to be converted to a different - format. To request a file conversion, select a file in the file browser, - open the Properties panel, and click the{' '} - Convert button. + Jobs are runs of command-line tools on the compute cluster that are + launched from the{' '} + + Apps page + + . void; + readonly onAdd: (url: string) => Promise; + readonly adding: boolean; +} + +export default function AddAppDialog({ + open, + onClose, + onAdd, + adding +}: AddAppDialogProps) { + const [repoUrl, setRepoUrl] = useState(''); + const [branch, setBranch] = useState(''); + const [urlError, setUrlError] = useState(''); + + const validateUrl = (url: string): boolean => { + if (!url.trim()) { + setUrlError(''); + return false; + } + if (!isValidGitHubUrl(url)) { + setUrlError('Please enter a valid GitHub repository URL'); + return false; + } + setUrlError(''); + return true; + }; + + const handleAdd = async () => { + if (!validateUrl(repoUrl)) { + return; + } + const appUrl = buildAppUrl(repoUrl, branch); + try { + await onAdd(appUrl); + setRepoUrl(''); + setBranch(''); + setUrlError(''); + } catch (error) { + setUrlError(error instanceof Error ? 
error.message : 'Failed to add app'); + } + }; + + const handleClose = () => { + setRepoUrl(''); + setBranch(''); + setUrlError(''); + onClose(); + }; + + const urlIsValid = repoUrl.trim() !== '' && isValidGitHubUrl(repoUrl); + + return ( + + + Add App + + + + Enter a GitHub repository URL containing a runnables.yaml{' '} + manifest. + + +
+ + { + setRepoUrl(e.target.value); + setUrlError(''); + }} + onKeyDown={e => { + if (e.key === 'Enter') { + handleAdd(); + } + }} + placeholder="https://github.com/org/repo" + type="text" + value={repoUrl} + /> + {urlError ? ( + + {urlError} + + ) : null} +
+ +
+ + { + setBranch(e.target.value); + }} + onKeyDown={e => { + if (e.key === 'Enter') { + handleAdd(); + } + }} + placeholder="main" + type="text" + value={branch} + /> +
+ +
+ + +
+
+ ); +} diff --git a/frontend/src/components/ui/AppsPage/AppCard.tsx b/frontend/src/components/ui/AppsPage/AppCard.tsx new file mode 100644 index 00000000..59642d27 --- /dev/null +++ b/frontend/src/components/ui/AppsPage/AppCard.tsx @@ -0,0 +1,112 @@ +import { useState } from 'react'; +import { useNavigate } from 'react-router'; + +import { Button, IconButton, Typography } from '@material-tailwind/react'; +import { buildLaunchPathFromApp } from '@/utils'; +import { + HiOutlineInformationCircle, + HiOutlinePlay, + HiOutlineTrash +} from 'react-icons/hi'; + +import AppInfoDialog from '@/components/ui/AppsPage/AppInfoDialog'; +import FgTooltip from '@/components/ui/widgets/FgTooltip'; +import type { UserApp } from '@/shared.types'; + +interface AppCardProps { + readonly app: UserApp; + readonly onRemove: (params: { url: string; manifest_path: string }) => void; + readonly onUpdate: (params: { url: string; manifest_path: string }) => void; + readonly removing: boolean; + readonly updating: boolean; +} + +export default function AppCard({ + app, + onRemove, + onUpdate, + removing, + updating +}: AppCardProps) { + const navigate = useNavigate(); + const [infoOpen, setInfoOpen] = useState(false); + + const handleLaunch = () => { + navigate(buildLaunchPathFromApp(app.url, app.manifest_path)); + }; + + return ( +
+
+
+ + {app.name} + + {app.description ? ( + + {app.description} + + ) : null} +
+
+ + setInfoOpen(true)} + size="sm" + variant="ghost" + > + + + + + + onRemove({ + url: app.url, + manifest_path: app.manifest_path + }) + } + size="sm" + variant="ghost" + > + + + +
+
+ + + + setInfoOpen(false)} + onLaunch={() => { + setInfoOpen(false); + handleLaunch(); + }} + onRemove={() => { + setInfoOpen(false); + onRemove({ url: app.url, manifest_path: app.manifest_path }); + }} + onUpdate={() => + onUpdate({ url: app.url, manifest_path: app.manifest_path }) + } + open={infoOpen} + removing={removing} + updating={updating} + /> +
+  );
+}
diff --git a/frontend/src/components/ui/AppsPage/AppInfoDialog.tsx b/frontend/src/components/ui/AppsPage/AppInfoDialog.tsx
new file mode 100644
index 00000000..ce033760
--- /dev/null
+++ b/frontend/src/components/ui/AppsPage/AppInfoDialog.tsx
@@ -0,0 +1,117 @@
+import { Button, Typography } from '@material-tailwind/react';
+import {
+  HiExternalLink,
+  HiOutlinePlay,
+  HiOutlineRefresh,
+  HiOutlineTrash
+} from 'react-icons/hi';
+
+import FgDialog from '@/components/ui/Dialogs/FgDialog';
+import type { UserApp } from '@/shared.types';
+
+interface AppInfoDialogProps {
+  readonly app: UserApp;
+  readonly open: boolean;
+  readonly onClose: () => void;
+  readonly onLaunch: () => void;
+  readonly onUpdate: () => void;
+  readonly onRemove: () => void;
+  readonly updating: boolean;
+  readonly removing: boolean;
+}
+
+function AppInfoTable({ app }: { readonly app: UserApp }) {
+  const labelClass =
+    'text-secondary font-medium pr-4 py-1.5 align-top whitespace-nowrap';
+  const valueClass = 'text-foreground py-1.5';
+
+  return (
+    <table>
+      <tbody>
+        <tr>
+          <td className={labelClass}>URL</td>
+          <td className={valueClass}>
+            <a href={app.url} rel="noreferrer" target="_blank">
+              {app.url}
+              <HiExternalLink />
+            </a>
+          </td>
+        </tr>
+        {app.manifest?.version ? (
+          <tr>
+            <td className={labelClass}>Version</td>
+            <td className={valueClass}>{app.manifest.version}</td>
+          </tr>
+        ) : null}
+        {app.description ? (
+          <tr>
+            <td className={labelClass}>Description</td>
+            <td className={valueClass}>{app.description}</td>
+          </tr>
+        ) : null}
+        {app.manifest?.runnables && app.manifest.runnables.length > 0 ? (
+          <tr>
+            <td className={labelClass}>Entry Points</td>
+            <td className={valueClass}>
+              {app.manifest.runnables.map(ep => ep.name).join(', ')}
+            </td>
+          </tr>
+        ) : null}
+      </tbody>
+    </table>
+ ); +} + +export default function AppInfoDialog({ + app, + open, + onClose, + onLaunch, + onUpdate, + onRemove, + updating, + removing +}: AppInfoDialogProps) { + return ( + + + {app.name} + + + + +
+ +
+ + +
+
+
+ ); +} diff --git a/frontend/src/components/ui/AppsPage/AppLaunchForm.tsx b/frontend/src/components/ui/AppsPage/AppLaunchForm.tsx new file mode 100644 index 00000000..56ffe025 --- /dev/null +++ b/frontend/src/components/ui/AppsPage/AppLaunchForm.tsx @@ -0,0 +1,852 @@ +import { useState } from 'react'; +import type { Dispatch, SetStateAction } from 'react'; + +import { Accordion, Button, Tabs, Typography } from '@material-tailwind/react'; +import { + HiChevronDown, + HiOutlinePlus, + HiOutlinePlay, + HiOutlineTrash +} from 'react-icons/hi'; + +import FileSelectorButton from '@/components/ui/BrowsePage/FileSelector/FileSelectorButton'; +import { validatePaths } from '@/queries/appsQueries'; +import { convertBackToForwardSlash } from '@/utils/pathHandling'; +import { flattenParameters, isParameterSection } from '@/shared.types'; +import type { + AppEntryPoint, + AppManifest, + AppParameter, + AppParameterSection, + AppResourceDefaults +} from '@/shared.types'; + +interface AppLaunchFormProps { + readonly manifest: AppManifest; + readonly entryPoint: AppEntryPoint; + readonly onSubmit: ( + parameters: Record, + resources?: AppResourceDefaults, + pullLatest?: boolean, + env?: Record, + preRun?: string, + postRun?: string + ) => Promise; + readonly submitting: boolean; + readonly initialValues?: Record; + readonly initialResources?: AppResourceDefaults; + readonly initialEnv?: Record; + readonly initialPreRun?: string; + readonly initialPostRun?: string; + readonly initialPullLatest?: boolean; +} + +type EnvVar = { key: string; value: string }; + +function ParameterField({ + param, + value, + onChange +}: { + readonly param: AppParameter; + readonly value: unknown; + readonly onChange: (value: unknown) => void; +}) { + const baseInputClass = + 'w-full p-2 text-foreground border rounded-sm focus:outline-none bg-background border-primary-light focus:border-primary'; + + switch (param.type) { + case 'boolean': + return ( + + ); + + case 'integer': + case 'number': + return 
( + { + const val = e.target.value; + if (val === '') { + onChange(undefined); + } else { + onChange( + param.type === 'integer' ? parseInt(val, 10) : parseFloat(val) + ); + } + }} + placeholder={param.description || param.name} + step={param.type === 'integer' ? 1 : 'any'} + type="number" + value={value !== undefined && value !== null ? String(value) : ''} + /> + ); + + case 'enum': + return ( + + ); + + case 'file': + case 'directory': + return ( +
+ onChange(e.target.value)} + placeholder={param.description || `Select a ${param.type}...`} + type="text" + value={value !== undefined && value !== null ? String(value) : ''} + /> + onChange(path)} + useServerPath + /> +
+ ); + + default: + return ( + onChange(e.target.value)} + placeholder={param.description || param.name} + type="text" + value={value !== undefined && value !== null ? String(value) : ''} + /> + ); + } +} + +function ParameterFieldRow({ + param, + value, + error, + onChange +}: { + readonly param: AppParameter; + readonly value: unknown; + readonly error?: string; + readonly onChange: (value: unknown) => void; +}) { + return ( +
+ {param.type !== 'boolean' ? ( + + ) : null} + {param.description && param.type !== 'boolean' ? ( + + {param.description} + + ) : null} + + {param.description && param.type === 'boolean' ? ( + + {param.description} + + ) : null} + {error ? ( + + {error} + + ) : null} +
+ ); +} + +function SectionContent({ + section, + values, + errors, + onParamChange +}: { + readonly section: AppParameterSection; + readonly values: Record; + readonly errors: Record; + readonly onParamChange: (paramId: string, value: unknown) => void; +}) { + return ( +
+ {section.parameters.map(param => ( + onParamChange(param.key, val)} + param={param} + value={values[param.key]} + /> + ))} +
+ ); +} + +function EnvVarRows({ + envVars, + setEnvVars +}: { + readonly envVars: EnvVar[]; + readonly setEnvVars: Dispatch>; +}) { + return ( +
+ + + Variables exported in the job script before the command runs + + {envVars.map((envVar, idx) => ( +
+ + setEnvVars(prev => + prev.map((v, i) => + i === idx ? { ...v, key: e.target.value } : v + ) + ) + } + placeholder="NAME" + type="text" + value={envVar.key} + /> + = + + setEnvVars(prev => + prev.map((v, i) => + i === idx ? { ...v, value: e.target.value } : v + ) + ) + } + placeholder="value" + type="text" + value={envVar.value} + /> + +
+ ))} + +
+ ); +} + +function EnvironmentSectionContent({ + envVars, + setEnvVars, + preRun, + setPreRun, + postRun, + setPostRun +}: { + readonly envVars: EnvVar[]; + readonly setEnvVars: Dispatch>; + readonly preRun: string; + readonly setPreRun: Dispatch>; + readonly postRun: string; + readonly setPostRun: Dispatch>; +}) { + const textareaClass = + 'w-full p-2 text-foreground border rounded-sm focus:outline-none bg-background border-primary-light focus:border-primary font-mono text-sm'; + + return ( +
+ + +
+ + + Shell commands to run before the main command (e.g. module loads) + +