PRD: Script Provisioning Provider Extension
Problem Statement
Today, azd provision and azd down only work with declarative IaC tools — Bicep and Terraform. Many teams manage infrastructure with imperative shell scripts that call az cli, aws cli, kubectl, custom CLIs, or other tooling. These teams cannot adopt azd without rewriting their provisioning workflows into Bicep or Terraform.
The provisioning provider extensibility framework (PR #7482, Epic #7465) enables extensions to register custom provisioning providers via gRPC. This PRD defines a Script Provisioning Provider Extension that allows users to configure arbitrary shell scripts as their provisioning and deprovisioning workflow, bridging imperative scripting into azd's environment lifecycle.
Why This Matters
Adoption barrier: Teams with existing az cli scripts, Helm charts, kubectl apply workflows, or custom CLIs cannot use azd today without a full IaC rewrite.
Multi-cloud gap: Teams managing non-Azure infrastructure alongside Azure (e.g., database SaaS, CDN, DNS) have scripts that don't translate to Bicep/Terraform.
Gradual migration: Teams want to adopt azd's environment management immediately, then migrate to declarative IaC incrementally — scripts provide a pragmatic on-ramp.
Edge cases: Some provisioning tasks (seed data, certificate generation, secret rotation) are inherently imperative and don't fit declarative models.
Prior Art
azd extension framework #5381 (closed): Original proof-of-concept that implemented script provisioning as a built-in provider. This informed the design but used a bespoke approach that predates the extension framework.
PR #7482 / Epic #7465: The WithProvisioningProvider() fluent API and gRPC protocol this extension will build on.
Personas
1. Platform Engineer (Primary)
Who: Engineer responsible for infrastructure setup and maintenance. Has existing shell scripts (bash/PowerShell) that provision infrastructure by calling az cli, aws cli, Helm, kubectl, or custom tools.
Goals:
Use existing provisioning scripts with azd without rewriting them
Get azd's environment management (named environments, .env files, azd env commands) on top of existing scripts
Parameterize scripts so the same script works across dev/staging/prod environments
Pain Points:
Cannot adopt azd without converting scripts to Bicep/Terraform
Loses environment isolation when running scripts manually
No standard way to pass azd environment values into scripts or capture outputs back
Key Scenarios:
Sets up database, networking, and secrets via a series of az cli scripts
Provisions infrastructure that spans Azure and non-Azure services
Uses a custom internal CLI that the company's platform team maintains
2. DevOps Engineer (Secondary)
Who: Engineer who orchestrates multi-step provisioning workflows that don't fit a single IaC template — database migration, secret seeding, DNS configuration, certificate provisioning.
Goals:
Define a sequence of provisioning steps that execute in order
Pass outputs from one script to the next (e.g., resource group name → database connection string → app configuration)
Have clean teardown that reverses provisioning in the correct order
Pain Points:
Provisioning requires multiple tools/scripts that must run in sequence
No standard way to chain script outputs into downstream script inputs
Teardown is often forgotten or done in the wrong order
Key Scenarios:
Runs setup-rg.sh → setup-db.sh → setup-app.sh → seed-data.sh as a provisioning pipeline
Each script produces outputs consumed by subsequent scripts
azd down runs teardown scripts in reverse order
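To make the pipeline concrete, the first script in such a sequence might look like the sketch below. The script name follows the scenario above; the resource naming, the default values, and the commented-out az call are illustrative assumptions, not part of this PRD.

```bash
#!/usr/bin/env bash
# Hypothetical setup-rg.sh: first step of the provisioning pipeline.
# Publishes the resource group name for downstream scripts via outputs.json.
set -euo pipefail

# azd passes these in as environment variables; the defaults only keep the
# sketch runnable outside azd.
AZURE_ENV_NAME="${AZURE_ENV_NAME:-dev}"
AZURE_LOCATION="${AZURE_LOCATION:-eastus}"

rg_name="rg-myapp-${AZURE_ENV_NAME}"

# The real provisioning call would be something like:
# az group create --name "$rg_name" --location "$AZURE_LOCATION" --output none

# Outputs collected by the extension and passed to setup-db.sh, setup-app.sh, ...
cat > outputs.json <<EOF
{
  "outputs": {
    "RESOURCE_GROUP_NAME": { "type": "string", "value": "${rg_name}" }
  }
}
EOF
```

Downstream scripts then see RESOURCE_GROUP_NAME as an ordinary environment variable.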
3. Team Lead (Tertiary)
Who: Technical lead who wants to standardize their team's provisioning workflow using scripts everyone already knows, while getting azd's collaboration features (shared environments, CI/CD integration).
Goals:
Onboard the team to azd without requiring Bicep/Terraform skills
Ensure scripts are parameterized and environment-aware so the team can't accidentally provision to the wrong subscription
Provide a clear azure.yaml configuration that documents the provisioning workflow
Pain Points:
Team has tribal knowledge about which scripts to run and in what order
No single source of truth for the provisioning workflow
Risk of provisioning to wrong environment without guardrails
User Flows
Flow 1: First-Time Setup
Team Lead configures azure.yaml:
1. Sets `infra.provider: scripts`
2. Defines provision scripts under `infra.config.provision`
3. Defines destroy scripts under `infra.config.destroy`
4. Creates parameter files for scripts that need inputs
5. Commits to repo
Platform Engineer runs provisioning:
1. Clones repo
2. Runs `azd init` → selects/creates environment
3. Runs `azd provision`
4. azd loads extension, reads config
5. Extension resolves parameters:
a. Checks azd environment values
b. Checks OS environment variables
c. Prompts for missing required values (special UX for AZURE_LOCATION, AZURE_SUBSCRIPTION_ID)
6. Scripts execute sequentially, stdout/stderr streamed to console
7. Outputs captured and stored in azd environment
8. `azd provision` completes — environment is ready
Flow 2: Teardown
Platform Engineer tears down environment:
1. Runs `azd down`
2. azd confirms with user (unless --force)
3. Extension runs destroy scripts in defined order
4. Invalidated environment keys are cleaned up
5. Environment returns to pre-provisioned state
Flow 3: Team Collaboration
DevOps Engineer joins existing project:
1. Clones repo (azure.yaml + scripts already configured)
2. Runs `azd env new staging`
3. Runs `azd provision`
4. Prompted for staging-specific parameters
5. Scripts execute against staging environment
6. Outputs stored in staging environment — isolated from dev
Detailed Requirements
REQ-1: Configuration Schema
The extension reads its configuration from infra.config in azure.yaml, which the framework passes through as Options.Config map[string]any → google.protobuf.Struct.
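As an illustrative sketch of that configuration (field names follow the script entry schema described in this PRD; the script paths and names are hypothetical):

```yaml
# azure.yaml sketch: infra section only; paths and names are hypothetical
infra:
  provider: scripts
  config:
    provision:
      - name: Resource group
        shell: bash
        run: infra/scripts/setup-rg.sh
        parameters: infra/scripts/setup-rg.parameters.json
      - name: Database
        shell: bash
        run: infra/scripts/setup-db.sh
    destroy:
      - name: Teardown
        shell: bash
        run: infra/scripts/teardown.sh
```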
REQ-1.1: Basic Configuration
A basic configuration declares the provision and destroy script lists directly under infra.config.
REQ-1.2: Configuration with Parameters
Script entries may additionally reference parameter files that declare the inputs a script requires (see REQ-3).
REQ-1.3: Script Entry Schema
Each script entry supports the following fields:
shell (string, required): Shell used to run the script, one of bash, sh, pwsh, powershell
run (string, required): Path to the script file, relative to the project root
parameters (string, optional): Path to a parameter file declaring the script's inputs
name (string, optional): Display name used in prompts and progress messages; defaults to the script filename
continueOnError (bool, optional): If true, continue executing subsequent scripts even if this one fails. Default: false
REQ-1.4: Shell Type Support
Each shell value maps to the binary of the same name: bash → bash, sh → sh, pwsh → pwsh, powershell → powershell.
Acceptance Criteria:
Configuration is valid only if infra.config contains at least provision or destroy
Configuration is loaded and validated during Initialize()
Configuration errors reference the offending location in azure.yaml
REQ-2: Script Execution Model
REQ-2.1: Working Directory
Scripts execute with the project root (directory containing azure.yaml) as the working directory, regardless of the script's location.
REQ-2.2: Environment Variables
Scripts receive a merged environment consisting of (in priority order, highest wins):
OS environment variables
azd environment values (the .env file — e.g., AZURE_LOCATION, AZURE_SUBSCRIPTION_ID, prior provisioning outputs)
Resolved parameters
This ensures scripts can reference azd-managed values without explicitly reading .env files.
REQ-2.3: Shell Invocation
Scripts are invoked via:
bash / sh: <shell> <script-path>
pwsh / powershell: <shell> -NoProfile -NonInteractive -File <script-path>
The -NoProfile and -NonInteractive flags prevent user profile scripts from interfering and ensure no interactive prompts from the shell itself.
REQ-2.4: stdin / stdout / stderr
stdout and stderr are streamed to the user's console in real time (see REQ-9.2); scripts must not depend on interactive stdin input.
REQ-2.5: Exit Codes
Exit code 0 is success. Any non-zero exit code is a failure: stop the sequence (unless continueOnError: true) and report the error to azd.
Acceptance Criteria:
Scripts run with the project root as the working directory
continueOnError: true allows execution to continue after a script failure
Provisioning fails when a script with continueOnError: false fails
REQ-3: Parameter Resolution
REQ-3.1: Parameter File Format
Parameter files use a JSON format inspired by ARM template parameter files:
{
  "parameters": {
    "AZURE_LOCATION": { "type": "string", "value": "${AZURE_LOCATION}" },
    "DB_NAME": { "type": "string", "value": "mydb-${AZURE_ENV_NAME}" },
    "DB_PASSWORD": { "type": "string", "name": "Database Password", "secret": true },
    "REPLICA_COUNT": { "type": "integer", "value": "3" }
  }
}
REQ-3.2: Parameter Schema
Each parameter entry supports:
type (string, required): One of string, number, integer, boolean
value (string, optional): Default or interpolated value. Supports ${ENV_VAR} syntax.
name (string, optional): Human-readable name used in interactive prompts. Defaults to the parameter key.
secret (bool, optional): If true, input is masked during prompts. Default: false
REQ-3.3: Resolution Order
Parameters are resolved in the following order. The first source that provides a value wins:
Environment variable interpolation: If value contains ${VAR_NAME}, resolve from azd environment values, then OS environment variables. If the referenced variable exists and is non-empty, use it.
Literal value: If value is set and contains no unresolved ${...} placeholders, use it as-is.
Interactive prompt: If no value can be resolved, prompt the user interactively.
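The placeholder resolution in steps 1 and 2 can be sketched as follows (the function name and map-based environment lookups are illustrative, not the extension's actual API):

```go
package main

import (
	"regexp"
	"strings"
)

// placeholderRe matches ${VAR_NAME} placeholders in a parameter value.
var placeholderRe = regexp.MustCompile(`\$\{[A-Za-z_][A-Za-z0-9_]*\}`)

// resolveValue applies the REQ-3.3 order: each ${VAR} placeholder is looked up
// first in the azd environment values, then in the OS environment. It returns
// the resolved string and false if any placeholder stayed unresolved, meaning
// the caller should fall back to an interactive prompt (step 3).
func resolveValue(value string, azdEnv, osEnv map[string]string) (string, bool) {
	ok := true
	resolved := placeholderRe.ReplaceAllStringFunc(value, func(m string) string {
		name := strings.TrimSuffix(strings.TrimPrefix(m, "${"), "}")
		if v, found := azdEnv[name]; found && v != "" {
			return v
		}
		if v, found := osEnv[name]; found && v != "" {
			return v
		}
		ok = false
		return m // leave the placeholder in place for error reporting
	})
	return resolved, ok
}
```

For example, resolving "mydb-${AZURE_ENV_NAME}" against an azd environment with AZURE_ENV_NAME=dev yields "mydb-dev"; a value with an unknown placeholder comes back unresolved, triggering the prompt path.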
REQ-3.4: Special Parameter Handling
Certain well-known parameter names trigger enhanced UX:
AZURE_LOCATION: shows the azd location picker (same as azd provision for Bicep)
AZURE_SUBSCRIPTION_ID: shows the azd subscription picker (same as azd provision for Bicep)
These are resolved via the azd environment first; the picker only appears if the value is missing.
REQ-3.5: Parameter Persistence
When a parameter is resolved via interactive prompt, the resolved value MUST be stored in the azd environment (.env file) so that:
Subsequent azd provision runs don't re-prompt
Other commands (azd deploy, azd env get-values) can access the values
The value is associated with the specific azd environment (not global)
Secret parameters are stored with the same mechanism as other azd secrets.
Acceptance Criteria:
Parameter files are loaded and validated during Initialize()
${VAR_NAME} interpolation resolves from azd environment, then OS environment
Unresolved parameters trigger interactive prompts
AZURE_LOCATION triggers the standard azd location picker
AZURE_SUBSCRIPTION_ID triggers the standard azd subscription picker
Secret parameters are masked during prompts
Prompted values are persisted to the azd environment
Invalid parameter file JSON produces a clear error message
REQ-4: Output Collection
REQ-4.1: Output Mechanism
After each provisioning script completes successfully, the extension searches for an outputs.json file to collect provisioning outputs.
Search strategy: Starting from the directory containing the script, walk up the directory tree toward the project root, stopping at the first outputs.json found.
REQ-4.2: Output File Format
{
  "outputs": {
    "RESOURCE_GROUP_NAME": { "type": "string", "value": "rg-myapp-dev" },
    "DATABASE_CONNECTION_STRING": { "type": "string", "value": "Server=myserver.database.windows.net;..." },
    "STORAGE_ACCOUNT_ID": { "type": "string", "value": "/subscriptions/.../storageAccounts/..." }
  }
}
REQ-4.3: Output Processing
Collected outputs are stored in the azd environment and returned to azd in the DeployResult.Deployment.Outputs map.
REQ-4.4: Inter-Script Output Passing
Outputs from script N are added to the environment variables available to script N+1. This enables chaining.
Acceptance Criteria:
The extension finds outputs.json by walking up from the script directory
Outputs are returned in the DeployResult.Deployment.Outputs map
A missing outputs.json is not an error (scripts may have no outputs)
REQ-5: Lifecycle Mapping
REQ-5.1: azd provision → Deploy
Initialize(): Load and validate infra.config, verify script files exist, verify shells are available
EnsureEnv(): Load all parameter files, resolve parameters (interpolation → prompt), persist prompted values
Parameters(): Return parameter metadata for azd to display
Deploy(): Execute provision scripts sequentially, collect outputs, return DeployResult
REQ-5.2: azd down → Destroy
Initialize(): Load and validate infra.config (destroy section)
Destroy(): Execute destroy scripts sequentially
options.Force(): Skip user confirmation (handled by azd core, but the extension should respect it)
options.Purge(): Pass as AZD_PURGE=true environment variable to scripts
REQ-5.3: azd provision --preview → Preview
Script-based provisioning has limited preview capability. The extension should:
Return a DeployPreviewResult listing the scripts that would be executed, with their names and shell types
Do not actually execute any scripts
Include a note that script-based provisioning cannot predict infrastructure changes
REQ-5.4: State
State() returns the current outputs from the azd environment. Since scripts don't track declarative state like Bicep/Terraform, the extension reports:
Current outputs (from previous provisioning runs stored in the environment)
An empty resources list (scripts don't track individual Azure resources)
REQ-5.5: PlannedOutputs
PlannedOutputs() returns the union of all output keys declared across all parameter files and any previously collected outputs. This enables azd's multi-layer system to understand what outputs this provider will produce.
Acceptance Criteria:
azd provision executes provision scripts and stores outputs in the environment
azd down executes destroy scripts
azd provision --preview lists scripts without executing them
State() returns previously-stored outputs
AZD_PURGE=true is set when azd down --purge is used
Destroy scripts receive the current azd environment values
REQ-6: Error Handling
REQ-6.1: Script Failure
When a script exits with a non-zero exit code:
The extension captures the last N lines (configurable, default 50) of stderr as the error message
Execution of subsequent scripts stops (unless continueOnError: true)
The Deploy() / Destroy() method returns an error with:
Script name and path
Exit code
Captured stderr output
Any outputs collected from prior successful scripts are still returned (partial results)
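The stderr excerpt described above can be produced by a small helper (name hypothetical):

```go
package main

import "strings"

// lastLines returns at most n trailing lines of s, used to build the
// REQ-6.1 error message from captured stderr (default n = 50).
func lastLines(s string, n int) string {
	lines := strings.Split(strings.TrimRight(s, "\n"), "\n")
	if len(lines) > n {
		lines = lines[len(lines)-n:]
	}
	return strings.Join(lines, "\n")
}
```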
REQ-6.2: Script Not Found
If a configured script file doesn't exist, Initialize() returns an error before any execution
Error message includes the full resolved path and the azure.yaml config line
REQ-6.3: Shell Not Found
If the specified shell binary is not available on the system, Initialize() returns an error
Error message suggests installation steps for the missing shell
REQ-6.4: Parameter File Errors
Invalid JSON in parameter files: error during Initialize() with file path and parse error
Missing parameter file: error during Initialize() with full resolved path
Unresolvable ${VAR} in non-interactive mode: error with the variable name and available sources
REQ-6.5: Timeout
Scripts have no default timeout (to support long-running provisioning)
A future enhancement could add an optional timeout field per script entry
Acceptance Criteria:
Non-zero exit codes produce errors with script name, exit code, and stderr excerpt
Partial outputs from successful scripts are preserved on failure
Missing scripts/shells/parameter files are caught during Initialize()
All error messages include enough context to diagnose the problem without debugging
REQ-7: Security
REQ-7.1: Script Path Validation
Script paths must be relative and resolve to a location within the project directory
Paths containing .. that escape the project root are rejected
Absolute paths are rejected with a descriptive error
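A sketch of that validation (helper name hypothetical; a real implementation might also resolve symlinks before checking):

```go
package main

import (
	"errors"
	"path/filepath"
	"strings"
)

// validateScriptPath enforces REQ-7.1: the configured path must be relative,
// and after cleaning it must not climb out of the project root via "..".
func validateScriptPath(configured string) error {
	if filepath.IsAbs(configured) {
		return errors.New("script paths must be relative to the project root: " + configured)
	}
	clean := filepath.Clean(configured)
	if clean == ".." || strings.HasPrefix(clean, ".."+string(filepath.Separator)) {
		return errors.New("script path escapes the project root: " + configured)
	}
	return nil
}
```

Cleaning first matters: a path like "infra/../../x.sh" normalizes to "../x.sh" and is rejected even though it does not start with "..".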
REQ-7.2: Secret Parameters
Parameters with secret: true are masked in interactive prompts (input not echoed)
Secret values are not logged to stdout/stderr by the extension itself
Secret values are passed to scripts as environment variables (same as non-secret values — scripts are responsible for not logging them)
REQ-7.3: Shell Injection Prevention
Script paths are passed as arguments to the shell binary, not interpolated into a command string
Parameter values are passed as environment variables, not interpolated into command strings
The extension does not construct or execute shell command strings dynamically
Acceptance Criteria:
Script paths that escape the project root are rejected
Absolute script paths are rejected
Secret parameters are masked during interactive prompts
No shell injection vectors via parameter values or script paths
REQ-8: Multi-Script Orchestration
REQ-8.1: Sequential Execution
Provision and destroy scripts execute sequentially in the order defined in azure.yaml. There is no parallel execution (by design — scripts often have implicit dependencies).
REQ-8.2: Environment Accumulation
Each script in the sequence receives:
Base environment (OS env + azd env + resolved parameters) — same for all scripts
Plus outputs from all previously completed scripts in the sequence
This creates a pipeline effect where early scripts can produce values consumed by later scripts.
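The accumulation can be sketched as (helper name hypothetical):

```go
package main

// mergeEnv builds the environment for script N per REQ-8.2: the base
// environment (OS env + azd env + resolved parameters) overlaid with the
// accumulated outputs of scripts 1..N-1. Later sources win on key conflicts,
// so a re-published output from a later script supersedes an earlier one.
func mergeEnv(base map[string]string, priorOutputs ...map[string]string) map[string]string {
	merged := make(map[string]string, len(base))
	for k, v := range base {
		merged[k] = v
	}
	for _, outputs := range priorOutputs {
		for k, v := range outputs {
			merged[k] = v
		}
	}
	return merged
}
```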
REQ-8.3: Destroy Order
Destroy scripts execute in the order defined in the destroy list. The user is responsible for defining the correct teardown order (typically reverse of provisioning). The extension does not automatically reverse the order.
Acceptance Criteria:
Provision scripts execute in defined order
Destroy scripts execute in defined order
Outputs accumulate across scripts in a sequence
Failure in script N prevents scripts N+1..M from executing (default behavior)
REQ-9: Progress Reporting
REQ-9.1: Progress Messages
The extension reports progress to azd via the ProgressFunc callback provided by the framework:
Script starting: Running script: <name> (<shell>)
Script completed: Completed: <name>
Script failed: Failed: <name> (exit code: <code>)
Collecting outputs: Collecting outputs from: <name>
REQ-9.2: Console Output
Script stdout and stderr are streamed to the console in real-time. The extension does not buffer or reformat script output — it appears exactly as the script produces it.
Acceptance Criteria:
Progress callbacks are sent for script start, completion, and failure
Script stdout/stderr appear in real-time in the user's terminal
Progress messages include the script's display name (or filename if no name specified)
Extension Metadata
extension.yaml
id: microsoft.azd.scripts
namespace: scripts
displayName: Script Provisioning Provider
description: >
  Enables custom shell scripts (bash, sh, pwsh, powershell) as a provisioning
  provider for azd. Configure scripts in azure.yaml to run during
  `azd provision` and `azd down`.
version: 0.1.0
language: go
capabilities:
  - provisioning-provider
providers:
  - name: scripts
    type: provisioning
    description: >
      Provisions infrastructure by executing user-defined shell scripts
      configured in azure.yaml under infra.config.
    tags:
      - provisioning
      - scripts
      - bash
      - powershell
      - infrastructure
Non-Goals (Explicit Exclusions)
These are intentionally out of scope for v1:
Parallel script execution — Scripts run sequentially. Parallelism adds complexity around output merging, error handling, and progress reporting.
Script dependency graphs — Users define order explicitly in YAML. Topological sorting based on declared dependencies is a future enhancement.
Built-in retry logic — Scripts handle their own retry. The extension doesn't retry failed scripts.
Script timeout — No default timeout. Long-running provisioning is common and we don't want to introduce arbitrary limits.
Automatic destroy ordering — Users define destroy order explicitly. The extension doesn't infer reverse order from provision scripts.
State tracking / drift detection — Scripts are imperative; there's no state file like Terraform. State() returns previously-stored outputs only.
Windows cmd.exe support — Only bash, sh, pwsh, and powershell are supported. cmd.exe scripts should be wrapped in PowerShell.
Open Questions
Auto-install: When azure.yaml sets infra.provider: scripts, should azd automatically install the microsoft.azd.scripts extension? (See "Auto-install extension support for custom provisioning providers" #7502 for framework support.)
Output file location: The POC searched upward from the script directory. Should we instead use a fixed location (e.g., project root) or let users configure it per-script?
Parameter file format evolution: Should we support YAML parameter files in addition to JSON? The ARM-style JSON format is familiar but verbose.
Multi-layer support: Should script provisioning support azd's multi-layer system (infra.layers[]), or is that unnecessary complexity for script-based workflows?
azd provision --preview depth: Should preview execute scripts with a --dry-run flag (convention-based) or only list scripts as proposed?
References
azd extension framework #5381: WIP: Adds support for provision providers via azd extension framework (original POC, closed)
cli/azd/extensions/extension.schema.json
cli/azd/pkg/infra/provisioning/provider.go — Provider interface
Options.Config map[string]any (added in PR "feat: Add provisioning provider support to extension framework" #7482)
Implementation Sequence (Suggested)
Following the decision-making framework of hard dependencies first → high-risk unknowns → foundation → features → polish:
Scaffolding first: extension.yaml, main.go, build scripts, CI
Parse Options.Config into typed script configuration structs
Collect outputs.json and implement inter-script output passing