# Examples
These are complete, runnable pipeline examples. Copy any of them into a `.pipe.yaml` file and run it with `jigspec run`.
## Two-step summarizer
A pipeline that drafts content and then refines it. Two AI steps, two different models — shows how provider switching works in practice.
### The pipeline
```yaml
pipeline:
  spec_version: "0.1"
  name: two-step-demo
  config:
    model: openai/gpt-4o-mini
  input:
    topic:
      type: string
  steps:
    - name: draft
      action: ai
      prompt: |
        Write a one-paragraph draft about {{ input.topic }}.
        Save it as draft.md using the file_write tool.
      outputs: draft.md
    - name: refine
      action: ai
      model: anthropic/claude-haiku-4-5
      prompt: |
        Read the draft at this path: {{ draft.result }}
        Then write a refined, polished version.
        Save it as final.md using the file_write tool.
      outputs: final.md
  output:
    final: "{{ refine.result }}"
```

### What each part does
- **`config.model: openai/gpt-4o-mini`** — sets the default model for all steps. Any step that doesn't specify its own `model` uses this one.
- **`draft` step** — asks the model to write a first draft about the input topic and save it as `draft.md`. The `outputs: draft.md` field tells JigSpec to capture that file when the step finishes.
- **`refine` step** — switches to Anthropic's Claude. It receives the path to the draft file via `{{ draft.result }}` (the captured output path from the previous step), reads it, and writes an improved version. The `model:` field on this step overrides the pipeline default.
- **`output.final`** — the pipeline returns the path of `final.md`, the output file from the `refine` step.
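To make the default-vs-override rule concrete, here is a minimal sketch. The step names and placeholder prompts below are illustrative, not part of the example above:

```yaml
pipeline:
  config:
    model: openai/gpt-4o-mini            # pipeline-wide default
  steps:
    - name: uses-default                 # no model: field, so this step
      action: ai                         # runs on openai/gpt-4o-mini
      prompt: ...
    - name: uses-override
      action: ai
      model: anthropic/claude-haiku-4-5  # this step alone switches models
      prompt: ...
```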
### How to run it
```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
jigspec run two-step-demo.pipe.yaml --input topic="the history of the internet"
```

### Expected output
Progress is printed to stderr as each step runs, then the pipeline completes. The workspace directory (e.g., `/tmp/jigspec-abc123/`) contains both `draft.md` and `final.md`. The `output.final` value is the path to `final.md`.
## Research assistant
A multi-step pipeline that researches a topic, writes a summary, and then reviews and scores it. Shows how steps chain together and how output can be structured.
### The pipeline
```yaml
pipeline:
  spec_version: "0.1"
  name: research-assistant
  config:
    model: openai/gpt-4o-mini
  input:
    topic:
      type: string
    depth:
      type: string
  steps:
    - name: research
      action: ai
      prompt: |
        Research the following topic thoroughly: {{ input.topic }}
        Depth level: {{ input.depth }}
        Write a research document covering:
        - Key concepts and definitions
        - Current state and recent developments
        - Practical applications
        - Open questions or controversies
        Save it as research.md using the file_write tool.
      outputs: research.md
    - name: summarize
      action: ai
      prompt: |
        Read the research document at: {{ research.result }}
        Write a concise executive summary (3-5 bullet points) that captures
        the most important insights. Save it as summary.md using the
        file_write tool.
      outputs: summary.md
    - name: review
      action: ai
      model: anthropic/claude-haiku-4-5
      prompt: |
        You are a critical reviewer. Read the research at: {{ research.result }}
        And the summary at: {{ summarize.result }}
        Evaluate the quality of the summary. Does it accurately capture the
        most important points from the research? What is missing?
        Write your review as review.md with a score (1-10) and explanation.
        Save it using the file_write tool.
      outputs: review.md
  output:
    research: "{{ research.result }}"
    summary: "{{ summarize.result }}"
    review: "{{ review.result }}"
```

### What each part does
- **`research` step** — uses both `input.topic` and `input.depth` in its prompt. The `depth` input lets callers control how detailed the research should be (e.g., "brief" vs. "comprehensive").
- **`summarize` step** — receives the research file path from the previous step. It does not need to know the filename; `{{ research.result }}` resolves to whatever path the `research` step wrote.
- **`review` step** — switches models to Claude Haiku for the critical review. It reads both the original research and the summary, giving it full context to evaluate the summary's quality.
- **`output`** — three output values; the caller receives all three file paths and can read or display any of them.
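The `{{ step.result }}` chaining can be pictured as simple placeholder substitution against a map of captured output paths. The sketch below is illustrative only: the `render` helper, the `context` mapping, and the paths are hypothetical, not JigSpec's actual template engine:

```python
import re

# Hypothetical captured outputs, keyed by step name (paths are made up).
context = {
    "research":  {"result": "/tmp/ws/research.md"},
    "summarize": {"result": "/tmp/ws/summary.md"},
}

def render(template: str) -> str:
    """Substitute {{ step.attr }} placeholders with captured values."""
    pattern = re.compile(r"\{\{\s*(\w+)\.(\w+)\s*\}\}")
    return pattern.sub(lambda m: context[m.group(1)][m.group(2)], template)

print(render("Read the research document at: {{ research.result }}"))
# Read the research document at: /tmp/ws/research.md
```

This is why the `summarize` step never hardcodes a filename: it only sees whatever path was captured under `research.result`.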
### How to run it
```bash
jigspec run research-assistant.pipe.yaml \
  --input topic="large language models" \
  --input depth="comprehensive"
```

Or with a JSON file:
```bash
cat > input.json << 'EOF'
{
  "topic": "large language models",
  "depth": "comprehensive"
}
EOF
jigspec run research-assistant.pipe.yaml --input-file input.json
```

### Expected output
Three files in the workspace: `research.md` (detailed notes), `summary.md` (bullet-point summary), and `review.md` (the critical score and feedback). The pipeline outputs the paths to all three.
## Claude Agent SDK — tool-using pipeline
A pipeline that uses the Claude Agent SDK with filesystem tools. Shows how to declare tools at the pipeline level and let the model use them directly.
> **INFO**
> This example requires Claude (an Anthropic API key or Claude Max/Pro OAuth). OpenAI models do not use the Claude Agent SDK.
### The pipeline
```yaml
pipeline:
  spec_version: "0.1"
  name: claude-agent-demo
  config:
    model: anthropic/claude-haiku-4-5
    tools: [Read, Write, Glob]
  input:
    topic:
      type: string
  steps:
    - name: summarize
      action: ai
      prompt: |
        You are in a fresh empty directory. Do the following in order:
        1. Use the Write tool to create source.md with a 5-sentence
           description of {{ input.topic }}.
        2. Use the Glob tool to list all *.md files in the current directory.
        3. Use the Read tool to read source.md.
        4. Use the Write tool to create summary.md containing a 3-bullet
           summary of what you read.
      outputs: [source.md, summary.md]
      max_attempts: 2
  output:
    source: '{{ summarize["source.md"] }}'
    summary: '{{ summarize["summary.md"] }}'
```

### What each part does
- **`tools: [Read, Write, Glob]`** — at the pipeline level, this allow-lists the tools the model can use. The Claude Agent SDK provides these filesystem tools; JigSpec enforces the allow-list at runtime.
- **`outputs: [source.md, summary.md]`** — when a step produces multiple files, list them as an array. Each file can then be referenced individually: `summarize["source.md"]` and `summarize["summary.md"]`.
- **`max_attempts: 2`** — if the step fails (e.g., a required output file is missing), retry once before failing.
- **`output`** — bracket-style output references (`summarize["source.md"]`) are used because the filenames contain a `.` character, which dot-style references would misparse.
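Why bracket syntax? If captured outputs live in a mapping keyed by filename, a dotted reference like `summarize.source.md` is ambiguous, while a bracketed key is not. A small illustration in plain Python terms (the `outputs` mapping and paths here are hypothetical):

```python
# Hypothetical captured outputs for the summarize step (paths are made up).
outputs = {
    "summarize": {
        "source.md": "/tmp/ws/source.md",
        "summary.md": "/tmp/ws/summary.md",
    },
}

# "summarize.source.md" could split as ("summarize", "source.md") or as
# ("summarize.source", "md"); the bracketed form fixes the key boundary:
path = outputs["summarize"]["source.md"]
print(path)  # /tmp/ws/source.md
```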
### How to run it
```bash
export ANTHROPIC_API_KEY=sk-ant-...
jigspec run claude-agent-demo.pipe.yaml --input topic="artificial intelligence"
```

If you're logged into Claude Code, Anthropic models work automatically — no API key needed.
### Expected output
Two files: `source.md` (the 5-sentence description) and `summary.md` (the 3-bullet summary). Both file paths are returned in the pipeline output.