Pipeline
A pipeline is a YAML file that describes what your AI workflow does: what goes in, what steps run, and what comes out. Think of it like a recipe — ingredients go in, a dish comes out.
The simplest possible pipeline needs little more than a name, a step, and an output:

```yaml
pipeline:
  name: greet
  input:
    name: string
  steps:
    - name: greet
      action: ai
      prompt: "Say hello to {{ input.name }}"
  output: "{{ greet.text }}"
```

That's it. A few lines of YAML, and you have a working AI pipeline.
Why pipelines?
You could just call an AI API directly. So why wrap it in a pipeline file?
Because pipelines give you something raw API calls don't: a shareable, inspectable, rerunnable description of what your workflow does. You can version-control it, diff it, hand it to someone else, and know exactly what will happen when it runs.
The structure

Every pipeline has four parts:

```yaml
pipeline:
  name: my-pipeline        # unique identifier
  input:                   # what the caller provides
    topic: string
  steps:                   # the work
    - name: write
      action: ai
      prompt: "Write about {{ input.topic }}"
  output: "{{ write.text }}"  # what comes back
```

name
A unique identifier for this pipeline. Use lowercase with hyphens: summarize-article, triage-support-ticket.
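The doc gives examples of the lowercase-with-hyphens convention but not an exact grammar. Assuming that convention is the whole rule (an assumption, not spec text), a name check might look like:

```python
import re

# Hypothetical pattern: lowercase alphanumeric words joined by single hyphens.
# The actual grammar is an assumption; the doc only shows examples.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_pipeline_name(name: str) -> bool:
    """Return True if `name` follows the lowercase-with-hyphens convention."""
    return bool(NAME_RE.match(name))

print(is_valid_pipeline_name("summarize-article"))   # True
print(is_valid_pipeline_name("Triage_Support"))      # False
```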
input
Declares what data the pipeline needs. Each field has a name and a type:
```yaml
input:
  url: string
  max_words: number
  tags:
    type: array
    items: string
```

Input is optional — some pipelines don't need it. But declaring it makes your pipeline self-documenting.
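The shorthand form (`url: string`) and the long form (`type: array`, `items: string`) can be normalized to a single shape before validation. This is an illustrative sketch, not part of the spec:

```python
def normalize_input_schema(schema: dict) -> dict:
    """Expand shorthand field declarations ("url": "string") into the
    long form ({"type": "string"}). Long-form dicts pass through as-is."""
    normalized = {}
    for field, decl in schema.items():
        if isinstance(decl, str):          # shorthand: just a type name
            normalized[field] = {"type": decl}
        else:                              # already long form
            normalized[field] = dict(decl)
    return normalized

schema = {
    "url": "string",
    "max_words": "number",
    "tags": {"type": "array", "items": "string"},
}
print(normalize_input_schema(schema))
```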
steps
The list of things to do. Steps run in order by default. Each step has a name and an action. For full details, see Steps.
output
What the pipeline returns. This is a template reference pointing to some step's output:
```yaml
output: "{{ summarize.text }}"
```

Or a structured object:

```yaml
output:
  summary: "{{ summarize.text }}"
  category: "{{ classify.category }}"
```

If you omit output, the pipeline returns nothing (useful for side-effect pipelines that write files or send emails).
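Conceptually, the runtime resolves each `{{ step.field }}` reference against the outputs of completed steps. A minimal sketch of that resolution (the names and regex are illustrative, not the runtime's actual implementation):

```python
import re

# Matches references like {{ summarize.text }} with optional inner whitespace.
TEMPLATE_RE = re.compile(r"\{\{\s*(\w+)\.(\w+)\s*\}\}")

def resolve(template: str, results: dict) -> str:
    """Replace {{ step.field }} references with values from `results`."""
    def sub(m):
        step, field = m.group(1), m.group(2)
        return str(results[step][field])
    return TEMPLATE_RE.sub(sub, template)

results = {
    "summarize": {"text": "Three bullets."},
    "classify": {"category": "news"},
}
output_spec = {
    "summary": "{{ summarize.text }}",
    "category": "{{ classify.category }}",
}
output = {key: resolve(value, results) for key, value in output_spec.items()}
print(output)  # {'summary': 'Three bullets.', 'category': 'news'}
```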
The spec_version field
JigSpec is versioned. You can pin a pipeline to a spec version:
```yaml
pipeline:
  name: my-pipeline
  spec_version: "0.1"
  # ...
```

This tells the runtime which version of the spec to validate against. If you omit it, the runtime uses its default spec version.
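A runtime might resolve the effective version like this. A sketch under the assumption that versions are opaque strings checked against a supported set; the real matching rules are not specified here:

```python
SUPPORTED_SPEC_VERSIONS = {"0.1"}   # hypothetical: what this runtime supports
DEFAULT_SPEC_VERSION = "0.1"        # hypothetical runtime default

def effective_spec_version(pipeline: dict) -> str:
    """Return the pinned spec_version, or the runtime default if omitted."""
    version = pipeline.get("spec_version", DEFAULT_SPEC_VERSION)
    if version not in SUPPORTED_SPEC_VERSIONS:
        raise ValueError(f"unsupported spec_version: {version}")
    return version

print(effective_spec_version({"name": "my-pipeline", "spec_version": "0.1"}))
print(effective_spec_version({"name": "my-pipeline"}))  # falls back to default
```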
Building up: a real example
Start with a prompt — the simplest step:
```yaml
pipeline:
  name: summarize
  input:
    text: string
  steps:
    - name: summarize
      action: ai
      prompt: "Summarize this in 3 bullets: {{ input.text }}"
  output: "{{ summarize.text }}"
```

Now add a second step that uses the first one's output:
```yaml
pipeline:
  name: summarize-and-score
  input:
    text: string
  steps:
    - name: summarize
      action: ai
      prompt: "Summarize this in 3 bullets: {{ input.text }}"
    - name: score
      action: ai
      prompt: "Rate this summary from 1-10 for clarity: {{ summarize.text }}"
  output:
    summary: "{{ summarize.text }}"
    score: "{{ score.text }}"
```

Data flows forward: later steps can reference outputs from earlier steps using the `{{ step_name.field }}` template syntax.
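That forward flow can be modeled as a context that grows as each step completes, with later templates reading from it. A toy runner, with the `ai` action stubbed out since the real runtime is not specified here:

```python
import re

TEMPLATE_RE = re.compile(r"\{\{\s*(\w+)\.(\w+)\s*\}\}")

def render(template: str, context: dict) -> str:
    """Resolve {{ step.field }} references against the running context."""
    return TEMPLATE_RE.sub(
        lambda m: str(context[m.group(1)][m.group(2)]), template)

def run_pipeline(steps: list, pipeline_input: dict) -> dict:
    context = {"input": pipeline_input}
    for step in steps:                      # steps run in order by default
        prompt = render(step["prompt"], context)
        # Stub "ai" action: echo the prompt instead of calling a model.
        context[step["name"]] = {"text": f"[ai] {prompt}"}
    return context

ctx = run_pipeline(
    [
        {"name": "summarize", "action": "ai",
         "prompt": "Summarize this in 3 bullets: {{ input.text }}"},
        {"name": "score", "action": "ai",
         "prompt": "Rate this summary from 1-10 for clarity: {{ summarize.text }}"},
    ],
    {"text": "Hello world"},
)
print(ctx["score"]["text"])
```

The second step's prompt only resolves because the first step's output is already in the context when `score` runs.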
Keep names descriptive
Step names become data references. summarize.text is clearer than s1.text. Choose names that make the data flow readable.
Secrets
If your pipeline needs API keys or credentials, declare them:
```yaml
pipeline:
  name: fetch-and-summarize
  secrets:
    - OPENAI_API_KEY
    - INTERNAL_API_KEY
  steps:
    - name: fetch
      action: code
      run: |
        const resp = await fetch(input.url, {
          headers: { Authorization: `Bearer ${secrets.INTERNAL_API_KEY}` }
        })
        // ...
```

Secrets are injected at runtime from environment variables. For more, see Data References — Secrets.
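Assuming secrets map one-to-one to environment variables of the same name (as the text states), injection could be sketched as a fail-fast lookup at startup:

```python
import os

def load_secrets(declared: list) -> dict:
    """Read each declared secret from the environment; fail fast if missing."""
    secrets = {}
    for name in declared:
        value = os.environ.get(name)
        if value is None:
            raise RuntimeError(f"missing secret: {name}")
        secrets[name] = value
    return secrets

# Simulated environment for the example; real values come from the deployment.
os.environ["OPENAI_API_KEY"] = "sk-test"
os.environ["INTERNAL_API_KEY"] = "internal-test"
print(load_secrets(["OPENAI_API_KEY", "INTERNAL_API_KEY"]))
```

Failing before any step runs means a half-finished pipeline never stalls on a missing credential.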