Getting Started

Let's build your first AI pipeline. By the end of this page you'll have a working pipeline that calls an AI model and produces real output — no abstractions, no boilerplate, just a YAML file and a single command.

Prerequisites

  • Node.js 20+
  • One of the following for AI access (see Authentication below):
    • A logged-in Claude Code instance (recommended — zero config)
    • An AI provider API key (ANTHROPIC_API_KEY or OPENAI_API_KEY)

Install

bash
npm install -g @jigspec/cli

Verify the install worked:

bash
jigspec --version

Authentication

JigSpec needs credentials to call AI models. How you set them up depends on which models your pipelines use.

Anthropic models (anthropic/*)

If you use Claude Code and are already logged in, JigSpec picks up your session automatically — no extra setup needed.

Otherwise, set your API key:

bash
export ANTHROPIC_API_KEY=sk-ant-...

OpenAI models (openai/*)

bash
export OPENAI_API_KEY=sk-...

You can use both at the same time. Each step in a pipeline can target a different provider.

Your first pipeline

Create a file called hello.pipe.yaml:

yaml
pipeline:
  spec_version: "0.1"
  name: hello
  input:
    topic:
      type: string
  steps:
    - name: write
      action: ai
      prompt: |
        Write a short paragraph about {{ input.topic }}.
        Save it as result.md using the file_write tool.
      outputs: result.md
  output:
    text: "{{ write.result }}"

That's the whole thing. Let's read it:

  • input.topic — the pipeline expects one string: what to write about
  • steps — one step named write using the ai action
  • prompt — the instruction sent to the model; input.topic gets substituted at runtime
  • outputs: result.md — the step writes a file; JigSpec captures its path
  • output.text — the pipeline's final return value references the file the step wrote
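The substitution step can be pictured with a plain-shell stand-in. This is purely illustrative — JigSpec's actual templating engine isn't specified here — but it shows what happens to the prompt before it reaches the model:

```shell
# Illustration only: the {{ input.topic }} placeholder in the prompt
# is replaced with the value you pass on the command line.
prompt='Write a short paragraph about {{ input.topic }}.'
topic='space exploration'
printf '%s\n' "$prompt" | sed "s/{{ input.topic }}/$topic/"
# Write a short paragraph about space exploration.
```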

Running a pipeline

Run it with jigspec run:

bash
jigspec run hello.pipe.yaml --input topic="space exploration"

You'll see progress output on stderr, and result.md appears in a temporary workspace. That's it: you just ran an AI pipeline.

Passing input

There are three ways to pass input to a pipeline:

Key=value flags (simplest):

bash
jigspec run pipeline.pipe.yaml --input topic="quantum computing" --input style="casual"

JSON file (for complex or reusable input):

bash
jigspec run pipeline.pipe.yaml --input-file input.json
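For example, an input.json carrying the same keys as the flag example above might look like this (the field names come from your pipeline's own input declarations, not from JigSpec):

```json
{
  "topic": "quantum computing",
  "style": "casual"
}
```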

stdin (for piping from other commands):

bash
echo '{"topic": "quantum computing"}' | jigspec run pipeline.pipe.yaml

Getting structured output

Add --json to get machine-readable output on stdout:

bash
jigspec run hello.pipe.yaml --input topic="AI" --json

The output looks something like:

json
{
  "status": "completed",
  "run_id": "abc123",
  "workspace": "/tmp/jigspec-abc123",
  "outputs": { "text": "/tmp/jigspec-abc123/result.md" },
  "duration_ms": 4210
}

This makes it easy to chain pipelines with other tools — pipe the JSON to jq, feed it into another script, or use it in CI.
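For instance, you can pull the path of the generated file out of the run summary with jq. The JSON below is inlined so the snippet is self-contained; in a real run you'd pipe `jigspec run ... --json` straight into jq instead:

```shell
# Extract the path of the generated file from the run summary with jq.
echo '{"status":"completed","outputs":{"text":"/tmp/jigspec-abc123/result.md"}}' |
  jq -r '.outputs.text'
# /tmp/jigspec-abc123/result.md
```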

Add a second step

One step is useful. Two steps are a pipeline. Let's add a step that takes the first draft and improves it:

yaml
pipeline:
  spec_version: "0.1"
  name: hello
  input:
    topic:
      type: string
  steps:
    - name: draft
      action: ai
      prompt: |
        Write a one-paragraph draft about {{ input.topic }}.
        Save it as draft.md using the file_write tool.
      outputs: draft.md

    - name: refine
      action: ai
      prompt: |
        Read the draft at this path: {{ draft.result }}
        Then write a polished, improved version.
        Save it as final.md using the file_write tool.
      outputs: final.md

  output:
    final: "{{ refine.result }}"

Steps run in declaration order. The second step references draft.result — the file path that the draft step wrote. JigSpec passes it to refine automatically; you don't wire anything up manually.

Run it:

bash
jigspec run hello.pipe.yaml --input topic="quantum computing"

Switch to a different model

Every step can use a different model. To run the refine step on a Claude model instead of the default, add one line:

yaml
    - name: refine
      action: ai
      model: anthropic/claude-haiku-4-5   # <-- add this
      prompt: |
        ...

You can also set a pipeline-wide default model and override it per step:

yaml
pipeline:
  config:
    model: openai/gpt-4o-mini   # default for all steps
  steps:
    - name: draft
      action: ai
      # uses gpt-4o-mini

    - name: refine
      action: ai
      model: anthropic/claude-haiku-4-5   # overrides just this step

Validate before you run

Before running a pipeline, you can check it for errors without actually executing anything:

bash
jigspec validate hello.pipe.yaml
# ✓ hello.pipe.yaml is valid

If there's a problem — a missing required field, an unknown action, an unresolvable reference — validate will tell you exactly what and where. Exit code 0 means valid, 1 means invalid.
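Those exit codes make validation easy to script, for example as a gate in CI. A minimal sketch — `validate_pipeline` is a stand-in so the snippet runs without JigSpec installed; in practice you'd call `jigspec validate` directly:

```shell
# Gate the run on the documented exit codes: 0 means valid, 1 means invalid.
# validate_pipeline stands in for `jigspec validate hello.pipe.yaml`.
validate_pipeline() { return 0; }   # pretend validation passed

if validate_pipeline; then
  echo "valid: safe to run"
else
  echo "invalid: fix the pipeline first" >&2
  exit 1
fi
```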

Next steps

You've built a working two-step pipeline. From here:

  • Pipeline reference — full field documentation for the pipeline structure
  • Actions reference — all available actions: ai, code, and what's planned
  • CLI reference — all commands and flags: --input-file, --json, stdin piping
  • Examples — complete real-world pipeline walkthroughs you can copy and run

Released under the Apache 2.0 License.