Turn claude -p into a multi-turn REPL with one named pipe
The documented story is that claude -p is one-shot: you send a prompt, it runs, it exits. If you want multiple turns you reach for --continue or --resume, each of which starts a fresh process and pulls the previous session off disk. That works, but it's one process per turn and it doesn't help you drive a live session from a shell.
The undocumented story is that `--input-format stream-json` plus a named pipe plus one file-descriptor trick gives you a single, persistent `claude -p` process you can type at from any other shell command. It's eight lines of bash. Here's the whole recipe, followed by the line-by-line explanation.
The recipe
```shell
mkfifo input_pipe

cat input_pipe | claude -p \
  --input-format stream-json \
  --output-format stream-json \
  --verbose \
  | jq --unbuffered -r '
      if .type == "assistant" then
        (.message.content[] | select(.type == "text") | .text)
      elif .type == "result" then
        "--- done (cost: \(.total_cost_usd)) ---"
      else
        empty
      end' &

exec 3>input_pipe

# Send messages:
echo '{"type":"user","message":{"role":"user","content":"Hello"}}' >&3

# Cleanup when done:
exec 3>&-
rm input_pipe
```

Paste that into a bash shell. After the first `echo`, Claude's reply streams into your terminal. You can echo more messages any time you like; the session stays alive. When you close fd 3, the whole pipeline tears down cleanly.
Why each piece matters
mkfifo input_pipe
Creates a named pipe (FIFO) on the filesystem. A FIFO looks like a regular file in ls -l but acts like an in-memory pipe: writers queue data, readers drain it. The big difference from an anonymous shell pipe (|) is that a FIFO has a name, so anyone on the machine can open it for writing without being part of the same pipeline.
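A minimal demonstration of that property, using hypothetical file names (`demo_pipe`, `demo_out`):

```shell
mkfifo demo_pipe

# Background reader: drains the FIFO into a file. Blocks until a writer opens.
cat demo_pipe > demo_out &

# A completely separate command opens the FIFO by name and writes to it --
# no shared pipeline required.
echo "hello via FIFO" > demo_pipe

wait               # the reader exits once the writer closes (EOF)
cat demo_out       # → hello via FIFO
rm demo_pipe demo_out
```

Nothing is stored in the FIFO itself; it is just a named rendezvous point, and the bytes pass through a kernel buffer exactly as they would in an anonymous pipe.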
cat input_pipe | claude -p … &
A three-stage background pipeline:
- `cat input_pipe` reads from the FIFO and writes to its stdout. It blocks until a writer appears.
- `claude -p --input-format stream-json --output-format stream-json --verbose` takes newline-delimited JSON events on stdin, processes each one as a turn of the conversation, and emits newline-delimited JSON events on stdout. `--verbose` is required in both directions for the stream-json formats.
- `jq --unbuffered -r '...'` filters the output stream so you only see the human-readable assistant text and a small "done" footer per turn. The `--unbuffered` flag is not optional: without it jq buffers output and the REPL feels laggy.
- `&` runs the whole pipeline in the background, returning shell control to you so you can keep typing.
exec 3>input_pipe — the trick
This is the part most tutorials miss. FIFOs close as soon as the last writer goes away. If you ran echo ... > input_pipe directly, each echo would open the FIFO, write, and close — which would send EOF to cat, which would send EOF to claude, which would exit. One message per session.
exec 3>input_pipe opens file descriptor 3 as a persistent writer to the FIFO. As long as fd 3 is open, the FIFO stays open, which keeps cat reading, which keeps claude alive. Every echo ... >&3 after that just streams more data into the existing pipe without closing it.
When you're done, exec 3>&- closes fd 3, which closes the last writer, which cascades EOF down the pipeline and gives you a clean shutdown.
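The cascade is easy to watch without claude at all; here `cat` stands in for the whole pipeline (hypothetical names `fifo`, `log`):

```shell
mkfifo fifo
cat fifo > log &     # stand-in for the claude pipeline

exec 3>fifo          # persistent writer: the FIFO now stays open
echo "turn 1" >&3
echo "turn 2" >&3    # same reader, session still alive
exec 3>&-            # last writer gone, cat sees EOF and exits
wait

cat log              # prints both turns: they arrived through one long-lived reader
rm fifo log
```

Replace the `exec 3>fifo` with two bare `echo ... > fifo` redirections and the reader exits after the first one, which is exactly the one-message-per-session failure described above.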
echo '{..."user"...}' >&3
Send a user message. The JSON event shape is what `--input-format stream-json` expects: a `type` of `user` and a `message` with `role` and `content`. Every event has to fit on a single line (it's newline-delimited), so if you're sending multi-line prompts you want to JSON-encode them with `jq -c` or `python -c`:
```shell
PROMPT="Explain this error message:
$(cat /tmp/err.log)"

jq -nc --arg c "$PROMPT" \
  '{type:"user", message:{role:"user", content:$c}}' >&3
```

The jq filter, expanded
That one-liner is doing real work. Here's what each branch means:
- `if .type == "assistant"` matches the events Claude emits when it has something to say.
- `.message.content[] | select(.type == "text") | .text` walks the content array and pulls only text blocks, dropping `tool_use`, `tool_result`, and anything else in there. Useful for a clean REPL feel.
- `elif .type == "result"` matches the terminal event Claude emits at the end of each turn.
- `"--- done (cost: \(.total_cost_usd)) ---"` interpolates the running cost of the session so far into a little footer. Great for "did I just burn $2 on one question" situations.
- `else empty` suppresses everything else: tool events, system events, partial deltas. Add them back when you want richer output.
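To watch the filter work in isolation, pipe it a few hand-written events. The three sample events below are simplified sketches of the stream, not verbatim claude output:

```shell
printf '%s\n' \
  '{"type":"system","subtype":"init"}' \
  '{"type":"assistant","message":{"content":[{"type":"text","text":"Hi there"}]}}' \
  '{"type":"result","total_cost_usd":0.003}' |
jq -r '
  if .type == "assistant" then
    (.message.content[] | select(.type == "text") | .text)
  elif .type == "result" then
    "--- done (cost: \(.total_cost_usd)) ---"
  else
    empty
  end'
# Prints:
#   Hi there
#   --- done (cost: 0.003) ---
```

The system event falls through to `empty` and vanishes; only the text block and the cost footer survive.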
What this composes with
Because input is “anything that writes to fd 3” and output is “anything that reads stdout of the pipeline,” this pattern composes with every UNIX primitive you already know:
- Shell loop. `for f in *.py; do jq -nc --arg c "Review $f: $(cat $f)" ... >&3; done` drives a review session across a directory of files.
- File tail. `tail -f app.log | while read line; do ... >&3; done` feeds log lines to Claude as they arrive.
- TCP input. `socat` reading from a local port and writing to fd 3 turns this into a network-listening REPL.
- Second FIFO for output. Swap `jq --unbuffered` for `tee output_pipe` and a second reader, and now you have full-duplex IPC.
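A sketch of that last variant under stand-in assumptions: two FIFOs, with a shell loop playing the role of the claude pipeline so the example is self-contained:

```shell
mkfifo in_pipe out_pipe

# Stand-in responder: reads requests from one FIFO, answers on the other.
while read -r line; do echo "echo: $line"; done < in_pipe > out_pipe &

exec 3>in_pipe       # persistent writer into the responder
exec 4<out_pipe      # persistent reader from the responder

echo "hello" >&3     # send a request
read -r reply <&4    # block until the response arrives
echo "$reply"        # → echo: hello

exec 3>&- 4<&-       # close both ends: the responder exits on EOF
wait
rm in_pipe out_pipe
```

In the real pipeline the responder is the `claude -p` command, with its jq stage writing into the output FIFO instead of your terminal.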
Gotchas
- `--verbose` is required for stream-json formats on both input and output. Leave it off and the pipeline silently does nothing useful.
- `jq --unbuffered` matters more than it looks. Without it, jq buffers output and your REPL becomes "ask, wait 10 seconds, answer."
- Clean up the FIFO. Leaving `input_pipe` on disk after a crash means the next run's `mkfifo` will fail. Use `trap "rm -f input_pipe; exec 3>&-" EXIT` in a real script.
- One writer at a time. Multiple processes fanning into the same FIFO can interleave writes larger than PIPE_BUF (4096 bytes on Linux) and break JSON line boundaries. Fan in with a single shepherd if you need concurrency.
- Don't forget `--max-budget-usd` and `--max-turns`. A live REPL backed by a FIFO is a long-running claude session with no supervisor. Budget it like you would any other unattended run. See the CLI reference for the full flag list.
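The cleanup gotchas collapse into a few lines of boilerplate in a real script. A sketch, with `cat` standing in for the claude pipeline and `mktemp -u` supplying a collision-free FIFO path:

```shell
#!/usr/bin/env bash
set -euo pipefail

PIPE=$(mktemp -u)                      # unique path: no stale-FIFO collisions
mkfifo "$PIPE"
trap 'exec 3>&-; rm -f "$PIPE"' EXIT   # close the writer, then remove the FIFO

# The real claude pipeline goes here; cat stands in so the sketch runs anywhere.
cat "$PIPE" &

exec 3>"$PIPE"
echo '{"type":"user","message":{"role":"user","content":"Hello"}}' >&3
# ... more turns ...
```

The trap fires on any exit path, including crashes under `set -e`, so the next run's `mkfifo` never hits a stale pipe.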
The other half of the pipeline
This post is about the input side of stream-json. If you want to understand the output side — `system`/`api_retry` events, partial token deltas, and the ~30-line Node consumer that handles line-buffering correctly — that's the stream-json output deep dive. Together, input-stream-json and output-stream-json turn claude -p into a full-duplex agent protocol over stdio, which is what every background agent loop is actually built on.
Where the FIFO pattern stops
It's one process, one shell, one human (or script) at a time. The moment you want:
- Multiple parallel sessions, each on its own branch, without fighting over a single FIFO
- Runs triggered by webhooks instead of a human's shell
- State persistence across process restarts
- A real audit trail of who sent what, when
…you've outgrown the FIFO and you're describing a background agent loop. Same claude -p underneath, but now with worktree isolation, webhook handlers, and streamed events posted back to the issue tracker that triggered the run. See also git worktree for Claude Code — the isolation primitive that makes parallelism safe.
Takeaways
- `mkfifo` + `--input-format stream-json` + `exec 3>FIFO` gives you a multi-turn claude REPL in eight lines of bash.
- The fd-3 trick is the non-obvious piece: without it, each echo would close the FIFO and end the session.
- `--verbose` on both the input and output sides is mandatory. `jq --unbuffered` is mandatory for responsive output.
- The pattern composes with file tails, TCP sockets, shell loops, and a second FIFO for full-duplex IPC: all standard UNIX primitives, no SDK required.
- If you're reaching for this pattern to run anything unattended, add `--max-budget-usd` and `--max-turns` before the first `echo`.
Same protocol. Fleet scale. Cyrus.
Cyrus runs claude -p over stream-json per Linear issue, in isolated git worktrees, with rich mid-run approvals and streamed events back to the issue. Community self-hosted is free forever, BYOK across Claude, Codex, Cursor, and Gemini.
Try Cyrus free →