# Drain pipeline
`createDrainPipeline` wraps any drain in batch + retry + buffer-overflow protection. Required for non-trivial production volume; supports fanout to multiple drains in parallel.

Every drain in production should be wrapped in the drain pipeline via `createDrainPipeline()`. It batches events, retries on transient failures, and drops the oldest events when the buffer overflows. Without it, you make one HTTP request per emitted event, which doesn't scale beyond local dev.
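As a rough mental model, the batching half of the pipeline collects events and forwards them downstream in chunks. This is an illustrative sketch only, not evlog's implementation; `Drain` and `batchingDrain` are made-up names:

```ts
// Illustrative sketch — evlog's real pipeline also handles flush intervals,
// retries, and async delivery. Names here are hypothetical.
type Drain = (events: string[]) => void

function batchingDrain(inner: Drain, batchSize: number, maxBuffer: number) {
  const buffer: string[] = []
  return {
    write(event: string) {
      buffer.push(event)
      if (buffer.length > maxBuffer) buffer.shift() // overflow: drop the oldest event
      if (buffer.length >= batchSize) this.flush()
    },
    flush() {
      if (buffer.length > 0) inner(buffer.splice(0)) // one call per batch, not per event
    },
  }
}
```

With `batchSize: 3`, writing four events produces two downstream calls instead of four per-event requests, and a final `flush()` delivers the remainder.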
## Canonical guide
Full reference at Drain pipeline:

- `createDrainPipeline()` API + options (batch size / interval, retry attempts / backoff, buffer overflow)
- Wrapping a single drain
- Fanout to multiple drains in one pipeline
- Lifecycle (`flush()`, `dispose()`) on shutdown
This page exists in the build-on-top section as a pointer — same content, classified by axis.
## Quick example
```ts
import { createDrainPipeline } from 'evlog/pipeline'
import { createAxiomDrain } from 'evlog/axiom'

const pipeline = createDrainPipeline<DrainContext>({
  batch: { size: 50, intervalMs: 2000 },
  retry: { maxAttempts: 3 },
})

const drain = pipeline(createAxiomDrain())
// Use `drain` wherever you'd register a drain (Nitro hook, initLogger, etc.)
```
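The `retry` option caps attempts on transient failures. Conceptually, retry with backoff works like the sketch below; `withRetry` is a made-up helper for illustration, not evlog's internals, which are configured via the `retry` option instead:

```ts
// Hypothetical helper illustrating retry-with-exponential-backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts: number,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt >= maxAttempts) throw err // attempts exhausted: surface the failure
      // exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)))
    }
  }
}
```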
## Fanout pattern
```ts
const drain = pipeline(
  createAxiomDrain(),
  createDatadogDrain(),
  createSentryDrain(),
  createFsDrain(),
)
```
All four destinations receive every event in the same batch. See Fanout & multi-drain for the full recipe.
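Conceptually, fanout amounts to delivering each batch to every drain in parallel. A minimal sketch (illustrative only; `fanout` and this `Drain` type are not evlog's public API):

```ts
// Conceptual sketch of fanout: one batch in, parallel delivery to all drains.
type Drain = (batch: string[]) => Promise<void>

function fanout(...drains: Drain[]): Drain {
  // Promise.all: a batch is "done" once every destination has accepted it
  return async (batch) => {
    await Promise.all(drains.map((drain) => drain(batch)))
  }
}
```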
## Common pitfalls
- Don't forget `drain.flush()` on shutdown — buffered events are lost otherwise
- Tune `batch.size` to match your provider's recommended payload — too small wastes overhead, too big risks rejection
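A flush-on-shutdown hook can look like this sketch. The `flush()`/`dispose()` lifecycle comes from the pipeline docs above; the `PipelineDrain` type, `shutdownDrain` helper, and signal wiring are illustrative assumptions:

```ts
// Illustrative shutdown helper. `PipelineDrain` is a simplified stand-in for
// a pipeline-wrapped drain exposing the documented lifecycle methods.
type PipelineDrain = {
  flush(): Promise<void>
  dispose(): Promise<void>
}

async function shutdownDrain(drain: PipelineDrain): Promise<void> {
  await drain.flush()   // deliver any buffered events before exiting
  await drain.dispose() // stop batch timers and release resources
}

// e.g. in a Node process:
// process.once('SIGTERM', () => { void shutdownDrain(drain).finally(() => process.exit(0)) })
```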
## Going further
- Custom drains — what you put inside the pipeline
- Fanout & multi-drain — concrete fanout recipe