Request Lifecycle
If the issue appears before any queue work exists, start with image delivery or job submission. If the issue appears after a job was accepted, jump to worker execution, playback, or cleanup.
Flow map
Synchronous HTTP
- real-time image delivery through `/img`: request validation, cache lookup, source fetch, and immediate response
Asynchronous jobs
- `POST /api/jobs`: validation and idempotency
- Redis / asynq enqueueing and worker-side execution
Playback and cleanup
- `/stream/{hash}/*`: playback reads from the media bucket
- `/api/key/{hash}`: key delivery for encrypted playback
- cleanup removes derived assets, queue work, and related metadata
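The flow map above can be summarized as a small routing helper. This is a sketch for orientation only: the path prefixes come from the routes named in this document, but the classification function itself is hypothetical, not part of the real server.

```go
package main

import (
	"fmt"
	"strings"
)

// routeClass reports which lifecycle family a request path falls into,
// mirroring the flow map: synchronous /img, asynchronous /api/jobs,
// playback under /stream and /api/key, and cleanup under /api/media.
func routeClass(path string) string {
	switch {
	case path == "/img":
		return "synchronous"
	case path == "/api/jobs":
		return "asynchronous"
	case strings.HasPrefix(path, "/stream/") || strings.HasPrefix(path, "/api/key/"):
		return "playback"
	case strings.HasPrefix(path, "/api/media/"):
		return "cleanup"
	default:
		return "unknown"
	}
}

func main() {
	for _, p := range []string{"/img", "/api/jobs", "/stream/abc/index.m3u8", "/api/media/abc"} {
		fmt.Println(p, "->", routeClass(p))
	}
}
```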
1. Real-time image flow
Key implementation details:
- validate the HMAC signature and normalized options
- check memory LRU, then the media-bucket storage cache
- on a miss, fetch the original from the source bucket
- use singleflight to suppress duplicate fetch and transform work
- transform with libvips
- write the result to memory immediately and to storage asynchronously
- return CDN-friendly cache headers and an `ETag`
This path is fully synchronous and does not require the queue.
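The duplicate-suppression step above can be sketched with a minimal singleflight-style group. This reimplements the idea of `golang.org/x/sync/singleflight` with only the standard library so the sketch stays self-contained; the cache-key format and the `fetchAndTransform` stand-in are illustrative assumptions, not the server's actual code.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// call tracks one in-flight fetch+transform for a cache key.
type call struct {
	done chan struct{}
	val  []byte
}

// flightGroup suppresses duplicate fetch/transform work for the same key,
// in the spirit of golang.org/x/sync/singleflight.
type flightGroup struct {
	mu    sync.Mutex
	calls map[string]*call
}

func (g *flightGroup) Do(key string, fn func() []byte) []byte {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		<-c.done // another caller is already doing the work; wait for it
		return c.val
	}
	c := &call{done: make(chan struct{})}
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn() // only the first caller runs the expensive work
	close(c.done)

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	return c.val
}

var transforms int64 // counts how often the "transform" step actually ran

// fetchAndTransform stands in for: source-bucket fetch + libvips transform.
func fetchAndTransform(key string) []byte {
	atomic.AddInt64(&transforms, 1)
	time.Sleep(20 * time.Millisecond) // simulate real work
	return []byte("derived:" + key)
}

func main() {
	g := &flightGroup{calls: map[string]*call{}}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // 8 identical concurrent /img requests
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.Do("img:abc:w=300", func() []byte { return fetchAndTransform("img:abc:w=300") })
		}()
	}
	wg.Wait()
	fmt.Println("transform ran", atomic.LoadInt64(&transforms), "time(s) for 8 requests")
}
```

The memory-LRU and async storage write are elided here; the point is that concurrent misses on one key collapse into one upstream fetch and transform.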
**`/img` never waits for a worker.** If a request is on the real-time image path, the queue is not involved. Debug request validation, cache behavior, source reads, and libvips execution first.
2. Job submission flow
Submission is more than enqueueing work: the server first computes a request fingerprint from:
- `type`
- `hash`
- `source`
- canonicalized `options`

This fingerprint is what gives `POST /api/jobs` its idempotency behavior. The source bucket itself is deployment-owned runtime config, not caller input.
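A fingerprint over those fields might look like the sketch below. The field names come from the list above; the canonicalization scheme (sorted `key=value` pairs joined with `&`) and the use of SHA-256 are assumptions for illustration — the real server may encode and hash differently.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// fingerprint derives an idempotency key from the job fields named above.
func fingerprint(jobType, hash, source string, options map[string]string) string {
	keys := make([]string, 0, len(options))
	for k := range options {
		keys = append(keys, k)
	}
	sort.Strings(keys) // canonical order: identical option sets hash identically
	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		pairs = append(pairs, k+"="+options[k])
	}
	sum := sha256.Sum256([]byte(jobType + "|" + hash + "|" + source + "|" + strings.Join(pairs, "&")))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := fingerprint("video:transcode", "abc123", "uploads", map[string]string{"preset": "hls", "height": "720"})
	b := fingerprint("video:transcode", "abc123", "uploads", map[string]string{"height": "720", "preset": "hls"})
	fmt.Println("stable across option order:", a == b)
}
```

Canonicalizing before hashing is what makes two requests with the same options in a different order resolve to the same job.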
For video jobs, the server also checks the configured source store before enqueueing so it can confirm existence, measure actual size, and route oversized work to `video:large` when needed.
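That pre-enqueue check reduces to a small routing decision. In this sketch, only the `video:large` queue name comes from the text; the `sourceStat` shape, the default queue name, and the size threshold are hypothetical.

```go
package main

import "fmt"

// sourceStat stands in for a source-store HEAD/stat call.
type sourceStat struct {
	Exists bool
	Size   int64
}

// routeVideoJob mirrors the pre-enqueue check described above: confirm the
// object exists, then pick a queue from its measured size.
func routeVideoJob(st sourceStat, largeThreshold int64) (queue string, err error) {
	if !st.Exists {
		return "", fmt.Errorf("source object not found")
	}
	if st.Size > largeThreshold {
		return "video:large", nil
	}
	return "video:default", nil
}

func main() {
	q, err := routeVideoJob(sourceStat{Exists: true, Size: 5 << 30}, 2<<30)
	fmt.Println(q, err)
}
```

Measuring the actual stored size (rather than trusting the caller) is what makes the oversized-work routing reliable.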
**`202 Accepted` means the request passed the server boundary.** Once the server accepts the job, the next debugging boundary is worker execution rather than request validation.
3. Worker execution flow
Worker execution falls into two categories: single-stage jobs and the `video:full` workflow.
Single-stage jobs
- `image:thumbnail`
- `video:cover`
- `video:preview`
- `video:transcode`
Shared pattern:
- dequeue a task
- mark the job as `processing`
- download or fetch the source media
- run the media toolchain
- upload artifacts to the media bucket
- persist progress and final results
- optionally send a webhook callback
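The shared pattern above can be sketched as one handler. Every type and function here (`task`, `jobStore`, the toolchain and upload stand-ins) is illustrative; the real worker presumably dequeues via asynq and talks to real storage.

```go
package main

import "fmt"

// Stub types standing in for the real queue, job store, and storage.
type task struct{ JobID, Kind string }

type jobStore struct{ status map[string]string }

func (s *jobStore) set(id, st string) { s.status[id] = st }

// handleSingleStage walks the shared pattern for one dequeued task:
// mark processing, fetch source, run the toolchain, upload, persist, notify.
func handleSingleStage(t task, store *jobStore) error {
	store.set(t.JobID, "processing")

	src, err := download(t) // source media
	if err != nil {
		store.set(t.JobID, "failed")
		return err
	}
	artifact := runToolchain(t.Kind, src) // libvips / ffmpeg stand-in
	upload(t.JobID, artifact)             // media-bucket stand-in

	store.set(t.JobID, "done") // persist final result
	notifyWebhook(t.JobID)     // optional callback
	return nil
}

func download(t task) ([]byte, error)           { return []byte("source:" + t.JobID), nil }
func runToolchain(kind string, b []byte) []byte { return append([]byte(kind+":"), b...) }
func upload(id string, b []byte)                {}
func notifyWebhook(id string)                   {}

func main() {
	store := &jobStore{status: map[string]string{}}
	_ = handleSingleStage(task{JobID: "j1", Kind: "image:thumbnail"}, store)
	fmt.Println("job j1 status:", store.status["j1"])
}
```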
video:full
`video:full` is not implemented as a parent job that spawns child jobs. Instead it runs as one workflow task:
- download the source once
- run cover and preview in parallel
- if either fails, emit `stages` and `retry_plan`
- only proceed to transcode if both succeed
- persist one aggregated result payload
This keeps the external API simple while preserving stage-level observability.
**`video:full` is one workflow, not a parent/child graph.** When you inspect logs, traces, or retry behavior, think of `video:full` as one worker-owned workflow with stage-level results, not as multiple independent public jobs.
4. Playback flow
The server does not keep local copies of segments. It maps `/stream/{hash}/...` directly to media-bucket objects.
Public players should use `/stream/{hash}` and `/api/key/{hash}`. Raw media-bucket keys remain an internal storage detail.
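The direct mapping can be sketched as a pure path-to-key function. The `hls/` object prefix is an assumption for illustration (the real bucket layout is an internal detail, per the note above); the point is that the mapping is stateless.

```go
package main

import (
	"fmt"
	"strings"
)

// bucketKeyFor maps a public /stream path onto a media-bucket object key
// with no local copy in between.
func bucketKeyFor(path string) (string, bool) {
	rest, ok := strings.CutPrefix(path, "/stream/")
	if !ok || rest == "" {
		return "", false
	}
	hash, object, ok := strings.Cut(rest, "/")
	if !ok || object == "" {
		return "", false // a hash with no object name is not playable
	}
	return "hls/" + hash + "/" + object, true
}

func main() {
	key, ok := bucketKeyFor("/stream/abc123/index.m3u8")
	fmt.Println(key, ok)
}
```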
5. Cleanup flow
`DELETE /api/media/{hash}` is shorter-lived but important for consistency:
- resolve media-bucket objects associated with the hash
- clear image-cache tracking and related metadata
- cancel active, retry, or scheduled queue tasks
- remove encryption keys and job records
This flow is intentionally best-effort and idempotent: upstream retention or compensation flows can retry it without treating every repeat call as an error.