Getting Started
Prerequisites
- Go 1.26+
- Docker and Docker Compose
- curl
- S3-compatible storage for source and derived assets such as RustFS, R2, or S3
If you use the repository Docker image directly, the runtime already contains ffmpeg, vips, and packager. If you run Vylux on the host with go run, install FFmpeg, libvips, pkg-config, and Shaka Packager locally. On macOS with Homebrew, brew install vips pkg-config provides the libvips toolchain that the Go image pipeline links against.
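If you plan to run on the host, a quick shell check can confirm the toolchain before the first go run. Treating packager as the Shaka Packager binary name is an assumption; adjust it if your install exposes a different name.

```shell
# Sanity-check the host toolchain needed for running Vylux outside Docker.
for tool in ffmpeg vips pkg-config packager; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool -> $(command -v "$tool")"
  else
    echo "missing: $tool"
  fi
done
# Confirm pkg-config can resolve libvips, which the cgo build links against.
pkg-config --modversion vips 2>/dev/null || echo "pkg-config cannot resolve vips"
```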
Recommended local development flow
1. Start the infrastructure services
docker compose -f docker-compose.dev.yml up -d
This starts:
- PostgreSQL on localhost:5434
- Redis on localhost:6381
- RustFS S3 API on localhost:9002
- RustFS Console on localhost:9003
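To confirm the services actually came up, you can probe the four ports above. This is a sketch that assumes nc (netcat) is available; docker compose ps works just as well.

```shell
# Probe the dev-service ports from docker-compose.dev.yml.
for port in 5434 6381 9002 9003; do
  if nc -z localhost "$port" 2>/dev/null; then
    echo "up: $port"
  else
    echo "down: $port"
  fi
done
```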
2. Prepare environment variables
cp .env.example .env.local
For the complete environment-variable reference, validation rules, and secret guidance, see Configuration.
If you use docker-compose.dev.yml, at minimum point the following settings to localhost:
DATABASE_URL=postgres://myuser:mypassword@localhost:5434/mydb
REDIS_URL=redis://localhost:6381
SOURCE_S3_ENDPOINT=http://localhost:9002
SOURCE_S3_REGION=auto
SOURCE_BUCKET=app-bucket
MEDIA_S3_ENDPOINT=http://localhost:9002
MEDIA_S3_REGION=auto
MEDIA_BUCKET=media-bucket
BASE_URL=http://localhost:3000
The most important required settings are:
- DATABASE_URL
- REDIS_URL
- SOURCE_S3_ENDPOINT
- SOURCE_S3_ACCESS_KEY
- SOURCE_S3_SECRET_KEY
- SOURCE_BUCKET
- MEDIA_S3_ENDPOINT
- MEDIA_S3_ACCESS_KEY
- MEDIA_S3_SECRET_KEY
- MEDIA_BUCKET
- HMAC_SECRET
- WEBHOOK_SECRET
- API_KEY
- KEY_TOKEN_SECRET
- ENCRYPTION_KEY
- FFMPEG_PATH
- SHAKA_PACKAGER_PATH
Generate the common secrets with openssl:
cat >> .env.local <<EOF
HMAC_SECRET=$(openssl rand -hex 32)
API_KEY=$(openssl rand -hex 32)
WEBHOOK_SECRET=$(openssl rand -hex 32)
KEY_TOKEN_SECRET=$(openssl rand -hex 16)
ENCRYPTION_KEY=$(openssl rand -hex 32)
EOF
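Before starting Vylux, a pre-flight pass over the required settings can save a restart cycle. This is not a Vylux feature, just a plain-sh sketch that assumes .env.local uses simple KEY=value lines.

```shell
# Source .env.local if present, then flag any empty required setting.
[ -f .env.local ] && { set -a; . ./.env.local; set +a; }
missing=0
for var in DATABASE_URL REDIS_URL SOURCE_S3_ENDPOINT SOURCE_S3_ACCESS_KEY \
  SOURCE_S3_SECRET_KEY SOURCE_BUCKET MEDIA_S3_ENDPOINT MEDIA_S3_ACCESS_KEY \
  MEDIA_S3_SECRET_KEY MEDIA_BUCKET HMAC_SECRET WEBHOOK_SECRET API_KEY \
  KEY_TOKEN_SECRET ENCRYPTION_KEY FFMPEG_PATH SHAKA_PACKAGER_PATH; do
  eval "val=\${$var:-}"
  if [ -z "$val" ]; then
    echo "missing: $var"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then echo "all required settings present"; fi
```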
3. Create the storage buckets and upload sample source objects
Vylux does not create SOURCE_BUCKET or MEDIA_BUCKET for you. At minimum you need:
- a source bucket for original objects, configured by SOURCE_BUCKET and SOURCE_S3_*
- a media bucket for image cache entries, thumbnails, previews, covers, and HLS output, configured by MEDIA_BUCKET and MEDIA_S3_*
If both roles use the same local RustFS instance or the same S3 provider, still set both storage groups explicitly. Vylux does not infer media settings from source settings.
For S3-compatible storage, Vylux writes derived objects with CRC32C upload checksums. That works with AWS S3, Cloudflare R2, and RustFS in the supported setup here; validate checksum-header support first if you swap in another S3-compatible service.
Upload at least one test asset such as:
- image: uploads/sample.jpg
- video: uploads/sample.mp4
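Any S3-compatible client can do this step. As one sketch, assuming the AWS CLI and the RustFS endpoint and bucket names used throughout this guide (swap in the credentials and names from your .env.local):

```shell
# Create both buckets against the local RustFS S3 API, then seed sample assets.
export AWS_ACCESS_KEY_ID='replace-with-source-access-key'
export AWS_SECRET_ACCESS_KEY='replace-with-source-secret-key'
if command -v aws >/dev/null 2>&1; then
  aws --endpoint-url http://localhost:9002 s3 mb s3://app-bucket
  aws --endpoint-url http://localhost:9002 s3 mb s3://media-bucket
  aws --endpoint-url http://localhost:9002 s3 cp ./sample.jpg s3://app-bucket/uploads/sample.jpg
  aws --endpoint-url http://localhost:9002 s3 cp ./sample.mp4 s3://app-bucket/uploads/sample.mp4
else
  echo "aws CLI not found; create the buckets with any S3-compatible client" >&2
fi
```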
4. Start Vylux
go run ./cmd/vylux
Or split roles into two processes:
go run ./cmd/vylux --mode=server
go run ./cmd/vylux --mode=worker
If startup fails with a linker error such as library 'vips' not found, the most common causes are:
- libvips is not installed on the host
- pkg-config cannot resolve the current libvips installation
- Go's build cache still contains stale cgo linker flags from an older Homebrew Cellar path
On macOS with Homebrew, the fastest recovery path is usually:
brew install vips pkg-config
go clean -cache
go run ./cmd/vylux
If brew install reports that both packages are already installed, rerun go clean -cache anyway after a Homebrew upgrade. That forces cgo to rebuild with the current pkg-config output instead of reusing stale linker paths.
5. Validate service health
Check liveness, readiness, and metrics first:
curl -i http://localhost:3000/healthz
curl -i http://localhost:3000/readyz
curl -s http://localhost:3000/metrics | rg '^vylux_'
If you also run worker-only mode:
curl -i http://localhost:3001/healthz
curl -s http://localhost:3001/metrics | rg '^vylux_'
Minimal validation order
The smallest useful API validation flow is:
- Create a preview job
BASE_URL='http://localhost:3000'
API_KEY='replace-with-api-key'
curl -s \
-X POST "$BASE_URL/api/jobs" \
-H 'Content-Type: application/json' \
-H "X-API-Key: $API_KEY" \
-d '{
"type": "video:preview",
"hash": "quickstart-preview-sample",
"source": "uploads/sample.mp4",
"options": {
"start_sec": 1,
"duration": 3,
"width": 480,
"fps": 12,
"format": "webp"
}
}'
- Poll the job until it becomes completed or failed
curl -s \
-H "X-API-Key: $API_KEY" \
http://localhost:3000/api/jobs/<job-id>
- Validate the derived asset from results.key or results.streaming.master_playlist
At this point, do not stop at the storage key alone:
- if the job returned an image-like media-bucket key such as a cover, preview, or thumbnail, convert it into a signed /thumb/{sig}/{encoded_key} URL before exposing it to a browser
- if the job returned streaming results, use /stream/{hash}/master.m3u8 as the public playback entrypoint rather than the raw master_playlist storage key
- if encrypted playback is enabled, mint a Bearer token for /api/key/{hash} and attach it only on key requests
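The poll step in the flow above can be scripted as a loop. This sketch assumes jq is installed and that the job JSON exposes a top-level status field; confirm the exact response shape against the Jobs API reference.

```shell
# Poll a job until it reaches a terminal state, up to roughly a minute.
BASE_URL='http://localhost:3000'
API_KEY='replace-with-api-key'
JOB_ID='replace-with-job-id'
for _ in $(seq 1 30); do
  STATUS=$(curl -s -H "X-API-Key: $API_KEY" "$BASE_URL/api/jobs/$JOB_ID" | jq -r '.status // empty')
  if [ -z "$STATUS" ]; then
    echo "no status in response; check the job id and API key"
    break
  fi
  echo "job $JOB_ID: $STATUS"
  case "$STATUS" in
    completed|failed) break ;;
  esac
  sleep 2
done
```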
For the full mapping from job results to public URLs, see Integration Guide.
Before release, cover at least these three smoke-test groups:
- video:preview with gif
- video:preview with webp
- video:transcode with encrypt=true
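The two video:preview groups can be enqueued in one loop. This sketch reuses the preview options shown earlier; the video:transcode encrypt=true case is left out because its option names are not documented in this guide.

```shell
# Enqueue the gif and webp preview smoke jobs against the local API.
BASE_URL='http://localhost:3000'
API_KEY='replace-with-api-key'
for fmt in gif webp; do
  curl -s -X POST "$BASE_URL/api/jobs" \
    -H 'Content-Type: application/json' \
    -H "X-API-Key: $API_KEY" \
    -d "{\"type\": \"video:preview\",
         \"hash\": \"smoke-preview-$fmt\",
         \"source\": \"uploads/sample.mp4\",
         \"options\": {\"start_sec\": 1, \"duration\": 3, \"width\": 480, \"fps\": 12, \"format\": \"$fmt\"}}"
  echo " enqueued video:preview with format=$fmt"
done
```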
What you should observe after startup
- GET /healthz returns 200
- GET /readyz confirms PostgreSQL, Redis, and buckets are ready
- GET /metrics exposes Prometheus metrics
- POST /api/jobs can enqueue asynchronous media work
Next, most teams continue with Integration Guide, Configuration, Jobs API, and Observability.