Log Drains automatically forward your application logs to an external HTTPS endpoint in real time. Use them to send logs to Datadog, Splunk, Elasticsearch, or any service that accepts HTTPS webhooks.

Setting Up a Log Drain

  1. Go to Dashboard > Log Drains
  2. Click Create Log Drain
  3. Configure:
    • Name — A label for this drain (e.g., “Production Datadog”)
    • Endpoint URL — The HTTPS URL to send logs to (must be HTTPS)
    • Secret Token — Used to sign payloads so you can verify they came from fal (minimum 64 characters)
    • Sampling Rate — Maximum logs per delivery batch (1-5000, default: 1000)
  4. Click Test to verify connectivity before saving
Log Drains require the Admin role in your team. Only one log drain can be active per account.

Log Format

Logs are delivered as NDJSON (newline-delimited JSON) via HTTP POST. Each line is a JSON object:
{"timestamp": "2026-02-17T10:30:00.123Z", "message": "Model loaded successfully", "level": "info", "fal_app_name": "my-model", "fal_app_id": "abc123", "fal_request_id": "req-456", "fal_job_id": "job-789", "fal_endpoint": "/", "fal_node_id": "node-1", "fal_worker_id": "worker-42"}
{"timestamp": "2026-02-17T10:30:01.456Z", "message": "Inference completed in 1.2s", "level": "info", "fal_app_name": "my-model", "fal_app_id": "abc123", "fal_request_id": "req-456", "fal_job_id": "job-789", "fal_endpoint": "/", "fal_node_id": "node-1", "fal_worker_id": "worker-42"}
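Because each line is an independent JSON object, a receiver can parse a batch by splitting on newlines. A minimal sketch (the sample body below is illustrative and trimmed to a few fields):

```python
import json

def parse_ndjson(body: bytes) -> list[dict]:
    """Parse an NDJSON payload into a list of log records."""
    return [
        json.loads(line)
        for line in body.decode("utf-8").splitlines()
        if line.strip()  # tolerate a trailing blank line
    ]

batch = (
    b'{"timestamp": "2026-02-17T10:30:00.123Z", "message": "Model loaded successfully", "level": "info"}\n'
    b'{"timestamp": "2026-02-17T10:30:01.456Z", "message": "Inference completed in 1.2s", "level": "info"}\n'
)
records = parse_ndjson(batch)
print(len(records))        # 2
print(records[0]["level"]) # info
```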

Fields

Every log line contains the core fields plus any available context labels. Fields beyond timestamp, message, and level are included only when the platform has a value for them.
Field               Description
timestamp           ISO 8601 timestamp
message             The log message
level               Log level (info, warning, error, etc.)
fal_app_name        Your application name
fal_app_id          Application identifier
fal_request_id      The request that generated this log
fal_job_id          The fal-internal job identifier for the runner
fal_endpoint        The endpoint path that handled the request (e.g., /)
fal_node_id         The unique ID of the infrastructure node the runner is on
fal_worker_id       The scheduler-level allocation ID for the runner
fal_source          Log source (e.g., run, gateway)
fal_isolate_source  Runtime-level log source
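Since every line produced while serving a given request carries the same fal_request_id, downstream tooling can reassemble a request's full log stream. A small sketch (the record contents here are made up for illustration):

```python
from collections import defaultdict

def group_by_request(records: list[dict]) -> dict:
    """Group log records by fal_request_id; records without one land under None."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record.get("fal_request_id")].append(record)
    return dict(grouped)

records = [
    {"message": "Model loaded", "fal_request_id": "req-456"},
    {"message": "Inference completed", "fal_request_id": "req-456"},
    {"message": "gateway trace", "fal_request_id": "req-789"},
]
grouped = group_by_request(records)
print(sorted(grouped))  # ['req-456', 'req-789']
```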

Verifying Signatures

Every delivery includes an X-Fal-Signature header containing an HMAC-SHA256 signature of the request body, signed with your secret token. Use this to verify that deliveries are genuinely from fal.
import hmac
import hashlib

def verify_signature(body: bytes, signature: str, secret: str) -> bool:
    # Compute the expected hex HMAC-SHA256 of the raw request body.
    expected = hmac.new(
        secret.encode(), body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(signature, expected)
The Content-Type header is application/x-ndjson.
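To see the scheme end to end, you can simulate a delivery locally: sign a sample body with a secret the same way fal does, then check it with the verification function above (the body and secret here are illustrative):

```python
import hmac
import hashlib

def verify_signature(body: bytes, signature: str, secret: str) -> bool:
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

secret = "x" * 64  # secret tokens must be at least 64 characters
body = b'{"timestamp": "2026-02-17T10:30:00.123Z", "message": "Model loaded", "level": "info"}\n'

# What arrives in the X-Fal-Signature header: hex HMAC-SHA256 of the raw body.
signature = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

print(verify_signature(body, signature, secret))         # True
print(verify_signature(body + b" ", signature, secret))  # False: body was tampered with
```

Verify against the raw request bytes, before any JSON parsing or re-serialization, since even a one-byte change produces a different signature.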

Delivery Behavior

  • Logs are delivered in batches on a regular schedule
  • New drains start delivering logs from the last 30 seconds
  • The sampling rate controls the maximum number of log lines per batch
  • Deliveries time out after 10 seconds

Failure Handling

If your endpoint returns an error or is unreachable:
  • fal tracks consecutive failures
  • After 5 consecutive failures, the drain is automatically disabled
  • You can re-enable it from the dashboard after fixing the endpoint
  • The dashboard shows the failure count and last successful delivery time

Managing Log Drains

Manage your drains from the Dashboard. You can enable/disable, test connectivity, update the endpoint URL or sampling rate, or delete a drain entirely.

External Services

Log drains work with any service that accepts HTTPS webhooks with NDJSON payloads. Point the drain at your service’s HTTP log intake URL and include any required API keys in the URL path or query parameters as needed by your provider.

Observability Overview

See all monitoring interfaces: dashboard, CLI, and integrations