Quickstart

klite is a Kafka-compatible broker in a single Go binary. This guide gets you from zero to producing and consuming messages in under a minute.

Terminal window
docker run -p 9092:9092 ghcr.io/klaudworks/klite

See the configuration guide for all available options.

klite is now listening on localhost:9092. You should see:

INFO klite started listen=:9092 cluster_id=abc123 node_id=0

Use any Kafka client. klite speaks the standard Kafka wire protocol.

Terminal window
# Produce a message
echo "hello klite" | kcat -P -b localhost:9092 -t my-topic
# Consume all messages
kcat -C -b localhost:9092 -t my-topic -e
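kcat is just one option; because klite speaks the standard wire protocol, application code works the same way. As a sketch, here is a producer and consumer using the third-party segmentio/kafka-go library (the library choice and message contents here are our own, not part of klite):

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Produce one message to my-topic on the local klite broker.
	w := &kafka.Writer{
		Addr:  kafka.TCP("localhost:9092"),
		Topic: "my-topic",
	}
	defer w.Close()

	if err := w.WriteMessages(context.Background(),
		kafka.Message{Value: []byte("hello from Go")},
	); err != nil {
		log.Fatal(err)
	}

	// Read it back from partition 0, starting at the stored offset.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers:   []string{"localhost:9092"},
		Topic:     "my-topic",
		Partition: 0,
	})
	defer r.Close()

	m, err := r.ReadMessage(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read: %s", m.Value)
}
```

Any other Kafka client library (Java, Python, librdkafka bindings, etc.) should work the same way against `localhost:9092`.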

By default klite stores data in a local WAL. Adding an S3-compatible backend gives you durable, cost-efficient storage. This example uses SeaweedFS as a local S3 server, but any S3-compatible store (AWS S3, MinIO, R2, etc.) works.

Terminal window
# Create an S3 credentials config
cat > /tmp/s3-config.json <<'EOF'
{
  "identities": [
    {
      "name": "klite",
      "credentials": [
        { "accessKey": "klite", "secretKey": "kliteklite" }
      ],
      "actions": ["Admin", "Read", "Write", "List", "Tagging"]
    }
  ]
}
EOF
# Start SeaweedFS with S3 API on port 8333
docker run -d --name seaweedfs -p 8333:8333 \
  -v /tmp/s3-config.json:/etc/s3-config.json \
  chrislusf/seaweedfs server -s3 -s3.config=/etc/s3-config.json
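Depending on your S3 server, the bucket may need to exist before klite can write to it. One way to create it is with the AWS CLI, pointed at the local SeaweedFS endpoint and the credentials from the config above (the CLI choice is ours; any S3 client works):

```shell
# Create the bucket klite will use (skip if your store creates buckets on first write)
AWS_ACCESS_KEY_ID=klite AWS_SECRET_ACCESS_KEY=kliteklite \
  aws --endpoint-url http://localhost:8333 s3 mb s3://klite-data
```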

Terminal window
# Start klite pointed at the S3 backend
docker run --network host \
  -e AWS_ACCESS_KEY_ID=klite \
  -e AWS_SECRET_ACCESS_KEY=kliteklite \
  ghcr.io/klaudworks/klite \
  --s3-bucket klite-data \
  --s3-region us-east-1 \
  --s3-endpoint http://localhost:8333

You should see:

INFO S3 storage initialized bucket=klite-data

Producing and consuming work exactly the same as before. klite writes to the local WAL first, then periodically flushes data to SeaweedFS.
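To confirm the flush is happening, you can list the objects klite has written to the bucket (again assuming the AWS CLI; the object layout you see is whatever klite chooses internally):

```shell
# List everything klite has flushed to the bucket so far
AWS_ACCESS_KEY_ID=klite AWS_SECRET_ACCESS_KEY=kliteklite \
  aws --endpoint-url http://localhost:8333 s3 ls s3://klite-data --recursive
```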