# Quickstart
klite is a Kafka-compatible broker in a single Go binary. This guide gets you from zero to producing and consuming messages in under a minute.
## Install and start

Run with Docker:

```sh
docker run -p 9092:9092 ghcr.io/klaudworks/klite
```

See the configuration guide for all available options.

Or download a prebuilt binary:

```sh
# macOS (Apple Silicon)
curl -L https://github.com/klaudworks/klite/releases/latest/download/klite-darwin-arm64 -o klite

# macOS (Intel)
curl -L https://github.com/klaudworks/klite/releases/latest/download/klite-darwin-amd64 -o klite

# Linux (amd64)
curl -L https://github.com/klaudworks/klite/releases/latest/download/klite-linux-amd64 -o klite

# Linux (arm64)
curl -L https://github.com/klaudworks/klite/releases/latest/download/klite-linux-arm64 -o klite

chmod +x klite
./klite
```

Or install with Go:

```sh
go install github.com/klaudworks/klite/cmd/klite@latest
klite
```

klite is now listening on localhost:9092. You should see:
```
INFO klite started listen=:9092 cluster_id=abc123 node_id=0
```

## Produce and consume
Use any Kafka client. klite speaks the standard Kafka wire protocol.
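Before wiring up a client, you can sanity-check that the broker is reachable. This is a quick sketch assuming kcat is installed; `-L` lists broker and topic metadata:

```sh
# List broker metadata to verify klite is answering on 9092
kcat -L -b localhost:9092
```

If the broker is up, this prints the broker list and any existing topics.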
With kcat:

```sh
# Produce a message
echo "hello klite" | kcat -P -b localhost:9092 -t my-topic

# Consume all messages
kcat -C -b localhost:9092 -t my-topic -e
```

Go (franz-go):

```go
package main

import (
	"context"
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	// Producer
	client, _ := kgo.NewClient(kgo.SeedBrokers("localhost:9092"))
	defer client.Close()

	ctx := context.Background()
	client.Produce(ctx, &kgo.Record{
		Topic: "my-topic",
		Value: []byte("hello from Go"),
	}, func(r *kgo.Record, err error) {
		if err != nil {
			panic(err)
		}
		fmt.Printf("produced to partition %d offset %d\n", r.Partition, r.Offset)
	})
	client.Flush(ctx)

	// Consumer
	consumer, _ := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumeTopics("my-topic"),
	)
	defer consumer.Close()

	fetches := consumer.PollFetches(ctx)
	fetches.EachRecord(func(r *kgo.Record) {
		fmt.Printf("consumed: %s\n", string(r.Value))
	})
}
```

Python (confluent-kafka):

```python
from confluent_kafka import Producer, Consumer

# Produce
p = Producer({'bootstrap.servers': 'localhost:9092'})
p.produce('my-topic', value='hello from Python')
p.flush()

# Consume
c = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'my-group',
    'auto.offset.reset': 'earliest',
})
c.subscribe(['my-topic'])

msg = c.poll(5.0)
if msg:
    print(f"consumed: {msg.value().decode()}")
c.close()
```

Node.js (KafkaJS):

```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ brokers: ['localhost:9092'] });

// Produce
const producer = kafka.producer();
await producer.connect();
await producer.send({
  topic: 'my-topic',
  messages: [{ value: 'hello from Node.js' }],
});
await producer.disconnect();

// Consume
const consumer = kafka.consumer({ groupId: 'my-group' });
await consumer.connect();
await consumer.subscribe({ topic: 'my-topic', fromBeginning: true });
await consumer.run({
  eachMessage: async ({ message }) => {
    console.log(`consumed: ${message.value.toString()}`);
  },
});
```

## Add S3 storage
By default, klite stores data in a local WAL. Adding an S3-compatible backend gives you durable, cost-efficient storage. This example uses SeaweedFS as a local S3 server, but any S3-compatible store (AWS S3, MinIO, R2, etc.) works.
### 1. Start SeaweedFS

```sh
# Create an S3 credentials config
cat > /tmp/s3-config.json <<'EOF'
{"identities":[{"name":"klite","credentials":[{"accessKey":"klite","secretKey":"kliteklite"}],"actions":["Admin","Read","Write","List","Tagging"]}]}
EOF

# Start SeaweedFS with S3 API on port 8333
docker run -d --name seaweedfs -p 8333:8333 \
  -v /tmp/s3-config.json:/etc/s3-config.json \
  chrislusf/seaweedfs server -s3 -s3.config=/etc/s3-config.json
```

### 2. Start klite with S3
```sh
docker run --network host \
  -e AWS_ACCESS_KEY_ID=klite \
  -e AWS_SECRET_ACCESS_KEY=kliteklite \
  ghcr.io/klaudworks/klite \
  --s3-bucket klite-data \
  --s3-region us-east-1 \
  --s3-endpoint http://localhost:8333
```

You should see:
```
INFO S3 storage initialized bucket=klite-data
```

Produce and consume work exactly the same as before. klite writes to the local WAL first, then periodically flushes data to SeaweedFS.
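To confirm that flushed data is actually landing in object storage, you can list the bucket with any S3 client. This is a sketch using the AWS CLI pointed at the local SeaweedFS endpoint; the object layout klite uses inside the bucket is an internal detail and may differ:

```sh
# List objects klite has flushed to the klite-data bucket
AWS_ACCESS_KEY_ID=klite AWS_SECRET_ACCESS_KEY=kliteklite \
  aws s3 ls s3://klite-data --recursive --endpoint-url http://localhost:8333
```

Objects appear only after the first flush, so produce a few messages and wait briefly before expecting output.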