Connectors

Connect without a JVM, ever.

The streaming industry runs Kafka Connect on a JVM. We don’t. Every connector is natively compiled Rust that ships alongside the Iggy runtime: no JVM heap, no GC pauses, no external runtimes. Activate from the Console, map stream payloads, and deploy in seconds.

8

sink connectors

4

source connectors

Rust

native · no JVM

0

external runtimes

Architecture

Source. Stream. Sink. That's it.

Connectors run as a separate process managed by Warden on the same node, connecting to Iggy locally over TCP. Connector traffic stays on your infrastructure; it never transits an external network.

[Architecture diagram] External Source (PostgreSQL · InfluxDB · Elasticsearch · Random for dev) → Iggy + Connectors Runtime (Rust native · same node: Streams → Topics → Partitions) → External Sink (PostgreSQL · Iceberg · Elasticsearch · HTTP · Stdout)

Source connectors

Pull data from external systems and produce messages into Iggy streams: PostgreSQL, Elasticsearch, InfluxDB, and Random (for testing).

Sink connectors

Consume messages from Iggy streams and push them to external destinations: databases, search engines, analytics tables, and HTTP endpoints.

Multiple instances

Activate multiple instances of the same connector: for example, two PostgreSQL sinks writing to different databases simultaneously.
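As an illustrative sketch of what running two instances of the same sink could look like, the fragment below declares two PostgreSQL sinks targeting different databases. All key names, values, and the overall schema are hypothetical, for illustration only; they are not the actual Iggy connector configuration format.

```toml
# Hypothetical configuration sketch: keys and structure are illustrative,
# not the real Iggy connector schema.

[sinks.postgres_orders]
enabled = true
# First instance: consume the "orders" topic and write to the orders database.
streams = [{ stream = "events", topics = ["orders"] }]

[sinks.postgres_orders.config]
connection_string = "postgres://user:pass@orders-db:5432/orders"
table = "orders"

[sinks.postgres_audit]
enabled = true
# Second instance of the same connector, writing the same topic
# to a separate audit database.
streams = [{ stream = "events", topics = ["orders"] }]

[sinks.postgres_audit.config]
connection_string = "postgres://user:pass@audit-db:5432/audit"
table = "order_audit"
```

In the Console this corresponds to activating the PostgreSQL sink twice with two instance names and two stream mappings; no file editing is required.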

Sink Connectors

Send data anywhere.

Eight production-ready sink plugins for routing Iggy messages to databases, search engines, analytics platforms, and HTTP endpoints.

PG

PostgreSQL

Relational DB

Push messages to PostgreSQL databases. Map stream payloads to table columns with configurable schema transforms.

ES

Elasticsearch

Search

Index messages into Elasticsearch clusters. Real-time search and analytics over your streaming data.

ICE

Apache Iceberg

Analytics

Write messages to Apache Iceberg tables for analytics. First-class lakehouse integration with schema evolution.

QW

Quickwit

Search

Index messages into the Quickwit search engine. Sub-second full-text search over high-volume streams.

MDB

MongoDB

Document DB

Push messages to MongoDB collections. Flexible document mapping for event-driven architectures.

IDB

InfluxDB

Time Series

Write messages to InfluxDB time-series databases. Native support for metrics and time-series workloads.

HT

HTTP

Webhook

Send messages to any HTTP endpoint: webhooks, REST APIs, or custom ingestion pipelines. No code required.

OUT

Stdout

Dev / Debug

Output messages to standard output. Ideal for debugging connector pipelines and development workflows.

Source Connectors

Ingest from anywhere.

Four source plugins: three that pull from external systems into Iggy streams, plus a Random source for load testing and development.

PG

PostgreSQL

Relational DB

Ingest data from PostgreSQL databases into Iggy streams. CDC-style ingestion for change-driven architectures.

ES

Elasticsearch

Search

Ingest data from Elasticsearch into Iggy streams. Replay stored documents as streaming events.

IDB

InfluxDB

Time Series

Ingest data from InfluxDB time-series databases into Iggy streams. Time-series replay and real-time forwarding.

RND

Random

Testing

Generate random test messages for development and load testing. Spin up realistic workloads without external dependencies.

More connectors coming. The connector catalog is continuously expanded by the Apache Iggy community, and LaserData-managed premium connectors are on the roadmap. See the full catalog →

Lifecycle

Activate from the Console. No code.

Browse the connector catalog in your deployment's Console tab, click Activate, configure stream mappings, and the platform handles the rest: provisioning on all nodes, monitoring, and lifecycle management.

1. Open the Connectors tab in the Console
2. Browse the catalog and click Activate
3. Set an instance name and configure stream mappings
4. The platform provisions the connector on all nodes · status transitions to Active

Instance states

Pending

Instance created, waiting for nodes to activate.

Active

Running and processing messages.

Inactive

Disabled, with configuration preserved for re-enabling.

Failed

Encountered errors. Check logs and retry.

Built-in connector monitoring

Per-instance metrics are available in the Console's Metrics tab and via the API: messages produced, consumed, and processed, plus error count and CPU and memory usage.

messages_produced · messages_consumed · messages_processed · errors · CPU / Memory
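Pulled over the API, a per-instance metrics payload might look roughly like the shape below. This is a hypothetical sketch assembled from the metric names above; the actual endpoint and response format are not documented here.

```json
{
  "instance": "postgres-sink-1",
  "status": "active",
  "messages_produced": 0,
  "messages_consumed": 128934,
  "messages_processed": 128930,
  "errors": 4,
  "cpu_percent": 2.1,
  "memory_bytes": 18874368
}
```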

Connect your data stack today.

Join the LaserData preview and activate your first connector in minutes: no code, no external runtimes, no infrastructure to manage.