Franco Geraci is the Head of Engineering at Bitrock, where, together with his team, he maintains and evolves Waterstream – while also helping companies escape cloud lock-in. When he’s not freeing clients from hostage architectures, he’s probably explaining why, no, blockchain won’t solve this problem either.
The Realization
It was one of those days every software architect knows well. The client sitting across from me had just finished describing their IoT infrastructure: 50,000 industrial sensors, millions of messages per second, and a latency requirement of under 100 ms. “Obviously, we’ll use Kafka,” he concluded, as if it were the only option. I nodded, but something didn’t feel right. Not for the first time, I was facing what I call the “Kafka Default Syndrome” – the automatic assumption that Kafka is the solution for any streaming problem.
That evening, heading home, I did the math. To manage those 50,000 devices with Kafka, we would have needed:
- A Kafka cluster with at least 5 brokers for redundancy
- ZooKeeper or KRaft (which adds its own complexity)
- A custom MQTT-to-Kafka translation layer
- A dedicated team just to keep it all running
- A cloud budget that would make the CFO cry
But the real problem wasn’t the cost. It was the pointless complexity we were about to sell to the client.
The Problem Everyone Knows But No One Admits
Let me say this clearly: Kafka was not designed for IoT. It was designed for data pipelines between enterprise systems. It’s like using a Formula 1 car to go grocery shopping – technically possible, but practically absurd.
IoT devices speak MQTT; it’s their native language.
MQTT is lightweight and efficient, built for unstable connections and resource-constrained devices.
Kafka speaks… Kafka. It’s powerful but heavy, designed for servers with GBs of RAM, not for sensors with KBs of memory.
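To make “lightweight” tangible, here is roughly what the device side looks like. A minimal sketch, assuming the common paho-mqtt client library; the hostname, topic, and payload are hypothetical:

```python
# Minimal sketch of a sensor reading published over MQTT.
# Assumes paho-mqtt (pip install paho-mqtt); hostname/topic are hypothetical.
import json
import paho.mqtt.publish as publish

reading = {"sensor_id": "press-042", "temp_c": 71.3}

# One short-lived connection, a tiny fixed header, QoS 1 for
# at-least-once delivery over a flaky link. That's the whole footprint.
publish.single(
    topic="plant/line-3/press-042/telemetry",
    payload=json.dumps(reading),
    qos=1,
    hostname="broker.example.com",
    port=1883,
)
```

The MQTT fixed header is as small as two bytes, which is exactly why the protocol fits on devices with KBs of memory.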
At Bitrock, we’ve implemented dozens of these bridges over the years, so we know exactly what it entails. The process looks like this:
- Devices send MQTT messages to an MQTT broker.
- A custom component reads from MQTT and writes to Kafka (sketched in code below).
- Applications consume data from Kafka.
- Another component reads from Kafka and responds via MQTT.
Every step adds latency, every component adds a point of failure, and every translation loses something in the process. And we kept selling this complexity as a “best practice”.
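To be concrete about what that “custom component” in step 2 actually is, here is a deliberately simplified sketch, assuming paho-mqtt and kafka-python, with hypothetical broker addresses and topics. A production version would also need reconnection logic, back-pressure, ordering guarantees, and monitoring:

```python
# Simplified MQTT-to-Kafka bridge (step 2 above). Hosts/topics hypothetical.
# Assumes paho-mqtt and kafka-python (pip install paho-mqtt kafka-python).
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="kafka.example.com:9092")

def on_message(client, userdata, msg):
    # Map the MQTT topic hierarchy onto a flat Kafka topic name.
    kafka_topic = msg.topic.replace("/", ".")
    producer.send(kafka_topic, value=msg.payload)

bridge = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
bridge.on_message = on_message
bridge.connect("mqtt.example.com", 1883)
bridge.subscribe("plant/#", qos=1)
bridge.loop_forever()  # every message now crosses two brokers and this process
```

And this only covers step 2; the response path in step 4 is a second process with the inverse logic.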
The Cloud Lock-in Trap (August 2025 Edition)
In 2025, every cloud provider has its own “solution”.
- AWS IoT Core + MSK: They promise seamless integration, but the reality is you pay for GBs of data ingested into IoT Core, then for streaming to MSK, and then for processing. One client saw their bill jump from $5K to $45K per month just by tripling their sensors. And trying to migrate? Good luck with their proprietary APIs.
- Azure IoT Hub + Event Hubs: Microsoft sells “synergy” with the rest of their Azure stack. However, their proprietary SDKs, custom message formats, and the inability to cleanly export data hold you hostage. One of our clients took eight months to migrate away.
- Google Cloud IoT Core + Pub/Sub: Oh, wait, they discontinued IoT Core in 2023. If you built on that, congratulations. Now they tell you to use Pub/Sub directly, but guess what? It doesn’t speak MQTT natively; you need a bridge. We’re back to square one.
- Oracle Cloud IoT + Streaming: I won’t even comment. If you end up there, you have bigger problems than MQTT vs. Kafka.
The pattern is always the same: they lure you in with entry-level pricing, and when you have 100K devices in production, costs explode and migration becomes impossible. It’s lock-in by design.
The Discovery That Changed Everything
It was during the due diligence for the acquisition of Waterstream by the Fortitude Group that I truly understood what they had built. It wasn’t just “another MQTT broker.” It was the solution to the problem everyone pretended not to see.
Waterstream is a broker that speaks both MQTT and Kafka natively. It doesn’t translate or use a bridge. It speaks both protocols as a native language. Devices connect via MQTT, applications consume via the Kafka API, and in between… there’s nothing.
Literally nothing to manage, debug, or break at 3 a.m.
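Here is what “nothing in between” looks like from both ends. A minimal sketch with hypothetical hostnames; the Kafka topic that MQTT traffic maps to is configurable in Waterstream, so the name below is purely illustrative:

```python
# Both ends of a Waterstream deployment, no bridge code in sight.
# Hostnames and the Kafka topic name are hypothetical/illustrative.
import paho.mqtt.publish as publish
from kafka import KafkaConsumer

# Device side: plain MQTT, no SDK, no proprietary client.
publish.single(
    topic="plant/line-3/press-042/telemetry",
    payload=b'{"temp_c": 71.3}',
    qos=1,
    hostname="waterstream.example.com",
    port=1883,
)

# Application side: a stock Kafka consumer reading the topic the
# broker maps MQTT traffic onto ("mqtt_messages" is illustrative).
consumer = KafkaConsumer(
    "mqtt_messages",
    bootstrap_servers="kafka.example.com:9092",
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.topic, record.value)
```

The device code is identical to what it would send to any MQTT broker, and the application code is a stock Kafka consumer. Neither side needs to know the other exists.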
From Users to Maintainers: The Turning Point
When Waterstream joined the Fortitude Group, we didn’t just acquire a product. The original team passed the torch entirely to us at Bitrock. Today, we maintain, evolve, and support Waterstream directly.
This means that when a client has a problem at 2 a.m., they don’t have to open a ticket and pray. They call us. When a specific use case requires a new feature, they don’t have to convince a product manager in Silicon Valley. We discuss it over coffee and implement it.
In recent months, we have:
- Released full support for MQTT 5.0 (a short client-side sketch follows below)
- Optimized performance for edge deployments (sub-millisecond latency)
- Added native integration with OpenTelemetry
- Implemented enterprise-grade multi-tenancy
But above all, we’ve kept the original promise: simplicity. Every feature added must simplify, not complicate.
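As an illustration of what the MQTT 5.0 support in that list exposes to clients, here is a sketch using paho-mqtt’s v5 API; the hostname and topic are hypothetical:

```python
# Two MQTT 5.0 features worth having broker-side support for:
# per-message expiry and user properties. Hostname/topic hypothetical.
import paho.mqtt.client as mqtt
from paho.mqtt.packettypes import PacketTypes
from paho.mqtt.properties import Properties

client = mqtt.Client(protocol=mqtt.MQTTv5)  # 2.x also takes a CallbackAPIVersion
client.connect("waterstream.example.com", 1883)
client.loop_start()

props = Properties(PacketTypes.PUBLISH)
props.MessageExpiryInterval = 60           # stale telemetry self-destructs
props.UserProperty = ("site", "plant-3")   # metadata without touching the payload

info = client.publish("plant/line-3/press-042/telemetry",
                      payload=b'{"temp_c": 71.3}', qos=1, properties=props)
info.wait_for_publish()
client.loop_stop()
```

Message expiry alone spares downstream consumers a lot of filtering for telemetry that is only useful while fresh.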
Concrete Results in 2025
Logistics Client (migrated from AWS IoT Core):
- Before, they paid $38K/month to AWS and had total vendor lock-in.
- After, their on-prem infrastructure and Waterstream cost $8K/month.
- The migration took two weeks (ten days of which were spent convincing management it was really that simple).
Energy Client (migrated from Azure IoT Hub):
- Before, they had six Azure components and proprietary SDKs everywhere.
- After, they had one Waterstream deployment on Kubernetes, with their choice of cloud provider.
- The freedom to move workloads went from zero to total.
Manufacturing Client (migrated from a custom architecture):
- Before, they had 12 microservices to manage MQTT-to-Kafka.
- After, they had a single Waterstream deployment.
- Their DevOps team went from four people to 0.5 FTE.
Why Not the Alternatives (The Brutally Honest Version)
- HiveMQ: It costs as much as a luxury German car. An enterprise license for 50K devices? Get ready to cry. And you still need a separate Kafka instance.
- EMQX: The open-source version is okay, but then you go into production, need support, and discover it costs as much as HiveMQ. Oh, and the Kafka integration? It’s a plugin no one wants to touch.
- Confluent MQTT Proxy: It’s. A. Proxy. It adds latency, it’s another component to manage, and it’s priced like the rest of the Confluent Platform. No thanks.
- Eclipse Mosquitto + Custom Bridge: We’ve all done it. It works for 1,000 devices. At 10K, it starts to creak. At 50K, it’s a maintenance nightmare.
- RabbitMQ with MQTT Plugin: Cute for hobby projects. In production with millions of messages? Good luck.
The Future We’re (Literally) Building
Now that Waterstream is in our hands at Bitrock, the 2025-2026 roadmap is clear:
- Q3 2025: WebAssembly plugins for custom transformations without forking
- Q4 2025: Edge computing mode for Raspberry Pi deployments
- Q1 2026: Native AI-powered anomaly detection
- Q2 2026: Multi-region federation with automatic consensus
But the philosophy remains: every feature must eliminate complexity, not add it.
The Lesson for the Industry
After years of “best practices” that were just accepted workarounds, here’s what I’ve learned:
- Lock-in isn’t inevitable: You can have cloud convenience without vendor prison.
- Complexity isn’t sophistication: It’s technical debt in disguise.
- Bridges are admissions of failure: If you need a bridge, the architecture is wrong.
- True TCO includes freedom: How much does it cost to not be able to change?
Today, when a client tells me, “We’ll use Kafka for IoT,” my answer is simple: “Perfect. Let me show you how to do it without going crazy”. And then we deploy Waterstream. One container. Devices on MQTT. Apps on Kafka. Zero drama.
The revolution isn’t always noisy. Sometimes, it’s just a system that works the way it should.
Main Author: Franco Geraci, Head of Engineering @ Bitrock