Many companies watch their IoT platform costs escalate faster than their installed sensor base, trapped in a jungle of intermediary components and custom bridges. In this article, we will explore how to reduce Total Cost of Ownership (TCO) by eliminating unnecessary complexity between the edge and the cloud through a lean architecture based on Waterstream and Kafka.
Let’s start with a use case: a manufacturing company with thousands of devices in the field decides to stream all IoT data into Kafka to enable real-time analytics and data-driven applications. On paper, the architecture looks flawless: a cloud provider’s managed IoT service, a dedicated Kafka cluster, custom MQTT-to-Kafka bridges, serverless functions for data normalization, and intermediary storage for buffering.
After a few months in production, the Ops team is buried in monitoring dashboards, while the CFO questions how it is possible for cloud-related costs to continue growing faster than the number of sensors. As we will see, the problem is not the costs themselves, but the structural complexity that the company is forced to pay for every single month without being able to truly reduce it.
The Hidden Costs
When discussing TCO for IoT platforms, the conversation tends to focus on software licenses or the number of Kafka nodes required. In reality, the true costs lurk within the jungle of intermediary components: separate MQTT brokers (managed or self-hosted), Kafka clusters, integration bridges, connectors, functions, queues, and intermediary databases used to compensate for the limitations of upstream or downstream systems. Each component adds infrastructure costs, operational costs (monitoring, patching, incidents, on-call rotations), and the development and maintenance costs for that MQTT-to-Kafka bridge that no one wants to touch anymore.
TCO therefore becomes nearly impossible to predict and even harder to reduce without re-evaluating the entire architecture. A real-life case study featured on our blog shows a logistics customer that cut its monthly spend from around $38,000 to $8,000 by migrating from a public cloud-based IoT service to an on-premises solution based on Waterstream. It is not the cloud that is too expensive: it is the unnecessary complexity that accumulates between the devices and the data platform.
Waterstream: When Subtraction is Worth More Than Addition
The idea behind Waterstream is as simple as it is disruptive: allow devices to speak MQTT directly to Kafka, without a separate broker and without custom integration layers. Devices connect via MQTT to Waterstream, which uses Kafka as its sole backend for persistence and message distribution, while applications continue to consume from Kafka as they do today. One container, MQTT devices, Kafka apps, and zero custom “glue code.”
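To make the shape of this architecture concrete, here is a minimal deployment sketch: one Waterstream container listening for MQTT on one side and pointing at an existing Kafka cluster on the other. The image name, port, and environment variable names below follow the pattern of the Waterstream documentation but should be treated as assumptions; verify them against the official docs before use.

```yaml
# Hypothetical docker-compose sketch (names are assumptions, not gospel):
# devices connect over MQTT on 1883; Kafka is the only backend.
services:
  waterstream:
    image: simplematter/waterstream-kafka     # assumed image name
    ports:
      - "1883:1883"                           # MQTT listener for devices
    environment:
      KAFKA_BOOTSTRAP_SERVERS: "kafka:9092"   # your existing Kafka cluster
      MQTT_PORT: "1883"
      KAFKA_MESSAGES_DEFAULT_TOPIC: "mqtt_messages"  # where publishes land
```

Note what is absent: no separate MQTT broker, no bridge process, no intermediate queue. The container is the entire MQTT-facing surface.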
The consequence is the disappearance of an entire architectural layer made of bridges, intermediate queues, and functions for data normalization or transformation. Waterstream acts as a stateless component, perfect for Kubernetes and cloud-native environments, with deployment and scaling that can be automated without reconfiguring complex clusters. This means a dedicated team is no longer needed to maintain the plumbing between the MQTT and Kafka worlds: that complexity simply no longer exists.
Where TCO is Truly Reduced
Reducing TCO does not simply mean paying for fewer nodes; it means removing structural complexity. With Waterstream, this translates into three concrete impacts that every CFO and IT manager can measure. Eliminating the separate MQTT broker and custom bridges removes entire line items from the infrastructure and operational budget. Kafka becomes the single back-end, Waterstream exposes it via MQTT, and every proprietary managed service replaced means less vendor lock-in and lower recurring cloud provider fees. Every eliminated component is one less monitoring chart, one less alert, and one less root cause analysis.
Furthermore, Waterstream does not maintain state: messages and session information live in Kafka. This radically simplifies the work of those managing the platform in production. Deployment and scaling become standard in Kubernetes, version updates and rollouts are managed like any other microservice, and a single observability stack is sufficient for both Kafka and Waterstream. Less time spent maintaining infrastructure means more time building features that generate business value.
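Statelessness is what makes this operationally cheap: since sessions and messages live in Kafka, scaling Waterstream is a standard replica change, like any other microservice. The manifest below is a hedged sketch only; the image name and environment variable names are assumptions to be checked against the Waterstream documentation.

```yaml
# Hedged Kubernetes sketch: with all state in Kafka, horizontal scaling
# is a one-line replica change. Image and variable names are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: waterstream
spec:
  replicas: 3                      # scale out like any stateless service
  selector:
    matchLabels: {app: waterstream}
  template:
    metadata:
      labels: {app: waterstream}
    spec:
      containers:
        - name: waterstream
          image: simplematter/waterstream-kafka   # assumed image name
          ports:
            - containerPort: 1883                 # MQTT listener
          env:
            - name: KAFKA_BOOTSTRAP_SERVERS
              value: "kafka:9092"
```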
Development Focused on Use Cases, Not Protocols
Without ad-hoc integration layers, teams can think in terms of use cases instead of protocols. IoT data is available directly in Kafka, ready for real-time analytics, AI, and automation, without duplicated logic between the MQTT and Kafka worlds. Simpler data pipelines are easier to test, put into production, and evolve over time. This is how TCO is truly lowered: fewer lines of “invisible” code written to make systems communicate that should already be talking to each other, and more investment in functionalities with a direct impact on the business.
From IoT Complexity to an AI-Ready Platform
Consider the case of a manufacturing company managing dozens of plants with thousands of sensors per line. In the initial scenario, the architecture included a managed IoT service plus Kafka and custom bridges, leading to rising cloud costs and stalled AI projects because data was not reaching data scientists reliably. After introducing Waterstream and revising the architecture, devices continued to speak MQTT, but toward Waterstream; Kafka became the central nervous system for all operational data flows; and Data & AI teams could access real-time streams without building additional pipelines.
The benefits were immediate: a significant reduction in spending related to managed IoT services and intermediary components, and an onboarding time for new use cases (alerts, predictive maintenance, digital twins) that dropped from months to weeks. The platform became natively ready to integrate AI models and advanced use cases without further architectural patches. It is not just about saving money: it is about transforming a platform that drains budgets into a strategic asset that enables innovation.
The Role of Bitrock: From PoC to Production
To truly contain TCO, solid architectural choices, change governance, and a clear roadmap focused on business objectives (not just the tech stack) are required.
This is exactly where Bitrock comes in, with the design and implementation of cloud-native and streaming platforms built for scalability, resilience, and cost control.
The path Bitrock designs with clients always starts with an assessment of the current architecture and effective TCO (including hidden complexity costs), continues with the design of a target architecture aligned with business goals and the digital roadmap, and materializes in an incremental implementation (PoC, rollout, industrialization) with a focus on observability, security, and change management.

Contact our Professionals to present your use case and receive a dedicated consultation.