Real-time Data Streaming in the Large-Scale Retail Sector

Back-end & Platform Engineering Solution

Context

In the large-scale retail sector, the ability to process data and act on it in real-time is considered a fundamental strategic necessity. Data is a crucial lever for optimizing operational efficiency and reducing costs. In-depth customer knowledge, derived from data analysis, is also essential for personalizing offers and providing a superior shopping experience. Furthermore, timely access to data enables a whole range of innovations in services and the shopping experience that were not previously possible. 

Despite the critical importance of this information, a recurring problem is data fragmentation: information remains scattered across heterogeneous systems and is difficult to move quickly to where it is needed.

Pain Points

For leaders who operate in the large-scale retail sector, a modern data architecture must overcome crucial and often conflicting operational challenges:

  • Lack of Local Resilience: Individual stores must be able to maintain full operation, including essential functions such as receipt issuance and point-of-sale management, even in the event of a network or Internet connectivity failure.
  • Complexity in Bidirectional Synchronization: It is essential to ensure a reliable flow of data from local operations (e.g., receipts) to the central HQ and, at the same time, ensure that centrally generated decisions or updates (e.g., promotions) flow rapidly to the stores, while preserving the autonomy of each side.
  • Fragmented Data: Crucial information is dispersed across heterogeneous systems, preventing the creation of a unique and coherent view.
  • Latency in Strategic Decisions: The inability to react quickly to data and information arriving from the outside world limits innovation in services and commercial strategies.

Solutions

Bitrock proposes an advanced data streaming architecture, based on the Hub-and-Spoke model, which fully leverages the capabilities of Apache Kafka and the Confluent Platform to balance local resilience with global visibility. Specifically:

Hub-and-Spoke Distributed Architecture

The solution involves configuring a local Kafka cluster in each store (Spoke), which ensures resilience by allowing stores to operate in isolation. The Headquarters (Hub) integrates the Confluent Platform and is responsible for central processing and aggregation of all data.
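As an illustration, the sketch below (in Java, with hypothetical broker addresses, topic names, and sizing) shows how the receipt and promotion topics could be provisioned on a store's local Spoke cluster; the same topology is simply repeated for every new store, while the Hub cluster at HQ aggregates the replicated data.

```java
// Minimal sketch (hypothetical names and sizing): provisioning the receipt and
// promotion topics on a store's local Kafka cluster (Spoke). Each store runs the
// same small cluster, while the HQ cluster (Hub) aggregates the replicated data.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class StoreTopicProvisioning {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Bootstrap address of the store-local cluster; value is illustrative.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "store-042-kafka:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Single-broker store cluster: replication factor 1 is an assumption
            // for the edge, not a recommendation for the central (Hub) cluster.
            NewTopic receipts = new NewTopic("store.042.receipts", 3, (short) 1);
            NewTopic promotions = new NewTopic("store.042.promotions", 1, (short) 1);
            admin.createTopics(List.of(receipts, promotions)).all().get();
        }
    }
}
```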

Data Transport and Peripheral Production

Cash registers, acting as Kafka Producers, publish fundamental operational data, such as the events generated by the digital receipt, directly to the store's local Kafka cluster.
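A minimal producer sketch follows, assuming a JSON-serialized receipt and hypothetical topic and broker names; it illustrates how a cash register could publish each issued receipt to the store-local cluster so that sales continue even when the link to HQ is down.

```java
// Minimal sketch, assuming a JSON-serialized receipt and hypothetical names:
// a cash register acting as a Kafka producer that publishes each issued receipt
// to the store's local (Spoke) cluster.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ReceiptProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "store-042-kafka:9092"); // local Spoke cluster
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");               // durable writes on the local cluster
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // avoid duplicate receipts on retries

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String receiptId = "store-042-receipt-000123";          // illustrative key
            String receiptJson = "{\"receiptId\":\"000123\",\"storeId\":\"042\",\"total\":42.50}";
            producer.send(new ProducerRecord<>("store.042.receipts", receiptId, receiptJson),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // In a real register this would trigger local retry/buffering logic.
                            exception.printStackTrace();
                        }
                    });
            producer.flush();
        }
    }
}
```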

Automated Bidirectional Synchronization

Synchronization between the central cluster and the local clusters is handled by Confluent Replicator. In the event of a network partition or connectivity failure, Replicator ensures that data synchronization resumes automatically as soon as the connection is restored, reducing the need for manual intervention. This mechanism supports both the inbound data flow (receipts) and the outbound data flow (promotions).
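For illustration, a Replicator connector configuration for the inbound receipt flow might look like the sketch below, here expressed as a Java map that would be submitted to a Kafka Connect worker. Hostnames, topic names, and the exact selection of configuration keys are assumptions; the Confluent Replicator documentation remains the authoritative reference.

```java
// Minimal sketch of a Confluent Replicator connector configuration, built as a map.
// Hostnames, topic names and the selection of keys shown here are assumptions.
import java.util.Map;

public class ReceiptReplicationConfig {
    public static void main(String[] args) {
        Map<String, String> config = Map.of(
                "connector.class", "io.confluent.connect.replicator.ReplicatorSourceConnector",
                // Source: the store-local (Spoke) cluster holding the receipt topic.
                "src.kafka.bootstrap.servers", "store-042-kafka:9092",
                // Destination: the central HQ (Hub) cluster.
                "dest.kafka.bootstrap.servers", "hq-kafka:9092",
                // Replicate only the receipt topics produced at the edge.
                "topic.whitelist", "store.042.receipts",
                "tasks.max", "1"
        );
        // This map would typically be wrapped in {"name": ..., "config": {...}} and
        // submitted to the Connect REST API; replication resumes automatically once
        // connectivity between store and HQ is restored.
        config.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```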

Validation and Consolidation of the Central Data Contract

Once the HQ receives the receipt data, it undergoes a series of syntactic and semantic validation steps and is subsequently consolidated into a central topic. This validated data acts as the Data Contract for the receipt and becomes the fundamental trigger that simultaneously feeds numerous business processes.
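The sketch below shows one possible shape of this step as a Kafka Streams topology, under the assumption that receipts arrive as JSON strings on a hypothetical hq.receipts.raw topic: records that pass a (deliberately simplified) check are consolidated into the central validated topic, while the rest are routed to a dead-letter topic for inspection.

```java
// Minimal Kafka Streams sketch of the HQ-side validation step. Topic names and the
// placeholder check are assumptions standing in for real syntactic/semantic validation.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class ReceiptValidationTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "receipt-validation");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "hq-kafka:9092"); // central Hub cluster
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> raw = builder.stream("hq.receipts.raw");

        // Placeholder check standing in for schema checks, amount consistency,
        // store/receipt identifier validation, and so on.
        KStream<String, String> valid = raw.filter((key, value) ->
                value != null && value.contains("\"receiptId\""));
        KStream<String, String> invalid = raw.filter((key, value) ->
                value == null || !value.contains("\"receiptId\""));

        valid.to("hq.receipts.validated");   // the consolidated data-contract topic
        invalid.to("hq.receipts.deadletter");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```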

Dynamic Distribution of Configurations

In the outbound stream, centrally generated configurations, such as promotions or other operational rules, are aggregated, consolidated, and published to the central Kafka cluster. Confluent Replicator then replicates them to the local Kafka clusters of the individual stores (edge points), where local systems consume and apply them in near real time.
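A minimal consumer sketch, again with hypothetical topic and broker names, shows the edge side of this flow: once promotions have been replicated to the store's local cluster, a local system consumes and applies them in near real time.

```java
// Minimal sketch of the outbound path at the edge, with hypothetical names:
// a store-local system consuming HQ-generated promotions from the local cluster.
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class PromotionConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "store-042-kafka:9092"); // local Spoke cluster
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "store-042-promotion-engine");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // replay existing promotions on first start

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("store.042.promotions"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // In a real deployment this would update the POS pricing/promotion engine.
                    System.out.printf("Applying promotion %s: %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```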

Benefits

The implementation of the Kafka-based event-driven architecture offers profound operational, technical, and organizational advantages, including:

  • Guaranteed Operational Resilience: Stores maintain full operational capacity even in case of isolation or lack of connection with the central HQ.
  • Near Real-time Data Visibility: Data timeliness is valuable for analytics and for procurement calculations, which are crucial for centralized management.
  • Operational Efficiency and Acceleration of Critical Processes: Event-driven processing enables immediate invoicing, reducing customer waiting times, and keeps the local warehouse status up to date for goods procurement logic.
  • Simplified Agility and Scalability: Adding new stores is simple and fast, requiring only the replication of the local architecture.
  • Strong Synergy and Data Consistency: A single event (the receipt) feeds multiple business processes (invoicing, procurement, analytics, loyalty), avoiding redundancy and ensuring consistency.

  • Promotion of Innovation: The data streaming architecture constitutes a solid foundation for future integrations, such as Edge Computing and the use of Artificial Intelligence for advanced loyalty logics and purchasing behavior analysis.

Technology Stack and Key Skills

 

  • Apache Kafka
  • Confluent Platform
  • Flink
  • Machine Learning and Deep Learning
  • Real-time Analytics
  • Predictive Modeling
  • Large Language Models (LLMs)
  • RAG (Retrieval-Augmented Generation)
  • MLOps and Model Monitoring
  • Drift Detection
  • Agentic AI

Do you want to know more about our services? Fill in the form and schedule a meeting with our team!