Mainframe Offloading for Real-time Data Distribution

Back-end & Platform Engineering Solution

Context

In the financial sector, digital transformation is no longer an option but a strategic necessity for maintaining competitiveness. Institutions still operating on traditional mainframe infrastructures today face a decisive challenge: evolving towards modern, more agile architectures capable of enabling near real-time data distribution. 

This need is particularly urgent in core business support systems, such as customer viewing applications used in branches. Often based on batch processes and updates via data warehouses, these solutions are no longer adequate to meet the timeliness and responsiveness expectations that characterize today’s banking world. 

However, modernizing these flows does not mean abandoning the strengths of mainframes, such as reliability, security, and operational robustness, but rather integrating them with technologies capable of ensuring more dynamic, flexible, and near real-time data access.


Pain Points

  • Outdated data: Batch processing introduces delays in updating information, preventing a real-time view of critical data.
  • Restricted operating window: Batch processes must be executed during periods of low activity to avoid overloads, limiting operational flexibility.
  • Complex error management: An error in a batch process can compromise the entire set of processed data, making timely identification and correction of problems difficult.
  • Poor responsiveness: The batch approach is unsuited to scenarios that require rapid responses or real-time processing, limiting how quickly the business can react.

Solution

Bitrock offers an innovative architectural solution for the financial world based on the Lambda Architecture pattern, introducing a Speed Layer to enable real-time data management. The architecture is organized into several interconnected functional layers. Starting with a Change Data Capture (CDC) system based on SQDATA, the solution continuously monitors mainframe DB2/VSAM logs to intercept data changes.
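
To make the flow concrete, the sketch below shows how a downstream service might consume the change events once they land on a Kafka topic. It is a minimal illustration, not the actual SQDATA integration: the topic name, the event fields, and the broker address are assumptions.

```python
# Minimal sketch of a consumer reading CDC change events from Kafka.
# Topic and payload layout ("mainframe.db2.customers", "op", "after")
# are illustrative assumptions, not the real feed.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "cdc-demo-reader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["mainframe.db2.customers"])  # hypothetical CDC topic

try:
    while True:
        msg = consumer.poll(1.0)              # wait up to 1s for a record
        if msg is None or msg.error():
            continue
        change = json.loads(msg.value())      # CDC payload as JSON (assumed format)
        # e.g. {"op": "U", "after": {"customer_id": "42", "balance": 1250.0}}
        print(change.get("op"), change.get("after"))
finally:
    consumer.close()
```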

These changes are propagated through a robust Event & Data Streaming layer, implemented on the Kafka/Confluent platform, which distributes them efficiently across the infrastructure. The streaming data is then processed with KSQL to apply the necessary transformations and aggregations before being indexed in Elasticsearch, chosen as the search engine and storage layer optimized for real-time queries.
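
As an illustration of this processing step, the following sketch submits a KSQL statement of the kind the Speed Layer might run, reshaping raw change events into a flat stream ready to be indexed. The stream, topic, and column names are assumptions; the statement is sent through ksqlDB's REST API and presumes a source stream has already been declared over the CDC topic.

```python
# Hedged example: registering a derived stream via the ksqlDB REST API.
# Server address, stream names, and columns are assumptions for illustration.
import requests

ksql = """
    CREATE STREAM customer_view AS
      SELECT after->customer_id AS customer_id,
             after->balance     AS balance,
             after->branch_code AS branch_code
      FROM customer_changes
      EMIT CHANGES;
"""  # assumes 'customer_changes' exposes the CDC payload as a STRUCT column

resp = requests.post(
    "http://localhost:8088/ksql",                     # assumed ksqlDB server
    json={"ksql": ksql, "streamsProperties": {}},
)
resp.raise_for_status()
print(resp.json())
```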

Finally, the entire architecture feeds the new customer viewing interface, allowing branch operators to access up-to-date information in near real-time.
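
For completeness, here is a minimal sketch of how the customer viewing interface (or the API behind it) could read the freshly indexed data from Elasticsearch. The index name and the field being queried are assumptions made for illustration.

```python
# Minimal sketch: looking up an up-to-date customer record in Elasticsearch.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")           # assumed cluster address

result = es.search(
    index="customer_view",                            # hypothetical index name
    query={"term": {"customer_id": "42"}},            # look up a single customer
    size=1,
)
for hit in result["hits"]["hits"]:
    print(hit["_source"])                             # near real-time customer view
```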

Benefits

  • Immediate access to customer information by branches
  • Optimal workload balancing between systems
  • Ability to develop new business cases based on real-time data
  • Reduced load on the mainframe thanks to the distributed architecture
  • Increased operational efficiency thanks to the elimination of batch latencies
  • A scalable architecture prepared for future real-time use cases

Technology Stack and Key Skills

  • Apache Kafka
  • Confluent Platform
  • KSQL
  • Elasticsearch
  • Change Data Capture (CDC)
  • SQDATA
  • DB2
  • VSAM
  • Lambda Architecture
  • Event Streaming
  • Real-time Data Processing
  • System Integration
  • Mainframe Integration

Do you want to know more about our services? Fill in the form and schedule a meeting with our team!