Reactive Corporate Chat

The perfect combination of your daily workflow and data ownership.

Bitrock Reactive Corporate Chat is designed to provide a fully customizable and adaptable multifunctional chat solution that allows enterprises to maintain full ownership and control of their own data.

What are its main functionalities?

Bitrock Reactive Corporate Chat has all the standard functionality of a chat application, designed to operate at a professional level:

  • One-to-one chat
  • Topic chat
  • Secret chat with self-destructing messages
  • VoIP & video calls
  • File sharing
  • Message editing and deletion
  • “Last seen” as you want it
  • No embarrassing photos
  • Locked chats
  • In-app PIN code
  • Tailorable contact list
  • Snooze during weekends/vacations

Who's it for?

  • For those companies who do not want their data to reside in third-party infrastructure (i.e. preventing your data from being used for marketing purposes or sold to third parties or competitors)
  • For those companies who need a professional tool for business collaboration (i.e. avoiding mixing personal with work communications, and optionally limiting chat usage to working days)
  • For those companies who want to integrate a new tool with their own company’s workflow tools (i.e. receiving real-time notifications about the progress of a project within the project management group, or discovering details and info about a workflow practice directly in the app)

What's inside?

iOS and Android app

Pluggable and event-driven architecture

Kubernetes-ready, so it runs equally well in on-premises and cloud environments

What's next?

  • Web Client (in development)
  • Supervised Machine Learning (in development): through iterative cycles of questions and answers, the knowledge base becomes more autonomous in interacting with users
  • NLP & ML (in development): using domain knowledge and natural language comprehension to analyze and understand user intentions and respond in the most useful way
  • Analytics Tool (in development), enabling several intelligence capabilities (e.g. sentiment analysis, productivity monitoring, ...)
  • Full Chatbot interaction
Confluent Operations Training for Apache Kafka

In this three-day hands-on course you will learn how to build, manage, and monitor Kafka clusters using industry best practices developed by the world’s foremost Apache Kafka experts.

You will learn how Kafka and the Confluent Platform work, their main subsystems, how they interact, and how to set up, manage, monitor, and tune your cluster.

Hands-On Training

Throughout the course, hands-on exercises reinforce the topics being discussed. Exercises include:

  • Cluster installation
  • Basic cluster operations
  • Viewing and interpreting cluster metrics
  • Recovering from a Broker failure
  • Performance-tuning the cluster
  • Securing the cluster

This course is designed for engineers, system administrators, and operations staff responsible for building, managing, monitoring, and tuning Kafka clusters.

Course Prerequisites

Attendees should have a strong knowledge of Linux/Unix, and understand basic TCP/IP networking concepts. Familiarity with the Java Virtual Machine (JVM) is helpful. Prior knowledge of Kafka is helpful, but is not required.

Course Contents

The Motivation for Apache Kafka

  • Systems Complexity

  • Real-Time Processing is Becoming Prevalent

  • Kafka: A Stream Data Platform

    Kafka Fundamentals

  • An Overview of Kafka

  • Kafka Producers

  • Kafka Brokers

  • Kafka Consumers

  • Kafka’s Use of ZooKeeper

  • Comparisons with Traditional Message Queues

    Providing Durability

  • Basic Replication Concepts

  • Durability Through Intra-Cluster Replication

  • Writing Data to Kafka Reliably

  • Broker Shutdown and Failures

  • Controllers in the Cluster

  • The Kafka Log Files

  • Offset Management

    Designing for High Availability

  • Kafka Reference Architecture

  • Brokers

  • ZooKeeper

  • Connect

  • Schema Registry

  • REST Proxy

  • Multiple Data Centers

    Managing a Kafka Cluster

  • Installing and Running Kafka

  • Monitoring Kafka

  • Basic Cluster Management

  • Log Retention and Compaction

  • An Elastic Cluster

    Optimizing Kafka Performance

  • Producer Performance

  • Broker Performance

  • Broker Failures and Recovery Time

  • Load Balancing Consumption

  • Consumption Performance

  • Performance Testing

    Kafka Security

  • SSL for Encryption and Authentication

  • SASL for Authentication

  • Data at Rest Encryption

  • Securing ZooKeeper and the REST Proxy

  • Migration to a Secure Cluster

    Integrating Systems with Kafka Connect

  • The Motivation for Kafka Connect

  • Types of Connectors

  • Kafka Connect Implementation

  • Standalone and Distributed Modes

  • Configuring the Connectors

  • Deployment Considerations

  • Comparison with Other Systems
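The durability topics in the outline above revolve around replication and acknowledgement settings. As a hedged illustration of how they fit together (the values below are examples, not recommendations), reliable writes combine topic-level replication on the broker side with acknowledgement settings on the producer side:

```
# Broker/topic settings (illustrative values)
default.replication.factor=3   # each partition is stored on 3 brokers
min.insync.replicas=2          # a write succeeds only if 2 replicas are in sync

# Producer settings
acks=all                       # wait for all in-sync replicas to acknowledge
retries=2147483647             # retry transient failures instead of dropping data
```

With this combination, a single broker failure neither loses acknowledged messages nor blocks producers.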

Confluent Developer Training

Building Kafka Solutions

In this three-day hands-on course you will learn how to build an application that can publish data to, and subscribe to data from, an Apache Kafka cluster.

You will learn the role of Kafka in the modern data distribution pipeline, discuss core Kafka architectural concepts and components, and review the Kafka developer APIs. In addition to core Kafka, Kafka Connect, and Kafka Streams, the course also covers other components of the broader Confluent Platform, such as the Schema Registry and the REST Proxy.

Hands-On Training

Throughout the course, hands-on exercises reinforce the topics being discussed. Exercises include:

  • Using Kafka’s command-line tools
  • Writing Consumers and Producers
  • Writing a multi-threaded Consumer
  • Using the REST Proxy
  • Storing Avro data in Kafka with the Schema Registry
  • Ingesting data with Kafka Connect

This course is designed for application developers, ETL (extract, transform, and load) developers, and data scientists who need to interact with Kafka clusters as a source of, or destination for, data.

Course Prerequisites

Attendees should be familiar with developing in Java (preferred) or Python. No prior knowledge of Kafka is required.

Course Contents

The Motivation for Apache Kafka

  • Systems Complexity

  • Real-Time Processing is Becoming Prevalent

  • Kafka: A Stream Data Platform

    Kafka Fundamentals

  • An Overview of Kafka

  • Kafka Producers

  • Kafka Brokers

  • Kafka Consumers

  • Kafka’s Use of ZooKeeper

  • Kafka Efficiency

    Kafka’s Architecture

  • Kafka’s Log Files

  • Replicas for Reliability

  • Kafka’s Write Path

  • Kafka’s Read Path

  • Partitions and Consumer Groups for Scalability

    Developing With Kafka

  • Using Maven for Project Management

  • Programmatically Accessing Kafka

  • Writing a Producer in Java

  • Using the REST API to Write a Producer

  • Writing a Consumer in Java

  • Using the REST API to Write a Consumer

    More Advanced Kafka Development

  • Creating a Multi-Threaded Consumer

  • Specifying Offsets

  • Consumer Rebalancing

  • Manually Committing Offsets

  • Partitioning Data

  • Message Durability

    Schema Management in Kafka

  • An Introduction to Avro

  • Avro Schemas

  • Using the Schema Registry

    Kafka Connect for Data Movement

  • The Motivation for Kafka Connect

  • Kafka Connect Basics

  • Modes of Working: Standalone and Distributed

  • Configuring Distributed Mode

  • Tracking Offsets

  • Connector Configuration

  • Comparing Kafka Connect with Other Options

    Basic Kafka Installation and Administration

  • Kafka Installation

  • Hardware Considerations

  • Administering Kafka

    Kafka Streams

  • The Motivation for Kafka Streams

  • Kafka Streams Fundamentals

  • Investigating a Kafka Streams Application
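The “Partitioning Data” and “Partitions and Consumer Groups for Scalability” topics above rest on one idea: a record’s key determines its partition, so all records with the same key stay in order on a single partition. A minimal Python sketch of the principle (illustrative only — Kafka’s default partitioner uses a murmur2 hash, not MD5):

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition index.

    Simplified stand-in for Kafka's default partitioner: hash the key
    deterministically, then take it modulo the partition count, so the
    same key always lands on the same partition.
    """
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Records sharing a key always map to the same partition,
# which is what preserves per-key ordering.
p1 = partition_for(b"customer-42", 6)
p2 = partition_for(b"customer-42", 6)
assert p1 == p2
```

Note the corollary covered in the course: changing the number of partitions changes the key-to-partition mapping, which is why partition counts are chosen carefully up front.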

Interview with Marco Stefani

Software Development @Bitrock

An interview with Marco Stefani (Head of Software Engineering)

We met up with Marco Stefani, Head of Software Engineering at Bitrock, in order to understand his point of view and vision related to software development today. A good opportunity to explore the key points of Bitrock technology experience, and to better understand the features that a developer must have to become part of this team.

Marco Stefani (Head of Engineering @Bitrock) interviewed by Luca Lanza (Corporate Strategist @Databiz Group).

1) How has the approach to software development evolved over time, and what does it mean to be a software developer today in this company?

When I started working, in the typical enterprise environment the developer was required to code all day long, test his work a little bit, and then pass it to somebody else who would package and deploy it. The tests were done by some other unit, and if some bug was found it was reported back to the developer. No responsibility was taken by developers after they thought the software was ready ("It works on my machine"). Releases to production were rare and painful, because nobody was really in charge. In recent years we have seen a radical transformation in the way of working in this industry, thanks to Agile methodologies and practices. XP, Scrum and DevOps shifted developers' focus to quality, commitment and control of the whole life cycle of software: developers don't just write source code, they own the code. At Bitrock developers want to own the code.

2) What are the strengths of your Unit today? And which are the differentiating values?

Developers at Bitrock range from highly experienced to just graduated. Regardless of seniority, they all want to improve their skills in this craft. We work both with old-style corporations and with innovative companies, and we are always able to provide useful contributions, not just in terms of lines of code, but also with technical leadership. Despite the fact that we can manage essentially any kind of software requirement, as a company our preference goes to functional programming and event-based, microservices architectures. We believe that in complex systems, these two paradigms are key to creating the most flexible and manageable software: something that can evolve as easily as possible with the new and often unforeseen needs of the future.

3) What challenges are you prepared to face with your Unit, and with which approach?

Today's end users expect software applications to always be available and fast; our customers expect the same system to be as economical as possible, in terms of development and maintainability, but also in terms of operational costs. That means that we need to build systems that are responsive, resilient and elastic. If you throw in asynchronous messages as the means of communication between components, then you get a "reactive system" as defined by the Reactive Manifesto.

(Discover more in the Reactive Manifesto.)
Modern software systems development requires a set of skills that are not just technical, but also involve understanding the underlying business and organisation: if we want to model the messages (the events, actually) correctly, or to split the application into meaningful and tractable components, we need to understand the business logic and processes; and the definition of the components will have an impact on how the work and the teams are structured. Designing a software architecture requires tools that are able to connect all these different levels. Best practices, design and architectural patterns must be in the developer's toolbox.

4) What skills should a new member of your team have?

Software development is a very complex and challenging discipline. Technical skills need to be improved and change all the time; keeping up with technology advances is a job in itself, and often is not enough. At Bitrock we believe in continuous learning, so for new members it is important to understand that what they know during the interview is just the first step of a never-ending process. Working in a consultancy adds even more challenges, because it requires constant adaptation to new environments and ways of working that are dictated by the customers. The payoff is that both technical and soft skills can improve in a much shorter time, compared to a "comfortable" product company.

Reactive Supply Chain

A modern and scalable management system for the Supply Chain

Bitrock’s Reactive Supply Chain is a modern, state-of-the-art, scalable and high-performing system for implementing a full-cycle Supply Chain Management System. The solution is fully modular: it can be deployed as a one-stop shop for a new project, or individual components can be integrated into existing systems as needed. From a functional perspective, the system has been designed to implement best practices and to provide maximum flexibility for all industry standards, while adapting to each customer’s specific processes and workflows. The solution is based on open-source technologies covered by Enterprise Support, and can be scaled to manage millions of SKUs, with all data tracked and analyzed in real time. The solution can be delivered on customer premises, in the cloud, or in a hybrid configuration.

For internal Supply Chain Operators/Customers who:

  • provide/perform analytics
  • manage warehouse
  • manage resources and goods in and out
  • fulfil orders

Our Reactive Supply Chain is an up-to-date digital Supply Chain system that scales with business demand and provides the best customer experience by matching every customer process and system, unlike non-customizable ERP systems, which scale poorly, lack flexibility, and demand long lead times and high costs to implement and integrate. Our solution is a highly scalable modular system, implemented on top of the most innovative open-source technologies (with Enterprise support when needed), capable of enabling a lean Supply Chain process with real-time analytics and all the metrics needed by operations and C-levels, with a short-to-mid-term activation and customization time.

Bitrock’s Reactive Supply Chain is designed for:

  • Providing a highly scalable inventory system capable of increasing inventory velocity
  • Enabling lean logistics and supply chain management (reducing lead times, yield planning, stock levels, ...)
  • Compressing cycle time (reducing the time from supplier procurement to customer delivery)
  • Segmenting and adapting the Supply Chain to different kinds of retailers (no monolithic approach)
  • Managing SC standards (e.g. GS1) while acting according to business logic (B2B and B2C)
  • Providing real-time analytics with all needed metrics (from operational to C-level), so as to achieve greater automation (with the possibility of applying Artificial Intelligence to optimize several processes) and control

These benefits enable Direct-to-Market models for Supply Chain operators, as well as more modern integration (or self-implementation) of eCommerce strategies, strengthening Supply Chain operators’ brand identity.

All systems and components are Dockerized, allowing easy deployment both on premises and in the cloud (adding orchestration services such as Docker Compose, Kubernetes, or Mesos DC/OS). All components are configured to use Kafka and also expose a set of REST APIs.
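As a hedged sketch of what such a Dockerized deployment could look like with Docker Compose (the service names, images and settings below are illustrative examples, not the actual Bitrock stack):

```yaml
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
  inventory-service:                 # hypothetical microservice name
    image: example/inventory-service # illustrative image
    depends_on: [kafka]
```

The same service definitions can be carried over to a Kubernetes deployment when moving from on-premises to cloud.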

Message broker technology: Confluent Kafka (open source or enterprise)

Main components and technical architecture references: Java/Scala microservices, Play for APIs and frontend, Akka (with Persistence, Cluster, HTTP, Alpakka) for distributed logic, Cassandra/MySQL, Elasticsearch, Kafka.

Every microservice has its own architecture and can use different kinds of technologies and data sources, to guarantee the best choice for each specific service goal; however, a general onion-architecture approach has been taken to keep service architectures uniform where possible.

Analytics: Apache Flink, Spark, Cassandra, Kibana, Elasticsearch, Kafka.
