
An Introduction to "Deno"


What is Deno?

“A secure runtime for JavaScript and TypeScript.” This is the definition found on the official Deno website.

Before going into details, let’s start by clarifying the two main concepts included in this definition.


What is a runtime system?

In Deno's case, the runtime is what makes JavaScript run outside the browser, adding a series of features that cannot be found in the JavaScript engine itself.


What is Typescript?

TypeScript is a superset of JavaScript that adds a series of features which make the language more robust and expressive. Its main features are:

  • Optional static typing
  • Type inference
  • Improved Classes
  • Interfaces

At this point, you may notice a similarity between Node and our definition of Deno: they seem to do almost the same thing, and they are both built upon the V8 JavaScript engine.


So Why Deno?

“Deno” - as Ryan Dahl, creator of both Deno and Node, said - “is not by any means a rival of Node”. He was simply no longer happy with Node and decided to create a new runtime that makes up for its “mistakes” and shortcomings (while also adding a bunch of new features).



Getting closer to Deno

Let’s now discover what makes Deno so promising and interesting, along with some key differences with Node.js:


Deno supports TypeScript out of the box

Deno can run TypeScript code without installing additional libraries such as ‘ts-node’.

It is possible to create an app.ts file and run it with the simple command “deno run app.ts”, without any additional step.
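
A minimal sketch of what this looks like (the file name and its content are just an example):

    // app.ts - a tiny TypeScript program that Deno executes directly
    const greet = (name: string): string => `Hello, ${name}!`;
    console.log(greet("Deno"));

    // run it with:
    //   deno run app.ts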


ES Modules

Deno drops CommonJS modules, which are still used in Node.js, and embraces modern ES modules, which are the standard in the JavaScript world and are mostly used in front-end development.

Deno borrows from Golang the possibility to import modules directly from a URL.
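
As a sketch, this is what such an import looks like (the standard-library URL and version below are illustrative):

    // no npm install: the module is fetched and cached from its URL
    import { assertEquals } from "https://deno.land/std@0.63.0/testing/asserts.ts";

    assertEquals(1 + 1, 2);
    console.log("module imported straight from a URL");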


Security First

Deno implements a philosophy of “least privilege” when it comes to security. To run a script, you need to add the appropriate flags in order to enable certain permissions.

Here’s the list of flags that can be used (an example command follows the list):

  • --allow-env: allow environment access
  • --allow-hrtime: allow high-resolution time measurement
  • --allow-net=<allow-net>: allow network access
  • --allow-plugin: allow loading plugins
  • --allow-read=<allow-read>: allow file system read access
  • --allow-run: allow running subprocesses
  • --allow-write=<allow-write>: allow file system write access
  • --allow-all: allow all permissions (same as -A)
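
For instance, a script that reads local files and calls an external API might be launched like this (the path and host are illustrative):

    deno run --allow-read=./data --allow-net=api.example.com app.ts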


Standard Libraries

These libraries (click here to find out more) are developed and maintained by the core team of Deno.

Many other languages - Python included - share this concept of having a library of reference that is stable and tested by developers who maintain it.

Since Deno is at an initial stage, the list is still short - but certainly there will be further implementations in the future.


Built-in Tools

When it comes to Node.js, if you want specific tools, you have to install them manually; furthermore, they are essentially third-party tools, not maintained by the Node team.

Deno, instead, embraces another philosophy: it offers a set of built-in tools to improve development. This creates a standard, which makes Deno less fragmented than the Node ecosystem.

Here’s a partial list of them, along with links to the relevant documentation for a deeper understanding of the topic (example invocations follow the list):

  • fmt: a built-in code formatter (similar to gofmt in Go)
  • test: a built-in test runner
  • debugger
  • bundler
  • documentation generator
  • dependency inspector
  • linter
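
To give an idea, all of these are subcommands of the deno binary itself (a sketch; exact flags may vary between versions, and the linter was still behind --unstable at the time of writing):

    deno fmt                      # format the source files of the project
    deno test                     # discover and run *_test.ts files
    deno bundle app.ts app.js     # bundle a module and its dependencies into a single file
    deno doc app.ts               # print the documentation of a module
    deno info app.ts              # inspect the dependency tree of a module
    deno lint --unstable          # run the built-in linter
    deno run --inspect-brk app.ts # start the debugger and wait for a client to attach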


Compatibility with Browser API

The Deno API was created to be as compatible as possible with the Browser API, in order to be able to adopt any upcoming web feature easily. This is one of the main “issues” that Node has, since it uses an incompatible global namespace (“global” instead of “window”). This is the reason why an API like fetch has never been implemented in Node.
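
As a sketch of what this compatibility means in practice, the following runs in Deno with no extra dependency (the URL is illustrative, and the script needs --allow-net):

    // fetch and top-level await are available globally, exactly as in the browser
    const response = await fetch("https://api.example.com/todos/1");
    console.log(await response.json());

    // the global object is window, as in the browser
    console.log(typeof window); // "object", just like in a browser

    // run with: deno run --allow-net fetch_example.ts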


Style Guide to building a Module (Opinionated Modules)

Unlike Node, Deno has a set of rules that a developer should follow in order to publish a module. This avoids the creation of many different ways to reach the same output, thus creating a standard - which is a main principle within the Deno ecosystem. To find out more about the topic, click here.


Where is package.json?

As seen before, in Deno there is no package.json in which to list all the dependencies. Its place is taken by deps.ts.

deps.ts is used for two main reasons:

  • to group all the dependencies needed for the project;
  • to manage versions.

This is, in a way, a replica of the package.json present in Node.js, but many developers no longer consider it a best practice because of the decentralized nature of Deno. They are now experimenting with better ways to organize the code, which might lead to a different way of managing modules. Let’s see how it evolves in the future...

Here’s an example of a deps.ts file:
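
A minimal sketch of the pattern (the module URLs and versions below are illustrative):

    // deps.ts - re-export third-party modules from a single place
    export { serve } from "https://deno.land/std@0.63.0/http/server.ts";
    export { assertEquals } from "https://deno.land/std@0.63.0/testing/asserts.ts";

    // the rest of the codebase imports from "./deps.ts",
    // so dependency versions are managed in this one file only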


What about locking the dependencies?

A file called lock.json is needed in order to lock them. By using the following command, it is possible to cache and assign a hash to every dependency, so that no one can tamper with them:

deno cache --lock=lock.json --lock-write src/deps.ts

For further explanation about integrity check and lock files, please have a look at the official documentation.
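
Conversely, a sketch of how the integrity check can be run later (for instance by another developer or in CI), re-downloading the dependencies and verifying them against the stored hashes:

    deno cache --reload --lock=lock.json src/deps.ts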


Server Setup

Last but not least, here’s a quick but interesting comparison between a server setup in Node and in Deno.

Node server:

An example of Node server
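
A minimal sketch of a typical Node server like the one shown (port and message are illustrative):

    // Node.js: CommonJS require, callback-based API
    const http = require("http");

    const server = http.createServer((req, res) => {
      res.end("Hello from Node\n");
    });

    server.listen(3000, () => console.log("Listening on port 3000"));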

Deno Server:

An example of Deno server
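
A minimal sketch of the equivalent Deno server (the std version pinned below is illustrative); note the ES module imported from a URL, the top-level await, and the fact that it must be started with "deno run --allow-net server.ts":

    // Deno: ES module imported from a URL, async iteration with top-level await
    import { serve } from "https://deno.land/std@0.63.0/http/server.ts";

    const server = serve({ port: 3000 });
    console.log("Listening on port 3000");

    for await (const request of server) {
      request.respond({ body: "Hello from Deno\n" });
    }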

As you can see, the snippets are pretty similar, but with fundamental differences that sum up what we discussed above. More specifically:

  • ES modules;
  • decentralized import from a URL;
  • next-gen JavaScript features out of the box;
  • the permission needed to run the script.



Future Improvements on the Roadmap and Conclusions

One of the key features on the roadmap is the possibility to compile a program into a single executable file, as already happens in many other languages (Golang, for instance) - something that could revolutionize the JavaScript ecosystem itself.

The compatibility layer with the Node.js standard library is also still in progress; this could lead to faster adoption of the new runtime.

To sum up, we can say that Deno is in continuous evolution and will probably be the next game-changer of the JavaScript ecosystem. The foundations for this runtime are in place and it is already a hot topic, so... keep an eye out!



Author: Yi Zhang, UX/UI Engineer @Bitrock


Bitrock_Rooms

Our solution to guarantee a safe working environment

One of the most recent challenges that we as the UX/UI Team have faced at Bitrock was the creation of a brand-new web app to solve a contingent issue inside the company.

The communication was sudden and came with few details about the project: what we had was a problem to solve and a strict, dynamic deadline.


The challenge

Our mission consisted of delivering an app whose main goal was to manage the booking of the desks available in our Milan HQ office. The social distancing measures introduced during the Covid-19 pandemic had forced the Bitrock Team to reduce the capacity of its rooms. Our task was thus to provide a booking platform that would allow our colleagues to book their desks in advance, even from home: in this way it would be possible to ensure that employees keep sufficient distance from each other and to provide a safe working place.

The rules we had to follow when designing the app were simple: every room would have a maximum capacity (of desks) to be respected, and a user would be able to book just one desk in a room per day. Another feature we were required to implement was the ability to see the bookings made by other colleagues in real time, in order to have better feedback on the current room capacity.

On the UX/UI side, we had two kinds of views in mind: a daily view and a weekly one (a feature we were asked for in order to ease booking over several days).

The functional analysis was ready, the deadline was clear. We thus started to work.


The project

At first we created wireframes: they were simple and useful to us in order to have a better understanding of the project.

As Backend and Database Platform, we chose to rely on Firebase and Firestore in order to speed up the implementation.

Firebase was a good fit for every need that we were trying to respond to:

  • Oauth authentication out of the box
  • Real Time Database perfectly integrated with RxJs library

Every decision we made was based on the concept of “Reactive Programming”, in order to achieve a data stream able to facilitate the automatic propagation of data changes.

For the selection of the frontend framework, the choice was easy: Angular, which is well integrated with Firebase in every aspect and synergises with RxJS (A/N: for those who are not familiar with the Angular ecosystem, RxJS is a library that embraces the concept of reactiveness with a functional approach) - everything was made reactive out of the box.

To sum up, here’s the list of the libraries we chose:

  • Angular
  • RxJs
  • AngularFire - Firebase integration for Angular
  • Moment.js - library to manage dates
  • Angular Material - UI library with premade components

Our philosophy was to have the right balance between best practices and productivity, while respecting the limited available time.

The first point to tackle was the data schema to represent the booking of the desks. We thus created a flat structure, where the main keys were:

  • date
  • room
  • user

Here’s the schema that we used:
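
A sketch of the shape that each booking entry takes under this structure (field names follow the keys above, values are illustrative):

    // illustrative shape of a single booking document
    interface Booking {
      date: string;   // e.g. "2020-09-14"
      room: string;   // e.g. "open-space-1"
      user: string;   // e.g. the user's email or uid
    }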

We then started creating components and services by using the tools that Angular provides, for instance its CLI commands.

Our choice not to use a state management library like NgRx was dictated by the fact that this was a rather simple project with a limited number of components.

The tasks related to the daily view were carried out fast and smoothly: everything revolved around the RxJs libraries and the communication with Firebase.

Even the real-time update of the data from Firestore was great: it was so easy to implement...like magic! The implementation of the weekly view was a bit more challenging, but we managed to carry it out using our components.
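
As an illustration of that "magic", here is a sketch of how a real-time query can look with AngularFire and RxJS (the collection name, field names and service shape are assumptions, not the project's actual code):

    import { Injectable } from "@angular/core";
    import { AngularFirestore } from "@angular/fire/firestore";
    import { Observable } from "rxjs";

    interface Booking { date: string; room: string; user: string; }

    @Injectable({ providedIn: "root" })
    export class BookingService {
      constructor(private readonly firestore: AngularFirestore) {}

      // emits the bookings of a given day and keeps emitting on every change in Firestore
      bookingsForDay(date: string): Observable<Booking[]> {
        return this.firestore
          .collection<Booking>("bookings", ref => ref.where("date", "==", date))
          .valueChanges();
      }
    }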

The last part of the project covered the styling aspects: we decided to use Amber (Bitrock's design system) as a reference, in order to create a web app with the company “look and feel”.


Conclusions

This project was the perfect playground for those situations in which a sudden problem needs to be solved in a very short amount of time. During its different steps we had the opportunity to reinforce our team-work spirit, as well as to develop a proactive attitude. Everything was designed, delivered and implemented very well thanks to the effort of the team as a whole, and not just of individuals.

Bitrock Rooms is now used every day by the Bitrock Team as one of the solutions to face Covid-19 challenges, helping to create a safe working environment and delivering a smart and smooth experience to users.


Bitrock Rooms' daily view interface:

Bitrock Rooms' weekly view interface:


Bitrock Rooms' mobile view interface:



Authors: Marco Petreri, UX/UI Engineer @Bitrock - Yi Zhang, UX/UI Engineer @Bitrock


React Bandersnatch Experiment

Getting Close to a Real Framework

Huge success for Claudia Bressi (Bitrock Frontend developer) at Byteconf React 2020, the annual online conference with the best React speakers and teachers in the world.

During the event, Claudia had the opportunity to talk about her experiment called “React Bandersnatch” (the name comes from one of the episodes of the Black Mirror TV series, where freedom is represented as a sort of well-designed illusion).

The goal of this contribution is to give anyone who could not join the virtual event the chance to delve into her experiment and main findings, which represent a high-value contribution to the whole front-end community.

Here are Claudia's words, describing the project in detail.

(Claudia): The project starts with one simple idea: what if React was a framework, instead of a library to build user interfaces?
For this project, I built some real applications using different packages that are normally available inside a typical frontend framework. I measured a few major web application metrics and then compared the achieved results.

The experiment’s core was the concept of framework, i.e. a platform where it is possible to find ready components and tools that can be used to design a user interface, without the need to search for external dependencies.
Thanks to frameworks, you just need to add a proper configuration code and then you’re immediately ready to go and code whatever you want to implement. Developers often go for a framework because it’s so comfortable to have something ready and nothing to choose.

Moving on to a strict comparison with libraries, frameworks are more opinionated: they can give you rules to follow in your code, and they can solve for you the order of things to be executed behind the scenes. This is the case of lazy loading when dealing with modules in a big enterprise web application.
On the other hand, libraries are more minimalistic: they give you only the necessary to build applications. But, at the same time, you have more control and freedom to choose whatever package in the world.

However, this can lead sometimes to bad code: it is thus important to be careful and follow all best practices when using libraries.


The Project

As initial step of my project, I built a very simple web application in React in order to implement a weekly planner. This consisted of one component showing the week, another one showing the details of a specific day, and a few buttons to update the UI, for instance for adding events and meetings.

I used React (the latest available release) and TypeScript (in a version that finally lets users employ optional chaining). In order to style the application, I used only .scss files, so I included a Sass compiler (while writing the components, I styled them using the CSS Modules syntax).

Then I defined a precise set of metrics, in order to be able to measure the experimental framework. More specifically:

  • bundle size (measured in kilobytes, to understand how much weight each build could reach);
  • loading time (the amount of time needed to load the HTML code of the application);
  • scripting time (the actual time needed to load the JavaScript files);
  • render time (the time needed to render the stylesheets inside the browser);
  • painting time (the time needed to handle media files, such as images or videos).

The packages used for this experiment can be considered as the ingredients of the project. I tried to choose both well-known tools among the React scenario and some packages that are maybe less famous, but which have some features that can improve the overall performance on medium-size projects.

The first implementation can be considered a classic way to approach a project in React: an implementation based on Redux for the state management and on Thunk as the middleware solution. I also used the well-known React Router and, last but not least, the popular Material UI to have some ready-to-use UI components.
The second application can be considered more sophisticated: it was made of Redux combined with the Redux-observable package for handling the middleware part. As for the router, I applied a custom solution, in order to let me play more with React hooks. As icing on the cake, I took the Ant library to build up some UI components.

As for the third application, I blended together the MobX state manager with my previous hook-based custom router, while for the UI part I used the Ant library.
Finally, for the fourth experimental application, I created a rather simple solution with MobX, my hook-based custom router (again) and Material UI to complete the overall framework.


Main Findings

Analyzing these four implementations, what I found out is that, as State manager, Redux has a cleaner and better organized structure due to its functional paradigm. Another benefit is the immutable store that prevents any possible inconsistency when data is updated.

On the other hand, MobX allows multiple stores: this can be particularly useful if you need to reuse some data (let's say the business logic part of your store) in a different - but similar - application with a shareable logic.
Another advantage of MobX is the benefit of having a reactive paradigm that takes care of the data updates, so that you can skip any middleware dependency.

Talking about routing, a built-in solution (such as the react-router-dom package) is pretty easy and smooth to apply in order to set up all the routes we need within the application. A custom solution, however, such as our hooks-based router, lets us keep the final bundle lighter than a classic dancer.
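
To give an idea of what such a custom solution can look like, here is a minimal sketch of a hooks-based router (an illustration, not the actual code used in the experiment):

    import React, { useEffect, useState } from "react";

    // re-renders whenever the browser history changes
    function useCurrentPath(): string {
      const [path, setPath] = useState(window.location.pathname);

      useEffect(() => {
        const onPopState = () => setPath(window.location.pathname);
        window.addEventListener("popstate", onPopState);
        return () => window.removeEventListener("popstate", onPopState);
      }, []);

      return path;
    }

    // maps the current path to one of the provided components
    export function Router({ routes }: { routes: Record<string, React.FC> }) {
      const path = useCurrentPath();
      const Page = routes[path] ?? routes["/"];
      return <Page />;
    }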

Moving on to the UI side of the framework, we can say that Material is a widespread web styling paradigm: its rules are easy to apply and the result is clean, tidy and minimalistic.
However, from my personal point of view, it is not so ‘elegant’ to pollute Javascript code with CSS syntax – this is why I preferred to keep things separated. I then searched for another UI library and I found Ant, which is written in Typescript with predictable static types – a real benefit for my applications. However, in the end, we can say that MaterialUI is lighter than Ant.
Anyway, both packages allow you to import only the components you actually need for your project. So, in the end, it is a simple matter of taste which library to use for the UI components (anyway: if you're looking more at performance, then go for Material!)


Comparing the Results

As final step, I compared the achieved results for each of the above-mentioned metrics.
From the collected results, it is quite clear that the most performant application is the one built with the lighter bundle size: when using Redux coupled with Thunk, MaterialUI for the UI components and our custom router, the resulting application has a smaller output artifact and optimized values for loading, scripting and render times.

To cut a long story short, the winner of this experiment is application no. 4.

However, for bigger projects the results may vary.


Ideal Plugins

Frameworks, sometimes, offer useful auxiliary built-in plugins, such as the CLI to easily use some commands or automate repetitive tasks – which can in turn improve developers’ life. That’s why some of my preferred tools available for React (which I would include in a React ideal framework scenario) are:

  • the famous React and Redux extensions for Chrome: I found these essential, and as useful as a ruler is for an architect;
  • the ‘eslint-plugin’ specifically written for React, which makes it easier to stay consistent with the rules you want to keep in your code;
  • Prettier, another must-have plugin if you use VS Code, which helps a lot in getting better formatted code;
  • React Sight, a Chrome extension that shows a graph representing the overall structure, in order to analyze your components;
  • last but not least, the popular React extension pack available as a VS Code extension, which offers lots of automated developer actions, such as hundreds of code snippets, IntelliSense and the option of file search within node modules.


If you want to listen to Claudia’s full speech during the event, click here and access the official video recording available on YouTube.


HashiCorp and Bitrock sign Partnership to boost IT Infrastructure Innovation

The product suite of the American leader, combined with the expertise of the Italian system integrator, is now at the service of companies

Bitrock, Italian system integrator specialized in delivering innovation and evolutionary architecture to companies, has signed a high-value strategic partnership with HashiCorp, a market leader in multi-cloud infrastructure automation and member of the Cloud Native Computing Foundation (CNCF).

HashiCorp is well-known in the IT infrastructure environment; their open source tools Terraform, Vault, Nomad and Consul are downloaded tens of millions of times each year and enable organizations to accelerate their digital transformation and adopt a common cloud operating model, based on HashiCorp's portfolio of multi-cloud infrastructure automation products for infrastructure, security, networking, and application automation.

As companies scale and increase in complexity, enterprise versions of these products enhance the open-source tools with features that promote collaboration, operations, governance, and multi-data center functionality. Such companies must also rely on a trusted partner that is able to guide them through the architectural design phase and can grant enterprise-grade assistance when it comes to application development, delivery and maintenance.

Due to the highly technical nature of the HashiCorp portfolio, being a HashiCorp partner means, above all, that the Bitrock DevOps Team has the expertise and know-how required to manage the complexity of large infrastructures. Composed of highly skilled professionals who already hold several “Associate” certifications and attended the first Vault Bootcamp in Europe, Bitrock is proudly one of the most certified HashiCorp partners in Italy. The partnership with HashiCorp represents not only the result of Bitrock's investments in the DevOps area, but also the start of a new journey that will allow large Italian companies to rely on more agile, flexible and secure infrastructure, especially when it comes to the provisioning, protection and management of services and applications across private, hybrid and public cloud architectures.

“We are very proud of this new partnership, which not only rewards the hard work of our DevOps Team, but also allows us to offer Italian and European companies the best tools to evolve their infrastructure and digital services” – says Leo Pillon, Bitrock CEO.

“With its dedication to delivering reliable innovation through the design and development of business-driven IT solutions, Bitrock is an ideal partner for HashiCorp in Italy. We look forward to working closely with the Bitrock team to jointly enable organizations across the country to benefit from a cloud operating model. With Bitrock's expertise around DevOps, we are confident in the results we can jointly deliver to organizations leveraging our suite of products” – says Michelle Graff, Global Partner Chief for HashiCorp.


Bitrock DevOps Team joining HashiCorp EMEA Vault CHIP Virtual Bootcamp

Another great achievement for our DevOps Team: the possibility to take part in the HashiCorp EMEA Vault CHIP Virtual Bootcamp.

The Bootcamp – coming for the first time to the EMEA region – involves the participation of highly skilled professionals who already have experience with Vault and want to become Vault CHIP (Certified HashiCorp Implementation Partner) certified for delivering on Vault Enterprise.

Our DevOps Team will be challenged with a series of highly technical tasks to demonstrate their expertise in the field: a 3-day full-time training that will get them ready for implementation work in customer engagements.

This comes after the great success of last week, which saw our DevOps Team members Matteo Gazzetta, Michael Tabolsky, Gianluca Mascolo, Francesco Bartolini and Simone Ripamonti successfully obtaining HashiCorp Certification as Vault Associate. A source of pride for the Bitrock community and a remarkable recognition of our DevOps Team expertise and know-how worldwide.

With the Virtual Bootcamp, the Team is now ready to raise the bar and take on a new challenge, proving that there's no limit to self-improvement and continuous learning.


HashiCorp EMEA Vault CHIP Virtual Bootcamp

May 5–May 8, 2020

https://www.hashicorp.com/


Turning Data at REST into Data in Motion with Kafka Streams

From Confluent Blog

Another great achievement for our Team: we are now on Confluent Official Blog with one of our R&D projects based on Event Stream Processing.

Event stream processing continues to grow among business cases that have been reliant primarily on batch data processing. In recent years, it has proven especially prominent when the decision-making process must take place within milliseconds (for ex. in cybersecurity and artificial intelligence), when the business value is generated by computations on event-based data sources (for ex. in industry 4.0 and home automation applications), and – last but not least – when the transformation, aggregation or transfer of data residing in heterogeneous sources involves serious limitations (for ex. in legacy systems and supply chain integration).

Our R&D decided to start an internal POC based on Kafka Streams and Confluent Platform (primarily Confluent Schema Registry and Kafka Connect) to demonstrate the effectiveness of these components in four specific areas:

1. Data refinement: filtering the raw data in order to serve it to targeted consumers, scaling the applications through I/O savings

2. System resiliency: using the Apache Kafka® ecosystem, including monitoring and streaming libraries, in order to deliver a resilient system

3. Data update: getting the most up-to-date data from sources using Kafka

4. Optimize machine resources: decoupling data processing pipelines and exploiting parallel data processing and non-blocking I/O in order to maximize hardware capacity

These four areas can impact data ingestion and system efficiency by improving system performance and limiting operational risks as much as possible, which increases profit margin opportunities by providing more flexible and resilient systems.

At Bitrock, we tackle software complexity through domain-driven design, borrowing the concept of bounded contexts and ensuring a modular architecture through loose coupling. Whenever necessary, we commit to a microservice architecture.

Due to their immutable nature, events are a great fit as our unique source of truth. They are self-contained units of business facts and also represent a perfect implementation of a contract amongst components. The Team chose the Confluent Platform for its ability to implement an asynchronous microservice architecture that can evolve over time, backed by a persistent log of immutable events ready to be independently consumed by clients.

This inspired our Team to create a dashboard that uses the practices above to clearly present processed data to an end user—specifically, air traffic, which provides an open, near-real-time stream of ever-updating data.

If you want to read the full article and discover all project details, architecture, findings and roadmap, click here: https://bit.ly/3c3hQfP.


Reactive Corporate Chat

The perfect combination of your daily workflow and data ownership.

Bitrock Reactive Corporate Chat is designed to provide a fully customizable and adaptable multifunctional chat solution that allows enterprises to maintain full ownership and control of their own data.


What are its main functionalities?

Bitrock Reactive Corporate Chat has all the standard functionalities of a chat designed to operate at a professional level:


  • One to one Chat
  • Topic Chat
  • Secret Chat with self-destructing messages
  • VOIP & Video Call
  • File sharing
  • Message editing and deletion
  • “Last seen” as you want it
  • No embarrassing photos
  • Locked chat
  • In app pin code
  • Tailorable Contact List
  • Snooze during weekends/vacations


Who's it for?

  • For those companies who do not want their data to reside in third-party infrastructure (i.e. avoid your data being used for marketing purposes or sold to third parties or competitors)
  • For those companies who need a professional tool for business collaboration (i.e. avoid mixing personal and work communications, and possibly limit the usage of the chat to working days)
  • For those companies who want to integrate a new tool with their own company's workflow tools (i.e. receive real-time notifications about the progress status of a project within the project management group, or discover details and info about a workflow practice directly in the app)


What's inside?

iOS and Android app

Pluggable and event-driven architecture

Kubernetes-ready, so capable of working perfectly both in on-premise and cloud environments


What's next?

  • Web Client in development
  • Supervised Machine Learning in development (through iterative cycles of questions and answers, the knowledge base becomes more autonomous in interacting with users)
  • NLP & ML in development (using domain knowledge and natural language comprehension to analyze and understand user intentions and respond in the most useful way)
  • Analytics Tool in development, in order to activate several actions of intelligence (ie: sentiment analysis, productivity monitoring, ...)
  • Full Chatbot interaction

Confluent Operations Training for Apache Kafka

In this three-day hands-on course you will learn how to build, manage, and monitor clusters using industry best-practices developed by the world’s foremost Apache Kafka experts.

You will learn how Kafka and the Confluent Platform work, their main subsystems, how they interact, and how to set up, manage, monitor, and tune your cluster.

Hands-On Training

Throughout the course, hands-on exercises reinforce the topics being discussed. Exercises include:

  • Cluster installation
  • Basic cluster operations
  • Viewing and interpreting cluster metrics
  • Recovering from a Broker failure
  • Performance-tuning the cluster
  • Securing the cluster

This course is designed for engineers, system administrators, and operations staff responsible for building, managing, monitoring, and tuning Kafka clusters.

Course Prerequisites

Attendees should have a strong knowledge of Linux/Unix, and understand basic TCP/IP networking concepts. Familiarity with the Java Virtual Machine (JVM) is helpful. Prior knowledge of Kafka is helpful, but is not required.



Course Contents


The Motivation for Apache Kafka

  • Systems Complexity

  • Real-Time Processing is Becoming Prevalent

  • Kafka: A Stream Data Platform

Kafka Fundamentals

  • An Overview of Kafka

  • Kafka Producers

  • Kafka Brokers

  • Kafka Consumers

  • Kafka’s Use of ZooKeeper

  • Comparisons with Traditional Message Queues

Providing Durability

  • Basic Replication Concepts

  • Durability Through Intra-Cluster Replication

  • Writing Data to Kafka Reliably

  • Broker Shutdown and Failures

  • Controllers in the Cluster

  • The Kafka Log Files

  • Offset Management

Designing for High Availability

  • Kafka Reference Architecture

  • Brokers

  • ZooKeeper

  • Connect

  • Schema Registry

  • REST Proxy

  • Multiple Data Centers

Managing a Kafka Cluster

  • Installing and Running Kafka

  • Monitoring Kafka

  • Basic Cluster Management

  • Log Retention and Compaction

  • An Elastic Cluster

Optimizing Kafka Performance

  • Producer Performance

  • Broker Performance

  • Broker Failures and Recovery Time

  • Load Balancing Consumption

  • Consumption Performance

  • Performance Testing

Kafka Security

  • SSL for Encryption and Authentication

  • SASL for Authentication

  • Data at Rest Encryption

  • Securing ZooKeeper and the REST Proxy

  • Migration to a Secure Cluster

Integrating Systems with Kafka Connect

  • The Motivation for Kafka Connect

  • Types of Connectors

  • Kafka Connect Implementation

  • Standalone and Distributed Modes

  • Configuring the Connectors

  • Deployment Considerations

  • Comparison with Other Systems


Confluent Developer Training

Building Kafka Solutions

In this three-day hands-on course you will learn how to build an application that can publish data to, and subscribe to data from, an Apache Kafka cluster.

You will learn the role of Kafka in the modern data distribution pipeline, discuss core Kafka architectural concepts and components, and review the Kafka developer APIs. As well as core Kafka, Kafka Connect, and Kafka Streams, the course also covers other components in the broader Confluent Platform, such as the Schema Registry and the REST Proxy.

Hands-On Training

Throughout the course, hands-on exercises reinforce the topics being discussed. Exercises include:

  • Using Kafka’s command-line tools
  • Writing Consumers and Producers
  • Writing a multi-threaded Consumer
  • Using the REST Proxy
  • Storing Avro data in Kafka with the Schema Registry
  • Ingesting data with Kafka Connect

This course is designed for application developers, ETL (extract, transform, and load) developers, and data scientists who need to interact with Kafka clusters as a source of, or destination for, data.

Course Prerequisites

Attendees should be familiar with developing in Java (preferred) or Python. No prior knowledge of Kafka is required.



Course Contents


The Motivation for Apache Kafka

  • Systems Complexity

  • Real-Time Processing is Becoming Prevalent

  • Kafka: A Stream Data Platform

Kafka Fundamentals

  • An Overview of Kafka

  • Kafka Producers

  • Kafka Brokers

  • Kafka Consumers

  • Kafka’s Use of ZooKeeper

  • Kafka Efficiency

Kafka’s Architecture

  • Kafka’s Log Files

  • Replicas for Reliability

  • Kafka’s Write Path

  • Kafka’s Read Path

  • Partitions and Consumer Groups for Scalability

Developing With Kafka

  • Using Maven for Project Management

  • Programmatically Accessing Kafka

  • Writing a Producer in Java

  • Using the REST API to Write a Producer

  • Writing a Consumer in Java

  • Using the REST API to Write a Consumer

More Advanced Kafka Development

  • Creating a Multi-Threaded Consumer

  • Specifying Offsets

  • Consumer Rebalancing

  • Manually Committing Offsets

  • Partitioning Data

  • Message Durability

Schema Management in Kafka

  • An Introduction to Avro

  • Avro Schemas

  • Using the Schema Registry

Kafka Connect for Data Movement

  • The Motivation for Kafka Connect

  • Kafka Connect Basics

  • Modes of Working: Standalone and Distributed

  • Configuring Distributed Mode

  • Tracking Offsets

  • Connector Configuration

  • Comparing Kafka Connect with Other Options

Basic Kafka Installation and Administration

  • Kafka Installation

  • Hardware Considerations

  • Administering Kafka

Kafka Streams

  • The Motivation for Kafka Streams

  • Kafka Streams Fundamentals

  • Investigating a Kafka Streams Application


Software Development @Bitrock

An interview with Marco Stefani (Head of Software Engineering)

We met up with Marco Stefani, Head of Software Engineering at Bitrock, in order to understand his point of view and vision related to software development today. A good opportunity to explore the key points of Bitrock technology experience, and to better understand the features that a developer must have to become part of this team.

Marco Stefani (Head of Engineering @Bitrock) interviewed by Luca Lanza (Corporate Strategist @Databiz Group).


1) How has the approach to software development evolved over time, and what does it mean to be a software developer today in this company?

When I started working, in the typical enterprise environment the developer was required to code all day long, test his work a little bit, and then pass it to somebody else who would package and deploy it. The tests were done by some other unit, and if some bug was found it was reported back to the developer. No responsibility was taken by developers after they thought the software was ready ("It works on my machine"). Releases to production were rare and painful, because nobody was really in charge. In the last years we have seen a radical transformation in the way of working in this industry, thanks to the Agile methodologies and practices. XP, Scrum and DevOps shifted the focus of developers on quality, commitment and control of the whole life cycle of software: developers don't just write source code, they own the code. At Bitrock developers want to own the code.

2) What are the strengths of your Unit today? And which are the differentiating values?

Developers at Bitrock range from highly experienced to just graduated. Regardless of the seniority, they all want to improve their skills in this craft. We work both with old-style corporates and with innovative companies, and we are always able to provide useful contributions, not just in terms of lines of code, but also with technical leadership. Despite the fact that we can manage essentially any kind of software requirement, as a company our preference goes to functional programming and event-based, microservices architectures. We believe that in complex systems, these two paradigms are key to creating the most flexible and manageable software: something that can evolve as easily as possible with the new and often unforeseen needs of the future.

3) What challenges are you prepared to face with your Unit, and with which approach?

Today's expectations of end users are that software applications must be always available and fast; the expectations of our customers are that the same system must be as economical as possible, in terms of development and maintainability, but also in terms of operational costs. That means that we need to build systems that are responsive, resilient and elastic. If you throw in asynchronous messages as the means of communication between components, then you get a "reactive system" as defined by the Reactive Manifesto.

(Discover Reactive Manifesto at: https://www.reactivemanifesto.org/)

Modern software systems development requires a set of skills that are not just technical, but also involve understanding the underlying business and organisation: if we want to model the messages (the events, actually) correctly, or want to split the application into meaningful and tractable components, we need to understand the business logic and processes; and the definition of the components will have an impact on how the work and the teams are structured. Designing a software architecture requires tools that are able to connect all these different levels. Best practices, design and architectural patterns must be in the developer's toolbox.

4) What skills should a new member of your team have?

Software development is a very complex and challenging discipline. Technical skills need to be improved and change all the time; keeping up with technology advances is a job in itself, and often is not enough. At Bitrock we believe in continuous learning, so it is important for new members to understand that what they know at the time of the interview is just the first step of a never-ending process. Working in a consulting company adds even more challenges, because it requires constant adaptation to new environments and ways of working that are dictated by the customers. The payoff is that both technical and soft skills can improve in a much shorter time, compared to a "comfortable" product company.
