Last month we had the chance to attend the amazing Kafka Summit 2022 event organized by Confluent, one of Bitrock’s key Technology Partners.

Over 1500 people attended the event, which took place at the O2 in east London over two days of workshops, presentations, and networking.

There was plenty of news about Kafka, the Confluent Platform, Confluent Cloud, and the ecosystem as a whole. It was an incredible opportunity to meet so many enthusiasts of this technology and discuss what is currently happening and what is on the radar for the near future.

Modern Data Flow: Data Pipelines Done Right

The opening keynote of the event was hosted by Jay Kreps (CEO @ Confluent). The main topic (no pun intended :D) of the talk revolved around modern data flow and the growing need to process and move data in near real time.

From healthcare to grocery delivery, many of the applications and services we use every day are based on streaming data: in this scenario, Kafka stands out as one of the main and most compelling technologies. The growing interest in Kafka is confirmed by the number of organizations currently using it (more than 100,000 companies) and by the amount of interest and support the project is receiving. The community is growing year after year: Kafka meetups are very popular, and people express a lot of interest in the technology, as proved by the countless questions asked daily on Stack Overflow and the large number of Jira tickets opened on the Apache Kafka project.

Of course, this success is far from accidental: if it is true that Kafka is a perfect fit for the requirements of modern architectures, it is also important to remember the many improvements introduced in the Kafka ecosystem that helped build its reputation as a very mature, reliable tool when it comes to building fast, scalable, and correct streaming applications and pipelines.

This can be seen, for instance, in the new features introduced in Confluent Cloud (the Confluent solution for managed Kafka) to enhance the documentation and monitoring of the streaming pipelines running in the environment: the new Stream Catalog and Lineage system. These two features provide an easy way to identify and search the different resources and data available in the environment, and to see how this data flows inside the system, improving the governance and monitoring of the platform.

Kafka Summit 2022 - Keynote (London O2)

The Near Future of Kafka: Upcoming Features

Among the numerous upcoming features presented during the event, there are some that we really appreciated and had been waiting on for quite some time.

One of these is KIP-516, which introduces topic IDs to uniquely identify topics. As you may know, since the very beginning (and this still holds today) the identifier for a topic has been its name. This has some drawbacks, such as the fact that a topic cannot be renamed (for instance, when you would like to update your naming strategy), since doing so would require deleting and recreating the topic, migrating its whole content, and updating all the producers and consumers that refer to it. An equally annoying issue arises when you want to delete a topic and then recreate another one with the same name, with the goal of dropping its content and creating the new one with different configurations. In this scenario too, we can currently face issues, since Kafka will not delete the topic immediately: it schedules a deletion that needs to be spread through the cluster, with no certainty about when the operation will actually be completed. This makes the operation, as of today, not automatable (our consultants have often faced this limitation in client projects).
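To make the race concrete, here is a minimal sketch of such a "drop and recreate" step using the kafkajs client (our choice for illustration; the KIP itself is client-agnostic, and topic name and configs are made up):

```typescript
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "topic-reset-demo", brokers: ["localhost:9092"] });
const admin = kafka.admin();

// Naive "reset" of a topic: drop it and recreate it with a different config.
// Without topic IDs (pre KIP-516) this is racy: deleteTopics only *schedules*
// the deletion, with no guarantee about when it completes cluster-wide.
async function resetTopic(topic: string): Promise<void> {
  await admin.connect();
  try {
    await admin.deleteTopics({ topics: [topic] });

    // The recreate below may fail, or collide with the half-deleted topic,
    // if the brokers have not finished propagating the deletion yet.
    await admin.createTopics({
      topics: [
        {
          topic,
          numPartitions: 6,
          configEntries: [{ name: "retention.ms", value: "86400000" }],
        },
      ],
    });
  } finally {
    await admin.disconnect();
  }
}

resetTopic("orders").catch(console.error);
```

With KIP-516, the recreated topic gets a fresh ID even though it reuses the name, so brokers and clients can no longer confuse it with its half-deleted predecessor.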

The second long-awaited feature is the possibility to run Kafka without Zookeeper. At first, it was very useful and practical to take advantage of the distributed configuration management capabilities provided by Zookeeper (especially important in processes like controller election or partition leader election). Over the past years, however, Kafka has incorporated more and more of these functionalities, so maintaining a Zookeeper cluster on top of the Kafka one now feels like an unnecessary effort, risk, and cost. As of today, this feature is not yet production-ready, but we can say that it's pretty close: Confluent has shared the plan, and we are all waiting for this architecture simplification to arrive.

The third upcoming feature that we found extremely valuable is the introduction of modular topologies for ksqlDB. ksqlDB is relatively recent in the Kafka ecosystem, but it has good momentum given its capability to express stream transformations with minimal effort and just an SQL-like command, without the need to create dedicated Kafka Streams applications, which require a good amount of boilerplate that then has to be maintained.

ksqlDB will not be able to replace hand-written Kafka Streams applications in the most complex cases but, for a good number of them, it will be an excellent solution. The introduction of modular topologies will simplify the management of the streams inside ksqlDB and improve its scalability (which is currently limited in some scenarios).
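For context, this is the kind of one-statement transformation we mean. The sketch below submits it through ksqlDB's documented REST endpoint (the stream, topic, and server address are illustrative assumptions):

```typescript
// Minimal sketch: create a derived stream with a single ksqlDB statement,
// instead of writing and maintaining a dedicated Kafka Streams application.
// Assumes a ksqlDB server on localhost:8088 and an existing `orders` stream.
const statement = `
  CREATE STREAM big_orders AS
    SELECT order_id, customer_id, total
    FROM orders
    WHERE total > 1000
    EMIT CHANGES;
`;

async function submitKsql(ksql: string): Promise<void> {
  const response = await fetch("http://localhost:8088/ksql", {
    method: "POST",
    headers: { "Content-Type": "application/vnd.ksql.v1+json" },
    body: JSON.stringify({ ksql, streamsProperties: {} }),
  });
  if (!response.ok) {
    throw new Error(`ksqlDB returned ${response.status}: ${await response.text()}`);
  }
  console.log(await response.json());
}

submitKsql(statement).catch(console.error);
```

The equivalent Kafka Streams application would need its own project, topology, serde configuration, and deployment.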

Our Insights from Breakout Sessions & Lightning Talks

The inner beauty of tech conferences lies in the talks, and Kafka Summit was no different!

During the event, indeed, it was not only the feature announcements that caught our attention, but also what was presented during the various breakout sessions and talks: an amazing variety of topics gave us plenty of opportunities to dig deeper into the Kafka world.

One of the sessions that we particularly enjoyed was, for sure, the one led by New Relic (“Monitoring Kafka Without Instrumentation Using eBPF”). The contribution focused on an interesting way of monitoring Kafka and Kafka-based applications using eBPF, without the need for instrumentation. The speaker, Antón Rodríguez, ran a cool demo of Pixie, in which it was very easy to see what is going on with our applications. It was also easy to get a graphical representation of the actual topology of the streams, with all the links from producers to topics and from topics to consumers, making it easy to answer questions like “Who is producing to topic A?” or “Who is consuming from topic B?”.

Another session that we particularly enjoyed was the talk by LinkedIn (“Geo-replicated Kafka Streams Apps”): Ryanne Dolan outlined some strategies to deal with geo-replicated Kafka topics, in particular for Kafka Streams applications. Ryanne gave some precious tips on how to manage the replication of Kafka topics in a disaster recovery cluster to guarantee high availability in case of failure, and on how to develop Kafka Streams applications that work almost transparently in both the original cluster and the DR one. The talk was also a great opportunity to highlight the high scalability of Kafka in a multi-datacenter scenario, where different clusters can coexist in a sort of layered architecture: a scalable ingestion layer that fans data out to different geo-replicated clusters, transparently for the Kafka Streams applications.

Conclusions

Undoubtedly, the event was a huge success, bringing the Apache Kafka community together to share best practices, learn how to build next-generation systems, and discuss the future of streaming technologies.

For us, this experience was a blend of innovation, knowledge, and networking: all the things we had missed from in-person conferences were finally back. It was impressive to see people interacting with each other after two years of social distancing, and we could really feel that “sense of community” that online events can only partially deliver.

If you want to know more about the event and its main topics - from real-time analytics to machine learning and event streaming - be sure to also check the dedicated blog post by our sister company Radicalbit. You can read it here.

Authors: Simone Esposito, Software Engineer @ Bitrock - Luca Tronchin, Software Engineer @ Bitrock


A Joint Event from Bitrock and HashiCorp

Last week we hosted an exclusive event in Milan dedicated to the exploration of modern tools and technologies for the next-generation enterprise.
The first event of its kind, it was held in collaboration with HashiCorp, the US market leader in multi-cloud infrastructure automation, following the partnership we signed in May 2020.

HashiCorp's open-source tools Terraform, Vault, Nomad and Consul enable organizations to accelerate their digital evolution, as well as adopt a common cloud operating model for infrastructure, security, networking, and application automation.
As companies scale and increase in complexity, enterprise versions of these products enhance the open-source tools with features that promote collaboration, operations, governance, and multi-data center functionality.
Organizations must also rely on a trusted partner that can guide them through the architectural design phase and provide enterprise-grade assistance when it comes to application development, delivery, and maintenance. And that’s exactly where Bitrock comes into play.

During the conference session, the speakers had the chance to describe to the audience how large companies can rely on a more agile, flexible, and secure infrastructure thanks to HashiCorp’s suite and Bitrock’s expertise and consulting services, especially when it comes to the provisioning, protection, and management of services and applications across private, hybrid, and public cloud architectures.

“We are ready to offer Italian and European companies the best tools to evolve their infrastructure and digital services. By working closely with HashiCorp, we jointly enable organizations to benefit from a cloud operating model.” – said Leo Pillon, Bitrock CEO.

After the keynotes, the event continued with a pleasant dinner & networking night at the fancy restaurant by Cascina Cuccagna in Milan.
Take a look at the pictures below to see how the event unfolded, and keep following us on our blog and social media channels to discover what other incredible events we have in store!

Corporate Event

Last week our Team gathered for our first corporate event of 2021, which turned out to be a great success for at least three different reasons.

To begin with, this was the first live event after the lengthy Covid-19 emergency. With proper precautions and respecting social distancing norms, we were able to meet in person at the event location - a cool and fancy restaurant in Milan - laughing, eating, and drinking together (as every proper event requires).

This occasion allowed many people to finally get together face to face: as we all know, seeing each other via a computer screen may be fun and necessary these days, but meeting colleagues in the “real” world, shaking hands, and sharing laughs and jokes is another story!

Secondly, this was the first official Fortitude Group event, with all team members from Bitrock, Radicalbit and ProActivity participating. A great opportunity to mark the new Fortitude era, after the 2021 Group rebranding.

Last but not least, events of this kind are also important because many colleagues who seldom have the chance to meet, due to being allocated to different projects or based in different locations, can finally spend some time together. During this evening, we finally had all our people from Treviso, Lugano, Milan (and many other cities around Italy) together in one spot.

The event started with a welcome aperitif followed by a tasty dinner (typical Milanese cuisine...what else?!). Our CEO Leo Pillon took the chance to greet all participants and deliver a brief talk, addressing the challenges this period has meant for the Group, but also all the great results and success we were able to achieve while working remotely. It is a distinct corporate culture, a sense of togetherness and a clear direction that have fuelled the passion emerging in our daily work.

Curious to know more about the Bitrock world? Look at the pics below to get a taste of our event, and visit our Instagram page to discover much more!

We are now ready to start planning our next big event. Will you join us? 🙂

React Bandersnatch Experiment

Getting Close to a Real Framework

Huge success for Claudia Bressi (Frontend Developer @ Bitrock) at Byteconf React 2020, the annual online conference featuring some of the best React speakers and teachers in the world.

During the event, Claudia had the opportunity to talk about her experiment called “React Bandersnatch” (the name comes from an episode of the Black Mirror TV series, where freedom is represented as a sort of well-designed illusion).

The goal of this contribution is to give anyone who could not join the virtual event the chance to delve into her experiment and main findings, which represent a high-value contribution to the whole front-end community.

Here are Claudia’s words, describing the project in detail.

(Claudia): The project starts with one simple idea: what if React was a framework, instead of a library to build user interfaces?
For this project, I built some real applications using different packages that are normally available inside a typical frontend framework. I measured a few major web application metrics and then compared the results.

The experiment’s core was the concept of a framework, i.e. a platform where you can find ready-made components and tools to design a user interface, without the need to search for external dependencies.
Thanks to frameworks, you just need to add the proper configuration code and then you’re immediately ready to code whatever you want to implement. Developers often go for a framework because it’s so comfortable to have everything ready and nothing to choose.

Moving on to a strict comparison with libraries, frameworks are more opinionated: they give you rules to follow in your code, and they can work out for you the order in which things are executed behind the scenes. This is the case with lazy loading when dealing with modules in a big enterprise web application.
On the other hand, libraries are more minimalistic: they give you only what is necessary to build applications. But, at the same time, you have more control and the freedom to choose whatever package in the world.

However, this can sometimes lead to bad code: it is thus important to be careful and follow all best practices when using libraries.


The Project

As the initial step of my project, I built a very simple web application in React implementing a weekly planner. It consisted of one component showing the week, another one showing the details of a specific day, and a few buttons to update the UI, for instance for adding events and meetings.

I used React (the latest available release) and TypeScript (in a version that finally lets users employ optional chaining). In order to style the application, I used only .scss files, so I included a Sass compiler (btw, while writing the components, I styled them using the CSS modules syntax).
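For illustration, here is a minimal sketch of what such a planner might look like (a reconstruction from the description above, not the original project code; it assumes a bundler configured for SCSS modules):

```tsx
import React, { useState } from "react";
import styles from "./Planner.module.scss"; // assumes SCSS-modules tooling

// Illustrative domain model for the planner.
interface PlannerEvent {
  day: string; // e.g. "Monday"
  title: string;
}

const WEEK = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"];

export function WeeklyPlanner() {
  const [events, setEvents] = useState<PlannerEvent[]>([]);
  const [selectedDay, setSelectedDay] = useState(WEEK[0]);

  const addMeeting = () =>
    setEvents((prev) => [...prev, { day: selectedDay, title: "New meeting" }]);

  return (
    <div className={styles.planner}>
      {/* One component showing the week... */}
      <nav>
        {WEEK.map((day) => (
          <button key={day} onClick={() => setSelectedDay(day)}>
            {day}
          </button>
        ))}
      </nav>
      {/* ...another one showing the details of a specific day... */}
      <section>
        <h2>{selectedDay}</h2>
        <ul>
          {events
            .filter((e) => e.day === selectedDay)
            .map((e, i) => (
              <li key={i}>{e.title}</li>
            ))}
        </ul>
      </section>
      {/* ...and a few buttons to update the UI. */}
      <button onClick={addMeeting}>Add meeting</button>
    </div>
  );
}
```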

Then I defined a precise set of metrics, in order to be able to measure the experimental framework (a sketch of how such timings can be sampled follows the list). More specifically:

  • bundle size (measured in kilobytes, to understand how much weight each build could reach);
  • loading time (the amount of time needed to load the HTML code of the application);
  • scripting time (the actual time needed to load the JavaScript files);
  • render time (the time needed to render the stylesheets inside the browser);
  • painting time (the time for handling media files, such as images or videos).
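The talk does not specify how these timings were captured; one plausible way to sample them (an assumption on our side, not necessarily the tooling used in the experiment) is the browser’s Performance API:

```typescript
// Rough sketch of sampling load-related timings in the browser.
// The mapping to the metrics above is approximate.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

const loadingTime = nav.responseEnd - nav.startTime; // HTML fetched
const scriptingTime = nav.domContentLoadedEventEnd - nav.responseEnd; // JS parsed/run
const renderTime = nav.loadEventEnd - nav.domContentLoadedEventEnd; // render to full load

// First paint / first contentful paint approximate the "painting" metric.
const paints = performance.getEntriesByType("paint");

console.log({ loadingTime, scriptingTime, renderTime, paints });
```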

The packages used for this experiment can be considered the ingredients of the project. I tried to choose both well-known tools from the React scenario and some packages that are maybe less famous, but which have features that can improve the overall performance of medium-size projects.

The first implementation can be considered the classic way to set up a React project: an implementation based on Redux for the state management and on Thunk as the middleware solution. I also used the well-known React router and, last but not least, the popular Material UI to have some ready-to-use UI components.
The second application can be considered more sophisticated: it was made of Redux combined with the Redux-observable package for handling the middleware part. As for the router, I applied a custom solution, in order to let me play more with React hooks. As the icing on the cake, I took the Ant library to build up some UI components.

As for the third application, I blended together the MobX state manager with my previous hook-based custom router, while for the UI part I used the Ant library.
Finally, for the fourth experimental application, I created a rather simple solution with MobX, my hook-based custom router (again) and Material UI to complete the overall framework.


Main Findings

Analyzing these four implementations, what I found out is that, as a state manager, Redux has a cleaner and better organized structure due to its functional paradigm. Another benefit is the immutable store, which prevents any possible inconsistency when data is updated.

On the other hand, MobX allows multiple stores: this can be particularly useful if you need to reuse some data (let’s say the business logic part of your store) in a different, but similar, application with shareable logic.
Another advantage of MobX is the benefit of having a reactive paradigm that takes care of data updates, so that you can skip any middleware dependency.
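As a sketch of what that looks like (illustrative, not the experiment’s actual store), a MobX store is just an annotated class, with reactivity replacing the middleware layer. This uses the makeAutoObservable helper from MobX 6; the 2020 experiment may well have used the older decorator-based API:

```typescript
import { makeAutoObservable } from "mobx";

// Illustrative planner store: MobX tracks mutations and notifies observers,
// so no Thunk/redux-observable middleware layer is needed for updates.
class PlannerStore {
  events: { day: string; title: string }[] = [];

  constructor() {
    makeAutoObservable(this);
  }

  addEvent(day: string, title: string) {
    this.events.push({ day, title });
  }

  get eventCount() {
    return this.events.length;
  }
}

// Multiple stores can coexist, and the shared logic can be reused
// across similar applications.
export const plannerStore = new PlannerStore();
export const uiStore = new PlannerStore();
```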

Talking about routing, a built-in solution (such as the react-router-dom package) is pretty easy and smooth to apply in order to set up all the routes we need within the application. A custom solution, however, such as our hooks-based router, lets us keep the final bundle lighter than a classical dancer.
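A hooks-based router of this kind can indeed be tiny. The following sketch (illustrative, not the experiment’s code) builds one on the History API:

```tsx
import React, { useEffect, useState } from "react";

// Minimal hooks-based router: a fraction of react-router-dom's footprint,
// at the cost of features like nested routes and route params.
function usePath(): string {
  const [path, setPath] = useState(window.location.pathname);

  useEffect(() => {
    const onPopState = () => setPath(window.location.pathname);
    window.addEventListener("popstate", onPopState);
    return () => window.removeEventListener("popstate", onPopState);
  }, []);

  return path;
}

export function navigate(to: string) {
  window.history.pushState(null, "", to);
  // pushState does not fire popstate, so notify listeners manually.
  window.dispatchEvent(new PopStateEvent("popstate"));
}

// Usage: <Router routes={{ "/": <Home />, "/about": <About /> }} />
export function Router({ routes }: { routes: Record<string, React.ReactNode> }) {
  const path = usePath();
  return <>{routes[path] ?? <p>Not found</p>}</>;
}
```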

Moving on to the UI side of the framework, we can say that Material is a widespread web styling paradigm: its rules are easy to apply, and the result is clean, tidy, and minimalistic.
However, from my personal point of view, it is not so ‘elegant’ to pollute JavaScript code with CSS syntax - this is why I preferred to keep things separated. I then searched for another UI library and found Ant, which is written in TypeScript with predictable static types - a real benefit for my applications. In the end, however, Material UI is lighter than Ant.
Anyway, both packages allow you to import only the components you actually need for your project. So, in the end, which library to use for the UI components is a simple matter of taste (though if you care more about performance, go for Material!).
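For instance (package names as of that era, given as an assumption), per-component imports keep the bundle from pulling in the whole library:

```typescript
// Pull in single components rather than entire UI libraries.
// Material-UI (v4-era package name):
import Button from "@material-ui/core/Button";
// Ant Design (tree-shakable named imports):
import { DatePicker } from "antd";
```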


Comparing the Results

As the final step, I compared the results achieved for each of the above-mentioned metrics.
From the collected results, it is quite clear that the most performant application is the one with the lightest bundle size: in particular, when using Redux coupled with Thunk, Material UI for the UI components, and our custom router, the resulting application has a smaller output artifact and optimized values for loading, scripting, and render times.

To cut a long story short, the winner of this experiment is application no. 4.

However, for bigger projects the results may vary.


Ideal Plugins

Frameworks sometimes offer useful auxiliary built-in plugins, such as a CLI to easily run commands or automate repetitive tasks, which can in turn improve developers’ lives. That’s why some of my preferred tools available for React (which I would include in an ideal React framework scenario) are:

  • the famous React and Redux extensions for Chrome: I found these as essential and useful as a ruler for an architect;
  • the ‘eslint-plugin’ specifically written for React, which makes it easier to stay consistent with the rules you want to keep in your code;
  • Prettier, another must-have React plugin if you use VS Code, which helps a lot in getting better formatted code;
  • React Sight, a Chrome extension that shows a graph representing the overall structure of your application, in order to analyze your components;
  • last but not least, the popular React extension pack available as a VS Code extension, which offers lots of automated developer actions, such as hundreds of code snippets, IntelliSense, and the option of file search within node modules.


If you want to listen to Claudia’s full speech during the event, click here and access the official video recording available on YouTube.

Bitrock DevOps Team joining HashiCorp EMEA Vault CHIP Virtual Bootcamp

Another great achievement for our DevOps Team: the opportunity to take part in the HashiCorp EMEA Vault CHIP Virtual Bootcamp.

The Bootcamp – coming for the first time to the EMEA region – involves the participation of highly skilled professionals who already have experience with Vault and want to become Vault CHIP (Certified HashiCorp Implementation Partner) certified for delivering on Vault Enterprise.

Our DevOps Team will be challenged with a series of highly technical tasks to demonstrate their expertise in the field: three full days of training that will get them ready to deliver in a customer engagement.

This comes after the great success of last week, which saw our DevOps Team members Matteo Gazzetta, Michael Tabolsky, Gianluca Mascolo, Francesco Bartolini, and Simone Ripamonti successfully obtain the HashiCorp Vault Associate certification. A source of pride for the Bitrock community and a remarkable recognition of our DevOps Team’s expertise and know-how worldwide.

With the Virtual Bootcamp, the Team is now ready to raise the bar and take on a new challenge, proving that there is no limit to self-improvement and continuous learning.


HashiCorp EMEA Vault CHIP Virtual Bootcamp

May 5–May 8, 2020

https://www.hashicorp.com/

Corporate Event | "Databiz Group"

As per yearly tradition, Databiz Group, the holding group that owns Bitrock, celebrated a corporate event in the marvelous setting of the southern coast of Lake Garda (Desenzano del Garda, BS, Italy).

During this 2-day event, held at the exciting location of Hotel Acquaviva del Garda, the three souls of Databiz Group met up to review H1 2018 achievements and discuss the next strategic operations: Databiz Holding management and staff, Bitrock's team, and Radicalbit's developers had the opportunity to meet and share mutual experiences, achievements, projects, and targets.

A good opportunity to discover and explore the several souls of the group, and to better understand the nature of three different companies united by a single vision and distinguished by different missions. Greeted by a group breakfast, developers, managers, and staff had the opportunity to meet and discuss their work and life experiences. New employees had the chance to meet new colleagues and get a better feel for the group's working environment. Merging different experiences and backgrounds is an important goal.

We aim to design a culture-centered workplace, where diversity and individuality together become sources of value.



The Event | Moments of corporate experience

The first morning was dedicated to corporate speeches and business reports: the opening act was held by CEO Leo Pillon, who presented in detail the group's 2018 vision, next steps, and the future evolution of the holding, consolidating a year of strong and radical transformations.

Next, CSO Lino Zagolin presented a detailed report of recent sales activities, including new scenarios in academy, new business, and partnerships. He also introduced to the team a new professional figure: Luca Lanza, Corporate Strategist and Business Developer. COO Marco Veronese then took the stage to walk all employees through Bitrock's new organization, including the appointment of the new CTO Giampaolo Trapasso and the new Heads of Practices: Salvatore Laisa (Head of Front End), Marco Stefani (Head of Back End), Franco Geraci (Head of DevOps), joining Mirko Lazzarato (Head of Think & Check IT), Paola Casarsa (Head of Make-UX), and Riccardo Pessina (Head of Operation Costs & Planning). He also introduced the new Key Client, Daniele Bergo.

In the second part of the morning, Michele Ridi (Marketing and Sales Manager at Radicalbit) and Roberto Bentivoglio (CTO at Radicalbit) presented a summary of Radicalbit's activities, products, and services, along with the highlights of the first half of the year, also offering a sneak peek at new solutions.

Thereafter, Enrico Sala (CFO, Databiz Group) and Cristina Del Vecchio (Head of Recruiting & Planning) entertained the audience with highly engaging analytics about the group, and delivered an emotional motivational session on corporate values and the companies' missions.

Finally, Leo Pillon closed the activities with an inspirational speech about the company's long-term vision, introducing the inner meaning of the company's claim #lookbeyond and defining himself first of all as a dreamer committed to achieving great results.

With the serious business over, after a brunch all together, the staff devoted themselves to games, water, and fun.

A football match could not be missed... after all, we are an Italian company. While some were relaxing in the sauna and hammam, other brave souls were fighting a deathmatch under the sun.

And then a little surprise: a special guest introduced the attendees to the way of determination, proactivity, and team vision, straight from his glorious sporting experience. The great (and tall!) Riccardo Pittis, renowned basketball champion, delivered a strong and effective speech about losing and winning, team and group, responsibility and change.


Finally, when the lights went down, a long dinner, an open bar, and music carried the guests through the late night.
