Polymorphic Messages in Kafka Streams


Things usually start simple...

You are designing a Kafka Streams application which must read commands and produce the corresponding business events.
The Avro models you are expected to read describe the incoming commands, while the output messages you are required to produce describe the resulting events.

You know you can leverage the sbt-avrohugger plugin to generate the corresponding Scala class for each Avro schema, so that you can focus only on designing the business logic.
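For illustration, the classes generated for a single asset type might look roughly like this (the schema and field names are hypothetical, not the original ones):

    // Hypothetical classes generated by sbt-avrohugger from the Avro schemas
    case class CreateMachineCommand(serialNumber: String, name: String, capacity: Int)
    case class MachineCreatedEvent(serialNumber: String, name: String, capacity: Int, createdAt: Long)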

Since the messages themselves are pretty straightforward, you decide to create a monomorphic function to map properties between each command and the corresponding event.
The resulting topology ends up looking like this:
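A minimal sketch of such a topology, assuming the hypothetical classes above, the kafka-streams-scala DSL and implicit Serdes (including one for String and the Avro-backed ones) already in scope, could be:

    import org.apache.kafka.streams.scala.StreamsBuilder
    import org.apache.kafka.streams.scala.ImplicitConversions._

    // Monomorphic mapping: one command type in, one event type out
    def toEvent(cmd: CreateMachineCommand): MachineCreatedEvent =
      MachineCreatedEvent(cmd.serialNumber, cmd.name, cmd.capacity, System.currentTimeMillis())

    val builder = new StreamsBuilder()
    builder
      .stream[String, CreateMachineCommand]("create-machine-commands")
      .mapValues(toEvent)
      .to("machine-created-events")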


...But then the domain widens

Today new functional requirements have emerged: your application must now handle multiple types of assets, each with its own unique properties.
You are pondering how to implement this requirement and make your application more resilient to further changes in behavior.


Multiple streams

You could split both commands and events into multiple topics, one per asset type, so that the corresponding Avro schema stays consistent and its compatibility is ensured.
This solution, however, would have you replicate pretty much the same topology multiple times, so it’s not recommended unless the business logic has to be customized for each asset type.


“All-and-none” messages

Avro doesn’t support inheritance between records, so any OOP strategy to have assets inherit properties from a common ancestor is unfortunately not viable.
You could however create a “Frankenstein” object with all the properties of each and every asset and fill in only those required for each type of asset.
This is definitely the worst solution from an evolutionary and maintainability point of view.


Union types

Luckily for you, Avro offers an interesting feature named union types: you could express the diversity in each asset’s properties via a union of multiple payloads, still relying on one single message as wrapper.


Enter polymorphic streams

Objects with no shape

To cope with this advanced polymorphism, you leverage the shapeless library, which introduces the Coproduct type, the perfect companion for the Avro union type.
First of all, you update the custom types mapping of sbt-avrohugger, so that it generates an additional sealed trait for each Avro protocol containing multiple records.

The generated command class ends up looking like this:
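The exact shape depends on your schemas; as a sketch, the generated code for a protocol whose records form a union could look something like this (all names are hypothetical):

    // Hypothetical output of sbt-avrohugger for a protocol whose records form a union
    sealed trait AssetPayload
    case class MachinePayload(serialNumber: String, capacity: Int) extends AssetPayload
    case class VehiclePayload(plateNumber: String, seats: Int) extends AssetPayload

    // Wrapper command: common fields plus the union-typed payload
    case class CreateAssetCommand(assetId: String, name: String, payload: AssetPayload)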


Updating the business logic

Thanks to shapeless’ Poly1 trait you then write the updated business logic in a single class:
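A sketch of such a polymorphic function, assuming the hypothetical payload types above and an equally hypothetical event-side ADT, might be:

    import shapeless.Poly1

    // Hypothetical event-side ADT, mirroring the command payloads
    sealed trait AssetEventPayload
    case class MachineCreated(serialNumber: String, capacity: Int) extends AssetEventPayload
    case class VehicleCreated(plateNumber: String, seats: Int) extends AssetEventPayload

    // One case per payload type: the compiler complains if a new asset type is not handled
    object toEventPayload extends Poly1 {
      implicit val atMachine = at[MachinePayload](m => MachineCreated(m.serialNumber, m.capacity))
      implicit val atVehicle = at[VehiclePayload](v => VehicleCreated(v.plateNumber, v.seats))
    }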

Changes to the topology are minimal, as you’d expect:
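The stream still just calls mapValues; only the mapper changes, dispatching on the payload through its Coproduct representation (a sketch, assuming the types above):

    import shapeless._

    // Hypothetical wrapper for the produced events
    case class AssetCreatedEvent(assetId: String, name: String, payload: AssetEventPayload)

    def toEvent(cmd: CreateAssetCommand): AssetCreatedEvent = {
      // Convert the sealed trait to its Coproduct representation, apply the Poly1,
      // then unify the result back to the common event payload type
      val payload: AssetEventPayload =
        Generic[AssetPayload].to(cmd.payload).map(toEventPayload).unify
      AssetCreatedEvent(cmd.assetId, cmd.name, payload)
    }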


A special kind of Serde

Now for the final piece of the puzzle: Serdes. Here the avro4s library comes in, taking Avro GenericRecords above and beyond by converting them to and from the generated case classes.
You create a type class that extends a plain old Serde with a brand new method:
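The original encoding may differ (for instance an implicit class adding a method to Serde); one way to sketch the idea, assuming avro4s' RecordFormat, Confluent's GenericAvroSerde and a reachable Schema Registry, is a helper that derives a Serde[T] for any generated class:

    import com.sksamuel.avro4s.RecordFormat
    import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
    import org.apache.kafka.common.serialization.{Deserializer, Serde, Serdes, Serializer}
    import scala.jdk.CollectionConverters._

    // Builds a Serde[T] for any class with an avro4s RecordFormat, by piggybacking on
    // Confluent's GenericAvroSerde (which talks to the Schema Registry)
    def avroSerde[T](schemaRegistryUrl: String)(implicit format: RecordFormat[T]): Serde[T] = {
      val generic = new GenericAvroSerde()
      generic.configure(Map("schema.registry.url" -> schemaRegistryUrl).asJava, false)

      val serializer: Serializer[T] =
        (topic: String, data: T) => generic.serializer().serialize(topic, format.to(data))
      val deserializer: Deserializer[T] =
        (topic: String, bytes: Array[Byte]) => format.from(generic.deserializer().deserialize(topic, bytes))

      Serdes.serdeFrom(serializer, deserializer)
    }

With a RecordFormat[T] in scope (materialized with RecordFormat[T] for any case class), an implicit Serde can then be summoned per generated class.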

Now each generated class has its own Serde, tailored to the corresponding Avro schema.


Putting everything together

Finally, the main program where you combine all ingredients:
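A condensed sketch of how the pieces might be wired together, reusing the helpers from the previous sketches (topic names, application id and URLs are hypothetical):

    import java.util.Properties
    import com.sksamuel.avro4s.RecordFormat
    import org.apache.kafka.common.serialization.{Serde, Serdes}
    import org.apache.kafka.streams.scala.StreamsBuilder
    import org.apache.kafka.streams.scala.ImplicitConversions._
    import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}

    object PolymorphicAssetsApp extends App {
      val schemaRegistryUrl = "http://localhost:8081"

      implicit val stringSerde: Serde[String] = Serdes.String()
      implicit val commandFormat: RecordFormat[CreateAssetCommand] = RecordFormat[CreateAssetCommand]
      implicit val eventFormat: RecordFormat[AssetCreatedEvent]    = RecordFormat[AssetCreatedEvent]
      implicit val commandSerde: Serde[CreateAssetCommand] = avroSerde[CreateAssetCommand](schemaRegistryUrl)
      implicit val eventSerde: Serde[AssetCreatedEvent]    = avroSerde[AssetCreatedEvent](schemaRegistryUrl)

      val builder = new StreamsBuilder()
      builder
        .stream[String, CreateAssetCommand]("asset-commands")
        .mapValues(toEvent) // the shapeless-powered mapper from the previous sketch
        .to("asset-events")

      val props = new Properties()
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "polymorphic-assets")
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

      val streams = new KafkaStreams(builder.build(), props)
      streams.start()
      sys.addShutdownHook(streams.close())
    }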



Conclusions

When multiple use cases share (almost) the same business logic, you can create a stream processing application with ad-hoc polymorphism and reduce code duplication to a minimum, while making your application even more future-proof.

Bringing GDPR in Kafka with Vault


Part 1: Concepts

GDPR introduced the “right to be forgotten”, which allows individuals to make verbal or written requests for personal data erasure. One of the common challenges when trying to comply with this requirement in an Apache Kafka based application infrastructure is being able to selectively delete all the Kafka records related to one of the application users.

Kafka's data model was never designed to support such selective deletion, so businesses had to find and implement workarounds. At the time of writing, the only ways to delete messages in Kafka are to wait for the message retention to expire or to use compacted topics, which expect tombstone messages to be published; this isn't feasible in all environments and doesn't fit all use cases.

HashiCorp Vault provides Encryption as a Service, and, as it happens, can help us implement a solution without workarounds in either the application code or the Kafka data model.


Vault Encryption as a Service

Vault Transit secrets engine handles cryptographic operations on in-transit data without persisting any information. This allows a straightforward introduction of cryptography in existing or new applications by performing a simple HTTP request.

Vault fully and transparently manages the lifecycle of encryption keys, so neither developers nor operators have to worry about key compliance and rotation, while the securely stored data can always be encrypted and decrypted as long as Vault is accessible.


Kafka Integration

What if, instead of trying to selectively eliminate the data the application is not allowed to keep, we just made sure that the application (or anyone else, for that matter) cannot read the data under any circumstances? This would be equivalent to the physical removal of the data, just as requested by GDPR. Such a result can be achieved by selectively encrypting the information we might want to delete and throwing away the key when deletion is requested.

However, encryption and decryption must be performed transparently to the application, in order to reduce the refactoring and integration effort for each application that uses Kafka, and to unlock this functionality for applications that cannot be adapted at all.

The Kafka APIs support interceptors on message production and consumption, which are the natural place in the chain to leverage Vault's Encryption as a Service. Inside the interceptors, we can perform the required message transformations:

  • before a record is sent to Kafka, the interceptor performs encryption and adjusts the record content with the encrypted data
  • before a record is returned to a consumer client, the interceptor performs decryption and adjusts the record content with the decrypted data


Logical Deletion

Does this allow us to delete all the Kafka messages related to a single user? Yes, and it is really simple. If the encryption key that we use for encrypting data in Kafka messages is different for each of our application’s users, we can go ahead and delete the encryption key to guarantee that it is no longer possible to read the user data.


Replication Outside EU

Given that the sensitive data stored in our Kafka cluster is now encrypted at rest, it is possible to replicate the cluster outside the EU, for example for disaster recovery purposes. The data will only be accessible to those users that have the right permissions to perform the cryptographic operations in Vault.



Part 2: Technicalities

In the previous part we drafted the general idea behind the integration of HashiCorp Vault and Apache Kafka for performing fine-grained encryption at rest of the messages, in order to address GDPR compliance requirements within Kafka. In this part, instead, we take a deep dive into how to bring this idea to life.


Vault Transit Secrets Engine

Vault Transit secrets engine is part of Vault Open Source, and it is really easy to get started with: setting the engine up is just a matter of enabling it and creating some encryption keys.

Crypto operations can be performed just as simply, by providing base64-encoded plaintext data to the encrypt endpoint of a key.

The resulting ciphertext is prefixed with vault:v1:, where v1 indicates the first key version, given that the key has not been rotated yet.

Decryption works the same way: it is just another API call, taking the ciphertext and returning the base64-encoded plaintext.

Integrating Vault's Encryption as a Service within your application is thus really easy and requires little to no refactoring of the existing codebase.


Kafka Producer Interceptor

The Producer Interceptor API can intercept and possibly mutate the records received by the producer before they are published to the Kafka cluster. In this scenario, the goal is to perform encryption within this interceptor, in order to avoid sending plaintext data to the Kafka cluster...

Integrating encryption in the Producer Interceptor is straightforward, given that the onSend method is invoked one message at a time.
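A minimal sketch of such an interceptor in Scala follows; the Vault call is hidden behind a hypothetical VaultTransitClient helper (not part of any official library), and interceptor.vault.key is an equally hypothetical configuration property:

    import java.util.{Base64, Map => JMap}
    import org.apache.kafka.clients.producer.{ProducerInterceptor, ProducerRecord, RecordMetadata}

    // Hypothetical minimal client for Vault's Transit secrets engine: a stand-in for
    // whatever HTTP client you use to call the transit/encrypt and transit/decrypt endpoints
    trait VaultTransitClient {
      def encrypt(key: String, plaintextB64: String): String
      def decryptBatch(key: String, ciphertexts: Seq[String]): Seq[String]
    }
    object VaultTransitClient {
      def fromConfig(configs: JMap[String, _]): VaultTransitClient = ??? // wire your own HTTP client here
    }

    class EncryptingProducerInterceptor extends ProducerInterceptor[String, String] {

      private var vault: VaultTransitClient = _
      private var keyName: String = _

      override def configure(configs: JMap[String, _]): Unit = {
        vault = VaultTransitClient.fromConfig(configs)
        keyName = configs.get("interceptor.vault.key").toString // hypothetical property name
      }

      override def onSend(record: ProducerRecord[String, String]): ProducerRecord[String, String] = {
        // Vault Transit expects base64-encoded plaintext
        val plaintextB64 = Base64.getEncoder.encodeToString(record.value().getBytes("UTF-8"))
        val ciphertext   = vault.encrypt(keyName, plaintextB64) // returns something like "vault:v1:..."
        new ProducerRecord[String, String](record.topic(), record.partition(), record.timestamp(),
          record.key(), ciphertext, record.headers())
      }

      override def onAcknowledgement(metadata: RecordMetadata, exception: Exception): Unit = ()
      override def close(): Unit = ()
    }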


Kafka Consumer Interceptor

The Consumer Interceptor API can intercept and possibly mutate the records received by the consumer. In this scenario, we want to decrypt the data received from the Kafka cluster and return plaintext data to the consumer.

Integrating decryption with the Consumer Interceptor is a bit trickier, because we want to leverage the batch decryption capabilities of Vault in order to minimize the number of Vault API calls.
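A sketch in the same spirit, again leaning on the hypothetical VaultTransitClient above, this time using its batch decrypt helper once per partition:

    import java.util.{List => JList, Map => JMap}
    import scala.jdk.CollectionConverters._
    import org.apache.kafka.clients.consumer.{ConsumerInterceptor, ConsumerRecord, ConsumerRecords, OffsetAndMetadata}
    import org.apache.kafka.common.TopicPartition

    class DecryptingConsumerInterceptor extends ConsumerInterceptor[String, String] {

      private var vault: VaultTransitClient = _ // same hypothetical client as above
      private var keyName: String = _

      override def configure(configs: JMap[String, _]): Unit = {
        vault = VaultTransitClient.fromConfig(configs)
        keyName = configs.get("interceptor.vault.key").toString
      }

      override def onConsume(records: ConsumerRecords[String, String]): ConsumerRecords[String, String] = {
        val grouped: Map[TopicPartition, JList[ConsumerRecord[String, String]]] =
          records.partitions().asScala.map { tp =>
            val partitionRecords = records.records(tp).asScala.toList
            // One batch call to Vault per partition instead of one call per record
            val plaintexts = vault.decryptBatch(keyName, partitionRecords.map(_.value()))
            val decrypted = partitionRecords.zip(plaintexts).map { case (r, value) =>
              // Note: this simple constructor drops timestamp and headers; a real implementation would copy them
              new ConsumerRecord(r.topic(), r.partition(), r.offset(), r.key(), value)
            }
            tp -> decrypted.asJava
          }.toMap
        new ConsumerRecords[String, String](grouped.asJava)
      }

      override def onCommit(offsets: JMap[TopicPartition, OffsetAndMetadata]): Unit = ()
      override def close(): Unit = ()
    }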

Usage

Once you have built your interceptors, enabling them is just a matter of configuring your Producer or Consumer client.

Notice that the key and value serializer classes must be set to StringSerializer, since Vault Transit can only handle strings containing base64 data. The client invoking the Kafka Producer and Consumer APIs, however, is still able to process any supported type of data, according to the serializer or deserializer configured in the interceptor.value.serializer or interceptor.value.deserializer properties.
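A sketch of the producer-side configuration in Scala (property names other than the standard Kafka ones are the hypothetical ones used in the interceptor sketches above):

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}
    import org.apache.kafka.common.serialization.StringSerializer

    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    // The interceptor that encrypts the value before it leaves the client
    props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, classOf[EncryptingProducerInterceptor].getName)
    // Vault Transit works on strings, so the actual serializers must be StringSerializer
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    // Interceptor-specific properties: the value serializer mentioned above and the
    // (hypothetical) name of the per-user Vault key
    props.put("interceptor.value.serializer", classOf[StringSerializer].getName)
    props.put("interceptor.vault.key", "user-123")

    val producer = new KafkaProducer[String, String](props)

The consumer side is configured analogously, with interceptor.classes pointing at the decrypting interceptor and StringDeserializer as key and value deserializers.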


Conclusions

HashiCorp Vault Transit secrets engine is definitely the technological component you may want to leverage when addressing cryptographic requirements in your applications, even when dealing with legacy components. The entire set of capabilities offered by HashiCorp Vault makes it easy to modernize applications from a security perspective, allowing developers to focus on the business logic rather than spending time finding a way to properly manage secrets.



Author: Simone Ripamonti, DevOps Engineer @Bitrock

The JAMStack Proposition

With the surge in popularity of JavaScript frameworks, Node and container technologies, the past years have seen the rise of microservices as the leading pattern in the architecture of distributed applications on the Web, the lingua franca of these applications being, of course, APIs. Developers adopting these modern tools for frontend environments have, however, faced emerging challenges in search engine optimization, content rendering and application serving, compared to the common LAMP stack, where PHP does the bulk of the work and JavaScript only provides the interactions and dynamic elements.


From Client Side to Hybrid

While in the past using frameworks on the client side meant single page applications, using "hacks" such as the hashbang to provide navigation, in recent years the leading JavaScript frameworks have embraced a hybrid approach to rendering, where both the server and the client run the same virtual DOM and reconnect in the browser later, "rehydrating" the application on the client. This way, applications support the common navigation controls of the browser and remain accessible to users with older browsers or even no JavaScript, since the page is readily available from the server. This improves performance on the first load and supports traditional search engine spiders.

However, this meant:

  • having an improved developer experience, as the entire application uses only JavaScript and HTML, with a single code base...
  • ...but Node doesn't actually support the same modules and feature set as a browser
  • taking a hit on a number of metrics, such as Time To First Byte and Time To Interactive, as the code runs on both ends
  • relying on an increasingly complex deployment on the server compared to traditional shared hosting
  • using “brute force” solutions to prerendering and caching applications, such as headless browsers


Limits of CMSs

Many modern web applications are still nothing more than glorified lists of contents, sometimes enabling modest interactions (such as filtering or sorting contents), providing taxonomies and interacting with limited components, such as comment forms or the search bar. As most of the content is static, a complete frontend solution is eventually considered an added cost compared to existing, monolithic CMSs; they still offer big communities, a plethora of themes and plugins, and well-established interfaces for content creators.

Yet, these CMSs still do not provide the same speed or developer experience as applications written for Node, which can be started on any machine with nothing more than the Node runtime and have very fast change cycles. Moreover, they usually have limited support for components, or restrict them to the content side, while requiring the developer to code additional custom parts for the rest of the page, often in a different language than the one spoken by the browser. Interaction is still tacked on, with JavaScript being unable to cross the boundaries of the single static page.

Last but not least, popular CMSs such as WordPress come with a larger attack surface, as they are both incredibly popular and the very same endpoint for both the backend and the frontend: hosted on the same old machine, with the same address for both, and subject to varying degrees of load which might require horizontal scaling, creating issues for the cache.


Enter the Static Sites Generators

Even if CMSs can definitely render pages quickly and avoid the lengthy reconciliation with the browser's context, they still require a server and dedicated support, with a plurality of codebases and a poor experience for developers who do not have everything at hand on the local machine, or who might not even have the required knowledge to deal with all the issues of such a tightly coupled project.

Static websites, instead, have no server loading times; require no session on the server, no instance of Linux running, and no real requirements other than a web server to deliver the resources. They can even live off incredibly cheap storage, such as S3 from Amazon.


A Modern Solution to Static Content

While traditional generators such as Jekyll or Hugo are fast and still a good solution, the new frameworks that have entered the space in recent years (such as Gatsby, Gridsome, Nuxt and Next.js) have taken static sites into the future. Learning the lessons of hybrid applications and SPAs, they rely on improved tooling running on Node; modern web frameworks are now first class, improving both UX and developer experience. They feature:

  • complete SEO support - as pages are just HTML and CSS as before
  • no need for a running service; deploy simply delivers the static files to the CDN
  • improved performance on all metrics, and well engineered solutions for smaller JavaScript bundles and pipelines for other resources
  • the same complex interactions of a SPA, such as transitions and persistent state, and the same tools (like Webpack, Parcel, etc.)
  • support for content from many sources: static files, version repositories, headless CMSs, APIs and more

Frameworks such as Gatsby and Gridsome take a page from the CMS playbook by offering solutions to different needs in the form of themes and plugins, yet retaining a single cohesive codebase with dependencies handled through the very same JavaScript ecosystem, already familiar to developers used to working with modern frameworks. They also come with configurations for older browsers, solving the puzzle of packagers, and simple commands to either build the site or start a development server.

The reduction in complexity is significant. A streamlined frontend solution enables developers to focus on the core experience of users instead of wasting time on configuration. The absence of an actual service running the pages allows the website to run just about anywhere, letting backend developers focus on the business logic, with APIs in between representing the contract between the two sides. DevOps have one less thing to be concerned about. It is the JAMstack: JavaScript, APIs, and markup.


Challenges of the JAM Stack

The massive improvements brought by the stack still come with issues that many of the other solutions don't have, due to the very nature of static content; consequently, its strong points also represent its main pain points.

First of all, static content is never entirely static: content will probably be updated over time and might even be real time. While recreating the bundle every time the content changes is the quickest solution, it takes time, more than real time at least. There have been big improvements in the speed of the tools when generating this bundle: it used to take way longer than today and support only up to a few thousand pages, while today, with solutions such as partial builds, the problem can be mitigated. It is still not real time, though; real-time content can only be handled through dynamic components on the client side.

Secondly, by relying on delivery through services such as Netlify, S3, Vercel and so on, we are leaving it to the middleman to handle security and performance optimization for static files. We can also do it on our own, of course.

Third and last (but also probably the trickiest part), is that by having static files we move the concern for sessions and authentication/authorization to external microservices instead of the server. With careful reliance on Service Workers, and/or solutions such as Firebase, we can solve this. The JAM stack also strongly favours a serverless approach to server interactions: by writing simple functions in Node, to be deployed in the same hands-off approach from the same codebase (possibly even sharing code), we can handle just about everything as before with traditional AJAX requests.

Both the serverless approach and the delivery of static files bring a big reduction in costs compared to deploying virtual servers, as we only use what we need and scale naturally as more resources are required. But rarely accessed contents or functions do require some extra time, as the provider has to "boot up" the context of the functions for us.


A common Use-Case: a Blog and its Pages

Static websites are really suited for delivering the contents of an editorial product. Text and images are usually not updated in real time, and the business requirements are more often aligned with the value proposition of static site generators:

  • A safe environment with a low attack surface for the public.
  • Fast performance on all metrics, to boost SEO and mobile performance.
  • Low costs in development, deployment and maintenance.

This environment has been, for the past decades, very much dominated by WordPress and its themes, which provide a good enough solution for most companies and, since they are so commonly used, editors do not need to relearn how to do things. As WordPress has developed its own API in the last decade, we can rely on it to provide the base for our contents and access the existing ecosystem and know-how, deploying it on a low-cost solution such as WordPress.com or perhaps a small VPS. All we really need is a safe way to get our content from our install to our static site generator, which will generate the pages at build time by calling the APIs. We could just as easily deploy a headless CMS or even a completely custom solution: on the frontend side it doesn't really matter much.

Our choice of static site generator can also be made according to the skills of the developers working on the project. On the React side, both Gatsby and Next.js are very popular solutions, with Gatsby having an already established set of plugins and starters, very similar to WordPress, that can speed up development. On the Vue side, Vuepress and Gridsome are two common solutions: the first is the simpler of the two in terms of features and approach to content (by using Markdown files), while the second is more similar to Gatsby, providing plugins and starters. Both Gridsome and Gatsby, in fact, use GraphQL as a lingua franca for our contents, so that we can integrate many sources and use them in a common way.

Last but not least, we can decide where to deploy our contents. There is a huge number of possibilities, from CDNs to storage (such as S3) to the many services that pride themselves on simplicity, like Netlify and Heroku. In any case, what we really need is a channel to deliver the bundle of contents to our users; whenever we update our contents, we simply call an API to trigger the build process again and reload the files.


An Example with Gatsby

To build an example solution, we're going to use DigitalOcean to host our WordPress installation. Our generator will be Gatsby for this specific example, but the concepts are quite similar for many of them. Note that we will just be using a function to build the pages, but many of these generators offer integration with external CMSs and might use GraphQL and the like to do the queries; this is just a generic example. To begin, we created a droplet on DigitalOcean using their image for WordPress on Ubuntu 18.04. You can find more information about this on their website, as their wizard will do the bulk of the work for you. Don't forget to complete the installation of WordPress itself. For this example we are not going to configure a domain for our install, but you definitely want to use a proper configuration. Many hosting providers also offer simple solutions to host applications such as WordPress, and will do the job nicely.

Now that our WordPress is set up, we can start working on our frontend. First things first, we create the project using the command line interface for Gatsby.

npm install -g gatsby-cli

gatsby new example-wordpress

This creates our base project using the default starter kit from Gatsby. Inside the example-wordpress folder we can find the needed modules already preinstalled, some configuration for code style (with Prettier), and the source code folder (src); the latter contains both the folder for the React components and the folder for the pages. Files inside the pages folder are accessible by default through their filename (for example, page-2 will be located at example.com/page-2).

What we want to do is hook into the build process of Gatsby and generate our pages from the WordPress API. You can find more information about the APIs in the REST API Handbook, but the gist of it is that we request the posts resource using the correct endpoint. You can preview the available resources by going to example.com/wp-json; we will be accessing wp/v2, under which we have the editorial contents, and query for the posts. Our URL will be something like example.com/wp-json/wp/v2/posts.

Now we just have to pull the data inside Gatsby and build the pages. To do this, we open our project and navigate to the gatsby-node.js file that should be in the root. We install and import the node-fetch module, so that we have an easy interface to get our resource:

npm install --save node-fetch

And put our import at the top of the file:

const fetch = require('node-fetch');

Next, we hook into the createPages step of Gatsby. In order to do so, we export an asynchronous method from our file called createPages, which receives an object with the actions available to us and a reporter object that can tell Gatsby if something went wrong. Inside this function, we fetch our posts and create a page for each of them.

Let's create the template page for the blog posts. We create a file in the pages folder named post.js, and access the post data by reading it from the props of our page component.

We should now have a corresponding page in our frontend.

This is of course just the beginning.

  • To host our content online, we could rely on something like Netlify. We just push our project to Github, then add Netlify as an application.
  • To trigger the rebuilding of our contents, we could for example make a call using cURL to our services on the save_post hook.
  • We could build our taxonomy pages, either by using the API from WordPress or the posts JSON. To better integrate it into Gatsby (or Gridsome perhaps), we could add our posts as GraphQL nodes.
  • A common criticism of this kind of solution is that editors don’t really have an idea of how the content will end up looking on the frontend. We can build a simple WYSIWYG editor on the frontend side by relying on a good library like Draft.js. Of course this also requires authentication and so forth. We could also share the same CSS and major HTML between both the WordPress environment and Gatsby.
  • We could lock our WordPress APIs behind simple authentication using either Apache or Nginx, as they're quite common in this kind of setup. Logging in through Node is trivial.


Conclusions

Static site generators enable us to provide a good user experience and good performance, with a bit more effort than using a common monolithic CMS such as WordPress. We can integrate different sources and create very custom solutions using modern scaffolding and tooling. However, it does require quite a bit more effort, and the disconnect between frontend and backend can be a pain point for our editors.



Author: Federico Muzzo, UX/UI Engineer @Bitrock

From Layered to Hexagonal Architecture (Hands On)


Introduction

The hexagonal architecture (also called "ports and adapters") is an architectural pattern for software design, devised in 2005 by Alistair Cockburn.

The hexagonal architecture is allegedly at the origin of the microservices architecture.


What it Brings to the Table

The most widely used service architecture is the layered one. Often, this type of architecture leads the business logic to depend on external contracts (e.g., the database, an external service, and so on). This brings stiffness and coupling to the system, forcing us to recompile the classes that contain the business logic whenever an external API changes.


Loose coupling

In the hexagonal architecture, components communicate with each other using a number of exposed ports, which are simple interfaces. This is an application of the Dependency Inversion Principle (the “D” in SOLID).


Exchangeable components

An adapter is a software component that allows a technology to interact with a port of the hexagon. Adapters make it easy to exchange a certain layer of the application without impacting business logic. This is a core concept of evolutionary architectures.


Maximum isolation

Components can be tested in isolation from the outside environment or you can use dependency injection and other techniques (e.g., mocks, stubs) to enable easier testing.

Contract testing supersedes integration testing for a faster and easier development flow.


The domain at the center

Domain objects can contain both state and behavior. The closer the behavior is to the state, the easier the code will be to understand, reason about, and maintain.

Since domain objects have no dependencies on other layers of the application, changes in other layers don’t affect them. This is a prime example of the Single Responsibility Principle (the “S” in “SOLID”).


How to Implement it

Let's now have a look at what it means to build a project following the hexagonal architecture, to better understand the difference and the benefits in comparison with a more common, plain layered architecture.


Project layout

In a layered architecture project, the package structure is usually organized by layer.

Here we can find a package for each application layer:

  • the one responsible for exposing the service for external communication (e.g., REST APIs);
  • the one where the core business logic is defined;
  • the one with all the database integration code;
  • the one responsible for communicating with other external services;
  • and more...


Layers Coupling

At first glance, this could look like a nice and clean solution to keep the different pieces of the application separated and well organized, but, if we dive a bit deeper into the code, we can find some code smells that should alert us. In fact, after a quick inspection of the core business logic of the application, we immediately find something definitely in contrast with our idea of clean and well defined separation of the various components. The business logic that we'd like to keep isolated from all the external layers clearly references some dependencies from the database and the external service package.

These dependencies imply that, in case of changes in the database code or in the external service communication, we'll need to recompile the main logic and probably change and adapt it, in order to make it compatible with the new database and external service versions. This means we need to spend time on this new integration and test it properly, and during this process we expose ourselves to the risk of introducing bugs.


Interfaces to the Rescue

This is where the hexagonal architecture really shines and helps us avoid all of this. First we need to decouple the business logic from its database dependencies: this can be easily achieved with the introduction of a simple interface (also called “port”) that will define the behavior that a certain database class needs to implement to be compatible with our main logic.
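For instance, a minimal port for the persistence layer might look like this (the Item model and the method names are purely illustrative):

    // Hypothetical domain model
    case class Item(id: String, name: String, price: BigDecimal)

    // Port defined inside the business logic package: the core only knows this contract
    trait ItemRepository {
      def findById(id: String): Option[Item]
      def save(item: Item): Unit
    }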

Then we can use this contract in the actual database implementation to be sure that it's compliant with the defined behavior.
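Continuing the sketch, the adapter implements the port on its side of the boundary, while the business logic depends only on the trait (an in-memory map stands in for the real persistence code):

    // Adapter living in the persistence layer: it implements the port defined by the core
    class InMemoryItemRepository extends ItemRepository {
      private val storage = scala.collection.mutable.Map.empty[String, Item]
      override def findById(id: String): Option[Item] = storage.get(id)
      override def save(item: Item): Unit = storage.update(item.id, item)
    }

    // The business logic receives the port, not a concrete implementation
    class ItemService(repository: ItemRepository) {
      def rename(id: String, newName: String): Option[Item] =
        repository.findById(id).map { item =>
          val renamed = item.copy(name = newName)
          repository.save(renamed)
          renamed
        }
    }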

Now we can come back to our main logic class and, thanks to the changes described above, we can finally get rid of the database dependency and have the business logic completely decoupled from the persistence details.

It's important to note that the new interface we introduced is defined inside the business logic package and, therefore, it’s part of it and not of the database layer. This trick allows us to apply the Dependency Inversion Principle and keep our application core pure and isolated from all the external details.

We can then apply the same approach to the external service dependency and finally clear the whole logic class of all its dependencies from the other layer of the application.


DTO for model abstraction

This already gives us a nice level of separation, but there is still room for improvement. In fact, if you look at the definition of the Database class, you will notice that we are using the same model from our main logic to operate on the persistence layer. While this is not a problem for the isolation of our core logic, it could be a good idea to create a separate model for the persistence layer, so that if we need to make some changes to the structure of the table, for example, we are not forced to propagate those changes to the business logic layer as well. This can be achieved with the introduction of a DTO (Data Transfer Object).

A DTO is nothing more than a new external model with a pair of mapping functions that allow us to transform our internal business model into the external one and the other way around. First of all, we need to define the new private model for our database and external service layers.

Then we need to create a proper function to transform this new database model into the internal business logic model (and vice versa based on the application needs).
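Sticking with the hypothetical Item model above, the persistence-side DTO and its mapping functions could be as simple as:

    // DTO private to the persistence layer: it mirrors the table structure, not the domain
    case class ItemRow(id: String, name: String, priceCents: Long)

    object ItemRow {
      // Mapping functions between the persistence model and the business model
      def fromDomain(item: Item): ItemRow =
        ItemRow(item.id, item.name, (item.price * 100).toLongExact)

      def toDomain(row: ItemRow): Item =
        Item(row.id, row.name, BigDecimal(row.priceCents) / 100)
    }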

Now we can finally change the Database class to work with the newly introduced model and transform it into the logic one when it communicates with the business logic layer.

This approach works very well to protect our logic from external interference, but it has some consequences. The main one is an explosion in the number of models, even though most of the time the models are the same; the other is that the model-transformation logic can be tedious and always needs to be properly tested to avoid errors. One compromise is to start with the business models only (defining them in the correct package) and introduce the external models only when the two diverge.

When to embrace it

Hexagonal architecture is no silver bullet. If you’re building an application with rich business rules that can be expressed in a rich domain model that combines state with behavior, then this architecture really shines because it puts the domain model in the center.

Combine it with microservices architecture and you’ll get a future-proof evolutionary architecture.

An Introduction to "Deno"


What is Deno?

“A secure runtime for JavaScript and TypeScript.” This is the definition contained in the official Deno website.

Before going into details, let’s start by clarifying the two main concepts included in this definition.


What is a runtime system?

As for Deno, we can say that it is what makes JavaScript run outside the browser, adding a series of features that are not available in the JavaScript engine itself.


What is Typescript?

Typescript is a superset of Javascript, which adds a series of features that make the language more interesting. Its main features are:

  • Optional static typing
  • Type inference
  • Improved Classes
  • Interfaces

At this point, you may notice a similarity between Node and our Deno definition, since they seem to do almost the same thing and are both built upon the V8 JavaScript engine.


So Why Deno?

"Deno" - as Ryan Dahl, creator of both Deno and Node, said - "is not by any means a rival of Node". Simply, he was no longer happy with Node and decided to create a new runtime to make up for its "mistakes" and shortcomings (while also adding a bunch of new features).



Getting closer to Deno

Let’s now discover what makes Deno so promising and interesting, along with some key differences with Node.js:


Deno supports out-of-the-box Typescript

Deno can run Typescript code without installing additional libraries, such as ‘ts-node’.

It is possible to create an app.ts file and run it with the simple command "deno run app.ts", without any additional step.


ES Modules

Deno drops CommonJS modules, which are still used in Node.js, and embraces the modern ES modules that are defined as the standard in the JavaScript world and mostly used in front-end development.

Deno borrows from Golang the possibility to import modules directly from a URL.


Security First

Deno implements a philosophy of "least privilege" when it comes to security. To run a script, indeed, you need to add the appropriate flags in order to enable certain permissions.

Here’s the list of flags that can be used:

  • --allow-env: allow environment access
  • --allow-hrtime: allow high-resolution time measurement
  • --allow-net=<allow-net>: allow network access
  • --allow-plugin: allow loading plugins
  • --allow-read=<allow-read>: allow file system read access
  • --allow-run: allow running subprocesses
  • --allow-write=<allow-write>: allow file system write access
  • --allow-all: allow all permissions (same as -A)


Standard Libraries

These libraries are developed and maintained by the core team of Deno.

Many other languages - Python included - share this concept of having a library of reference that is stable and tested by developers who maintain it.

Since Deno is at an initial stage, the list is still short - but certainly there will be further implementations in the future.


Built-in Tools

When it comes to Node.js, if you want specific tools you have to install them manually; furthermore, they are essentially third-party tools, not maintained by the Node team.

Deno, instead, embraces another philosophy: it offers a set of built-in tools to improve development. This creates a standard, which makes Deno less dispersive than the Node ecosystem.

Here’s a partial list of them, along with the link to the relevant documentation for a deeper understanding of the topic:

  • fmt: a built-in code formatter (similar to gofmt in Go)
  • test: runs tests
  • debugger
  • Bundler
  • Documentation Generator
  • Dependency inspector
  • Linter


Compatibility with Browser API

The Deno API was created to be as compatible as possible with the browser API, in order to be able to implement any upcoming feature easily. This is one of the main "issues" that Node has, since it uses an incompatible global namespace ("global" instead of "window"). This is the reason why an API like Fetch has never been implemented in Node.


Style Guide to building a Module (Opinionated Modules)

Unlike Node, Deno has a set of rules that a developer should follow in order to publish a module. This avoids the creation of different ways to reach the same output, thus creating a standard, which is a main principle within the Deno ecosystem.


Where is package.json?

As seen before, there is no package.json in Deno where it is possible to put all the dependencies. The answer, then, is deps.ts.

Deps is used for 2 main reasons:

  • to group all the dependencies needed for the project;
  • to manage versions.

This is a sort of replication of the package.json present in Node.js, but many developers no longer consider it a best practice because of the decentralized nature of Deno. Indeed, they are now experimenting with better ways to organize the code, which might lead to a different way of managing modules. Let's see how it will evolve in the future...

A deps.ts file simply re-exports the third-party modules needed by the project, so that the rest of the code imports them from a single place.


What about locking the dependencies?

A file called lock.json is needed in order to lock them. By using the following command, it is possible to cache every dependency and assign it a hash, so that no one can tamper with it:

deno cache --lock=lock.json --lock-write src/deps.ts

For further explanation about integrity check and lock files, please have a look at the official documentation.


Server Setup

Last but not least, here’s a quick but interesting comparison between a server setup in Node and in Deno.

Node server:

An example of Node server

Deno Server:

An example of Deno server

As you can notice, the snippets are pretty similar, but with fundamental differences that can sum up what we discussed above. More specifically:

  • ES modules;
  • decentralized imports from a URL;
  • next-gen JavaScript features out of the box;
  • the permissions needed to run the script.



Future Improvements on the Roadmap and Conclusions

One of the key features on the roadmap is the possibility to create a single executable file, as already happens in many other languages (like Golang, for instance) - something that could revolutionize the JavaScript ecosystem itself.

Also the compatibility layer with the Node.js stdlib is still in progress; this could lead to a faster development of the runtime system.

To sum up, we can say that Deno is in continuous evolution and probably will be the next game-changer of the Javascript ecosystem. The foundations for this runtime are in place and it is already a hot topic, so... keep an eye out!



Author: Yi Zhang, UX/UI Engineer @Bitrock

Bitrock_Rooms

Our solution to guarantee a safe working environment

One of the most recent challenges that we, as the UX/UI Team, have faced at Bitrock is the creation of a brand-new web app to solve a contingent issue inside the company.

The communication was sudden and with few details about the project: what we had was a problem to solve, and a strict, dynamic deadline.


The challenge

Our mission consisted in delivering an app whose main goal was to manage the booking of the desks available in our Milan HQ office. The social distancing measures caused by the Covid-19 pandemic, indeed, had forced the Bitrock Team to reduce the capacity of the rooms. Our task was thus to provide a booking platform that could allow our colleagues to book their desks in advance, even from home: in this way it would be possible to ensure that employees keep sufficient space from each other and to provide a safe working place.

The rules we had to follow when designing the app were simple: every room would have a maximum capacity (of desks) to be respected, and a user would be able to book just one desk in a room per day. Another feature we were required to implement was the ability to see the bookings made by other colleagues in real time, in order to have better feedback on the current room capacity.

On the UX/UI side, we had two kinds of views in mind: a daily view and a weekly one (a feature requested to ease booking over several days).

The functional analysis was ready, the deadline was clear. We thus started to work.


The project

At first we created wireframes: they were simple and useful to us in order to have a better understanding of the project.

As Backend and Database Platform, we chose to rely on Firebase and Firestore in order to speed up the implementation.

Firebase was a good fit for every need that we were trying to respond to:

  • Oauth authentication out of the box
  • Real Time Database perfectly integrated with RxJs library

Every decision we made was based on the concept of “Reactive Programming”, in order to achieve a data stream able to facilitate the automatic propagation of data changes.

For the selection of the frontend framework, the choice was easy: Angular, which is well integrated with Firebase in every aspect and synergises with RxJs (A/N: for those who are not familiar with the Angular ecosystem, RxJs is a library that embraces the concept of reactiveness with a functional approach) - everything was made reactive out of the box.

To sum up, here's the list of the libraries we chose:

  • Angular
  • RxJs
  • AngularFire - Firebase integration to Angular
  • Moment.js - Library to manage the date
  • Angular Material - UI library with premade components

Our philosophy was to have the right balance between best practices and productivity, while respecting the limited available time.

The first point to tackle was the data schema to represent the booking of the desks. We thus created a flat structure, where the main keys were:

  • date
  • room
  • user

Here’s the schema that we used:

We then started creating components and services using the tools that Angular provides, for instance the CLI commands.

Our choice not to use a state management library like NgRx was dictated by the fact that this was a rather simple project with a limited number of components.

The tasks related to the daily view were carried out fast and smoothly: everything revolved around the RxJs libraries and the communication with Firebase.

Even the real-time update of the data from Firestore was great: it was so easy to implement...like magic! The implementation of the weekly view was a bit more challenging, but we managed to carry it out using our components.

The last part of the project covered the styling aspects: we decided to use Amber (Bitrock design system) as reference, in order to create a web app with the company “look and feel”.


Conclusions

This project represents the perfect playground for those situations that envisage a sudden problem that needs to be solved in a very short amount of time. During its different steps we had the possibility to reinforce our team-work spirit, as well as develop a proactive attitude. Everything was indeed designed, delivered and implemented very well thanks to the effort of the team as a whole, and not just because of individuals.

Bitrock Rooms is now used every day by the Bitrock Team as one of the solutions to face Covid-19 challenges, helping create a safe working environment and delivering a smart and smooth experience to users.


Bitrock Rooms' daily view interface:

Bitrock Rooms' weekly view interface:


Bitrock Rooms' mobile view interface:



Authors: Marco Petreri, UX/UI Engineer @Bitrock - Yi Zhang, UX/UI Engineer @Bitrock

React Bandersnatch Experiment

Getting Close to a Real Framework

Huge success for Claudia Bressi (Bitrock Frontend developer) at Byteconf React 2020, the annual online conference with the best React speakers and teachers in the world.

During the event, Claudia had the opportunity to talk about her experiment called “React Bandersnatch” (the name coming from one of the episodes of Black Mirror tv series, where freedom is represented as a sort of well-designed illusion).

The goal of this contribution is to give anyone who could not join the virtual event the chance to delve into her experiment and main findings, which represent a high-value contribution for the whole front-end community.

Here are Claudia's words, describing the project in detail.

(Claudia): The project starts with one simple idea: what if React was a framework, instead of a library to build user interfaces?
For this project, I built some real applications using different packages that are normally available inside a typical frontend framework. I measured a few major web application metrics and then compared the achieved results.

The experiment’s core was the concept of framework, i.e. a platform where it is possible to find ready components and tools that can be used to design a user interface, without the need to search for external dependencies.
Thanks to frameworks, you just need to add a proper configuration code and then you’re immediately ready to go and code whatever you want to implement. Developers often go for a framework because it’s so comfortable to have something ready and nothing to choose.

Moving on to a strict comparison with libraries, frameworks are more opinionated: they can give you rules to follow in your code, and they can solve for you the order of things to be executed behind the scenes. This is the case of lazy loading when dealing with modules in a big enterprise web application.
On the other hand, libraries are more minimalistic: they give you only the necessary to build applications. But, at the same time, you have more control and freedom to choose whatever package in the world.

However, this can lead sometimes to bad code: it is thus important to be careful and follow all best practices when using libraries.


The Project

As initial step of my project, I built a very simple web application in React in order to implement a weekly planner. This consisted of one component showing the week, another one showing the details of a specific day, and a few buttons to update the UI, for instance for adding events and meetings.

I used React (the latest available release) and Typescript (in a version that finally let users employ the optional chaining). In order to style the application, I used only .scss files, so I included a Sass compiler (btw, while writing the components, I styled them using the CSS modules syntax).

Then I defined a precise set of metrics, in order to be able to measure the experimental framework. More specifically:

  • bundle size (measured in kilobytes, to understand how much weight each build could reach);
  • loading time (the amount of time needed to load the HTML code of the application);
  • scripting time (the actual time needed to load the JavaScript files);
  • render time (the time needed to render the stylesheets inside the browser);
  • painting time (the time for handling media files, such as images or videos).

The packages used for this experiment can be considered as the ingredients of the project. I tried to choose both well-known tools among the React scenario and some packages that are maybe less famous, but which have some features that can improve the overall performance on medium-size projects.

The first implementation can be considered a classic way to build a project in React: an implementation based on Redux for the state management and on Thunk as the middleware solution. I also used the well-known React router and, last but not least, the popular Material UI to have some ready-to-use UI components.
The second application can be considered more sophisticated: it was made of Redux combined with the redux-observable package for handling the middleware part. As for the router, I applied a custom solution, in order to let me play more with React hooks. As the icing on the cake, I took the Ant library to build up some UI components.

As for the third application, I blended together the MobX state manager with my previous hook-based custom router, while for the UI part I used the Ant library.
Finally, for the fourth experimental application, I created a rather simple solution with MobX, my hook-based custom router (again) and Material UI to complete the overall framework.


Main Findings

Analyzing these four implementations, what I found out is that, as State manager, Redux has a cleaner and better organized structure due to its functional paradigm. Another benefit is the immutable store that prevents any possible inconsistency when data is updated.

On the other hand, MobX allows multiple stores: this can be particularly useful if you need to reuse some data (let's say the business logic part of your store) in a different – but similar – application with shareable logic.
Another advantage of MobX is the benefit of having a reactive paradigm that takes care of the data updates, so that you can skip any middleware dependency.

Talking about routing, a built-in solution (such as the react-router-dom package) is pretty easy and smooth to apply in order to set-up all the routes we need within the application. A custom solution, however, such as our hooks-based router solution, let us keep our final bundle lighter than a classic dancer.

Moving on to the UI side of the framework, we can say that Material is a widespread web styling paradigm: its rules are easy to apply and the result is clean, tidy and minimalistic.
However, from my personal point of view, it is not so 'elegant' to pollute JavaScript code with CSS syntax – this is why I preferred to keep things separated. I then searched for another UI library and found Ant, which is written in TypeScript with predictable static types – a real benefit for my applications. In the end, however, Material UI is lighter than Ant.
Anyway, both packages allow you to import only the necessary components you need for your project. So, in the end, it is a simple matter of taste which library to use for the UI components (although, if you're looking more at performance, then go for Material!)


Comparing the Results

As final step, I compared the achieved results for each of the above-mentioned metrics.
From the collected results, it is quite clear that the most performant application is the one with the lightest bundle size: in particular, when using Redux coupled with Thunk, Material UI for the UI components and our custom router, the resulting application has a smaller output artifact and optimized values for loading, scripting and render times.

To cut a long story short, the winner of this experiment is application no. 4.

However, for bigger projects the results may vary.


Ideal Plugins

Frameworks, sometimes, offer useful auxiliary built-in plugins, such as the CLI to easily use some commands or automate repetitive tasks – which can in turn improve developers’ life. That’s why some of my preferred tools available for React (which I would include in a React ideal framework scenario) are:

  • the famous React and Redux extensions for Chrome: I found these essential and useful, like a ruler for an architect;
  • the 'eslint-plugin' specifically written for React, which makes it easier to stay consistent with the rules you want to keep in your code;
  • Prettier, another must-have React plugin if you use VS Code, which helps a lot in getting better formatted code;
  • React Sight, a Chrome extension that shows a graph representing the overall structure, in order to analyze your components;
  • last but not least, the popular React extension pack available as a VS Code extension, which offers lots of automated developer actions, such as hundreds of code snippets, IntelliSense and the option of file search within node modules.


If you want to listen to Claudia's full speech during the event, the official video recording is available on YouTube.

HashiCorp and Bitrock sign Partnership to boost IT Infrastructure Innovation

The product suite of the American leader combined with the expertise of the Italian system integrator are now at the service of companies

Bitrock, Italian system integrator specialized in delivering innovation and evolutionary architecture to companies, has signed a high-value strategic partnership with HashiCorp, a market leader in multi-cloud infrastructure automation and member of the Cloud Native Computing Foundation (CNCF).

HashiCorp is well-known in the IT infrastructure environment; its open source tools Terraform, Vault, Nomad and Consul are downloaded tens of millions of times each year and enable organizations to accelerate their digital transformation, as well as adopt a common cloud operating model through HashiCorp's portfolio of multi-cloud infrastructure automation products for infrastructure, security, networking, and application automation.

As companies scale and increase in complexity, the enterprise versions of these products enhance the open-source tools with features that promote collaboration, operations, governance, and multi-data center functionality. Companies must also rely on a trusted partner that is able to guide them through the architectural design phase and can grant enterprise-grade assistance when it comes to application development, delivery and maintenance.

Due to the highly technical nature of the HashiCorp portfolio, being a HashiCorp partner means, above all, that the Bitrock DevOps Team has the expertise and know-how required to manage the complexity of large infrastructures. Composed of highly-skilled professionals who can already count on several "Associate" certifications and who attended the first Vault Bootcamp in Europe, Bitrock is proudly one of the most certified HashiCorp partners in Italy. The partnership with HashiCorp represents not only the result of Bitrock's investments in the DevOps area, but also the start of a new journey that will allow large Italian companies to rely on more agile, flexible and secure infrastructure, especially when it comes to the provisioning, protection and management of services and applications across private, hybrid and public cloud architectures.

“We are very proud of this new partnership, which not only rewards the hard work of our DevOps Team, but also allows us to offer Italian and European companies the best tools to evolve their infrastructure and digital services” – says Leo Pillon, Bitrock CEO.

“With its dedication in delivering reliable innovation through the design and development of business-driven IT solutions, Bitrock is an ideal partner for HashiCorp in Italy. We look forward to working closely with the Bitrock team to jointly enable organizations across the country to benefit from a cloud operating model. With Bitrock’s expertise around DevOps, we are confident in the results we can jointly deliver to organizations leveraging our suite of products” – says Michelle Graff, Global Partner Chief for HashiCorp.

Bitrock DevOps Team joining HashiCorp EMEA Vault CHIP Virtual Bootcamp

Another great achievement for our DevOps Team: the opportunity to take part in the HashiCorp EMEA Vault CHIP Virtual Bootcamp.

The Bootcamp – coming for the first time to the EMEA region – involves the participation of highly-skilled professionals who already have experience with Vault and want to become Vault CHIP (Certified HashiCorp Implementation Partner) certified for delivering on Vault Enterprise.

Our DevOps Team will be challenged with a series of highly technical tasks to demonstrate their expertise in the field: a full 3-day training that will get them ready to deliver in customer engagements.

This comes after the great success of last week, which saw our DevOps Team members Matteo Gazzetta, Michael Tabolsky, Gianluca Mascolo, Francesco Bartolini and Simone Ripamonti successfully obtaining HashiCorp Certification as Vault Associate. A source of pride for the Bitrock community and a remarkable recognition of our DevOps Team expertise and know-how worldwide.

With the Virtual Bootcamp, the Team is now ready to raise the bar and take on a new challenge, proving that there's no limit to self-improvement and continuous learning.


HashiCorp EMEA Vault CHIP Virtual Bootcamp

May 5–May 8, 2020

https://www.hashicorp.com/

Turning Data at REST into Data in Motion with Kafka Streams

From Confluent Blog

Another great achievement for our Team: we are now on Confluent Official Blog with one of our R&D projects based on Event Stream Processing.

Event stream processing continues to grow among business cases that have been reliant primarily on batch data processing. In recent years, it has proven especially prominent when the decision-making process must take place within milliseconds (for ex. in cybersecurity and artificial intelligence), when the business value is generated by computations on event-based data sources (for ex. in industry 4.0 and home automation applications), and – last but not least – when the transformation, aggregation or transfer of data residing in heterogeneous sources involves serious limitations (for ex. in legacy systems and supply chain integration).

Our R&D decided to start an internal POC based on Kafka Streams and Confluent Platform (primarily Confluent Schema Registry and Kafka Connect) to demonstrate the effectiveness of these components in four specific areas:

1. Data refinement: filtering the raw data in order to serve it to targeted consumers, scaling the applications through I/O savings

2. System resiliency: using the Apache Kafka® ecosystem, including monitoring and streaming libraries, in order to deliver a resilient system

3. Data update: getting the most up-to-date data from sources using Kafka

4. Optimize machine resources: decoupling data processing pipelines and exploiting parallel data processing and non-blocking IO in order to maximize hardware capacity

These four areas can impact data ingestion and system efficiency by improving system performance and limiting operational risks as much as possible, which increases profit margin opportunities by providing more flexible and resilient systems.

At Bitrock, we tackle software complexity through domain-driven design, borrowing the concept of bounded contexts and ensuring a modular architecture through loose coupling. Whenever necessary, we commit to a microservice architecture.

Due to their immutable nature, events are a great fit as our unique source of truth. They are self-contained units of business facts and also represent a perfect implementation of a contract amongst components. The Team chose the Confluent Platform for its ability to implement an asynchronous microservice architecture that can evolve over time, backed by a persistent log of immutable events ready to be independently consumed by clients.

This inspired our Team to create a dashboard that uses the practices above to clearly present processed data to an end user—specifically, air traffic, which provides an open, near-real-time stream of ever-updating data.

If you want to read the full article and discover all project details, architecture, findings and roadmap, click here: https://bit.ly/3c3hQfP.
