Introduction to HC Boundary

Secure your access in a dynamic world

The current IT landscape is characterized by multiple challenges, many of which are related to the increasing dynamism of the environments IT professionals work in. One of these challenges is securing access to private services. The dynamic nature of this access manifests itself on multiple levels:

  • Services: they tend to be deployed in multiple instances per environment
  • Environments: the hosts where workloads are deployed can change transparently to the end user
  • Users: people change roles, and people join and leave teams
  • Credentials: the more often they are rotated, the more secure they are

Tools developed when this kind of dynamism was not foreseeable are starting to show their limitations. For example, accessing a service often means granting network access to a whole subnet, where careful network and firewall policies need to be set up. The resulting access is granted to a user independently of their current role.

Zero trust in a dynamic environment

A zero trust approach is highly desirable in every environment. Being able to assume zero trust and grant granular, role-based access to resources, without having to configure delicate resources like networks and firewalls, is paramount in a modern IT architecture.

This is even more true in a dynamic environment, where the rate of change puts pressure on security teams and their toolchains as they try to keep access configurations up to date.

Boundary to the rescue

The following diagram shows how HashiCorp Boundary is designed to fulfill the requirement of granting secure access in a zero trust environment. Access to a remote resource is granted by defining policies on high-level constructs that encapsulate the dynamic nature of the access.

The main components are:

  • Controller (control plane): administrators interact with the controller to configure access to resources, while regular users interact with it to request authentication and authorization.
  • Worker (data plane): the connection between the local agent and the remote host passes through this gateway, which allows the connection based on what the controller authorizes.
  • Local Agent: interacts with the controller and the worker to establish the connection (see the sketch below).
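To make the flow more concrete, here is a minimal, illustrative interaction from the user's point of view. All IDs and names are placeholders, and flags may differ slightly between Boundary versions:

    # Authenticate against the controller (password auth method in this example)
    boundary authenticate password -auth-method-id ampw_1234567890 -login-name jane
    # Ask for a session to a target: the local agent tunnels the connection through a worker
    boundary connect ssh -target-id ttcp_1234567890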

Boundary concepts

Identity is a core concept in Boundary. Identity is represented by two types of resources, mapping to common security principals:

  • Users, which represent distinct entities that can be tied to authentication accounts
  • Groups, which are collections of Users that allow easier access management

Roles map users and groups to a set of grants, which provide the ability to perform actions within the system.

Boundary’s permissions model is based on RBAC and each grant string is a mapping that describes a resource or set of resources and the permissions that should be granted to them.

A scope is a permission boundary modeled as a container. There are three types of scopes in Boundary: 

  • a single global scope, which is the outermost container
  • organizations, which are contained by the global scope
  • projects, which are contained by organizations

Each scope is itself a resource.

Boundary administrators define host catalogs that contain information about hosts. These hosts are then collected into host sets which represent sets of equivalent hosts. Finally, targets tie together host sets with connection information.
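As a hedged sketch of how these constructs fit together, the following CLI calls create a static host catalog, a host set and a target inside a project scope. All IDs and names are placeholders, and subcommand names can vary slightly across Boundary versions:

    # Host catalog and host inside a project scope
    boundary host-catalogs create static -scope-id p_1234567890 -name "backend-servers"
    boundary hosts create static -host-catalog-id hcst_1234567890 -name "db-1" -address 10.0.0.10
    # Host set grouping equivalent hosts
    boundary host-sets create static -host-catalog-id hcst_1234567890 -name "postgres"
    boundary host-sets add-hosts -id hsst_1234567890 -host hst_1234567890
    # Target tying the host set to connection information (port 5432)
    boundary targets create tcp -scope-id p_1234567890 -name "postgres-prod" -default-port 5432
    boundary targets add-host-sources -id ttcp_1234567890 -host-source hsst_1234567890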

Boundary interfaces

Boundary offers multiple interfaces to interact with the tool:

  • a CLI that we DevOps engineers love
  • a user friendly Desktop application
  • a Web UI for the server

Integration is key

So how can this be kept up to date with the current dynamic environments?

The answer lies in the integrations available to add flexibility to the tool. For the authentication of users, the integration with an identity provider through the standard OIDC protocol can be leveraged. When it comes to credentials, the integration with HashiCorp Vault surely (pun intended) covers the need for correctly managed secrets and their lifecycle (Vault credential brokering). Finally, for the list of hosts and services, we can leverage the so-called Dynamic Host Catalogs: a catalog can be kept up to date in push mode through the integration with HashiCorp Terraform, or in pull mode by interacting with HashiCorp Consul.
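As an illustration of the OIDC integration, an auth method can be created along these lines. Issuer, client ID/secret and URLs are placeholders, flag names may differ between versions, and the method still needs to be activated afterwards:

    boundary auth-methods create oidc \
        -scope-id global \
        -issuer "https://idp.example.com" \
        -client-id "boundary" \
        -client-secret "REDACTED" \
        -signing-algorithm RS256 \
        -api-url-prefix "https://boundary.example.com"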

Want to get your feet wet?

It seems like this tool provides a lot of value, so why not integrate it into your environment? We are already planning to add it to our open source Caravan tool.

There’s a good chance for you to get your feet wet playing with Boundary and other cool technologies, so don’t be shy: join us on (the) Caravan!


Discover more on Zero Trust in our upcoming Webinar in collaboration with HashiCorp

When: Thursday, 31st March 2022
Where: Virtual Event
More details available soon – Follow us on our Social Media channels to find out more!

Caravan Series PT4

This is the fourth and last entry in our article series about Caravan, Bitrock’s Cloud-Native Platform based on the HashiCorp stack. Read the first, second and third part on our blog.

The communication layer between application components running on top of Caravan leverages HashiCorp Consul to expose advanced functionalities. Service discovery, health checks, and service mesh are the key features that Consul enables in Caravan.

Service Discovery & Health Checks

Consul makes it easy to register services in its registry and offers a painless discovery process thanks to the different ways of querying it, such as the API, the CLI or DNS SRV queries.

The service registry would not be complete without the health checking capabilities. It is possible to set up different kinds of health checks, to inspect whether a service is healthy and thus can be shown as available in the registry. When a health check fails, the registry no longer returns the failed instance in the client queries. In this way the consumer services stop making requests to the faulty instance.
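For illustration, a Consul service definition with an HTTP health check might look like the following (service name, port and endpoint are placeholders):

    service {
      name = "orders-api"
      port = 8080

      check {
        id       = "orders-api-http"
        http     = "http://localhost:8080/health"
        interval = "10s"
        timeout  = "2s"
      }
    }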


Consul Connect with Envoy

Consul Connect provides Authorization and Encryption of the communication between services using mutual TLS. Applications are not aware of Consul Connect thanks to sidecar proxies deployed next to them to compose a Service Mesh. These proxies “see” all service-to-service traffic and can collect data about it. 

Consul Connect uses Envoy proxies and can be configured to collect layer 7 metrics and export them to tools such as Prometheus. Connect uses the registered service identity (rather than IP addresses) to enforce access control with intentions. Intentions declare the source and the destination flow where the connection is allowed – by default all connections are denied following the Zero Trust principles.
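As a hedged example, an intention that allows a single source service to reach a destination can be expressed as a service-intentions config entry (service names are placeholders):

    Kind = "service-intentions"
    Name = "orders-api"
    Sources = [
      {
        Name   = "web-frontend"
        Action = "allow"
      }
    ]

The entry is applied with consul config write; with the deny-by-default behavior described above, any source not listed keeps being rejected.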

Within the service mesh, incoming and outgoing traffic is handled by a dedicated component called the Gateway. The Gateway is secure by default: it encrypts all the traffic and requires explicit intentions to allow requests to pass through.

Service Mesh in Nomad

Nomad integrates thoroughly with Consul, allowing Consul configurations to be specified inside the Nomad job description. This way, operators can define in a single place all the configuration needed to run a Nomad task and register it in Consul, making it available to other components running in the platform. In detail, the Nomad agent automatically registers the service in Consul, sets up its health checks, and requests dynamic short-lived TLS certificates for secure in-mesh communication through the Envoy sidecar proxy, whose lifecycle is managed directly by Nomad without any manual intervention required.
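A minimal, illustrative Nomad job shows how these pieces are declared together (job name, image and port are placeholders):

    job "orders-api" {
      datacenters = ["dc1"]

      group "api" {
        network {
          mode = "bridge"
        }

        service {
          name = "orders-api"
          port = "8080"

          check {
            type     = "http"
            path     = "/health"
            interval = "10s"
            timeout  = "2s"
            expose   = true
          }

          connect {
            sidecar_service {}
          }
        }

        task "api" {
          driver = "docker"
          config {
            image = "example/orders-api:1.0.0"
          }
        }
      }
    }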


Want to know more about Caravan? Visit the dedicated website, check our GitHub repository and explore our documentation.

Authors: Matteo Gazzetta, DevOps Engineer @ Bitrock – Simone Ripamonti, DevOps Engineer @ Bitrock


This is the third entry in our article series about Caravan, Bitrock’s Cloud-Native Platform based on the HashiCorp stack. Check the first and second part.

Caravan heavily relies on the features offered by HashiCorp Vault. Vault is at the foundation of Caravan’s high dynamism and automation. We may even say that Caravan would not have been the same without Vault, given its deep integration with all the components in use.

In this article, we show some of the Vault features that Caravan relies on.

PKI Secrets Engine

The PKI secrets engine generates dynamic X.509 certificates. It is possible to upload an existing certification authority or let Vault generate a new one, and in this way Vault will fully manage its lifecycle. This engine replaces the manual process of generating private keys and CSRs, submitting them to the CA, and waiting for the verification and signing process to complete. By using short TTLs it is even less likely that one needs to revoke a certificate, thus CRLs are short and the entire system easily scales to large workloads.

In Caravan we use Vault’s PKI to sign both Consul Connect mTLS certificates and server-side (e.g. Consul and Nomad) certificates for TLS communication.
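A hedged sketch of such a setup, with placeholder mount path, domains and TTLs, could look like this:

    # Enable the PKI engine and generate (or import) a root CA
    vault secrets enable pki
    vault secrets tune -max-lease-ttl=87600h pki
    vault write pki/root/generate/internal common_name="caravan.internal" ttl=87600h
    # Define a role and issue short-lived certificates from it
    vault write pki/roles/consul-server allowed_domains="dc1.consul" allow_subdomains=true max_ttl="72h"
    vault write pki/issue/consul-server common_name="server.dc1.consul" ttl="24h"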

Consul & Nomad Dynamic Secrets

Dynamic Secrets are a key feature of Vault. Their peculiarity is the fact that the secrets do not exist until they are read, so there is no risk of someone stealing them or another client using the same secrets. Vault has built-in revocation mechanisms: this way dynamic secrets are periodically revoked and regenerated, minimizing the risk exposure.

Vault integrates dynamic secrets with different components:

  • Cloud providers (e.g.  AWS, Azure, GCP, …)
  • Databases (e.g. PostgreSQL, Elasticsearch, MongoDB, …)
  • Consul
  • Nomad
  • and many more…

In Caravan, we use the dynamic secrets engine to generate access tokens for both Consul and Nomad agents. First, we define the needed Consul and Nomad roles with the required permissions, and then we map them to Vault roles. This way, we allow authenticated Vault entities to request Consul and Nomad tokens with the permissions defined in the associated role. For example, we set up a Nomad server role and a Nomad client role with different authorization scopes.
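As an illustrative sketch (role and policy names are placeholders, and both engines first need to be configured with the address and a management token of the respective cluster):

    # Consul dynamic secrets: a role mapped to Consul ACL policies
    vault secrets enable consul
    vault write consul/roles/nomad-client policies="nomad-client"
    vault read consul/creds/nomad-client      # returns a short-lived Consul ACL token

    # Nomad dynamic secrets: a role mapped to Nomad ACL policies
    vault secrets enable nomad
    vault write nomad/role/platform-operator policies="platform-operator"
    vault read nomad/creds/platform-operator  # returns a short-lived Nomad ACL token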


Cloud Auth Backends

Distributing access credentials to Vault clients can be a difficult and sensitive task, especially in dynamic environments with ephemeral instances. Luckily for us, Vault addresses this problem and simplifies it a lot in cloud scenarios: it implements different auth methods that rely on the cloud provider for the authentication of Vault entities.

For example, when running Vault with AWS instances, it is possible to authenticate entities according to their associated AWS IAM role. Vault leverages AWS APIs to validate the identity of the clients, using the primitives offered by the cloud. This way, a Vault client running on an AWS instance does not need to know any Vault-related access credentials: AWS directly validates the identity of the client. The same logic also applies to other cloud providers such as Azure, GCP, and many more.
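A hedged example of the IAM flow (ARN, role and policy names are placeholders):

    vault auth enable aws
    vault write auth/aws/role/nomad-client \
        auth_type=iam \
        bound_iam_principal_arn="arn:aws:iam::123456789012:role/caravan-nomad-client" \
        token_policies="nomad-client" token_ttl=1h
    # On the instance itself no secret is stored: the login request is signed
    # with the instance's IAM identity and verified by Vault through AWS APIs
    vault login -method=aws role=nomad-client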

In Caravan, we rely on cloud auth backends to authenticate both the server-side and client-side components of the platform. This way, we no longer need to distribute credentials to the spun-up instances, which would be a difficult and tedious task.

Vault Agent

Vault Agent is a client daemon that provides useful functionality to clients that need to integrate and communicate with Vault without changing the application code. It allows for easy authentication to Vault in a wide variety of environments, provides client-side caching of responses containing newly created tokens and of leased secrets generated from those tokens, and renders user-supplied templates using the token obtained through Auto-Auth.

In particular, Caravan relies on Vault Agent templates to render configuration files for a variety of components. For example, the configuration file of Nomad agents is a template rendered by Vault Agent, since it contains dynamic secrets like the Consul token and the TLS certificates used for communication with the server components.
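A minimal sketch of such an agent configuration, assuming the AWS IAM auth method and placeholder paths and role names, might look like this:

    auto_auth {
      method "aws" {
        mount_path = "auth/aws"
        config = {
          type = "iam"
          role = "nomad-client"
        }
      }

      sink "file" {
        config = {
          path = "/run/vault-agent/token"
        }
      }
    }

    template {
      source      = "/etc/nomad.d/nomad.hcl.tpl"
      destination = "/etc/nomad.d/nomad.hcl"
    }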


Want to know more about Caravan? Visit the dedicated website, check our GitHub repository and explore our documentation.

Authors: Matteo Gazzetta, DevOps Engineer @ Bitrock – Simone Ripamonti, DevOps Engineer @ Bitrock


A Joint Event from Bitrock and HashiCorp

Last week we hosted an exclusive event in Milan dedicated to the exploration of modern tools and technologies for the next-generation enterprise.
The first event of its kind, it was held in collaboration with HashiCorp, US market leader in multi-cloud infrastructure automation, after the Partnership we signed in May 2020.

HashiCorp’s open-source tools Terraform, Vault, Nomad and Consul enable organizations to accelerate their digital evolution, as well as adopt a common cloud operating model for infrastructure, security, networking, and application automation.
As companies scale and increase in complexity, enterprise versions of these products enhance the open-source tools with features that promote collaboration, operations, governance, and multi-data center functionality.
Organizations must also rely on a trusted Partner that is able to guide them through the architectural design phase and who can grant enterprise-grade assistance when it comes to application development, delivery and maintenance. And that’s exactly where Bitrock comes into play.

During the Conference Session, the Speakers had the chance to describe to the audience how large companies can rely on more agile, flexible and secure infrastructure thanks to HashiCorp’s suite and Bitrock’s expertise and consulting services. Especially when it comes to the provisioning, protection and management of services and applications across private, hybrid and public cloud architectures.

“We are ready to offer Italian and European companies the best tools to evolve their infrastructure and digital services. By working closely with HashiCorp, we jointly enable organizations to benefit from a cloud operating model.” – said Leo Pillon, Bitrock CEO.

After the Keynotes, the event continued with a pleasant Dinner & Networking night at the fancy restaurant by Cascina Cuccagna in Milan.
Take a look at the pictures below to see how the event went on, and keep following us on our blog and social media channels to discover what other incredible events we have in store!

Caravan Series - GitOps

This is the second entry in our article series about Caravan, Bitrock’s Cloud-Native Platform based on the HashiCorp stack. Click here for the first part.

What is GitOps

GitOps is “a paradigm or a set of practices that empowers developers to perform tasks that typically fall under the purview of IT operations. GitOps requires us to describe and observe systems with declarative specifications that eventually form the basis of continuous everything” (source: Cloudbees).

GitOps upholds the principle that Git is the only source of truth. GitOps requires the system’s desired state to be stored in version control such that anyone can view the entire audit trail of changes. All changes to the desired state are fully traceable commits, associated with committer information, commit IDs, and time stamps.

Together with Terraform, GitOps allows the creation of Immutable Infrastructure as Code. When we need to add something or perform an update, we modify our code and create a Merge/Pull Request so that our colleagues can review the changes. Once the changes are validated, we merge them into the main branch and let our CI/CD pipelines apply them to our infrastructure environments.
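In a push-based pipeline, the steps run by the CI/CD system after the merge are essentially the standard Terraform workflow, for example:

    terraform init
    terraform plan -out=release.tfplan   # the plan output is reviewed in the pipeline logs
    terraform apply release.tfplan       # executed only on the main branch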

Another approach in GitOps avoids triggering a CI/CD pipeline after a new change is merged. Instead, the system automatically pulls the new changes from the source code, and executes the needed actions to align the current state of the system to the new desired state declared in the source code.


How GitOps helped us build Caravan

GitOps provides us with the ability and framework to automate Caravan provisioning. In practice, GitOps is achieved by combining Infrastructure as Code, Git repositories, MRs/PRs, and CI/CD pipelines.

First of all, we define our infrastructure resources as code. Each layer of the Caravan stack is built following GitOps principles, and the first one is of course the Infrastructure layer, which declares the required building blocks for the major cloud providers. Networking, Compute resources and Security rules are all tracked in the Git repository.

The next layer is the Platform layer, where we bring online the needed components with the required configuration. Finally, we declare the Application Support components deployed on top of the Platform.

Currently, the applications are deployed using a simpler approach that leverages standard Terraform files we called “Carts”. Nomad itself can pull configuration files from a Git repository, but it lacks a solution like ArgoCD for automatically pulling all the Nomad job descriptors from Git.


Want to know more about Caravan? Visit the dedicated website, check our GitHub repository and explore our documentation.

Authors: Matteo Gazzetta, DevOps Engineer @ Bitrock – Simone Ripamonti, DevOps Engineer @ Bitrock

Caravan Series Part 1

Introduction

The current IT industry is characterized by multiple needs, often addressed by a heterogeneous set of products and services. To help professionals adopt the best performing solutions for sustainable development, the Cloud Native Computing Foundation was created in 2015 with the aim of advancing container technology and aligning the IT industry around its evolution.

We conceived Bitrock’s Caravan project following the Cloud Native principles defined by the CNCF:

  • leverage the Cloud
  • be designed to tolerate Failure and be Observable
  • be built using modern SW engineering practices
  • base the Architecture on containers and service meshes

The HashiCorp stack fulfills these needs, enabling developers to build and run applications faster and more efficiently.

The Caravan Project

Caravan is your open-source platform builder based on the HashiCorp stack. Terraform and Packer are used to build and deploy a cloud-native and ready-to-use platform composed of Vault, Consul and Nomad.

Vault allows you to keep secrets, credentials and certificates safe across the company. Consul provides service discovery and, with Consul Connect, a service mesh that enables truly dynamic communication between your next-gen and legacy applications. Nomad provides powerful placement, scaling and balancing of your workloads, whether they are containerized or legacy, services or batch jobs.

Thanks to Terraform and Ansible, Infrastructure and Configuration as Code lie at the core of Caravan.

The rationale behind Caravan is to provide a one-click experience to deploy an entire infrastructure and the configuration needed to run the full HashiCorp stack in your preferred cloud environment.

Caravan’s codebase is modular and layered to achieve maximum flexibility and cover the most common use cases. Multiple cloud providers and optional components can be mixed to achieve specific goals.

Caravan supports both Open Source and Enterprise versions of HashiCorp products.

Caravan Project Functioning

Caravan in a nutshell

Caravan is the perfect modern platform for your containerized and legacy applications:

  • Security by default
  • Service mesh out of the box
  • Scheduling & Orchestration
  • Observability
  • Fully automated

Want to know more about Caravan? Visit the dedicated website, check our GitHub repository and explore our documentation.

Authors: Matteo Gazzetta, DevOps Engineer @ Bitrock – Simone Ripamonti, DevOps Engineer @ Bitrock

Intro to HashiCorp Vault

A Hands-on Workshop by Bitrock and HashiCorp


On June 16th, 2021 we held our virtual HashiCorp Vault Hands-On Workshop, an important event in collaboration with our partner HashiCorp, during which attendees had the opportunity to get a thorough presentation of the HashiCorp stack before starting a hands-on labs session to learn how to secure sensitive data with Vault.

Do you already know all the secrets of HashiCorp Vault?

HashiCorp Vault is an API-driven, cloud agnostic Secrets Management System, which allows you to safely store and manage sensitive data in hybrid cloud environments. You can also use Vault to generate dynamic short-lived credentials, or encrypt application data on the fly.

Vault was designed to address the security needs of modern applications. It differs from the traditional approach by using:

  • Identity based rules allowing security to stretch across network perimeters
  • Dynamic, short lived credentials that are rotated frequently
  • Individual accounts to maintain provenance (tie action back to entity)
  • Credentials and Entities that can easily be invalidated

Vault can be used in untrusted networks. It can authenticate users and applications against many systems, and it runs in highly available clusters that can be replicated across regions.

Thanks to our experts Gianluca Mascolo (Senior DevOps Engineer at Bitrock) and Luca Bolli (Senior Solution Engineer at HashiCorp) for the overview of the HashiCorp toolset and the unmissable labs session.

If you’d like to learn more about our enterprise offerings or if you want to receive the presentation slides, please reach out to us at info@bitrock.it

To access the workshop recording, simply click here

We look forward to seeing you at a future Bitrock event!

Terraform Community Tools


Despite not having reached version 1.0 yet, Terraform has become the de facto tool for cloud infrastructure management. One of its major winning points is definitely its extensive cross-cloud support, which allows projects to span from one cloud vendor to another with minimal operational effort. Moreover, the community continuously releases reusable infrastructure components, the Terraform modules, which make it easy to bootstrap new projects with a fully functional setup right from the start.

In order to address all the different use cases of Terraform, whether it is executed as part of a GitOps pipeline or right from developers’ machines, the community has built a set of tools to enhance the developer experience.

In this blog post we will describe some of them, focusing on those that might not be that popular or widely adopted, but certainly deserve some attention.

Pull Request Automation

Atlantis

GitHub Website


Atlantis is a golang application that listens for Terraform pull request events via webhooks. It allows users to remotely execute "terraform plan" and "terraform apply" according to the pull request content, commenting the results back on the pull request. Atlantis is a good starting point for making infrastructure changes visible to all teams, allowing even non-operations ones to contribute to the Terraform infrastructure codebase. If you want to see Atlantis in action, check this walkthrough video.
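In practice, interacting with Atlantis happens through pull request comments; an illustrative exchange (the directory name is a placeholder) looks like this:

    # Comment on the pull request to get a plan for a specific directory
    atlantis plan -d infrastructure/prod
    # Once the plan has been reviewed, apply it from another comment
    atlantis apply -d infrastructure/prod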

If you want to restrict and audit the execution of Terraform changes while still providing a friendly interface, Terraform Cloud and Enterprise support invoking remote operations via UI, VCS, CLI and API. The offering includes an extensive set of capabilities for integrating infrastructure changes in CI pipelines.


Importing Existing Cloud Resources

Importing existing resources into a Terraform codebase is a long and tedious process. Terraform is capable of importing an existing resource into its state through the "import" command; however, the responsibility of writing the HCL that describes the resource is on the developer. The community has come up with tools that are able to automate this process.
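For reference, the manual workflow looks like the following hedged example, where the resource address, AMI and instance ID are placeholders:

    # ec2.tf, written by hand (arguments refined after inspecting the imported state)
    resource "aws_instance" "web" {
      ami           = "ami-00000000000000000"
      instance_type = "t3.micro"
    }

    # then the existing resource is brought under Terraform's control
    terraform import aws_instance.web i-0123456789abcdef0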

Terraforming

GitHub Website

Terraforming supports the export of existing AWS resources into Terraform resources, importing them to Terraform state and writing the configuration to a file.

Terraformer

GitHub

Terraformer supports the export of existing resources from many different providers, such as AWS, Azure and GCP. The tool leverages Terraform providers for mapping resource attributes to Terraform ones, which makes it more resilient to API upgrades. Terraformer was developed by Waze and is now maintained by the Google Cloud Platform team.

Version Management

tfenv

GitHub

When working with projects that are based on different Terraform versions, it is tedious to switch from one version to another, and the risk of accidentally updating a state’s Terraform version to a newer one is high. tfenv comes to the rescue and makes it easy to have different Terraform versions installed on the same machine.
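A quick illustration of the workflow (version numbers are arbitrary):

    tfenv install 0.14.11
    tfenv use 0.14.11
    # or pin the version per project: tfenv picks it up automatically
    echo "0.14.11" > .terraform-version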

Security and Compliance Scanning

tfsec

GitHub


tfsec performs static analysis of your Terraform code in order to detect potential vulnerabilities in the resulting infrastructure configuration. It comes with a set of rules that work cross provider and a set of provider specific ones, with support for AWS, Azure and GCP. It supports disabling checks on specific resources making it easy to include the tool in a CI pipeline.

Terrascan

GitHub Website


Terrascan detects security and compliance violations in your Terraform codebase, mitigating the risk of provisioning insecure cloud infrastructure. The tool supports AWS, Azure, GCP and Kubernetes, and comes with a set of more than 500 policies for security best practices. It is possible to write custom policies with the Open Policy Agent Rego language.

Regula

GitHub

Regula is a tool that inspects Terraform code looking for security misconfigurations and compliance violations. It supports AWS, Azure and GCP, and includes a library of rules written in Rego, the Open Policy Agent language. Regula consists of two parts: the first generates a Terraform plan in JSON, which is then consumed by the Rego framework, which in turn evaluates the rules and produces a report.

Terraform Compliance

GitHub Website


Terraform Compliance approaches the problem from a different perspective, allowing you to write compliance rules in a Behaviour Driven Development (BDD) fashion. An extensive set of examples provides an overview of the tool’s capabilities. It is easy to bring Terraform Compliance into your CI chain and validate the infrastructure before deployment.

While Terraform Compliance is free to use and easy to get started with, a much wider set of policies can be defined using HashiCorp Sentinel, which is part of the HashiCorp Enterprise offering. Sentinel supports fine-grained condition-based policies, with different enforcing levels, that are evaluated as part of a Terraform remote execution.

Linting

TFLint

GitHub

TFLint is a Terraform linter that focuses on potential errors and best practices. The tool comes with a general-purpose rule set and an AWS rule set, while rules for other cloud providers such as Azure and GCP are being added. It does not focus on security or compliance issues, but rather on validating configuration variables such as instance types, which might cause a runtime error when applying the changes. TFLint tries to fill the gap left by “terraform validate”, which is not able to validate variable values besides syntax and internal consistency checks.


Cost Estimation

infracost

GitHub Website


Keeping track of infrastructure pricing is quite a mess, and one usually discovers the actual cost of a deployment only after running it for days, if not weeks. infracost helps by providing a way to estimate how much the resources you are going to deploy will cost. At the moment the tool supports only AWS, providing insights into the costs of both hourly priced resources and usage-based resources such as AWS Lambda functions. For the latter, it requires the infracost Terraform provider, which allows describing usage estimates for a more realistic cost estimate. This enables quick “what-if” analyses like “what if this month my Lambda gets 2 times more requests?”. The ability to output a “diff” of the costs is useful when integrating infracost in your CI pipeline.

Terraform Enterprise provides a Cost Estimation feature that extends what infracost offers with support for the three major public cloud providers: AWS, Azure and GCP. Moreover, Sentinel policies can be applied, for example, to prevent the execution of Terraform changes based on the increase in costs.


Author: Simone Ripamonti, DevOps Engineer @Bitrock

Bringing GDPR in Kafka with Vault



Part 1: Concepts

GDPR introduced the “right to be forgotten”, which allows individuals to make verbal or written requests for personal data erasure. One of the common challenges when trying to comply with this requirement in an Apache Kafka based application infrastructure is being able to selectively delete all the Kafka records related to one of the application users.

Kafka’s data model was never supposed to support such a selective delete feature, so businesses had to find and implement workarounds. At the time of writing, the only way to delete messages in Kafka is to wait for the message retention to expire or to use compact topics that expect tombstone messages to be published, which isn’t feasible in all environments and just doesn’t fit all the use cases.

HashiCorp Vault provides Encryption as a Service and, as it happens, can help us implement a solution without workarounds in either the application code or the Kafka data model.


Vault Encryption as a Service

Vault Transit secrets engine handles cryptographic operations on in-transit data without persisting any information. This allows a straightforward introduction of cryptography in existing or new applications by performing a simple HTTP request.

Vault fully and transparently manages the lifecycle of encryption keys, so neither developers nor operators have to worry about key compliance and rotation, while the securely stored data can always be encrypted and decrypted as long as Vault is accessible.


Kafka Integration

What if, instead of trying to selectively eliminate the data the application is not allowed to keep, we simply made sure that the application (or anyone else, for that matter) cannot read the data under any circumstances? This would be equivalent to physically removing the data, just as requested by GDPR compliance. Such a result can be achieved by selectively encrypting the information that we might want to be able to delete, and throwing away the key when deletion is requested.

However, it is necessary to perform encryption and decryption in a transparent way for the application, to reduce refactoring and integration effort for each of the applications that are using Kafka, and unlock this functionality for the applications that cannot be adapted at all.

Kafka APIs support interceptors on message production and consumption, which are the natural link in the chain where Vault’s Encryption as a Service can be leveraged. Inside the interceptor, we can perform the needed message transformation:

  • before a record is sent to Kafka, the interceptor performs encryption and adjusts the record content with the encrypted data
  • before a record is returned to a consumer client, the interceptor performs decryption and adjusts the record content with the decrypted data


Logical Deletion

Does this allow us to delete all the Kafka messages related to a single user? Yes, and it is really simple. If the encryption key that we use for encrypting data in Kafka messages is different for each of our application’s users, we can go ahead and delete the encryption key to guarantee that it is no longer possible to read the user data.
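With Vault's Transit engine, this "crypto-shredding" boils down to two calls, sketched here with a placeholder key name (deletion must be explicitly allowed on the key before it can be removed):

    vault write transit/keys/user-42/config deletion_allowed=true
    vault delete transit/keys/user-42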


Replication Outside EU

Given that the sensitive data stored in our Kafka cluster is now encrypted at rest, it is possible to replicate the cluster outside the EU, for example for disaster recovery purposes. The data will only be accessible to those users that have the right permissions to perform the cryptographic operations in Vault.



Part 2: Technicalities

In the previous part we drafted the general idea behind the integration of HashiCorp Vault and Apache Kafka for performing fine-grained encryption at rest of the messages, in order to address GDPR compliance requirements within Kafka. In this part, we take a deep dive into how to bring this idea to life.


Vault Transit Secrets Engine

Vault Transit secrets engine is part of Vault Open Source, and it is really easy to get started with. Setting the engine up is just a matter of enabling it and creating some encryption keys:
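The original commands are not reproduced here, but the setup essentially boils down to something like this (the key name is a placeholder):

    vault secrets enable transit
    vault write -f transit/keys/orders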

Crypto operations can be performed in a really simple way as well: it is just a matter of providing base64-encoded plaintext data:
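For example, an encryption call with a placeholder key name looks roughly like this:

    vault write transit/encrypt/orders plaintext=$(base64 <<< "my sensitive data")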

The resulting ciphertext will look like vault:v1:<base64-ciphertext>, where v1 represents the first key version, given that the key has not been rotated yet.

What about decryption? Well, it’s just another API call:
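Roughly along these lines (the ciphertext value is a placeholder, and the returned plaintext is base64-encoded):

    vault write transit/decrypt/orders ciphertext="vault:v1:<ciphertext>"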

Integrating Vault’s Encryption as a Service within your application thus becomes really easy and requires little to no refactoring of the existing codebase.


Kafka Producer Interceptor

The Producer Interceptor API can intercept and possibly mutate the records received by the producer before they are published to the Kafka cluster. In this scenario, the goal is to perform encryption within this interceptor, in order to avoid sending plaintext data to the Kafka cluster…

Integrating encryption in the Producer Interceptor is straightforward, given that the onSend method is invoked one message at a time.
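This is not the original implementation, but a self-contained sketch of what such an interceptor could look like; the class name, the custom configuration properties and the choice of the Transit key (one key per record key, assuming the record key identifies the user) are all assumptions:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.kafka.clients.producer.ProducerInterceptor;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class VaultEncryptingProducerInterceptor implements ProducerInterceptor<String, String> {

        private static final Pattern CIPHERTEXT = Pattern.compile("\"ciphertext\"\\s*:\\s*\"([^\"]+)\"");
        private final HttpClient http = HttpClient.newHttpClient();
        private String vaultAddr;
        private String vaultToken;

        @Override
        public void configure(Map<String, ?> configs) {
            // Hypothetical custom properties passed through the producer configuration
            vaultAddr = String.valueOf(configs.get("interceptor.vault.addr"));
            vaultToken = String.valueOf(configs.get("interceptor.vault.token"));
        }

        @Override
        public ProducerRecord<String, String> onSend(ProducerRecord<String, String> producerRecord) {
            // Encrypt the value with a per-user Transit key before it leaves the producer
            String ciphertext = encrypt("user-" + producerRecord.key(), producerRecord.value());
            return new ProducerRecord<>(producerRecord.topic(), producerRecord.partition(),
                    producerRecord.timestamp(), producerRecord.key(), ciphertext, producerRecord.headers());
        }

        private String encrypt(String keyName, String plaintext) {
            try {
                String body = "{\"plaintext\":\""
                        + Base64.getEncoder().encodeToString(plaintext.getBytes()) + "\"}";
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(vaultAddr + "/v1/transit/encrypt/" + keyName))
                        .header("X-Vault-Token", vaultToken)
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build();
                String response = http.send(request, HttpResponse.BodyHandlers.ofString()).body();
                Matcher m = CIPHERTEXT.matcher(response);
                if (!m.find()) throw new IllegalStateException("unexpected Vault response");
                return m.group(1); // e.g. vault:v1:...
            } catch (Exception e) {
                throw new RuntimeException("Vault encryption failed", e);
            }
        }

        @Override
        public void onAcknowledgement(RecordMetadata metadata, Exception exception) { }

        @Override
        public void close() { }
    }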


Kafka Consumer Interceptor

The Consumer Interceptor API can intercept and possibly mutate the records received by the consumer. In this scenario, we want to perform decryption of the data received from Kafka cluster and return plaintext data to the consumer.

Integrating decryption with Consumer Interceptor is a bit trickier because we wanted to leverage the batch decryption capabilities of Vault, in order to minimize Vault API calls.
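Again a sketch rather than the original code: for brevity it decrypts records one by one, while the approach described above batches the ciphertexts into a single call to the Transit decrypt endpoint (which accepts a batch_input array). Class and property names are assumptions:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.Base64;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.kafka.clients.consumer.ConsumerInterceptor;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class VaultDecryptingConsumerInterceptor implements ConsumerInterceptor<String, String> {

        private static final Pattern PLAINTEXT = Pattern.compile("\"plaintext\"\\s*:\\s*\"([^\"]+)\"");
        private final HttpClient http = HttpClient.newHttpClient();
        private String vaultAddr;
        private String vaultToken;

        @Override
        public void configure(Map<String, ?> configs) {
            vaultAddr = String.valueOf(configs.get("interceptor.vault.addr"));
            vaultToken = String.valueOf(configs.get("interceptor.vault.token"));
        }

        @Override
        public ConsumerRecords<String, String> onConsume(ConsumerRecords<String, String> records) {
            Map<TopicPartition, List<ConsumerRecord<String, String>>> decrypted = new HashMap<>();
            for (TopicPartition tp : records.partitions()) {
                List<ConsumerRecord<String, String>> out = new ArrayList<>();
                for (ConsumerRecord<String, String> r : records.records(tp)) {
                    // Decrypt each value with the per-user Transit key before handing it to the client
                    out.add(new ConsumerRecord<>(r.topic(), r.partition(), r.offset(), r.key(),
                            decrypt("user-" + r.key(), r.value())));
                }
                decrypted.put(tp, out);
            }
            return new ConsumerRecords<>(decrypted);
        }

        private String decrypt(String keyName, String ciphertext) {
            try {
                String body = "{\"ciphertext\":\"" + ciphertext + "\"}";
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(vaultAddr + "/v1/transit/decrypt/" + keyName))
                        .header("X-Vault-Token", vaultToken)
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build();
                String response = http.send(request, HttpResponse.BodyHandlers.ofString()).body();
                Matcher m = PLAINTEXT.matcher(response);
                if (!m.find()) throw new IllegalStateException("unexpected Vault response");
                return new String(Base64.getDecoder().decode(m.group(1)));
            } catch (Exception e) {
                throw new RuntimeException("Vault decryption failed", e);
            }
        }

        @Override
        public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) { }

        @Override
        public void close() { }
    }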

Usage

Once you have built your interceptors, enabling them is just a matter of configuring your Consumer or Producer client:
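The original snippet is not shown here; an illustrative producer configuration, with a hypothetical interceptor class name, could look like this:

    # Producer client configuration (illustrative)
    interceptor.classes=com.example.kafka.VaultEncryptingProducerInterceptor
    key.serializer=org.apache.kafka.common.serialization.StringSerializer
    value.serializer=org.apache.kafka.common.serialization.StringSerializer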

or
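again, an illustrative consumer snippet with a hypothetical interceptor class name:

    # Consumer client configuration (illustrative)
    interceptor.classes=com.example.kafka.VaultDecryptingConsumerInterceptor
    key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
    value.deserializer=org.apache.kafka.common.serialization.StringDeserializer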

Notice that value and key serializer class must be set to the StringSerializer, since Vault Transit can only handle strings containing base64 data. The client invoking Kafka Producer and Consumer API, however, is able to process any supported type of data, according to the serializer or deserializer configured in the interceptor.value.serializer or interceptor.value.deserializer properties.


Conclusions

HashiCorp Vault Transit secrets engine is definitely the technological component you may want to leverage when addressing cryptographic requirements in your applications, even when dealing with legacy components. The entire set of capabilities offered by HashiCorp Vault makes it easy to modernize applications from a security perspective, allowing developers to focus on the business logic rather than spending time finding a way to properly manage secrets.



Author: Simone Ripamonti, DevOps Engineer @Bitrock

HashiCorp and Bitrock sign Partnership

HashiCorp and Bitrock sign Partnership to boost IT Infrastructure Innovation

The product suite of the American leader combined with the expertise of the Italian system integrator are now at the service of companies

Bitrock, Italian system integrator specialized in delivering innovation and evolutionary architecture to companies, has signed a high-value strategic partnership with HashiCorp, a market leader in multi-cloud infrastructure automation and member of the Cloud Native Computing Foundation (CNCF).

HashiCorp is well-known in the IT infrastructure environment; their open source tools Terraform, Vault, Nomad and Consul are downloaded tens of millions of times each year and enable organizations to accelerate their digital transformation, as well as adopt a common cloud operating model for infrastructure, security, networking, and application automation.

As companies scale and increase in complexity, enterprise versions of these products enhance the open-source tools with features that promote collaboration, operations, governance, and multi-data center functionality. They must also rely on a trusted partner that is able to guide them through the architectural design phase and who can grant enterprise-grade assistance when it comes to application development, delivery and maintenance.

Due to the highly technical nature of the HashiCorp portfolio, being a HashiCorp partner means, above all, that the Bitrock DevOps Team has the expertise and know-how required to manage the complexity of large infrastructures. Composed of highly skilled professionals who already hold several “Associate” certifications and attended the first Vault Bootcamp in Europe, Bitrock is proudly one of the most certified HashiCorp partners in Italy. The partnership with HashiCorp represents not only the result of Bitrock’s investments in the DevOps area, but also the start of a new journey that will allow large Italian companies to rely on more agile, flexible and secure infrastructure, especially when it comes to the provisioning, protection and management of services and applications across private, hybrid and public cloud architectures.

“We are very proud of this new partnership, which not only rewards the hard work of our DevOps Team, but also allows us to offer Italian and European companies the best tools to evolve their infrastructure and digital services” – says Leo Pillon, Bitrock CEO.

“With its dedication in delivering reliable innovation through the design and development of business-driven IT solutions, Bitrock is an ideal partner for HashiCorp in Italy. We look forward to working closely with the Bitrock team to jointly enable organizations across the country to benefit from a cloud operating model. With Bitrock’s expertise around DevOps, we are confident in the results we can jointly deliver to organizations leveraging our suite of products” – says Michelle Graff, Global Partner Chief for HashiCorp.
