Infrastructure Automation

In today’s fast-paced digital world, Infrastructure Automation stands as a transformative process, enabling organizations to manage, deploy, and scale resources quickly, accurately, and efficiently. This innovative approach not only reduces human error, but also accelerates processes, improves security, and optimizes resource allocation.

Why Automation?

Imagine a scenario where repetitive and standardized tasks like server provisioning, network configuration, and application deployment happen automatically, freeing your IT team for more creative and value-adding tasks. That’s the power of Infrastructure Automation, which delivers several benefits, including:

  • Consistency and Reliability: Manual tasks introduce human error, leading to service interruptions and security vulnerabilities. Automation ensures consistency and reliability, reducing human error and keeping systems running smoothly.
  • Scalability: Automation facilitates resource scalability, allowing organizations to quickly and flexibly adapt to changing or fluctuating business needs. This is especially important in cloud and DevOps environments.
  • Security: Automation facilitates consistent and timely implementation of security policies. Resources are always configured and managed according to the strictest security standards.
  • Cost Savings: Efficient resource management and minimal errors result in significant long-term cost savings.

Key Concepts

At the core of Infrastructure Automation is Infrastructure as Code (IaC). This approach treats your infrastructure like software, defining and managing its configuration through code. This makes infrastructure management simpler, more reproducible, and easier to understand, promoting consistency and compliance.
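The declarative idea behind IaC can be reduced to “describe the desired state, and let the tool compute the changes”. The following is a toy, language-neutral sketch of that principle in TypeScript, not the API of any real tool (Terraform, Pulumi, and similar tools apply the same idea at much larger scale):

```typescript
// Toy illustration of declarative IaC: you describe the desired state,
// and a reconciliation step computes what must change to reach it.
type ServerSpec = { name: string; size: string };

// The desired state, as it would be declared in code.
const desired: ServerSpec[] = [
  { name: "web-1", size: "small" },
  { name: "web-2", size: "small" },
];

// The state that actually exists right now.
const current: ServerSpec[] = [{ name: "web-1", size: "small" }];

// Compute the servers present in the desired state but missing in reality.
function toCreate(desired: ServerSpec[], current: ServerSpec[]): ServerSpec[] {
  const existing = new Set(current.map((s) => s.name));
  return desired.filter((s) => !existing.has(s.name));
}

const plan = toCreate(desired, current);
// plan contains only "web-2": the tool would create it to converge.
```

Because the code, not a human operator, is the source of truth, running the same reconciliation twice produces the same result: this is what makes IaC reproducible.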

To translate principle into practice, we rely on continuous integration and continuous deployment: every change to the source code is automatically built, tested, and released. This approach, known as CI/CD (Continuous Integration/Continuous Deployment), plays a key role in automating DevOps processes, enabling swift and reliable release of applications. In a dedicated blog post, we’ll discuss CI/CD in more detail.

Orchestration coordinates processes and services within an automated environment. Orchestrators ensure that activities occur in the correct order and that the infrastructure runs smoothly.

Monitoring continues to be important as it provides detailed information about the performance and health of the system. Logs and metrics facilitate traceability and timely resolution of issues.

Essential Tools

Several powerful tools fuel the engine of Infrastructure Automation. Let’s take a look at some of the major players.

  • HashiCorp Terraform is a widely adopted IaC tool for defining, building, and managing infrastructure. It supports different clouds by using providers and is versatile thanks to its declarative syntax.
  • Ansible is a powerful automation tool that leverages playbooks to define and automate infrastructure configuration and management tasks, giving you granular control over your systems.
  • Jenkins and GitLab are CI/CD automation tools that enable continuous integration and continuous deployment, automating the development and release process.
  • Kubernetes is a container orchestration system that automates the deployment, scaling, and management of containerized applications, keeping container environments running smoothly.
  • Prometheus and Grafana are commonly used to monitor and visualize infrastructure performance, provide detailed insight into system behavior using exporters, and enable the definition of alerts on the state of the infrastructure.

It’s Time to Automate

Infrastructure Automation is a strategic imperative for organizations looking to improve operational efficiency, reduce errors, and adopt DevOps practices. By leveraging tools like Terraform, Ansible, Kubernetes, and monitoring solutions, you can transform how you manage your IT infrastructure, unlocking agility, efficiency, and a competitive advantage. So, embrace automation, and watch your business grow in today’s digital environment.


Main Author: Matteo Gazzetta, Team Leader and DevOps Engineer @ Bitrock

FHIR

FHIR, an acronym for Fast Healthcare Interoperability Resources, is an interoperability standard for exchanging electronic health data. Developed by HL7 (Health Level Seven International), an international organization that focuses on standardization in the healthcare sector, FHIR is rapidly becoming the benchmark standard for clinical data management. It is a scalable and standardized format, which means that health information can be easily exchanged between different systems without sacrificing data integrity. It is suitable for a variety of applications and is supported by a wide range of vendors, including Apple, Google, and Microsoft.
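To make this concrete, here is a minimal, illustrative Patient resource following the shape defined by the FHIR R4 standard, modeled as a plain TypeScript object (the personal data is fictional):

```typescript
// A minimal FHIR "Patient" resource as a plain object. Field names follow
// the FHIR R4 Patient resource definition; the values are fictional.
const patient = {
  resourceType: "Patient",
  id: "example",
  name: [{ family: "Rossi", given: ["Maria"] }],
  gender: "female",
  birthDate: "1984-06-12",
};

// Serialized as JSON, this is the payload any FHIR-aware system can exchange.
const payload = JSON.stringify(patient);
```

The `resourceType` field is what lets a receiving system recognize and validate the resource, regardless of which vendor produced it.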

FHIR is revolutionizing healthcare by enabling secure data sharing among hospitals, clinics, and even patients themselves. This interconnectedness opens a world of possibilities for improved care.

Data Interoperability and Artificial Intelligence

With over 2 million FHIR applications, the global healthcare market fueled by data interoperability standards is projected to reach a staggering $8 billion by the end of the year. Undoubtedly, 2024 marks a turning point for the healthcare sector. Stakeholders across the industry are demanding seamless data exchange to improve care delivery and outcomes. Patients demand better access to their own health information, frontline clinicians need reliable, accurate data at their fingertips for informed treatment decisions, and healthcare payers need interoperable data to facilitate collaborative, team-based care that reduces costs and improves quality.

However, converting existing data sources into FHIR-compatible formats and integrating sources with different security protocols takes time and expertise. It goes without saying that with the increasing availability of healthcare data and the rapid progress in analytic techniques – whether machine learning, logic-based or statistical – Artificial Intelligence tools could transform the healthcare sector.

The World Health Organization (WHO) recognizes the potential of AI in enhancing health outcomes by strengthening clinical trials; improving medical diagnosis, treatment, self-care and person-centered care; and supplementing health care professionals’ knowledge, skills and competencies. For example, AI could be beneficial in settings with a lack of medical specialists, e.g. in interpreting retinal scans and radiology images among many others.

In response to countries’ growing need to responsibly manage the rapid rise of AI health technologies, the World Health Organization has released a new publication listing key regulatory considerations on Artificial Intelligence (AI) for health. The publication outlines six areas for regulation:

  • To foster trust, the publication stresses the importance of transparency and documentation, such as through documenting the entire product lifecycle and tracking development processes.
  • For risk management, issues like ‘intended use’, ‘continuous learning’, human interventions, training models and cybersecurity threats must all be comprehensively addressed, with models made as simple as possible.
  • Externally validating data and being clear about the intended use of AI helps assure safety and facilitate regulation.
  • A commitment to data quality, such as through rigorously evaluating systems pre-release, is vital to ensuring systems do not amplify biases and errors.
  • The challenges posed by important, complex regulations – such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States of America – are addressed with an emphasis on understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection.
  • Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners, can help ensure products and services stay compliant with regulation throughout their life cycles.

AI and FHIR are a powerful duo with both positive and negative implications. AI can unlock the potential of FHIR data, leading to better analysis, improved interoperability, personalized care, and automation. At the same time, there are many concerns regarding privacy, bias, lack of transparency, and accessibility of AI tools in healthcare. In a nutshell, finding the right balance between these forces is crucial to maximize the benefits of AI in FHIR while mitigating its potential risks (AI Observability tools and the MLOps approach are critical factors).

Enabling a Connected Future for Healthcare

In this rapidly growing scenario, Bitrock supports organizations to achieve FHIR compliance swiftly and efficiently, without overburdening IT resources. Few other companies can match the expertise of Bitrock’s professionals, who have hundreds of days of experience working on projects involving data exchange and processing within the FHIR protocol.

Bitrock supports enterprise clients with an integrated, end-to-end approach that leverages the power of cutting-edge technologies, including modern event-driven paradigms and data-intensive application offerings. In addition, the Fortitude Group’s expertise in AI/ML and MLOps (AI Readiness & AI Observability) allows us to develop and implement a path that seamlessly moves from protocol implementation to the use of advanced data processing techniques using Artificial Intelligence and Machine Learning, enabling innovative use cases with a clear business impact.

All this unlocks seamless data sharing across the care continuum, paving the way for a future of connected, collaborative care that benefits all stakeholders.

From A to Z Front-End Development Technologies (Part II)

In our previous exploration, we unveiled the varied landscape of the main front-end technologies and tools, from the dynamic Angular to the versatile Node.js. Fasten your coding seatbelts as we dive deep into the ever-evolving ecosystem of front-end technologies, all the way to the last letter of the alphabet!

O is for OAuth

OAuth is a standard authorization protocol used to securely and selectively grant third-party applications access to user resources on the Internet without sharing login credentials. It is widely used in authentication and authorization flows for web applications and online services: instead of handing over a password, the user allows a third-party application limited access to their information on another website. This is done by issuing access tokens that grant specific actions or resources for a limited period of time, thereby maintaining the security and privacy of user data.
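Once the token has been obtained, calling a protected API is a matter of sending it in place of any credentials. The sketch below illustrates this; the URL and token value are placeholders, not a real endpoint:

```typescript
// Sketch: calling a protected API with an OAuth access token.
// The token and URL are hypothetical placeholders.
const accessToken = "hypothetical-access-token";

function buildAuthorizedRequest(url: string, token: string) {
  return {
    url,
    method: "GET",
    headers: {
      // The access token is sent instead of the user's credentials.
      Authorization: `Bearer ${token}`,
    },
  };
}

const request = buildAuthorizedRequest(
  "https://api.example.com/profile",
  accessToken
);
```

If the token expires or is revoked, the server simply rejects the request: the user’s actual password is never exposed to the third-party application.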

P is for Progressive Web App (PWA)

Progressive Web Apps (PWAs) are web applications that combine the best features of native apps and traditional websites. PWAs are designed to provide an engaging, fast, and reliable user experience even when there is no internet connection. By using modern web technologies such as Service Workers, PWAs can be installed directly on the user’s device, allowing access through the app icon, and providing features such as push notifications, offline access, and optimized performance. PWAs are often considered a compelling alternative to native apps.
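The first step of any PWA is registering a Service Worker. The sketch below feature-detects support before registering, since the API only exists in supporting browsers; "/sw.js" is a placeholder path for your own service worker script:

```typescript
// Feature-detect Service Worker support before attempting registration.
function canRegisterServiceWorker(): boolean {
  const nav = (globalThis as any).navigator;
  return typeof nav !== "undefined" && "serviceWorker" in nav;
}

// In a browser, register the worker; elsewhere (e.g. Node), do nothing.
async function registerServiceWorker(): Promise<void> {
  if (!canRegisterServiceWorker()) return;
  await (globalThis as any).navigator.serviceWorker.register("/sw.js");
}
```

The registered worker can then intercept network requests and serve cached responses, which is what gives a PWA its offline capability.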

Q is for Qwik

Qwik.js is Builder.io’s new framework, designed for building fast, lightweight web applications with an innovative approach focused on fast loading and interactivity through code splitting, lazy loading, and application rendering. The main features introduced compared to traditional frameworks such as React are lazy execution and resumability. Additionally, the meta-framework Qwik City completes the framework.

Qwik City includes an API that supports components with common server-focused functionalities (routing, layouts, endpoints, middleware, etc.). The new features introduced by Qwik are certainly highly anticipated, and only time and community usage will determine the success of this new framework.

R is for React

React is a JavaScript library developed by Facebook to create dynamic and reactive user interfaces. It is based on the concept of reusable components, which allows developers to build and manage the application’s user interface in a modular way. One of the special features of React is the Virtual DOM, which optimizes the updating of interface elements, improving the overall performance of the application.

React uses JSX, an extension of JavaScript that allows writing UI components in an HTML-like syntax, making the creation of interfaces more intuitive. This library uses a declarative approach that allows developers to create dynamic and reactive user interfaces, updating only the relevant parts when the state of the application changes, improving overall efficiency.
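React’s core idea of the UI as a function of state can be sketched without the framework itself. The component below is an illustrative pure function returning markup as a string; in real React it would return JSX, and React would diff the result against the Virtual DOM:

```typescript
// Framework-free sketch of React's core idea: the view is a pure
// function of props/state, re-computed whenever state changes.
type Props = { count: number };

function CounterView({ count }: Props): string {
  // In React this would be JSX; here we return plain markup.
  return `<button>Clicked ${count} times</button>`;
}

// A "state change" re-renders the component, producing a new description
// of the UI; React would then patch only what actually changed.
const before = CounterView({ count: 0 });
const after = CounterView({ count: 1 });
```

Because the component is pure, the same props always yield the same output, which is precisely what makes the declarative model predictable.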

Thanks to its active community, rich ecosystem of related libraries and tools (such as React Router, Redux, Next.js), and its flexibility in integrating with other technologies, React has become one of the most popular frameworks for developing dynamic, complex, and high-performance user interfaces in the modern world of web development.

S is for Sass

Sass (Syntactically Awesome Stylesheets) is an extension of the CSS language that provides additional features and improvements to the basic CSS syntax. This technology allows developers to create more complex stylesheets in a more efficient and organized way by introducing concepts such as variables, nesting, mixins, inheritance, and more, providing modular and readable structure stylesheets.

Variables allow values to be defined and reused consistently, while nesting allows selectors to be organized hierarchically, making the stylesheet structure clearer. Mixins allow code snippets to be reused, allowing developers to define style blocks that can be called in different contexts. Sass also offers advanced features such as inheritance rules, which make it easy to manage styles for multiple similar elements. Once Sass code is written, it is compiled into standard CSS, ensuring compatibility with browsers. This transition from Sass to CSS is done through a compilation process that can be integrated into the development workflow to automate the generation of CSS files.

T is for TypeScript

TypeScript is an open-source programming language developed by Microsoft. It is a typed version of JavaScript that adds optional static types to the language. This typed approach offers many advantages, including better code understanding, error detection during development, and increased scalability for large projects. One of the key features of TypeScript is its static typing system, which allows developers to declare the types of variables, function parameters, and more. This allows type errors to be caught during development and common runtime bugs to be avoided. TypeScript is becoming popular among developers looking to improve the quality and maintainability of their JavaScript code.
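For example (`User` and `greet` are illustrative names, not part of any library):

```typescript
// A declared shape: every User must have a name and an age.
interface User {
  name: string;
  age: number;
}

function greet(user: User): string {
  return `Hello ${user.name}, age ${user.age}`;
}

const alice: User = { name: "Alice", age: 30 };
const message = greet(alice);

// The compiler rejects these at build time, before the code ever runs:
// greet({ name: "Bob" });  // error: property 'age' is missing
// greet("Bob");            // error: a string is not a User
```

The two commented-out calls are exactly the kind of mistake that plain JavaScript would only reveal at runtime, if at all.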

U is for Uglify

Uglify is a JavaScript code optimization tool used to minify and compress JavaScript files. Minification improves web page loading performance by removing whitespace and comments and by shortening variable names to minimize the size of the source files. Uglify optimizes and compresses JavaScript code, minimizing file size while maintaining its functionality. It is widely used in web development to optimize resources and improve the overall performance of applications.
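A grossly simplified sketch of what minification does is shown below. Real tools such as UglifyJS or Terser parse the code into an AST and can also rename variables safely, rather than using regexes as this toy version does:

```typescript
// Toy minifier: strip line comments and collapse whitespace.
// Real minifiers use a full JavaScript parser, not regexes.
function naiveMinify(source: string): string {
  return source
    .replace(/\/\/.*$/gm, "") // drop line comments
    .replace(/\s+/g, " ")     // collapse runs of whitespace
    .trim();
}

const original = `
  // add two numbers
  function add(a, b) {
    return a + b;
  }
`;

const minified = naiveMinify(original);
// minified: "function add(a, b) { return a + b; }"
```

Even this naive pass noticeably shrinks the payload; production minifiers go much further while guaranteeing identical behavior.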

V is for Vue.js

Vue.js is an advanced framework for building user interfaces. This lightweight and powerful framework is ideal for building single web pages as well as complex web applications. Its structure is based on components, making it easy to create, manage, and reuse UI parts. Vue.js provides an efficient reactivity system, meaning that changes in the state of the application are automatically reflected in the user interface, ensuring a better end-user experience. One of Vue.js’s distinctive features is its adaptability: it can be integrated incrementally into existing projects, making it easy to learn and use even for developers with varying levels of experience.

W is for Webpack

Webpack is a powerful module bundler for JavaScript applications. It is a fundamental tool for modern front-end resource management. Its main function is to manage dependencies within an application by grouping different source files (JavaScript, CSS, images, etc.) and converting them into optimized bundles that are ready for use in the browser.

Webpack allows developers to organize code, define dependencies, and automate a series of tasks such as code minification and image optimization. Webpack can be configured using loaders and plugins, making it highly customizable and allowing developers to adapt the build process to the specific needs of their projects.

X is for XState

XState is a JavaScript library for state management and state machine creation. It implements the concept of state machines in the context of application development, providing a clear and declarative way to manage application state. A state machine is a model that represents the behavior of a system through various states and transitions between them. XState allows developers to declaratively define application states and the transitions between them, making application behavior more predictable and manageable.
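The state-machine concept can be sketched in a few lines of plain TypeScript. Note that this is an illustration of the idea only, not XState’s actual API:

```typescript
// A hand-rolled finite state machine for a data fetch, as a transition table.
type State = "idle" | "loading" | "success" | "failure";
type Event = "FETCH" | "RESOLVE" | "REJECT";

const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle: { FETCH: "loading" },
  loading: { RESOLVE: "success", REJECT: "failure" },
  success: {},
  failure: { FETCH: "loading" }, // allow retrying after a failure
};

function transition(state: State, event: Event): State {
  // Events with no defined transition leave the state unchanged.
  return transitions[state][event] ?? state;
}

let state: State = "idle";
state = transition(state, "FETCH");   // "loading"
state = transition(state, "RESOLVE"); // "success"
```

Because every possible transition is declared up front, impossible situations (such as resolving a fetch that never started) simply cannot occur, which is the predictability XState formalizes.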

XState is often used for complex applications where state management is critical, such as large-scale applications, applications with complex user interfaces, or applications that require strict state management, due to its declarative nature and unambiguous state management.

Y is for Yarn

Yarn is a JavaScript package manager. It focuses on efficiency, security, and speed when managing dependencies for JavaScript projects. Yarn utilizes a local cache to store previously downloaded packages, speeding up dependency management and reducing installation and module update times compared to other package managers. 

One of Yarn’s key features is the ability to consistently manage dependency versions and ensure that all members of the development team have the same package versions installed in their working environment. This helps reduce potential inconsistencies or errors caused by different package versions within the project.

Z is for Zeplin

Zeplin is a collaboration platform for designers and developers that facilitates the transfer of design projects from design tools such as Sketch, Adobe XD, or Figma to developers. The platform allows designers to easily upload their design projects and share specifications, colors, dimensions, assets, and guidelines directly with the development team members. Zeplin helps improve communication and collaboration between designers and developers, giving developers easy access to the design elements needed to faithfully implement the user interface in the final product.


While this exploration covered 26 key technologies for front-end development, remember: it’s just the tip of the iceberg. Stay curious, explore, experiment, and keep learning. So go forth and code, and don’t forget to have fun along the way!


Main Author: Mattia Ripamonti, Team Leader and Front-End Developer @ Bitrock

Front-End Development Technologies (Part I)

The world of web development requires the use of a wide range of front-end technologies and tools to create responsive and functional user interfaces. From HTML, CSS, and JS to advanced frameworks, and tools to optimize development workflows, the range of technologies is vast and constantly evolving.

That’s why we thought it would be a good idea to create the ultimate A to Z list for Front-End Development. We went through it letter by letter and picked the technologies and tools that we felt most nobly represented each letter of the alphabet, then laid out exactly why. From Angular to Zeplin, passing through Flutter and React – they’re all here!

A is for Angular

Angular is a JavaScript framework developed by Google and used to build single-page applications (SPAs) and dynamic web applications. Its main feature is to organize code into reusable modules and components. Based on TypeScript, Angular provides static typing that helps you identify code errors during development, thereby making your applications more robust.

Angular introduces the concept of “two-way data binding”, which allows synchronization of data between view and model. This simplifies application state management and enables greater interactivity. Additionally, the framework provides a range of built-in features such as routing, state management, form validation, testability, and integration with external services.

B is for Babel

Babel is a JavaScript compiler tool that allows developers to write code using the latest language features and convert it into a format compatible with older browser versions. This tool is very important for cross-browser compatibility as it allows older browsers to use the latest JavaScript features. Babel is highly configurable and customizable, allowing developers to tailor it to the specific needs of their projects, increasing development productivity and ensuring a consistent experience for end users.

C is for Cypress

Cypress is a powerful end-to-end (E2E) testing framework designed specifically for front-end development, known for its user-friendly nature and reliability. Cypress allows users to simulate user interactions, run tests across different browsers, access the DOM directly, and write tests declaratively through a simple and intuitive API. Because tests run in the browser alongside the application, they are fast and reliable, providing instant feedback during development.

Due to its open-source nature and active community, Cypress is a popular choice among developers when it comes to ensuring the quality and reliability of front-end applications throughout the development process.

D is for D3.js

D3.js, short for “Data-Driven Documents,” is one of the most powerful and flexible libraries for data manipulation and creating dynamic visualizations on the web. With its versatility and power, D3.js enables developers to transform complex data into clear and engaging visualizations, creating charts, geographical maps, bar charts, histograms, pie charts, and a wide range of other visual representations.

What makes D3.js unique is its approach, which is based on the fundamental concepts of manipulating the Document Object Model (DOM) with data. This means that D3.js allows direct data binding to DOM elements, providing greater flexibility and control in creating custom and interactive visualizations.

The library provides a powerful set of tools for dynamically handling data presentation, enabling smooth transitions between different view states. D3.js leverages SVG (Scalable Vector Graphics) and Canvas features to create highly customizable and high-performance charts that scale to different sizes and devices.

E is for ESLint

ESLint is a JavaScript Linting tool widely used by the software development community. It works by analyzing JavaScript code to identify errors, style inconsistencies, and potential problems. For example, it identifies common errors such as undeclared variables, dead code, or style inconsistencies, helping developers to write cleaner, more readable, and maintainable code. The result of working with ESLint is an improvement in the overall quality of the code. 

ESLint is often integrated into development processes, such as pre-commit checks or within Continuous Integration/Continuous Deployment (CI/CD) workflows, to ensure that the code conforms to defined standards and to identify potential problems during development, reducing the likelihood of errors in production.

F is for Flutter

Flutter is an open-source framework developed by Google for creating native applications for mobile, web, and desktop. Based on the Dart language, with its highly customizable widgets and support for hot reload, Flutter simplifies the development process and allows developers to create cross-platform applications with a single code base.

G is for GraphQL

GraphQL is a query language for APIs, designed to provide an efficient and flexible way to request and deliver data from a server to a client. Unlike traditional REST APIs, GraphQL allows clients to specify exactly the data they need, reducing the amount of data transferred and improving application performance. This allows developers to precisely define the structure of requests to get exactly the desired data, providing greater flexibility and scalability in optimizing API requests.
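For illustration, here is a query against a hypothetical schema: the client names exactly the fields it wants (`name` and `email`), rather than receiving the full user object a REST endpoint might return:

```typescript
// An illustrative GraphQL query against a hypothetical schema.
const query = `
  query GetUser($id: ID!) {
    user(id: $id) {
      name
      email
    }
  }
`;

// The query and its variables are typically POSTed as JSON
// to a single GraphQL endpoint.
const body = JSON.stringify({ query, variables: { id: "42" } });
```

The server resolves only the requested fields, which is how GraphQL avoids both over-fetching and under-fetching.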

H is for Husky

Husky is a tool primarily used to manage pre-commit hooks in Git repositories. Pre-commit hooks are scripts that are automatically executed before a commit is created in the repository. These scripts enable various checks or actions to be performed on the code before it is committed, ensuring code quality and consistency. This allows developers to easily define and run custom scripts or specific commands to perform tests, code linting, formatting checks, and other necessary validations before making a commit to a shared repository.

I is for Immer

Immer is a JavaScript library that simplifies the management of immutable state. By using Immer, developers can easily produce and manipulate immutable data structures more intuitively. The library provides a simpler and more readable approach to updating immutable objects, allowing developers to write clearer and more maintainable code when managing application state.
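The manual immutable update that Immer automates looks like this in plain TypeScript; with Immer, the nested spreads collapse into a single “mutation” on a draft (sketched in the closing comment):

```typescript
// Manual immutable update: copy every level you touch.
type AppState = { user: { name: string; visits: number }; theme: string };

const state: AppState = { user: { name: "Ada", visits: 1 }, theme: "dark" };

// Produce a NEW state with visits incremented; the original is untouched.
const next: AppState = {
  ...state,
  user: { ...state.user, visits: state.user.visits + 1 },
};

// With Immer, the same update reads as a mutation on a draft:
//   const next = produce(state, (draft) => { draft.user.visits++; });
```

The deeper the state tree, the more spreads the manual version needs, which is exactly the boilerplate Immer removes.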

Immer is particularly useful in contexts such as React applications, Redux, or any situation where maintaining stability and predictability of the state is critical. Its developer-friendly features and functional programming approach have made it an attractive alternative to the more popular Immutable.js.

J is for Jest

Jest is a JavaScript testing framework known for its ease of use, simple configuration, and execution speed. Jest is widely used for unit and integration testing.

One of the main advantages of Jest is its comprehensive and integrated structure, which includes everything needed to perform tests efficiently. It comes with features such as built-in automocking that simplifies dependency substitution, support for asynchronous testing, and parallel test execution that improves productivity and reduces execution times.

K is for Kotlin

Kotlin is a modern and versatile programming language that can be used to develop applications for multiple platforms, including Android, web, server-side, and more.

Recognized for its conciseness, clarity, and interoperability with Java, Kotlin offers many advanced features that increase developer productivity. Thanks to its expressive syntax and type safety, Kotlin has become a popular choice for developers who want to write clean and maintainable code across platforms. Although it’s not strictly a web technology, its importance and steady growth in mobile application development deserve to be mentioned.

L is for Lodash

Lodash is a JavaScript library that provides a set of utility functions for data manipulation, array handling, object management, and more. It is known for providing efficient and performant methods that make it easier to write more concise and readable code. Lodash includes a wide range of utility functions such as map, filter, and many others that allow developers to work with complex data easily and efficiently.

In addition to utility functions, Lodash also provides support for string manipulation, number handling, creating iterator functions, and other advanced operations that make data management more intuitive and powerful.
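As an illustration of the kind of utility Lodash provides, here is what its `chunk` function does, re-implemented in plain TypeScript (in a real project you would import it from the lodash package rather than writing it yourself):

```typescript
// Plain-TypeScript re-implementation of the behavior of lodash's chunk:
// split an array into groups of the given size.
function chunk<T>(array: T[], size: number): T[][] {
  const result: T[][] = [];
  for (let i = 0; i < array.length; i += size) {
    result.push(array.slice(i, i + size));
  }
  return result;
}

const pairs = chunk([1, 2, 3, 4, 5], 2);
// → [[1, 2], [3, 4], [5]]
```

Lodash bundles hundreds of such battle-tested helpers, saving teams from re-deriving (and re-debugging) them in every project.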

M is for Material-UI

Material-UI is a library of React UI components that implements Google’s Material Design system. It offers a wide range of pre-styled and configurable components to help developers efficiently create modern and intuitive interfaces. The library provides a comprehensive collection of components such as buttons, inputs, navigation bars, tables, cards, and more. Each component is designed to follow Material Design guidelines, providing a consistent and professional look and feel throughout the application. In addition to providing predefined components, Material UI also allows for customization through themes and styles. Developers can easily customize the look and feel of components to meet specific project requirements while maintaining an overall consistent design.

N is for Node.js

How could Node.js not be mentioned on this list? Well, here we are: Node.js is an open-source JavaScript runtime based on Google Chrome’s V8 engine that enables server-side execution of JavaScript code. It is widely used for building scalable and fast web applications, allowing developers to use JavaScript on both the client and server sides. Node.js offers high efficiency through its non-blocking asynchronous model, which enables efficient handling of large numbers of requests. With its extensive module ecosystem (npm), Node.js is a popular choice for developing web applications, APIs, and back-end services.


We’re only halfway through our exploration of the ultimate A to Z list for Front-End Development!

Stay tuned for the second act, as we’re going to explore some more interesting tools and technologies, each pushing the boundaries of front-end development.


Main Author: Mattia Ripamonti, Team Leader and Front-End Developer @ Bitrock

AI Act

Artificial Intelligence has played a central role in the world’s digital transformation, impacting a wide range of sectors, from industry to healthcare, from mobility to finance. This rapid development has increased the need and urgency of regulating technologies that by their nature have important and sensitive ethical implications. For these reasons, as part of its digital strategy, the EU has decided to regulate Artificial Intelligence (AI) to ensure better conditions for the development and use of this innovative technology. 

It all started in April 2021, when the European Commission proposed the first-ever EU legal framework on AI. Parliament’s priority is to ensure that AI systems used in the EU are safe, transparent, accountable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.

The fundamental principles of the regulation are empowerment and self-evaluation. This law ensures that rights and freedoms are at the heart of this technological development, securing a balance between innovation and protection.

How did we get here? Ambassadors from the 27 European Union countries voted unanimously on February 2, 2024, on the latest draft text of the AI Act, confirming the political agreement reached in December 2023 after tough negotiations. The world’s first document on the regulation of Artificial Intelligence is therefore approaching a final vote scheduled for April 24, 2024. Now is the time to keep up with the latest developments, but in the meantime, let’s talk about what it means for us.

Risk Classification

The AI Act has a clear approach, based on four different levels of risk: 

  • Minimal or no risk 
  • Limited risk
  • High risk 
  • Unacceptable risk 

The greater the risk, the greater the responsibility for those who develop or deploy Artificial Intelligence systems, while applications considered too dangerous cannot be authorized at all. The only cases that are not covered by the AI Act are technologies used for military and research purposes.

Most AI systems pose minimal risk to citizens’ rights, and their security is not subject to specific regulatory obligations. A logic of transparency and trust applies to the high-risk categories, which can have very significant implications: for example, AI systems used in the fields of health, education, or recruitment.

In cases such as chatbots, there are also specific transparency requirements to prevent user manipulation, so that the user is aware of interacting with an AI. A case in point is deepfakes: images that realistically reproduce a person’s face and movements, creating false representations that are difficult to recognise.

Finally, any application that poses an unacceptable risk is prohibited, such as applications that may manipulate an individual’s free will, or that perform social scoring or emotion recognition at work or school. An exception applies to Remote Biometric Identification (RBI) systems, which can identify individuals at a distance in publicly accessible spaces by matching physical, physiological, or behavioural characteristics against a database.

Violations can result in fines ranging from €7.5 million, or 1.5 percent of global annual turnover, to €35 million, or 7 percent of global annual turnover, depending on the size of the company and the severity of the violation.

The AI Act as a growth driver

For businesses, the AI Act represents a significant shift, introducing different compliance requirements depending on the risk classification of AI systems – as seen before. Companies developing or deploying AI-based solutions will have to adopt higher standards for high-risk systems and face challenges related to security, transparency, and ethics. At the same time, this legislation could act as a catalyst for innovation, pushing companies to explore new frontiers in AI while respecting principles of accountability and human rights protection.

The AI Act can thus be an opportunity to support a virtuous and more sustainable development of markets on both the supply and demand side. It is no coincidence that we use the word ‘sustainable’: Artificial Intelligence can also be an extraordinary tool for accelerating the energy transition, with approaches that save not only time but also resources and energy, like Green Software, which focuses on developing software with minimal carbon emissions.

Artificial Intelligence, indeed, allows companies to adapt to market changes by transforming processes and products, which in turn can change market conditions by giving new impetus to technological advances.

The present and the future of AI in Italy

The obstacles that Italian small and medium-sized enterprises face today in implementing solutions that accelerate their digitalization processes are primarily economic, since financial and cultural support measures are often inadequate. There is a lack of real understanding of the short-term impact of these technologies and of the risks of being excluded from them.

In Italy, the lack of funding for companies risks delaying more profitable and sustainable experiments with Artificial Intelligence in the short and long term. As an inevitable consequence, the implementation of the necessary infrastructure and methods may also be delayed.

The AI Act itself will not be operational “overnight,” but will require a long phase of issuing delegated acts, identifying best practices and writing specific standards.

However, it is essential to invest now in cutting-edge technologies that enable efficient and at the same time privacy-friendly data exchange. This is the only way for Artificial Intelligence to truly contribute to the country’s growth.


Sources
News European Parliament – Shaping the digital transformation: EU strategy explained
News European Parliament – AI Act: a step closer to the first rules on Artificial Intelligence

Green Software

As information technology plays an ever more important role in our daily lives, its energy demand is increasing rapidly. According to some estimates, by 2025 data centers alone will consume about 20 percent of global electricity, and the entire digital ecosystem will account for one-third of global demand.

The concept of Green Software was developed to address these issues, by focusing on developing software with minimal carbon emission. The software we develop today will run on billions of devices over the next few years, consuming energy and potentially contributing to climate change.

Green IT represents a shift in the software industry, enabling software developers and all stakeholders to contribute to a more sustainable future.

Key principles and associated best practices for reducing the carbon footprint of software include:

  • Carbon Efficiency: Emit the least amount of carbon possible 
  • Energy Efficiency: Use the least amount of energy possible
  • Carbon Awareness: Do more when the electricity is cleaner and less when it is dirtier

Carbon Efficiency

Our goal is to emit as little carbon into the atmosphere as possible. Therefore, the main principle of Green Software is Carbon Efficiency: minimizing carbon emissions per unit of work.

In practice, Carbon Efficiency means developing applications that provide the same benefits to us and our users while producing a lower carbon footprint.

Energy Efficiency

Creating an energy-efficient application goes hand in hand with creating a carbon-efficient one, since most electricity is still generated from carbon-emitting sources. Green Software aims to use as little electricity as possible and to take responsibility for the electricity it does consume.

It’s true that all software uses electricity, from mobile phone applications to machine learning models trained in data centers. Increasing the energy efficiency of applications is one of the best strategies for reducing power consumption and carbon footprint. But our responsibility goes beyond that.
Quantifying energy consumption is a critical starting point, but it’s also important to consider hardware overhead and end-user impact. By minimizing waste and optimizing energy use at every stage, we can significantly reduce the environmental impact of software. We can also change the way users interact with the product, taking advantage of energy proportionality.

Carbon Awareness

Generating electricity involves using different resources at different times and in different places, each with different carbon emissions. Some sources, such as hydroelectric, solar, and wind power, are clean, renewable sources with low carbon emissions. Fossil fuel sources, by contrast, release varying amounts of carbon: gas-fired power plants produce less carbon than coal-fired ones, but both produce more carbon than renewable sources.

The concept of carbon awareness is to increase activity when energy is produced from low carbon sources and to decrease activity when energy is produced from high carbon sources.

A global shift is already under way: power grids around the world are moving from primarily burning fossil fuels to generating energy from low-carbon sources such as solar and wind. This is one of our best chances of meeting global reduction targets.

How can you be more carbon aware?

More than environmental goals, economics are driving this change. Renewable energy will become more important over time as its costs fall. To speed up the transition, we need to make fossil fuel plants less economical and renewable energy plants more profitable. The best way to do this is to use more electricity from low-carbon sources, such as renewables, and less from high-carbon sources.

The two main principles of Carbon Awareness are:

  • Demand Shifting: Being carbon aware means adjusting demand to changes in carbon intensity. If your workload allows flexibility, you can run it when carbon intensity is low and pause it when carbon intensity is high: for example, training a machine learning model at a different time, or in a region, where carbon intensity is significantly lower. Studies have shown that these measures can reduce CO2 emissions by 45% to 99%, depending on the share of renewable energy powering the grid.
  • Demand Shaping: While demand shifting moves computation to the location or time of lowest carbon intensity, demand shaping does the opposite: rather than moving the workload, it adjusts the workload itself to match the current supply of clean energy.
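To make demand shifting concrete, here is a minimal TypeScript sketch: given an hourly forecast of grid carbon intensity (all figures invented for illustration), it picks the contiguous window with the lowest average intensity in which to start a deferrable job. A real scheduler would fetch the forecast from a grid carbon-intensity service rather than hard-code it.

```typescript
// Hourly carbon-intensity forecast in gCO2eq/kWh (illustrative values).
interface IntensitySlot {
  hour: number;      // hour of day, 0-23
  intensity: number; // grams of CO2-equivalent per kWh
}

// Demand shifting: choose the contiguous window of `duration` hours
// with the lowest average carbon intensity for a deferrable workload.
function lowestCarbonWindow(forecast: IntensitySlot[], duration: number): number {
  let bestStart = 0;
  let bestAvg = Infinity;
  for (let start = 0; start + duration <= forecast.length; start++) {
    const slice = forecast.slice(start, start + duration);
    const avg = slice.reduce((sum, s) => sum + s.intensity, 0) / duration;
    if (avg < bestAvg) {
      bestAvg = avg;
      bestStart = start;
    }
  }
  return forecast[bestStart].hour; // hour at which to start the job
}

// Example: solar generation makes late morning the cleanest period.
const forecast: IntensitySlot[] = [
  { hour: 8, intensity: 420 },
  { hour: 9, intensity: 380 },
  { hour: 10, intensity: 210 },
  { hour: 11, intensity: 140 },
  { hour: 12, intensity: 130 },
  { hour: 13, intensity: 190 },
  { hour: 14, intensity: 350 },
];

console.log(lowestCarbonWindow(forecast, 3)); // → 11 (hours 11-13 have the lowest average)
```

The same loop, run against a live forecast, is essentially what carbon-aware batch schedulers do before kicking off training jobs or nightly builds.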

Carbon-aware applications shape their demand according to the available carbon supply: as the carbon cost of running the application increases, demand is reduced to match. This can happen at the user’s choice or automatically.

Video conferencing software that automatically adjusts streaming quality is an example of this: when bandwidth is limited, audio quality is prioritized over video, which is streamed at a lower resolution.

Hardware Efficiency

If we take into account the embodied carbon, it is clear that by the time we come to buy a computer, a significant amount of carbon has already been emitted. In addition, computers also have a limited lifespan, so over time they become obsolete and need to be upgraded to meet the needs of the modern world. 

In these terms, hardware is a proxy for carbon, and since our goal is to be carbon efficient, we must also be hardware efficient. There are two main approaches to hardware efficiency:

  • For end-user devices, it’s extending the lifespan of the hardware.
  • For cloud computing, it’s increasing the utilization of the device.

How can Green Software be measured?

Most companies use the Greenhouse Gas (GHG) Protocol to calculate total carbon emissions. By learning how to evaluate software against the GHG Protocol and industry standards, you can assess how well green software principles have been implemented and how much room there is for improvement.

GHG accounting standards are the most common and widely used method for measuring total emissions. The GHG Protocol is used by 92% of Fortune 500 companies to calculate and disclose their carbon emissions.

The GHG Protocol divides carbon emissions into three scopes. Scope 1 covers direct emissions from sources an organization owns or controls, and scope 2 covers indirect emissions from purchased energy. Value chain emissions, or scope 3 emissions, are emissions from the companies that supply other companies in the chain; the scope 1 and 2 emissions of one organization are therefore added to the scope 3 of another.
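The chain relationship between scopes can be illustrated with a toy calculation (company roles and all figures are invented): a supplier's own scope 1 and 2 emissions reappear as part of its customer's scope 3.

```typescript
// Toy GHG accounting sketch; all figures invented, in tonnes of CO2e.
interface Emissions {
  scope1: number; // direct emissions from sources the company owns or controls
  scope2: number; // indirect emissions from purchased energy
  scope3: number; // value chain emissions
}

// A supplier's own footprint.
const supplier: Emissions = { scope1: 120, scope2: 80, scope3: 40 };

// For the buying company, the supplier's scope 1 + 2 emissions
// (attributable to the goods purchased) count as scope 3.
const buyer: Emissions = {
  scope1: 10,
  scope2: 25,
  scope3: supplier.scope1 + supplier.scope2,
};

console.log(buyer.scope3); // 200
```

This is why scope 3 is both the largest and the hardest category to measure: it is built out of other organizations' inventories.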

The GHG Protocol allows for the calculation of software-related emissions, although open-source software can present challenges in this regard. It is also possible to adopt the Software Carbon Intensity (SCI) specification in addition to the GHG Protocol. SCI is a rate, not a total, and is specifically designed to calculate software emissions.

The purpose of this specification is to evaluate the emission rate of software and encourage its reduction. GHG accounting, on the other hand, is a more universal measure that can be used by all types of organizations. The SCI is an additional metric that helps software teams understand how their software performs in terms of carbon emissions, so they can make better decisions.
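As a sketch of how an SCI score is computed: the specification defines SCI = ((E × I) + M) per R, where E is the energy consumed, I is the carbon intensity of that electricity, M is the amortized embodied carbon of the hardware, and R is the functional unit (per request, per user, and so on). The input figures below are invented for illustration.

```typescript
// Software Carbon Intensity: SCI = ((E * I) + M) per R
// E: energy consumed by the software (kWh)
// I: carbon intensity of the electricity used (gCO2eq/kWh)
// M: embodied carbon of the hardware, amortized over the period (gCO2eq)
// R: functional unit (e.g. number of API requests served)
function sci(energyKwh: number, intensity: number, embodied: number, requests: number): number {
  return (energyKwh * intensity + embodied) / requests;
}

// Invented example: 50 kWh at 400 gCO2eq/kWh, 20,000 g embodied carbon,
// serving 1,000,000 requests → grams of CO2eq per request.
console.log(sci(50, 400, 20_000, 1_000_000)); // 0.04
```

Because SCI is a rate, a team can lower it by reducing energy use, running on cleaner electricity, extending hardware lifetime, or serving more work on the same hardware, which is exactly the behaviour the specification is designed to encourage.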

How can you calculate the total carbon footprint of software?

Calculating the total carbon footprint of software requires access to comprehensive information on energy consumption, carbon intensity, and the hardware on which the software runs. This type of data is difficult to collect, even for closed-source software products that companies own and can monitor through telemetry and logs.

Many people from many organizations contribute to open-source projects, so it is unclear who is responsible for determining emissions and who is responsible for reducing them. Considering that around 90% of a typical enterprise software stack is open source, a significant portion of carbon emissions remains unaccounted for.

Carbon Reduction Methodologies

A number of methodologies are commonly used to reduce emissions and to support efforts to address climate change. These can be broadly grouped into the following categories: carbon elimination (also known as “abatement”), carbon avoidance (also known as “compensation”) and carbon removal (also known as “neutralization”).

  • Abatement or Elimination: includes increasing energy efficiency to eliminate some of the emissions associated with energy production. Abatement is the most effective way to fight climate change although complete carbon elimination is not possible.
  • Compensation or Avoidance: sustainable living techniques, recycling, planting trees, and switching to renewable energy sources are examples of compensation.
  • Neutralization or Removal: is the removal and permanent storage of carbon from the atmosphere to offset the effects of CO2 emissions into the atmosphere. Offsets tend to remove carbon from the atmosphere in the short to medium term.

An organization can claim to be Carbon Neutral if it offsets emissions and, more ambitiously, to be Net Zero if it reduces emissions as much as possible while offsetting only unavoidable emissions.

The Green Software Foundation

The Green Software movement began as an informal effort, but has become more visible in recent years. The Green Software Foundation was launched in May 2021 to help the software industry reduce its carbon footprint by creating a trusted ecosystem of people, standards, tooling, and best practices for building Green Software.

Specifically, the non-profit organization is part of the Linux Foundation and has three main objectives:

  • Establish standards for the green software industry: The Foundation will create and distribute green programming guidelines, models and green practices across computing disciplines and technology domains.
  • Accelerate Innovation: To develop the Green Software industry, the Foundation will encourage the creation of robust open source and open data projects that support the creation of green software applications.
  • Promote Awareness: One of the Foundation’s key missions is to promote the widespread adoption of Green Software through ambassador programmes, training and education leading to certification, and events to facilitate the development of Green Software.

Conclusions

At Bitrock, we believe that everyone has a role to play in solving climate change. In fact, sustainable software development is inclusive. No matter your industry, role, or technology, there is always more you can do to make an impact. Collective efforts and interdisciplinary collaboration across industries and within engineering are essential to achieve global climate goals.

If you can write greener code, your software projects will be more robust, reliable, faster, and more resilient. Sustainable software applications not only reduce an application’s carbon footprint, but also create applications with fewer dependencies, better performance, lower resource consumption, lower costs, and energy-efficient features.


Sources: Green Software for Practitioners (LFC131), The Linux Foundation

Next.js 14

In recent years, the development of web applications has become increasingly complex, requiring more sophisticated approaches to guarantee optimal performance and a seamless user experience. Next.js has emerged as a leading framework for modern web application development, providing a powerful and flexible development environment based on React.

The JavaScript realm is constantly undergoing disruption, driven by the introduction of cutting-edge technologies and the advancement of existing ones. While React’s introduction revolutionized JavaScript frameworks, newer technologies have emerged from the initial disruption.

At the time of this article’s publication, the official React documentation discourages the conventional usage of React and the generation of standard projects using create-react-app, instead recommending several frameworks, including Next.js.

In the forthcoming paragraphs, we will delve into the core features of Next.js, particularly in its latest version, and guide you through the process of creating a project.

What is Next.js?

Next.js stands as a revolutionary React framework that seamlessly blends Client-Side Rendering (CSR) and Server-Side Rendering (SSR) techniques. Led by Vercel, Next.js has gained popularity in the web development community for its user-friendly approach and the robust features it offers. Its comprehensive set of features empowers developers to create high-performance and SEO-friendly web applications with ease.

What are the main features of Next.js?

Server-Side and Client-Side Rendering for Optimal Performance
Next.js’s ability to render pages on both the server and the client side offers a unique advantage. The ability to generate precompiled HTML on the server side ensures ultra-fast initial page loads and improves search engine indexing.

Automatic Routing for Seamless Navigation
Next.js simplifies navigation by employing a file-based routing system. Simply create files named “page” or “route” within the designated folder structure, and the routing is handled automatically.
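For illustration, a hypothetical `app` directory might map to routes as follows (the folder names are invented for this example; the `page`/`layout`/`route` file conventions are Next.js's own):

```
app/
├── layout.tsx          → shared layout wrapped around every page
├── page.tsx            → rendered at /
├── dashboard/
│   ├── page.tsx        → rendered at /dashboard
│   └── settings/
│       └── page.tsx    → rendered at /dashboard/settings
└── api/
    └── hello/
        └── route.ts    → API endpoint at /api/hello
```

Adding or removing a folder with a `page` file is all it takes to add or remove a route; there is no routing table to maintain by hand.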

Hot Module Replacement (HMR) for Enhanced Development Efficiency
Next.js supports HMR, allowing developers to make code changes without refreshing the entire page. This feature significantly expedites the development process.

Built-In Image Optimization for Responsive Content
Next.js’s built-in image optimization system ensures that images are loaded in the most efficient format and size based on the user’s device and screen resolution, enhancing site performance.

Effortless API Integration for Data Management
Integrating external or internal APIs into a Next.js application is straightforward. This simplifies data management and eliminates the need to maintain a separate server application. Next.js version 14 further simplifies API integration, making it ideal for externalizing functionalities.

Special Pages for Enhanced Routing and Layout

Next.js introduces a unique feature: the automatic detection of files named in a specific pattern. This capability enables developers to efficiently structure their applications and enhance the overall user experience. For instance, Next.js recognizes files named “page.*” as the source for the page component that should be rendered for a specific route.  The “layout.tsx” file, located at the root of the app directory and/or in the pages structure, plays a crucial role in defining a common layout structure that can be applied across multiple pages. Next.js automatically injects the page component into this layout, providing a consistent presentation across the application.

Server and Client Component

Traditional React components, familiar to front-end developers, are known as Client Components, and they function identically to their counterparts. Next.js introduces the concept of Server Components, which generate HTML on the server before delivering it to the client, reducing the amount of JavaScript the browser needs to compile. Consider, for example, a typical Next.js home page.

Upon inspection, such a component resembles a conventional React component. Indeed, it is! However, there is a hidden distinction: it is a server component, which means it is compiled and rendered on the server side, delivering only the pre-rendered HTML to the client.

This type of component offers several advantages, including enhanced SEO, increased security due to its placement within a more protected portion of the code, and simplified development. For instance, server components can efficiently fetch data from the database without adhering to the traditional component lifecycle, enabling the use of plain async/await statements. However, certain functionalities, such as user interactions, cannot be handled on the server side. To address this, Next.js introduces the concept of Partial Rendering, which renders the HTML on the server side while leaving “empty” slots for specific components designated for client-side management.

Consider, for example, a page built as a server component that embeds a client-side ShoppingCart component. The page is rendered on the server as HTML, with an empty slot left where the ShoppingCart will be rendered on the client. This distinction proves beneficial precisely in scenarios like the ShoppingCart, which requires access to client-side browser data to retrieve saved items.

It’s crucial to acknowledge that this approach also presents certain restrictions. For example, functions cannot be passed as props from server components to client components, and client components cannot have server components as their direct descendants. For more in-depth information, refer to the official Next.js documentation.

It’s worth noting that using Server Actions instead of an API system requires more effort than simply adding a callback function to interact with the server. While callbacks are common and straightforward for front-end developers, they can introduce development and security challenges. Actions, on the other hand, introduce additional complexity, since they require forms even for basic buttons.

If you need to send additional information (such as a slug), actions force you to create a hidden input to send it. Both approaches have their advantages and disadvantages, so the choice depends on the project and on development preferences.

Deployment with Vercel

Now that you’ve created your Next.js app, it’s time to put it online for the world to see. Next.js is developed by Vercel, a leading deployment platform that integrates seamlessly with the framework to make the deployment process even easier. To get started, you’ll need to create a Vercel account, a straightforward process that takes just a few minutes.

Once you’ve created your account, you can connect it to your GitHub or GitLab repository to automatically deploy your Next.js app whenever you push changes to your code.

The same process can also be used to start from one of the pre-designed templates, providing a head start with several components of the project already set up.

The system prompts you for the values to use as environment settings and for the git branch to deploy (main is the default). Once you provide the final confirmations, the deployment process starts; any errors encountered are reported and stored in a dedicated section. Additionally, if you haven’t modified the default settings, Vercel will automatically trigger a new deployment on every push, significantly speeding up the release cycle. By default, Vercel provides a free domain, but developers can purchase and use a domain of their preference. And with that, our dashboard is ready for users, who will inevitably encounter the odd bug.

Conclusions

We’ve completed the foundational elements of our application, leaving us to decide its content and replicate the steps we’ve covered thus far. While this is a very simple dashboard, it effectively illustrates Next.js’s distinctive features compared to other solutions, particularly plain React. We’ve witnessed a fully functional structure that’s effortlessly implementable, SEO-compliant, and integrated with robust security measures, all within the framework of React’s flexibility.

Currently, version 14 is in its early stages, with frequent enhancements arriving every few months. It’s an ever-evolving realm, vastly different from prior versions. Nonetheless, the advantages are already apparent, particularly the seamless handling of aspects that have long challenged developers.


Main Author: Davide Filippi, Front-End Developer @ Bitrock

AI Ethics

Artificial intelligence (AI) is the tech buzzword of the moment, and for good reason. Its transformative potential is already revolutionizing industries and shaping our future.

Its role in both everyday life and the world of work is now undeniable.  Machine Learning, combined with IoT and industrial automation technologies, has emerged as a disruptive force in the evolution of production processes across all economic and manufacturing sectors. 

The fact that these solutions can be widely deployed is intensifying the debate about the control and discipline of Artificial Intelligence. The ability to identify, understand and, where necessary, limit the applications of AI is clearly linked to the ability to derive sustainable value from it.

First and foremost, the focus must be ethical, to ensure that society is protected based on the principles of transparency and accountability. But it is also a business imperative. Indeed, it is difficult to incorporate AI into decision-making processes without an understanding of how the algorithm works, certainty about the quality of the data, and the absence of biases or preconceptions that can undermine conclusions.

The AI Act: from principles to practice

The European Union’s commitment to ethical AI has taken a significant step forward with the preliminary adoption of the AI Act, the first regulation on AI to create more favourable conditions for the development and application of this innovative technology. This legislation aims to ensure that AI deployed within EU countries is not only safe and reliable but also respects fundamental rights, upholds democratic principles, and promotes environmental sustainability.

It is also important to note that the AI Act is not about protecting the world from imaginary conspiracy theories or far-fetched scenarios in which new technologies take over human intelligence and lives. Instead, it focuses on practical risk assessments and regulations that address threats to individual well-being and fundamental rights.

In other words, while it may sound like the EU has launched a battle against robots taking over the world, the real purpose of the AI Act is more prosaic: to protect the health, safety and fundamental rights of people interacting with AI. Indeed, the new regulatory framework aims to maintain a balanced and fair environment for the deployment and use of AI technologies.

More specifically, the framework of the AI Act revolves around the potential risks posed by AI systems, which are categorised into four different classes:

  • Unacceptable risk: Systems deemed to pose an unacceptable threat to citizens’ rights, such as biometric categorization based on sensitive characteristics or behavioural manipulation, are strictly prohibited within the EU.
  • High risk: Systems operating in sensitive areas such as justice, migration management, education, and employment are subject to increased scrutiny and regulation. For these systems, the European Parliament requires a thorough impact assessment to identify and mitigate any potential risks, ensuring that fundamental rights are safeguarded.
  • Limited risk: Systems like chatbots or generative AI models, while not posing significant risks, must adhere to minimum transparency requirements, ensuring users are informed about their nature and the training data used to develop them.
  • Minimal or no risk: AI applications such as spam filters or video games are considered to pose minimal or no risk and are therefore not subject to specific restrictions under the AI Act.

Understanding the above categorization is fundamental for organizations to accurately assess their AI systems. It ensures alignment with the Act’s provisions, allowing companies to adopt the necessary measures and compliance strategies that are relevant to the specific risk category of their AI systems.

In short, the EU AI Act represents a pivotal moment in the EU’s commitment to the responsible development and deployment of AI, setting a precedent for other jurisdictions around the world. By prioritizing ethical principles and establishing regulatory guidelines, it paves the way for a future where AI improves our lives without compromising our fundamental rights and values.

Infrastructure AI and Expertise

Our relationship with technology and work is about to change. As AI becomes the driving force of a new industrial revolution, it is imperative that we better equip ourselves to remain competitive and meet new business needs.

As we have seen, in the age of Artificial Intelligence (AI), where Large Language Models (LLMs) and Generative AI hold enormous potential, the need for robust control measures has become a critical component. While these cutting-edge technologies offer groundbreaking capabilities, their inherent complexity requires careful oversight to ensure responsible and ethical deployment. This is where Infrastructure AI systems come in, providing the necessary tools to manage AI models with precision and transparency.

Infrastructure AI systems like the Radicalbit platform are meticulously designed to address the critical challenge of AI governance: a solution that not only simplifies the development and management of AI, but also enables organizations to gain deep insight into the inner workings of these complex models. Its ability to simplify complex tasks and provide granular oversight makes it an indispensable asset for companies seeking to harness the power of AI ethically and effectively.

In this scenario, Europe faces a new challenge: the need for proficient expertise. Indeed, to unlock the full potential of AI, we need a mix of skills that includes domain and process expertise as well as  technological prowess.

Infrastructure AI, the foundation upon which AI models are built and deployed, requires a diverse set of skills. It’s not just about coding and algorithms: it’s about encompassing an ecosystem of platforms and technologies that enable AI to work seamlessly and effectively.

Newcomers to the AI field will therefore need a mix of technical skills and complementary expertise to be successful. Domain expertise is critical for identifying challenges, while process skills facilitate the development of AI solutions that are not only effective, but also sustainable and ethical. The ability to think critically, solve problems creatively, and prioritize goals over rigid instructions will be essential.

How to support responsible AI development

The need for companies to regularly monitor the production and use of AI models and their reliability over time is highlighted by the new regulatory framework being promoted by the EU. To prepare for the new transparency and oversight requirements, it is essential to focus on two areas. 

First, the issue of skills, particularly in the context of the Italian manufacturing industry. It’s well known that the asymmetry between the supply and demand for specialized jobs in technology and Artificial Intelligence, to name but two, slows down growth and damages the economy. 

In this transformative era, Italy is uniquely positioned with its strong academic background and technological expertise. By combining Italian skills and technologies, we can harness the transformative power of Infrastructure AI to promote responsible AI development and to establish ethical AI governance practices. Our history of innovation and technological advancement, together with the capabilities of Infrastructure AI, could  provide a competitive advantage in this field.

The second point is technology. Tools that support the work of data teams (Data Scientists, ML Engineers) in creating AI-based solutions provide companies with a real competitive advantage. This is where the term MLOps comes in, referring to the methods, practices and devices that simplify and automate the machine learning lifecycle, from training and building models to monitoring and observing data integrity.

Conclusions

The adoption of the EU AI Act marks the beginning of a transformative era in AI governance. It’s a first, fundamental step towards ensuring that AI systems play by the rules, follow ethical guidelines and prioritize the well-being of users and society. 

It should now be clear that it can serve as a proactive measure to prevent AI from becoming wild and uncontrolled, and instead foster an ecosystem where AI operates responsibly, ethically and in the best interests of all stakeholders.

On this exciting path towards responsible and sustainable AI, both cutting-edge technology solutions and skilled workforce are essential to unlock the full potential of AI, while safeguarding our values and the well-being of society. The smooth integration of expertise and cutting-edge technology is therefore the real formula for unlocking the full potential of AI.

This is the approach we take at Bitrock and in the Fortitude Group in general, where a highly specialized consulting offering is combined with a proprietary MLOps platform, 100% made in Italy. And it is the approach that allows us to address the challenges of visibility and control of Artificial Intelligence. In other words, the ability to fully, ethically and consciously exploit the opportunities of this disruptive technology.

Quality Assurance

Across a diverse range of industries, including engineering, pharmaceuticals, and IT, Quality Assurance (QA) plays a pivotal role in shaping and overseeing processes. It ensures the seamless integration of these processes into the company’s ecosystem, fostering enhanced efficiency at both the organizational and production levels and ultimately leading to product refinement.

QA proactively anticipates and mitigates potential issues such as production flow disruptions, communication breakdowns, and implementation and design bugs. Unlike rigid, bureaucratic procedures, QA’s process-defining approach aims to simplify and streamline operations, achieving both ease of use and superior product quality.

QA adopts a holistic project perspective, encompassing the entire process from the initial requirements gathering phase to the final reporting stage. QA and testing, while closely related, have distinct roles and responsibilities. In fact, QA upholds the proper execution of the entire process, supporting testing activities from start to finish.

Quality Assurance vs. Testing: understanding the main differences

Quality Assurance (QA) and Testing are two closely related but distinct fields within software development. QA is a broader concept that encompasses the entire process of ensuring that software meets its requirements and is of high quality. Testing is a specific, product-oriented activity within QA that involves executing test cases to identify and report defects.

In other words, QA is about preventing defects, while testing is about finding them. QA plays a central role in defining processes to implement the Software Development Life Cycle (SDLC), a structured framework that guides the software development process from conception to deployment.

The definition of an SDLC model is intricate, as it entails consideration of various factors, including the company’s organizational structure, the type of software developed (for instance, an Agile model is ill-suited to safety-critical software), the technologies employed and the organization’s maturity level. QA plays an important role in shaping the SDLC, not merely contributing but serving as an integral component. QA oversees the establishment and execution of the Software Testing Life Cycle (STLC), where testing activities overlap with development to ensure product testability. In addition, QA continuously monitors and refines processes, ensuring that the designed workflow aligns with the desired outcomes.

On the other hand, testing plays a corrective role, actively seeking to identify and verify bugs before the product reaches production. Shifting this effort as early as possible, an approach known as “Shift-Left” testing, means testers collaborate with product owners from the early stages of requirement definition to ensure clarity and testability.

The cost of bug fixing escalates with each development phase, making early detection during requirement definition both faster and more cost-effective. As development progresses, costs rise significantly, particularly after release into the test environment and up to production, where time and resource constraints are much tighter. Furthermore, uncovering bugs in production damages stakeholder trust.

Testing encompasses a diverse range of types and techniques tailored to specific development phases and product categories. For instance, unit tests are employed during code writing, while usability, portability, interruptibility, load, and stress tests are conducted for mobile apps.
In general, QA and Testing are both essential for ensuring that software meets its requirements and is of high quality. QA professionals provide the framework for achieving quality, while testers are responsible for executing the tests that identify and report defects.
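To ground the distinction, here is a minimal sketch of the kind of unit test a tester might write while code is being developed; the function under test and its behaviour are invented purely for illustration:

```python
# A hypothetical function under test and a minimal unit test for it.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Typical case: 20% off 50.00 is 40.00
    assert apply_discount(50.0, 20) == 40.0
    # Edge case: 0% leaves the price unchanged
    assert apply_discount(50.0, 0) == 50.0

test_apply_discount()
```

Tests like this one find defects in a specific unit of code; the QA process decides when, where and by whom such tests are written and run.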

How do Automation and Artificial Intelligence impact Quality Assurance and Testing?

Both QA and Testing are evolving to keep pace with the latest trends in the IT industry, with a particular focus on Automation and Artificial Intelligence (AI).

Automation is having a profound impact on processes, simplifying, standardizing, and reducing the cost of software management. This has led to an increasing synergy between QA and DevOps within companies, with QA becoming an integral part of the testing and development process and ensuring its presence at every level.

Automation also plays a crucial role in testing, reducing execution times, mitigating human error in repetitive test steps, and freeing up resources for testing activities where automation is less effective or impractical.

Examples of automated testing include regression tests, performance tests, and integration tests, the latter of which provide significant benefits for APIs. Various automation methods exist, each with advantages suited to specific contexts: for tests with lower complexity and abstraction, simpler test methods are recommended. Types of automated tests include linear scripting, scripting using libraries, keyword-driven testing and model-based testing. Different tools are available to support these methodologies, and the choice depends on factors such as the System Under Test (SUT) and the test framework.
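To make the keyword-driven style concrete, here is a minimal Python sketch; the keywords and the toy system under test are invented for illustration:

```python
# Minimal keyword-driven test runner: each test step is a keyword plus arguments,
# and a dispatch table maps keywords to actions on the system under test (SUT).

class FakeLoginPage:
    """Toy SUT standing in for a real application."""
    def __init__(self):
        self.user = None
    def login(self, username, password):
        self.user = username if password == "secret" else None
    def logged_in(self):
        return self.user is not None

def run_keyword_test(sut, steps):
    keywords = {
        "login": lambda u, p: sut.login(u, p),
        "verify_logged_in": lambda: sut.logged_in(),
    }
    results = []
    for keyword, *args in steps:
        results.append(keywords[keyword](*args))
    return results

# A test case expressed as data, not code: readable by non-programmers.
steps = [("login", "alice", "secret"), ("verify_logged_in",)]
```

The appeal of this style is that test cases become plain data, so non-developers can author them while the keyword implementations live in one maintained library.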

Recently, AI has emerged as a valuable tool for test support. Among the various methods, the “Natural Language Processing” (NLP) approach is particularly compelling, as it allows test cases to be written descriptively in everyday language. This will empower a broader population to perform automated tests with ease.
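Real NLP-based tools rely on far more sophisticated language models, but the core idea of turning a plain-language step into an executable action can be sketched with a few regular expressions; the step patterns below are invented for illustration:

```python
import re

# Map plain-language step patterns to executable (action, argument) pairs.
PATTERNS = [
    (re.compile(r'open the page "(.+)"', re.I), "open"),
    (re.compile(r'click the "(.+)" button', re.I), "click"),
    (re.compile(r'the title should be "(.+)"', re.I), "assert_title"),
]

def parse_step(step: str):
    """Translate one natural-language test step into an (action, argument) pair."""
    for pattern, action in PATTERNS:
        match = pattern.search(step)
        if match:
            return (action, match.group(1))
    raise ValueError(f"Unrecognized step: {step!r}")
```

A test author writes `Open the page "https://example.com"` and the framework resolves it to an action, with no scripting required on their side.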

Quality Assurance behind innovation and success

The concept of “Quality” has been an essential element of success since the ancient Phoenicians employed inspectors to ensure quality standards were met.

Quality evolves alongside progress, influencing every field from manufacturing to technology. For instance, the mobile phone transformation from a simple calling device to today’s versatile smartphones highlights the importance of quality driven by user feedback.

In a nutshell, Quality Assurance is an essential process for a successful business, as well as the key differentiator between successful products and those that fail to meet expectations. By maintaining the high quality of its products and services, QA plays a crucial role in distinguishing a company from its competitors and achieving its goals of innovation and success.


Are you curious to learn more about the main differences between Quality Assurance and Testing? Are you interested in further exploring the future perspectives with Automation and Artificial Intelligence? Listen to the latest episode of our Bitrock Tech Radio Podcast, or get in contact with one of our experienced engineers and consultants!


Main Author: Manuele Salvador, Software Quality Automation Manager @ Bitrock


Bridging the Gap Between Machine Learning Development and Production

In the field of Artificial Intelligence, Machine Learning has emerged as a transformative force, empowering businesses to unlock the power of data and make informed decisions. However, bridging the gap between developing ML models and deploying them into production environments can pose a significant challenge. 

In this interview with Alessandro Conflitti (Head of Data Science at Radicalbit, a Bitrock sister company), we explore the world of MLOps, delving into its significance, challenges, and strategies for successful implementation.

Let’s dive right into the first question.

What is MLOps and why is it important?

MLOps is an acronym for «Machine Learning Operations». It describes a set of practices ranging from data ingestion, development of a machine learning model (ML model), its deployment into production and its continuous monitoring. 

In fact, developing a good machine learning model is just the first step in an AI solution. Imagine, for instance, that you have an extremely good model but receive thousands of inference requests (inputs to be predicted by the model) per second: if your underlying infrastructure does not scale well, your model is going to break down immediately, or at least be too slow for your needs.

Or, imagine that your model requires very powerful and expensive infrastructure, e.g. several top-notch GPUs: without careful analysis and a good strategy you might end up losing money on your model, because the infrastructural costs are higher than your return.

This is where MLOps comes into the equation, in that it integrates an ML model organically into a business environment.

Another example: very often raw data must be pre-processed before being sent into the model and likewise the output of an ML model must be post-processed before being used. In this case, you can put in place an MLOps data pipeline which takes care of all these transformations.
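The pre- and post-processing steps described above can be sketched as a small pipeline; the specific transformations here (scaling an input, turning a raw score into a label) are invented for illustration:

```python
# A toy MLOps-style pipeline: preprocess raw input, run the model, postprocess the output.

def preprocess(raw: dict) -> list:
    # e.g. normalize a raw measurement into the feature vector the model expects
    return [raw["value"] / 100.0]

def model(features: list) -> float:
    # stand-in for a trained ML model returning a raw score
    return 0.5 + 0.4 * features[0]

def postprocess(score: float) -> str:
    # turn the raw score into a label consumable by downstream systems
    return "positive" if score >= 0.7 else "negative"

def pipeline(raw: dict) -> str:
    return postprocess(model(preprocess(raw)))
```

In a real deployment each stage would be a versioned, monitored component of the data pipeline rather than a plain function, but the chain of responsibilities is the same.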

One last remark: a very hot topic today is model monitoring. ML models must be maintained at all times, since they degrade over time, e.g. because of drift (which roughly speaking happens when the training data are no longer representative of the data sent for inference). Having a good monitoring system which analyses data integrity (i.e. that data sent as input to the model is not corrupted) and model performance (i.e. that the model predictions are not degrading and still trustworthy) is therefore paramount.
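A very simplified form of the drift monitoring mentioned above compares statistics of incoming data against the training data. Production systems use proper statistical tests, but the idea can be sketched as follows (the threshold is an arbitrary heuristic):

```python
import statistics

def mean_shift_drift(training: list, incoming: list, threshold: float = 2.0) -> bool:
    """Flag drift when the incoming mean deviates from the training mean
    by more than `threshold` training standard deviations (a crude heuristic)."""
    mu = statistics.mean(training)
    sigma = statistics.stdev(training)
    shift = abs(statistics.mean(incoming) - mu)
    return shift > threshold * sigma
```

A monitoring system would run a check like this continuously on sliding windows of inference inputs and raise an alert, or trigger retraining, when it fires.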

What can be the different components of an MLOps solution?

An MLOps solution may include different components depending on the specific needs and requirements of the project. A common setup may include, in order:

  • Data engineering: As a first step, you have to collect, prepare and store data; this includes tasks such as data ingestion, data cleaning and a first exploratory data analysis.
  • Model development: This is where you build and train your Machine Learning model. It includes tasks such as feature engineering, encoding, choosing the right metrics, selecting the model’s architecture, training the model, and hyperparameter tuning.
  • Experiment tracking: This can be seen as a part of model development, but I like to highlight it separately because if you keep track of all experiments you can refer to them later, for instance if you need to tweak the model in the future, or if you need to build similar projects using the same or similar datasets. More specifically you keep track of how different models behave (e.g. Lasso vs Ridge regression, or XGBoost vs CatBoost), but also of hyperparameter configurations, model artefacts, and other results.
  • Model deployment: in this step you put your ML model into production, i.e. you make it available for users who can then send inputs to the model and get back predictions. What this looks like can vary widely, from something as simple as a Flask or FastAPI to much more complicated solutions.
  • Infrastructure management: with a deployed model you need to manage the associated infrastructure, taking care of scalability, both vertical and horizontal, i.e. making sure that the model can smoothly handle high-volume and high-velocity data. A popular solution is using Kubernetes, but it is by no means the only one.
  • Model monitoring: Once all previous steps are working fine you need to monitor that your ML model is performing as expected: this means on the one hand logging all errors, and on the other hand it also means tracking its performance and detecting drift.
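Dedicated tools such as MLflow cover the experiment-tracking step in the list above; as a toy illustration of what such tools record per run, a tracker can be as simple as this (the structure is invented):

```python
import time

class ExperimentTracker:
    """Minimal experiment tracker: records params, metrics and a timestamp per run."""
    def __init__(self):
        self.runs = []

    def log_run(self, model_name: str, params: dict, metrics: dict):
        self.runs.append({
            "model": model_name,
            "params": params,
            "metrics": metrics,
            "timestamp": time.time(),
        })

    def best_run(self, metric: str):
        # Return the run with the highest value for the given metric.
        return max(self.runs, key=lambda r: r["metrics"][metric])

tracker = ExperimentTracker()
tracker.log_run("ridge", {"alpha": 1.0}, {"r2": 0.81})
tracker.log_run("xgboost", {"max_depth": 6}, {"r2": 0.87})
```

Keeping this record for every experiment is what makes it possible to revisit a model months later, compare Ridge against XGBoost, or reuse a configuration on a similar dataset.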

What are some common challenges when implementing MLOps?

Because MLOps is a complex endeavour, it comes with many potential challenges, but here I would like to focus on aspects related to data. After all, data is one of the most important ingredients; as Sherlock Holmes would say: «Data! data! data! (…) I can’t make bricks without clay.»

For several reasons, it is not trivial to have a good enough dataset for developing, training and testing an ML model. For example, it might not be large enough, it might not have enough variety (e.g. think of a classification problem with very unbalanced, underrepresented classes), or it might not have enough quality (very dirty data from different sources, with different data formats and types, plenty of missing values or inconsistent values, e.g. {“gender”: “male”, “pregnant”: True}).
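Inconsistent values like the {“gender”: “male”, “pregnant”: True} example can be caught early with simple validation rules; here is a minimal sketch, with rules invented for illustration:

```python
# Rule-based data validation: each rule flags a record violating a consistency constraint.

RULES = [
    ("pregnant male", lambda r: r.get("gender") == "male" and r.get("pregnant") is True),
    ("negative age", lambda r: r.get("age", 0) < 0),
]

def validate(record: dict) -> list:
    """Return the names of all consistency rules the record violates."""
    return [name for name, violates in RULES if violates(record)]
```

Running such checks at ingestion time keeps dirty records out of both training sets and inference pipelines, instead of discovering them through degraded model performance.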

Another issue with Data is having the right to access it. For confidentiality or legal (e.g. GDPR) reasons, it might not be possible to move data out of a company server, or out of a specific country (e.g. financial information that cannot be exported) and this limits the kinds of technology or infrastructures that can be used, and deployment on cloud can be hindered (or outright forbidden). In other cases only a very small curated subset of data can be accessed by humans and all other data are machine–readable only.

What is a tool or technology that you consider to be very interesting for MLOps but might not be completely widespread yet?

This might be the Data Scientist in me talking, but I would say a Feature Store. You surely know about Feature Engineering, which is the process of extracting new features or information from raw data: for instance, given a date, e.g. May 5th, 1821, computing and adding the corresponding weekday, Saturday. This might be useful if you are trying to predict the electricity consumption of a factory, since factories are often closed on Sundays and holidays. Therefore, when working on a Machine Learning model, one takes raw data and transforms it into curated data, with new information/features, organised in the right way. A feature store is a tool that allows you to save and store all these features.
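The weekday example from the interview is easy to reproduce; a feature store would persist the output of a transformation like this, so that every team retrieves the same curated feature instead of recomputing it:

```python
from datetime import date

DAY_NAMES = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]

def weekday_feature(d: date) -> str:
    """Feature engineering: derive a day-of-week feature from a raw date
    (date.weekday() returns 0 for Monday)."""
    return DAY_NAMES[d.weekday()]

# The date mentioned in the interview: May 5th, 1821 falls on a Saturday.
feature = weekday_feature(date(1821, 5, 5))
```
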

In this way, when you want to develop new versions of your ML model, or a different ML model using the same data sources, or when different teams are working on different projects with the same data sources, you can ensure data consistency. 

Moreover, preprocessing of raw data is automated and reproducible: for example anyone working on the project can retrieve curated data (output of feature engineering) computed on a specific date (e.g. average of the last 30 days related to the date of computation) and be sure the result is consistent.

Before we wrap up, do you have any tips or tricks of the trade to share?

I would mention three things that I find helpful in my experience. 

My first suggestion is to use a standardised structure for all your Data Science projects. This makes collaboration easier when several people are working on the same project and also when new people are added to an existing project. It also helps with consistency, clarity and reproducibility. From this perspective I like using Cookiecutter Data Science.

Another suggestion is using MLflow (or a similar tool) for packaging your ML model. This makes your model readily available through APIs and easy to share. And finally I would recommend having a robust CI/CD (Continuous Integration and Continuous Delivery) in place. In this way, once you push your model artefacts to production the model is immediately live and available. And you can look at your model running smoothly and be happy about a job well done.


Main Author: Dr. Alessandro Conflitti, PhD in Mathematics at University of Rome Tor Vergata & Head of Data Science @ Radicalbit (a Bitrock sister company).

Interviewed by Luigi Cerrato, Senior Software Engineer @ Bitrock
