Front-End Development Technologies (Part I)

The world of web development requires the use of a wide range of front-end technologies and tools to create responsive and functional user interfaces. From HTML, CSS, and JS to advanced frameworks, and tools to optimize development workflows, the range of technologies is vast and constantly evolving.

That's why we thought it would be a good idea to create the ultimate A to Z list for Front-End Development. We went through the alphabet letter by letter and picked the technologies and tools we felt best represented each one, then laid out exactly why. From Angular to Zeplin, passing through Flutter and React - they're all here!

A is for Angular

Angular is a JavaScript framework developed by Google and used to build single-page applications (SPAs) and dynamic web applications. Its main feature is to organize code into reusable modules and components. Based on TypeScript, Angular provides static typing that helps you identify code errors during development, thereby making your applications more robust.

Angular introduces the concept of "two-way data binding", which allows synchronization of data between view and model. This simplifies application state management and improves interactivity. Additionally, the framework provides a range of built-in features such as routing, state management, form validation, testability, and integration with external services.

B is for Babel

Babel is a JavaScript compiler tool that allows developers to write code using the latest language features and convert it into a format compatible with older browser versions. This tool is very important for cross-browser compatibility as it allows older browsers to use the latest JavaScript features. Babel is highly configurable and customizable, allowing developers to tailor it to the specific needs of their projects, increasing development productivity and ensuring a consistent experience for end users.
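As a sketch, a minimal Babel configuration (placed in `babel.config.json`; the browser targets here are illustrative) uses `@babel/preset-env` to decide which transforms are needed:

```json
{
  "presets": [
    ["@babel/preset-env", { "targets": "> 0.5%, not dead" }]
  ]
}
```

With this in place, modern syntax such as optional chaining (`user?.name`) is compiled down to equivalent checks that older browsers understand.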

C is for Cypress

Cypress is a powerful end-to-end (E2E) testing framework designed specifically for front-end development, known for its user-friendliness and reliability. Cypress allows users to simulate user interactions, run tests across different browsers, access the DOM directly, and write tests declaratively through a simple and intuitive API. Because Cypress runs in the same run loop as the application under test, rather than driving the browser through an external server, tests are fast and reliable and provide instant feedback during development.

Due to its open-source nature and active community, Cypress is a popular choice among developers for ensuring the quality and reliability of front-end applications throughout the development process.

D is for D3.js

D3.js, short for "Data-Driven Documents," is one of the most powerful and flexible libraries for data manipulation and creating dynamic visualizations on the web. With its versatility and power, D3.js enables developers to transform complex data into clear and engaging visualizations, creating bar charts, histograms, pie charts, geographical maps, and a wide range of other visual representations.

What makes D3.js unique is its approach which is based on the fundamental concepts of manipulating the Document Object Model (DOM) with data. This means that D3.js allows direct data binding to DOM elements, providing greater flexibility and control in creating custom and interactive visualizations.

The library provides a powerful set of tools for dynamically handling data presentation, enabling smooth transitions between different view states. D3.js leverages SVG (Scalable Vector Graphics) and Canvas features to create highly customizable and high-performance charts that scale to different sizes and devices.

E is for ESLint

ESLint is a JavaScript linting tool widely used by the software development community. It works by analyzing JavaScript code to identify errors, style inconsistencies, and potential problems. For example, it flags common issues such as undeclared variables and dead code, helping developers write cleaner, more readable, and maintainable code. The result of working with ESLint is an improvement in the overall quality of the code.

ESLint is often integrated into development processes, such as pre-commit checks or within Continuous Integration/Continuous Deployment (CI/CD) workflows, to ensure that the code conforms to defined standards and to identify potential problems during development, reducing the likelihood of errors in production.
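A minimal `.eslintrc.json` sketch that could back such a workflow (the rule selection here is illustrative):

```json
{
  "env": { "browser": true, "es2021": true },
  "extends": "eslint:recommended",
  "rules": {
    "no-unused-vars": "error",
    "eqeqeq": "warn"
  }
}
```

Run with `npx eslint .`, this configuration flags unused variables as errors and loose equality comparisons (`==`) as warnings, on top of the recommended rule set.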

F is for Flutter

Flutter is an open-source framework developed by Google for creating native applications for mobile, web, and desktop. Based on the Dart language, with its highly customizable widgets and support for hot reload, Flutter simplifies the development process and allows developers to create cross-platform applications with a single code base.

G is for GraphQL

GraphQL is a query language for APIs, designed to provide an efficient and flexible way to request and deliver data from a server to a client. Unlike traditional REST APIs, GraphQL allows clients to specify exactly the data they need, reducing the amount of data transferred and improving application performance. This allows developers to precisely define the structure of requests to get exactly the desired data, providing greater flexibility and scalability in optimizing API requests.
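For illustration, with a hypothetical schema exposing a `user` field, a client that needs only a name and an email would send a query like:

```graphql
query GetUser {
  user(id: "42") {
    name
    email
  }
}
```

The response mirrors the shape of the query and contains exactly these fields, whereas a typical REST endpoint would return the entire user resource.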

H is for Husky

Husky is a tool primarily used to manage Git hooks, such as pre-commit hooks, in Git repositories. Pre-commit hooks are scripts that run automatically before a commit is created. They make it possible to perform checks or actions on the code before it is committed, ensuring code quality and consistency. Husky lets developers easily define and run custom scripts or specific commands to perform tests, code linting, formatting checks, and other necessary validations before a commit reaches a shared repository.
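As a configuration sketch (assuming a recent Husky setup, where hooks live as plain scripts in the `.husky/` directory; the lint and test commands are illustrative), a pre-commit hook could look like:

```shell
#!/usr/bin/env sh
# .husky/pre-commit - runs before every commit;
# a non-zero exit status aborts the commit.
npx eslint .
npm test
```

If either command fails, the commit is rejected, so broken or unformatted code never reaches the shared repository.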

I is for Immer

Immer is a JavaScript library that simplifies the management of immutable state. With Immer, developers can produce and manipulate immutable data structures intuitively. The library provides a simpler and more readable approach to updating immutable objects, allowing developers to write clearer and more maintainable code when managing application state.
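To see the problem Immer addresses, here is the kind of hand-written nested update it replaces, in plain JavaScript (the state shape is invented for illustration; the Immer equivalent is shown as a comment, assuming the library is installed):

```javascript
// Hypothetical application state with nested objects.
const state = { user: { name: 'Ada', settings: { theme: 'light', lang: 'en' } } };

// Without Immer: every object along the updated path must be copied by hand.
const next = {
  ...state,
  user: {
    ...state.user,
    settings: { ...state.user.settings, theme: 'dark' },
  },
};

// With Immer, the same update would read (sketch, assuming immer is installed):
//   const next = produce(state, (draft) => { draft.user.settings.theme = 'dark'; });

console.log(next.user.settings.theme);  // 'dark'
console.log(state.user.settings.theme); // unchanged: 'light'
```

The deeper the state tree, the more error-prone the manual spreads become, which is exactly the boilerplate Immer's `produce` eliminates.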

Immer is particularly useful in contexts such as React applications, Redux, or any situation where maintaining stability and predictability of the state is critical. Its developer-friendly features and functional programming approach have made it an attractive alternative to the longer-established Immutable.js.

J is for Jest

Jest is a JavaScript testing framework known for its ease of use, simple configuration, and execution speed. Jest is widely used for conducting unit tests and integration testing.

One of the main advantages of Jest is its comprehensive and integrated structure, which includes everything needed to perform tests efficiently. It comes with features such as built-in automocking that simplifies dependency substitution, support for asynchronous testing, and parallel test execution that improves productivity and reduces execution times.

K is for Kotlin

Kotlin is a modern and versatile programming language that can be used to develop applications for multiple platforms, including Android, web, server-side, and more.

Recognized for its conciseness, clarity, and interoperability with Java, Kotlin offers many advanced features that increase developer productivity. Thanks to its expressive syntax and type safety, Kotlin has become a popular choice for developers who want to write clean and maintainable code across platforms. Although it's not strictly a web technology, its importance and steady growth in mobile application development deserve to be mentioned.

L is for Lodash

Lodash is a JavaScript library that provides a set of utility functions for data manipulation, array handling, object management, and more. It is known for providing efficient and performant methods that make it easier to write more concise and readable code. Lodash includes a wide range of utility functions such as map, filter, and many others that allow developers to work with complex data easily and efficiently.

In addition to utility functions, Lodash also provides support for string manipulation, number handling, creating iterator functions, and other advanced operations that make data management more intuitive and powerful.
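To give a flavor of these utilities, here is a simplified plain-JavaScript approximation of what Lodash's `_.groupBy` does (a sketch, not Lodash's actual implementation):

```javascript
// A simplified take on _.groupBy: groups items by the key returned by iteratee.
function groupBy(collection, iteratee) {
  return collection.reduce((acc, item) => {
    const key = iteratee(item);
    (acc[key] = acc[key] || []).push(item);
    return acc;
  }, {});
}

const byLength = groupBy(['one', 'two', 'three'], (s) => s.length);
console.log(byLength); // { '3': ['one', 'two'], '5': ['three'] }
```

Lodash's real implementation adds support for property shorthands, lazy chaining, and edge cases, but the core idea is the same.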

M is for Material-UI

Material-UI is a React-based UI component library based on Google's Material Design system. It offers a wide range of pre-styled and configurable components to help developers efficiently create modern and intuitive interfaces. The library provides a comprehensive collection of components such as buttons, inputs, navigation bars, tables, cards, and more. Each component is designed to follow Material Design guidelines, providing a consistent and professional look and feel throughout the application. In addition to providing predefined components, Material-UI also allows for customization through themes and styles. Developers can easily customize the look and feel of components to meet specific project requirements while maintaining an overall consistent design.

N is for Node.js

How could Node.js not be mentioned on this list? Well, here we are: Node is an open-source JavaScript runtime based on Google Chrome's V8 engine that enables server-side execution of JavaScript code. It is widely used for building scalable and fast web applications, allowing developers to use JavaScript on both the client and the server. Node.js achieves high efficiency through its non-blocking, asynchronous model, which enables efficient handling of large numbers of concurrent requests. With its extensive module ecosystem (npm), Node.js is a popular choice for developing web applications, APIs, and back-end services.


We're only halfway through our exploration of the ultimate A to Z list for Front-End Development!

Stay tuned for the second act, as we're going to explore some more interesting tools and technologies, each pushing the boundaries of front-end development.


Main Author: Mattia Ripamonti, Team Leader and Front-End Developer @ Bitrock

Read More
AI Act

Artificial Intelligence has played a central role in the world's digital transformation, impacting a wide range of sectors, from industry to healthcare, from mobility to finance. This rapid development has increased the need and urgency of regulating technologies that by their nature have important and sensitive ethical implications. For these reasons, as part of its digital strategy, the EU has decided to regulate Artificial Intelligence (AI) to ensure better conditions for the development and use of this innovative technology. 

It all started in April 2021, when the European Commission proposed the first-ever EU legal framework on AI. Parliament's priority is to ensure that AI systems used in the EU are safe, transparent, accountable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation alone, to prevent harmful outcomes.

The fundamental principles of the regulation are empowerment and self-evaluation. This law ensures that rights and freedoms are at the heart of this technological development, securing a balance between innovation and protection.

How did we get here? Ambassadors from the 27 European Union countries voted unanimously on February 2, 2024, on the latest draft text of the AI Act, confirming the political agreement reached in December 2023 after tough negotiations. The world's first document on the regulation of Artificial Intelligence is therefore approaching a final vote scheduled for April 24, 2024. Now is the time to keep up with the latest developments, but in the meantime, let's talk about what it means for us.

Risk Classification

The AI Act has a clear approach, based on four different levels of risk: 

  • Minimal or no risk 
  • Limited risk 
  • High risk 
  • Unacceptable risk 

The greater the risk, the greater the responsibility for those who develop or deploy Artificial Intelligence systems; applications considered too dangerous are simply not authorized. The only technologies outside the scope of the AI Act are those used for military and research purposes.

Most AI systems pose minimal risk to citizens' rights and are not subject to specific regulatory obligations. For high-risk categories, a logic of transparency and trust applies. These systems in particular can have very significant implications, for example AI systems used in the fields of health, education, or recruitment.

In cases such as chatbots, there are also specific transparency requirements to prevent user manipulation, ensuring the user is aware of interacting with an AI. The same applies to deep-fakes: images that realistically reproduce a person's face and movements, creating false representations that are difficult to recognize.

Finally, any application that poses unacceptable risk is prohibited, such as applications that may manipulate an individual's free will, or perform social scoring or emotion recognition at work or school. A narrow exception concerns Remote Biometric Identification (RBI) systems, which identify individuals at a distance in publicly accessible spaces by comparing physical, physiological, or behavioral characteristics against a database; their use is permitted only in strictly limited cases.

Violations can result in fines ranging from €7.5 million, or 1.5 percent of global revenue, to €35 million, or 7 percent of global revenue, depending on the size and severity of the violation.

The AI Act as a growth driver

For businesses, the AI Act represents a significant shift, introducing different compliance requirements depending on the risk classification of AI systems - as seen before. Companies developing or deploying AI-based solutions will have to adopt higher standards for high-risk systems and face challenges related to security, transparency, and ethics. At the same time, this legislation could act as a catalyst for innovation, pushing companies to explore new frontiers in AI while respecting principles of accountability and human rights protection.

The AI Act can thus be an opportunity to support a virtuous and more sustainable development of markets on both the supply and demand side. It is no coincidence that we use the word 'sustainable': Artificial Intelligence can also be an extraordinary tool for accelerating the energy transition, with approaches that save not only time but also resources and energy, like Green Software, which focuses on developing software with minimal carbon emissions.

Artificial Intelligence, indeed, allows companies to adapt to market changes by transforming processes and products, which in turn can change market conditions by giving new impetus to technological advances.

The present and the future of AI in Italy

The obstacles that Italian small and medium-sized enterprises face today in implementing solutions that accelerate their digitalization are primarily economic: financial and cultural support measures are often inadequate, and there is a lack of real understanding of the short-term impact of these technologies and of the risks of being excluded from them.

In Italy, the lack of funding for companies risks delaying more profitable and sustainable experiments with Artificial Intelligence in the short and long term. As an inevitable consequence, the implementation of the necessary infrastructure and methods may also be delayed.

The AI Act itself will not be operational "overnight," but will require a long phase of issuing delegated acts, identifying best practices and writing specific standards.

However, it is essential to invest now in cutting-edge technologies that enable efficient and at the same time privacy-friendly data exchange. This is the only way for Artificial Intelligence to truly contribute to the country's growth.


Sources
News European Parliament - Shaping the digital transformation: EU strategy explained
News European Parliament - AI Act: a step closer to the first rules on Artificial Intelligence

Read More
Green Software

As information technology plays an ever larger role in our daily life, its energy demand is increasing rapidly. By some projections, data centers alone could consume about 20 percent of global electricity by 2025, and the entire digital ecosystem could account for one-third of global demand.

The concept of Green Software was developed to address these issues, by focusing on developing software with minimal carbon emission. The software we develop today will run on billions of devices over the next few years, consuming energy and potentially contributing to climate change.

Green IT represents a shift in the software industry, enabling software developers and all stakeholders to contribute to a more sustainable future.

Key principles and associated best practices for reducing the carbon footprint of software include:

  • Carbon Efficiency: Emit the least amount of carbon possible 
  • Energy Efficiency: Use the least amount of energy possible
  • Carbon Awareness: Do more when the electricity is clean and less when it is dirtier

Carbon Efficiency

Our goal is to emit as little carbon into the atmosphere as possible. Therefore, the main principle of Green Software is Carbon Efficiency: minimizing carbon emissions per unit of work.

The definition of Carbon Efficiency is to develop applications that have a lower carbon footprint while providing the same benefits to us or our users.

Energy Efficiency

Creating an energy-efficient application goes hand in hand with creating a carbon-efficient one: since most electricity is still generated from carbon-emitting sources, energy serves as a proxy for carbon. Green Software aims to use as little electricity as possible and to be responsible about how it is used.

It's true that all software uses electricity, from mobile phone applications to machine learning models trained in data centers. Increasing the energy efficiency of applications is one of the best strategies for reducing power consumption and carbon footprint. But our responsibility goes beyond that.

Quantifying energy consumption is a critical starting point, but it's also important to consider hardware overhead and end-user impact. By minimizing waste and optimizing energy use at every stage, we can significantly reduce the environmental impact of software, and even influence how users use the product so as to take advantage of energy proportionality.

Carbon Awareness

Generating electricity involves using different resources at different times and in different places, with different carbon emissions. Some sources, such as hydroelectric, solar, and wind power, are clean, renewable sources with low carbon emissions. Fossil fuel sources, by contrast, release varying amounts of carbon to produce energy: gas-fired power plants, for example, produce less carbon than coal-fired power plants, but both produce more carbon than renewable sources.

The concept of carbon awareness is to increase activity when energy is produced from low carbon sources and to decrease activity when energy is produced from high carbon sources.

This shift is already under way. Around the world, power grids are moving from primarily burning fossil fuels to generating energy from low-carbon sources, such as solar and wind energy. This is one of our best chances of meeting global emission-reduction targets.

How can you be more carbon aware?

More than environmental goals, economics are driving this change. Renewable energy will keep gaining importance as its costs fall and it becomes more affordable. That's why we need to make fossil fuel plants less economical and renewable energy plants more profitable to speed up the transition. The best way to do this is to use more electricity from low-carbon sources, such as renewables, and less from high-carbon sources.

The two main principles of Carbon Awareness are:

  • Demand Shifting: Being carbon conscious means adjusting demand to changes in carbon intensity. If your job allows you to be flexible with your workload, you can adapt by consuming energy when carbon intensity is low and pausing work when carbon intensity is high; for example, by training your machine learning model at a different time or in a different location where carbon intensity is significantly lower. Studies have shown that these measures can reduce CO2 emissions by 45% to 99%, depending on the share of renewable energy powering the grid.
  • Demand Shaping: The tactic of shifting computation to the locations or periods of lowest carbon intensity is demand shifting. Demand shaping is a similar tactic; however, rather than moving the work to a new location or time, it changes how much work is done to match the current supply.
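A minimal sketch of demand shifting in JavaScript (the forecast values and workload are invented; a real scheduler would query a grid carbon-intensity API):

```javascript
// Hypothetical forecast of grid carbon intensity (gCO2eq/kWh) by hour.
const forecast = [
  { hour: 0, intensity: 420 },
  { hour: 3, intensity: 180 }, // windy night: mostly renewables
  { hour: 12, intensity: 310 },
];

// Demand shifting: run a flexible workload in the cleanest available window.
function pickGreenestHour(slots) {
  return slots.reduce((best, slot) => (slot.intensity < best.intensity ? slot : best));
}

console.log(`Schedule the training job at hour ${pickGreenestHour(forecast).hour}`); // hour 3
```

The same idea applies spatially: given per-region intensity data, the job would be dispatched to the cleanest region instead of the cleanest hour.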

Carbon-aware applications shape their demand to match the available carbon supply: as the carbon cost of running the application rises, the application scales down what it does. Users can choose to enable this behavior, or it can happen automatically.

Video conferencing software that automatically adjusts streaming quality is an example of this: when bandwidth is limited, video is streamed at a lower quality so that audio quality can be prioritized.
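Demand shaping can be sketched the same way: rather than moving the work, the application does less of it as carbon intensity rises. The quality tiers and thresholds below are invented for illustration:

```javascript
// Demand shaping: degrade gracefully as the grid gets dirtier.
// Thresholds (gCO2eq/kWh) and quality tiers are invented for illustration.
function pickQuality(carbonIntensity) {
  if (carbonIntensity < 200) return '1080p';
  if (carbonIntensity < 400) return '720p';
  return 'audio-only'; // dirtiest grid: keep the audio, drop the video
}

console.log(pickQuality(150)); // '1080p'
console.log(pickQuality(450)); // 'audio-only'
```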

Hardware Efficiency

If we take into account embodied carbon - the carbon emitted in manufacturing and disposing of a device - it is clear that by the time we come to buy a computer, a significant amount of carbon has already been emitted. In addition, computers have a limited lifespan, so over time they become obsolete and need to be upgraded to meet the needs of the modern world.

In these terms, hardware is a proxy for carbon, and since our goal is to be carbon efficient, we must also be hardware efficient. There are two main approaches to hardware efficiency:

  • For end-user devices, it's extending the lifespan of the hardware.
  • For cloud computing, it's increasing the utilization of the device.

How can Green Software be measured?

Most companies use the Greenhouse Gas (GHG) Protocol to calculate total carbon emissions. By learning how to evaluate software against GHG scopes and industry standards, you can assess how well Green Software principles have been implemented and how much room there is for improvement.

GHG accounting standards are the most common and widely used method for measuring total emissions. The GHG Protocol is used by 92% of Fortune 500 companies to calculate and disclose their carbon emissions.

The GHG Protocol divides carbon emissions into three scopes: scope 1 covers direct emissions from sources an organization owns or controls, scope 2 covers indirect emissions from purchased energy, and scope 3 (value chain emissions) covers emissions from the companies that supply other companies in the chain. The scope 1 and 2 emissions of one organization therefore become part of the scope 3 emissions of another.

GHG protocols allow for the calculation of software-related emissions, although open-source software can present challenges in this regard. It is also possible to adopt the Software Carbon Intensity (SCI) specification in addition to GHG protocols. The SCI is a rate (carbon emissions per unit of work) rather than a total, and is specifically designed to calculate software emissions.

The purpose of this specification is to evaluate the emission rate of software and encourage its reduction. Greenhouse gas accounting, on the other hand, is a more universal measure that can be used by all types of organizations. The SCI is an additional metric that helps software teams understand software performance in terms of carbon emissions so they can make better decisions.
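Concretely, the SCI specification defines the score as ((E × I) + M) per R, where E is energy consumed (kWh), I is grid carbon intensity (gCO2eq/kWh), M is embodied hardware carbon, and R is a functional unit such as one API request. A quick sketch with invented numbers:

```javascript
// SCI = ((E * I) + M) / R, per the Green Software Foundation's SCI specification.
function sci(energyKwh, gridIntensity, embodiedCarbon, functionalUnits) {
  return (energyKwh * gridIntensity + embodiedCarbon) / functionalUnits;
}

// Hypothetical service: 2 kWh consumed at 300 gCO2eq/kWh, with 600 g of
// embodied hardware carbon allocated to the period, over 10000 requests.
console.log(sci(2, 300, 600, 10000)); // 0.12 gCO2eq per request
```

Because the score is a rate per functional unit, it keeps falling only if the software itself gets leaner; offsets cannot reduce it, which is exactly what makes it useful as an improvement signal for engineering teams.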

How can you calculate the total carbon footprint of software?

Calculating the total carbon footprint of software requires access to comprehensive information on energy consumption, carbon intensity, and the hardware on which the software runs. This type of data is difficult to collect, even for closed-source software products that companies own and can monitor through telemetry and logs.

Many people from many organizations contribute to open-source projects, so it is unclear who is responsible for determining emissions and who is responsible for reducing them. Considering that as much as 90% of a typical enterprise software stack is open-source, it's clear that a significant portion of carbon emissions remains unaccounted for.

Carbon Reduction Methodologies

A number of methodologies are commonly used to reduce emissions and to support efforts to address climate change. These can be broadly grouped into the following categories: carbon elimination (also known as "abatement"), carbon avoidance (also known as "compensation"), and carbon removal (also known as "neutralization").

  • Abatement or Elimination: includes increasing energy efficiency to eliminate some of the emissions associated with energy production. Abatement is the most effective way to fight climate change, although complete carbon elimination is not possible.
  • Compensation or Avoidance: sustainable living techniques, recycling, planting trees, and investing in renewable energy are examples of compensation.
  • Neutralization or Removal: the removal and permanent storage of carbon from the atmosphere to offset the effects of CO2 emissions. Such offsets tend to remove carbon from the atmosphere only in the short to medium term.

An organization can claim to be Carbon Neutral if it offsets emissions and, more ambitiously, to be Net Zero if it reduces emissions as much as possible while offsetting only unavoidable emissions.

The Green Software Foundation

The Green Software movement began as an informal effort, but has become more visible in recent years. The Green Software Foundation was launched in May 2021 to help the software industry reduce its carbon footprint by creating a trusted ecosystem of people, standards, tooling, and best practices for building Green Software.

Specifically, the non-profit organization is part of the Linux Foundation and has three main objectives:

  • Establish standards for the green software industry: The Foundation will create and distribute green programming guidelines, models and green practices across computing disciplines and technology domains.
  • Accelerate Innovation: To develop the Green Software industry, the Foundation will encourage the creation of robust open source and open data projects that support the creation of green software applications.
  • Promote Awareness: One of the Foundation’s key missions is to promote the widespread adoption of Green Software through ambassador programmes, training and education leading to certification, and events to facilitate the development of Green Software.

Conclusions

At Bitrock, we believe that everyone has a role to play in solving climate change. In fact, sustainable software development is inclusive. No matter your industry, role, or technology, there is always more you can do to make an impact. Collective efforts and interdisciplinary collaboration across industries and within engineering are essential to achieve global climate goals.

If you can write greener code, your software projects will be more robust, reliable, faster, and more resilient. Sustainable software applications not only reduce an application's carbon footprint, but also create applications with fewer dependencies, better performance, lower resource consumption, lower costs, and energy-efficient features.


Sources: Green Software for Practitioners (LFC131) - The Linux Foundation

Read More
Next.js 14

In recent years, the development of web applications has become increasingly complex, requiring more sophisticated approaches to guarantee optimal performance and a seamless user experience. Next.js has emerged as a leading framework for modern web application development, providing a powerful and flexible development environment based on React.

The JavaScript realm is constantly undergoing disruption, driven by the introduction of cutting-edge technologies and the advancement of existing ones. While React's introduction revolutionized JavaScript frameworks, newer technologies have emerged from the initial disruption.

At the time of this article’s publication, the official React documentation discourages the conventional usage of React and the generation of standard projects using create-react-app, instead recommending several frameworks, including Next.js.

In the forthcoming paragraphs, we will delve into the core features of Next.js, particularly in its latest version, and guide you through the process of creating a project.

What is Next.js?

Next.js stands as a revolutionary React framework that seamlessly blends Client-Side Rendering (CSR) and Server-Side Rendering (SSR) techniques. Led by Vercel, Next.js has gained popularity in the web development community for its user-friendly approach and the robust features it offers. Its comprehensive set of features empowers developers to create high-performance and SEO-friendly web applications with ease.

What are the main features of Next.js?

Server-Side and Client-Side Rendering for Optimal Performance
Next.js's ability to render pages on both the server and the client side offers a unique advantage. The ability to generate precompiled HTML on the server side ensures ultra-fast initial page loads and improves search engine indexing.

Automatic Routing for Seamless Navigation
Next.js simplifies navigation by employing a file-based routing system. Simply create files named “page” or “route” within the designated folder structure, and the routing is handled automatically.
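With the App Router, for example, a folder structure like this hypothetical one maps directly to URLs:

```
app/
├── page.tsx          → rendered at /
├── layout.tsx        → shared layout for every route below
└── blog/
    ├── page.tsx      → rendered at /blog
    └── [slug]/
        └── page.tsx  → rendered at /blog/my-first-post
```

The `[slug]` folder defines a dynamic segment, so a request to `/blog/my-first-post` renders `app/blog/[slug]/page.tsx` with `slug` available as a parameter.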

Hot Module Replacement (HMR) for Enhanced Development Efficiency
Next.js supports HMR, allowing developers to make code changes without refreshing the entire page. This feature significantly expedites the development process.

Built-In Image Optimization for Responsive Content
Next.js's built-in image optimization system ensures that images are loaded in the most efficient format and size based on the user's device and screen resolution, enhancing site performance.

Effortless API Integration for Data Management
Integrating external or internal APIs into a Next.js application is straightforward. This simplifies data management and eliminates the need to maintain a separate server application. Next.js version 14 further simplifies API integration, making it ideal for externalizing functionalities.

Special Pages for Enhanced Routing and Layout

Next.js introduces a unique feature: the automatic detection of files named in a specific pattern. This capability enables developers to efficiently structure their applications and enhance the overall user experience. For instance, Next.js recognizes files named "page.*" as the source for the page component that should be rendered for a specific route. The "layout.tsx" file, located at the root of the app directory and/or nested within route folders, plays a crucial role in defining a common layout structure that can be applied across multiple pages. Next.js automatically injects the page component into this layout, providing a consistent presentation across the application.

Server and Client Components

Traditional React components, familiar to front-end developers, are known as Client Components. They function identically to their counterparts. Next.js introduces the concept of Server Components, which generate HTML on the server side before delivering it to the client. This reduces the amount of JavaScript that the browser needs to compile. Now, let's delve into the home page of a newly created project.

Upon inspection, it's evident that this component resembles a conventional React component. Indeed, it is! However, there is a hidden distinction: it is a server component. This means it is compiled and rendered on the server side, delivering only the pre-rendered HTML to the client.

This type of component offers several advantages, including enhanced SEO, increased security due to its placement within a more protected portion of the code, and simplified development. For instance, server components can efficiently fetch data from the database without adhering to the traditional component lifecycle, enabling the use of straightforward async/await statements. However, certain functionalities, such as user interactions, cannot be handled on the server side. To address this, Next.js introduces the concept of Partial Rendering, which involves rendering the HTML on the server side while leaving "empty" spaces for specific components designated for client-side management.

As an example, consider a page that embeds a ShoppingCart component. The page itself is rendered on the server, but the cart needs access to client-side browser data to retrieve the saved items, so it is left as an empty slot in the server-rendered HTML and filled in by the browser. This distinction proves especially beneficial in scenarios like this one, where only a small part of the page genuinely requires client-side logic.

It's crucial to acknowledge that this approach also comes with certain restrictions. For example, functions cannot be passed as props from server components to client components, and client components cannot have server components as their direct descendants. For more in-depth information, refer to the official Next.js documentation.

It's worth noting that using actions instead of an API system requires more effort than simply adding a callback function to interact with the server. While callbacks are common and straightforward for front-end developers, they can introduce development and security challenges. Actions, on the other hand, introduce additional complexity, since even basic buttons must be wrapped in forms.

If you need to send additional information (such as a slug), actions force you to create a hidden input to send it. Both approaches have their advantages and disadvantages, so the choice depends on the project and on development preferences.
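As a sketch of the pattern described above (the component, action, and field names are invented for illustration), a server action wired to a form, with the slug travelling as a hidden input, could look like:

```tsx
// like-button.tsx — illustrative sketch of a server action with a hidden input

async function likePost(formData: FormData) {
  "use server"; // marks this function as a server action
  const slug = formData.get("slug"); // read the extra information
  // ...persist the like on the server, e.g. with a database call
}

export default function LikeButton({ slug }: { slug: string }) {
  return (
    <form action={likePost}>
      {/* even a basic button needs a form, and extra data a hidden input */}
      <input type="hidden" name="slug" value={slug} />
      <button type="submit">Like</button>
    </form>
  );
}
```

The form boilerplate is the extra effort mentioned above; in exchange, the interaction works without any client-side JavaScript and the action code stays on the server.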

Deployment with Vercel

Now that you've created your Next.js app, it's time to put it online for the world to see. Next.js is developed and maintained by Vercel, a leading deployment platform that integrates seamlessly with the framework to make the deployment process even easier. To get started, you'll need to create a Vercel account, a straightforward process that takes just a few minutes.

Once you've created your account, you can connect it to your GitHub or GitLab repository to automatically deploy your Next.js app whenever you push changes to your code.

This process can undoubtedly be used to generate one of the pre-designed templates, providing a head start with several components of the project already set up.

The system prompts you to enter the values to use for environment settings and the Git branch to use for deployment (main is the default). Once you provide the final confirmations, the deployment process will start; any errors encountered will be reported and stored in a dedicated section. Additionally, if you haven't modified the default settings, Vercel will automatically trigger a new deployment on every push, significantly expediting the release cycle. By default, Vercel provides a free domain, but you can purchase and replace it with a domain of your preference. And with that, our dashboard is ready for users, who will inevitably find the first bugs.

Conclusions

We've completed the foundational elements of our application, leaving us to decide its content and replicate the steps we've covered thus far. While this is a very simple dashboard, it effectively illustrates Next.js's distinctive features compared to other solutions, plain React in particular. We've seen a fully functional structure that's effortlessly implementable, SEO-friendly, and integrated with robust security measures, all while retaining React's flexibility.

Version 14 is currently in its early stages, with enhancements arriving every few months. It's an ever-evolving realm, vastly different from prior versions. Nonetheless, the advantages are already apparent, particularly the seamless handling of aspects that have long challenged developers.


Main Author: Davide Filippi, Front-End Developer @ Bitrock

Read More
AI Ethics

Artificial intelligence (AI) is the tech buzzword of the moment, and for good reason. Its transformative potential is already revolutionizing industries and shaping our future.

Its role in both everyday life and the world of work is now undeniable. Machine Learning, combined with IoT and industrial automation technologies, has emerged as a disruptive force in the evolution of production processes across all economic and manufacturing sectors.

The fact that these solutions can be widely deployed is intensifying the debate about the control and regulation of Artificial Intelligence. The ability to identify, understand and, where necessary, limit the applications of AI is clearly linked to the ability to derive sustainable value from it.

First and foremost, the focus must be ethical, to ensure that society is protected based on the principles of transparency and accountability. But it is also a business imperative. Indeed, it is difficult to incorporate AI into decision-making processes without an understanding of how the algorithm works, certainty about the quality of the data, and the absence of biases or preconceptions that can undermine conclusions.

The AI Act from principles to practice

The European Union's commitment to ethical AI has taken a significant step forward with the preliminary adoption of the AI Act, the first comprehensive regulation on AI, designed to create more favourable conditions for the development and application of this innovative technology. This legislation aims to ensure that AI deployed within EU countries is not only safe and reliable but also respects fundamental rights, upholds democratic principles, and promotes environmental sustainability.

It is also important to note that the AI Act is not about protecting the world from imaginary conspiracy theories or far-fetched scenarios in which new technologies take over human intelligence and lives. Instead, it focuses on practical risk assessments and regulations that address threats to individual well-being and fundamental rights.

In other words, while it may sound like the EU has launched a battle against robots taking over the world, the real purpose of the AI Act is more prosaic: to protect the health, safety and fundamental rights of people interacting with AI. Indeed, the new regulatory framework aims to maintain a balanced and fair environment for the deployment and use of AI technologies.

More specifically, the framework of the AI Act revolves around the potential risks posed by AI systems, which are categorised into four different classes:

  • Unacceptable risk: Systems deemed to pose an unacceptable threat to citizens' rights, such as biometric categorization based on sensitive characteristics or behavioural manipulation, are strictly prohibited within the EU.
  • High risk: Systems operating in sensitive areas such as justice, migration management, education, and employment are subject to increased scrutiny and regulation. For these systems, the European Parliament requires a thorough impact assessment to identify and mitigate any potential risks, ensuring that fundamental rights are safeguarded.
  • Limited risk: Systems like chatbots or generative AI models, while not posing significant risks, must adhere to minimum transparency requirements, ensuring users are informed about their nature and the training data used to develop them.
  • Minimal or no risk: AI applications such as spam filters or video games are considered to pose minimal or no risk and are therefore not subject to specific restrictions under the AI Act.

Understanding the above categorization is fundamental for organizations to accurately assess their AI systems. It ensures alignment with the Act's provisions, allowing companies to adopt the necessary measures and compliance strategies that are relevant to the specific risk category of their AI systems.

In short, the EU AI Act represents a pivotal moment in the EU's commitment to the responsible development and deployment of AI, setting a precedent for other jurisdictions around the world. By prioritizing ethical principles and establishing regulatory guidelines, it paves the way for a future where AI improves our lives without compromising our fundamental rights and values.

Infrastructure AI and Expertise

Our relationship with technology and work is about to change. As AI becomes the driving force of a new industrial revolution, it is imperative that we better equip ourselves to remain competitive and meet new business needs.

As we have seen, in the age of Artificial Intelligence (AI), where Large Language Models (LLMs) and Generative AI hold enormous potential, the need for robust control measures has become a critical component. While these cutting-edge technologies offer groundbreaking capabilities, their inherent complexity requires careful oversight to ensure responsible and ethical deployment. This is where Infrastructure AI systems come in, providing the necessary tools to manage AI models with precision and transparency.

Infrastructure AI systems like Radicalbit's platform are meticulously designed to address the critical challenge of AI governance: a solution that not only simplifies the development and management of AI, but also enables organizations to gain deep insight into the inner workings of these complex models. Its ability to simplify complex tasks and provide granular oversight makes it an indispensable asset for companies seeking to harness the power of AI ethically and effectively.

In this scenario, Europe faces a new challenge: the need for proficient expertise. Indeed, to unlock the full potential of AI, we need a mix of skills that includes domain and process expertise as well as technological prowess.

Infrastructure AI, the foundation upon which AI models are built and deployed, requires a diverse set of skills. It’s not just about coding and algorithms: it's about encompassing an ecosystem of platforms and technologies that enable AI to work seamlessly and effectively.

Newcomers to the AI field will therefore need a mix of technical skills and complementary expertise to be successful. Domain expertise is critical for identifying challenges, while process skills facilitate the development of AI solutions that are not only effective, but also sustainable and ethical. The ability to think critically, solve problems creatively, and prioritize goals over rigid instructions will be essential.

How to support responsible AI development

The need for companies to regularly monitor the production and use of AI models and their reliability over time is highlighted by the new regulatory framework being promoted by the EU. To prepare for the new transparency and oversight requirements, it is essential to focus on two areas. 

First, the issue of skills, particularly in the context of the Italian manufacturing industry. It’s well known that the asymmetry between the supply and demand for specialized jobs in technology and Artificial Intelligence, to name but two, slows down growth and damages the economy. 

In this transformative era, Italy is uniquely positioned with its strong academic background and technological expertise. By combining Italian skills and technologies, we can harness the transformative power of Infrastructure AI to promote responsible AI development and to establish ethical AI governance practices. Our history of innovation and technological advancement, together with the capabilities of Infrastructure AI, could provide a competitive advantage in this field.

The second point is technology. Tools that support the work of data teams (Data Scientists, ML Engineers) in creating AI-based solutions provide companies with a real competitive advantage. This is where the term MLOps comes in, referring to the methods, practices and devices that simplify and automate the machine learning lifecycle, from training and building models to monitoring and observing data integrity.

Conclusions

The adoption of the EU AI Act marks the beginning of a transformative era in AI governance. It’s a first, fundamental step towards ensuring that AI systems play by the rules, follow ethical guidelines and prioritize the well-being of users and society. 

It should now be clear that it can serve as a proactive measure to prevent AI from becoming wild and uncontrolled, and instead foster an ecosystem where AI operates responsibly, ethically and in the best interests of all stakeholders.

On this exciting path towards responsible and sustainable AI, both cutting-edge technology solutions and skilled workforce are essential to unlock the full potential of AI, while safeguarding our values and the well-being of society. The smooth integration of expertise and cutting-edge technology is therefore the real formula for unlocking the full potential of AI.

This is the approach we take at Bitrock and in the Fortitude Group in general, where a highly specialized consulting offering is combined with a proprietary MLOps platform, 100% made in Italy. And it is the approach that allows us to address the challenges of visibility and control of Artificial Intelligence. In other words, the ability to fully, ethically and consciously exploit the opportunities of this disruptive technology.

Read More
Quality Assurance

Across a diverse range of manufacturing industries, including engineering, pharmaceuticals, and IT, Quality Assurance (QA) plays a pivotal role in shaping and overseeing processes. It ensures the seamless integration of these processes into the company's ecosystem, fostering enhanced efficiency at both the organizational and production levels, ultimately leading to product refinement.

QA proactively anticipates and mitigates potential issues such as production flow disruptions, communication breakdowns, and implementation and design bugs. Unlike rigid, bureaucratic procedures, QA's process-defining approach aims to simplify and streamline operations, achieving both ease of use and superior product quality.

QA adopts a holistic project perspective, encompassing the entire process from the initial requirements gathering phase to the final reporting stage. QA and testing, while closely related, have distinct roles and responsibilities. In fact, QA upholds the proper execution of the entire process, supporting testing activities from start to finish.

Quality Assurance vs. Testing: understanding the main differences

Quality Assurance (QA) and Testing are two closely related but distinct fields within software development. QA is a broader concept that encompasses the entire process of ensuring that software meets its requirements and is of high quality. Testing is a specific, product-oriented activity within QA that involves executing test cases to identify and report defects.

In other words, QA is about preventing defects, while testing is about finding them. QA plays a central role in defining processes to implement the Software Development Life Cycle (SDLC), a structured framework that guides the software development process from conception to deployment.

The definition of an SDLC model is intricate, as it entails consideration of various factors, including the company's organizational structure, the type of software developed (for instance, an Agile model does not fit safety-critical software), the technologies employed and the organization's maturity level. QA plays an important role in shaping the SDLC, not merely contributing but serving as an integral component. QA oversees the establishment and execution of the Software Testing Life Cycle (STLC), where testing activities overlap with development to ensure product testability. In addition, QA continuously monitors and refines processes, ensuring that the designed workflow aligns with the desired outcomes.

On the other hand, testing plays a corrective role, actively seeking to identify and verify bugs before the product reaches production. Increasingly, this work is also pushed earlier in the cycle: the proactive "Shift-Left" approach involves testers collaborating with product owners in the early stages of requirement definition to ensure clarity and testability.

The cost of bug fixing escalates with each development phase, making early detection during requirement definition both faster and more cost-effective. As development progresses, costs rise significantly, particularly after release into the test environment and up to production, where time and resource constraints are significantly higher. Furthermore, uncovering bugs in production damages stakeholder trust.

Testing encompasses a diverse range of types and techniques tailored to specific development phases and product categories. For instance, unit tests are employed during code writing, while usability, portability, interruptibility, load, and stress tests are conducted for mobile apps.
In general, QA and Testing are both essential for ensuring that software meets its requirements and is of high quality. QA professionals provide the framework for achieving quality, while testers are responsible for executing the tests that identify and report defects.

How do Automation and Artificial Intelligence impact Quality Assurance and Testing?

Both QA and Testing are evolving to keep pace with the latest trends in the IT industry, with a particular focus on Automation and Artificial Intelligence (AI).

Automation is having a profound impact on processes, simplifying, standardizing, and reducing software management costs. This has led to an increasing synergy between QA and DevOps within companies, with QA becoming an integral part of the testing and development process and ensuring its presence at every level.

Automation also plays a crucial role in testing, reducing execution times, mitigating human errors in repetitive test steps, and freeing up resources for testing activities where automation is less effective or impractical.

Examples of automated testing include regression tests, performance tests, and integration tests, the latter of which provide significant benefits for APIs. Various automation methods exist, each with advantages suited to specific contexts. For tests with lower complexity and abstraction, simpler test methods are recommended. Types of automated tests include linear scripting, scripting using libraries, keyword-driven testing and model-based testing. Different tools are available to support these methodologies, and the choice depends on factors such as the System Under Test (SUT) and the test framework.
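To make one of these methods concrete, here is a minimal, self-contained sketch of keyword-driven testing (the keywords and steps are invented for illustration): each test case is pure data, a list of keywords, which a small runner maps onto actions in a keyword library.

```typescript
// A minimal keyword-driven test runner: test cases are data, not code.
type Context = { url?: string; loggedIn: boolean; log: string[] };
type Step = { keyword: string; arg?: string };

// Keyword library: each keyword maps to an action on the shared context.
const keywords: Record<string, (ctx: Context, arg?: string) => void> = {
  open_page: (ctx, arg) => { ctx.url = arg; ctx.log.push(`opened ${arg}`); },
  login: (ctx) => { ctx.loggedIn = true; ctx.log.push("logged in"); },
  assert_logged_in: (ctx) => {
    if (!ctx.loggedIn) throw new Error("expected user to be logged in");
  },
};

function runTestCase(steps: Step[]): Context {
  const ctx: Context = { loggedIn: false, log: [] };
  for (const step of steps) {
    const action = keywords[step.keyword];
    if (!action) throw new Error(`unknown keyword: ${step.keyword}`);
    action(ctx, step.arg);
  }
  return ctx;
}

// A test case expressed purely as keywords:
const result = runTestCase([
  { keyword: "open_page", arg: "/login" },
  { keyword: "login" },
  { keyword: "assert_logged_in" },
]);
console.log(result.log.join("; ")); // prints "opened /login; logged in"
```

In a real tool the keyword library would drive a browser or an API client, but the shape is the same: non-programmers compose tests from keywords, while engineers maintain the library.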

Recently, AI has emerged as a valuable tool for test support. Among the various methods, the "Natural Language Processing" (NLP) approach is particularly compelling, as it allows test cases to be written in a descriptive mode using common language. This will empower a broader population to perform automated tests with ease.

Quality Assurance behind innovation and success

The concept of "Quality" has been an essential element of success since the ancient Phoenicians employed inspectors to ensure quality standards were met.

Quality evolves alongside progress, influencing every field from manufacturing to technology. For instance, the mobile phone transformation from a simple calling device to today's versatile smartphones highlights the importance of quality driven by user feedback.

In a nutshell, Quality Assurance is an essential process for a successful business, as well as the key differentiator between successful products and those that fail to meet expectations. QA plays a crucial role in distinguishing a company from its competitors and achieving its goals of innovation and success by maintaining the high quality of its products and services.


Are you curious to learn more about the main differences between Quality Assurance and Testing? Are you interested in further exploring the future perspectives with Automation and Artificial Intelligence? Listen to the latest episode of our Bitrock Tech Radio Podcast, or get in contact with one of our experienced engineers and consultants!


Main Author: Manuele Salvador, Software Quality Automation Manager @ Bitrock

Read More

Bridging the Gap Between Machine Learning Development and Production

In the field of Artificial Intelligence, Machine Learning has emerged as a transformative force, empowering businesses to unlock the power of data and make informed decisions. However, bridging the gap between developing ML models and deploying them into production environments can pose a significant challenge. 

In this interview with Alessandro Conflitti (Head of Data Science at Radicalbit, a Bitrock sister company), we explore the world of MLOps, delving into its significance, challenges, and strategies for successful implementation.

Let's dive right into the first question.

What is MLOps and why is it important?

MLOps is an acronym for «Machine Learning Operations». It describes a set of practices ranging from data ingestion, development of a machine learning model (ML model), its deployment into production and its continuous monitoring. 

In fact, developing a good machine learning model is just the first step in an AI solution. Imagine for instance that you have an extremely good model but you receive thousands of inference requests (input to be predicted by the model) per second: if your underlying infrastructure does not scale very well, your model is going to immediately break down, or anyway will be too slow for your needs.

Or, imagine that your model requires very powerful and expensive infrastructures, e.g. several top notch GPUs: without a careful analysis and a good strategy you might end up losing money on your model, because the infrastructural costs are higher than your return.

This is where MLOps comes into the equation, in that it integrates an ML model organically into a business environment.

Another example: very often raw data must be pre-processed before being sent into the model and likewise the output of an ML model must be post-processed before being used. In this case, you can put in place an MLOps data pipeline which takes care of all these transformations.

One last remark: a very hot topic today is model monitoring. ML models must be maintained at all times, since they degrade over time, e.g. because of drift (which roughly speaking happens when the training data are no longer representative of the data sent for inference). Having a good monitoring system which analyses data integrity (i.e. that data sent as input to the model is not corrupted) and model performance (i.e. that the model predictions are not degrading and still trustworthy) is therefore paramount.
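To make the drift idea concrete, here is a rough, self-contained sketch of one common drift score, the Population Stability Index (PSI), which compares how feature values were distributed in training against how they are distributed in live inference traffic. The bin counts below are invented for illustration.

```typescript
// Population Stability Index (PSI): a simple drift score between the
// binned distribution of training data ("expected") and live data ("actual").
function psi(expected: number[], actual: number[]): number {
  const toShares = (counts: number[]) => {
    const total = counts.reduce((a, b) => a + b, 0);
    // clamp with a small epsilon so empty bins do not produce log(0)
    return counts.map((c) => Math.max(c / total, 1e-6));
  };
  const p = toShares(expected);
  const q = toShares(actual);
  // PSI = sum over bins of (p_i - q_i) * ln(p_i / q_i)
  return p.reduce((sum, pi, i) => sum + (pi - q[i]) * Math.log(pi / q[i]), 0);
}

// Nearly identical distributions give a score near 0 (no drift)...
console.log(psi([100, 200, 100], [95, 210, 95]));
// ...while a strong shift gives a large score; a common rule of thumb
// flags PSI above 0.2 as meaningful drift worth investigating.
console.log(psi([100, 200, 100], [10, 50, 340]));
```

A monitoring system would compute a score like this per feature on a schedule and raise an alert when it crosses a threshold.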

What can be the different components of an MLOps solution?

An MLOps solution may include different components depending on the specific needs and requirements of the project. A common setup may include, in order:

  • Data engineering: As a first step, you have to collect, prepare and store data; this includes tasks such as data ingestion, data cleaning and a first exploratory data analysis.
  • Model development: This is where you build and train your Machine Learning model. It includes tasks such as feature engineering, encoding, choosing the right metrics, selecting the model's architecture, training the model, and hyperparameter tuning.
  • Experiment tracking: This can be seen as a part of model development, but I like to highlight it separately because if you keep track of all experiments you can refer to them later, for instance if you need to tweak the model in the future, or if you need to build similar projects using the same or similar datasets. More specifically you keep track of how different models behave (e.g. Lasso vs Ridge regression, or XGBoost vs CatBoost), but also of hyperparameter configurations, model artefacts, and other results.
  • Model deployment: in this step you put your ML model into production, i.e. you make it available for users who can then send inputs to the model and get back predictions. What this looks like can vary widely, from something as simple as a Flask or FastAPI to much more complicated solutions.
  • Infrastructure management: with a deployed model you need to manage the associated infrastructure, taking care of scalability, both vertically and horizontally, i.e. being sure that the model can smoothly handle high–volume and high–velocity data. A popular solution is using Kubernetes, but by no means is it the only one.
  • Model monitoring: Once all previous steps are working fine you need to monitor that your ML model is performing as expected: this means on the one hand logging all errors, and on the other hand it also means tracking its performance and detecting drift.

What are some common challenges when implementing MLOps?

Because MLOps is a complex endeavour, it comes with many potential challenges, but here I would like to focus on aspects related to Data. After all, that is one of the most important things; as Sherlock Holmes would say: «Data! data! data! (...) I can't make bricks without clay.»

For several reasons, it is not trivial to have a good enough dataset for developing, training and testing a ML model. For example, it might not be large enough, it might not have enough variety (e.g. think of a classification problem with very unbalanced, underrepresented classes), or it might not have enough quality (very dirty data, from different sources, with different data format and type, plenty of missing values or inconsistent values, e.g. {“gender”: “male”, “pregnant”: True}).

Another issue with Data is having the right to access it. For confidentiality or legal (e.g. GDPR) reasons, it might not be possible to move data out of a company server, or out of a specific country (e.g. financial information that cannot be exported) and this limits the kinds of technology or infrastructures that can be used, and deployment on cloud can be hindered (or outright forbidden). In other cases only a very small curated subset of data can be accessed by humans and all other data are machine–readable only.

What is a tool or technology that you consider to be very interesting for MLOps but might not be completely widespread yet?

This might be the Data Scientist in me talking, but I would say a Feature Store. You surely know about Feature Engineering, which is the process of extracting new features or information from raw data: for instance, having a date, e.g. May 5th, 1821, compute and add the corresponding week day, Saturday. This might be useful if you are trying to predict the electricity consumption of a factory, since often they are closed on Sundays and holidays. Therefore, when working on a Machine Learning model, one takes raw data and transforms it into curated data, with new information/features and organised in the right way. A feature store is a tool that allows you to save and store all these features.

In this way, when you want to develop new versions of your ML model, or a different ML model using the same data sources, or when different teams are working on different projects with the same data sources, you can ensure data consistency. 

Moreover, preprocessing of raw data is automated and reproducible: for example anyone working on the project can retrieve curated data (output of feature engineering) computed on a specific date (e.g. average of the last 30 days related to the date of computation) and be sure the result is consistent.

Before we wrap up, do you have any tips or tricks of the trade to share?

I would mention three things that I find helpful in my experience. 

My first suggestion is to use a standardised structure for all your Data Science projects. This makes collaboration easier when several people are working on the same project and also when new people are added to an existing project. It also helps with consistency, clarity and reproducibility. From this perspective I like using Cookiecutter Data Science.

Another suggestion is using MLflow (or a similar tool) for packaging your ML model. This makes your model readily available through APIs and easy to share. And finally I would recommend having a robust CI/CD (Continuous Integration and Continuous Delivery) in place. In this way, once you push your model artefacts to production the model is immediately live and available. And you can look at your model running smoothly and be happy about a job well done.


Main Author: Dr. Alessandro Conflitti, PhD in Mathematics at University of Rome Tor Vergata & Head of Data Science @ Radicalbit (a Bitrock sister company).

Interviewed by Luigi Cerrato, Senior Software Engineer @ Bitrock

Read More

On September 29, Fortitude Group held its annual convention, bringing together colleagues from different places for a day of fun and teamwork. The highlight of the event was the Fortigames, a two-hour challenge featuring soccer, volleyball, table tennis and even board games. To make the event even more exciting, a custom Fortitude Convention Companion App was developed, focused mostly on the games, but also convenient for helping attendees navigate the event schedule and locations.

The Fortitude Convention Companion App

The Fortitude Convention Companion App, developed as a Progressive Web App (PWA), was a marvel of teamwork and creativity. 

The Front-End team behind it (a special thanks goes to our colleagues: Stefano Bruno - Head of UX/UI, Mobile and Front-End Engineering; Gabriella Moro - Lead UX Designer and Daniel Zotti - Senior Front-End Developer) worked tirelessly to create a user-friendly and visually appealing interface. Despite the challenges and tight deadline, the team developed a successful app that made the Fortigames even more interactive.

The app was developed with a combination of three powerful technologies and development practices. 

Qwik, a modern and innovative framework, was chosen for its exceptional performance and developer-friendly experience. Its key advantages include:

  • Resumability: This unique Qwik feature allows components to be resumed and rendered efficiently, leading to faster page load times and a smoother user experience.
  • Hydration: Qwik's hydration process is optimized for performance, ensuring that only the necessary components are hydrated and rendered, reducing the initial load time and improving overall performance.
  • Developer Experience: Qwik offers a delightful developer experience with its intuitive API, excellent tooling and strong community support.
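As a small, invented illustration of Qwik's component model (the counter is not from the actual app):

```tsx
import { component$, useSignal } from "@builder.io/qwik";

// Nothing runs on the client until the user clicks; Qwik then resumes
// only the event handler it needs instead of re-executing the whole page.
export const ScoreCounter = component$(() => {
  const score = useSignal(0);
  return (
    <button onClick$={() => score.value++}>Score: {score.value}</button>
  );
});
```

The `$` suffix marks boundaries where Qwik can split and lazily load code, which is what makes resumability possible.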

Supabase, an open-source Firebase alternative, was selected for its complete backend solution, including a real-time database, authentication, and storage. Its key advantages include:

  • Real-time Database: Supabase's real-time database enables live data synchronization between the app and the backend, ensuring that users always have access to the latest data.
  • Authentication: Supabase offers built-in authentication features, including social login, email verification, and role-based access control, simplifying user management.
  • Storage: Supabase provides secure and scalable storage for images, videos, and other files, making it easy to store and manage application data.
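A minimal sketch of how such an app might use the Supabase JavaScript client (the "matches" table and its columns are hypothetical, not the app's actual schema):

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!, // project URL
  process.env.SUPABASE_ANON_KEY! // public anon key
);

async function watchScores() {
  // Read the current standings from a hypothetical "matches" table...
  const { data, error } = await supabase
    .from("matches")
    .select("game, tigers_score, dragons_score");
  if (error) throw error;
  console.log("current standings:", data);

  // ...and subscribe to live updates so every device sees new scores instantly
  supabase
    .channel("live-scores")
    .on(
      "postgres_changes",
      { event: "UPDATE", schema: "public", table: "matches" },
      (payload) => console.log("score changed:", payload.new)
    )
    .subscribe();
}
```

The real-time subscription is what keeps every attendee's phone in sync without polling.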

Vercel, a high-performance cloud platform, was chosen for its seamless deployment and powerful features for scaling and optimizing applications. Its key advantages include:

  • Seamless Deployment: Vercel integrates seamlessly with GitHub and other Git providers, enabling easy deployment of applications directly from the code repository.
  • Global Edge Servers: Vercel's global edge servers ensure that applications are delivered with low latency and high performance to users around the world.
  • Scalability and Optimization: Vercel provides powerful features for scaling applications to handle increasing traffic and optimizing performance for various devices and network environments.

The app's development process also involved a combination of tools and techniques to ensure a streamlined and efficient workflow:

  • Balsamiq was used for wireframing, providing a visual representation of the app's layout and user interface.
  • Figma, the UI design tool, was employed to create detailed and interactive prototypes, allowing the team to refine the app's visual elements and user experience.
  • Font Awesome, a comprehensive icon library, provided a consistent and visually appealing set of icons for the app's interface.
  • An existing SCSS structure was adopted to enhance development efficiency, leveraging its predefined styles and maintainability. Additionally, CSS modules were employed to encapsulate CSS rules within specific components, ensuring code isolation and preventing unintended style overrides.
  • GitHub was used for version control and collaboration, storing and managing the app's open-source code and enabling the team to track changes, collaborate effectively, and maintain a consistent codebase.

Results

The result was a polished and user-friendly app that was essential to the success of the Fortigames. The Tigers (Yin) and the Dragons (Yang) teams battled fiercely on the soccer field, volleyball court, and table tennis table, with the Dragons winning in the end. The app played a vital role in bringing teams together and in making the competition even more fun.

The Fortigames app was a game-changer, showing the potential of technology to enhance teamwork, creativity, and fun. Thanks to the dedication and effort of the team behind it, the app provided a seamless platform for participants to connect, track results, and celebrate successes. On top of that, the app helped in making the Bitrock & Fortitude Convention 2023 a truly memorable experience.

Want to learn more about how the Fortigames app was built? Check out our colleague Daniel Zotti's complete article and the GitHub repo.


Main Author:  Daniel Zotti, Senior Frontend Developer @ Bitrock

Read More
Agile Software Development

How can it benefit your company?

In today's business landscape, change is a constant. Companies that want to keep up with the times and stay competitive have to adopt more agile approaches to software development. Agile methodologies are a set of principles and practices that enable quick adaptation to customer demands, market changes, and technological advancements. This adaptability allows businesses to seize new opportunities and effectively mitigate risks, ensuring growth and long-term success.

What is the main purpose?

Agile methodology is a development approach based on principles such as flexibility, collaboration, and continuous value delivery.  The main objective of Agile methodology is to quickly and effectively respond to changes in project requirements and customer needs. It focuses on efficiently creating high-quality products with an emphasis on customer satisfaction.  It is an alternative to traditional Waterfall development models, which complete project phases sequentially and with little flexibility.
Agile is not defined by a series of specific development ceremonies or techniques. Instead, it's a group of methodologies that focus on tight feedback loops and continuous improvement.  When discussing Agile, it is essential to mention the Manifesto for Agile Software Development which emphasizes its key principles. The first one is to prioritize people, their individuality, and their interactions, rather than focusing solely on processes and tools.

Scrum: an example of Agile Methodology

One of the most well-known examples of Agile methodology is Scrum. Scrum is a framework that organizes work into fixed time units called "sprints", usually lasting 2-4 weeks. During a sprint, the team collaborates on a specific set of tasks known as the Sprint Backlog. Scrum defines three key roles: the Product Owner, the Development Team, and the Scrum Master. These roles work together to ensure the successful implementation of the Scrum framework and the delivery of high-quality products.

  • The Product Owner represents the needs of the customers and the business
  • The Development Team is responsible for doing the work
  • The Scrum Master supports the Scrum process by removing obstacles and promoting collaboration

What are the real benefits for customers?

Adopting an Agile methodology offers numerous advantages to customers seeking successful project completion and more effective satisfaction of their needs. Here are some of the key benefits that customers can expect from Agile:

Faster Delivery

Agile development enables faster delivery of products and services, promoting a quicker time-to-market and maximizing product lifecycle. Customers can see a positive impact on new features or improvements much earlier compared to traditional development models that may take months or years to deliver a complete product.

Active Involvement

Agile promotes active customer involvement in the development process. Customers are encouraged to participate in reviews, provide constant feedback, and engage in project planning sessions. This ongoing and direct involvement ensures that the developed product aligns with customer expectations, increasing overall satisfaction and minimizing the risk of creating an unused product.  By putting customers first and delivering exceptional experiences, companies can foster long-term loyalty and trust.

Flexibility and Adaptability

Requirements may change and customers may have new ideas during development. Agile allows quick adaptation to such changes without significant disruptions to the process.  Being able to anticipate and meet the changing needs of customers is a competitive advantage in today's ever-changing business landscape.

Continuous Improvement

Quality is at the heart of Agile, with a focus on constant testing and validation. Developers work closely with customers to ensure the product meets the required quality standards. Constant testing and validation drive improvement, enable experimentation with new ideas, and facilitate the implementation of changes throughout the software development cycle, ensuring a high-quality final product.

Risk Reduction

Agile development uses iterative, feedback-driven processes that surface risks sooner and ensure that the final product meets customer requirements, resulting in fewer, cheaper mistakes and less time and cost spent fixing errors.


Is Agile always the answer?

As observed up to this point, implementing agile methodology in an organization can bring numerous benefits, such as increased flexibility, improved communication, and faster delivery of products or services. However, there are also some situations in which adopting this methodology may not be the most appropriate choice and there are potential drawbacks and challenges to consider. Among others, we can mention:

Ineffective Planning and Execution

Ineffective planning and execution can lead to delays and inefficiencies.  Having a clear understanding of project goals, timelines, and milestones is essential for effective planning and execution, as it helps team members stay focused, prioritize tasks, and track progress towards the overall project success.

Unready Organizational Culture

When a team is resistant to change, it can have various negative impacts on the implementation of new processes, technologies, or strategies, which can ultimately lead to delays in project completion. Fostering collaboration, open communication, flexibility and providing team members with the freedom to explore new ideas are essential strategies for cultivating a culture of continuous improvement.

Lack of Stakeholder Involvement

When stakeholders are not fully engaged, conflicts and delays in project execution can happen. Educating stakeholders on the benefits of Agile and involving them in the early stages of the project can help address challenges related to their engagement.

Unclear Goals and Priorities 

Agile methodologies, while effective in enabling teams to adapt to changing circumstances and emerging requirements efficiently, can also expose them to risks. One of these risks is the potential for goals and priorities to become unclear, which can lead to confusion and miscommunication within the team, ultimately impeding progress and hindering the team's ability to deliver value effectively.

Insufficient Resources

When the team is faced with a lack of resources, they may become overloaded with work, which can negatively impact their efficiency. It’s essential to ensure that all required resources, including team members, appropriate tools, and a supportive organizational culture, are provided in order to avoid burnout and enhance productivity.

Individuals and interactions over processes and tools

Transitioning to an Agile methodology involves a shift in how we approach projects and tasks. We redefine our way of working and move away from traditional, sequential methods, embracing a more iterative and collaborative approach that allows us to deliver value more quickly and effectively. As we mentioned in the beginning, the first value in the Agile Manifesto is “Individuals and interactions over processes and tools”. In other words: people drive the development process and respond to business needs. By promoting a culture of collaboration, continuous improvement, and adaptability, we’re able to empower each individual to contribute with their unique talents and perspectives. By applying Agile principles and practices, companies can create an environment where innovation thrives and customer needs are met.

 Working in an Agile team is not just a destination, but a mindset that keeps us responsive, adaptable, and focused on delivering value.


Main Author: Cristian Bertuccioli, Scrum Master Certified and Senior Software Engineer @ Bitrock

Read More
Solid.js

Reactivity has become increasingly popular in recent months, with many frameworks and libraries incorporating it into their core or being heavily influenced by it. Vue.js has reactivity at its core, while idiomatic Angular has adopted RxJS, and MobX has become popular among React developers as an alternative to the common Redux pattern. Reactivity has been one of the leading inspirations behind the original philosophy of React.

However, most libraries still rely on a VDOM or on some sort of batching process in the background to replicate reactive behavior, even when reactivity is first class.

Solid.js is a reactive library that prioritizes deep, granular reactivity and is designed to offer excellent performance and responsiveness without relying on the VDOM or batching processes. While Solid.js offers a developer experience similar to React, it requires a different approach to component reasoning. Let’s take a closer look.

SolidJS in three words: reactive, versatile and powerful

Upon initial inspection, Solid.js code may appear to be similar to React code with a unique syntax. However, the API provided by Solid.js is more versatile and powerful, as it is not limited by the VDOM and instead relies on deep, granular reactivity. In Solid.js, components are primarily used to organize and group code, and there is no concept of rendering. We can explore a simple example:
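As a sketch of the pattern described next, consider a hypothetical React counter that calls its state setter during render (all names here are illustrative):

```jsx
import { useState } from "react";

// React: calling the setter during render immediately makes the render stale
function Counter() {
  const [value, setValue] = useState(0);
  setValue(value + 1); // schedules another render on every render
  return <div>{value}</div>;
}
```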

Our React component is going to re-render endlessly. We render our interface, only to make it immediately stale by calling setValue. Every local state mutation in React triggers a re-rendering as components produce the nodes of VDOM. The process of re-rendering the interface is complex and resource-intensive, even with the use of the VDOM and React's internal optimizations. While React and Vue.js have implemented techniques to avoid unnecessary work, there are still many complex operations happening in the background.
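A Solid.js version of the same hypothetical counter behaves very differently:

```jsx
import { createSignal } from "solid-js";

// Solid.js: the component body runs exactly once
function Counter() {
  const [value, setValue] = createSignal(0);
  setValue(value() + 1); // just updates the signal; Counter never re-runs
  return <div>{value()}</div>; // only this text node tracks the signal
}
```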

Solid.js updates the value and that's it; once the component is mounted, there is no need to run it again. Unlike React, Solid.js will not call the component again. Solid.js doesn’t even require createSignal to be declared in the same scope as the component:
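For instance, the signal can live at module scope (a minimal illustrative sketch):

```jsx
import { createSignal } from "solid-js";

// The signal lives outside the component, at module scope
const [count, setCount] = createSignal(0);

function Counter() {
  // The JSX expression subscribes to the signal; the function itself
  // runs only once, when the component is created
  return <button onClick={() => setCount(count() + 1)}>{count()}</button>;
}
```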

In Solid.js, components are referred to as “vanishing” because they are only used to organize the interface into reusable blocks and do not serve any other purpose beyond mounting and dismounting. 

Solid.js provides more flexibility than React when it comes to managing component state. Unlike React, Solid.js does not require adherence to the “rules of hooks”, and instead allows developers to reason around scopes of modules and functions to determine which components access which states. This fine granularity means that only the element displaying the signal’s value needs to be updated; all the operations needed to maintain a VDOM are unnecessary.

Solid.js uses proxies to hide subscriptions within the function that displays the value. This allows elements consuming the signals to become the contexts that are actively called again. In contrast to React, Solid.js functions are similar to constructors that return a render method (like the JSX skeleton), while React functions are more like the render method itself.

Dealing with props

In Solid.js, props are backed by getters rather than plain values, so they need to be handled in a special way to maintain reactivity. Helpers such as splitProps and mergeProps retain reactivity, while spreading or destructuring the props object breaks it. This process is more involved than using the spread and rest operators in React.

Note that we aren’t using parentheses to call a getter in the case below:
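A hypothetical greeting component illustrates the idea:

```jsx
function Greeting(props) {
  // props.name looks like a plain property access, but it is backed by a
  // getter, so this JSX expression stays reactive - no parentheses needed
  return <p>Hello, {props.name}!</p>;
}

// Destructuring or spreading would read the getter once and lose updates:
// const { name } = props; // breaks reactivity
```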

We can also access the value directly.

Although the process may seem familiar, the underlying mechanism is completely different. React re-renders the child components when the props change, which can cause a lot of work in the background to reconcile the new virtual DOM with the old. Vue.js avoids this problem by doing simple comparisons of props, similar to wrapping a functional component inside React’s memo method. Solid.js propagates down the hierarchy of the signals, and only the elements that consume the signal are run again.

Side effects

Side effects are a common concept in functional programming that occur when a function relies on or modifies something outside its parameters. Examples of side effects include subscribing to events, calling APIs, and performing expensive computations that involve external state. In Solid.js, effects are similar to elements and subscribe to reactive values. The use of a getter simplifies the syntax compared to React.

In React, the useEffect hook is used to handle side effects. When using useEffect, a function that performs the work is passed as an argument, along with an optional array of dependencies that might change. React does a shallow comparison of the values in the array and runs the effect again if any of them change.

When using React, it can be frustrating to pass all values as props or states to avoid issues with the shallow comparison that React does. Passing an object is not a good solution because it may reference an anonymous object that is different at each render, causing the effect to run again. Solutions to this problem involve declaring multiple objects or being more literal, which adds complexity.
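The underlying JavaScript behavior is easy to demonstrate: two structurally identical object literals are different references, which is why a shallow comparison flags them as changed on every render.

```javascript
// Two structurally identical literals are distinct references,
// so a shallow (reference) comparison treats them as "changed"
const a = { query: "solid", page: 1 };
const b = { query: "solid", page: 1 };

console.log(a === b);             // false - new reference each time
console.log(a.query === b.query); // true  - primitives compare by value
```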

In Solid.js, effects run on any signal mutation. The reference to the signal is also the dependency.
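A minimal sketch (names hypothetical):

```jsx
import { createSignal, createEffect } from "solid-js";

const [name, setName] = createSignal("Ada");

// Reading name() inside the effect is what registers the dependency;
// no dependency array is needed
createEffect(() => {
  console.log(`Hello, ${name()}`);
});

setName("Grace"); // the effect runs again with the new value
```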

Just like React, the effect will be run again when the values change without declaring an array of dependencies or any comparison in the background. This saves time and work, while avoiding bugs related to dependencies. However, it is still possible to create an infinite loop by mutating the signal that the effect is subscribed to, so it should be avoided.

createEffect can be thought of as the Solid.js equivalent of subscribing to observables in RxJS, in which we’re listening to all “consumed” observables - our signals - at the same time.

React users may be familiar with how useEffect replaces componentDidMount, componentWillUnmount, and componentDidUpdate. Solid.js provides dedicated hooks for handling components: onMount and onCleanup. These hooks run when the component is first mounted or removed from the DOM, respectively. Their purpose in Solid.js is more explicit than using useEffect in React.

Handling slices of application state

In complex applications, using useState and useEffect hooks may not be enough. Passing down many variables between components, calling methods deep down the tree, and keeping various elements in sync with each other can be challenging. The shopping cart, language selector, user login, and theme are just a few examples of the many features that require some slice of shared state.

In React, there are various techniques available to handle complex applications. One approach is to use a context to allow descendant components to access a shared state. However, to avoid unnecessary re-renders, it is important to memoize and select the specific data needed. React provides native methods like useReducer and memoization techniques such as useMemo or wrapping components in  React.memo to optimize rendering and avoid unnecessary re-renders. 

Alternatively, many developers opt to define their Redux store and each of the slices. As Redux has evolved, modern Redux has become much easier to work with compared to its early days. Developers now have the option to use hooks and constructor functions, which handle concerns in a declarative manner. This eliminates the need to define constants, action creators, and other related elements in separate files for each slice.

Solid.js provides support for various state management libraries, and offers several  methods to implement different patterns. One useful method is the ability to wrap requests using resources.

Unlike React state hooks that hook into the virtual DOM, Solid.js signals are independent units that allow developers to write idiomatic JavaScript. This enables scoping signals in other modules and restricting access through methods, effectively turning signals into private singletons.

Hence, modules can act as slices of state, exporting public methods to interact with the data without the use of any external library. By declaring signals in a module scope, they can expose publicly available interfaces to shared state in all components. If signals were declared in components instead, they would be scoped to the function context, similar to the behavior of useState in React.
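A sketch of this pattern, with hypothetical names: a cart module that keeps its setter private and exports only a read accessor and a few public methods.

```jsx
// cart.js - a module-scoped signal acting as a shared slice of state
import { createSignal } from "solid-js";

const [items, setItems] = createSignal([]);

// The setter stays private to the module; only these are exported
export const cartItems = items;
export const addItem = (item) => setItems((prev) => [...prev, item]);
export const clearCart = () => setItems([]);
```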

Furthermore, in Solid.js, API calls can be easily handled using the createResource method. This method allows developers to fetch data from an API and check the request status in a standardized manner. This function is similar to the createSignal method in Solid.js, which creates a signal that tracks a single value and can change over time, and the popular useQuery library for React.
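A sketch of the pattern, assuming a hypothetical /api/users endpoint:

```jsx
import { createResource, Show } from "solid-js";

const fetchUser = (id) => fetch(`/api/users/${id}`).then((r) => r.json());

function Profile(props) {
  // user.loading and user.error track the request status for us
  const [user] = createResource(() => props.id, fetchUser);
  return (
    <Show when={!user.loading} fallback={<p>Loading…</p>}>
      <p>{user()?.name}</p>
    </Show>
  );
}
```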

While it may work to handle signals as different getters, at some point, it will be necessary to deal with complex, deep objects, mutating values at different levels, accessing granular slices, and in general operating over objects and arrays. The solid-js/store module provides a set of utilities for creating a store, which is a tree of signals to be accessed and mutated individually in a fully reactive manner. This is an alternative to stores in other libraries such as Redux or Pinia in Vue.js.

To set data in a Solid.js store, we use the setter function returned by createStore, which works much like a signal setter. It has two modes: we can pass an object that will be merged with the existing state, or pass a number of path arguments that walk down the store to the property or object that will be mutated.

For instance, let’s suppose that we have the store shown below:
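A hypothetical store, used only for illustration:

```jsx
import { createStore } from "solid-js/store";

const [state, setState] = createStore({
  user: { name: "Jane", age: 34 },
  theme: "dark",
});
```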

We can set the user’s age to 35 by passing an object with the properties we want to update, along with a path that specifies where in the state tree to apply the update:
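Assuming the store was created with createStore and its setter is named setState, the update could look like:

```jsx
// The leading arguments form a path into the store; the final object
// is merged into whatever lives at that path
setState("user", { age: 35 });

// Equivalently, walk all the way down to the property itself
setState("user", "age", 35);
```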

This will update the age property of the user object in the state tree. Furthermore, we can update the store object by passing an object that will be merged into the current one.

If we were to omit the user attribute as first parameter, we would replace the user object entirely:
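Sketched with the same hypothetical user object:

```jsx
// Merging at the top level replaces state.user wholesale,
// so any property not listed here (e.g. age) is dropped
setState({ user: { name: "Jane" } });
```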

Since the store is a tree of signals, which is itself a proxy, we can access the values directly using the dot syntax. Mutating a single value will cause the element to render again, just like subscribing to a signal value.

Store utilities

We have two useful methods to update our state. If we’re used to mutating a Redux store using the immer library, we can mutate the values in place using a similar syntax with the produce method:
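A sketch, assuming a hypothetical user object in a store whose setter is named setState:

```jsx
import { produce } from "solid-js/store";

// Mutate a draft in place, immer-style; Solid applies the changes reactively
setState(
  "user",
  produce((user) => {
    user.age += 1;
    user.name = "Jane Doe";
  })
);
```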

The produce method returns a draft version of the original object, which is a new object, and any changes made to the draft object are tracked similarly to using immer. We can also pass a reconcile function call to setState. This is particularly useful when we want to match elements in the array based on a unique identifier, rather than simply overriding the entire array. For instance, we can update a specific object based on its id property by passing a reconcile function that matches the object with the same id:
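A sketch, assuming a hypothetical todos array in the store and a freshTodos payload fetched elsewhere:

```jsx
import { reconcile } from "solid-js/store";

// Diff the incoming data against the current array, matching items by id,
// so unchanged items keep their identity and don't trigger updates
setState("todos", reconcile(freshTodos, { key: "id" }));
```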

This will update the object in the array with the same id, or add it to the end of the array if no matching object is found.

We can group multiple state updates together into a single transaction using Solid's batch utility. This can be useful when we need to make multiple updates to the state atomically, such as when updating multiple properties of an object:
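In current Solid.js this grouping is exposed as the batch helper; a sketch with the same hypothetical user object:

```jsx
import { batch } from "solid-js";

// Subscribers are notified once, after both writes have been applied
batch(() => {
  setState("user", "name", "Grace");
  setState("user", "age", 36);
});
```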

This will update the name and age properties of the user object in a single transaction, ensuring that any subscribers to the state will only receive a single notification of the change, rather than one for each update.

RxJS interoperability

We can easily work with both Solid.js and RxJS, another popular reactive library, by using a couple of adapter functions. The reconcile method we just talked about can be used together with such subscriptions, similar to how services are handled in Angular.

From RxJS into Solid.js

We can turn any producer that exposes a subscribe method into a signal:
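A sketch using Solid's from helper (the interval source is illustrative):

```jsx
import { from } from "solid-js";
import { interval } from "rxjs";

// Anything with a subscribe method becomes a read-only signal;
// the subscription is cleaned up when the owning scope is disposed
const tick = from(interval(1000));

// Or build the producer manually: `set` pushes values to listening
// contexts, and the returned function is the cleanup
const time = from((set) => {
  const id = setInterval(() => set(Date.now()), 1000);
  return () => clearInterval(id);
});
```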

This directly handles subscription and cleaning up when the signal is dropped. We can define our signal by passing a function to track the value, and how to clean up. The set method emits the value to the contexts that are listening.

Turning our signals into observables

We can turn our signal into an Observable that exposes a subscription method, allowing it to act like a native RxJS observable.
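A sketch with a hypothetical name signal:

```jsx
import { createSignal, observable } from "solid-js";

const [name, setName] = createSignal("Ada");

// `observable` wraps the signal in an object exposing subscribe
const name$ = observable(name);
const subscription = name$.subscribe((value) => console.log(value));
```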

Next, by utilizing the from function provided by RxJS, we can transform our signal into a fully-fledged RxJS observable.
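Continuing the same hypothetical signal:

```jsx
import { createSignal, observable } from "solid-js";
import { from } from "rxjs";

const [name] = createSignal("Ada");

// RxJS's `from` upgrades the interop observable into a full RxJS
// Observable, with access to pipe() and the operator library
const rxName$ = from(observable(name));
rxName$.subscribe((value) => console.log(value));
```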

A Solid.js choice

Although it is relatively new, Solid.js has gained popularity among developers due to its unique features and exceptional performance. Compared to React, Solid.js provides useful tools out of the box and is as performant as frameworks like Svelte without the need for compilers. It is particularly suited for interfaces that require many updates to the DOM and is consistently fast even in complex applications handling real-time updates.

Solid.js offers a developer experience similar to React, but with cleaner methods and more choices. The library handles many different patterns and provides more transparency in code thanks to how scopes work natively in JavaScript. Unlike using hooks in React, there are no hidden behaviors when creating signals in Solid.js.

Using Solid.js with TypeScript solves many of the struggles developers face with complex applications made with React or Vue.js, reducing the time to market and time spent debugging issues with the VDOM. We would recommend it for any new project starting today.

Author: Federico Muzzo, Senior Front End Developer @ Bitrock

Read More