
Getting Started with React Push-based Architecture

When approaching the React world, using Redux or MobX for state management is almost automatic. And even when the library changes, the basic architecture doesn’t: it is always something similar to the Redux pattern, with reducers, actions, selectors, middleware, etc.

But is it possible to use a different architecture? Something with RxJS, as in the Angular world? After some research, it seems so. Let’s see in more detail what we are talking about.

First of all, we need to think outside the classic pull-based pattern and move to something new for those coming from the React world: a push-based architecture.

With data-push architectures, view components simply react to asynchronous data change notifications and render the current data values.

The library that allows us to manage the store in this way is Akita:

“Akita is a state management pattern, built on top of RxJS, which takes the idea of multiple data stores from Flux and the immutable updates from Redux, along with the concept of streaming data, to create the Observable Data Stores model.”

So basically, Akita enables us to easily build reactive, asynchronous, data-push solutions for our state management needs.

Another important concept is the one related to Facades. Facades are a programming pattern in which a simpler public interface is provided to mask a composition of internal, more complex component usages.

To build our application, we rely on Akita, RxJS, and React Hooks; nothing else is needed.

Let’s now consider a very simple example built on the ideas found in some articles.

In our case, we need a list of users and the ability to act on it through the classic CRUD functions.

Starting from the well-known create-react-app with the addition of TypeScript, we create a folder that will contain our entities; in this case, it will only have a "user" folder as a child.

Inside, we define a simple interface of our "user" entity in the model.ts file:
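A minimal sketch of what it could look like (the field names are illustrative):

export interface User {
  id: number;
  name: string;
  email: string;
}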

Let’s now start by initializing the store of our entity, creating a "UsersState" interface and then creating a "UsersStore" store by extending the Akita store, and finally exporting it:
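A sketch along these lines, using Akita’s entity APIs from the "@datorama/akita" package ("ActiveState" is included so the store can track a selected user, and the "@StoreConfig" decorator assumes decorator support is enabled in the TypeScript config):

import { ActiveState, EntityState, EntityStore, StoreConfig } from '@datorama/akita';
import { User } from './model';

export interface UsersState extends EntityState<User, number>, ActiveState {}

@StoreConfig({ name: 'users' })
export class UsersStore extends EntityStore<UsersState> {}

export const usersStore = new UsersStore();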

At this point, we can create services to manipulate the store, also relying on the methods that an Akita store provides.

This is where we can use all our knowledge of RxJS in order to be able to create more complex flows to act on the store.
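A sketch of such a service – the method names are illustrative, while "set", "add", "update", "remove" and "setActive" are methods the Akita entity store provides:

import { User } from './model';
import { usersStore } from './store';

export const userService = {
  loadUsers(users: User[]) {
    usersStore.set(users); // replace the whole collection
  },
  addUser(user: User) {
    usersStore.add(user);
  },
  updateUser(id: number, changes: Partial<User>) {
    usersStore.update(id, changes);
  },
  removeUser(id: number) {
    usersStore.remove(id);
  },
  setActiveUser(id: number) {
    usersStore.setActive(id);
  },
};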

Finally, through the "QueryEntity", we can take the whole store – or just a filtered part – and channel it into an observable stream of RxJS.
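For instance (the "users$" and "active$" names are illustrative):

import { QueryEntity } from '@datorama/akita';
import { UsersState, UsersStore, usersStore } from './store';

export class UsersQuery extends QueryEntity<UsersState> {
  users$ = this.selectAll();       // the whole collection as a stream
  active$ = this.selectActive();   // the currently active user as a stream
}

export const usersQuery = new UsersQuery(usersStore);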

Last but not least, we create a custom Hook that internally manages all the details regarding RxJS, Facades, and Akita.

First, we map and expose the services of our "userService" – in this case, all of them. Then, we create the internal state of our custom Hook. Finally, we build the selectors for "users" and "active" state changes and manage the subscriptions with auto-cleanup.
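Putting it all together, a sketch of the Hook (built on the store, service, and query sketched above):

import { useEffect, useState } from 'react';
import { User } from './model';
import { usersQuery } from './query';
import { userService } from './service';

export function useUsersFacade() {
  // Internal state of the custom Hook, mirrored from the Akita streams
  const [users, setUsers] = useState<User[]>([]);
  const [active, setActive] = useState<User | undefined>(undefined);

  useEffect(() => {
    // Selectors for "users" and "active" state changes
    const subscriptions = [
      usersQuery.users$.subscribe(setUsers),
      usersQuery.active$.subscribe(setActive),
    ];
    // Auto-cleanup of the subscriptions on unmount
    return () => subscriptions.forEach((sub) => sub.unsubscribe());
  }, []);

  // Expose the state along with the mapped services of our userService
  return { users, active, ...userService };
}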

Now our user entity should have everything needed. We import our custom Hook, and that’s it.

To play a little bit, let’s divide the application into several components in order to test it. The result? Well, it works!

And here’s the child component:
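Something along these lines – a hypothetical "UserList" child consuming the facade Hook:

import React from 'react';
import { useUsersFacade } from './useUsersFacade';

export function UserList() {
  const { users, active, removeUser, setActiveUser } = useUsersFacade();

  return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>
          <span onClick={() => setActiveUser(user.id)}>
            {user.name} {active?.id === user.id ? '(active)' : ''}
          </span>
          <button onClick={() => removeUser(user.id)}>delete</button>
        </li>
      ))}
    </ul>
  );
}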

Here’s how the application works in the browser:

Conclusions

Although this example is quite simple, the outcomes are pretty surprising. It was really easy – and also quite logical – to connect all the pieces to compose the state management and, as we have seen, no configurations (of any kind) were needed.

For those approaching an architecture like this for the first time, the greatest difficulty is certainly RxJS. To write simple services or queries, knowing the basics of RxJS may be enough; in large applications with complex services, however, a good knowledge of the technology makes a huge (positive) difference, really giving an edge. Furthermore, you need to be very careful about where and how you use the various facades in your application. With a push pattern, any state change triggers the React lifecycle in every component that uses our hooks; watching and controlling performance is thus very important.

Obviously, this is just the beginning: there is a world of things to say about Akita, RxJS, push patterns, etc., and it would take much more than one simple article to explore all of them.

The aim of this contribution was to give you just a little idea of this "new" architecture for state management with React. I hope I’ve hit the target.


Author: Mattia Ripamonti, UX/UI Engineer @Bitrock


Useful Resources:

1 – React Facade Best Practices

2 – React Hooks RxJs Facades

3 – Push Based Architectures with RxJs

4 – Managing State in React with Akita


The JAMStack Proposition

With the surge in popularity of JavaScript frameworks, Node, and container technologies, the past years have seen the rise of microservices as the leading pattern in the architecture of distributed applications on the Web; the lingua franca of these applications being, of course, APIs. Developers adopting these modern tools for frontend environments have, however, faced new challenges in search engine optimization, content rendering, and application serving compared to the common LAMP stack, where PHP does the bulk of the work and JavaScript only provides the interactions and dynamic elements.


From Client Side to Hybrid

While in the past using frameworks on the client side meant single page applications, using “hacks” such as the hashbang to provide navigation, in recent years the leading JavaScript frameworks embraced the hybrid approach in rendering, where both the server and the client would run the same virtual DOM and reconnect on the browser later, “rehydrating” the application on the client. This way, applications supported both the common navigation controls of the browser and provided accessibility to users with older browsers or even no JavaScript, since the page is readily available on the server. This provided improved performance on the first load, and supported the traditional spiders from search engines.

However, this meant:

  • having an improved developer experience as the entire application uses only JavaScript and HTML, with a single code base…
  • …but Node doesn’t actually support the same modules and feature set as a browser
  • taking a hit on a number of metrics, such as Time To First Byte and Time To Interactive, as the code runs on both ends
  • relying on an increasingly complex deployment on the server compared to traditional shared hosting
  • using “brute force” solutions to prerendering and caching applications, such as headless browsers


Limits of CMSs

Many modern web applications are still nothing more than glorified lists of contents, sometimes enabling modest interactions (such as filtering or sorting contents), providing taxonomies, and interacting with limited components, such as forms for comments or the search bar. As most of the content is static, a complete frontend solution is eventually considered an added cost compared to existing, monolithic CMSs; they still offer big communities, a plethora of themes and plugins, and well-refined interfaces for content creators.

Yet, these CMSs still do not provide the same speed or developer experience as the applications written for Node, which can be started on any machine with nothing more than the Node runtime and have very fast cycles for changes. Moreover, they usually have limited support for components, or restrict them to the content side, while requiring the developer to code additional custom parts for the rest of the page, often in a different language than the one spoken by the browser. Interaction is still tacked on, with JavaScript unable to cross the boundaries of the single static page.

Last but not least, popular CMSs such as WordPress come with a larger attack surface, as they are both incredibly popular and the single endpoint for both the backend and the frontend: hosted on the same machine, with the same address for both, and subject to varying degrees of load which might need horizontal scaling, creating issues for the cache.


Enter the Static Site Generators

Even if CMSs can definitely render pages quickly and avoid the lengthy reconciliation with the browser’s context, they still require a server and dedicated support, with a plurality of codebases and a bad experience for the developer, who does not have everything at hand on the local machine, or might not even have the required knowledge to deal with all issues in such a tightly coupled project.

Static websites, instead, have no server loading times; require no session on the server, no instance of Linux running, and no real requirements other than a web server to deliver the resources. They can even live off incredibly cheap storage, such as S3 from Amazon.


A Modern Solution to Static Content

While traditional frameworks such as Jekyll or Hugo are fast and still a good solution, the new frameworks that have entered the space in the last years (such as Gatsby, Gridsome, Nuxt and Next.js) have taken static sites into the future. Learning the lessons from hybrid applications and SPAs, they rely on improved tooling running on Node; modern web frameworks are now first class – improving both UX and developer experience. They feature:

  • complete SEO support – as pages are just HTML and CSS as before
  • no need for a running service; deploy simply delivers the static files to the CDN
  • improved performance on all metrics, and well engineered solutions for smaller JavaScript bundles and pipelines for other resources
  • the same complex interactions of SPAs, such as transitions and persistent state, and the same tools (like Webpack, Parcel, etc.)
  • support for content from many sources: static files, version repositories, headless CMSs, APIs and more

Frameworks such as Gatsby and Gridsome take a page out of the CMSs’ playbook by offering solutions to different needs in the form of themes and plugins, yet retaining a single cohesive codebase with dependencies handled through the very same JavaScript ecosystem, already familiar to developers used to working with modern frameworks. They also come with configurations for older browsers, solving the puzzle of bundlers, and simple commands to either build the site or start a development server.

The reduction in complexity is significant. A streamlined frontend solution enables developers to focus on the core experience of users instead of wasting time on configuration. The absence of an actual service running the pages allows the website to run just about everywhere, letting backend developers focus on the business logic, with APIs in between representing the contract between the two sides. DevOps have one less thing to be concerned about. It is the JAMstack: JavaScript, APIs, and Markup.


Challenges of the JAMstack

The massive improvements brought by the stack come with issues that many other solutions don’t face, by the very nature of static content; its strong points are thus also its main pain points.

First of all, static content is never entirely static: contents will probably be updated over time and might even need to be real time. While recreating the bundle every time the content changes is the quickest solution, it takes time – more than real time, at least. There have been big improvements in the speed of the tools when generating this bundle – it used to take far longer and scale only to a few thousand pages – and with solutions such as partial builds the cost can be mitigated further. It is still not real time, though; real-time content can only be handled through dynamic components on the client side.

Secondly, by relying on delivery through services such as Netlify, S3, Vercel and so on, we’re leaving it to a middleman to handle security and performance optimization for static files. We can, of course, also do it on our own.

Third and last – but probably the trickiest part – by having static files we move the concern for sessions and authentication/authorization from the server to external microservices. With careful reliance on Service Workers, and/or solutions such as Firebase, we can solve this. The JAMstack also strongly favours a serverless approach to server interactions: by writing simple functions in Node, deployed with the same hands-off approach from the same codebase (possibly even sharing code), we can handle just about everything we used to handle with traditional AJAX requests.

Both the serverless approach and the delivery of static files are a big reduction in costs compared to deploying virtual servers, as we only use what we need and scale naturally as more resources are required. Rarely accessed contents or functions, however, do require some extra time, as the provider has to “boot up” the context of the functions for us.


A Common Use Case: a Blog and its Pages

Static websites are really suited for delivering the contents of an editorial product. Text and images are usually not updated in real time, and the business requirements are more often aligned with the value proposition of static site generators:

  • A safe environment with low attack surface for the public.
  • Fast performance on all metrics, to boost the SEO and mobile performances.
  • Low costs in development, deployment and maintenance.

This environment has been, for the past decades, very much dominated by WordPress and its themes, which provide a good enough solution for most companies and, since they are so commonly used, do not force editors to relearn how to do things. As WordPress has developed its own API over the last decade, we can rely on it to provide the base for our contents and access the existing ecosystem and know-how, deploying it on a low-cost solution such as WordPress.com or perhaps a small VPS. All we really need is a safe way to get our content from our install to our static site generator, which will generate the pages at build time by calling the APIs. We could just as easily deploy a headless CMS or even a completely custom solution – on the frontend side it doesn’t really matter much.

Our choice for a static site generator can also be decided according to the skills of the developers working on the project. On the React side, both Gatsby and Next.js are very popular solutions, with Gatsby having an already established set of plugins and starters, very similar to WordPress, that can speed up development. On the Vue side, Vuepress and Gridsome are two common solutions: the first being the easiest of the two in terms of features and approach to content (by using Markdown files), while the second is more similar to Gatsby, providing plugins and starters. Both Gridsome and Gatsby, in fact, use GraphQL as a lingua franca for contents, so that we can integrate many sources and use them in a common way.

Last but not least, we can decide where to deploy our contents. There’s a huge number of possibilities, from CDNs to storage (such as S3) to the many services that pride themselves on simplicity, like Netlify and Heroku. What we really need is a channel to deliver the bundle of contents to our users; whenever we update our contents, we simply call an API to trigger the build process again and reload the files.


An Example with Gatsby

To build an example solution, we’re going to use DigitalOcean to host our WordPress installation. Our generator will be Gatsby for this specific example, but the concepts are quite similar for many of them. Note that we will just be using a function to build the pages; many of these generators offer integration with external CMSs and might use GraphQL and such to do the queries – this is just a generic example. To begin, we created a droplet on DigitalOcean using their image for WordPress on Ubuntu 18.04. You can find more information about this on their website, as their wizard will do the bulk of the work for you. Don’t forget to follow the installation of WordPress itself. For this example, we’re not going to configure a domain for our install, but you definitely want to use a proper configuration. Many hosting providers also offer simple solutions to host applications such as WordPress, and will do the job nicely.

Now that our WordPress is set up, we can start working on our frontend. First things first, we create the project using the command line interface for Gatsby.

npm install -g gatsby-cli

gatsby new example-wordpress

This creates our base project using the default starter kit from Gatsby. Inside "example-wordpress" we can find the needed modules already preinstalled, some configuration for styling the code (with Prettier), and the source code folder ("src"); the latter contains both the folder for the React "components" and the folder for the "pages". Files inside the "pages" folder will be accessible by default through their filename (for example, "page-2" will be located at "example.com/page-2").

What we want to do is hook into the build process of Gatsby and generate our pages from the WordPress API. You can find more information about the APIs in the REST API Handbook, but the gist of it is that we’re requesting the posts resource using the correct endpoint. You can preview the available resources by going to the page "example.com/wp-json"; we will be accessing "wp/v2", under which we have the editorial contents, and querying for the posts. Our URL will be something like "example.com/wp-json/wp/v2/posts".

Now we just have to pull it inside Gatsby and build the pages. To do this, we open our project and navigate to the "gatsby-node.js" file that should be in the root. We install the module "node-fetch", so that we have an easy interface to get our resource:

npm install --save node-fetch

Then we put our import at the top of the file:

const fetch = require('node-fetch');

Next, we hook into the "createPages" step of Gatsby. In order to do so, we export from our file an asynchronous function called "createPages", which receives an object with the "actions" available to us and a "reporter" object that can tell Gatsby if something went wrong. Inside this function, we fetch our posts and create a page for each of them.
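A sketch of what this could look like (the WordPress URL and the path scheme are placeholders):

const fetch = require('node-fetch');
const path = require('path');

exports.createPages = async ({ actions, reporter }) => {
  const { createPage } = actions;

  const response = await fetch('http://example.com/wp-json/wp/v2/posts');
  if (!response.ok) {
    // Tell Gatsby that the build should fail
    reporter.panicOnBuild('Error while fetching the posts from WordPress');
    return;
  }
  const posts = await response.json();

  // One page per post, rendered through our template
  posts.forEach((post) => {
    createPage({
      path: `/post/${post.slug}`,
      component: path.resolve('./src/pages/post.js'),
      context: { post },
    });
  });
};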

Let’s create the template page for the blog posts. We create a file in the "pages" folder named "post.js", and access the post data by reading it from the props of our page component.
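A minimal version of the template, reading the post from the "pageContext" prop that Gatsby fills with the "context" we passed to "createPage" (WordPress returns pre-rendered HTML for title and content):

import React from 'react';

export default function Post({ pageContext }) {
  const { post } = pageContext;

  return (
    <article>
      <h1 dangerouslySetInnerHTML={{ __html: post.title.rendered }} />
      <div dangerouslySetInnerHTML={{ __html: post.content.rendered }} />
    </article>
  );
}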

We should now have a corresponding page in our frontend:

This is of course just the beginning.

  • To host our content online, we could rely on something like Netlify. We just push our project to Github, then add Netlify as an application.
  • To trigger the rebuilding of our contents, we could for example make a call using cURL to our services on the "save_post" hook.
  • We could build our taxonomy pages, either by using the API from WordPress or the posts JSON. To better integrate it into Gatsby (or Gridsome perhaps), we could add our posts as GraphQL nodes.
  • A common criticism of this kind of solution is that editors don’t really have an idea of how the content will end up looking on the frontend. We can build a simple WYSIWYG editor on the frontend side by relying on a good library like Draft.js. Of course this also requires authentication and so forth. We could also share the same CSS and major HTML between both the WordPress environment and Gatsby.
  • We could lock our WordPress APIs behind simple authentication using either Apache or Nginx, as they’re quite common in this kind of setup. Logging in through Node is trivial.


Conclusions

Static site generators enable us to provide a good user experience and good performance compared to a common monolithic CMS such as WordPress. We can integrate different sources and create very custom solutions using modern scaffolding and tooling. However, this does require quite a bit more effort, and the disconnect between frontend and backend can be a pain point for our editors.



Author: Federico Muzzo, UX/UI Engineer @Bitrock


An Introduction to "Deno"


What is Deno?

“A secure runtime for JavaScript and TypeScript.” This is the definition contained in the official Deno website.

Before going into details, let’s start by clarifying the two main concepts included in this definition.


What is a runtime system?

As for Deno, we can say that it’s what makes Javascript run outside the browser, adding a series of features that are not available in the Javascript engine itself.


What is Typescript?

Typescript is a superset of Javascript, which adds a series of features that make the language more interesting. Its main features are:

  • Optional static typing
  • Type inference
  • Improved Classes
  • Interfaces

At this point, you may find a similarity between Node and our Deno definition, since they seem to do almost the same thing and are both built upon the V8 Javascript engine.


So Why Deno?

“Deno” – as Ryan Dahl, creator of both Deno and Node, said – “is not by any means a rival of Node”. Simply, he was no longer happy with Node and decided to create a new runtime to make up for its “mistakes” and shortcomings (while also adding a bunch of new features).



Getting closer to Deno

Let’s now discover what makes Deno so promising and interesting, along with some key differences with Node.js:


Deno supports out-of-the-box Typescript

Deno can run Typescript code without installing additional libraries, such as ‘ts-node’.

It is possible to create an app.ts file and run it with the simple command “deno run app.ts”, without any additional step.
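For instance, a trivial app.ts like this runs as-is:

// app.ts – plain Typescript, no transpilation step required
const greet = (name: string): string => `Hello, ${name}!`;
console.log(greet('Deno'));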


ES Modules

Deno drops CommonJS modules, which are still used in Node.js, and embraces the modern ES modules that are the defined standard in the Javascript world and are mostly used in front-end development.

Deno borrows from Golang the possibility to import modules directly from a URL.
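For example (the std version pinned in the URL is just an illustration):

// No npm install: the module is fetched and cached on first run
import { assertEquals } from 'https://deno.land/std@0.70.0/testing/asserts.ts';

assertEquals(1 + 1, 2);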


Security First

Deno implements a philosophy of “least privilege” when it comes to security. To run a script, indeed, you need to add the appropriate flags in order to enable certain permissions.

Here’s the list of flags that can be used:

  • --allow-env: allow environment access
  • --allow-hrtime: allow high-resolution time measurement
  • --allow-net=<allow-net>: allow network access
  • --allow-plugin: allow loading plugins
  • --allow-read=<allow-read>: allow file system read access
  • --allow-run: allow running subprocesses
  • --allow-write=<allow-write>: allow file system write access
  • --allow-all: allow all permissions (same as -A)
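For instance, a script that needs network access has to be launched with the corresponding flag (the file name here is just a placeholder); without it, the script is denied network access at runtime:

deno run --allow-net server.ts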


Standard Libraries

These libraries (click here to find out more) are developed and maintained by the core team of Deno.

Many other languages – Python included – share this concept of having a library of reference that is stable and tested by developers who maintain it.

Since Deno is at an initial stage, the list is still short – but certainly there will be further implementations in the future.


Built-in Tools

When it comes to Node.js, if you want specific tools, you have to install them manually; furthermore, they are essentially third-party tools, not maintained by the Node team.

Deno, instead, embraces another philosophy: it offers a set of built-in tools to improve the development experience. This creates a standard, which makes Deno less dispersive than the Node ecosystem.

Here’s a partial list of them, along with the link to the relevant documentation for a deeper understanding of the topic:

  • fmt: a built-in code formatter (similar to gofmt in Go)
  • test: runs tests
  • debugger
  • bundler
  • documentation generator
  • dependency inspector
  • linter


Compatibility with Browser API

The Deno API was created to be as compatible as possible with the Browser API, in order to be able to implement any upcoming feature easily. This is one of the main “issues” of Node, since it uses an incompatible global namespace (“global” instead of “window”). This is the reason why an API like Fetch has never been implemented in Node.


Style Guide to building a Module (Opinionated Modules)

Unlike Node, Deno has a set of rules that a Developer should follow in order to publish a module. This avoids the creation of many different ways to reach the same output, thus creating a standard – which is a main principle within the Deno ecosystem. To find out more about the topic, click here.


Where is package.json?

As seen before, in Deno there is no package.json in which to put all the dependencies. The answer, then, is deps.ts.

deps.ts is used for two main reasons:

  • to group all the dependencies needed for the project;
  • to manage versions.

This is a sort of replication of the package.json present in Node.js, but many Developers no longer consider it best practice because of the decentralized nature of Deno. Indeed, they are now experimenting with better ways to organize the code, which might lead to a different way of managing modules. Let’s see how it evolves in the future…

Here’s an example of a deps.ts file:
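Since the actual content depends on the project, here is just a representative sketch (the module names and versions are illustrative):

// deps.ts – one place to declare (and pin) external dependencies
export { serve } from 'https://deno.land/std@0.70.0/http/server.ts';
export { assertEquals } from 'https://deno.land/std@0.70.0/testing/asserts.ts';

// elsewhere in the project:
// import { serve } from './deps.ts';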


What about locking the dependencies?

A file called lock.json is needed in order to lock them. By using the following command, it is possible to cache and assign a hash to every dependency, so that no one can tamper with them:

deno cache --lock=lock.json --lock-write src/deps.ts

For further explanation about integrity check and lock files, please have a look at the official documentation.


Server Setup

Last but not least, here’s a quick but interesting comparison between a server setup in Node and in Deno.

Node server:

An example of Node server
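Something along these lines – a minimal sketch using Node’s built-in http module (port and message are illustrative):

const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
});

server.listen(3000);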

Deno Server:

An example of Deno server
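And its Deno counterpart – a sketch based on the std http module as it was in the early Deno 1.x releases (the version pin is an assumption; run it with “deno run --allow-net server.ts”):

import { serve } from 'https://deno.land/std@0.70.0/http/server.ts';

const server = serve({ port: 3000 });

for await (const req of server) {
  req.respond({ body: 'Hello World\n' });
}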

As you can see, the snippets are pretty similar, but with fundamental differences that sum up what we discussed above. More specifically:

  • ES modules;
  • decentralized imports from a URL;
  • next-gen Javascript features out of the box;
  • the permission needed to run the script.



Future Improvements on the Roadmap and Conclusions

One of the key features on the roadmap is the possibility to create a single executable file, as happens in many other languages (like Golang, for instance) – something that could revolutionize the Javascript ecosystem itself.

The compatibility layer with the Node.js stdlib is also still in progress; this could lead to faster development of the runtime system.

To sum up, we can say that Deno is in continuous evolution and probably will be the next game-changer of the Javascript ecosystem. The foundations for this runtime are in place and it is already a hot topic, so… keep an eye out!



Author: Yi Zhang, UX/UI Engineer @Bitrock


Bitrock_Rooms

Our solution to guarantee a safe working environment

One of the most recent challenges that we, as the UX/UI Team, have faced at Bitrock is the creation of a brand-new web app to solve a contingent issue inside the company.

The brief came suddenly, with few details about the project: what we had was a problem to solve, and a strict, dynamic deadline.


The challenge

Our mission consisted in delivering an app whose main goal was to manage the booking of the desks available in our Milan HQ office. The social distancing measures caused by the Covid-19 pandemic, indeed, had forced the Bitrock Team to reduce the capacity of the rooms. Our task was thus to provide a booking platform that would allow our colleagues to book their desks in advance, even from home: in this way it would be possible to ensure that employees keep sufficient space from each other, providing a safe working place.

The rules we had to follow when designing the app were simple: every room would have a max capacity (of desks) to be respected, and a user would be able to book just one desk in a room per day. Another feature we were required to implement was the ability to see the bookings made by other colleagues in real time, in order to have better feedback on the current room capacity.

On the UX/UI side, we had two kinds of views in mind: a daily view and a weekly one (a feature we were asked to add in order to ease booking over several days).

The functional analysis was ready, the deadline was clear. We thus started to work.


The project

At first we created wireframes: they were simple, and useful for getting a better understanding of the project.

As Backend and Database Platform, we chose to rely on Firebase and Firestore in order to speed up the implementation.

Firebase was a good fit for every need that we were trying to respond to:

  • Oauth authentication out of the box
  • A real-time database perfectly integrated with the RxJS library

Every decision we made was based on the concept of “Reactive Programming”, in order to achieve a data stream able to facilitate the automatic propagation of data changes.

For the selection of the frontend framework, the choice was easy: Angular, which is well integrated with Firebase in every aspect and synergises with RxJS (A/N: for those who are not familiar with the Angular ecosystem, RxJS is a library that embraces the concept of reactiveness with a functional approach) – everything was made reactive out of the box.

To sum up, here’s the list of the libraries we chose:

  • Angular
  • RxJs
  • AngularFire – Firebase integration for Angular
  • Moment.js – a library to manage dates
  • Angular Material – a UI library with premade components

Our philosophy was to have the right balance between best practices and productivity, while respecting the limited available time.

The first point to tackle was the data schema to represent the booking of the desks. We thus created a flat structure, where the main keys were:

  • date
  • room
  • user

Here’s the schema that we used:
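(A representative sketch of the document shape; the field types are assumptions.)

export interface Booking {
  date: string;   // the day of the booking, e.g. "2020-10-21"
  room: string;   // the room identifier
  user: string;   // the colleague who booked the desk
}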

We then started creating components and services using the tools that Angular provides, for instance the CLI commands.

Our choice not to use a state management library like NgRx was dictated by the fact that this was a rather simple project with a limited number of components.

The tasks related to the daily view were carried out quickly and smoothly: everything revolved around the RxJS library and the communication with Firebase.
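As an illustration, the daily-view stream could be built along these lines (the collection name and service shape are assumptions, based on the AngularFire API of the time):

import { Injectable } from '@angular/core';
import { AngularFirestore } from '@angular/fire/firestore';
import { Observable } from 'rxjs';
import { Booking } from './model'; // the Booking interface sketched above

@Injectable({ providedIn: 'root' })
export class BookingService {
  constructor(private firestore: AngularFirestore) {}

  // Emits the bookings of a given day, and re-emits on every change in Firestore
  bookingsByDate(date: string): Observable<Booking[]> {
    return this.firestore
      .collection<Booking>('bookings', (ref) => ref.where('date', '==', date))
      .valueChanges();
  }
}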

Even the real-time update of the data from Firestore was great: it was so easy to implement… like magic! The implementation of the weekly view was a bit more challenging, but we managed to carry it out using our components.

The last part of the project covered the styling aspects: we decided to use Amber (Bitrock design system) as reference, in order to create a web app with the company “look and feel”.


Conclusions

This project represents the perfect playground for those situations that envisage a sudden problem that needs to be solved in a very short amount of time. During its different steps we had the possibility to reinforce our team-work spirit, as well as develop a proactive attitude. Everything was indeed designed, delivered and implemented very well thanks to the effort of the team as a whole, and not just because of individuals.

Bitrock Rooms is now used every day by the Bitrock Team as one of the solutions to face Covid-19 challenges, helping create a safe working environment and delivering a smart and smooth experience to users.


Bitrock Rooms’ daily view interface:

Bitrock Rooms’ weekly view interface:


Bitrock Rooms’ mobile view interface:



Authors: Marco Petreri, UX/UI Engineer @Bitrock – Yi Zhang, UX/UI Engineer @Bitrock
