
Rust in Web Development (Also): Efficient API Layer Based on Rust Foundations

by

Sándor Apáti

7 min read

March 22, 2024

The Rust programming language is often viewed as the successor to C/C++, poised to eventually dominate the development of safety-critical kernels and low-level applications. Its efficiency and versatility, however, extend far beyond systems programming, positioning it for success in many more fields. Rust holds significant potential for developing web applications, especially API layers, where speed is of great importance.

In the development of web and mobile applications, creating the frontend applications that run in the user’s browser or on their mobile phone represents only half of the work. During operation, these interfaces interact with one or more business systems, giving rise to various middleware or backend for frontend (BFF) layers. Several common issues arise when trying to integrate a frontend application with a system primarily organized around business processes and logic.

One fundamental issue is that frontend developers prefer to work with a unified API, such as REST or GraphQL, which returns only the necessary data, in the required form, for a particular function of the frontend application. In contrast, business systems expose varied, overly “chatty” APIs, and when multiple systems must be integrated, even their protocols may differ. Performance issues also frequently arise because business systems are often not designed to handle tens of thousands of requests per second, necessitating some form of data caching.

Another challenge arises when data and processes from different systems need to be combined and standardized. For example, different systems may provide user identification, flight information, and reservations, while offers from a CRM system come from a fourth source, and these link to contents from a CMS system. Managing this complexity on the frontend is not advisable, not least for security reasons; it’s crucial that the business logic remains hidden. This is where the BFF layer (or middleware) plays its role, acting as an intermediary between the frontend and business systems.

What Makes a Good BFF?

From a customer experience perspective, the most critical expectation of a BFF layer is speed. Technically, this means operating with the shortest possible response time and utilizing the available hardware resources as efficiently as possible. It’s vital that the BFF layer minimally increases the latency in serving requests from the customer to the business systems.

From a business standpoint, security is also crucial: the BFF layer itself should be as secure as possible, and it should also protect the business systems from both overload attempts (e.g., through efficient caching, limiting the number of concurrent requests) and malicious activities (e.g., by checking and pre-filtering incoming data).

The first crucial decision in developing a BFF layer is selecting the programming language. PHP, Python, Java, C#, Go, and JavaScript / TypeScript are all viable options, each with its advantages and disadvantages. Rust has recently joined this list, and it may seem a bold choice: a relatively new, low-level language that is not particularly easy to learn.

However, its popularity is rapidly growing in areas where performance and security are critical: cloud providers base their fundamental systems on it (see Amazon Firecracker), it’s a common choice in blockchain system development, and it’s beginning to infiltrate operating system development. In the past few years, many have started using it for web development as well, where it has developed a particularly robust ecosystem.

Challenges of a Sports Betting System

At Mito Digital, the final push towards Rust adoption came from implementing a sports betting system. This system needed to manage rapidly changing data for tens of thousands of betting events. The business system couldn’t handle the thousands of requests per second from users, as it wasn’t designed for this. This led to the need for a unique web application that stores sports event data in memory and serves user requests directly from there.

We only request the complete event database from the business system once a day and then the changes every few seconds. Our application receives the data in XML format, processes it, stores it in in-memory data structures, and indexes it from several perspectives for efficient searching: sometimes using simple B-Tree indexes, other times using a full-text search engine (tantivy).

Interestingly, the first version of this system was developed in Go. We faced two main issues: slow XML processing and high memory usage, pushing the limits of our available hardware.

The new Rust-based implementation solved both issues: both the XML processing time and memory usage were reduced to a fraction.

Getting acquainted with Rust was relatively quick, as its basic structures are similar to previously known languages (Go, C#, PHP, JavaScript). The real novelty was the borrow checker. This compile-time check ensures Rust’s two unique features: safe memory management without a garbage collector and essentially risk-free concurrent programming (fearless concurrency). Getting used to the borrow checker took some time, but once we overcame this, we could proceed with development without significant problems.

Rust allows for both synchronous and asynchronous programming. The asynchronous programming keywords async/await might be familiar from C# and JavaScript, and they work similarly here. Several async runtime implementations exist for Rust; we chose tokio-rs, as the Warp and Axum web frameworks we prefer are built on it. Thanks to asynchronous programming, the application only needs to run a few threads concurrently: one thread handles specific background tasks (such as regularly downloading data changes in the sports betting case), and roughly one thread per CPU core is launched by the tokio runtime to asynchronously serve incoming web requests.

This Is Where ArcSwap Comes In

In a multi-threaded environment, synchronizing access to shared data can be a significant issue. If multiple threads attempt to modify the same memory area simultaneously, the result can be data corruption, a crash, or a serious security vulnerability. Most systems avoid this problem using locks and mutexes, ensuring that only one thread works with the data structure at a time.

In our case, ArcSwap allowed us to handle these race conditions mostly in a lockless manner, enabling threads serving client requests and background tasks to work without blocking each other, limited only by the available CPU performance.

ArcSwap is a data structure referencing another data structure via a pointer, which can be swapped out using atomic operations. Once packaged in an ArcSwap, the referenced data structure becomes read-only, allowing multiple threads to safely read it concurrently without locks. When the data needs to be modified, a cheap copy-on-write duplicate is made, the necessary modifications are performed on the copy, and the pointer in the ArcSwap is then swapped to the new version in a single atomic CPU operation. The old version is freed as soon as the last reader drops its reference, without waiting for a garbage collector run. This is a significant reason for the drastic reduction in memory usage.

Beyond ArcSwap, a plethora of ready-made concurrent data structures can be found on crates.io, the central repository for Rust’s package manager, Cargo (similar to npm at https://npmjs.org for JavaScript, or Composer at https://packagist.org for PHP). For example, crossbeam provides communication channels similar to Go’s channels, dashmap provides a concurrent HashMap, and evmap offers a lock-free, eventually consistent HashMap implementation.

In statically typed programming languages, a significant problem can be the need to produce a lot of boilerplate code during implementation. This can be the case when converting a data structure to and from JSON, or when linking a REST API endpoint with the URL routing layer. Most languages use some form of annotation to eliminate this, from which boilerplate code can be generated at runtime through reflection.

Rust addresses this problem with macros, which are expanded into Rust code at compile time. This makes the process significantly simpler and more comfortable than, for example, Go’s go generate solution, and avoids the runtime overhead of reflection. With macros, for instance, converting a Rust data structure to and from JSON can be accomplished with a single derive macro (see serde and serde_json). Macros greatly simplify many routine operations in Rust, allowing developers to focus on solving the task at hand.

Observability is crucial in a microservices-based system. The tracing library from the tokio project makes both logging and tracing easy to implement. Since the language is statically compiled, there is no dynamic runtime instrumentation as in Java or C#, but macros can insert the necessary code for logging and metrics with minimal effort. The collected data can be forwarded to almost any Application Performance Monitoring (APM) tool through various adapters: log data to Elasticsearch or Grafana Loki, tracing spans to any OpenTelemetry-compatible collector, and metrics to Prometheus.

Rust’s major advantage in Kubernetes or serverless environments is the minimal runtime dependencies of the resulting executable.

If compiled for a statically linked musl libc environment, the container needs to contain only the executable and some additional configuration (e.g., timezone, locale data), making the entire container only a few MB in size. The language’s runtime overhead is very small, and a Rust application starts up in moments compared to a .NET or Java application (useful, for example, in the case of AWS Lambda cold starts).
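The musl build described above might look like this in practice; the target triple is the standard one for 64-bit Linux, while the binary name is a placeholder:

```shell
# Build a fully static binary against musl libc (binary name is hypothetical).
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl

# The result has no dynamic library dependencies, so it can be dropped
# into a minimal (even FROM scratch) container image:
#   ldd target/x86_64-unknown-linux-musl/release/myapp
#   => "not a dynamic executable"
```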

How Fast Is It?

If the response can be served from an in-memory data structure and doesn’t require generating a large JSON payload, the application can serve tens of thousands of requests per second with a response time of a few milliseconds, comparable to serving static files from a web server. If producing the response is more complex or requires a larger JSON payload, the rate might drop to a few thousand requests per second on a 4 vCPU machine, with response times in the 10-100 ms range. If the request cannot be served from memory, the response time of the called backend service becomes the determining factor, not the Rust-based application.

The number of concurrent requests is not a problem due to asynchronous operation: each incoming connection consumes only a minimal amount of memory, and running out of file descriptors for sockets is likely to be a problem before memory usage becomes an issue.

Overall, our experience with Rust has been very positive, and it’s not as difficult to use as we initially feared.

There are still shortcomings: the language is young, so there are significantly fewer mature tools available than for, say, C# or Java. Finding Rust developers is also not easy; we tend to train backend developers from other languages in-house. Fortunately, the language’s popularity is rapidly growing according to the TIOBE index and GitHub data, so these issues should resolve over time.

Sándor Apáti works at Mito Digital as a Software Architect. Over the past 25 years, he has gained significant experience in backend and system development, with his professional focus besides Rust being on DevOps and the cloud. He is a certified AWS Solutions Architect Professional and Advanced Networking Specialist.


