Posts about Backend

Backend Cloud DevOps

Cross-Cluster Traffic Mirroring with Istio


“The price of reliability is the pursuit of the utmost simplicity.” — C.A.R. Hoare, Turing Award lecture

Have you ever enthusiastically released a new, delightful version to production, only to suddenly start hearing a concerning number of notification sounds? Gets your heart beating, right? After all, you didn’t really expect this to happen, because it worked in the development environment. This “But it worked in the development environment!” …
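In Istio, traffic mirroring is configured on a VirtualService: live traffic is routed to the stable subset while a copy is sent fire-and-forget to the new version. A minimal sketch (the `reviews` service, the `v1`/`v2` subsets, and the 10% sample rate are illustrative, not from the post):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1      # all real traffic still served by v1
          weight: 100
      mirror:
        host: reviews
        subset: v2          # v2 receives mirrored copies only
      mirrorPercentage:
        value: 10.0         # mirror 10% of requests
```

Responses from the mirrored subset are discarded, so users never see errors from `v2` while you watch its logs and metrics.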

Read the post
Backend DevOps

ElasticWars Episode IV: A new field


On a normal day, we ingest a lot of data into our ELK clusters (~6TB across all of our data centers). This is mostly operational data (logs) from different components in our infrastructure. It ranges from purely technical information (logs from our services) to data about which pages our users are loading (an intersection of business and technical data). At trivago, we use Kafka as a central hub for moving data between our systems (including logs).

Read the post
Backend Engineering Culture

Accommodation Consolidation: How we created an ETL pipeline on cloud


Imagine you go to your hotel for check-in and they say that your dog is not allowed, even though the website clearly states that it is! trivago gets information about millions of accommodations from hundreds of partners, and that information is constantly being updated. There are many differences not just in the data format, but also in the data itself. There can be many discrepancies in the information, and consolidating it can be a very complex process.

Read the post
Backend DevOps

Why We Chose Go


To the outside, trivago appears to be one single software product providing our popular hotel meta search. Behind the scenes, however, it is home to dozens of projects and tools that support it. Teams are encouraged to choose the programming languages and frameworks that will get the job done best. Only a few restrictions are placed on these decisions, primarily long-term maintainability. As a result, trivago has a largely polyglot code base that fosters creativity and diverse thinking.

Read the post
Backend Data Science Engineering

Getting Ready For The Big Data Apocalypse


trivago Intelligence was born in 2013 with two main objectives: first, to provide bidding capability to the advertisers listed on trivago, and second, to provide them with metrics related to their own hotels, like clicks, revenue, and bookings (typical BI data). The project faced a wave of inevitable data growth, which led to a refactoring process that taught the team a great deal. Expecting it to be useful for other teams facing similar challenges, this article describes why the team started a full migration of technologies, how we did it, and what came out of it.

Read the post
Open Source Backend

Automation-First Approach Using the Karate API Testing Framework


Adopting an automation-first mindset is the first step toward reducing manual and repetitive work. Thinking this way enables us to move faster and more efficiently. It unburdens us from mundane, repetitive tasks, allowing us to focus on solving problems and creating value in the software development life cycle. The first step, then, is to find a tool that helps us write automated tests faster and is easy to maintain.
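Karate expresses API tests as Gherkin-style feature files, so an HTTP check reads almost like plain English. A minimal sketch, assuming a hypothetical `https://api.example.com/hotels` endpoint (not from the post):

```gherkin
Feature: hotels endpoint smoke test

  Scenario: list hotels and validate the response shape
    Given url 'https://api.example.com/hotels'
    When method get
    Then status 200
    # '#string' and '#number' are Karate fuzzy matchers:
    # they assert the type without pinning an exact value
    And match each response == { name: '#string', rating: '#number' }
```

Because assertions like `match` and fuzzy markers such as `'#string'` are built into the DSL, a test like this needs no Java glue code to run.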

Read the post
Monitoring Backend DevOps

Nomad - our experiences and best practices


Hello from trivago’s performance & monitoring team. One important part of our job is to ship more than a terabyte of logs and system metrics per day, from various data sources into Elasticsearch, several time-series databases, and other data sinks. We do so by reading most of the data from multiple Kafka clusters and processing it with nearly 100 Logstash instances. Our clusters currently consist of ~30 machines running Debian 7 with bare-metal installations of the aforementioned services.

Read the post

We're Hiring

Tackling hard problems is like going on an adventure. Solving a technical challenge feels like finding a hidden treasure. Want to go treasure hunting with us?

View all current job openings