CASE STUDY

How Getaround uses Porter to manage Kubernetes clusters serving traffic across 8 countries

Shankar Radhakrishnan
April 16, 2024
4 min read

Getaround (NYSE: GETR) is a car-sharing startup that operates a marketplace connecting car owners with people looking to rent vehicles for short periods. Using the mobile app, users can find, access, and rent nearby cars instantly, 24/7. The app also supports keyless entry, making the entire rental experience seamless. Getaround aims to promote resource efficiency by increasing access to transportation and reducing the need to own cars, thereby minimizing cars' overall environmental impact. The startup was founded in 2009 and went public in 2022.


We interviewed Getaround’s Site Reliability Engineering (SRE) team, headed by Laurent Humez, to see how Porter helps the company streamline Kubernetes operations and manage their infrastructure with a small SRE team.

An experienced Site Reliability Engineer

Laurent has been around the DevOps space for most of his adult life. After studying computer science, he worked at multiple companies that managed infrastructure in drastically different ways - from hosting on traditional PaaS providers like Heroku to legacy internal IT departments running on-premises servers, complete with hypervisors and virtual machines. As time went on, he grew dissatisfied with how infrastructure was often handled at these companies, so he leaned further into DevOps work over pure software development.

He began his DevOps journey hosting on barebones servers that he’d access via secure shell (SSH). He then learned more about containerization through Docker, container orchestration and management through Kubernetes (K8s), and more abstracted cloud-based solutions like serverless. Simply put, Laurent had plentiful experience managing infrastructure across the whole gamut of solutions available.

Getaround’s infrastructure before Porter

Getaround initially had a large monolithic application and some peripheral applications serving production traffic on Heroku, with their internal applications hosted on AWS ECS. Like many other startups, Getaround began hosting on the PaaS for its ease of use. However, as the car marketplace grew, it reached the limits of what the PaaS could offer and began to feel constrained.

For example, they couldn’t peer their Heroku-hosted workloads with their VPCs on AWS without upgrading to Heroku’s Private Spaces plan, which came at a steep cost. Their application code also relied on a fair number of extra libraries and dependencies (specifically a database adapter that required system libraries), and configuring these on Heroku was very inconvenient.

Whitelisting egress IPs on Heroku also required running and managing a proxy. Furthermore, Getaround had to use Heroku’s API to stretch the platform’s limits by composing their own deployment strategies.

Knowing the plethora of options available to manage their infrastructure, Laurent felt it was time for a move away from Heroku to their own private cloud, orchestrated by Kubernetes. 

Managing EKS directly 

Laurent was certain that Kubernetes was the best option for hosting their production applications due to its flexibility, scalability, and cloud agnosticism - K8s is the industry standard for container orchestration. However, there were a few considerations before making the move.

The internal applications they hosted on AWS ECS were extremely tedious to manage. 

“Iterating on task definitions on ECS is difficult, secrets management is a hassle - really, anything on AWS feels like an administrative task that’s extremely time-consuming.” – Laurent Humez, SRE at Getaround

Managing day 2 operations for EKS (like cluster upgrades, monitoring, and logging) would be similarly tedious and time-consuming, keeping Getaround’s SRE team from focusing on the parts of their infrastructure that mattered most. The actual migration from Heroku would also require a fair amount of engineering bandwidth, and Getaround had an SRE team of two people at the time. Laurent was intimately familiar with Kubernetes and more than capable of handling these tasks - which is precisely how he knew that managing day 2 operations would tear his team away from more significant infrastructure work and was simply not the best use of their time.

Porter: the best of both worlds 

From his experience with K8s, Laurent knew that the amount of time and effort needed to fully manage their infrastructure on Kubernetes would require a significantly larger team of dedicated SREs and DevOps engineers.  

“If we had set up Kubernetes from scratch, we probably wouldn’t have done anything differently. But that would have also taken a much longer time and a much larger team.” – Laurent Humez, SRE at Getaround

By using Porter, Getaround has a production-ready, battle-tested cluster and doesn’t have to manage any day 2 operations themselves, allowing the team to focus on the most impactful infrastructure work. The actual migration was carried out by the Porter team, with zero downtime for Getaround’s applications and databases. Since Porter manages Kubernetes clusters within users’ own cloud accounts, Getaround hosts in their own VPC and can easily peer it with the rest of their infrastructure on AWS.

Having fixed egress IPs lets them get whitelisted by external services with no issues - especially useful for API calls. Configuring their infrastructure so their applications can use the system libraries required by their database adapter wasn’t a problem either. To autoscale on Heroku, Getaround used an add-on called Judoscale; on Porter, they autoscale out of the box through Kubernetes Event-driven Autoscaling (KEDA), based on a custom metric: Sidekiq queue latency.
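Getaround’s exact configuration isn’t public, but a KEDA setup driven by a custom queue-latency metric might look roughly like the sketch below. It assumes the latency is exposed to Prometheus; the Deployment name, metric name, server address, and threshold are all illustrative assumptions, not Getaround’s actual values.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sidekiq-latency-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: sidekiq-worker              # hypothetical worker Deployment
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: prometheus                # KEDA's built-in Prometheus scaler
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # assumed endpoint
        query: max(sidekiq_queue_latency_seconds)          # hypothetical metric
        threshold: "30"               # scale out when latency exceeds ~30s
```

With a trigger like this, KEDA adds worker replicas whenever the queried latency crosses the threshold and scales back down once queues drain - the event-driven behavior the team gets out of the box.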

Getaround’s SRE team wants their application developers to self-serve - in other words, to deploy an application whenever something is ready for production, without having to ask the SRE team to do so. This is one of the primary advantages of an abstraction layer like Porter: both development and infrastructure teams can maximize their bandwidth without burdening each other with support tickets just to deploy a new app. And as was the case on Heroku, Getaround’s SRE team uses the Porter API to control the schedule of their deployments more granularly, since their many developers are spread across several geographies.

Finally, the most compelling value proposition of Porter for Getaround is that the platform is inherently cloud agnostic. Since Porter runs on Kubernetes, Getaround isn’t tied to one specific cloud provider. Porter also imposes no limitations on the startup’s SRE team: they can always configure their infrastructure to meet changing business needs, while all of the undifferentiated heavy lifting of managing K8s is taken care of for them.

“If you want to do Kubernetes on your own, you need a large team of dedicated engineers working on it, constantly monitoring and maintaining your cluster. Porter handles our Kubernetes operations so we can focus on more important parts of the infrastructure.” – Laurent Humez, SRE at Getaround

Being production-ready on EKS for 8 countries’ worth of traffic

All of Getaround’s traffic runs on their Porter-managed EKS clusters; Getaround serves production traffic in eight countries - the US, France, the UK, Germany, Spain, Austria, Belgium, and Norway. Requests to applications on the production cluster average 6.6 million per day, consisting mostly of API calls.

Getaround now has an infrastructure team of essentially three people serving these eight markets. Laurent and Paul Legac work on infrastructure full time, while two other engineers, Guillaume Chateaux and Eric Favre, split their time between application development and DevOps work. Even as a public company, they’re able to maintain a lean infrastructure team and let their application developers self-serve deployments, all through Porter.
