CASE STUDY

Why Landing chose Porter to scale their servers

To improve their infrastructure's scalability, flexibility, security, and reliability while maintaining Heroku-like convenience, Landing switched to Porter.

Shankar Radhakrishnan
April 26, 2023
5 min read

Renting is a painful process: you go through endless applications to find the perfect place, then pay a security deposit, only to be locked into a lease that can last a year or more. That’s where Landing comes in, aiming to revolutionize the way people find and rent homes through technology.

The platform offers 20,000 fully furnished rentals in 375 cities throughout the United States, allowing users to move in or out with 2 weeks’ notice, for as long or as short a period as they want. Furthermore, there are no security deposits or application fees. Landing has grown rapidly in recent years, with $347.8M in total funding to date.

Initially, Landing was hosted on Heroku because it was the easiest and fastest way to get up and running. The company had no DevOps engineers and didn’t want to focus resources on building out their own infrastructure as it wasn’t a core part of their business. But as they grew, their infrastructure had to scale accordingly and so they looked to move to AWS.

Landing’s Challenges

Four main factors necessitated this change: security, flexibility, scalability, and reliability.

Landing realized that they needed a controllable, isolated environment to better meet their security and compliance needs. Landing wanted to host their applications inside their own virtual private cloud (VPC) to improve their security posture. They were using AWS Aurora in its own VPC for their database already, so moving applications into their own VPC also meant they could improve security and latency between their applications and the database through VPC peering.

Landing also expected that moving applications into their own VPC would give them improved flexibility and control over their dedicated and private environment, unlike the shared Heroku infrastructure.

In terms of scalability, the startup was deploying more and more services with increasing levels of technical complexity, to the point where Heroku became too inflexible to handle the advances in their tech stack. As Landing leaned more and more toward a Service-Oriented Architecture (SOA), Heroku became increasingly limiting. There were some new technologies and frameworks that Landing started to use, like Rust, that also didn’t have native support on Heroku.

The level of observability on Heroku, including diagnostic and troubleshooting capabilities, was too limiting for Landing’s site reliability engineering (SRE) needs, especially when it came to networking. Landing saw several inexplicable network issues in their Heroku application logs that they couldn’t do anything about, because they had no visibility into the underlying infrastructure, which Heroku owns and shares across multiple accounts.

Landing also ran into issues with resource management. Whenever a workload running on a Heroku dyno (in cloud-native terms: a container) uses more memory than allotted, Heroku lets the workload spill into swap memory instead of cleanly restarting the container. While this prevents workloads from crashing or restarting, the use of swap often slows performance to a crawl. In many cases, that degraded performance is less acceptable (and harder to troubleshoot) than the out-of-memory pod restarts seen with Kubernetes.

Likewise, Heroku offers no way to incrementally increase the compute resources for a workload. If a workload runs on a Performance-M dyno, there is no option to allocate just a bit more memory to each dyno; the only way up is to double the cost by upgrading to the next dyno tier.
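Kubernetes handles both problems through per-container resource requests and limits: a container that exceeds its memory limit is OOM-killed and restarted cleanly rather than silently swapping, and limits can be raised in small increments. A minimal sketch of such a spec, with hypothetical names and values that are illustrative rather than Landing's actual configuration:

```yaml
# Illustrative Kubernetes pod spec fragment (hypothetical names/values).
# A container that exceeds resources.limits.memory is OOM-killed and
# restarted cleanly, instead of degrading into swap as on a Heroku dyno.
apiVersion: v1
kind: Pod
metadata:
  name: web-worker                    # hypothetical workload name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      resources:
        requests:
          memory: "2560Mi"            # roughly a Performance-M dyno's 2.5GB
          cpu: "500m"
        limits:
          memory: "3072Mi"            # bump by just 512Mi, not a jump to 14GB
          cpu: "1"
```

Unlike the Performance-M to Performance-L jump, the limit here can move in megabyte-sized steps.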

Competitive Evaluation

Landing considered their options. They first evaluated Fly and Render, two strong alternatives to Heroku. However, neither platform would let Landing host on their own infrastructure, which meant the flexibility problems they ran into on Heroku were likely to resurface, possibly necessitating yet another migration. These lingering concerns about inflexibility and potential vendor lock-in ultimately led them to rule out both platforms.

The next logical option was to manage their own containers using Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS). Landing first considered ECS but found it too AWS-centric: ECS depends heavily on the AWS ecosystem, raising the same concerns about vendor lock-in. EKS, on the other hand, offers more protection against lock-in because it runs Kubernetes, widely considered the gold standard of container orchestration with a robust open-source ecosystem. ECS also fell short on developer experience and was less powerful and flexible than EKS. However, as they looked into adopting EKS, Landing realized that running Kubernetes would incur significant DevOps overhead, and they didn’t have the engineering bandwidth to manage it on their own.

“After talking to other companies that adopted Kubernetes, I realized managing Kubernetes on your own is a fool’s errand as a startup with shifting priorities.” - Daniel Klein, Staff Engineer at Landing

Enter Porter

Porter hit the sweet spot between the two options of a fully hosted PaaS and rolling their own infrastructure from scratch, striking a balance between ease of use and flexibility for Landing. In particular, Porter lets Landing preserve the convenience of a PaaS while taking advantage of Kubernetes without having to manage its complexity.

Porter installs a battle-tested Kubernetes runtime on top of Elastic Kubernetes Service (EKS) and ensures its reliability. It also automates ongoing day-2 operations such as version upgrades, alerting, and monitoring, so Landing can run their applications on enterprise-grade infrastructure without additional overhead.

Porter’s use of Kubernetes solved the scalability issues Landing experienced on Heroku while being significantly more cost-efficient. Specifically, Porter lets Landing assign resources to their applications with far more granularity. On Heroku, Landing had only a few discrete options: the aforementioned Performance-M dyno provided 2.5GB of RAM, and if more was needed, the next step was a Performance-L dyno with 14GB of RAM. There was no in-between, which led to unnecessary cost bloat. On Porter, Kubernetes lets Landing size each application in increments as small as 1MB of RAM and 0.01 vCPU. Porter also makes it possible to autoscale Landing’s applications based on Sidekiq queue length. By default, autoscaling on Porter is driven by resource utilization, but the ability to autoscale on custom metrics like request queue time is a feature Landing found particularly effective for their web workloads.
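In Kubernetes terms, this kind of custom-metric autoscaling is typically expressed as a HorizontalPodAutoscaler targeting an external metric. A hedged sketch follows; it assumes a metrics adapter (such as the Prometheus adapter) is exposing the queue metric, and the metric and workload names below are hypothetical, not Landing's actual setup:

```yaml
# Illustrative HorizontalPodAutoscaler (autoscaling/v2) fragment.
# Assumes an external metrics adapter exposes a Sidekiq queue metric;
# names and thresholds are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sidekiq-workers
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sidekiq-workers         # hypothetical worker deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: sidekiq_queue_size   # hypothetical external metric
        target:
          type: AverageValue
          averageValue: "100"        # target jobs queued per replica
```

The same pattern works for request queue time on web workloads: swap the external metric and target value, and the autoscaler adds replicas whenever the observed average exceeds the target.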

In addition, Porter was able to address the security and compliance issues that Landing experienced by giving them full control of their own VPC and the networking layer with improved security and observability. This gave them the flexibility to diagnose and fix the problems they were unable to fix in the Heroku environment.

Landing now controls the observability of the apps and resources hosted in their private cloud, including their network. As a result, the rental startup can perform better diagnostics and troubleshooting to improve reliability, since Porter already integrates with much of the observability software Landing planned to use, such as Prometheus and Datadog. Furthermore, Porter mitigates the concern of vendor lock-in because it is built on the standard Kubernetes ecosystem, which makes the entire infrastructure more portable.

Final Choice

In the end, Landing decided to use Porter since it not only allowed them to migrate from Heroku to their own AWS account with ease but also because Porter’s platform runs on an enterprise-grade Kubernetes cluster that’s more scalable and reliable.

Compared to a traditional PaaS like Heroku, Porter gives Landing far more flexibility, which has become increasingly necessary as they scale. Compared to other container orchestrators such as ECS, Porter gives Landing both more scalable infrastructure and an easier developer experience, despite running EKS underneath. Even amid its hypergrowth, Landing still has no dedicated DevOps function, since Porter takes care of most of the company’s infrastructure needs. And even though Landing doesn’t need all the capabilities of Kubernetes right now, there’s no downside to adopting the industry-standard orchestrator with a robust ecosystem they can grow into, since Porter abstracts away its complexity.

Porter was the right choice for Landing, providing the rental startup with the developer experience, flexibility, and cost efficiency they need now. And even when Landing eventually establishes a dedicated infrastructure team, they won’t feel restricted or outgrow the platform, since Porter is built on the open-source standards of the Kubernetes ecosystem.
