YC SERIES

How Writesonic runs a 1.6TB Kubernetes cluster with no DevOps engineers

With Porter managing Writesonic's infrastructure and developer experience on EKS, the Writesonic team can continue to scale confidently without being bottlenecked by DevOps.

Justin Rhee
April 15, 2022
2 min read

Writesonic is an AI-powered writing assistant that generates high-quality blog posts and marketing copy. Over 300,000 businesses and individuals use Writesonic to scale their content marketing. Writesonic went through Y Combinator as a member of the S21 batch.

Writesonic's platform relies on a combination of OpenAI's language APIs, custom ML models running on AWS, and application services deployed to EKS through Porter.

Searching for a Heroku-like experience on Kubernetes

Coming out of YC, Writesonic wanted to quickly improve application scalability as signups climbed to thousands per week. The team had previously hosted on AWS services like EC2 and Lightsail, but wanted to explore deploying to Kubernetes (k8s) after repeatedly running into scaling issues during large traffic spikes.

Yet while Writesonic found the faster scaling of Kubernetes appealing, the infrastructure overhead of running k8s felt daunting, even with a managed service like AWS's Elastic Kubernetes Service (EKS).

"We knew we wanted to use Kubernetes, but as a team without DevOps engineers we didn't want to focus on deploying to and managing the cluster."
Samanyou Garg, CEO of Writesonic

Even though Kubernetes seemed ideal from a performance perspective, the Writesonic team didn't want scalability to come at the expense of developer velocity. Like most startups, they wanted to focus on the application logic of their services rather than on the underlying infrastructure.

Scaling with Porter after Y Combinator

Initially, Writesonic considered hiring for DevOps and having its engineers gradually familiarize themselves with Kubernetes. Ultimately, however, Samanyou opted to use Porter to reap the benefits of Kubernetes without the DevOps overhead traditionally associated with it.

"Porter helped us set everything up on Kubernetes and get up and running in just a couple days. We haven't had to go under the hood at all with EKS and the experience has been extremely smooth."
Samanyou Garg, CEO of Writesonic

Instead of having to configure a build pipeline and manage a number of k8s manifests and YAML files, Writesonic was able to simply point Porter to their existing Dockerfiles and have their services automatically deployed to EKS.

In addition, since all Porter-provisioned clusters were created in Writesonic's existing AWS account, auxiliary ML workloads on AWS SageMaker could be securely connected to the app services running on Porter.
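Writesonic's internal setup isn't detailed in this post, but as a rough sketch of what that connection can look like, an app service running on the Porter-managed cluster can call a SageMaker endpoint in the same AWS account using the standard boto3 client. The endpoint name and payload below are hypothetical.

```python
import json

import boto3

# Hypothetical endpoint name for illustration only; Writesonic's actual
# SageMaker resources aren't described in this post.
ENDPOINT_NAME = "content-scoring-model"

# Because the Porter-managed cluster runs in the same AWS account, pods can
# reach SageMaker with ordinary AWS credentials (for example, an IAM role
# attached to the nodes) rather than any cross-account setup.
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")


def score_draft(text: str) -> dict:
    """Send a draft to the SageMaker endpoint and return its JSON response."""
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"text": text}),
    )
    return json.loads(response["Body"].read())
```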

Focusing on growth and product

Since the YC batch, the Writesonic team hasn't had to manage instances on AWS or go under the hood with Kubernetes at all, thanks to the sane defaults set by Porter. As a result, the engineering team has been able to dedicate its bandwidth to Writesonic's core products, focusing on newer features like long-form content generation.

"Now we are hiring developers across the entire platform but haven't had to worry about hiring DevOps. Our developers love the experience and our focus can remain on the product."
Samanyou Garg, CEO of Writesonic

With Porter handling its infrastructure and developer experience on EKS, Writesonic can keep scaling confidently without worrying about a DevOps bottleneck.

Looking to improve your developer experience on Kubernetes? Learn more about Porter today.
