CASE STUDY

Why Dapi moved from Cloud Foundry to Porter to help enterprises manage payments

Shankar Radhakrishnan
July 24, 2023
4 min read

Dapi is a fintech startup that was part of YC’s W20 batch and aims to make payments simple. Specifically, the startup provides an API (application programming interface) that allows companies to accept instant bank payments within any application. The company has partnered with corporations like Mastercard to build out the next generation of financial technology infrastructure. We spoke with Ahmad Sameh, Senior Software Engineer at Dapi, to learn how the fintech uses Porter to help enterprise customers manage payments.

Using Cloud Foundry

Before migrating to Porter, Dapi used Cloud Foundry to manage their cloud infrastructure. Cloud Foundry is an open-source, multi-cloud Platform-as-a-Service (PaaS), originally developed in-house by VMware and now managed by the not-for-profit Cloud Foundry Foundation. It supports several cloud providers, such as Amazon Web Services and Google Cloud Platform, and uses built-in container runtimes to deploy and run applications.

Dapi has a small engineering team with limited DevOps experience, so a PaaS seemed like the best way for them to focus on building their product without worrying about managing infrastructure. However, Cloud Foundry was designed primarily with operations teams in mind (to build internal developer platforms, for example), so it doesn’t abstract Kubernetes away as thoroughly as a traditional PaaS would. As a result, Dapi’s engineering team spent far more of their bandwidth figuring out how to manage their infrastructure than they would have liked.

“Cloud Foundry felt cryptic; there was no preset method of doing anything so we’d have to figure out ways to do things ourselves and then remember those processes.” - Ahmad Sameh, Senior Software Engineer at Dapi

Due to the complexity associated with Cloud Foundry, Dapi’s engineering team looked for an alternative PaaS that would also run on their own private cloud and could handle their enterprise-grade architecture and applications. 

Moving to Porter 

Like Cloud Foundry, Porter is open-source, multi-cloud, and container-based, running specifically on top of Kubernetes. However, Porter was designed to be as convenient to use as traditional PaaS offerings like Heroku, and acts as a layer of abstraction on top of users’ Kubernetes clusters. Users don’t need to configure the underlying infrastructure unless they want to go under the hood for a specific reason, such as security configuration.

CI/CD 

The most significant problem Dapi ran into on Cloud Foundry was building and maintaining their Continuous Integration and Continuous Delivery (CI/CD) pipeline. Cloud Foundry’s CI/CD solution required platform-specific logic, and the team couldn’t find many resources to help them get set up.

On Porter, CI/CD is handled for users through GitHub Actions. All they have to do is select a GitHub repository, a source branch, and a root path; Porter automatically detects Dockerfiles or buildpacks and configures the build method accordingly. Users can then add new services to their application and authorize Porter to open a pull request in their branch that contains a GitHub Actions file named porter.yaml.
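
For a sense of what that file contains, below is a minimal, hypothetical sketch of a porter.yaml defining a web service and a background worker. The service names, commands, and fields are illustrative assumptions for this article, not Dapi’s actual configuration, and may not match Porter’s exact schema.

```yaml
# Hypothetical porter.yaml sketch; service names, commands, and fields are
# illustrative and may not match Porter's exact schema or Dapi's setup.
version: v2
name: payments-api            # assumed application name
services:
  - name: web                 # public-facing API service
    type: web
    run: npm start            # assumed start command
    port: 8080
  - name: worker              # background worker for long-running bank requests
    type: worker
    run: npm run worker
```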

Unlike Cloud Foundry’s CI/CD solution, GitHub Actions is widely adopted, which makes it far easier to customize and maintain; there is a wealth of online resources (including many pre-built CI workflows) for any sort of troubleshooting. Users who want to customize their CI/CD pipeline or use a different CI provider can do so through the Porter CLI (command line interface), which lets them run the commands specific to their setup.
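
As a rough illustration of such a custom pipeline, the workflow below deploys from a staging branch by calling the Porter CLI. It is only a sketch: the CLI installation step is omitted, and the `porter apply` invocation and secret names are assumptions made for this example, not verified details of Porter’s generated workflow or Dapi’s setup.

```yaml
# Rough sketch only: the `porter apply` command and secret names are
# assumptions, and the Porter CLI install step is omitted.
name: deploy-staging
on:
  push:
    branches: [staging]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumes the Porter CLI was installed in a prior step.
      - name: Deploy application from porter.yaml
        run: porter apply -f porter.yaml
        env:
          PORTER_TOKEN: ${{ secrets.PORTER_TOKEN }}   # hypothetical secret name
```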

Environment variables & secrets

Another major point of friction with Cloud Foundry was environment variables. Dapi needed to set the correct environment variables for Cloud Foundry’s proprietary runtime environment, but resources on the subject were scarce, creating yet another hurdle. Furthermore, managing secrets, passwords, and other credentials was handled through the CredHub plug-in; creating service instances of the CredHub Service Broker and binding them to each application was tiresome and complicated.

With Porter, environment variables can be added to an application with ease and are then available to all of the application’s services. Environment groups can be used to bundle a common set of variables and sync them to an application, so that every time an environment group is edited, the change automatically propagates to every application it is synced with. Secret environment variables, which are not exposed after creation, can be created by selecting the lock icon, shown below, when creating an environment variable.

Creating a secret environment variable on Porter.
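
Conceptually, this pattern resembles standard Kubernetes primitives: a shared ConfigMap for plain variables and a Secret for sensitive ones, pulled into each service with envFrom. The manifest below is a generic Kubernetes sketch of that idea with made-up names; it is not Porter’s actual implementation.

```yaml
# Generic Kubernetes sketch of an "environment group"; all names are made up
# and this is not Porter's actual implementation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: payments-env-group          # hypothetical shared variable group
data:
  LOG_LEVEL: "info"
  BANK_API_BASE_URL: "https://bank.example.com"
---
apiVersion: v1
kind: Secret
metadata:
  name: payments-secrets            # hypothetical secret group
type: Opaque
stringData:
  BANK_API_KEY: "replace-me"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: example/web:latest
          envFrom:
            # Every service referencing the group reads the same values;
            # pods pick up edits the next time they restart.
            - configMapRef: { name: payments-env-group }
            - secretRef: { name: payments-secrets }
```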

Architecture on Porter 

As a fintech API company, Dapi has certain technical requirements that necessitate further configuration under the hood. Furthermore, since they serve enterprise clients and are partnered with colossal card companies like Mastercard, reliability is a paramount concern, as are security and compliance.

Compliance 

Since Porter acts as a middleware layer within users’ own private cloud, users can configure their underlying infrastructure to suit their business requirements. For Dapi, this includes going through a SOC 2 Type 2 compliance audit, meaning they have implemented the internal controls laid out in the American Institute of Certified Public Accountants (AICPA) Trust Services Criteria to ensure the security and privacy of the customer data they process. The engineering team simply had to go under the hood and configure their underlying infrastructure to meet these requirements. They also plan on becoming compliant with the Payment Card Industry Data Security Standard (PCI DSS), a standard set by a council founded by the major card brands. Becoming PCI DSS compliant will likewise entail configuring their underlying infrastructure, which Porter does not prevent users from doing; users retain full control and flexibility over how their infrastructure is configured.

Health checks

Users’ cloud provider of choice ensures the reliability of the underlying resources; AWS and GCP run global infrastructure that is highly available and redundant, so reliability concerns are largely mitigated. One Porter feature that Dapi uses frequently to maintain their applications’ reliability is health checks. A health check is an endpoint that indicates an application is healthy and ready to receive traffic; until that indication occurs, traffic won’t switch over from the old application instance. A healthy application should return a 200 status when it is ready for traffic and a 500-level error otherwise. Health checks can be configured directly on the Porter dashboard:

Configuring health checks on Porter.
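
In Kubernetes terms, this corresponds to a readiness probe on the container. The snippet below is a generic sketch with a hypothetical /healthz endpoint and illustrative timings; it is not taken from Dapi’s configuration.

```yaml
# Generic readiness probe sketch; the /healthz path and timings are
# illustrative, not Dapi's actual settings.
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example/api:latest
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /healthz          # should return 200 when ready, 5xx otherwise
          port: 8080
        initialDelaySeconds: 5    # wait before the first check
        periodSeconds: 10         # probe every 10 seconds
        failureThreshold: 3       # mark unready after 3 consecutive failures
```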

Autoscaling and replicas

As an API provider, Dapi has strict requirements for the reliability of their infrastructure. They need to ensure that no in-flight API call is dropped; occasionally, a call takes up to four minutes to process. They also need to make sure their containers are never under-provisioned and can handle any volume of API calls at any given moment. Fortunately, Kubernetes lifecycle hooks, which run scripts in response to changes in a container’s lifecycle, let them pair a preStop hook with a longer termination grace period, so containers keep running and only terminate after the hook handler has executed. Dapi is never under-provisioned either, as autoscaling spins up more resources whenever the volume of API calls increases.
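
At the Kubernetes level, that combination looks roughly like the pod spec below, which pairs a preStop hook with a termination grace period long enough to cover a four-minute request. The image, sleep duration, and grace period are illustrative values, not Dapi’s actual settings.

```yaml
# Illustrative sketch; the image, sleep duration, and grace period are
# assumptions sized to cover requests that take up to ~4 minutes.
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  terminationGracePeriodSeconds: 300   # time allowed before a hard SIGKILL
  containers:
    - name: api
      image: example/api:latest
      lifecycle:
        preStop:
          exec:
            # Keep the container alive while in-flight calls finish; Kubernetes
            # only sends SIGTERM to the container after this hook returns.
            command: ["sh", "-c", "sleep 240"]
```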

Custom security configuration 

Dapi’s architecture is also uniquely configured for security purposes: the fintech startup makes API calls in only one direction to the colossal institutional banks it’s partnered with, submitting requests to and receiving information from those banks. The requested data is then made available on Dapi’s platform, giving clients’ applications seamless access to the pertinent financial data; Dapi’s APIs act as a bridge between those clients’ applications and their end users’ bank accounts.

Furthermore, some of Dapi’s applications (specifically their staging cluster and its associated applications) are only accessible from approved IP addresses, which are whitelisted at the networking layer of their infrastructure. This is accomplished through ingress annotations. For context, ingress controllers are the Kubernetes components that accept inbound traffic and route it to the correct service in the cluster. All clusters provisioned by Porter include an NGINX ingress controller, which can be annotated under the Advanced tab on the Porter dashboard:

Custom Ingress Annotations on Porter.
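
For illustration, IP whitelisting on an NGINX ingress controller is commonly expressed with the whitelist-source-range annotation. The manifest below is a generic example with placeholder hostnames and CIDR ranges, not Dapi’s actual annotations.

```yaml
# Generic ingress-nginx example; hostnames and CIDR ranges are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: staging-api
  annotations:
    # Only requests originating from these IP ranges may reach the service.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24,198.51.100.7/32"
spec:
  ingressClassName: nginx
  rules:
    - host: staging.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: staging-api
                port:
                  number: 80
```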

These staging applications can also be accessed through a Virtual Private Network (VPN), which guarantees end-to-end encryption and is not reachable over the public internet. For some of their services, Dapi allows only a single tenant (in this case, one user), so they use separate namespaces. Each namespace is self-contained, housing the worker services necessary for their API to function. If one namespace needs to communicate with another, it does so through a dedicated admin namespace.
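
One common way to enforce that kind of isolation at the Kubernetes level is a NetworkPolicy that only admits traffic from a pod’s own namespace and from a designated admin namespace. The policy below is a generic sketch of the pattern with made-up namespace names and labels; it is not taken from Dapi’s configuration.

```yaml
# Generic isolation sketch; namespace names and labels are made up.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: tenant-a
spec:
  podSelector: {}                 # applies to every pod in tenant-a
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}         # allow pods within the same namespace
        - namespaceSelector:
            matchLabels:
              role: admin         # allow pods from the admin namespace
```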

Dapi on Porter 

With Porter, Dapi has the flexibility and control over their infrastructure to serve enterprise-level clients reliably, from custom security and compliance configurations to environment variables and secrets. They get to leverage everything Kubernetes has to offer while keeping the convenience and simplicity of a traditional PaaS.
