Overview

Modern cloud-native software typically consists of three categories of components:

  1. Services: The APIs and web apps (for example, a Next.js frontend) built as part of the software itself.
  2. Third-Party Open Source Software: Dependencies such as Redis, PostgreSQL, Solr, Kafka, and more.
  3. Infrastructure: Components that host the software and its third-party dependencies, typically provided by cloud providers like AWS, GCP, or Azure. Which infrastructure components are required depends on the chosen provider. On AWS, for example, you might use a VPC, EC2 instances or a container orchestrator such as Kubernetes (EKS), ECS, or Fargate, Route53 zones, S3 buckets, and so on. On GCP, it could be Cloud DNS zones, GCS buckets, a GKE cluster, Compute Engine instances, and more.

(Figure: the three categories of cloud-native components; image: cloud-native-components.png)

Components from all three categories must be deployed to make the software available to its target users, and there are several ways to deploy them.

ProjectPlanton is a new framework created to streamline that deployment process: it brings a consistent experience to deploying components of all three types, and it keeps that experience consistent across cloud providers.

Chapter I: The APIs

A few pages from the Kubernetes success story became the inspiration for prototyping a framework that could deliver that consistent experience.

The aspect of Kubernetes that resonated most was the Kubernetes Resource Model (KRM). Given how widely Kubernetes is used, the tech industry at large is already familiar with YAML manifests structured as apiVersion, kind, metadata, spec, and status.

So, instead of creating a new interface and waiting for it to become as commonplace as Kubernetes manifests, ProjectPlanton adopts the same design.
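
To make that shape concrete, a ProjectPlanton manifest might look like the sketch below. The apiVersion group, the kind name, and the spec fields are illustrative placeholders rather than the actual ProjectPlanton API.

```yaml
# Hypothetical manifest, shown only to illustrate the KRM-style structure
# (apiVersion, kind, metadata, spec); the group, kind, and spec fields
# are placeholders, not the real ProjectPlanton API.
apiVersion: example.project-planton.org/v1
kind: RedisKubernetes
metadata:
  name: orders-cache
spec:
  container:
    replicas: 1
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
```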

Kubernetes uses OpenAPI to enforce API standards, but there are alternatives that offer additional benefits. Protobuf has become a more compelling choice, especially with buf and the tooling that has grown around it: Protobuf combined with the buf CLI and the Buf Schema Registry, with its remote code generation, is formidable API tooling and offers significant advantages over OpenAPI.

Although Kubernetes APIs were meant to be an interface for developers, the need to abstract underlying details like network infrastructure, container orchestration, and storage interfaces made it impractical to keep the APIs simple enough for developer consumption.

This is why another, higher level of abstraction is needed. Where Kubernetes provides a generic abstraction for deploying “any” container image with some configuration, the new abstraction layer should be specific to each deployment component.

This new abstraction layer, the ProjectPlanton APIs, is the first pillar of the ProjectPlanton Multi-Cloud Deployment Framework.

Chapter II: The IaC

When Kubernetes APIs are written as YAML manifests and submitted to the API server, they are just configuration persisted in a datastore. Kubernetes controllers then take that configuration and create the required resources. This is a simple yet powerful framework. No one cares about the implementation of these controllers except their maintainers; end users collaborate on the configuration, submit it to Kubernetes, and simply expect that the next steps will be taken care of.

If using the framework first required standing up a server-side system like that, it would demand operational knowledge and create a chicken-and-egg problem. The whole idea of building abstractions for developers starts to fall apart if they have to deal with deploying infrastructure before they even get started; it simply raises the barrier to entry.

On the other hand, without persisting the configuration somewhere, it is hard to compare the desired state against the current state. Evaluating the current state requires access to metadata captured when the resources were created, so a system of record is needed to store this information. But storing data again means deploying a storage system, a big “no” for this type of framework. So what is the path forward? Say hello to the IaC tools Terraform and Pulumi. Maintaining a system of record, evaluating current state against desired state, and bridging the gap between the two is at the heart of IaC tooling, and these tools are as ubiquitously used as Kubernetes, if not more so.

Both Pulumi and Terraform are, rightly, general-purpose tools that allow users to design any kind of infrastructure. An IaC tool can only fit into the framework if it can work with the APIs defined by the framework. Since Terraform only understands HCL, and the APIs are not written in HCL (nor is there a code generator that can transform Protobuf messages into Terraform variables), it is not a viable fit. Pulumi, on the other hand, fits nicely because it supports most modern programming languages. Remote code generation from the Buf Schema Registry combined with Pulumi's multi-language support makes the pair a compelling choice for building a client-only counterpart to Kubernetes controllers, with Pulumi state backends acting as the system of record.

In this approach, Pulumi modules combined with the Pulumi engine play the role of the reconciliation logic executed by each Kubernetes controller. Just as end users of Kubernetes are abstracted away from the controllers, users of this framework should be abstracted away from having to create or understand Pulumi modules or the Pulumi engine. To make that happen, every deployment component modeled in the APIs has a corresponding default Pulumi module.

Chapter III: The CLI

kubectl is a simple CLI tool created to let end users submit their configuration to the API server; it acts as the communication channel between end users and the Kubernetes system. A similar interface is needed to let users of the ProjectPlanton framework write their configuration and submit it to the Pulumi engine. So, the project-planton command line tool was created.

project-planton ties the APIs and the Pulumi modules together to deliver the deployment magic. It is not as sophisticated as kubectl; at its core it has a single job: let users deploy the component they need with the configuration they write in a manifest file. That is exactly what it does. When users run project-planton pulumi up --manifest <manifest.yaml>, the CLI reads the manifest, determines the API resource kind from the kind attribute, looks up the default Pulumi module created for that kind, wires the user-provided configuration in as the module's input, and delegates the action to the Pulumi CLI.
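
For example, assuming the earlier manifest sketch were saved as redis.yaml (the file name is just a placeholder), the invocation would look like this:

```shell
# Deploy the component described in the manifest. The CLI resolves the
# default Pulumi module for the manifest's kind and hands off to Pulumi.
project-planton pulumi up --manifest redis.yaml
```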

Minimal Abstractions

One of the core design philosophies behind the framework was to avoid unnecessary abstractions and to introduce them only when there was no other choice. This choice makes it possible to write Pulumi modules without any restrictions whatsoever; the abstractions live at the API layer, and that’s about it. For example, setting up the Pulumi backend and the credentials for the infrastructure provider (the AWS provider, the GCP provider, etc.) is left for end users to figure out. It may have been possible to make this simpler by adding more abstractions, but those abstractions would come with their own learning curve.
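
As a rough sketch of what that setup might look like on AWS (the bucket name and profile below are placeholders, and these are standard Pulumi and AWS mechanisms rather than anything ProjectPlanton-specific):

```shell
# Point Pulumi at a self-managed state backend, e.g. an S3 bucket.
pulumi login s3://my-pulumi-state-bucket

# Provide AWS provider credentials in the usual way, e.g. a named profile.
export AWS_PROFILE=my-deployment-profile
```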