Credentials Management Guide

Your complete guide to securely providing cloud provider credentials to Project Planton.


Overview

To deploy infrastructure, Project Planton needs permission to create resources in your cloud accounts. These permissions come from credentials - authentication information that proves you have the right to make changes.

Think of credentials like keys to different buildings. AWS credentials are like keys to Amazon's building, GCP credentials open Google's building, and so on. Project Planton needs the right key for whichever building (cloud provider) you're working with.

The Three Ways to Provide Credentials

Project Planton supports three methods for providing credentials:

  1. Environment Variables (Most common, recommended for local development)
  2. Credential Files via CLI Flags (Good for multi-account scenarios)
  3. Embedded in Manifests (Advanced, not covered in this guide)

We'll focus on the first two methods, as they're the most practical and secure.


General Principles

Security Best Practices

✅ DO:

  • Use environment variables or credential files
  • Store credentials in password managers
  • Use IAM roles and temporary credentials when possible
  • Rotate credentials regularly
  • Use least-privilege permissions (only what's needed)
  • Use different credentials for dev/staging/prod

❌ DON'T:

  • Commit credentials to Git
  • Hardcode credentials in manifests
  • Share credentials via email or chat
  • Use root/admin credentials for deployments
  • Reuse personal credentials for automation

Permission Scoping

Grant only the permissions needed for your deployments:

  • Creating compute resources? Grant compute permissions.
  • Managing databases? Grant database permissions.
  • Don't grant *:* (full access) unless absolutely necessary.

Each cloud provider has guides for setting up appropriate IAM policies.
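
As a concrete sketch, the commands below create a narrowly scoped AWS managed policy rather than granting full access. The policy name and the S3-only action list are illustrative; substitute the permissions your deployments actually need.

# Write a minimal policy document (example: S3 bucket management only)
cat > project-planton-s3-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketPolicy", "s3:GetBucketLocation", "s3:ListBucket"],
      "Resource": "*"
    }
  ]
}
EOF

# Create the managed policy so it can be attached to a deployment user or role
aws iam create-policy \
  --policy-name project-planton-s3-deployer \
  --policy-document file://project-planton-s3-policy.json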


AWS Credentials

Method 1: Environment Variables (Recommended for Local Development)

This is the simplest approach; the AWS CLI and Project Planton both read the same variables:

# Set credentials
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-west-2"  # Optional but recommended

# Verify they work
aws sts get-caller-identity

# Deploy
project-planton pulumi up --manifest ops/aws/vpc.yaml

Where to get these:

  1. AWS Console → IAM → Users → Your User → Security Credentials
  2. Click "Create access key"
  3. Download and store securely (you won't see the secret again)
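
If you prefer the CLI over the console, you can also create an access key for an existing IAM user. The user name below is illustrative; use your own deployment user.

# Create an access key for a dedicated deployment user
aws iam create-access-key --user-name project-planton-deployer
# The response contains AccessKeyId and SecretAccessKey; store both securely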

Method 2: Credential Files via CLI Flags

# Create credential file
cat > ~/.aws/project-planton-prod.yaml <<EOF
accessKeyId: AKIAIOSFODNN7EXAMPLE
secretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region: us-west-2
EOF

# Use with CLI
project-planton pulumi up \
  --manifest ops/aws/vpc.yaml \
  --aws-credential ~/.aws/project-planton-prod.yaml

Method 3: AWS CLI Profiles

Use AWS CLI profiles to manage multiple accounts:

# Configure profile
aws configure --profile production
# Enter access key, secret, region when prompted

# Use profile with Project Planton
export AWS_PROFILE=production
project-planton pulumi up --manifest ops/aws/vpc.yaml
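
Profiles live in ~/.aws/credentials, with region defaults in ~/.aws/config. A typical multi-account layout looks like this (values are placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id = AKIA...DEV...
aws_secret_access_key = ...

[production]
aws_access_key_id = AKIA...PROD...
aws_secret_access_key = ...

# ~/.aws/config
[profile production]
region = us-west-2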

Method 4: IAM Roles (Best for EC2/ECS/Lambda)

If running on AWS compute (EC2, ECS, Lambda), use IAM roles instead of access keys:

# No credentials needed - automatically provided by AWS
# Just ensure your EC2 instance/ECS task has an IAM role attached

project-planton pulumi up --manifest ops/aws/vpc.yaml
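
If you need to add a role to an instance that is already running, attach an instance profile and then confirm the role is in use. The instance ID and profile name are placeholders.

# Attach an instance profile to a running EC2 instance
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=project-planton-deployer

# Verify: the returned ARN should be an assumed role, not an IAM user
aws sts get-caller-identity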

Troubleshooting AWS Credentials

Problem: "Unable to locate credentials"

# Solution 1: Check if credentials are set
env | grep AWS

# Solution 2: Verify credentials are valid
aws sts get-caller-identity

# Solution 3: Check AWS CLI config
cat ~/.aws/credentials
cat ~/.aws/config

Problem: "Access Denied" errors

# Check what permissions your credentials have
aws iam get-user

# Verify you have the necessary IAM policies attached
# Contact your AWS administrator if you need additional permissions

Google Cloud (GCP) Credentials

Method 1: Service Account Key (Recommended for Automation)

Step 1: Create a service account and key:

# Via gcloud CLI
gcloud iam service-accounts create project-planton-deployer \
  --display-name "Project Planton Deployer"

# Grant necessary roles (example: GKE admin)
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:project-planton-deployer@my-project.iam.gserviceaccount.com" \
  --role="roles/container.admin"

# Create and download key
gcloud iam service-accounts keys create ~/gcp-key.json \
  --iam-account=project-planton-deployer@my-project.iam.gserviceaccount.com

Step 2: Use the key file:

# Method A: Environment variable (most common)
export GOOGLE_APPLICATION_CREDENTIALS=~/gcp-key.json

project-planton pulumi up --manifest ops/gcp/gke-cluster.yaml

# Method B: CLI flag
project-planton pulumi up \
  --manifest ops/gcp/gke-cluster.yaml \
  --gcp-credential ~/gcp-key-as-yaml.yaml

Method 2: Application Default Credentials (Local Development)

# Authenticate with your personal Google account
gcloud auth application-default login

# No additional configuration needed
project-planton pulumi up --manifest ops/gcp/gke-cluster.yaml

When to use: Local development, personal projects.
When NOT to use: Production, CI/CD (use service accounts instead).

Method 3: Workload Identity (Best for GKE)

If deploying from within GKE, use Workload Identity:

# Configure at cluster creation - no credentials in code
# Kubernetes service accounts automatically get GCP permissions
# Project Planton automatically uses workload identity when available
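
A minimal sketch of the binding itself, assuming the cluster already has Workload Identity enabled. The project, namespace, and service account names are placeholders.

# Allow the Kubernetes service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
  project-planton-deployer@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[deploy-tools/project-planton]"

# Annotate the Kubernetes service account so GKE knows which GCP identity to use
kubectl annotate serviceaccount project-planton \
  --namespace deploy-tools \
  iam.gke.io/gcp-service-account=project-planton-deployer@my-project.iam.gserviceaccount.com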

GCP Credential File Format for CLI Flag

# gcp-credential.yaml
serviceAccountKeyBase64: "<base64-encoded-json-key>"

To create:

# Encode your service account key
base64 -i ~/gcp-key.json | tr -d '\n' > base64-key.txt

# Create YAML file
cat > gcp-credential.yaml <<EOF
serviceAccountKeyBase64: $(cat base64-key.txt)
EOF

Troubleshooting GCP Credentials

Problem: "Application Default Credentials not found"

# Solution: Set environment variable
export GOOGLE_APPLICATION_CREDENTIALS=~/gcp-key.json

# Or authenticate with gcloud
gcloud auth application-default login

Problem: "Permission denied" errors

# Check what project you're using
gcloud config get-value project

# List available projects
gcloud projects list

# Set correct project
gcloud config set project my-project-id

# Verify service account permissions
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:project-planton-deployer@*"

Azure Credentials

Method 1: Service Principal (Recommended for Automation)

Step 1: Create a service principal:

# Create service principal and get credentials
az ad sp create-for-rbac \
  --name "project-planton-deployer" \
  --role contributor \
  --scopes /subscriptions/<subscription-id>

# Output shows:
# {
#   "appId": "abc-123",           # This is CLIENT_ID
#   "displayName": "...",
#   "password": "xyz-789",         # This is CLIENT_SECRET
#   "tenant": "def-456"            # This is TENANT_ID
# }
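
If the service principal only needs to manage a single resource group, scope the role assignment more narrowly than the whole subscription. The resource group name below is illustrative.

# Narrower scope: limit the role to one resource group
az ad sp create-for-rbac \
  --name "project-planton-deployer" \
  --role contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/my-deployments-rg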

Step 2: Use credentials:

# Method A: Environment variables
export ARM_CLIENT_ID="abc-123"
export ARM_CLIENT_SECRET="xyz-789"
export ARM_TENANT_ID="def-456"
export ARM_SUBSCRIPTION_ID="your-subscription-id"

project-planton pulumi up --manifest ops/azure/aks-cluster.yaml

# Method B: Credential file via CLI flag
cat > azure-credential.yaml <<EOF
clientId: abc-123
clientSecret: xyz-789
tenantId: def-456
subscriptionId: your-subscription-id
EOF

project-planton pulumi up \
  --manifest ops/azure/aks-cluster.yaml \
  --azure-credential azure-credential.yaml

Method 2: Azure CLI Authentication (Local Development)

# Login with your personal account
az login

# No additional configuration needed
project-planton pulumi up --manifest ops/azure/aks-cluster.yaml

Troubleshooting Azure Credentials

Problem: "Failed to authenticate"

# Verify you're logged in
az account show

# List available subscriptions
az account list

# Set correct subscription
az account set --subscription "My Subscription"

Problem: "Insufficient permissions"

# Check service principal roles
az role assignment list \
  --assignee <client-id> \
  --output table

# Add necessary role
az role assignment create \
  --assignee <client-id> \
  --role "Contributor" \
  --scope /subscriptions/<subscription-id>

Cloudflare Credentials

Step 1: Create an API token in the Cloudflare dashboard:

  1. Go to Cloudflare Dashboard → My Profile → API Tokens
  2. Click "Create Token"
  3. Select template or create custom with needed permissions
  4. Copy the token (you won't see it again)

Step 2: Use the token:

# Method A: Environment variable
export CLOUDFLARE_API_TOKEN="your-api-token-here"

project-planton pulumi up --manifest ops/cloudflare/r2-bucket.yaml

# Method B: Global API key (legacy; API token preferred)
export CLOUDFLARE_API_KEY="your-api-key"
export CLOUDFLARE_EMAIL="your-email@example.com"

project-planton pulumi up --manifest ops/cloudflare/r2-bucket.yaml

Why not recommended: the global API key grants account-wide access, while API tokens can be scoped to specific permissions.

Troubleshooting Cloudflare Credentials

Problem: "Authentication failed"

# Verify token is set
echo $CLOUDFLARE_API_TOKEN

# Test token with Cloudflare API
curl -X GET "https://api.cloudflare.com/client/v4/user/tokens/verify" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"

Problem: "Insufficient permissions"

  • Check token permissions in Cloudflare dashboard
  • Create new token with required permissions
  • Ensure token isn't expired

Kubernetes Cluster Credentials

When deploying to Kubernetes (using *.Kubernetes components), you need kubeconfig credentials.

Method 1: Default Kubeconfig File

# Project Planton automatically uses ~/.kube/config
project-planton pulumi up --manifest ops/k8s/postgres.yaml

Method 2: Custom Kubeconfig Path

# Set custom kubeconfig
export KUBECONFIG=~/.kube/staging-cluster-config

project-planton pulumi up --manifest ops/k8s/postgres.yaml

Method 3: Kubeconfig via CLI Flag

# Pass kubeconfig as YAML file
project-planton pulumi up \
  --manifest ops/k8s/postgres.yaml \
  --kubernetes-cluster ~/.kube/prod-cluster.yaml
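
To produce a standalone kubeconfig file for a single cluster (useful with the flag above), you can export one context from your existing configuration. The context name is a placeholder.

# Export a single context, with certificates embedded, to its own file
kubectl config view --minify --flatten \
  --context=my-prod-cluster > ~/.kube/prod-cluster.yaml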

Getting Kubeconfig Files

For GKE:

gcloud container clusters get-credentials my-cluster \
  --region us-central1 \
  --project my-project

For EKS:

aws eks update-kubeconfig \
  --name my-cluster \
  --region us-west-2

For AKS:

az aks get-credentials \
  --resource-group my-rg \
  --name my-cluster

Troubleshooting Kubernetes Credentials

Problem: "Unable to connect to cluster"

# Verify kubeconfig is valid
kubectl cluster-info

# Check current context
kubectl config current-context

# List available contexts
kubectl config get-contexts

# Switch context
kubectl config use-context my-cluster

Other Providers

MongoDB Atlas

# Environment variables
export MONGODB_ATLAS_PUBLIC_KEY="your-public-key"
export MONGODB_ATLAS_PRIVATE_KEY="your-private-key"

# Or via CLI flag
project-planton pulumi up \
  --manifest ops/atlas/cluster.yaml \
  --mongodb-atlas-credential atlas-creds.yaml

Snowflake

# Environment variables
export SNOWFLAKE_ACCOUNT="account-identifier"
export SNOWFLAKE_USER="username"
export SNOWFLAKE_PASSWORD="password"

# Or via CLI flag
project-planton pulumi up \
  --manifest ops/snowflake/database.yaml \
  --snowflake-credential snowflake-creds.yaml

Confluent Cloud

# Environment variables
export CONFLUENT_CLOUD_API_KEY="api-key"
export CONFLUENT_CLOUD_API_SECRET="api-secret"

# Or via CLI flag
project-planton pulumi up \
  --manifest ops/confluent/kafka.yaml \
  --confluent-credential confluent-creds.yaml

CI/CD Credential Management

GitHub Actions

name: Deploy Infrastructure

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Deploy to AWS
        run: |
          project-planton pulumi up \
            --manifest ops/aws/vpc.yaml \
            --yes
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-west-2
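
If your AWS account trusts GitHub's OIDC provider, you can skip long-lived access keys and have the job assume an IAM role at runtime instead. This is only a sketch: the role ARN is a placeholder, and the job must also declare permissions: id-token: write (plus contents: read for checkout).

      # Alternative to static secrets: obtain temporary credentials via OIDC
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/project-planton-deployer
          aws-region: us-west-2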

Store credentials in:

  • Repository Secrets (Settings → Secrets and variables → Actions)
  • Organization Secrets (for sharing across repos)
  • Environment Secrets (for environment-specific credentials)

GitLab CI

deploy:
  stage: deploy
  script:
    - project-planton pulumi up --manifest ops/gcp/cluster.yaml --yes
  variables:
    # Define GCP_SERVICE_ACCOUNT_KEY as a File-type CI/CD variable so it expands
    # to the path of a temporary file containing the service account key
    GOOGLE_APPLICATION_CREDENTIALS: ${GCP_SERVICE_ACCOUNT_KEY}
  only:
    - main

Store credentials in:

  • CI/CD Variables (Settings → CI/CD → Variables)
  • Mark as "Protected" and "Masked"

Jenkins

pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID     = credentials('aws-access-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
    }
    stages {
        stage('Deploy') {
            steps {
                sh 'project-planton pulumi up --manifest ops/aws/vpc.yaml --yes'
            }
        }
    }
}

Store credentials in: Jenkins Credentials Manager


Credential Storage Solutions

1Password:

# Store credentials
op item create --category login \
  --title "AWS Prod Credentials" \
  aws_access_key_id[password]=AKIA... \
  aws_secret_access_key[password]=wJal...

# Retrieve and use
export AWS_ACCESS_KEY_ID=$(op item get "AWS Prod Credentials" --fields aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(op item get "AWS Prod Credentials" --fields aws_secret_access_key)

pass (Unix password manager):

# Store credentials
pass insert aws/prod/access_key_id
pass insert aws/prod/secret_access_key

# Retrieve and use
export AWS_ACCESS_KEY_ID=$(pass aws/prod/access_key_id)
export AWS_SECRET_ACCESS_KEY=$(pass aws/prod/secret_access_key)

AWS Secrets Manager:

# Store credentials
aws secretsmanager create-secret \
  --name prod/gcp/service-account \
  --secret-string file://gcp-key.json

# Retrieve and use
aws secretsmanager get-secret-value \
  --secret-id prod/gcp/service-account \
  --query SecretString \
  --output text > /tmp/gcp-key.json

export GOOGLE_APPLICATION_CREDENTIALS=/tmp/gcp-key.json

HashiCorp Vault:

# Store credentials
vault kv put secret/aws/prod \
  access_key_id=AKIA... \
  secret_access_key=wJal...

# Retrieve and use
export AWS_ACCESS_KEY_ID=$(vault kv get -field=access_key_id secret/aws/prod)
export AWS_SECRET_ACCESS_KEY=$(vault kv get -field=secret_access_key secret/aws/prod)

Security Checklist

Before deploying to production:

  • Credentials stored in secure location (not in code)
  • Using least-privilege IAM policies
  • Different credentials for dev/staging/prod
  • Credentials rotated regularly (every 90 days)
  • Service accounts used instead of personal accounts
  • Temporary credentials used where possible (IAM roles, workload identity)
  • CI/CD secrets marked as protected/masked
  • Credential access logged and monitored
  • Revocation plan in place for compromised credentials
  • Team members only have access to credentials they need

Common Mistakes to Avoid

❌ Committing Credentials to Git

# This is BAD - credentials in git history forever
git add aws-credentials.yaml
git commit -m "Add AWS credentials"  # DON'T DO THIS!

If you accidentally commit credentials:

  1. Rotate the credentials IMMEDIATELY and treat them as compromised
  2. Use git-filter-branch or BFG Repo-Cleaner to remove them from history (see the sketch below)
  3. Force push the rewritten history (only if you must, and only if the repository is private)
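
A rough sketch of scrubbing a committed file from history with BFG Repo-Cleaner, assuming the file is named aws-credentials.yaml as in the example above; the repository URL is a placeholder. Rotate the keys first regardless; rewriting history does not un-leak anything.

# Work on a fresh mirror clone
git clone --mirror git@github.com:my-org/my-repo.git

# Remove the file from every commit in history
bfg --delete-files aws-credentials.yaml my-repo.git

# Expire old refs, garbage-collect, and push the rewritten history
cd my-repo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git push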

❌ Using Root/Admin Credentials

# DON'T use root AWS account credentials
# DON'T use GCP owner role
# DON'T use Azure global administrator

# DO create service accounts with minimal permissions

❌ Sharing Credentials Insecurely

# DON'T send credentials via:
# - Email
# - Slack/Teams
# - Text message
# - Unencrypted files

# DO use:
# - Secret managers
# - Encrypted password managers
# - Secure credential sharing tools (1Password, Vault)

❌ Never Rotating Credentials

# DON'T use the same credentials forever
# DO rotate every 90 days or when:
# - Team member leaves
# - Credentials may have been exposed
# - As part of regular security practice
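
For AWS access keys, rotation can be done without downtime by creating the new key before retiring the old one. The user name and key ID below are illustrative.

# 1. Create a new access key (an IAM user can have two active keys at once)
aws iam create-access-key --user-name project-planton-deployer

# 2. Update your environment, credential files, and CI/CD secrets with the new key

# 3. Deactivate the old key and confirm deployments still work
aws iam update-access-key --user-name project-planton-deployer \
  --access-key-id AKIAOLDKEYEXAMPLE --status Inactive

# 4. Delete the old key once you are confident it is unused
aws iam delete-access-key --user-name project-planton-deployer \
  --access-key-id AKIAOLDKEYEXAMPLE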

Quick Reference

Environment Variables by Provider

AWS:

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
AWS_PROFILE

GCP:

GOOGLE_APPLICATION_CREDENTIALS  # Path to JSON key file
GOOGLE_CLOUD_PROJECT

Azure:

ARM_CLIENT_ID
ARM_CLIENT_SECRET
ARM_TENANT_ID
ARM_SUBSCRIPTION_ID

Cloudflare:

CLOUDFLARE_API_TOKEN          # Recommended
CLOUDFLARE_API_KEY            # Legacy
CLOUDFLARE_EMAIL              # With legacy key

Kubernetes:

KUBECONFIG  # Path to kubeconfig file

CLI Credential Flags

--aws-credential <file>
--azure-credential <file>
--gcp-credential <file>
--kubernetes-cluster <file>
--cloudflare-credential <file>
--confluent-credential <file>
--mongodb-atlas-credential <file>
--snowflake-credential <file>


Getting Help

Found a credential issue? Check the troubleshooting section for your provider above.

Security concern? Contact your security team immediately if credentials may be compromised.

Need help? Open an issue with details (never include actual credentials in issues!).


Remember: Treat credentials like passwords. Never share them insecurely, rotate them regularly, and use the minimum permissions necessary. Your infrastructure's security depends on it. 🔐
