The Confession
I have a confession to make: I’ve spent the better part of a decade paying for things I couldn’t see.
Not in some metaphysical sense (though that too), but literally. Every month, thousands of dollars flowing to cloud providers for infrastructure that existed somewhere in the ether, governed by rules I couldn’t read, priced by algorithms I couldn’t understand, running code I’d never see.
And for years, I told myself this was fine. This was progress. This was abstraction.
But somewhere along the way, abstraction became opacity. Convenience became dependence. And “serverless” became a euphemism for “someone else’s servers, someone else’s rules, someone else’s bill.”
Lately, I’ve been dreaming.
This is not a product announcement. There’s no GitHub repo to star, no landing page to visit, no waitlist to join. This is a thought experiment, a detailed sketch of something I’ve been calling AnakinCloud in my head. A meditation on the question that’s been nagging at me since I deployed my first Heroku app in 2012: Who owns this magic, and why can’t I see the trick?
Part I: The Economics of Opacity
The Vercel Tax
Let’s talk numbers, because numbers don’t lie (though they can certainly mislead).
A typical Series A startup running on Vercel might see a bill in the neighborhood of $4,300/month once team seats, bandwidth, and function invocations are all tallied up.
That’s $4,300/month for what amounts to a sophisticated nginx configuration and some Lambda functions with good DX.
Now, I’m not here to bash Vercel. They’ve done remarkable things for developer experience. But there’s a fundamental tension at the heart of their model: they profit from your ignorance.
Not maliciously. It’s just how the incentives align. The more magical the platform feels, the less you question the price. The more you depend on their proprietary edge network, the harder it is to leave. The more opaque the billing, the harder it is to optimize.
The EKS Paradox
“Fine,” you say, “I’ll just run Kubernetes myself.”
And so begins a different kind of suffering.
AWS EKS promises you the power of Kubernetes without the operational burden. What it delivers is a control-plane fee of roughly $73/cluster/month ($0.10 an hour, just to exist), plus:
- NAT Gateway charges that would make a telecom executive blush
- Data transfer fees designed by someone who hates the concept of microservices
- Load balancer costs that scale linearly with your paranoia
- A complexity cliff that turns “just add another node” into a three-day Terraform adventure
| The Promise | Vercel Reality | EKS Reality |
|---|---|---|
| Getting started | 5 minutes | 5 hours (if lucky) |
| Monthly cost (startup) | $4,000+ | $2,500+ |
| Monthly cost (scale) | $15,000+ | $8,000+ |
| Vendor lock-in | High (proprietary) | Medium (AWS-specific) |
| Understanding your bill | Impossible | PhD required |
| Exit strategy | Rewrite everything | Rewrite networking |
The EKS paradox is this: you chose Kubernetes for portability, then spent six months building AWS-specific infrastructure that only works on AWS.
Imagining a Different Path
What if there was a third option?
What if a platform could offer:
- The developer experience of Vercel (git push, get URL)
- The power of Kubernetes (scale anything, run anything)
- The transparency of open source (see every line of code)
- The cost of running your own infrastructure (minus the operational pain)
That’s not a rhetorical question. It’s the question I keep asking myself. And this article is my attempt to sketch out what the answer might look like.
Part II: The Philosophy of Transparent Infrastructure
Standing on the Shoulders of Giants
Here’s what I believe: the cloud shouldn’t be magic; it should be machinery.
Machinery can be understood. Machinery can be inspected. Machinery can be repaired by someone other than the original manufacturer.
The open-source community has spent two decades building the most sophisticated infrastructure machinery in human history. Kubernetes. Prometheus. Cilium. Traefik. PostgreSQL. Every piece is documented, battle-tested, and free as in freedom.
This is the philosophy I’m imagining for AnakinCloud:
Everything is Open Source
Not “open core” with premium features. Not “source available” with scary licenses. Actually, truly, fork-it-and-compete-with-us open source. MIT licensed. Forever.
Every Abstraction is Escapable
Imagine an anakin deploy command that also shows you the CRDs it generates. Use the CLI forever, or graduate to raw kubectl when you’re ready. Your choice.
Pricing is Transparent
Not “contact sales.” Not “it depends.” A public price list that shows exactly what everything costs, plus a calculator that tells you the truth before you commit.
Self-Hosting is First-Class
Don’t trust us? Run the entire platform on your own hardware. Same code, same features, zero vendor lock-in. The managed version would just be a convenience.
The Two-Path Promise
In this dream, when you use AnakinCloud, you’re making a bet, but it’s not a one-way bet.
Path A: Managed Bliss
You want the Vercel experience. Push code, get URLs. Never think about servers. That’s fine. Imagine this:
# This is all you'd need
anakin login
anakin deploy
# Your app is live at https://your-app.anakin.cloud
The platform handles the Kubernetes. The databases. The certificates, the scaling, the monitoring, the backups. You write code; the platform keeps it running.
Path B: Transparent Power
But here’s what would make it different: at any moment, you could peek behind the curtain.
# Show me what you're actually doing
anakin export --format=yaml > my-infrastructure.yaml
# Actually, let me just run this myself
kubectl apply -f my-infrastructure.yaml
That my-infrastructure.yaml file? It wouldn’t be a proprietary format. It would be standard Kubernetes manifests with Custom Resource Definitions. You could take it anywhere. Run it on EKS. Run it on GKE. Run it on a Raspberry Pi cluster in your garage.
The Giants We’d Stand Upon
Before we go further, let’s acknowledge the open-source projects that would make something like this possible. AnakinCloud wouldn’t need to write much “new” code. It would integrate, configure, and compose:
The Foundation

- RKE2 for the Kubernetes distribution
- Cilium for networking
- Traefik for ingress
- CloudNativePG for PostgreSQL
- Tekton for build pipelines
- Harbor for the container registry
- Prometheus and Grafana for observability

Every one of these projects has a corporate sponsor, a community of maintainers, and years of production hardening. We wouldn’t be reinventing wheels; we’d be building a car.
Part III: The Architecture of Honesty
Let’s get technical. Because transparency isn’t just philosophy. It’s architecture. And even in a dream, the details matter.
Why RKE2 on Hetzner?
These wouldn’t be arbitrary choices. They’d be the result of asking: “What would we use if we had to bet our own money?”
The numbers speak for themselves: Hetzner sells cloud instances with dedicated vCPUs for a fraction of hyperscaler prices, with generous traffic included. Compare that to AWS, where an equivalent t3.medium runs $30/month before you’ve transferred a single byte.
The Control Plane
Here’s what the architecture could look like, stripped of marketing:
┌─────────────────────────────────────────────────────────────────┐
│                    AnakinCloud Control Plane                    │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐         │
│   │   Project   │    │ Deployment  │    │  Database   │         │
│   │  Operator   │    │  Operator   │    │  Operator   │         │
│   └──────┬──────┘    └──────┬──────┘    └──────┬──────┘         │
│          │                  │                  │                │
│          ▼                  ▼                  ▼                │
│   ┌─────────────────────────────────────────────────────────┐   │
│   │                  Kubernetes API Server                  │   │
│   │           (CRDs + Native Resources + Secrets)           │   │
│   └─────────────────────────────────────────────────────────┘   │
│          │                  │                  │                │
│          ▼                  ▼                  ▼                │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐         │
│   │ Namespaces  │    │   Tekton    │    │ CloudNative │         │
│   │    RBAC     │    │  Pipelines  │    │     PG      │         │
│   │   Quotas    │    │    Pods     │    │  Clusters   │         │
│   └─────────────┘    └─────────────┘    └─────────────┘         │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
Nothing proprietary. Every box would be an open-source project. Every connection would be a standard Kubernetes API call.
Part IV: Dreaming in Custom Resource Definitions
Kubernetes, Extended
The real power of building on Kubernetes is the Custom Resource Definition (CRD) system. CRDs let you extend the Kubernetes API with your own resource types, and then those resources get all of Kubernetes’ built-in superpowers: RBAC, audit logging, watch semantics, declarative reconciliation.
In this vision, AnakinCloud would define five core CRDs. Let me show you what they could look like, because in a transparent platform, you’d be reading these files when you want to understand what’s happening.
The Project CRD
Imagine a “project” as just a Kubernetes custom resource:
apiVersion: anakin.cloud/v1alpha1
kind: Project
metadata:
  name: acme-corp
  namespace: anakin-system
spec:
  # Who owns this project
  owner:
    email: alice@acme.corp
    organizationId: org-123
  # Resource quotas - transparent limits
  quotas:
    maxCpu: "16"
    maxMemory: "32Gi"
    maxStorage: "100Gi"
    maxDeployments: 25
  # Networking
  domains:
    - "*.acme.corp"
    - "api.acme.io"
  # Billing tier (determines pricing)
  tier: startup
  # Feature flags
  features:
    previewEnvironments: true
    customDomains: true
    dedicatedDatabase: true
That’s it. That’s a project. No magic, no hidden state. Just a YAML file that tells the cluster what resources this project can use.
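Under the hood, a resource like this is just data for an operator to act on. Here is a sketch of the Go types that could back the Project spec; field names mirror the YAML above, the kubebuilder markers and scheme registration a real operator needs are omitted, and every name is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ProjectSpec mirrors the YAML above. In a real operator these structs
// would carry kubebuilder markers and be registered with the scheme.
type ProjectSpec struct {
	Owner    Owner    `json:"owner"`
	Quotas   Quotas   `json:"quotas"`
	Domains  []string `json:"domains"`
	Tier     string   `json:"tier"`
	Features Features `json:"features"`
}

type Owner struct {
	Email          string `json:"email"`
	OrganizationID string `json:"organizationId"`
}

type Quotas struct {
	MaxCPU         string `json:"maxCpu"`
	MaxMemory      string `json:"maxMemory"`
	MaxStorage     string `json:"maxStorage"`
	MaxDeployments int    `json:"maxDeployments"`
}

type Features struct {
	PreviewEnvironments bool `json:"previewEnvironments"`
	CustomDomains       bool `json:"customDomains"`
	DedicatedDatabase   bool `json:"dedicatedDatabase"`
}

// specJSON serializes a spec exactly as the API server would store it.
func specJSON(s ProjectSpec) string {
	b, _ := json.Marshal(s)
	return string(b)
}

func main() {
	spec := ProjectSpec{
		Owner:    Owner{Email: "alice@acme.corp", OrganizationID: "org-123"},
		Quotas:   Quotas{MaxCPU: "16", MaxMemory: "32Gi", MaxStorage: "100Gi", MaxDeployments: 25},
		Domains:  []string{"*.acme.corp", "api.acme.io"},
		Tier:     "startup",
		Features: Features{PreviewEnvironments: true, CustomDomains: true, DedicatedDatabase: true},
	}
	fmt.Println(specJSON(spec))
}
```

Kubernetes would store the YAML; the operator would decode it into exactly these structs and reconcile against them. No hidden state on either side.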
The Deployment CRD
When you’d run anakin deploy, here’s what could get created:
apiVersion: anakin.cloud/v1alpha1
kind: Deployment
metadata:
  name: api-server
  namespace: proj-acme-corp
  labels:
    anakin.cloud/project: acme-corp
    anakin.cloud/tier: startup
spec:
  # Source - supporting git or pre-built images
  source:
    git:
      repository: https://github.com/acme/api
      branch: main
      path: "."
    buildStrategy: buildpack # or 'dockerfile'
  # Runtime configuration
  runtime:
    instances:
      min: 2
      max: 10
    resources:
      requests:
        cpu: "250m"
        memory: "512Mi"
      limits:
        cpu: "1000m"
        memory: "2Gi"
    # Health checks
    healthCheck:
      path: /health
      intervalSeconds: 10
    # Environment (references secrets properly)
    env:
      - name: NODE_ENV
        value: production
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: acme-corp-db
            key: connection-string
  # Scaling behavior
  scaling:
    metric: cpu
    targetUtilization: 70
    scaleDownDelay: 300s
  # Traffic management
  traffic:
    canary:
      enabled: false
      percentage: 0
    rateLimit:
      requestsPerSecond: 100
      burstSize: 200
This resource would get picked up by the Deployment Operator, which would:
- Clone your repository (using a Tekton pipeline)
- Build your image (using Cloud Native Buildpacks or your Dockerfile)
- Push to a registry (Harbor, also open source)
- Create the Kubernetes Deployment, Service, and HPA
- Configure Traefik ingress rules
- Set up Prometheus scraping
All of this would be visible. All of this would be overridable. All of this would be yours.
The Database CRD
Managed databases are where PaaS providers really make their money. A simple PostgreSQL instance on AWS RDS can cost $50-100/month for something you could run yourself for $10.
Here’s what a more honest approach could look like:
apiVersion: anakin.cloud/v1alpha1
kind: Database
metadata:
  name: acme-corp-db
  namespace: proj-acme-corp
spec:
  engine: postgresql
  version: "16"
  tier: startup # startup | growth | scale | enterprise
  # High availability
  replicas: 2 # Primary + 1 replica
  # Backup configuration
  backup:
    enabled: true
    schedule: "0 */6 * * *" # Every 6 hours
    retention: 7d
    destination:
      s3:
        bucket: acme-backups
        endpoint: s3.eu-central-1.amazonaws.com
  # Connection pooling via PgBouncer
  pooler:
    enabled: true
    mode: transaction
    maxConnections: 100
Under the hood, this would use CloudNativePG, arguably the most sophisticated Kubernetes operator for PostgreSQL. No wrapper, no fork; the operator would be used directly.
Part V: The Operator Pattern
How Kubernetes Could Become a PaaS
The secret to this vision isn’t any single piece of software. It’s how they would work together.
Kubernetes has this beautiful concept: the control loop. You declare what you want (spec), and controllers continuously work to make reality match your declaration (status).
We could extend this pattern with operators:
// Simplified reconciliation logic for the Deployment Operator
func (r *DeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var deployment anakinv1.Deployment
    if err := r.Get(ctx, req.NamespacedName, &deployment); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Phase 1: Ensure build pipeline exists
    pipeline, err := r.ensureBuildPipeline(ctx, &deployment)
    if err != nil {
        return ctrl.Result{}, err
    }

    // Phase 2: If source changed, trigger new build
    if r.sourceChanged(&deployment) {
        run, err := r.triggerPipelineRun(ctx, &deployment, pipeline)
        if err != nil {
            return ctrl.Result{}, err
        }
        deployment.Status.Phase = anakinv1.DeploymentPhaseBuilding
        deployment.Status.CurrentBuild = run.Name
        return ctrl.Result{RequeueAfter: 10 * time.Second}, r.Status().Update(ctx, &deployment)
    }

    // Phase 3: If build complete, update Kubernetes resources
    if deployment.Status.Phase == anakinv1.DeploymentPhaseBuilding {
        build, err := r.getBuildStatus(ctx, deployment.Status.CurrentBuild)
        if err != nil {
            return ctrl.Result{}, err
        }
        if build.Succeeded() {
            deployment.Status.LatestImage = build.ImageDigest
            deployment.Status.Phase = anakinv1.DeploymentPhaseDeploying
        }
    }

    // Phase 4: Reconcile Kubernetes native resources
    if err := r.reconcileKubernetesDeployment(ctx, &deployment); err != nil {
        return ctrl.Result{}, err
    }
    if err := r.reconcileService(ctx, &deployment); err != nil {
        return ctrl.Result{}, err
    }
    if err := r.reconcileHPA(ctx, &deployment); err != nil {
        return ctrl.Result{}, err
    }
    if err := r.reconcileIngress(ctx, &deployment); err != nil {
        return ctrl.Result{}, err
    }

    deployment.Status.Phase = anakinv1.DeploymentPhaseRunning
    deployment.Status.URL = r.computeURL(&deployment)
    return ctrl.Result{}, r.Status().Update(ctx, &deployment)
}
The status of your deployment would always be queryable:
$ kubectl get deployment.anakin.cloud/api-server -o yaml
status:
  phase: Running
  currentBuild: build-abc123
  latestImage: registry.anakin.cloud/acme/api@sha256:def456...
  url: https://api.acme.corp
  replicas:
    desired: 3
    ready: 3
    available: 3
  conditions:
    - type: Available
      status: "True"
      lastTransitionTime: "2025-01-20T10:30:00Z"
Part VI: Networking Without Nonsense
Cilium: The Future of Container Networking
Traditional container networking works like this:
- Packet arrives
- iptables rules (thousands of them) evaluate the packet
- Maybe some NAT happens
- Packet finally gets where it’s going
Cilium with eBPF:
- Packet arrives
- eBPF program in the kernel makes a decision
- Done
But it’s not just about performance. Cilium gives us network policies that actually work:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-server-policy
  namespace: proj-acme-corp
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: traefik
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
  egress:
    - toEndpoints:
        - matchLabels:
            app: acme-corp-db
      toPorts:
        - ports:
            - port: "5432"
Preview Environments
One of Vercel’s killer features is preview deployments. Push a branch, get a URL. Here’s how AnakinCloud could do the same, but transparently:
apiVersion: anakin.cloud/v1alpha1
kind: Deployment
metadata:
  name: api-server
spec:
  previewEnvironments:
    enabled: true
    branchPattern: "feature/*|fix/*|preview/*"
    urlTemplate: "{{.Branch}}.preview.acme.corp"
    ttl: 24h
    resources:
      instances:
        min: 1
        max: 2
When you push feature/new-auth, the operator would:
- Create a new Deployment resource: api-server-feature-new-auth
- Build the branch
- Deploy with preview-specific config
- Create the ingress: feature-new-auth.preview.acme.corp
- Post the URL as a GitHub comment
When the PR merges, the operator would clean up automatically.
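The URL line in that flow is a good example of how little magic would be involved. Here is a sketch of how an operator might expand the urlTemplate field, assuming branch names are first sanitized into DNS-safe labels (the sanitization rule is my assumption, not a spec):

```go
package main

import (
	"bytes"
	"fmt"
	"regexp"
	"strings"
	"text/template"
)

// previewURL renders a urlTemplate like "{{.Branch}}.preview.acme.corp"
// for a given git branch. Branch names are sanitized into valid DNS
// labels first: lowercased, with runs of non-alphanumeric characters
// collapsed into a single hyphen.
func previewURL(urlTemplate, branch string) (string, error) {
	invalid := regexp.MustCompile(`[^a-z0-9]+`)
	label := invalid.ReplaceAllString(strings.ToLower(branch), "-")
	label = strings.Trim(label, "-")

	tmpl, err := template.New("url").Parse(urlTemplate)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, struct{ Branch string }{label}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	url, _ := previewURL("{{.Branch}}.preview.acme.corp", "feature/new-auth")
	fmt.Println(url) // feature-new-auth.preview.acme.corp
}
```

Standard text/template syntax, so the same field could carry whatever naming scheme a team prefers.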
Part VII: Observability as a First-Class Citizen
The Three Pillars, Done Right
“Observability” has become a buzzword, but the concept is simple: can you understand what your system is doing from the outside?
Every deployment would automatically get:
# Automatically added to your pods
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9090"
  prometheus.io/path: "/metrics"
Every project would get pre-built Grafana dashboards:
- Application Overview: Requests/sec, error rate, latency p50/p95/p99
- Resource Usage: CPU, memory, network I/O
- Database Metrics: Connections, query performance, replication lag
- Cost Attribution: how much each component is costing you
Part VIII: The Economics of Transparency
A Real Cost Breakdown
Let’s get specific. Here’s what running a typical startup workload could cost on something like this:
The Workload:
- 3 production services (API, web, worker)
- 1 PostgreSQL database (high availability)
- 2 Redis instances (cache + queue)
- Auto-scaling from 2-10 instances per service
- 50GB storage
- 500GB/month egress
| Component | AnakinCloud (Dream) | Vercel + PlanetScale + Upstash |
|---|---|---|
| Compute (avg 6 instances) | €54/mo | $180/mo |
| Database (HA PostgreSQL) | €45/mo | $99/mo |
| Redis (2 instances) | €18/mo | $40/mo |
| Storage (50GB) | €2.50/mo | Included |
| Egress (500GB) | €5/mo | $50/mo |
| Platform fee | €49/mo | $100+/mo |
| Total | €173.50/mo | $469+/mo |
But here’s the thing about transparent pricing: you could verify it.
A pricing calculator could show:
- Exactly which Hetzner instance types would be used
- The markup charged for management (imagine: 20%)
- What you’d pay if you self-hosted instead
Your estimated cost breakdown:
────────────────────────────────────────
Hetzner infrastructure: €103.75/mo
AnakinCloud platform fee: €49.00/mo (support, updates, security)
AnakinCloud margin: €20.75/mo (20% of infrastructure)
────────────────────────────────────────
Total: €173.50/mo
Self-hosted estimate: €103.75/mo (infrastructure only)
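The arithmetic behind such a calculator would be deliberately trivial; that is the point. A sketch using the illustrative numbers above, working in integer cents to avoid floating-point rounding on invoices:

```go
package main

import "fmt"

// Breakdown itemizes a monthly bill in euro cents.
type Breakdown struct {
	InfraCents    int64 // pass-through infrastructure cost
	PlatformCents int64 // flat platform fee
	MarginCents   int64 // markup as a percentage of infrastructure
	TotalCents    int64
}

// price computes the transparent breakdown: infrastructure at cost,
// a flat platform fee, and a visible percentage margin on top.
func price(infraCents, platformCents, marginPct int64) Breakdown {
	margin := infraCents * marginPct / 100
	return Breakdown{
		InfraCents:    infraCents,
		PlatformCents: platformCents,
		MarginCents:   margin,
		TotalCents:    infraCents + platformCents + margin,
	}
}

func main() {
	b := price(10375, 4900, 20) // €103.75 infra, €49 fee, 20% margin
	fmt.Printf("Total: €%.2f/mo\n", float64(b.TotalCents)/100) // Total: €173.50/mo
}
```

Every term in the invoice maps to one line of code, which is exactly the property an auditable bill needs.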
Nothing hidden. If the margin isn’t worth it to you, self-host. The platform would help you do it.
The Business Model
Self-Hosted
$0/forever
- Full platform, all features
- MIT licensed
- Community support
- You handle operations
For teams with Kubernetes expertise who want full control
Managed
€49/project/month + infrastructure
- Operations handled for you
- 24/7 monitoring
- Security patches
- Priority support
For teams who want to focus on their product
Part IX: The Self-Hosting Path
Taking Full Control
Speaking of self-hosting, here’s how it could work:
1. Get the code
git clone https://github.com/anakincloud/anakin-platform
cd anakin-platform
2. Provision infrastructure
Terraform modules for Hetzner, AWS, GCP, Azure, and bare metal:
cd terraform/hetzner
terraform init
terraform apply -var="cluster_name=my-platform"
3. Install the platform
helm repo add anakin https://charts.anakin.cloud
helm install anakin-platform anakin/platform \
  --namespace anakin-system \
  --create-namespace \
  --values my-values.yaml
4. You’re done
Same CRDs, same operators, same capabilities. The only difference would be you’re running the infrastructure.
The Graduation Path
Here’s something most platforms won’t tell you: in this vision, you could graduate from managed to self-hosted at any time.
- Export your configuration: anakin export --all > infrastructure.yaml
- Set up your own cluster
- Apply the manifests: kubectl apply -f infrastructure.yaml
- Update DNS
- Done
Your data, your configuration, your choice. Always.
Part X: Security Without Obscurity
Defense in Depth, Visible in Depth
Security through obscurity is no security at all. Here’s what a security model could look like, in the open:
Namespace Isolation
Every project would run in its own Kubernetes namespace with:
- Resource quotas (can’t starve other tenants)
- Network policies (can’t reach other tenants)
- RBAC (can’t see other tenants’ secrets)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
  namespace: proj-acme-corp
spec:
  hard:
    requests.cpu: "16"
    requests.memory: "32Gi"
    limits.cpu: "32"
    limits.memory: "64Gi"
    persistentvolumeclaims: "10"
Network Policies by Default
By default, pods couldn’t talk to each other. You’d have to explicitly allow communication:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: proj-acme-corp
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
Image Scanning
Every image built through the pipeline would be scanned with Trivy before deployment. Critical vulnerabilities would block the deploy.
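The gate itself would be a few lines of operator code. Here is a sketch of what "critical vulnerabilities block the deploy" could mean, with Finding standing in for one parsed entry of a Trivy report (the exact policy is a product decision, not a spec):

```go
package main

import "fmt"

// Finding is a stand-in for one parsed entry of a Trivy scan report.
type Finding struct {
	ID       string // e.g. a CVE identifier
	Severity string // "LOW" | "MEDIUM" | "HIGH" | "CRITICAL"
	Fixed    bool   // a fixed version exists upstream
}

// blockDeploy returns the findings that should stop a rollout:
// every CRITICAL, plus HIGHs that already have a fix available.
// (One plausible policy among many.)
func blockDeploy(findings []Finding) []Finding {
	var blocking []Finding
	for _, f := range findings {
		if f.Severity == "CRITICAL" || (f.Severity == "HIGH" && f.Fixed) {
			blocking = append(blocking, f)
		}
	}
	return blocking
}

func main() {
	report := []Finding{
		{ID: "CVE-2024-0001", Severity: "CRITICAL", Fixed: true},
		{ID: "CVE-2024-0002", Severity: "LOW", Fixed: false},
	}
	if blocked := blockDeploy(report); len(blocked) > 0 {
		fmt.Printf("deploy blocked by %d finding(s)\n", len(blocked)) // deploy blocked by 1 finding(s)
	}
}
```

Because the policy would be code in the open repository, you could read it, argue with it, or change it in your own fork.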
Audit Logging
Every API call logged: who, what, from where, when, and the response. Logs immutable and retained for 90 days minimum.
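Concretely, "who, what, from where, when, and the response" is just a structured record appended to an immutable log. A sketch of the shape one entry could take (field names are illustrative, loosely modeled on Kubernetes audit events):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// AuditEntry records one API call: who made it, what they did,
// from where, when, and how the server responded.
type AuditEntry struct {
	User       string    `json:"user"`
	Verb       string    `json:"verb"`     // get, create, delete, ...
	Resource   string    `json:"resource"` // e.g. databases.anakin.cloud/acme-corp-db
	SourceIP   string    `json:"sourceIP"`
	Timestamp  time.Time `json:"timestamp"`
	StatusCode int       `json:"statusCode"`
}

// entryJSON renders one immutable log line per API call.
func entryJSON(e AuditEntry) string {
	b, _ := json.Marshal(e)
	return string(b)
}

func main() {
	e := AuditEntry{
		User:       "alice@acme.corp",
		Verb:       "delete",
		Resource:   "databases.anakin.cloud/acme-corp-db",
		SourceIP:   "203.0.113.7",
		Timestamp:  time.Date(2025, 1, 20, 10, 30, 0, 0, time.UTC),
		StatusCode: 200,
	}
	fmt.Println(entryJSON(e))
}
```

One JSON line per call is boring on purpose: boring formats are the ones you can still parse in ten years.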
Part XI: The Plugin Ecosystem
Extensibility Without Complexity
The core platform would handle compute, databases, and networking. But modern applications need more: message queues, object storage, caches, scheduled jobs. That’s where plugins would come in.
Each plugin would be:
- Optional: only installed if you need it
- Transparent: using upstream open-source operators
- Configurable: via CRD abstraction or raw operator resources
Want something without a plugin?
# Install any Helm chart into your namespace
anakin addon install bitnami/kafka \
  --namespace proj-acme-corp \
  --values kafka-values.yaml
You wouldn’t be locked into any ecosystem. The platform would be additive, not restrictive.
Part XII: Imagining the Experience
Five Minutes to Your First Deployment
Let’s make this concrete. Imagine:
1. Sign up
brew install anakincloud/tap/anakin
anakin auth login
2. Create a project
anakin project create my-first-app
3. Deploy
cd my-existing-node-app
anakin deploy
That’s it. The CLI would detect your framework, build your image, deploy to Kubernetes, set up HTTPS, and return your URL.
✓ Detected: Next.js 14 application
✓ Build completed in 45s
✓ Deployed to: https://my-first-app.anakin.cloud
✓ SSL certificate provisioned
Your app is live!
4. Add a database
anakin db create postgres --name main-db
Connection string automatically injected as DATABASE_URL.
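Because it would be a plain postgres:// connection string, nothing on the application side would be platform-specific. A sketch of consuming it with only the standard library (the URL here is a made-up example):

```go
package main

import (
	"fmt"
	"net/url"
	"os"
)

// dbHost extracts the host from a standard postgres:// connection
// string, e.g. to log which primary the app is talking to.
func dbHost(connString string) (string, error) {
	u, err := url.Parse(connString)
	if err != nil {
		return "", err
	}
	return u.Host, nil
}

func main() {
	// The platform would inject this; hardcoded here for the example.
	os.Setenv("DATABASE_URL", "postgres://app:secret@main-db.proj-acme-corp.svc:5432/app")
	host, _ := dbHost(os.Getenv("DATABASE_URL"))
	fmt.Println(host) // main-db.proj-acme-corp.svc:5432
}
```

Any Postgres driver in any language accepts the same string, which is what keeps the database portable.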
5. See what’s running
# Via CLI
anakin status
# Or see the Kubernetes resources directly
kubectl get all -n proj-my-first-app
Imagine going from git clone to production in 5 minutes. But unlike other platforms, you could also spend 5 hours understanding exactly what’s running. The platform would be optimized for both.
The Manifesto
I started this article with a confession: I’ve spent years paying for things I couldn’t see.
AnakinCloud is my dream of an answer to that discomfort. It’s a bet on several principles that I believe could work:
Transparency Creates Trust
When you can see how something works, you can decide if it’s worth paying for. When you can’t, you’re just hoping.
Open Source is the Only Moat That Benefits Users
Proprietary platforms compete on lock-in. Open-source platforms compete on value. I’d rather compete on value.
Abstraction and Understanding Aren’t Opposites
You should be able to ignore complexity, but you shouldn’t be forced to. The best platforms are the ones you can grow into, not out of.
The Cloud Shouldn’t Be Mysterious
Servers, networks, storage: these aren’t magic. They’re machines. Machines should be understandable by the people who depend on them.
Why Am I Sharing This?
I could have kept this in my notes app forever. Another doc in the graveyard of ideas that never saw daylight.
But I believe in thinking in public. And I believe that sometimes the best way to figure out if something should exist is to describe it in detail and see if anyone else feels the same itch.
So consider this an open invitation:
- Does this resonate? Let me know. Maybe I’m not alone in this frustration.
- Does this already exist? Point me to it. I’d rather use something great than build something redundant.
- Would you use this? Tell me. The difference between a dream and a project is often just knowing someone else wants it too.
- Want to build this together? Now we’re talking.
This article describes AnakinCloud, a thought experiment about transparent infrastructure. Nothing here exists as working code. Yet. The vision is built on the shoulders of giants: RKE2, Kubernetes, CloudNativePG, Cilium, Traefik, Prometheus, and dozens of other projects maintained by thousands of contributors worldwide. The contribution would be integration and philosophy: the belief that infrastructure should be transparent, that abstraction should be escapable, and that the cloud belongs to everyone.
If you’ve read this far, thank you for dreaming with me.