Automation is supposed to reduce manual effort, right? But if you zoom out a little, you’ll notice something ironic—humans are still deeply embedded in "automated" CI/CD processes.

The Traditional Approach: Separate CI/CD Pipelines

Let’s talk about how application deployments are typically handled today.

We create separate Continuous Deployment (CD) pipelines that assume infrastructure is already in place. These pipelines—whether using ArgoCD, Helm, or Kubernetes manifests—deploy applications on top of an existing cluster. Sounds great, but here’s where the problems start:

Who Creates These Pipelines?

Each new application needs its CD pipeline. And who writes these? Humans.

  • Option 1: Developers manually copy self-serve templates (if available).
  • Option 2: Ops teams create them on request (time-consuming).

Problem: Every new service requires a new pipeline. Every new environment requires an even bigger effort, as multiple pipelines must be created from scratch.

Ordering & Secrets Management Is Manual

These pipelines work under the assumption that infrastructure exists. So, when a new environment is needed:

  1. Someone provisions infrastructure.
  2. Someone manually copies values and secrets.
  3. Pipelines use these shared or individual secrets for deployments.

If you zoom in, this seems fine—everything runs in the right order. But when you zoom out, you see the hidden inefficiencies:

  • Humans are responsible for manually fulfilling infrastructure outputs and passing them to pipelines.
  • Configuration management issues persist, as seen in Part 2.
  • Only a handful of “heroes” in the team know the end-to-end process, making scaling difficult.

“But My Pipelines Are Automated!”—Are They?

At this point, some people might say:

"My pipelines aren’t manual! I have a script for it. It runs on a Git commit—it’s GitOps!"

And yes, that’s great. But here’s the catch:

  • These scripts are still imperative and sequential. They execute step-by-step instructions in a set order, just like a human would.
  • They are fragile. If something breaks midway (e.g., a missing secret, a resource that isn't ready yet), the entire process halts.
  • They require human intervention when things go wrong. Someone has to debug, re-run, or fix the missing piece.
  • They are not self-healing. If an environment drifts from the expected state, the script won’t detect and fix it automatically—it needs to be manually rerun.

Essentially, these big, non-declarative, sequential scripts are just as fragile as the manual processes they replace.

They are still built on assumptions:

✔️ Infra already exists.
✔️ Secrets are already available.
✔️ Things will execute in the right order.

When reality doesn’t match these assumptions, things break. Unlike Terraform, which compares the real state of your infrastructure against the desired state on every run and corrects drift, these scripts execute once and hope everything works. If something changes later, they won’t fix it on their own.
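As a small illustration of that difference, consider a resource under Terraform management (the bucket name here is hypothetical):

```hcl
# If this bucket is deleted or modified out-of-band, the next
# `terraform apply` detects the drift during refresh and restores the
# declared state. An imperative deploy script would never notice.
resource "aws_s3_bucket" "env_config" {
  bucket = "example-env-config"
}
```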

What If All of This Was Just a Terraform Project?

Now, imagine an alternative: Everything as Code (EaC)—where your entire infrastructure, configuration, and deployment pipelines are defined in a single declarative system.

No More Manual Ordering or Wiring

  • Terraform automatically manages dependencies and ordering between infrastructure, CD pipelines, and configuration management.
  • Outputs from one module flow into another—no need for manual fulfilment.
  • Infrastructure changes trigger application deployments automatically.
  • Three-way diffs and state management further aid Terraform’s self-healing behavior.
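Concretely, that output-to-input wiring is just module references. A minimal sketch—module names, paths, and output names here are hypothetical:

```hcl
module "eks_cluster" {
  source       = "./modules/eks-cluster"
  cluster_name = "demo-${var.environment}"
}

# The pipeline module consumes cluster outputs directly. Terraform infers
# the dependency from these references and provisions the cluster first;
# no human copies values between systems, no manual ordering.
module "app_pipeline" {
  source           = "./modules/cd-pipeline"
  cluster_endpoint = module.eks_cluster.endpoint
  cluster_ca_cert  = module.eks_cluster.ca_certificate
}
```

Because the wiring is declared rather than scripted, changing a cluster output automatically ripples into every module that consumes it on the next apply.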

One IaC Trigger for Everything

Instead of triggering multiple pipelines manually or maintaining conventions to ensure order, you:

  • Run Terraform → it provisions infrastructure, generates configurations, and even sets up CD pipelines.
  • Deploy with ArgoCD or Helm → but provisioned through Terraform, not manually set up.

Result: Adding a new service is as simple as invoking a Terraform module with a few details. A new environment? One click. No extra wiring required.
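For instance, ArgoCD itself can be provisioned by Terraform rather than installed by hand. A minimal sketch using the Helm provider (the kubeconfig path is an assumption):

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

# Install ArgoCD from the official Argo Helm repository, so the CD tool
# is itself part of the declarative, drift-corrected Terraform state.
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
}
```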

CD Pipelines Become Dynamic & Declarative

Your pipelines don’t need to be hand-written per application. Instead, Terraform provisions a templated pipeline that dynamically picks up:

  • Java version
  • Image URL
  • Helm chart values
  • Any other app-specific details
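A sketch of what invoking such a templated pipeline could look like—the `service-pipeline` module, its inputs, and the image URL are all hypothetical:

```hcl
# One module call stamps out a complete CD pipeline for a new service;
# no hand-written per-application pipeline YAML.
module "payments_service" {
  source = "./modules/service-pipeline"

  service_name = "payments"
  java_version = "21"
  image_url    = "registry.example.com/payments:1.4.2"

  helm_values = {
    "replicaCount"           = 2
    "resources.requests.cpu" = "250m"
  }
}
```

Adding the next service is a copy of this block with different values—the template, not a human, carries the pipeline knowledge.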

Terraform at Scale: It Shouldn’t Be Scary

People often fear large Terraform projects, thinking they become unwieldy. But the goal is not to treat Terraform as a fragile thing that must be run once in a blue moon.

Infrastructure should be continuously managed—Terraform should run multiple times a day without fear. This is why we built Facets, and why tools like Terragrunt exist: to make large-scale Terraform manageable.

Conclusion

IaC shouldn’t just stop at infrastructure. It should extend to deployments, configurations, and pipelines, making them fully declarative.

✔️ No more manually created pipelines.
✔️ No more manual secret management.
✔️ No more rigid infrastructure assumptions.

Instead, a single Terraform project provisions everything—infra, pipelines, config management, and even deployment strategies.

This doesn’t mean ditching tools like ArgoCD—it means making them a part of your IaC strategy rather than treating them as separate, manual processes.

One-click to deploy a new service. One-click to spin up a new environment. No hidden humans in the automation loop.

And that’s what true automation looks like.