Building truly seamless AI-driven workflows means bridging the gap between large language models (LLMs) and the tools they need to interact with. Over the past year, our teams at Facets have been experimenting with “toy agents,” exploring function calling, and ultimately designing Model Context Protocol (MCP) servers to expose our infrastructure and developer tools directly to LLMs. In this post, we’re sharing what we’ve learned: from the origins of MCP to best practices for designing reliable, user-friendly toolchains. Whether you’re an AI/DevOps enthusiast, a platform engineer, or simply curious about where AI meets infrastructure, you’ll walk away with practical guidance and real-world examples from our podcast.

What You Will Learn:

  • What MCP Servers Are: Understand how MCP servers allow LLMs to interact with tools and services via a standardised protocol.

  • Designing Effective MCPs: Learn how to create clear, concise function names and build intuitive toolsets for LLMs.

  • Guardrails & Dry-Run Features: Discover how to ensure safe execution with dry-runs and validation steps.

  • Tool Curation & Context Management: Explore how to limit tool access based on user context and roles so the LLM isn’t overwhelmed.

Key Takeaways:

  • MCPs simplify AI-tool integration by creating a standardised way for LLMs to access services.

  • Thoughtful function naming makes communication with LLMs more effective.

  • Guardrails prevent errors with features like dry-runs and confirmations.

  • Context-driven toolsets help keep LLMs focused and relevant, ensuring better user experience.

Why MCPs Matter Right Now

When ChatGPT first arrived, it felt a lot like an Alexa-style assistant: helpful, but limited in the context of specific developer tasks. Early adopters quickly discovered that, to get code snippets or debugging advice, they often had to copy-paste outputs or design makeshift “assistant” prompts. Function calling (i.e., giving the LLM a way to invoke predefined APIs) was a game changer: suddenly, we could hand off cognitive load, asking the LLM not only to suggest code but to actually execute or test parts of it.
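To make that concrete, here’s a minimal sketch of what function calling looks like with the OpenAI Chat Completions API. The tool name (get_service_status) and its parameters are hypothetical; the point is the shape of the definition the model actually sees.

```python
# Hypothetical function-calling setup using the OpenAI Chat Completions API.
# The tool name (get_service_status) and its parameters are made up for
# illustration; what matters is the shape of the definition the model sees.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_service_status",
        "description": "Return the health status of a deployed service.",
        "parameters": {
            "type": "object",
            "properties": {
                "service_name": {"type": "string"},
                "environment": {"type": "string", "enum": ["dev", "staging", "prod"]},
            },
            "required": ["service_name", "environment"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Is the payments service healthy in prod?"}],
    tools=tools,
)

# If the model chose to call the tool, its arguments arrive as JSON that our
# own code must execute and feed back to the model as a tool message.
print(response.choices[0].message.tool_calls)
```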

Yet as exciting as function calling was, the moment you built more than a couple of these helpers, you realised:

  1. Tool proliferation easily overwhelms the LLM when there’s no standardised discovery mechanism.

  2. UX matters: feeding raw OpenAPI specs to an LLM still requires interpretation, because APIs weren’t designed for human-like or LLM-friendly interaction.

  3. Security and guardrails are essential if an LLM can perform “destructive” operations (e.g., deleting cloud resources).

That’s when the concept of MCP servers crystallised: think of MCP as a protocol, akin to HTTP, for exposing any tool or service to an LLM in a consistent, curated fashion. Instead of letting an LLM parse raw API docs, you surface a small set of well-designed “tools” (functions) with clear names, thoughtfully shaped payloads, and built-in validation. This approach turns the LLM into a capable “user” of your services.
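As a rough illustration of that idea, here’s what a single curated tool might look like on an MCP server, assuming the Python MCP SDK’s FastMCP helper. The tool (create_terraform_module), its validation rules, and the server name are hypothetical, not our actual implementation.

```python
# A sketch of exposing one curated, validated tool over MCP, assuming the
# Python MCP SDK's FastMCP helper. The tool (create_terraform_module), its
# validation rules, and the server name are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("facets-infra")

ALLOWED_PROVIDERS = {"aws", "gcp", "azure"}

@mcp.tool()
def create_terraform_module(name: str, provider: str, dry_run: bool = True) -> str:
    """Scaffold a baseline Terraform module for the given cloud provider."""
    # Guardrail: validate the payload before doing anything expensive or destructive.
    if provider not in ALLOWED_PROVIDERS:
        raise ValueError(f"provider must be one of {sorted(ALLOWED_PROVIDERS)}")
    if dry_run:
        return f"[dry-run] would scaffold module '{name}' for {provider}"
    return f"scaffolded module '{name}' for {provider}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

The descriptive name, the typed parameters, and the allow-list validation are doing the UX work here: the LLM doesn’t have to guess what the tool does or what a valid payload looks like.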

When and Why You Should Build an MCP for Your Product

Not every SaaS needs an MCP right away. But here are three guiding questions we asked ourselves:

  1. Is there a “high friction” problem in your product that could be eliminated with an LLM-driven workflow?

  2. Are there internal processes that would be accelerated if an LLM could assist developers directly (without building a separate UI)?

  3. Are you willing to invest in the upfront UX work (tool naming, payload design, guardrails)?

Our recommendation: If you can identify a single “north star” feature such as module scaffolding or environment troubleshooting, start there. Internal adoption will teach you where to refine tool definitions, adjust validations, and expand coverage. By the time you expose it to customers, you’ll have a mature, battle-tested MCP that genuinely empowers teams.


Early Feedback: How Our Customers Are Experimenting

We’ve rolled out an early alpha of our MCP server (focused on Terraform module generation) to a handful of customers. Here’s what they’re telling us:

  • Increased experimentation: Teams that used to rely on generic Helm charts or community modules are now writing custom modules on the fly, tweaking parameters until they fit organisational guardrails.

  • Faster module onboarding: Instead of a 2-week ramp for writing and validating a new Terraform module, developers can now generate a baseline in minutes, run a dry-run plan, then refine.

  • Higher organisational buy-in: Platform engineers appreciate that their internal standards are baked into the MCP. Less “hand-holding” is needed.

Of course, there have been hiccups: hallucinations in naming, unexpected payload formats, intermittent network errors. Each bug report highlights areas where our validations or function definitions need tightening. The beauty of an internal alpha: we don’t need a fully polished MVP to learn valuable lessons.
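To make the dry-run guardrail concrete, here’s a rough sketch of the pattern we keep coming back to: run a plan first, and refuse to apply anything without explicit confirmation. The wrapper functions below are illustrative, not our actual implementation; the Terraform CLI flags themselves are standard.

```python
# Illustrative dry-run guardrail: always plan before apply, and refuse to
# apply without an explicit confirmation. Function names are made up; the
# Terraform CLI flags (plan, apply -auto-approve, -no-color) are standard.
import subprocess

def plan_module(module_dir: str) -> str:
    """Dry-run: show what would change without touching any real resources."""
    result = subprocess.run(
        ["terraform", "plan", "-no-color"],
        cwd=module_dir, capture_output=True, text=True, check=True,
    )
    return result.stdout

def apply_module(module_dir: str, confirmed: bool = False) -> str:
    """Destructive step: only runs after the plan output has been reviewed."""
    if not confirmed:
        raise PermissionError("Review the plan output first, then call with confirmed=True.")
    result = subprocess.run(
        ["terraform", "apply", "-auto-approve", "-no-color"],
        cwd=module_dir, capture_output=True, text=True, check=True,
    )
    return result.stdout
```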

The Future of MCPs: Orchestrating Multiple Tools

As MCPs evolve, we see a future where multiple MCP servers work in tandem, orchestrated by a central system. Imagine being able to call one “MCP orchestrator” that automatically fetches and passes context between tools, based on user roles and permissions. This will allow LLMs to seamlessly interact with various services (e.g., monitoring, CI/CD, infrastructure management) without requiring users to switch between different interfaces.
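One way such an orchestrator could keep LLMs focused is by curating toolsets per role before any tool definitions reach the model. The sketch below is purely illustrative; the role names and tool names are hypothetical.

```python
# Hypothetical context-driven tool curation: expose only the tools a given
# role should see, so the LLM stays focused (and safe). Role names and tool
# names are illustrative.
ROLE_TOOLSETS = {
    "viewer": {"plan_environment_change"},
    "developer": {"create_terraform_module", "plan_environment_change"},
    "platform_engineer": {
        "create_terraform_module",
        "plan_environment_change",
        "apply_environment_change",
        "rotate_credentials",
    },
}

def tools_for(role: str, all_tools: dict) -> dict:
    """Return only the tool definitions this role is allowed to invoke."""
    allowed = ROLE_TOOLSETS.get(role, set())
    return {name: fn for name, fn in all_tools.items() if name in allowed}

# A developer session would see two tools; a viewer session only the read-only one.
```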

Conclusion: The Road Ahead for MCPs

By designing MCP servers that expose tools as logical, AI-friendly functions, we’re giving LLMs the power to make smarter, more context-aware decisions, ultimately streamlining workflows and enhancing productivity.

As we continue to experiment and iterate, we encourage you to think about how MCPs can improve the workflows within your own product. Whether it’s automating manual tasks or building smarter AI-driven tools, the potential is limitless.

Listen to the first episode of our series to get started with building your own MCP implementations.

Let’s build the future of DevOps with MCPs, where AI doesn’t just support but drives innovation.