How Optimizely MCP Learns Your CMS (and Remembers It)

In Part 1, I introduced the “discovery-first” idea—an MCP that can plug into any SaaS CMS and learn how it’s structured on its own.

This post gets into the details: how the MCP discovers your schema, builds a usable map of it, and remembers what it learns so that subsequent requests feel instant.

Discovery – asking the CMS about itself

When the MCP connects to a CMS for the first time, it doesn’t guess what types exist. It introspects the CMS’s GraphQL API using the standard GraphQL introspection query.

In code, that looks like this:

// src/clients/graph-client.ts
import { getIntrospectionQuery, IntrospectionQuery } from 'graphql';

// A method on the GraphClient class; query() posts to the Graph endpoint,
// with optional response caching
async introspect(): Promise<IntrospectionQuery> {
  return await this.query<IntrospectionQuery>(
    getIntrospectionQuery(),
    undefined,
    { 
      cacheKey: 'graphql:introspection',
      cacheTtl: 3600 // 1 hour cache
    }
  );
}

getIntrospectionQuery() is the full schema introspection spec from the GraphQL reference implementation.

It doesn’t just fetch types and fields — it returns everything: object types, interfaces, enums, input objects, directives, and even nested type references up to nine levels deep.

That richness matters, because MCP needs to reason about structures like [BlockData!]! or detect which types implement _IContent or _IComponent.
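
To give a feel for the kind of reasoning involved, here’s a minimal sketch (my illustration, not the project’s actual code) of unwrapping a wrapped type reference like [BlockData!]! down to its named type:

// Illustrative helper: unwrap a type ref such as [BlockData!]! to "BlockData"
import { IntrospectionTypeRef } from 'graphql';

function unwrapType(ref: IntrospectionTypeRef): string {
  // LIST and NON_NULL are wrappers; recurse until we hit a named type
  return ref.kind === 'LIST' || ref.kind === 'NON_NULL'
    ? unwrapType(ref.ofType)
    : ref.name;
}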

Building the type map — turning raw introspection into something usable

The introspection response can be huge. It’s effectively the entire CMS schema in JSON form, describing hundreds of types and relationships. By itself, that’s unwieldy. MCP needs a faster way to navigate it.

That’s where the type map comes in.

The type map is a simple but powerful lookup table that indexes every discovered type by name.
It lets MCP jump directly to any type’s details without re-parsing the whole schema, which is what enables features like:

  • Identifying which types represent content or components.
  • Traversing relationships between nested objects.
  • Dynamically generating valid GraphQL queries without hardcoding templates.

Here’s the code that builds it:

// src/logic/graph/schema-introspector.ts
async initialize(): Promise<void> {
  if (this.schema) return;

  // Fetch and cache the full schema
  this.schema = await withCache(
    'graphql:schema:full',
    () => this.client.introspect(),
    3600
  );

  // Index all types for fast lookup
  this.schema.__schema.types.forEach(type => {
    this.typeMap.set(type.name, type);
  });

  // Identify root query type for future lookups
  const queryType = this.typeMap.get(this.schema.__schema.queryType.name);
  if (queryType && queryType.kind === 'OBJECT') {
    this.queryTypeInfo = this.extractTypeInfo(queryType);
  }

  this.logger.info('Schema introspection completed', {
    typeCount: this.schema.__schema.types.length,
    queryFields: this.queryTypeInfo?.fields?.length || 0
  });
}

By the end of this step, MCP knows how your CMS is shaped: which types exist, how they connect, and what each field exposes.
That’s the knowledge it uses to generate safe queries on the fly.
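
As a concrete illustration (my own sketch, not the project’s code), here’s how the type map can answer a question like “which types are content types?”:

// Illustrative lookup: list all object types that implement _IContent
import { IntrospectionObjectType, IntrospectionType } from 'graphql';

function findContentTypes(typeMap: Map<string, IntrospectionType>): string[] {
  return [...typeMap.values()]
    .filter((t): t is IntrospectionObjectType => t.kind === 'OBJECT')
    .filter(t => t.interfaces.some(i => i.name === '_IContent'))
    .map(t => t.name);
}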

Caching — learning once, remembering fast

Once the MCP discovers your schema, it doesn’t need to do it again. Full GraphQL introspection can take a second or more, so the server layers its caches to keep things instant after the first call.

At a high level, there are three cache tiers:

  1. Base Cache – a simple in-memory key/value store with a 5-minute TTL used across tools and logic.
  2. Discovery Cache – holds schema introspection and type maps (TTL 5–60 min) and automatically invalidates when the schema version changes.
  3. Fragment Cache – stores generated GraphQL fragments in memory and on disk, surviving restarts and invalidating on schema change.

// Simplified Base Cache (TTL in milliseconds)
const cache = new Map<string, { val: unknown; ts: number; ttl: number }>();

export function get(key: string) {
  const e = cache.get(key);
  return e && Date.now() - e.ts < e.ttl ? e.val : null;
}

export function set(key: string, val: unknown, ttl: number) {
  cache.set(key, { val, ts: Date.now(), ttl });
}
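
The withCache helper used during schema initialization wraps this store. Here’s a minimal sketch (my illustration, assuming the TTL argument is in seconds and converted to the store’s milliseconds):

// Illustrative withCache built on the get/set above (TTL in seconds)
export async function withCache<T>(
  key: string,
  factory: () => Promise<T>,
  ttlSeconds: number
): Promise<T> {
  const cached = get(key);
  if (cached !== null) return cached as T; // cache hit: skip the expensive call

  const value = await factory();           // cache miss: do the work once
  set(key, value, ttlSeconds * 1000);      // store in ms to match the base cache
  return value;
}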

Example — Getting an article page

This example shows how MCP retrieves a standard content page like ArticlePage or StandardPage — not a Visual Builder page, which we’ll cover in the next part of the series.

// User perspective — Claude or an AI client calls:
await get({ identifier: "/" });                        // homepage
await get({ identifier: "/articles/my-article/" });    // by path
await get({ identifier: "Getting Started Guide" });    // by search

Here’s what actually happens behind the scenes:

// 1. Initialize schema (cached after first call)
const introspector = new SchemaIntrospector(graphClient);
await introspector.initialize(); // runs getIntrospectionQuery()

// 2. Detect identifier type
const strategy = this.detectIdentifierType(identifier); // e.g. "path"

// 3. Find the content
const foundContent = await this.findContent(identifier, strategy, locale);
const contentType = foundContent.contentType; // e.g. "ArticlePage"

// 4. Generate or reuse a fragment
let fragment = await fragmentCache.getCachedFragment(contentType);
if (!fragment) {
  fragment = await fragmentGenerator.generateFragment(contentType, {
    maxDepth: 2,
    includeBlocks: true
  });
  await fragmentCache.setCachedFragment(contentType, fragment);
}

// 5. Build query and execute
const fullQuery = `
  ${fragment}
  query GetFullContent($key: String!) {
    _Content(where: { _metadata: { key: { eq: $key } } }) {
      items {
        _metadata {
          key displayName types url { default hierarchical }
          published lastModified status
        }
        ...${contentType}Fragment
      }
    }
  }
`;
const data = await graphClient.query(fullQuery, { key: foundContent.key });
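
Step 2’s detectIdentifierType deserves a quick aside. The real implementation may differ, but a hypothetical sketch of the idea looks like this:

// Illustrative only: classify the identifier the caller passed in
function detectIdentifierType(identifier: string): 'path' | 'key' | 'search' {
  if (identifier.startsWith('/')) return 'path';        // "/articles/my-article/"
  if (/^[0-9a-f]{32}$/i.test(identifier)) return 'key'; // a content key (GUID-like)
  return 'search';                                      // "Getting Started Guide"
}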

Cold start:

[INFO] Schema introspection completed (203 types, 1247 ms)
[DEBUG] Cache miss: fragment:ArticlePage
[INFO] Generating fragment for ArticlePage (124 ms)
[INFO] Query executed successfully (287 ms)

Warm cache:

[DEBUG] Cache hit: graphql:schema:full
[DEBUG] Cache hit: fragment:ArticlePage
[INFO] Query executed successfully (78 ms)

Example response:

{
  "_metadata": {
    "key": "f3e8ef7f63ac45758a1dca8fbbde8d82",
    "displayName": "Getting Started with MCP",
    "types": ["ArticlePage", "_Page", "_Content"],
    "url": {
      "default": "/articles/getting-started/",
      "hierarchical": "/articles/getting-started/"
    },
    "published": "2024-01-15T10:30:00Z",
    "lastModified": "2024-01-20T14:22:00Z",
    "status": "Published"
  },
  "Title": "Getting Started with MCP",
  "Heading": "Your Guide to Model Context Protocol",
  "Body": { "html": "<p>This guide will help you…</p>" },
  "PromoImage": { "url": { "default": "https://cdn.example.com/mcp.jpg" } },
  "PublishDate": "2024-01-15T00:00:00Z",
  "SeoSettings": {
    "MetaTitle": "Getting Started with MCP | Developer Guide",
    "MetaDescription": "Learn how to integrate Model Context Protocol…"
  }
}

Next up

In Part 3, we’ll dig into Visual Builder pages — the nested composition layer that makes discovery more challenging and more interesting.

That’s where our Optimizely MCP shifts from “understanding structure” to “understanding composition.”

Building a Discovery-First MCP for Optimizely CMS – Part 1 of 4

This post kicks off a four-part series on how we’re evolving the Optimizely Model Context Protocol (MCP).

The project is still in beta and open source, but it’s already capable of something new: connecting to any Optimizely SaaS CMS, discovering its schema in real time, and generating valid queries without a predefined map.

You can explore the code here: Optimizely CMS MCP on GitHub (https://github.com/first3things/optimizely-cms-mcp).

Here’s what’s coming in this series:

  • Part 2: How the discovery and caching layers work under the hood
  • Part 3: Handling Visual Builder’s complex composition hierarchy
  • Part 4: The roadmap — multi-tenant architecture, remote MCP, and potential AI-driven use cases

Why we need a Discovery-First MCP

When I first introduced MCP in Playing with MCP: An Experimental Server for Optimizely CMS, it was a proof of concept — a single interface that let AI or automation layers query content through Optimizely’s Graph and Content APIs.

It worked well for the starter template, but not for the real world.
Every Optimizely CMS ends up with its own set of content types, naming conventions, and nested structures. Once you leave the template, assumptions start to break.

The goal of this rebuild is simple: stop assuming and start discovering.

Instead of teaching MCP what a CMS should look like, we’ve made it capable of learning what it actually is.

The Core Idea

At its core, the new MCP does five things:

  1. Discovers all available types and fields through GraphQL introspection.
  2. Analyses those structures to understand relationships and possible intent mappings.
  3. Generates GraphQL queries dynamically based on what it finds.
  4. Caches discoveries so repeated requests are instant.
  5. Adapts automatically when the schema changes.

This makes MCP schema-aware instead of schema-dependent.
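
To make the flow concrete, here’s a hedged sketch of how those five steps might hang together (all names are illustrative, not the actual implementation):

// Illustrative orchestration of the five steps above
async function handleRequest(
  identifier: string,
  deps: {
    discoverSchema: () => Promise<unknown>;                             // 1 + 4: introspect, cached
    buildTypeMap: (schema: unknown) => Map<string, unknown>;            // 2: analyse structures
    generateQuery: (types: Map<string, unknown>, id: string) => string; // 3: dynamic queries
    execute: (query: string) => Promise<unknown>;                       // 5: re-discovers on change
  }
): Promise<unknown> {
  const schema = await deps.discoverSchema();
  const typeMap = deps.buildTypeMap(schema);
  return deps.execute(deps.generateQuery(typeMap, identifier));
}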

Architectural Overview

The MCP sits between AI interfaces and the Optimizely CMS, handling both retrieval and content creation through the appropriate APIs.

Discovery Layer — learns the content model via GraphQL introspection.
Mapping Engine — interprets structures and field relationships.
Query Generator — builds queries and mutations from the live schema.
Content Operations — uses the Content API for creating or updating entries with full context awareness.
Cache & State — stores schema data for fast reuse.

This design makes MCP both schema-aware and schema-adaptive: it can read and write intelligently to any Optimizely CMS without knowing its shape ahead of time.

Visual Builder Discovery

Visual Builder is where discovery gets interesting.
Its pages aren’t flat content types — they’re trees of components, where each component introduces its own schema.
Teaching MCP to navigate that structure meant dynamic fragment generation and recursive introspection — topics we’ll dig into in Part 3.
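
For a feel of the shape involved, here’s a hypothetical TypeScript sketch of that recursion (names illustrative, not the actual Graph schema):

// Illustrative only: a Visual Builder page as a recursive composition tree
interface CompositionNode {
  displayName?: string;
  component?: Record<string, unknown>; // each component brings its own schema
  nodes?: CompositionNode[];           // e.g. sections, rows, columns, components
}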

What next

In Part 2, I’ll explain how MCP discovers and caches schema data efficiently, keeps queries fast, and avoids re-introspecting on every request.

Later in the series, we’ll look at what’s next on the roadmap — including multi-tenant “remote MCP”, and potential use cases like AI-driven data migration between systems such as Sitecore and Optimizely.

Playing with MCP: An Experimental Server for Optimizely CMS

I’ve been tinkering with something experimental: an MCP server for Optimizely CMS.

MCP, or Model Context Protocol, is a way for AI tools to talk to external systems. It’s not a REST API, and it’s not GraphQL either. Think of it more like a common “protocol wrapper” that lets AI assistants discover what a server can do — what data it exposes, what actions it supports — and then call those capabilities in a structured way.

Here’s the catch: MCP isn’t really built to run over HTTP like most APIs we’re used to. Right now it’s more about local connections (stdio, sockets). That’s fine for experiments, but it does mean it’s not drop-in ready for cloud deployments.

That said, I think if MCP evolves to run reliably over HTTP, we’ll start to see some very cool “public MCP” layers: imagine subscribing to a SaaS tool, dropping your API key into your AI desktop, and instantly being able to query and manage your data. We’re not there yet, but it’s worth exploring.

What I built

This MCP server sits in front of Optimizely CMS and combines two APIs:

  • Graph API – to query content and inspect content types.
  • Content Management API – to create, update, publish, and manage content.

So instead of juggling multiple APIs, the MCP server gives you a single entry point that AI assistants can plug into.
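
For a sense of what that entry point looks like, here’s a minimal sketch using the MCP TypeScript SDK, an illustration only, with a stand-in for the real Graph call:

// Illustrative MCP server exposing one tool (not the real server's code)
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for the real Optimizely Graph call
async function fetchFromGraph(id: string): Promise<unknown> {
  return { id }; // the real server would query the Graph API here
}

const server = new McpServer({ name: "optimizely-cms", version: "0.1.0" });

// A "get-content" capability that an AI assistant can discover and call
server.tool("get-content", { id: z.string() }, async ({ id }) => {
  const result = await fetchFromGraph(id);
  return { content: [{ type: "text", text: JSON.stringify(result) }] };
});

// Local stdio connection, as described above
await server.connect(new StdioServerTransport());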

Out of the box, the server supports:

  • Fetching content with different auth methods (keys, OAuth, HMAC, etc.)
  • Managing versions and workflows
  • Exploring content type schemas
  • Handling multiple languages
  • Adding a caching layer for performance

It’s built in TypeScript with runtime validation, so it’s fairly solid — but for now it’s still experimental.

Potential Use Cases

For developers

  • Connect the MCP server to something like Cursor or VS Code.
  • Debugging headless builds becomes easier:
    “What’s the Graph data for this URL?”
    “Show me the schema for this content type.”

For marketers

  • Imagine asking your AI assistant questions like:
    “How many published pages use this content type?”
    “Which pieces of content are waiting for approval?”
    “Show me the graph data behind this article.”

Where this could go

Right now, this is a side experiment. It’s not production-ready, and MCP itself is still finding its footing. But I see a possible future where MCP servers can be hosted over HTTP, sitting in front of APIs like Optimizely’s Graph and Content Management APIs.

That would open up some powerful use cases: SaaS vendors offering public MCP endpoints, customers connecting their own AI desktops/agents with nothing more than an API key, and assistants becoming first-class clients of CMS platforms.

Try it out

The code is open source here: https://github.com/first3things/optimizely-cms-mcp

Clone it, run it locally, and see what you can do. If you’ve got thoughts on where MCP should head — especially around HTTP exposure and public access — I’d love to hear them.

How to Reset or Complete a Stuck Optimizely DXP Integration Deployment using the Optimizely Deployment API

If you’ve deployed to Optimizely’s Integration environment without specifying the DirectDeploy parameter in your PowerShell command, your deployment might get stuck in an “AwaitingVerification” state.

Without a record of the deployment ID, the deployment pipeline can become blocked. Unlike Pre-Production or Production environments, the Optimizely PaaS portal does not offer a UI option to reset or complete Integration deployments.

Here’s how to resolve this issue quickly using PowerShell:

1. Connect to the Optimizely Deployment API

Run the following command, replacing placeholders with your credentials:

Connect-EpiCloud -ClientKey <ClientKey> -ClientSecret <ClientSecret> -ProjectId <ProjectId>

2. Find the Deployment ID

Identify deployments stuck in AwaitingVerification:

Get-EpiDeployment | Where-Object { $_.Status -notin @("Succeeded", "Failed", "Redeployed") }

You’ll see output similar to:

id               : 36d74dbe-75b5-4037-b2f2-426b6c906982
projectId        : ce447ee1-de47-450d-801e-aacae4fdc24e
status           : AwaitingVerification
startTime        : 2025-03-27T12:49:35.451Z
percentComplete  : 100

3. Reset or Complete the Deployment

Choose one:

To Reset the deployment:

Reset-EpiDeployment -Id <DeploymentId> -Wait

To Complete the deployment:

Complete-EpiDeployment -Id <DeploymentId> -Wait

Replace <DeploymentId> with the actual deployment ID from the previous step.

Your Integration environment should now be unblocked and ready for new deployments.

Optimizely SaaS CMS: Balancing TCO and ROI in Your CMS Hosting Decision

With Optimizely SaaS CMS coming soon, I’ve been talking with companies about the tricky business of picking the right core system software.

These chats really got me thinking, so I decided to jot down some thoughts to tackle the tricky topics of Total Cost of Ownership (TCO) and Return on Investment (ROI) when it comes to CMS and software selection in general.

Full article here: https://www.linkedin.com/pulse/optimizely-saas-cms-balancing-tco-roi-your-hosting-johnny-mullaney-cuh0e/

Opticon 2023: The Launch of “Optimizely One”

After another excellent Opticon event, I took time to distil my thoughts and write up some key takeaways which can be summarised in two words:

Choice and Instructions!

Check out my LinkedIn article here:

https://www.linkedin.com/pulse/opticon-2023the-launch-optimizely-one-johnny-mullaney-jmdle

Overriding Optimizely CMS Approval Sequences

Optimizely CMS Approval Sequences are an important tool for organisations that use the CMS to translate, review and quality-check content before publishing. A typical Approval Sequence configuration involves multiple stages of approval, each requiring actions such as clicking to Approve/Decline and commenting.

The Problem

I recently worked with a client who needed to bypass the administration overhead that Approval Sequences add, but only for certain CMS users, whose role within the organisation was to coordinate the large bulk publishing operations sometimes associated with releases.

While there is the option for administrators to use the “Approve Entire Approval Sequence” button, the consensus for this use case was that using this still added overhead for the editors.

Specifically, we needed to force the approval of Approval Sequences for these editorial users, reducing the number of interactions required. The key requirement was to streamline operations and minimise the effort involved in the publishing process for these CMS Editors.

The Solution

We settled on a solution to customise the Approval Engine using the IApprovalEngine interface:

When a CMS Editor clicks ‘Ready for Review’, use the Approval Engine events to:

  • Check if this user is part of the relevant User Group to override the approval sequence
  • If so, use the Approval Engine to force the approval of this sequence

Additionally, we decided to give this Editorial team the option of reducing their clicks even further by publishing content directly after clicking ‘Ready for Review’. As this isn’t necessarily recommended, but may be useful for this team in some scenarios, we put this feature behind a Site Settings feature toggle.

Technical Implementation

The following code demonstrates setting up an InitializationModule to override the approval sequence and publish. I’ve hard-coded the admin email address in this example, so you should swap that out for your desired business logic.

[InitializableModule]
[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class CustomApprovalsInitialization : IInitializableModule
{
    private IApprovalEngineEvents _approvalEngineEvents;
    private IContentEvents _contentEvents;
    private ICustomerService _customerService;
    private IContentRepository _contentRepository;
    private ISettingsService _settingsService;

    public void Initialize(InitializationEngine context)
    {
        _customerService = ServiceLocator.Current.GetInstance<ICustomerService>();
        _settingsService = ServiceLocator.Current.GetInstance<ISettingsService>();
        _contentRepository = ServiceLocator.Current.GetInstance<IContentRepository>();

        _contentEvents = ServiceLocator.Current.GetInstance<IContentEvents>();
        _contentEvents.RequestedApproval += ContentEvents_RequestedApproval;

        _approvalEngineEvents = context.Locate.Advanced.GetInstance<IApprovalEngineEvents>();
        _approvalEngineEvents.StepStarted += ApprovalEngineEvents_StepStarted;
    }

    private async Task ApprovalEngineEvents_StepStarted(object sender, ApprovalStepEventArgs e)
    {
        // Hard-coded email for demonstration; replace with your own user/group check
        var user = await _customerService.UserManager().FindByEmailAsync("admin@example.com");
        if (user != null)
        {
            // Force-approve the sequence on behalf of this user
            var approvalEngine = ServiceLocator.Current.GetInstance<IApprovalEngine>();
            await approvalEngine.ForceApproveAsync(e.ApprovalID, "admin@example.com", "Auto-approved by Admin.");
        }
    }

    private void ContentEvents_RequestedApproval(object sender, ContentEventArgs e)
    {
        // Optional auto-publish, controlled by the site settings feature toggle
        var settingsPage = _settingsService.GetSiteSettings<ReferencePageSettings>();
        if (!settingsPage.ApprovalSequenceOverrideAutoPublish)
        {
            return;
        }

        var user = _customerService.UserManager().FindByEmailAsync("admin@example.com").GetAwaiter().GetResult();
        if (user != null)
        {
            var content = _contentRepository.Get<PageData>(e.ContentLink).CreateWritableClone();
            _contentRepository.Publish(content as IContent);
        }
    }

    public void Uninitialize(InitializationEngine context)
    {
        // Detach event handlers so the module can be cleanly reinitialized
        if (_contentEvents != null)
        {
            _contentEvents.RequestedApproval -= ContentEvents_RequestedApproval;
        }

        if (_approvalEngineEvents != null)
        {
            _approvalEngineEvents.StepStarted -= ApprovalEngineEvents_StepStarted;
        }
    }
}

High Value Data: Reshaping Brands and Driving E-Commerce Revolution

I’ve just published a new article on how high-value data is reshaping brands and driving the e-commerce revolution to the FTT blog.

In this piece, I delve into the importance of customer data as the fuel that propels your e-commerce revolution, discussing how companies like Netflix and Spotify leverage comprehensive customer views and tailored content delivery for success, and how this is just the tip of the iceberg.

I also share insights on how to own your data using tools like Optimizely’s Data Platform (ODP) and how to formalize your data strategy to guide your company’s direction and ambitions as a data-driven organization.

At FTT, we believe in automating the execution of strategic thinking with test and learn cycles to grow quicker and liberate employees to focus on ever-higher value initiatives.

Check out the full article here and let me know your thoughts.

High Value Data: Reshaping Brands and Driving E-Commerce Revolution (ftt.ai)

Optimizely Data Platform (ODP) -> Commerce Cloud: Product Attribute Connector

The most powerful e-commerce segmentation is possible when your data platform knows everything about your products. Then you can segment, personalise, experiment and sell to customers based on the types of products they’re interested in.

This post explains how you can easily extend your product catalog data in ODP with the First Three Things ODP Product Attribute Add-On.

Optimizely’s ODP Commerce Cloud Integration

Optimizely’s Commerce Cloud integration is a no-code solution that uses the Service API to sync Contact, Product and Order data to ODP.

https://docs.developers.optimizely.com/digital-experience-platform/v1.5.0-optimizely-data-platform/docs/import-data-from-optimizely-commerce-cloud

In terms of Product data, the integration sends universal catalog data such as product & variant codes, product/variant relationships, product name, image & price.

However, every catalog is different, with each company having its own metadata that provides additional context around each product. In an Optimizely product catalog, this data will be managed as properties attached to Products and Variants.

Sending this data to ODP allows you to create much more powerful customer segments and this is what our ODP Product Attribute Connector is responsible for.

Installation

Code is open source and available at: https://github.com/first3things/ODP-ProductAttribute-AddOn

Install the package directly from the Optimizely NuGet repository.

dotnet add package First3Things.ODPProductAttributeConnector

Add the following to your Startup class. This extension method will register the necessary dependencies.

using First3Things.ODPProductAttributeConnector.DependencyInjection;

services.AddOdpProductAttributeConnector(_configuration);

Add your API credentials to the appsettings.json file:

  "ODPConnector": {
    "apiHost": "<-- host name e.g. api.zaius.com -->",
    "apiKey": "<-- your public api key retrieved in the admin area -->"
  }
}

Configuration

Create fields in ODP for the product attributes you want to sync by logging in as an administrator and navigating to:

Settings -> (Data Management) Objects & Fields -> Products

Update your Content Types

Add the OdpProductSync attribute to any Product or Variant content type properties you want to sync.

Set the ODP Field Name within the attribute.

[OdpProductSync("brand")]
[CultureSpecific]
[BackingType(typeof(PropertyString))]
[Display(Name = "Brand", GroupName = SystemTabNames.Content, Order = 15)]
public virtual string Brand { get; set; }

Scheduled Job

An “ODP Product Attribute Connector” scheduled job will be installed in the CMS Administrative area.

This job uses reflection to retrieve your commerce content types and send property values tagged with the OdpProductSync attribute to ODP.

Additional Notes

Retrieve Catalog Business Logic

Multiple catalogs are not supported out of the box.

The business logic executed by the Scheduled Job picks the first Catalog.

If you need to override this logic, inject a new implementation of

ICatalogService.GetCatalogRoot()

Useful Optimizely Documentation

Product Batch Request API: https://docs.developers.optimizely.com/digital-experience-platform/v1.5.0-optimizely-data-platform/reference/batch-requests

Recommended Product Fields: https://docs.developers.optimizely.com/digital-experience-platform/v1.5.0-optimizely-data-platform/docs/usecase-products#recommended-fields

A Marketing Manager’s Guide to a Modern DXP Solution Architecture

As a Solution Architect, I nerd out on new technologies, plugging systems together, good code, solving problems and the general process of designing software.

And that is why I have been writing technical blog posts for a few years. 

This time I decided to write an article that was aimed at a non-technical audience. I wanted to give Marketers a primer on their role in DXP Solution Architecture and delicately explain why Optimizely can be a good fit.

I underestimated the challenge. I soon realised my new audience’s expectations for content structure, tone of voice and level of detail were very different. The process of crafting something of value took more time than expected as I iterated over draft after draft.

Finally, I have an article I am happy to put out into the world.

https://www.linkedin.com/pulse/marketing-managers-guide-solution-architecture-modern-johnny-mullaney