Experience Sitecore ! | More than 300 articles about the best DXP by Martin Miles


Redirects in Sitecore XM Cloud - know your options!

Sitecore XM Cloud supports multiple redirect strategies, from content-authored redirects via Sitecore items to headless app configuration and middleware logic.

In XM Cloud implementations, there are three typical patterns for performing redirects:

  • they can be defined at the CMS as Redirect Items or Redirect Maps, or
  • handled by Next.js middleware at runtime, or
  • baked into the front-end hosting, for example, in next.config.js or platform rules.

Each approach has tradeoffs in flexibility, performance, and author control. In addition, hosting platforms like Vercel and Netlify also allow static redirects via their config files or APIs, but that is outside the scope of XM Cloud, which already provides built-in mechanisms so marketers can manage redirects without code deployments.


Content-Authored Redirects: Items vs. Maps

XM Cloud’s built-in Redirect Item and Redirect Map features let content authors define redirects in the Content Editor.

A Redirect Item is created under a page node (right-click, Insert → Redirect) and simply points to a target URL. When a request hits that item’s path, the configured redirect is issued.

Redirect Maps live in the site’s Redirects settings and can contain many rules; each map item can define multiple source-to-target path mappings, with support for 301/302 and even server transfer redirects. Maps allow regex patterns and grouping, so a handful of regex rules can replace dozens of single-item redirects. For example, a mapping rule like ^/products/(.*)/(.*)$ -> /groceries/$1/$2 applies to many URLs, whereas a Redirect Item only covers one exact path.

Redirect Items are simplest for one-off cases, but managing many items can be cumbersome. Redirect Maps centralize rules (groups/folders, regex) for better visibility and maintainability. In practice, use Redirect Items for a few simple vanity URLs or moved pages, and Redirect Maps when you have multiple or patterned redirects. Always plan the scope: XM Cloud documentation even recommends limiting a map to ~200 entries for performance and manageability.

In either case, after creating or changing redirects, you must publish the site item (root) to push the updates to Experience Edge. XM Cloud caches redirect data at the edge with a typical ~4-hour TTL, so republishing the site clears that cache and makes new rules active immediately.

 

Developer-Controlled Redirects (via Head App)

For larger redirect sets or developer-controlled routing, it’s common to handle redirects on the front-end app instead of via XM Cloud. Next.js lets you define static redirects in next.config.js, which Vercel and other hosts apply at build time. For example:

module.exports = {
  async redirects() {
    return [
      { source: '/old-page', destination: '/new-page', permanent: true },
      { source: '/old-blog/:slug*', destination: '/blog/:slug*', permanent: true },
      // ... up to 1024 entries
    ]
  }
};

This redirects() array can include regex and wildcard matching. Because these rules are generated at build time and handled at the CDN edge, they execute before any JavaScript or middleware, which means faster, low-overhead redirects. In fact, Next.js applies next.config.js redirects at the edge of the hosting network, such as Vercel's, so users are redirected without even involving server-side code. The trade-off is that such redirects only update when you rebuild the site – they cannot be changed dynamically by content authors.

Most static hosting providers also offer redirect files. For example, Netlify processes a plain-text _redirects file, and Vercel imposes a 1024-entry limit on static redirects. The Sitecore Accelerate docs note that beyond ~1024 redirects you should migrate to an Edge Function or a JSON-driven middleware approach. Indeed, if you hit the limit, you can place the rules in a JSON file (.sitecore/redirects.json) and add a custom middleware plugin to read it, keeping redirect management outside XM Cloud entirely.
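The JSON-driven approach can be sketched in a few lines. This is a minimal TypeScript illustration, assuming a rule shape of your own design stored in a .sitecore/redirects.json-style file; it is not an official XM Cloud contract:

```typescript
// A hypothetical rule shape for a JSON-driven redirect list.
type RedirectRule = {
  source: string;      // regex pattern for the incoming path
  destination: string; // target, may use $1, $2 capture groups
  permanent: boolean;  // 301 vs. 302
};

// Match an incoming path against the rule list; the middleware would call
// this and issue NextResponse.redirect() on a hit.
function matchRedirect(
  path: string,
  rules: RedirectRule[]
): { destination: string; status: number } | undefined {
  for (const rule of rules) {
    const m = path.match(new RegExp(`^${rule.source}$`));
    if (m) {
      // Substitute capture groups ($1, $2, ...) into the destination.
      const destination = rule.destination.replace(
        /\$(\d+)/g,
        (_, i) => m[Number(i)] ?? ''
      );
      return { destination, status: rule.permanent ? 301 : 302 };
    }
  }
  return undefined; // no rule matched: let the request continue
}

// Example rules, mirroring the regex-map example from earlier in the post:
const rules: RedirectRule[] = [
  { source: '/products/(.*)/(.*)', destination: '/groceries/$1/$2', permanent: true },
];
```

Keeping the matching logic in plain data like this makes the rule list easy to generate from any source, including a CMS export.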

 

Middleware-Based Redirects (Dynamic)

In a pure headless scenario, XM Cloud content-managed redirects are usually handled by Next.js middleware at runtime. The standard XM Cloud starter kit includes a Redirects middleware plugin: on each request, it queries Experience Edge for any matching Redirect Item or Map and issues a redirect if found. This means redirects are always up-to-date without rebuilding the site. However, it also means every request incurs a check and often a GraphQL call against Edge.

User Request → Next.js Middleware: 
   - Load Redirects via GraphQL (from XM Cloud)
   - If a rule matches the path, return redirect response; otherwise continue

Because the middleware runs on every request unless specifically filtered out, having many redirects can hurt performance. Sitecore’s docs warn that “the redirect middleware needs to process the list by hitting Experience Edge, which can cause performance issues” when redirects are numerous. The middleware can also be tuned: for instance, you can narrow its scope by adjusting the matcher so it only runs on certain paths, not on APIs, static assets, etc. If a site isn’t using content-managed redirects at all, you can disable or remove the redirect middleware plugin entirely.
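Narrowing the scope can look like the sketch below, placed in middleware.ts; the exclusion patterns are assumptions you should adapt to your own API routes and assets:

```typescript
// middleware.ts – narrowing the middleware scope (a sketch; adapt the
// exclusion list to your own app).
export const config = {
  matcher: [
    // Run on every path EXCEPT Next.js internals, API routes, and common
    // static files, so the redirect lookup never fires where it cannot apply.
    '/((?!api/|_next/|favicon.ico|sitemap.xml|robots.txt).*)',
  ],
};
```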

In short, dynamic middleware redirects offer real-time flexibility: authors change redirects in the CMS and they take effect as soon as they publish, but at the cost of per-request overhead. Dynamic middleware is useful in scenarios like user-specific or geolocation-based redirects. If you anticipate a large number of static redirects with no runtime logic, it’s generally better to move them into a build-time config or use regex grouping to reduce the count.

 

Hybrid Build-Time Redirects

An emerging approach combines the strengths of both worlds: content-managed redirects in XM Cloud, but applied via a build/redeploy, so that no middleware query is needed at request time. In this hybrid strategy, editors still create Redirect Items/Maps in XM Cloud, but a hook triggers a site rebuild. For example, you can use an Experience Edge webhook on the site to notify a custom service when redirects change. The service or another automation tool listens for that update, then calls the Vercel REST API to redeploy the site.

During the build, a custom script runs. This script invokes the Edge GraphQL API to retrieve all Redirect Maps and Items, transforms them into Next.js redirect objects, and writes them into a JSON or directly into next.config.js format. The build output then includes a static list of redirects (for example, in .sitecore/redirects.json) that Next.js will apply at the CDN edge. As a result, end users see the updated redirects immediately after the deploy, and the runtime Next.js middleware is bypassed entirely, improving performance.
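The transform step of such a build script might be sketched as below. The EdgeRedirect shape and the REDIRECT_301 value are illustrative assumptions, not the exact Experience Edge schema:

```typescript
// Hypothetical shape of a redirect entry fetched from Experience Edge.
type EdgeRedirect = { pattern: string; target: string; redirectType: string };

// Map a fetched entry into the object shape Next.js expects in redirects().
function toNextRedirect(r: EdgeRedirect) {
  return {
    source: r.pattern,
    destination: r.target,
    permanent: r.redirectType === 'REDIRECT_301',
  };
}

// At build time you would fetch the entries via the Edge GraphQL API, e.g.
// (illustrative, not a real query):
//   const entries: EdgeRedirect[] = await fetchRedirectsFromEdge();
// then write entries.map(toNextRedirect) into .sitecore/redirects.json for
// Next.js to pick up.
```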

This hybrid pattern requires some initial setup, but it keeps redirect management user-friendly: editors still work in XM Cloud as usual, while developers configure the rebuild pipeline. Key steps include: configuring the Edge webhook via the XM Cloud Admin API, setting up a middleware or serverless listener to call the Vercel "Redeploy" endpoint, and adding a build-time function like generateRedirects or similar in the Next.js app that populates the redirect list. The result is a form of hybrid static redirects that are always in sync with the CMS without incurring request-time lookup costs.
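The listener step can be very small. The sketch below assumes a Vercel Deploy Hook URL (a plain POST endpoint that queues a build); the function name and shape are hypothetical, and the fetch implementation is passed in so the logic stays testable:

```typescript
// Minimal shape of a fetch-like function, so a stub can be injected in tests.
type FetchLike = (url: string, init?: { method?: string }) => Promise<{ ok: boolean }>;

// Trigger a redeploy by POSTing to a Vercel Deploy Hook URL.
// Returns true when Vercel accepted the request.
async function triggerRedeploy(
  deployHookUrl: string,
  fetchImpl: FetchLike
): Promise<boolean> {
  const res = await fetchImpl(deployHookUrl, { method: 'POST' });
  return res.ok;
}
```

In a real serverless function you would pass the platform's global fetch and read the hook URL from an environment variable.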

 

Performance Considerations & Best Practices

  • Publish the Site: Remember to publish the headless site item whenever redirect items or maps change. XM Cloud’s Edge caches redirect data (about 4 hours by default), so failing to republish can cause old redirect rules to linger. In practice, always include the site node in your publish steps after editing redirects.

  • Use Regex Judiciously: Redirect Maps support full regex, which is powerful but costly to evaluate. Prefer direct path matches when possible, and group common patterns into a single regex rule. This reduces the total count and keeps matching fast.

  • Limit Redirect Count: If you have hundreds of redirects, especially distinct ones, consider moving them out of the CMS and into the front-end config. Next.js has a 1024-entry limit on static redirects, and thousands of CMS-managed redirects can strain middleware performance. The accelerate docs even suggest: “if you have a large number of redirects, you need to use the hosting provider features”, for example, Next.js config or Edge Functions.

  • Optimize Middleware: If using middleware, narrow its scope. In Next.js 13+, use the matcher option in middleware.ts to skip API routes, static assets (/_next/), or health checks. Also, XM Cloud starter kits allow disabling the redirect plugin if not needed. Excessive link prefetching or personalization features can inadvertently invoke the middleware multiple times, so configure prefetch settings appropriately.

  • Head-App Redirects First: As a rule of thumb, use static head-app redirects whenever feasible. These execute at the CDN edge and avoid server work. Reserve middleware redirects for cases where you truly need runtime logic - either user-based or geo-based, or maybe A/B testing.

  • Test Redirect Order: In Next.js, the order of plugins or configuration can matter. If a redirect isn’t firing, check that the Redirects plugin/middleware has higher priority than others, for example, than any catch-all pages.

  • Environment Nuances: Be aware of hosting specifics. For example, Netlify automatically sorts query parameters, which can affect regex matches. And ensure your targetHostname setting in XM Cloud site definitions includes your domains, so redirects use the correct host.

By combining these techniques – CMS-managed maps for author edits, static redirects in the head app for scale, and judicious middleware use for dynamic cases – you can build a robust redirect strategy in XM Cloud. The key is to balance flexibility (author editing, regex) with performance (pre-calculated redirects, minimizing per-request work). With careful planning and the new hybrid approach, XM Cloud sites can seamlessly redirect users and preserve SEO even as content or site structure changes.


SQL-level access to database in XM Cloud

Could you ever guess that you can still access databases in XM Cloud directly at the SQL level? Yes, you can, and below I show exactly how.

Disclaimer
The step‑by‑step SQL‑level techniques described in this post are provided strictly for educational purposes. Directly querying or modifying the underlying database of Sitecore XM Cloud is strongly discouraged, as it bypasses critical application‑level safeguards, voids support agreements, and may lead to data corruption, security vulnerabilities, or service outages. Always interact with your XM Cloud instance through the officially supported APIs, Sitecore CLI, or designated administration tools. Proceeding with direct database access is done entirely at your own risk, and the author and any affiliated parties disclaim all liability for any loss, damage, or disruption resulting from such actions.

The key trick here relies on Sitecore PowerShell Extensions: if you have them enabled on your instance, you're good to go! This applies to both cloud and locally containerized databases. SPE provides a useful function, Invoke-SqlCommand, that makes SQL-level connections and accepts two parameters: $connection and $query:

Invoke-SqlCommand -Connection $connection -Query $query 

But how exactly do we get a connection? Luckily, since we operate from within the ASP.NET application, we can use its own API to get it, without even needing to look up the connection strings config:

$connection = [Sitecore.Configuration.Settings]::GetConnectionString("master")

The other thing is that we must know the physical name of the database to operate against. Once again, we can work that out without even looking into the configs:

$builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder $connection
$dbName = $builder.InitialCatalog

Please note that locally your database name is always Sitecore.Master, while on a cloud instance it is a very long name combined from the org, project, and environment names. It therefore varies from environment to environment, so you have to calculate it as shown above.

Next, let's build a query. To start with, we can do a basic select for the available items. Having the physical database name, we can pass it into a query as below:

$sql = @"
USE [{0}]
SELECT ID, [Name], [TemplateID], Created from  [dbo].[Items]
"@

With that in mind, let's combine everything together into a single script:

$sql = @"
USE [{0}]
SELECT ID, [Name], [TemplateID], Created from  [dbo].[Items]
"@

Import-Function Invoke-SqlCommand

Write-Verbose "Running a SQL query against the master database."
$connection = [Sitecore.Configuration.Settings]::GetConnectionString("master")
$builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder $connection
$dbName = $builder.InitialCatalog
$query = [string]::Format($sql, $dbName)

Invoke-SqlCommand -Connection $connection -Query $query 

Running it brings the desired result:


But what is especially amazing is that you also have write access to the master database! With that in mind, I created a query that creates a new item under a specific location and also populates some of its fields. It looks slightly more complicated, but again, nothing extraordinary:

$sql = @"
USE [Sitecore.Master];

-- Variables
DECLARE @NewItemId UNIQUEIDENTIFIER = NEWID();
DECLARE @ParentId UNIQUEIDENTIFIER = '{110D559F-DEA5-42EA-9C1C-8A5DF7E70EF9}';
DECLARE @TemplateId UNIQUEIDENTIFIER = '{76036F5E-CBCE-46D1-AF0A-4143F9B557AA}'; -- Sample Item template
DECLARE @TextFieldId UNIQUEIDENTIFIER = '{A60ACD61-A6DB-4182-8329-C957982CEC74}'; -- Text field ID
DECLARE @Now DATETIME = GETUTCDATE();
DECLARE @ItemName NVARCHAR(255) = 'SQL-inserted item';
DECLARE @Language NVARCHAR(10) = 'en';
DECLARE @Version INT = 1;
DECLARE @TextValue NVARCHAR(MAX) = 'This item was created entirely with SQL INSERT script';

-- 1. Insert into Items
INSERT INTO [dbo].[Items] 
    ([ID], [Name], [TemplateID], [ParentID], [MasterID], [Created], [Updated])
VALUES 
    (@NewItemId, @ItemName, @TemplateId, @ParentId, '00000000-0000-0000-0000-000000000000', @Now, @Now);

-- 2. Insert into VersionedFields (CORRECT field ID now)
INSERT INTO [dbo].[VersionedFields]
    ([ItemId], [FieldId], [Language], [Version], [Value], [Created], [Updated])
VALUES
    (@NewItemId, @TextFieldId, @Language, @Version, @TextValue, @Now, @Now);
"@

Import-Function Invoke-SqlCommand

Write-Verbose "Inserting a new item directly into the master database."
$connection = [Sitecore.Configuration.Settings]::GetConnectionString("master")
$builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder $connection
$dbName = $builder.InitialCatalog
#$query = [string]::Format($sql, $dbName)
$query = $sql

Invoke-SqlCommand -Connection $connection -Query $query 

Upon execution, I saw no changes in the Content Tree, which likely happens due to some CM caches in place. For the sake of clarity, I restarted the CM and then saw the newly created item with all the fields, as expected:

Note that the $name token from standard values does not expand here: that expansion is done by the Sitecore API during item-creation logic, while we inserted directly into the database, bypassing it.

That approach is really awesome, despite feeling so hacky! Imagine combining it with SPE Remoting and/or with AI – it brings unlimited potential.

But once again, it all goes for educational purposes exclusively.

All you need to know about transforming Web.config on Sitecore XM Cloud

In Sitecore XM Cloud, one cannot modify web.config on the CM instance at runtime. By design, the CM webroot in XM Cloud containers is writable only by the deployment process, so "live" edits to web.config aren’t possible without a redeploy. You can patch anything under App_Config/Include at runtime, for example by using Sitecore PowerShell Extensions, but the main web.config file sits outside that folder, right at the web root, and requires stricter permissions. Only the deploy process can modify it.

In this blog post, I am going to share all the techniques you can undertake to get your changes reflected within web.config on your desired environment.

Why?

Firstly, why at all would one need to modify web.config on the XM Cloud CM?

Transforming the CM instance’s web.config is essential because it’s the only way to inject critical, environment-specific settings, like Content Security Policy headers, custom session timeouts, IIS rewrite rules, or extra connection strings right into a locked-down XM Cloud deployment. Since the cloud platform prohibits direct edits to web.config at runtime, using XDT transforms ensures that everything from security hardening (CSP, HSTS) to feature flags or environment variables is baked into the build pipeline in a controlled, auditable way. This same transform can then be reapplied locally so your local CM containers mirror exactly what runs in production, reducing drift and making deployments predictable and secure. But..

How?

Since the CM technically executes on the ASP.NET Framework runtime, a good old technique called XDT transformation, known from the classic ASP.NET days, is still with us. If you have never done it before, producing a transformation may appear slightly complicated at first, but reading an XDT file is very intuitive. Here is an example:

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <customErrors mode="Off" xdt:Transform="SetAttributes"/>
	<!--<customErrors mode="Off" xdt:Transform="SetAttributes" xdt:Locator="Condition(@mode!='Off')"/>-->
  </system.web>
  <appSettings>
    <add key="Some_New_Key" value="value_to_insert" xdt:Transform="InsertIfMissing" xdt:Locator="Match(key)" />
  </appSettings>
  <location path="sitecore">
	<system.webServer>
		<httpProtocol>
			<customHeaders>
				<add name="Content-Security-Policy" value="default-src 'self' 'unsafe-inline' 'unsafe-eval' https://apps.sitecore.net; img-src 'self' data: https://demo.sitecoresandbox.cloud/ https://s.gravatar.com https://*.wp.com/cdn.auth0.com/avatars; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' 'unsafe-inline' https://fonts.gstatic.com; block-all-mixed-content; child-src 'self' https://demo.sitecoresandbox.cloud/; connect-src 'self' https://demo.sitecoresandbox.cloud/; media-src https://demo.sitecoresandbox.cloud/" xdt:Transform="Replace" xdt:Locator="Match(name)"/>
			</customHeaders>
		</httpProtocol>
	</system.webServer>
  </location>
</configuration>

Cloud build-time XDT transform: officially recommended approach

For the cloud deployments, you can leverage the transforms section of xmcloud.build.json to apply an XDT patch at build time:

"transforms": [
  {
    "xdtPath": "/xdts/web.config.xdt",
    "targetPath": "/web.config"
  }
]

The deploy process will do the rest; it has everything required to apply the specified transform against the provided web.config. Please note that xdtPath is relative to the CM customization .NET Framework project, and a cloud redeployment is mandatory for the changes to take effect.

This build-time transform is cloud-compatible with no custom images needed, and centralizes the change in a single file. It’s also the officially documented method for altering web.config in XM Cloud.

Pros: Officially supported, one versioned transform file, applied automatically by the pipeline.

Cons: Changes take effect only on redeploy, which is typically normal in XM Cloud.

So far, so good. But the above officially recommended approach only works for cloud deployments. What should we do to transform configs on locally running Docker containers?

Local XDT Transformation

The first thing that probably comes to mind is creating a custom CM image derived from the official XM Cloud image provided by Sitecore. However, you are not allowed to deploy custom images for the CM, due to safety guardrails. Anyway, even if it were possible, this idea would generally be overkill. Instead, for local XM Cloud Docker development, we want to mirror the cloud approach, but without custom images.

There are two main options:

1. Dockerfile build-time transform

Luckily, Sitecore supplies a helpful Docker tools image for XM Cloud, named scr.sitecore.com/tools/sitecore-xmcloud-docker-tools-assets, that contains an Invoke-XdtTransform.ps1 PowerShell script to perform exactly what we need.

In the docker/build/cm directory, alongside the Dockerfile, create a new folder named xdts and copy the desired XDT files into it. Next, let's add the copy and execution instructions to docker/build/cm/Dockerfile itself:

COPY ./xdts C:\inetpub\wwwroot\xdts

RUN (Get-ChildItem -Path 'C:\\inetpub\\wwwroot\\xdts\\web*.xdt' -Recurse ) | `
    ForEach-Object { & 'C:\\tools\\scripts\\Invoke-XdtTransform.ps1' -Path 'C:\\inetpub\\wwwroot\\web.config' -XdtPath $_.FullName `
    -XdtDllPath 'C:\\tools\\bin\\Microsoft.Web.XmlTransform.dll'; };

This will process all the transforms and bake the result into the CM image. After rebuilding the container with docker-compose build, the web.config receives our changes.

Pros: Uses the same XDT logic as XM Cloud; no extra runtime steps.

Cons: Requires rebuilding the Docker image for every change, which means slower iterative development, and it is not cloud-compatible. As mentioned above, you can’t push a custom CM image to XM Cloud, and you generally don't need to, because xmcloud.build.json takes care of that anyway. The only real negative is that you violate the DRY principle, because the XDT file is duplicated.

2. Development-only runtime patches

As we know, one cannot create custom CM images, but nothing stops us from creating our own custom tools image! Why does that help? The CM image copies the tools folder from the Sitecore XM Cloud Docker Tools Assets image, and that folder contains the entrypoint for the CM image as well as out-of-the-box development-only XDT configuration transforms in a subfolder called dev-patches, holding some default config patches provided by Sitecore:

Therefore, our goal is to reuse this image by creating our own, adding our own XDT transform folder with the actual files inside. Because we expect to reuse the execution script as well, it is important to maintain the same folder/file structure as in the original image. In that case, our changes will get picked up and processed automatically.

Steps to achieve:

1. First of all, create a folder for the custom XDT transformation. Let's call it YouCustomXdtFolder and create a Web.config.xdt file inside it. The naming convention is important here: the transform is always called Web.config.xdt, and the folder name will later be used in step 5 to reference this transformation.

2. Create a custom tools image. Create a Dockerfile under a new docker/build/tools folder:

# escape=`

ARG BASE_IMAGE

FROM ${BASE_IMAGE}

COPY dev-patches\ \tools\dev-patches\

3. Build our tools image. In docker-compose.override.yml, let's add a new record under services:

  tools:
    image: ${REGISTRY}${COMPOSE_PROJECT_NAME}-sitecore-xmcloud-docker-tools-assets:${VERSION:-latest}
    build:
      context: ./docker/build/tools
      args:
        BASE_IMAGE: ${SITECORE_TOOLS_REGISTRY}sitecore-xmcloud-docker-tools-assets:${TOOLS_VERSION}
    scale: 0

4. Instruct CM to use the custom tools image rather than the default one:

services:
  cm:
    build:
      args:
        TOOLS_IMAGE: ${REGISTRY}${COMPOSE_PROJECT_NAME}-sitecore-xmcloud-docker-tools-assets:${VERSION:-latest}
    depends_on:
      - tools
    environment:
      SITECORE_DEVELOPMENT_PATCHES: ${SITECORE_DEVELOPMENT_PATCHES}

5. Append the name(s) of the custom transform folder(s) to the environment variable, for example:

SITECORE_DEVELOPMENT_PATCHES: DevEnvOn,CustomErrorsOff,DebugOn,DiagnosticsOff,InitMessagesOff,YouCustomXdtFolder

These five steps will do the entire magic on your local CM!

Pros: No need to rebuild the image – just restart the container when the XDT changes. Uses the official Docker entrypoint logic.

Cons: It only affects your local dev environment (you must still use xmcloud.build.json for cloud). It also requires maintaining the environment variable, but that can be version-controlled.

Volume Overwrites

Avoid this approach!

It is based on mounting or copying entire config folders like App_Config, or web.config itself, directly via the /docker/deploy mount-point folder, and is absolutely not recommended!

That approach is error-prone and hard to maintain: you risk overwriting updates, missing subtle changes, and potentially getting false positives that may later hurt you badly. Instead, use either the Dockerfile build-time transform or the development-only runtime patches approach wherever possible.

You can use the Volume Overwrites approach only for experimental, time-critical cases: a one-off for proving a concept, with no intent of keeping the changes. If your concept proves successful, consider using one of the above methods locally, along with reflecting the changes in xmcloud.build.json for the cloud deployment.

Summary Comparison of Approaches

| Approach | Cloud-Compatible | Local Support | Build/Rebuild Needed | Maintenance Effort | Notes |
|---|---|---|---|---|---|
| XM Cloud xmcloud.build.json XDT | ✅ (only way) | ❌ (cloud-only) | N/A (cloud build) | Low – one XDT file | Official method for XM Cloud builds. |
| Dockerfile XDT (build-time) | ❌ | ✅ | Yes (rebuild image) | Medium – Dockerfile edits | Works exactly like the cloud transform (same XDT logic). Not usable in the cloud. |
| Dev-only patches (runtime) | ❌ (dev only) | ✅ | No (just restart) | Low – simple patch & env | Uses SITECORE_DEVELOPMENT_PATCHES. Quick turnaround; no custom image. |
| Volume/config override | ❌ | ✅ | No (instant) | High – fragile/sync issues | Not recommended – mass copies of folders are “ugly” and error-prone. |


  • Build Speed: The dev-only approach avoids image rebuilds, which brings fast feedback, whereas the Dockerfile method requires rebuilding the CM image after every change and is therefore slower. XM Cloud transforms only run on deployment builds.

  • Maintenance: Keeping one XDT file in source control for both cloud and local is easiest. The Dockerfile method scatters transform logic into build scripts (higher maintenance). The dev-only patch centralizes it with environment configuration.

  • Error-Proneness: Transform files are declarative and less error-prone than manual file swaps. Volume mounts risk configuration drift. The built-in dev-patches and XM Cloud pipeline both use the official transform engine, which is robust.


Conclusion and recommendations

First and obvious: use XDT transforms wherever possible! Even on those rare occasions when you can modify web.config manually, it does not mean that you should!

For cloud deployments, always use xmcloud.build.json transforms to modify web.config. In local Docker, mirror the same transform logic. The preferred local method is to leverage the SITECORE_DEVELOPMENT_PATCHES mechanism: place the same Web.config.xdt under docker/build/tools/YourPatchName/ and add YourPatchName to the environment variable. This requires no Dockerfile hacking and no custom CM image, yet applies the transform at runtime using the same Microsoft.Web.XmlTransform script.

As a fallback or if needed, you can also inject a RUN Invoke-XdtTransform.ps1 step into the CM Dockerfile​, but this is more effort and not supported on XM Cloud. In all cases, avoid manual folder copies or replacing the entire config.

The transform-based approaches (build-time for cloud, and/or the dev-patch for local) strike the best balance of simplicity, performance, and future maintainability, and represent the current best practice.

Rendering Parameters vs. Rendering Variants - when should you use one or the other?

Do you know how to identify when you should create a rendering variant for a component, and when you can save effort by simply setting rendering parameters? Below is the answer, and it’s pretty straightforward.

To address this, let's first take a look at both options and identify their key differences.

Rendering Parameters give you additional control over a component/rendering by passing extra parameters into it. Key-value pairs are the most simplistic form, but of course you can use more advanced forms of input by leveraging rendering parameters templates; regardless of the chosen way, the result is the same: you pass some additional parameters into a component. Based on those params, a component can do certain things, for example show/hide specific blocks or apply more advanced styling tricks. Important to keep in mind: all the parameters are stored within the holding page. Remember that you should inherit the Base Rendering Parameters template to have full support in Pages Builder.
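Because rendering parameter values arrive as plain strings, a small mapping helper keeps the component logic clean. The helper and the parameter names below (ShowCta, Alignment) are illustrative assumptions, not part of the JSS SDK:

```typescript
// Hypothetical rendering parameters as they would arrive from the layout
// service: string values keyed by parameter name.
type PromoParams = { ShowCta?: string; Alignment?: string };

// Map the raw string params into typed options the component can consume.
function resolvePromoOptions(params: PromoParams) {
  return {
    // Rendering parameter values are plain strings, so booleans need parsing.
    showCta: params.ShowCta === '1' || params.ShowCta === 'true',
    // Fall back to a safe default when the author has not set the parameter.
    alignment: params.Alignment === 'right' ? 'right' : 'left',
  };
}
```

A component would call this with its params prop and branch on the typed result instead of sprinkling string comparisons through the markup.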

parameters


Rendering Variants (aka Headless Variants) feel more advanced compared to params. The principal difference is that a variant allows you to return substantially different HTML output and perform far more complicated manipulations of the HTML structure. Use common sense when choosing variants and leverage them in cases where the same component may present various look-and-feel options: for example, a promo block with two images, with a headless variant that has those same images positionally swapped. Achieving the same with rendering parameters would require bringing ugly presentation logic into the component's code, along with code duplication. Using variants achieves the same result far more elegantly. Note that variants originate from SXA, so when you bring a legacy JSS site to XM Cloud without converting it to SXA, this option isn't available.

variants



Both Rendering Variants and Rendering Parameters assume you use the same component receiving the same datasource items (or no datasource at all). You should never leverage datasource items to control the presentation or behavior of components: as their name implies, they are purposed exclusively for storing content.

Hope that clarifies the use cases and removes ambiguity.

Experience Edge: Know Your Limitations

Experience Edge brought us the much-desired Content-Delivery-as-a-Service approach and proved revolutionary in its vision. However, that flexibility comes at some expense: limitations each of us must be aware of. Understanding these is critical when building cloud-hosted Sitecore solutions. The key technical limits include API rate throttling, data payload/query size caps, content/media size limits, caching rules, and XM Cloud platform constraints. In this post, I will cover them all to help you plan better.

API Rate Limits

  • 80 requests/sec. The Experience Edge GraphQL endpoint is rate-limited. Each tenant’s delivery API allows at most 80 requests per second (visible as X-Rate-Limit-Limit: 80). Exceeding this returns HTTP 429 (Too Many Requests) until the 1-second window resets. In practice, Sitecore notes this is a "fair use" cap on uncached requests, so designing with CDN caching via SSG/ISR is essential to stay below the limit.

  • Rate-limit headers. Every Edge response includes headers such as X-Rate-Limit-Remaining (calls remaining in the current second) and X-Rate-Limit-Reset (the time until the window resets) to help clients throttle their calls. For example, if 5 requests are made in one second, the next response will show 75 remaining.
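A client can honour those headers with a check like the sketch below; the header names come from the limits above, while the throttling policy itself is an assumption:

```typescript
// Decide whether the client should pause before the next Edge call,
// based on the per-second rate-limit headers of the previous response.
function shouldThrottle(headers: Map<string, string>): boolean {
  const remaining = Number(headers.get('x-rate-limit-remaining') ?? '1');
  // When the per-second budget is exhausted, wait for the window to reset
  // instead of burning requests into HTTP 429 responses.
  return remaining <= 0;
}
```

In a real client you would combine this with the X-Rate-Limit-Reset value to compute the actual sleep duration.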

GraphQL Query & Payload Constraints

  • Max query results: A single GraphQL query returns at most 1,000 items/entities. To fetch more items, you must use cursor-based pagination. For example, any search or multi-item query is capped at 1000 results per call.

  • Query complexity limit: Edge enforces a complexity budget on GraphQL queries. Very large or deeply nested queries can fail if they exceed the complexity threshold (around 250 in older Sitecore docs). Developers should test complex queries and consider splitting them or trimming fields.

  • No persisted or mixed queries: Experience Edge does not support persisted queries. Also, due to a known schema issue, you cannot mix literal values and GraphQL variables in one query; you must use variables for everything if any are used. Not knowing this rule once cost me a decent amount of troubleshooting time.

  • Payload request size: Very large GraphQL request payloads can be problematic. By default, Next.js APIs have a 2 MB body size limit, which can cause 413 Payload Too Large errors when submitting huge queries. Sitecore suggests raising this (say, to ~5 MB) if necessary. In practice, keep queries reasonably small to avoid frontend limits.

  • Include/Exclude paths: When querying site routes (siteInfo.routes), the combined number of paths in includedPaths + excludedPaths is limited to 100. This caps how many different route filters you can specify in one request.
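Because of the 1,000-item cap, cursor pagination becomes a routine chore worth wrapping once. Here is a generic sketch; the Page shape mirrors the pageInfo { endCursor hasNext } fields of the Edge search schema as I recall them, so verify against your tenant's introspected schema:

```typescript
// Generic cursor-pagination loop; the transport is injected so the same
// helper works for any Edge query that returns results + pageInfo.
interface Page<T> {
  results: T[];
  endCursor: string | null; // pass as `after` on the next request
  hasNext: boolean;
}
type FetchPage<T> = (after: string | null) => Promise<Page<T>>;

async function fetchAll<T>(fetchPage: FetchPage<T>): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor); // each call is capped at 1000 results
    all.push(...page.results);
    cursor = page.hasNext ? page.endCursor : null;
  } while (cursor !== null);
  return all;
}
```

Keep in mind that every page fetched counts toward the 80 req/sec rate limit, so bulk pulls belong in build-time code, not in request handlers.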

Content & Delivery Constraints

  • Static snapshot only: Experience Edge provides a static snapshot of published content. It does not apply personalization, AB testing, or any dynamic/contextual logic at request time. Any logic based on user, session, or query string must be handled client-side. If you change a layout service extension or rendering configuration, you must republish the affected items for Edge to pick up the changes.

  • Security model: Edge does not enforce Sitecore item-level security. All published content on Edge is effectively public, so use publishing restrictions in the CMS to prevent sensitive items from being published.

  • Single content scope: An Edge tenant covers the entire XM Cloud tenant with a single content scope. You cannot scope queries, cache clears, or webhooks to a specific site. For example, when a cache clear or webhook trigger runs, it applies to the whole tenant’s content, not per site.

  • Sites per tenant: Edge supports up to 1,000 sites per tenant. A "site" in this context is a logical group defined by includedPaths/excludedPaths in siteInfo. You cannot define more than 1000 sites in one Edge environment. In practice, the largest deployment I have encountered was 300 sites per tenant, all served by a multisite add-on on a Next.js front-end.

  • Multi-site rules: You cannot have two different site definitions pointing to the same start item on Edge. Also, virtual folders and item aliases are not supported on Edge. Content must be published in standard items, and all routes are resolved case-sensitively.

  • Locales and device layers: Culture locale codes in queries are case-sensitive (e.g. it-IT, not it-it). In the layout data delivered by Edge, only the Default device layer is supported in Presentation data, so multi-device renderings beyond “Default” aren’t included.

Media Limits

  • Max media item size: Each media item file size published to Edge is limited to 50MB. Larger media will not be published to Edge; such large assets should be handled via other services like Sitecore Content Hub, or you can self-host them at any preferred blob storage of choice.

  • Media URL parameters: The built-in Media CDN on Edge supports only the parameters w, h, mw, and mh for image resizing. No other image transformations, like quality or format changes, are yet available out-of-the-box.

  • Case-sensitive URLs: Media item URLs on Edge are case-sensitive. For example, if the item path is Images/Banners/promo-banner.jpg, using lowercase images/banners/promo-banner.jpg will end up with 404. This quirk has caused issues in practice, so be careful with link manager settings that change casing.

  • Delivery: Media is delivered via the same CDN cache as content. There is no per-request payload aggregation for media; each media URL is fetched independently (subject to the CDN and TTL rules below).
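Since only w, h, mw, and mh survive on the media CDN, it can help to whitelist parameters when composing media URLs. A hypothetical helper (the function name and approach are mine, not part of any SDK):

```typescript
// Build an Edge media URL using only the transformation parameters the
// built-in media CDN supports (w, h, mw, mh). Any other key is dropped.
const SUPPORTED = new Set(['w', 'h', 'mw', 'mh']);

function edgeMediaUrl(base: string, params: Record<string, number>): string {
  const query = Object.entries(params)
    .filter(([key]) => SUPPORTED.has(key))
    .map(([key, value]) => `${key}=${value}`)
    .join('&');
  // Edge media paths are case-sensitive, so never lowercase `base` here.
  return query ? `${base}?${query}` : base;
}
```

Dropping unsupported keys at the call site makes the case-sensitivity and parameter limits explicit instead of silently producing URLs the CDN ignores.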

Caching Rules & TTL

  • Default TTL: By default Edge caches content and media for 4 hours each (see contentCacheTtl: "04:00:00" and mediaCacheTtl: "04:00:00"). This means cached responses may be served up to 4 hours old unless cleared.

  • Auto-clear: Content and media caches are auto-cleared by default (the contentCacheAutoClear and mediaCacheAutoClear settings are true). In practice, this means a publish or explicit clear will purge the CDN cache so users see new content.

  • Custom TTL: You can adjust the cache TTLs via the Edge Admin API. TTL values are strings in D.HH:MM:SS format. For example, setting contentCacheTtl to "720.00:00:00" yields a 720-day TTL, or "00:15:00" for 15 minutes. The default 4h can thus be increased or decreased per project needs.

  • Cache clearing: In addition to auto-clear on publish, Edge offers Admin API endpoints to clear the cache or delete content. For instance, you can clear all content or specific items via the API. To use these features, administrators must obtain appropriate Edge API credentials in XM Cloud Deploy.
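When scripting TTL changes against the Admin API, it is easy to get the D.HH:MM:SS strings wrong, so a small sanity-check helper is handy (illustrative code, not part of any Sitecore SDK):

```typescript
// Convert an Edge cache TTL string (D.HH:MM:SS, day part optional) into
// seconds, e.g. for validating values before sending them to the Admin API.
function ttlToSeconds(ttl: string): number {
  const [days, time] = ttl.includes('.') ? ttl.split('.') : ['0', ttl];
  const [h, m, s] = time.split(':').map(Number);
  return Number(days) * 86400 + h * 3600 + m * 60 + s;
}
```

The default "04:00:00" comes out as 14,400 seconds (4 hours), while "720.00:00:00" is 62,208,000 seconds (720 days).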

XM Cloud Platform Limits (Impacting Edge)

  • Environment mapping: In XM Cloud, the best practice is a 1:1 mapping of XM environments to Edge tenants. In other words, each XM Cloud environment typically has its own Experience Edge deployment. This means content and API keys are not shared across environments by default.

  • Search index: XM Cloud uses Solr, and there is no option to plug in different search technologies for Edge indexing. The connector will only work with Solr indices configured in XM Cloud.

  • Admin credentials: XM Cloud Deploy limits the number of Experience Edge Admin API credentials per project to 10. Attempts to create more will fail with an error. Project administrators should plan credential usage accordingly, for example, one per dev/CD pipeline.

  • Snapshot publishing: To enable incremental updates, XM Cloud provides snapshot publishing. This ensures that as soon as an item is published, Edge content is updated without a full site rebuild. If snapshot publishing is not enabled, any content changes on Edge require full republishing of affected sites. Developers must enable the Snapshot Publishing feature in XM Cloud to avoid hitting the rate limit on builds.

Based on all the above, let's also consider some deployment and publishing considerations that may affect your project:

  • Static build (SSG) preferred: Since every uncached request to Edge counts toward the rate limit, Sitecore recommends using Static Site Generation (SSG) and Incremental Static Regeneration (ISR) on the frontend. With SSG, pages are built at deploy time and served from the host cache, minimizing live queries to Edge.

  • Build-time pagination: Very large sites can take a long time to generate. The default sitemap plugin fetches all pages across all sites; projects should use included/excluded paths to limit build-time queries. Otherwise, large volumes of pages hitting Edge during a build can approach the rate limit.

  • Publish-time republishing: Because Edge content is static, certain backend changes require republishing. In particular, changes to clones, standard values, or rendering/template configurations won’t reflect on Edge until the dependent items are republished. Plan your release process to include republishes after such changes.
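As an illustration of the SSG/ISR advice above, a Next.js page can pair getStaticProps with a revalidate window so Edge is queried at most once per window per page. This is a minimal sketch; the fetcher is a placeholder for your actual layout/GraphQL call:

```typescript
// Minimal ISR sketch: the page is generated at build time and re-generated
// in the background at most once per `revalidate` window, so repeat visitors
// are served from the host cache instead of hitting Edge directly.
type LayoutData = { route: string; fields: Record<string, unknown> };

async function fetchLayoutFromEdge(path: string): Promise<LayoutData> {
  // Placeholder: call the Edge GraphQL endpoint here.
  return { route: path, fields: {} };
}

export async function getStaticProps(context: { params?: { path?: string[] } }) {
  const path = '/' + (context.params?.path ?? []).join('/');
  const layoutData = await fetchLayoutFromEdge(path);
  return {
    props: { layoutData },
    revalidate: 300, // re-generate at most every 5 minutes
  };
}
```

The revalidate value is a trade-off: shorter windows mean fresher content but more uncached Edge queries counting toward the 80 req/sec cap.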

Hope knowing the above helps you plan better!

Sitemaps in Sitecore XM Cloud: Automation, Customization, and SEO Best Practices

In Sitecore XM Cloud, sitemaps are generated and served via Experience Edge to inform search engines about all discoverable URLs. XM Cloud uses SXA’s built‑in sitemap features by default, storing the generated XML as media items in the CMS so they can be published to Experience Edge. Sitemap behavior is controlled by the Sitemap configuration item under /sitecore/content/<SiteCollection>/<Site>/Settings/Sitemap. There are a few important fields: Refresh threshold, which defines the minimum time between regenerations; Cache expiration; Maximum number of pages per sitemap, for splitting into a sitemap index; and Generate sitemap media items, which must be enabled to publish via Edge. The Sitemap media items field of the Site item will list the generated sitemap(s) under /sitecore/media library/Project/<Site>/<Site>/Sitemaps/<Site>, and the default link provider is used unless overridden. Tip: you can configure a custom provider via <linkManager> and choose its name in the Sitemap settings.

Automated Sitemap Generation Workflow

When content authors publish pages, XM Cloud schedules sitemap regeneration automatically based on the refresh threshold. Behind the scenes, an OnPublishEnd pipeline (often the SitemapCacheClearer.OnPublishEnd handler in SXA) checks each site’s sitemap settings. If enough time has elapsed since the last build, a Sitemap Refresh job runs. In this job, the old sitemap media item is deleted and a new one is generated and saved in the Media Library​. Once created, the new sitemap item is linked in the Sitemap media items field of the site and then published. This typically triggers two publish actions: one to publish the new media item (/sitecore/media library/Project/.../Sitemaps/<Site>/sitemap) and one to re-publish the Site item so Experience Edge sees the updated link.

For high-volume publishing, it’s best to set a reasonable refresh threshold to batch sitemap generation. For example, if you publish many pages daily, you might set the refresh threshold to 0, forcing a rebuild every time, or schedule a daily publish so the sitemap is updated once per day. Generating sitemaps can be resource-intensive, especially for large sites, so avoid rebuilding on every small change unless necessary.

Sitemap Filtering: SXA provides pipeline processors to include or exclude pages. By default, items inheriting SXA’s base page templates have a Change frequency field. Setting it to "do not include" will exclude that page from the sitemap​. The SXA sitemap pipelines (sitemap.filterItem) include built‑in processors for base template filtering and change-frequency logic. To exclude a page, simply open it in Content Editor (or Experience Editor SEO dialog) and set Change frequency to "do not include"​.

GraphQL Sitemap Query: Once published, the XM Cloud GraphQL API provides access to the sitemap media URL. For example, the following query returns the sitemap XML URL for a given site name:

query SitemapQuery($site: String!) {
  site {
    siteInfo(site: $site) {
      sitemap
    }
  }
}

This returns the Experience Edge URL of the generated sitemap media item. You can use this in headless code or debugging to verify the sitemap’s existence and freshness.
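For instance, a headless utility could POST that query to the Edge endpoint. Below is a sketch that only builds the request pieces; the endpoint URL and API key are passed in, since those are tenant-specific:

```typescript
// Build the HTTP request for the SitemapQuery shown above. The
// `sc_apikey` header carries the Edge delivery API key.
const SITEMAP_QUERY = `
query SitemapQuery($site: String!) {
  site {
    siteInfo(site: $site) {
      sitemap
    }
  }
}`;

function buildSitemapRequest(site: string, apiKey: string) {
  return {
    method: 'POST',
    body: JSON.stringify({ query: SITEMAP_QUERY, variables: { site } }),
    headers: { 'Content-Type': 'application/json', sc_apikey: apiKey },
  };
}
// Usage sketch: fetch(edgeEndpoint, buildSitemapRequest('my-site', key))
//   .then(r => r.json()) — data.site.siteInfo.sitemap holds the media URL.
```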

Sitemaps in Local Docker Containers

In a local XM Cloud Docker setup, the /sitemap.xml route often returns an empty file by default because the Experience Edge publish never occurs. There is no web database or Edge target, so the OnPublishEnd process never actually runs, leaving the empty sitemap item. Attempting to publish locally throws an exception (Invalid Authority connection string for Edge). To debug or test sitemap issues locally, you can manually trigger the SXA sitemap pipeline.

I really like the Sitemap Developer Utility approach suggested by Jeff L'Heureux: in your XM Cloud solution’s Docker files, create a page (e.g. generateSitemap.aspx) inside docker\deploy\platform with code that simulates a publish event. For example, one can invoke the SitemapCacheClearer.OnPublishEnd() method manually in C#.

// Simulate a publish event for the "Edge" target
Database master = Factory.GetDatabase("master");
List<string> targets = new List<string> { "Edge" };
PublishOptions options = new PublishOptions(master, master, PublishMode.SingleItem,
    Language.English, DateTime.Now, targets);
Publisher publisher = new Publisher(options);
SitecoreEventArgs args = new SitecoreEventArgs("OnPublishEnd", new object[] { publisher }, new EventResult());
new SitemapCacheClearer().OnPublishEnd(null, args);
    

This code triggers the same sitemap build logic as a real publish​. Jeff's utility page provides buttons to run various steps (OnPublishEnd, the sitemap.generateSitemapJob pipeline, etc.) and shows output.

Once you run the utility and the cache job completes, the media item is regenerated. Then restart or refresh your Next.js site locally to see the updated sitemap at http://front-end-site.localhost/sitemap.xml. The browser will display the raw XML with <loc>, <lastmod>, <changefreq>, and <priority> entries as it normally should.

Sitemap Customization for Multi-Domain Sites

A common scenario is one XM Cloud instance serving multiple language or regional domains (say, www.siteA.com and www.siteA.fr) with one shared content tree. In SXA this is often handled by a Site Grouping with multiple hostnames. By default, SXA will generate a single sitemap based on the primary hostname. This leads to two issues: the same XML file is returned on both domains, and each page appears several times (once per language) under the same <loc>. For example, a bilingual site without customization might show both English and French URLs under the English domain, duplicating <url> entries.

To fix this, customize the Next.js API route (e.g. pages/api/sitemap.ts) that serves /sitemap.xml. The approach is: detect which host/domain the request is for, fetch the raw sitemap XML via GraphQL, and then filter and rewrite the entries accordingly. For instance, if the host header contains the French domain, only include the French URLs and update the <loc> and hreflang="fr" links to use the French hostname. Pseudocode for the filtering might look like:

if (lang === 'en') {
  // Filter out French URLs and fix alternate links
  urls = urls.filter(u => !u.loc[0].includes(FRENCH_PREFIX))
             .map(updateFrenchAlternateLinks);
} else if (lang === 'fr') {
  // Filter out English URLs and swap French loc to French domain
  urls = urls.filter(u => u.loc[0].includes(FRENCH_PREFIX))
             .map(updateLocToFrenchDomain)
             .map(updateFrenchAlternateLinks);
}
    

Here, FRENCH_PREFIX is something like en.mysite.com/fr, and we replace it with the French hostname. In practice, the XML is parsed (e.g. via xml2js), then the result.urlset.url array is filtered and modified, and rebuilt to XML. There is a great solution suggested by Mike Payne which uses two helper functions filterUrlsEN and filterUrlsFR to drop unwanted entries and updateLoc/updateFrenchXhtmlURLs to replace URL prefixes​. Finally, the modified XML is sent in the HTTP response. This ensures that when a sitemap is requested from www.site.ca, all <loc> URLs and alternate links point to site.ca, and when requested from www.othersite.com, they point to www.othersite.com.
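To make the shape of this concrete, here is a self-contained sketch of the per-domain filtering over an xml2js-shaped structure, where each <url> entry exposes loc as a one-element string array. FRENCH_PREFIX and FRENCH_DOMAIN are illustrative stand-ins for your hostnames, and the function name is mine, not from Mike's solution:

```typescript
// xml2js represents each <url> element's children as string arrays,
// so `loc` is a one-element array here.
type SitemapUrl = { loc: string[] };

const FRENCH_PREFIX = 'en.mysite.com/fr';  // illustrative source prefix
const FRENCH_DOMAIN = 'www.mysite.fr';     // illustrative target hostname

function filterUrlsForLang(urls: SitemapUrl[], lang: 'en' | 'fr'): SitemapUrl[] {
  if (lang === 'en') {
    // English sitemap: drop every French entry.
    return urls.filter((u) => !u.loc[0].includes(FRENCH_PREFIX));
  }
  // French sitemap: keep French entries and rewrite them to the French domain.
  return urls
    .filter((u) => u.loc[0].includes(FRENCH_PREFIX))
    .map((u) => ({ ...u, loc: [u.loc[0].replace(FRENCH_PREFIX, FRENCH_DOMAIN)] }));
}
```

In the real route, the result.urlset.url array from xml2js would be passed through such a function and rebuilt to XML with the xml2js Builder before being sent in the response.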

SEO Considerations and Best Practices

  • Include Alternate Languages (hreflang): XM Cloud (via SXA) automatically adds <xhtml:link rel="alternate" hreflang="..."> entries in the sitemap for multi-lingual pages. Ensure these are correct for your domains. After customizing for multiple hostnames, the <xhtml:link> URLs should also be updated to the appropriate domain​. This helps Google index the right language version for each region.

  • Set Change Frequency and Priority: Use SXA’s SEO dialog or Content Editor on the page item to set Change frequency and Priority for each page. For example, if a page is static, set a low change frequency. These values are written into <changefreq> and <priority> in the sitemap. Note: Pages can be excluded by setting frequency to "do not include".

  • Maximize Crawling via Sitemap Index: If your site has many pages, configure Maximum number of pages per sitemap so XM Cloud generates a sitemap index with multiple files. This avoids any single sitemap exceeding search engine limits and keeps crawlers from giving up on a very large file.

  • Robots.txt: SXA will append the sitemap link /sitemap.xml to the site’s robots.txt automatically​. Verify that your robots.txt in production references the correct sitemap and hostname.

  • Media Items and Edge: Always keep Generate sitemap media items enabled: without having this, XM Cloud cannot deliver the XML to the front-end. After a successful build, the sitemap XML is stored in a media item and served by Experience Edge. You can confirm the published sitemap exists by checking /sitecore/media library/Project/<Site>/<Site>/Sitemaps/<Site> or by running the GraphQL query mentioned above.

  • Link Provider Configuration: If your site uses custom URL routing (e.g. language segments or rewritten paths), you can override the link provider used for sitemap URLs. In a patch config, add something like:

    <linkManager defaultProvider="switchableLinkProvider">
      <providers>
        <add name="customSitemapLinkProvider"
             type="Sitecore.XA.Foundation.Multisite.LinkManagers.LocalizableLinkProvider, Sitecore.XA.Foundation.Multisite"
             lowercaseUrls="true" .../>
      </providers>
    </linkManager>

    Don't forget to set the "Link provider name" field in the Sitemap settings to customSitemapLinkProvider afterwards. This ensures the sitemap uses the correct domain and culture prefixes as needed.

Diagnostics and Troubleshooting

If the sitemap isn’t updating or the XML is wrong, check these:

  • Site Item Settings: On the site’s Settings/Sitemap item, confirm the refresh threshold and expiration are as expected. During debugging you can set threshold to 0 to force immediate rebuilds.

  • Was it published to Edge? Ensure the sitemap media item was published to Edge. You might need to publish the Site item or Media Library manually if it wasn’t picked up.

  • Cache Type: In the SXA Sitemap settings, the Cache Type can be set to "Inactive," "Stored in cache", or "Stored in file". For XM Cloud, the default "Stored in file" is typically used so the XML is persisted. If set to "Inactive", the sitemap generator will not run.

  • Inspect Job History: In the CM admin (/sitecore/admin/Jobs.aspx), look for the "Sitemap refresh" jobs to see if these succeeded or threw errors.

  • Next.js Route Errors: If your Next.js site’s /sitemap.xml endpoint returns an error, inspect its handler. The custom API route uses GraphQLSitemapXmlService.getSitemap(). Ensure the hostnames in your logic match your ENV variables, namely PUBLIC_EN_HOSTNAME. Add logging around the xml2js parsing if the output seems empty or malformed.

By following the above patterns - configuring SXA sitemap settings, automating generation on publish, and customizing for your site topology - you can ensure that XM Cloud serves up accurate, SEO‑friendly sitemaps. This helps search engines index your content fully and respects multi-lingual domain structures and refresh logic specific to a headless architecture.

References: one, two, three and four.

Merry Christmas and happy New Year!

Every year I create a special Christmas postcard to congratulate my readers on a new oncoming year, full of changes and opportunities. Wish you all the best in 2025!

My artwork for the past years (click the label to expand)
2024


2023


2022


2021


2020


2019


2018


2017


2016


Reviewing my 2024 Sitecore MVP contributions

Sitecore Technology MVP 2024 Sitecore Technology MVP 2023 Sitecore Technology MVP 2022 Sitecore Technology MVP 2021

Sitecore Technology MVP 2020 Sitecore Technology MVP 2019 Sitecore Technology MVP 2018 Sitecore Technology MVP 2017

The Sitecore MVP program is designed to recognize individuals who have demonstrated advanced knowledge of the Sitecore platform and a commitment to sharing knowledge and technical expertise with community partners, customers, and prospects over the past year. The program is open to anyone who is passionate about Sitecore and has a desire to contribute to the community.

Over the past application year starting from December 1st, 2023, I have been actively involved in the Sitecore community, contributing in a number of ways.

Sitecore Blogs 

  1. This year I have written 18 blog posts at the Perficient site on various topics related to Sitecore, including my Crash Course to Next.Js with TypeScript and GraphQL, top-notch findings about XM Cloud and other composable products, best practices, tips and tricks, and case studies. Listing them all by the bullets would make this post too long, therefore instead I leave the link to the entire list of them, shown reverse chronologically.
  2. I’ve been also posting on my very own blog platform, which already contains more than 200 posts about Sitecore accumulated over the past years.
  3. Also, I occasionally create video recordings/walkthrough and upload them to my YouTube channel.

Sitecore User Groups 

  1. Organized three Los Angeles Sitecore User Groups (#19, #20, and #21). This user group has ~480 members!
  2. Last fall I established and organized the most wanted user group of the year – Sitecore Headless Development UserGroup educating its 410+ members. This one is very special since headless development has become the new normal of delivering sites with Sitecore, while so many professionals feel left behind unable to catch up with the fast-emerging tech. I put it as my personal mission to run it twice per quarter helping the community learn and grow “headlessly” and that is one of my commitments to it. It became the most run and the most attended/reviewed event of all Sitecore user groups with eight events organized over this year (#1 and #2) (#3, #4, #5, #6, #7, #8, #9, #10) along with event #11 scheduled for December 12th. All the recordings are publicly available on YouTube, and also referenced from the individual event pages.
  3. Presented my innovative approach to the Content Migration for XM Cloud solutions.
  4. Another user group presentation narrates all the new features arriving with Next.js 15, breaking API changes, and what it all means for Sitecore.

GitHub

  • The Sifon project keeps being maintained and receives new features; for example, Sifon got support for Sitecore 10.4 platforms.
  • I keep the Awesome Sitecore project up to date. This repository has plenty of stars on GitHub and is an integral part of the big Awesome Lists family; if you haven’t heard of Awesome Lists and their significance, I highly recommend reading these articles – the first and the second.
  • There are also a few less significant repositories among my contributions that are still meaningful and helpful.

Sitecore Mentor Program 

  • Got two mentees in 2024, supported them over the course of the year, and delivered them both full-scale XM Cloud training along with the certification.
  • One of my past year mentees was recognized as Sitecore MVP in 2024, resulting from an Exclusive Mentorship Agreement, proving my mentoring approach was successful.

MVP Program

  • I participate in most of the webinars and MVP Lunches (often in both time zones per event).
  • I think the MVP Summit is the best perk of the MVP Program, so I never miss it. This year I’ve learned a lot and also provided feedback to the product teams, as usual.
  • I participate in several streams of the Early Access Program, sharing insights with the product team ahead of GA dates.
  • In the past, I have participated in a very honorable activity: helping to review first-time applicants for the MVP Program. This is the first line of evaluation, where we carefully match every first-time applicant against the high Sitecore MVP standards. This year I am taking part in the reviewing as well.

Sitecore Learning

I collaborated with the Sitecore Learning team for the past 2-3 years, and this year was not an exception: 

  • I was invited by Sitecore Learning to make an excellent detailed review of a new feature - XM Cloud Forms Builder for Tips & Tricks series. 

Sitecore Telegram 

  • I am making Telegram a premium-level channel for delivering Sitecore news and materials. Telegram has a unique set of features that no other software can offer, and I am leveraging these advantages for more convenience to my subscribers.
  • Started in 2017 as a single channel, it was expanding rapidly and has now reached a milestone of 1,100 subscribers!
  • Growth did not stop there but escalated further: as Sitecore went composable, I created a dedicated channel for almost every composable product. Here they all are:

Support Tickets

  • CS0514702 (Content Hub)
  • CS0462816 (SPE for XM Cloud)
  • CS0518934 (Forms Builder)

Other Contributions

  • I created Sitecore MVP section in the Wikipedia, explaining MVP Program, its significance for Sitecore and the overall process of determining the winners.
  • I am very active on my LinkedIn (with 7K+ followers) and Twitter aka X (with almost ~1.2K subscribers), multiple posts per week, sometimes a few a day.
  • With my dedication to Sitecore's new flagship product, XM Cloud, it was no wonder I just launched a new XM Cloud Daily series of tips and tricks on social media (this actually started in December 2024, so it falls on a new application period).
  • That comes in addition to the existing series on LinkedIn - Headless Tips & Tricks, where I share the insights and nuances of modern headless development with Sitecore

 The above is what I managed to recall about my annual contributions so far. Wishing all decent applicants to join this elite club in the coming year!

XM Cloud content migration: connecting external database

Historically, when performing content migration with Sitecore, we used to deal with database backups. In the modern SaaS world, we have the luxury of neither managing cloud database backups nor a corresponding UI for doing so. Therefore, we must find an alternative approach.

Technical Challenge

Let’s assume we have a legacy Sitecore website (in my case that was XP 9.3), and we’ve been provided with only a master database backup containing all the content. The objective is to perform content migration from this master database into new and shiny XM Cloud environment(s).

Without having direct access to the cloud, we can only operate locally. In theory, there could be a few potential ways of doing this:

  1. Set up a legacy XP of the desired version with the legacy content database already attached/restored to it. Then try to attach (or restore) a vanilla XM Cloud database to a local SQL Server as a recipient database in order to perform content migration into it. Unfortunately, this approach would not work due to the SQL Server version incompatibility between XM Cloud and XP 9.3. Even if that were possible, running XP 9.3 with the XM Cloud database won’t work, as XP 9.3 neither knows about the XM Cloud schema nor is capable of handling the required Items-as-Resources feature, which was introduced later in XP 10.1. Therefore – this option is not possible.

  2. Can we go the other way around by using the old database along with XM Cloud? This is not documented, but let’s assess it:

    1. Definitely won’t work in the cloud since we’re not given any control of DBs and their maintenance or backups.

    2. In a local environment, XM Cloud only works in Docker containers and it is not possible to use it with an external SQL Server where we have a legacy database. But what if we try to plug that legacy database inside of the local SQL Container? Sadly, there are no documented ways of achieving that.

  3. Keep two independent instances side by side (legacy XP and XM Cloud in containers) and use an external tool to connect the two in order to migrate the content. In theory that is possible, but it carries a few drawbacks.
    1. The tool of choice is Razl, but this tool is not free, requires a paid license, and does not have a free trial to ever test this out.
    2. Connecting to a containerized environment may not be easy and requires some additional prep.
    3. You may need to have a high-spec computer (or at least two mid-level machines connected to the same network) to have both instances running side by side.

After some consideration, the second approach seems to be reasonable to try so let’s give it a chance and conduct a PoC.

Proof of Concept: local XM Cloud with external content database

Utilizing the second approach, we are going to try attaching the given external legacy database to XM Cloud running in a local containerized setup. That will allow us to use the built-in UI for mass-migrating content between the databases (as pictured below), along with Sitecore PowerShell scripts for finalizing and fine-tuning the migrated content.

Control Panel

Step 1: Ensure the SQL Server port is externally exposed

We will connect SQL Server Management Studio from the host machine through an externally exposed port of the SQL Server container. Luckily, that has been done for us already; just make sure docker-compose has:

ports:
  - "14330:1433"

Step 2: Spin up the XM Cloud containers and confirm XM Cloud works fine for you

Nothing extraordinary here, as easy as running .\init.ps1 followed by .\up.ps1.

Step 3: Connect SQL Management Studio to SQL Server running in a container.

After you spin up the containers, run SQL Server Management Studio and connect to the SQL Server running in the SQL container through the exposed port 14330, as set up in step 1:

Connection parameters

Step 4: Restore the legacy database

If you have a Data-Tier “backpack” file, you may want to do an extra step and convert it into a binary backup for the particular SQL Server version used by XM Cloud before restoring. This step is optional, but in case you want to restore the backup more than once (which is likely to happen), it makes sense to take a binary backup as soon as you restore the data-tier “backpack” for the first time. Data-tier backups restore much more slowly than binary ones, so that will definitely save time in the future.

Once connected, let’s enable contained database authentication. This step is mandatory; otherwise it will not be possible to restore the database:

EXEC sys.sp_configure N'contained database authentication', N'1'
GO
EXEC ('RECONFIGURE WITH OVERRIDE')
GO

One more challenge lies ahead: when performing backup and restore operations, SQL Server shows a path local to the server engine, not to the host machine. That means our backup should exist “inside” the SQL container. Luckily, we have this covered as well. Make sure docker-compose.override.yml contains:

mssql:
  volumes:
    - type: bind
      source: .\docker\data\sql
      target: c:\data

That means one can place legacy database backups into the .\docker\data\sql folder of the host machine, and they will magically appear within the C:\data folder when using the SQL Management Studio database restore tool, which you can perform now.

Important! Restore the legacy database using the “magic name” format Sitecore.<DB_NAME_SUFFIX>; further down below I will be using the value RR as DB_NAME_SUFFIX.

Once the database is restored in SQL Server Management Studio under the name Sitecore.RR, we need to plug it into the system. There is a naming convention hidden from our eyes within the CM container.

Step 5: Configure connection strings

Unlike in XM/XP, there is no documented way to plug in an external database. The way connection strings are mapped to the actual system is cumbersome: it relies on some “magic” hidden within the container itself and obfuscated from our eyes. I only managed to reach it in an experimental way. Here are the steps to reproduce:

  • Add environmental variable to docker-compose record for CM:

    • Sitecore_ConnectionStrings_RR: Data Source=${SQL_SERVER};Initial Catalog=${SQL_DATABASE_PREFIX}.RR;User ID=${SQL_SA_LOGIN};Password=${SQL_SA_PASSWORD}
  • Add a new connection string record. To do so, create a connection strings file within your customization project as .\src\platform\<SITENAME>\App_Config\ConnectionStrings.config, containing the connection strings from the CM container plus the new entry:
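As a sketch, the resulting file might look like the below (the existing entries are elided, and the lowercase entry name and exact values are my assumptions based on this setup):

```xml
<?xml version="1.0" encoding="utf-8"?>
<connectionStrings>
  <!-- ... existing connection strings copied from the CM container ... -->
  <!-- the new entry; note the lowercase suffix -->
  <add name="rr" connectionString="Data Source=mssql;Initial Catalog=Sitecore.RR;User ID=sa;Password=YOUR_SA_PASSWORD" />
</connectionStrings>
```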

Please note the difference in the suffix format between the two records above; that is totally fine, the CM container still processes it correctly.

Step 6: Reinstantiating CM container

Simply restarting the CM container is not sufficient. You must remove it and re-create it; just killing or stopping it will not do.

For example, this command will not work for that purpose:

docker-compose restart cm

… nor will this one:

docker-compose kill cm

The reason is that CM will not re-read environment variables from the docker-compose file upon restart. Do this instead:

docker-compose kill cm
docker-compose rm cm --force
docker-compose up cm -d

Step 7: Validating

  1. Inspecting the CM container for environment variables will show the new connection string, as added:

        "Env": [
            "Sitecore_ConnectionStrings_RR=Data Source=mssql;Initial Catalog=Sitecore.RR;User ID=sa;Password=6I7X5b0r2fbO2MQfwKH"

  2. Inspecting the connection strings config (located at C:\inetpub\wwwroot\App_Config\ConnectionStrings.config in the CM container) shows the newly added connection string.
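A quick way to do the first check is with docker inspect; for example, something like the below (the container name filter is an assumption — adjust it to your compose project's naming):

```powershell
# List the CM container's environment variables and filter for the new connection string
docker inspect (docker ps --filter "name=cm" --format "{{.ID}}") `
    --format '{{range .Config.Env}}{{println .}}{{end}}' |
    Select-String "Sitecore_ConnectionStrings_RR"
```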

Step 8: Register new database with XM Cloud

This can be done with the config patch below that does the job. Save it as docker\deploy\platform\App_Config\Include\ZZZ\z.rr.config for testing, and later do not forget to include it in the platform customization project, so that it gets shipped with each deployment:
<?xml version="1.0" encoding="UTF-8"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
    <sitecore>
        <eventing defaultProvider="sitecore">
            <eventQueueProvider>
                <eventQueue name="rr" patch:after="eventQueue[@name='web']" type="Sitecore.Data.Eventing.$(database)EventQueue, Sitecore.Kernel">
                    <param ref="dataApis/dataApi[@name='$(database)']" param1="$(name)" />
                    <param ref="PropertyStoreProvider/store[@name='$(name)']" />
                </eventQueue>
            </eventQueueProvider>
        </eventing>
        <PropertyStoreProvider>
            <store name="rr" patch:after="store[@name='master']" prefix="rr" getValueWithoutPrefix="true" singleInstance="true" type="Sitecore.Data.Properties.$(database)PropertyStore, Sitecore.Kernel">
                <param ref="dataApis/dataApi[@name='$(database)']" param1="$(name)" />
                <param resolve="true" type="Sitecore.Abstractions.BaseEventManager, Sitecore.Kernel" />
                <param resolve="true" type="Sitecore.Abstractions.BaseCacheManager, Sitecore.Kernel" />
            </store>
        </PropertyStoreProvider>
        <databases>
            <database id="rr" patch:after="database[@id='master']" singleInstance="true" type="Sitecore.Data.DefaultDatabase, Sitecore.Kernel">
                <param desc="name">$(id)</param>
                <icon>Images/database_master.png</icon>
                <securityEnabled>true</securityEnabled>
                <dataProviders hint="list:AddDataProvider">
                    <dataProvider ref="dataProviders/main" param1="$(id)">
                        <disableGroup>publishing</disableGroup>
                        <prefetch hint="raw:AddPrefetch">
                            <sc.include file="/App_Config/Prefetch/Common.config" />
                            <sc.include file="/App_Config/Prefetch/Webdb.config" />
                        </prefetch>
                    </dataProvider>
                </dataProviders>
                <!-- <proxiesEnabled>false</proxiesEnabled> -->
                <archives hint="raw:AddArchive">
                    <archive name="archive" />
                    <archive name="recyclebin" />
                </archives>
                <cacheSizes hint="setting">
                    <data>100MB</data>
                    <items>50MB</items>
                    <paths>2500KB</paths>
                    <itempaths>50MB</itempaths>
                    <standardValues>2500KB</standardValues>
                </cacheSizes>
            </database>
        </databases>
    </sitecore>
</configuration>

Step 9: Enabling Sitecore PowerShell Extension

Next, we’d want to enable PowerShell, if that is not yet done. You won’t be able to migrate the content using SPE without performing this step.

<?xml version="1.0" encoding="utf-8"?>
<configuration
    xmlns:patch="http://www.sitecore.net/xmlconfig/"
    xmlns:role="http://www.sitecore.net/xmlconfig/role/"
    xmlns:set="http://www.sitecore.net/xmlconfig/set/">
    <sitecore role:require="XMCloud">
        <powershell>
            <userAccountControl>
                <tokens>
                    <token name="Default" elevationAction="Block" />
                    <token name="Console" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='Console']" />
                    <token name="ISE" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='ISE']" />
                    <token name="ItemSave" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='ItemSave']" />
                </tokens>
            </userAccountControl>
        </powershell>
    </sitecore>
</configuration>

Include the above code into a platform customization project as .\docker\deploy\platform\App_Config\Include\ZZZ\z.SPE.config. If everything is done correctly, you can run SPE commands, as below:

SPE results

The Result

After all the above steps are done correctly, you will be able to utilize the legacy content database along with your new shiny local XM Cloud instance:
Result in Sitecore Content Editor
Now you can copy items between databases just by using the built-in Sitecore UI, preserving their IDs and version history. You can also use SPE to copy items from one database to another, since both are visible to the SPE engine.
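For instance, a hypothetical SPE one-liner for pulling a subtree from the legacy database into master might look like the below (the rr: drive name follows the database id registered in the patch above, and the item paths are purely illustrative):

```powershell
# Copy an item subtree from the legacy "rr" database into master, preserving item IDs
Copy-Item -Path "rr:\content\LegacySite\Home" -Destination "master:\content" -Recurse
```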

.NET Core Renderings for XM Cloud finally gets some love

It is not a secret that Sitecore has always prioritized the Next.js framework as the first-class citizen for XM Cloud. All the best and finest features tend to find their way to that framework first. Recently, however, there has been much activity around the .NET Core Rendering Framework, which makes a lot of sense given that most of us Sitecore tech professionals originate from a Microsoft and .NET background. Even more exciting – it is built on .NET 8, the latest LTS runtime!

Starter Kit

The ASP.NET Core framework has been with us for a while, periodically receiving minor updates and fixes. But let’s be honest: having an SDK on its own is one thing; a decent starter kit on top of that framework is what lets us developers actually create at scale. And that moment has just arrived – without any loud fanfare, the XMC ASP.NET Core Starter Kit went public. Please be aware that this is only a PRE-RELEASE version with its own temporary shortcomings; I gave it a try and want to share my findings with you.

What are these shortcomings? Just a few:

  • FEaaS and BYOC components are not yet supported; therefore, you also cannot use Forms, since it leverages those
  • the System.Text.Json serializer is stricter than Newtonsoft.Json, which was removed in favor of the built-in solution, so some components may fail
  • SITECORE_EDGE_CONTEXT_ID variable is not supported

Everything else seems to work the same. There is also some expectation that XM Cloud will eventually support .NET rendering on a built-in editing host, the same way it works today with JSS applications, but I do not work for Sitecore and can only make assumptions and guesses without any certainty.

First Impression

I forked the repo and cloned it to my computer. Let’s take a look at what we’ve got there.

VS Code

  • the code differs from what we’re used to seeing in the XM Cloud Foundation Head starter kit, and that’s understandable
  • at the root folder we still have xmcloud.build.json, sitecore.json and folders – .config and .sitecore
  • xmcloud.build.json is required for cloud deploy, but does not have renderingHosts root section required for editing host(s), as I explained above
  • there is headapps folder to keep the solution file along with .NET projects subfolder(s), currently just a single one – aspnet-core-starter
  • there is also local-containers folder that contains docker-compose files, .env, docker files, scripts, Traefik, and the rest of the container assets we got used to
  • another difference – authoring folder contains serialization settings and items as well as .NET framework project for CM customizations
  • however, there are no init.ps1 and up.ps1 files, but that is easy to create yourself by stealing and modifying those from XM Cloud Foundation Head

With that in mind, we can start investigating. There is a ReadMe document explaining how to deploy this codebase, but before going ahead with it I of course decided to:

Run Local Containers

There are no instructions on container setup, only for cloud deployment, but after spending a few years with Foundation Head, the very first thing that naturally comes into my mind is running this starter kit in local Docker containers. Why not?

There are a couple of things one should do before spinning up the containers.

1. Modify settings in .ENV file – at least these two:

# Enter the value for SQL Server admin password:
SQL_SA_PASSWORD=SA_PASSWORD
# Provide a folder storing a Sitecore license file:
HOST_LICENSE_FOLDER=C:\Projects
2. We need to generate Traefik SSL Certificates. To do so let’s create .\local-containers\init.ps1 script with the below content:
    [CmdletBinding(DefaultParameterSetName = "no-arguments")]
    Param()
    $ErrorActionPreference = "Stop";

    # duplicated in the Up.ps1 script
    $envContent = Get-Content .env -Encoding UTF8
    $xmCloudHost = $envContent | Where-Object { $_ -imatch "^CM_HOST=.+" }
    $renderingHost = $envContent | Where-Object { $_ -imatch "^RENDERING_HOST=.+" }
    $xmCloudHost = $xmCloudHost.Split("=")[1]
    $renderingHost = $renderingHost.Split("=")[1]

    Push-Location docker\traefik\certs
    try {
        $mkcert = ".\mkcert.exe"
        if ($null -ne (Get-Command mkcert.exe -ErrorAction SilentlyContinue)) {
            # mkcert is installed and available in PATH
            $mkcert = "mkcert"
        } elseif (-not (Test-Path $mkcert)) {
            Write-Host "Downloading and installing mkcert certificate tool..." -ForegroundColor Green
            Invoke-WebRequest "https://github.com/FiloSottile/mkcert/releases/download/v1.4.1/mkcert-v1.4.1-windows-amd64.exe" -UseBasicParsing -OutFile mkcert.exe
            if ((Get-FileHash mkcert.exe).Hash -ne "1BE92F598145F61CA67DD9F5C687DFEC17953548D013715FF54067B34D7C3246") {
                Remove-Item mkcert.exe -Force
                throw "Invalid mkcert.exe file"
            }
        }
        Write-Host "Generating Traefik TLS certificate..." -ForegroundColor Green
        & $mkcert -install
        & $mkcert "$xmCloudHost"
        & $mkcert "$renderingHost"
    }
    catch {
        Write-Error "An error occurred while attempting to generate TLS certificate: $_"
    }
    finally {
        Pop-Location
    }

    Write-Host "Adding Windows hosts file entries"
    # Add-HostsEntry comes from the SitecoreDockerTools module
    Import-Module SitecoreDockerTools
    Add-HostsEntry "$renderingHost"
    Add-HostsEntry "$xmCloudHost"

    Write-Host "Done!" -ForegroundColor Green

And then execute this script:

Certs

There is no up.ps1 script, so instead let’s run docker-compose directly: docker compose up -d

You may notice some new images show up, and you also see a new container: aspnet-core-starter

Docker

If everything is configured correctly, the containers will come up successfully. Open Sitecore at its default hostname, as configured in the .env file: https://xmcloudcm.localhost/sitecore

From there you will see no significant changes. The containers just work! Sitecore has no content yet for the head application to interact with. I will add content from the template, but let’s do the cloud deployment first.

Deploy to the Cloud

ReadMe document suggests an inconvenient way of cloud deployment:

1. Create a repository from this template.

2. Log into the Sitecore Deploy Portal.

3. Create a new project using the ‘bring your code’ option, and select the repository you created in step 1.

For the majority of us, who are on the Sitecore Partner side, there are only six environments available grouped into two projects. These allocations are priceless and are carefully shared between all XM Cloud enthusiasts and aspirants who are learning a new platform. We cannot simply “create a new project” because we don’t have that spare project, so in order to create one we have to delete the existing one. Deleting a project requires deleting all (three) of its environments in the first place, which is half of the sandbox capacity, carrying valuable work in progress for many individuals.

That is why I decided to use the CLI instead. Luckily, it works exactly the same as it does with the Next.js starter kits, and from .\.config\dotnet-tools.json you can see that it uses the same version. You deploy with the root folder holding the xmcloud.build.json file as the working directory, so there are no changes in execution.
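For reference, the CLI flow I mean is roughly the following (the environment id is a placeholder you take from the Deploy Portal):

```shell
# Authenticate the Sitecore CLI against XM Cloud
dotnet sitecore cloud login

# Deploy the current working directory (the folder containing xmcloud.build.json)
dotnet sitecore cloud deployment create --environment-id <ENVIRONMENT_ID> --upload
```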

Eventually, once deployed, we navigate to XM Cloud. I decided to follow the ReadMe and create a Basic site from the Skate Park template, essentially following steps 4-18 of the ReadMe file.

As a side exercise, you will need to remove the Navigation component from the Header partial design item, located at /sitecore/content/Basic/Basic/Presentation/Partial Designs/Header. The Basic site will break in the debugger if you do not delete this currently incompatible rendering, which has a serialization issue.

Building Dev Tunnel in Visual Studio

Next, let’s open and build the solution in the Visual Studio IDE from the .\headapps\aspnet-core-starter.sln file. You may notice it relies on three Sitecore dependencies from Sitecore.AspNetCore.SDK.LayoutService.Client:

  • Transient: Sitecore.AspNetCore.SDK.LayoutService.Client.Interfaces.ISitecoreLayoutClient
  • Singleton: Sitecore.AspNetCore.SDK.LayoutService.Client.Serialization.ISitecoreLayoutSerializer
  • Singleton: Sitecore.AspNetCore.SDK.LayoutService.Client.Serialization.Converter.IFieldParser

Modify .\headapps\aspnet-core-starter\appsettings.json with the setting values collected in the previous steps. You will end up with something like this:

Appsettings.json

Now let’s create a Dev Tunnel in Visual Studio:

Dev Tunnel

There will be at least two security prompts:

Dev Tunnel Authorize Github Dev Tunnel Authorize Notice

If everything goes well, a confirmation message pops up:

Dev Tunnel Created

Now you will be able to run and debug your code in Visual Studio:

Debugger Works

Make a note of the dev tunnel URL, so that we can use it to configure the Rendering Host, as described in step 27 of the ReadMe. You will end up with something like this:

Rendering Hosts

So far so good. You can now run the website by URL and in Experience Editor. Running it in Pages, however, will not work yet due to the error below:

No Pages Without Publish

To explain: Experience Editor runs as part of CM and pulls content from a GraphQL endpoint on that same CM. Pages, by contrast, is a standalone application, so it has access neither to that endpoint nor to the Rendering Hosts settings item. It only has access to Experience Edge, so we must publish first. Make sure you publish the entire site collection. Once complete, Pages works perfectly well and displays the site:

Pages Work 1 Pages Work 2

To explain what happens above: the Pages app (a SaaS-run editor) pulls the settings of the rendering/editing host (which runs in a debuggable dev tunnel from Visual Studio) from Experience Edge and renders the HTML right there, with the layout data and content also pulled from Experience Edge.

Deploy Rendering Host to Cloud

Without much thinking, I decided to deploy the rendering host as an Azure Web App, assuming that a .NET 8 application would be best supported in its native cloud.

Web App Configure

After the Web App is created, add the required environment variables. The modern SITECORE_EDGE_CONTEXT_ID variable is not yet supported by the .NET Core SDK, so we have to go the older way:

Azure App Settings

A pleasant bonus of the GitHub integration is that Azure creates a GitHub Actions workflow with a functional default build and deployment. There is almost nothing to change; I only made a single fix, replacing run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp with a hardcoded path, since this variable contains a space (from the “Program Files” part) and gets incorrectly tokenized, breaking the build. After this fix, GitHub Actions built the right way and I started receiving green status:
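As an illustration, the changed workflow step could look roughly like this (the step name and the exact output path are my own choices; the point is simply a space-free hardcoded path):

```yaml
# Before: output path built from DOTNET_ROOT, which expands to a path
# containing a space ("C:\Program Files\dotnet") and breaks tokenization:
# - run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp

# After: a hardcoded, space-free output path
- name: Publish with dotnet
  run: dotnet publish -c Release -o ./myapp
```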

Github Actions

… and the published site shows up from the Azure Web App powered rendering host:

Published Site

Finally, we can get rid of Dev Tunnel, replacing it with the actual “published site” hostnames:

Getting Rid Of Dev Tunnel

After republishing the Rendering Host item to Edge, we can stop the debugger and close Visual Studio. Both Experience Editor and Pages app are now working with an editing host served by the Azure Web App.

Verdict

Of course, it would be much anticipated for XM Cloud to offer built-in .NET editing host capabilities the same way JSS does. But even without that, I applaud the Sitecore development team for creating and continuing to work on this starter kit, as it is a big milestone for all of us in the .NET community!

With this kit, we can now start building XM Cloud-powered .NET apps at a faster pace. I believe all the missing features will find their way into the product, and maybe later there will be some (semi-)official SSG support for .NET, something like Statiq. That would allow deployments to a wider set of hosting options, such as Azure Static Web Apps, Netlify, and even Vercel, which does not support .NET as of today.