Experience Sitecore ! | More than 200 articles about the best DXP by Martin Miles


Reviewing my 2023 Sitecore MVP contributions

The Sitecore MVP program is designed to recognize individuals who have demonstrated advanced knowledge of the Sitecore platform and a commitment to sharing knowledge and technical expertise with community partners, customers, and prospects over the past year. The program is open to anyone who is passionate about Sitecore and has a desire to contribute to the community.

Over the past application year starting from December 1st, 2022, I have been actively involved in the Sitecore community, contributing in a number of ways.

Sitecore Blogs

  1. This year I have written 45(!) blog posts on the Perficient site alone, covering various Sitecore topics: my findings about XM Cloud and other composable products, best practices, tips and tricks, and case studies. Listing them all here would make this post excessively long, so instead I am leaving a link to the entire list.
  2. I have also been posting on my own blog platform, which has accumulated more than 200 Sitecore posts over the past years.

Sitecore User Groups

  1. Organized four Los Angeles Sitecore User Groups (#15, #16, #17 and #18)
  2. Established and organized the most in-demand user group of the year – the Sitecore Headless Development User Group. This one is very special, since headless development has become the new normal for delivering sites with Sitecore, while many professionals feel left behind, unable to catch up with the fast-emerging tech. I have made it my personal mission to help the community learn and grow “headlessly”, and that is one of my commitments to it. There have been two events so far (#1 and #2), with #3 scheduled for December 12th.
  3. Facilitated and co-organized the Sitecore Southeast Europe User Group (Balkans / Serbia), which I sponsor out of my own pocket (Meetup + Zoom).
  4. Presented my new topic “Mastery of XM Cloud” on January 17 at the Jaipur user group in India

SUGCON Conferences 2023

  1. Everyone knows how I love these conferences, especially SUGCON Europe. This year I submitted my session proposals again and was chosen again, with the topic Accelerate Headless SXA Builds with XM Cloud. Last year I hit the same stage with The Ultimate Sitecore Upgrade session.
  2. I was very eager to attend SUGCON India and submitted a joint talk with my genius mentee Tiffany Laster – the proposal was chosen (yay!). Unfortunately, at the very last minute my company changed priorities, and we were not allowed to travel. Since remote presentation was not an option there, I have the warmest hopes of presenting the following year. Two of my other mentees (Neha Pasi and Mahima Patel), however, found their way to that stage and presented a meaningful session on Sitecore Discover. Tiffany was also chosen as a speaker for the following SUGCON NA, though with a different topic – the DAM Maturity Model.
  3. I was a proud member of the SUGCON NA 2023 organizing committee, which thought through the event over the past 15 months. Collectively we were responsible for a wide range of tasks, but my primary personal responsibilities were organizing the session recording, building the event website, selecting speakers from the submitted proposals to build the agenda, and several more. I served as Room Captain for every session timeslot on Thursday and most of them on Friday.


The Sifon project keeps going: it is not just maintained but also receives new features. Sifon gains support for each XM/XP release almost the next day. I am also building a PoC of a composable version, Sifon Cloud; if it proves viable, it could be a big thing for XM Cloud. Every time I am involved in a Sitecore platform upgrade, or any other development or PoC outside of containers, Sifon saves me a lot of time and effort.

I keep Awesome Sitecore up to date. This repository has plenty of stars on GitHub and is an integral part of the big Awesome Lists family; if you haven’t heard of Awesome Lists and their significance, I highly recommend reading these articles – the first and the second.

At the beginning of the year, I produced guidance and a demo showing how to pair the .NET Headless SDK with XM Cloud in the same containers, working nicely together, along with releasing the source code for it.

There are also a few less significant repositories among my contributions that are still meaningful and helpful.

Sitecore Mentor Program

  • With lessons learned from the previous year of mentorship, this time I took on 5 mentees – all young, ambitious, and brilliant – and I cannot stress enough how proud I am of them all and their achievements!
  • Three of them found their way to SUGCON conferences as speakers (see above)
  • the others continue to deliver valuable contributions to the community.

MVP Program

  • I participate in all the webinars and MVP Lunches I can possibly attend (often in both time zones per event).
  • As in previous years, I participated in a very honorable activity: helping to review first-time applicants for the MVP Program. This is the first line of evaluation, and we carefully match every first-time applicant against the high Sitecore MVP standards.
  • I think the MVP Summit is the best perk of the MVP Program, so I never miss it. This year I learned so much and provided feedback to the product teams, as usual.

Sitecore Learning

I have collaborated with the Sitecore Learning team for the past 2–3 years, but this year my contributions exceeded the previous ones. Here are some:

  • I volunteered to become a beta tester for the new instructor-led XM Cloud training and provided valuable feedback upon completion
  • Collaborated with the team on XM Cloud certification exam (sorry cannot be more explicit here due to the NDA)
  • I was proud to be chosen as the expert opening the new Sitecore Tips & Tricks series organized by the Sitecore Learning team. In 60 minutes I demonstrated Sitecore Components Builder with an external feed integration, from zero to hero, actually running it deployed to Vercel – all with zero lines of code written (part 1 and part 2). Impressive!

Sitecore Telegram

  • I contributed to it even more than in any previous year, making Telegram a premium-level channel for delivering Sitecore news and materials. Telegram has a unique set of features that no other software can offer, and I am leveraging these advantages for my subscribers’ convenience.
  • Having started in 2017 as a single channel, it expanded rapidly and has now reached a milestone of 1,000 subscribers!
  • Growth did not stop there but escalated further: with Sitecore going composable, there is now a dedicated channel for almost every composable product. Here they all are:

Other Contributions

  • Three times this year I was an invited guest and expert on Sitecore Fireside at Apple Podcasts (one, two, and three)
  • I am very active on LinkedIn (4K followers) and Twitter aka X (~1.2K followers), with multiple posts per week, sometimes a few a day.
  • Given my dedication to Sitecore’s new flagship product, XM Cloud, it is no wonder I was among the first to get certified in it. It is a great product, and I wish it to become even better!

The above is what I can recall from a year of contributions so far. I think it can serve as a good example of an annual applicant’s contribution and of what the Sitecore MVP standards stand for. I wish you all to join this elite club in the coming year.

Challenges of international travel in 2023, or how things can go unexpectedly wrong

I am the kind of person who tries to predict and avoid potential problems long before they can even occur. Risk management circulates in every cell of my blood – partly some sort of professional deformation, partly natural curiosity and lessons learned from others’ mistakes. But sometimes things go very unpredictably, and you are left on your own.

It is a triple miracle that I made it back to the US from the conference – in fact, a set of three independent miracles.

First, getting to and from Spain. It was a lucky coincidence that I bought both my outbound and return tickets on those rare lucky days right between a series of air traffic control strikes across European airports.

I took my flight back early on Saturday, and some of those who stayed for the weekend could not make it because of air traffic control strikes in France and Germany. Even if you are not flying from France or Germany, there is a big chance of making a layover at one of their airports, as there are no direct flights to the USA from mid-sized Spanish airports. Likewise, when flying in, I changed planes in Frankfurt from the LAX flight to one for Malaga. Once Spain itself joined the strikes that weekend, there would have been even fewer chances to fly out, so I feel exceptionally lucky to have departed early and via the UK, which joined the airport strikes slightly later, giving me enough time to leave Europe.

Moving on. Early Monday morning I showed up at Heathrow airport as normal, and was then denied boarding over an “expired” barcode on my COVID certificate. I had used that certificate for flying all the time, including to the UK, and it had never been a problem. It could have remained a minor problem: I was vaccinated there in the UK, and the records should be available. I memorize all my passwords, so I could easily log in to the app or the website… Incorrect! Whoever built the app made two-factor authentication mandatory, with a text sent to your phone number. But I no longer had my old UK number after moving back to the States. Now you see how one minor problem turns into a much bigger one.

Trying to escalate it through all levels of management did not help at all; these people simply sit out their paid hours and do not want to go the extra mile. “Computer says no” is an accurate description of dealing with them. So I was denied boarding for a stupid reason, and the clock was ticking…

In a critical situation, your mind works differently, brainstorming every possible way out under stress. I remembered that I had switched that original number (I did not even remember the actual digits) to a pre-paid plan and put it into storage somewhere along with an old phone. So I had to call someone who could access that storage, but it was 3 AM in California. The chances of 1) waking up the right person, who would 2) understand my vague instructions, and 3) manage to follow them correctly – multiplied together – were so low! But I made it all happen within the permitted 30-minute window. Such a miracle! Unbelievable!

After receiving the code, I was able to pass through a line of unwanted, difficult questions and eventually generate my certificates in the mobile app. And guess what? The check-in lady neither scanned the updated barcode nor entered it anywhere. At all! She just said “now OK”, and that was it. She could potentially have let me board with the “expired” “barcode”, since all she “checked” was the date label above it. The impact of missing a flight and being stuck in airport limbo with heavy bags (not to mention the $1–2K for a replacement flight) is a huge penalty when things go wrong, mainly because of inadequate, non-transparent procedures and the human robots who follow them. This system is definitely broken. The humans behind it are “broken” in a similar way.

That’s not all. By the time I had passed that line of traps, showing the robot-people the right label they wanted to see, they had put my ticket into STANDBY status, which meant I was not guaranteed a seat on BOTH legs of my flight, not just the trans-Atlantic segment. They boarded me to Phoenix without issuing the onward ticket, which I would need once I got there.

The first segment of my trip was delayed by two hours, so I had less than 40 minutes to clear customs and immigration, re-check my bags to the final destination (praying they would reach the plane in time), and run a long way to the departure gate. Long story short, I was the fastest person off the plane and through all the procedures – rechecking the bags, passing additional security, and so on – but I reached the “gate closed” door just as the boarding staff were walking away. I had to run as fast as possible, wave my arms, and shout “do not close!” to get their attention, then ask them to let me on the plane. Emotions burst, and the timing was so precise – an extra 20 seconds would have left me staying overnight in Phoenix and possibly paying for the final segment. But that kind of luck followed me the whole day, so both I and my bags magically arrived at Orange County airport on time.

What a crazy day it was!

Tunneling Sitecore 10.3 out of local containers for full global access

I received an urgent request to prepare a Sitecore instance for testing some external tools that our prospective partners are demoing for us. In better times I would, of course, spin up a proper PaaS / Kubernetes environment; however, I currently have no control over any cloud subscription, and – much more importantly – no time! The deadline for such tasks is usually “yesterday”, so I started thinking about potential “poor man’s deployment” options.

Like many developers, I also have a “server in a wardrobe”; however, it is not a retired laptop but a proper high-spec machine that currently serves me as a hypervisor server, plugged into a gigabit Google Fiber connection. My cat loves spending time there, and I am generally OK with that, given she does not block the heat sink vents:

This server runs on the ltsc2022 kernel, which provides additional performance benefits, as I wrote in a previous post about running 10.3 in process isolation mode. So why not reuse the codebase from that same containerized Next.js starter kit for the sake of a PoC?

Please note: you should not use this approach for hosting real-life projects, or for anything bigger than a quick PoC or demo showcase. The stability of the tunneled channel is entirely at the courtesy of the service provider, and it may also violate your particular license terms, so please use it with care.

The next question is how to make it accessible from the global internet, so that people giving demos can log in from wherever they are and work with Sitecore as they normally would. Typically, making that happen takes three steps:

  1. Define a hostname and configure Sitecore to install with its subdomains.
  2. Generate a wildcard certificate for the domain of the above hostname.
  3. Make the required DNS changes so that the A-record and subdomains point to the public IP of that machine.

But wait a bit – do I have a public IP? Sadly, I don’t, so I started looking at various DynDNS options, which still required more effort than I was initially willing to commit. Eventually, I remembered a specific class of tunneling software that serves exactly this purpose. Of the wide range available, LocalTunnel appeared to be the most promising free-to-use solution; some folks use it to proxy out their basic sites for demos.

Its feature list looks very attractive:

  • it is totally free of charge
  • does not require any registration/tokens
  • ultrasimple installation with npm
  • because of the above, it potentially can tunnel directly into containers
  • gives you the option of temporarily claiming a subdomain, if one is available
  • allows upstream hosts with invalid SSL certificates

The typical installation and execution are ultra-simple:

npm install -g localtunnel
lt --port 8080

After the second command, LocalTunnel responds with a URL; navigating to it tunnels your requests to port 8080 of the host machine it was run on.

But how do I apply that knowledge to a complicated Sitecore installation, given that most of the Sitecore services in containers sit behind Traefik, which also serves as the SSL offload point? In addition, the Identity Server requires a publicly accessible URL to return successfully authenticated requests.

The more advanced call syntax looks like this:

lt --local-host HOST_ON_LOCAL_MACHINE --local-https --allow-invalid-cert --port 443 --subdomain SUBDOMAIN_TO_REQUEST

Basically, for Sitecore to operate from outside, I must set it up so that the external URLs match the URLs served locally on the host where LocalTunnel runs. With the command above, if the subdomain request is satisfied, the site is served at https://SUBDOMAIN_TO_REQUEST.loca.lt, which leads to HOST_ON_LOCAL_MACHINE on port 443.

So, in a headless Sitecore we have four typical parts running on subdomains of a hostname served by a wildcard certificate:

  • Content Management (aka Sitecore itself)
  • Content Delivery
  • Identity Server
  • Rendering Host

OOB they are served by default as cm.YourProject.localhost, cd.YourProject.localhost, id.YourProject.localhost and www.YourProject.localhost respectively. To match HOST_ON_LOCAL_MACHINE to SUBDOMAIN_TO_REQUEST for this exercise, I chose the following hostnames for the installation:

The scripts that create the Next.js Starter Kit template and the Init.ps1 script don’t apply all the required hostname changes, so in case you do it manually (recommended), here are the locations to change:

1. Init.ps1 - a block that makes and installs certificates (search by & $mkcert -install)

2. Init.ps1 - a block that adds host file entries (search by Add-HostsEntry)

3. Init.ps1 - a block that sets the environment variables (search Set-EnvFileVariable "CM_HOST")

4. Up.ps1 - authentication using Sitecore CLI (search by dotnet sitecore login)

5. Up.ps1 - final execution in the browser (search by Start-Process at the bottom of the file)

6. .env file - replace CM_HOST, ID_HOST, RENDERING_HOST and CD_HOST variables

7. Make sure Traefik config (docker\traefik\config\dynamic\certs_config.yaml) references the correct certificate and key files

8. Create-jss-project.ps1 - the --layoutServiceHost and --deployUrl parameters of the jss setup command

9. src\rendering\scjssconfig.json

10. src\rendering\src\temp\config.js
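With all ten locations updated, the .env host values end up pointing at loca.lt names. A minimal sketch, assuming the subdomains sitecore, delivery, identity, and rendering (the ones requested by the LocalTunnel commands later in this post) are granted:

```
# Sketch of the resulting .env host entries (assumes these loca.lt subdomains are free)
CM_HOST=sitecore.loca.lt
CD_HOST=delivery.loca.lt
ID_HOST=identity.loca.lt
RENDERING_HOST=rendering.loca.lt
```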

Once the whole installation completes successfully, you will see the Sitecore CM and Rendering Host in the browser under the altered domain URLs.

Now you can start LocalTunnel:

lt --local-host identity.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain identity
lt --local-host sitecore.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain sitecore
lt --local-host rendering.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain rendering
lt --local-host delivery.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain delivery
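Since the four commands differ only in the subdomain, a small loop can generate (or, with minor changes, launch) them; here is a sketch that just prints each command instead of executing it:

```shell
# Print the four LocalTunnel commands; subdomain names match the setup above.
# Replace 'echo' with nothing and append '&' to actually launch them in background.
for s in identity sitecore rendering delivery; do
  echo lt --local-host "$s.loca.lt" --local-https --allow-invalid-cert --port 443 --subdomain "$s"
done
```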

On the first run from outside, it may show a notification screen saying that LocalTunnel serves the given URL; after that, and given reasonably good ping, that’s it.

I briefly tested it, and it works well: no SSL issues, Experience Editor runs and accepts changes, then publishes them correctly so that they are reflected when browsing the Rendering Host. Everything seems to work as expected!

LTSC2022 images for Sitecore containers released: what does it mean to me?

Exciting news! Sitecore kept its original promise and released new ltsc2022 container images for all topologies of both the 10.3 and 10.2 versions of the platform.

The biggest benefits of the new images are improved size – almost 50% smaller than ltsc2019 – and support for running process isolation on Windows 11.


So, what does that mean for developers and DevOps?

First and foremost, running Sitecore 10.3 on Windows Server 2022 is now officially supported. You may consider upgrading your existing solutions to benefit from the Server 2022 runtime.

Developers working on Windows 11 finally get the long-wanted support too: containers built from the new images can run in process isolation mode without a hypervisor, which brings cluster performance to nearly bare-metal metrics.

Let's try it in action!

I decided to give it a try and see whether it works and how effectively. I recently purchased a new Microsoft Surface Pro 8 laptop, which came with Windows 11 pre-installed and was therefore useless for my professional purposes – so it seemed to be excellent test equipment.

After initial preparation and installing all the prerequisites, I was ready to go. For the codebase, I decided to go with the popular Sitecore Containers Template for JSS Next.js apps and the Sitecore 10.3 XM1 topology, as the most proven and well-preconfigured starter kit.

Since I initialized my codebase with the -Topology XM1 parameter, all the required container configuration is located under the /MyProject/run/sitecore-xm1 folder. We are looking for the .env file, which stores all the necessary parameters.

The main change here is setting two environment variables to benefit from the ltsc2022 images:


The other important change in the .env file is setting ISOLATION=process. Also note that TRAEFIK_ISOLATION=hyperv stays unchanged due to the lack of ltsc2022 support in Traefik, so sadly you still need Hyper-V installed on this machine. The difference is that it serves only Traefik; the rest of the Sitecore resources will work in process mode.
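For reference, the relevant .env fragment might look like the following sketch; SITECORE_VERSION and EXTERNAL_IMAGE_TAG_SUFFIX are my assumption of the standard container template's variable names, so verify them against your own .env:

```
# Assumed .env fragment for the ltsc2022 switch (names per the standard template)
SITECORE_VERSION=10.3-ltsc2022
EXTERNAL_IMAGE_TAG_SUFFIX=ltsc2022
ISOLATION=process
TRAEFIK_ISOLATION=hyperv
```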

I also made a few optional improvements, upgrading important components to their recent versions:


I also changed node to reflect the recent LTS version:


Please note that sitecore-docker-tools-assets did not change from the previous version of Sitecore (10.2), so I left it untouched.

One last thing: to make sure I indeed build and run in process isolation mode, I set ISOLATION=process, changing this value from the default. The rest of the .env file was correctly generated for me by the Init.ps1 script.

All changes complete, let’s run .\up.ps1 in an administrative PowerShell terminal and wait until it downloads and builds the images:

Advanced Part: building Traefik with ltsc2022

Now, let's get rid of the only remaining 1809-based container, which is Traefik. Luckily, its Dockerfile is available, so I can rewrite it to consume ltsc2022 images. In addition, I took the latest (at the time) version, 2.9.8, while the officially supported one is 2.2.0, so it makes sense to parametrize the version as well, taking it from the .env settings.

I created a new docker\build\traefik folder and ended up with the following Dockerfile within it:

ARG IMAGE_OS
FROM mcr.microsoft.com/windows/servercore:${IMAGE_OS}

ARG VERSION
ENV VERSION=${VERSION}

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

RUN Invoke-WebRequest \
        -Uri "https://github.com/traefik/traefik/releases/download/$env:VERSION/traefik_${env:VERSION}_windows_amd64.zip" \
        -OutFile "/traefik.zip"; \
    Expand-Archive -Path "/traefik.zip" -DestinationPath "/" -Force; \
    Remove-Item "/traefik.zip" -Force

ENTRYPOINT [ "/traefik" ]

# Metadata
LABEL org.opencontainers.image.vendor="Traefik Labs" \
    org.opencontainers.image.url="https://traefik.io" \
    org.opencontainers.image.source="https://github.com/traefik/traefik" \
    org.opencontainers.image.title="Traefik" \
    org.opencontainers.image.description="A modern reverse-proxy" \
    org.opencontainers.image.version=${VERSION}

Because of that, I also had to update the related section of the docker-compose.override.yml file:

  traefik:
    isolation: ${ISOLATION}
    image: ${REGISTRY}traefik:${TRAEFIK_VERSION}-servercore-${EXTERNAL_IMAGE_TAG_SUFFIX}
    build:
      context: ../../docker/build/traefik
      args:
        IMAGE_OS: ${EXTERNAL_IMAGE_TAG_SUFFIX}
        VERSION: ${TRAEFIK_VERSION}
    volumes:
      - ../../docker/traefik:C:/etc/traefik
    depends_on:
      - rendering

What I want to draw attention to here: I am now using ${ISOLATION}, just as the rest of the containers do, instead of the dedicated TRAEFIK_ISOLATION, which can now be removed from .env.

Another thing is that I am passing a fully parametrized image name:


I intentionally do not prefix it with ${COMPOSE_PROJECT_NAME}, so that this image becomes reusable across several solutions on the same machine, which saves some disk space.

The last step is adding the .env parameter TRAEFIK_VERSION=v2.9.8 and removing the TRAEFIK_IMAGE parameter, which is no longer needed. Good to go!
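After these steps, the Traefik-related .env entries boil down to this sketch (TRAEFIK_IMAGE and TRAEFIK_ISOLATION removed, with Traefik now sharing the common isolation setting):

```
# Traefik entries after the rework
TRAEFIK_VERSION=v2.9.8
ISOLATION=process
```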

Outcomes and verdict

I tested all of the important features of the platform, including Experience Editor, and it all works – and, what is especially important, works impressively fast in process isolation mode. And since all the containers are built with ltsc2022 and run in process isolation, one doesn't need Hyper-V at all!

As for me, I ended up having a nice and powerful laptop suitable for modern Sitecore headless operations.

Enjoy faster development!

Content Hub One full review: the good, the bad, and the ugly

Most of my readers know me as a dedicated Sitecore professional; however, those who are close to me are aware of my variety of hobbies. Some also know me as a Scotch whisky expert and collector. After living in the UK for almost 15 years, I built a pretty decent collection of these spirits and learned hundreds of facts from visiting dozens of whisky distilleries in Scotland.

Once I got my hands on the new SaaS offering from Sitecore – Content Hub One – I decided to try it out on a practical example, testing its capabilities as if building a real application. What to use for the demo? Something I know a lot about – that's how showcasing my whisky collection was chosen. Let's go all the way, starting with content modeling, through actual data and media authoring and publishing, and eventually creating a headless app for content delivery.


First look

Once I got access to Content Hub ONE, I was curious what I could do with it. After logging in through the portal, I was presented with an ascetic main interface:

It exactly mirrors your expected activities here: Content Types is used for content modeling, Media is for uploading media assets, and Content is for creating content from your types and referencing uploaded media.

Content Hub One comes with handy documentation that helps understand the operations.

Content Modeling

For my purpose, I need to set up two content types: a listing type featuring items from the collection, and an item type to be used on the corresponding pages (marketers know them as PLP and PDP).

Let's start with a Whisky type, which represents an actual item from my collection. You can only choose from these basic field types:

  • Text can be either a short single-line value or multi-line long text of up to 50,000 characters
  • Rich text includes markup and can hold even more – 200,000 characters. It does not accept raw HTML.
  • Number, Boolean, and Date/Time are obvious and speak for themselves.
  • Reference gives the ability to link other content records to the item, with an unfortunate limit of 10 items per field
  • Media is similar to the above, with the difference that it references uploaded media items.

Unfortunately, some crucial fields are missing, such as those used for storing Links, URLs, and email addresses.

I ended up with the following structure for the Whisky item type, which features as many of the various field types as possible:

Next, let's create a Collection type to include a collection of items as well as some descriptive content within Rich text type:

Pay attention to the Archive field. From the home page, I want to distribute a zip archive with all 50 images of my collection, so I included this media field. The challenges of this implementation are described below.


Content Hub ONE users can upload media, which then gets published to the Experience Edge CDN. However, usage is limited to images only: GIF, JPG, PNG, and WEBP formats.

That is not sufficient for my demo purposes. I also need to upload videos of creative ads for each of my whisky items, as referenced by the Whisky type, and I want to upload a ZIP archive with all 50 images featuring my entire collection, referenced by the Collection type. This is nothing extraordinary and is very common for content-powered websites.

So, the question is – can I upload archives and videos? Officially, no, you cannot. However, nothing stops you from renaming your assets to something like video.mp4.jpg or archive.zip.jpg so that they pass upload validation and actually get uploaded and later published to Edge. With a 70 MB limit per media item, it can host most reasonably converted videos, archives, or whatever else you may want to put there.
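As a sketch of that workaround (entirely unofficial; the file names here are hypothetical stand-ins for real assets):

```shell
# Hypothetical workaround, use at your own risk: disguise non-image files
# as JPGs so they pass the image-extension upload validation.
printf 'fake video bytes' > video.mp4      # stand-in for a real ad video
printf 'fake zip bytes'   > archive.zip    # stand-in for the image archive

cp video.mp4  video.mp4.jpg                # upload these *.jpg files instead
cp archive.zip archive.zip.jpg

# On the consuming side, strip the fake extension after download;
# the bytes are untouched by the rename.
mv video.mp4.jpg downloaded-video.mp4
cmp -s video.mp4 downloaded-video.mp4 && echo "bytes survived the rename"
```

The rename changes only the extension the validator looks at, not the content, which is why the head application can serve the file under its real MIME type later.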

Note: since anything other than images is officially unsupported, please be aware that you may lose access to that content at any point. Use it at your own risk!

Further below, I will show how to build a head application that can consume such content, including the "alternative" unsupported media types.


There is documentation for developers – a good start, at least.


Content Hub One comes with a helpful CLI and useful documentation. The CLI supports installation via Docker, but for local installation I personally enjoy the support for my favorite Chocolatey package manager:

choco install Sitecore.ContentHubOne.Cli --source https://nuget.sitecore.com/resources/v2

With the CLI, you execute commands against tenants, with only one tenant active at a time. Adding a tenant is easy, but to do so you must provide the following four parameters:

  • organization-id
  • tenant-id
  • client-id
  • client-secret

Using the CLI, you can do serialization just as with the XP/XM platforms and see the differences – a pretty important feature here. I pulled all my content into a folder using the ch-one-cli serialization pull content-item -c pdp command, where pdp is my type for whisky items:

The serialized item looks as below:

id: kghzWaTk20i2ZZO3USdEaQ
name: Glenkinchie
    value: 'Glenkinchie '
    type: ShortText
    type: ShortText
    value: 12
    type: Integer
    value: >
      The flagship expression from the Glenkinchie distillery, one of the stalwarts of the Lowlands. A fantastic introduction to the region, Glenkinchie 12 Year Old shows off the characteristic lightness and grassy elements that Lowland whiskies are known for, with nods to cooked fruit and Sauternes wine along the way. A brilliant single malt to enjoy as an aperitif on a warm evening.
    type: LongText
    - >-
        "type": "Link",
        "relatedType": "Media",
        "id": "lMMd0sL2mE6MkWxFPWiJqg",
        "uri": "http://content-api-weu.sitecorecloud.io/api/content/v1/media/lMMd0sL2mE6MkWxFPWiJqg"
    type: Media
    - >-
        "type": "Link",
        "relatedType": "Media",
        "id": "Vo5NteSyGUml53YH67qMTA",
        "uri": "http://content-api-weu.sitecorecloud.io/api/content/v1/media/Vo5NteSyGUml53YH67qMTA"
    type: Media
After modifying it locally and saving the changes, it is possible to validate and promote them back to Content Hub ONE CMS. With that in mind, you can automate all of the above for your CI/CD pipelines using PowerShell, for example. I would also recommend watching this walkthrough video to see the Content Hub ONE CLI in action.


There is a client SDK available with support for two languages: JavaScript and C#. For the sake of simplicity and speed, I decided to use the C# SDK for my ASP.NET head application. At first glance, the SDK looked decent and promising:

And quite easy to deal with:

var content = await _client.ContentItems.GetAsync();

var collection = content.Data
    .FirstOrDefault(i => i.System.ContentType.Id == "collection");

var whiskies = content.Data
    .Where(i => i.System.ContentType.Id == "pdp");
However, it has one significant drawback: the only way to get media content into a head application is via Experience Edge & GraphQL. I came to this conclusion after spending a few hours troubleshooting and trying various approaches; unfortunately, I did not find anything about this in the documentation. In any case, with GraphQL querying Edge, my client code looks nicer and cleaner, with fewer queries and fewer dependencies. The one and only dependency I needed was the GraphQL.Client library. The additional step for querying Edge is setting the X-GQL-Token header with a value you obtain from the Settings menu.

The advantage of GraphQL is that you can query against the endpoints specifying quite complex structures of what you want to get back as a single response and receive only that without any unwanted overhead. I ended up having two queries:

For the whole collection:

  collection(id: "zTa0ARbEZ06uIGNABSCIvw") {
    archive {
      results {
        ... on Pdp {
          picture {
            results {
              # field selection truncated in the original post
            }
          }
        }
      }
    }
  }

And for specific whisky record item requested from a PDP page:

  pdp(id: $id) {
    picture {
      results {
        # field selection truncated in the original post
      }
    }
    video {
      results {
        # field selection truncated in the original post
      }
    }
  }

The last query results get easily retrieved in the code as:

var response = await Client.SendQueryAsync<Data>(request);
var whiskyItem = response.Data.pdp;


Some challenges occurred on the front-end part of the head application.

When dealing with Rich Text fields, you have to come up with your own logic (my inline oversimplified example, lines 9-50) for rendering HTML output from the JSON structure you get for that field. The good news is that .NET deserializes it nicely, so that you can at least iterate through this markup:
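To make the idea concrete, here is a minimal sketch of such rendering logic, written in TypeScript for brevity (my actual helper was C#). The node shape – nested nodes with a type, optional text, and a content array – is an assumption modeled on the JSON I received; adjust it to your actual field structure:

```typescript
// A hedged sketch of rendering rich-text JSON into HTML.
// The node shape (doc/paragraph/text with a `content` array) is an assumption -
// verify it against the JSON your Rich Text field actually returns.
type RichTextNode = {
  type: string;
  text?: string;
  content?: RichTextNode[];
};

function renderNode(node: RichTextNode): string {
  const children = (node.content ?? []).map(renderNode).join("");
  switch (node.type) {
    case "doc":
      return children;
    case "paragraph":
      return `<p>${children}</p>`;
    case "text":
      return node.text ?? "";
    default:
      // Unknown node types: render children and skip the wrapper.
      return children;
  }
}

// Example: a minimal document with a single paragraph.
const doc: RichTextNode = {
  type: "doc",
  content: [{ type: "paragraph", content: [{ type: "text", text: "Hello" }] }],
};
console.log(renderNode(doc)); // → <p>Hello</p>
```

A real implementation would also need to handle marks (bold, links, etc.) and escape the text values.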

Sitecore provides an extremely helpful GraphQL IDE tool for us to test and craft queries, so below is how the same Rich Text field value looks in JSON format:

You may end up wrapping all the clumsy business logic for rendering Rich Text fields into a single HTML helper producing the HTML output for the entire field, which may accept several customization parameters. I did not do that as it is labor-heavy, but for the sake of example, I produced such a helper for the Long Text field type:

public static class TextHelper
{
    public static IHtmlContent ToParagraphs(this IHtmlHelper htmlHelper, string text)
    {
        var modifiedText = text.Replace("\n", "<br>");
        var p = new TagBuilder("p");
        p.InnerHtml.AppendHtml(modifiedText);
        return p;
    }
}

which can be called from a view as (Description here being the long-text property on the view model):

@Html.ToParagraphs(Model.Description)


Supporting ZIP downloads

On the home page, there is a download link sitting within Rich Text content. This link references a controller action that returns a zip archive with the correct MIME type.

public async Task<IActionResult> Download()
{
    // fetching the whole collection here is overkill, ideally this should be a targeted query
    var collection = await _graphQl.GetCollection();

    if (collection.Archive.Results.Any())
    {
        var url = collection.Archive.Results[0].FileUrl;
        var name = collection.Archive.Results[0].Name;
        name = Path.GetFileNameWithoutExtension(name);

        // gets actual bytes from the ZIP binary stored as CH1 media
        var binaryData = await Download(url);
        if (binaryData != null)
        {
            // suggest a filename for the browser's download dialog
            Response.Headers.Add("Content-Disposition", $"attachment; filename={name}.zip");

            // return the binary data as a FileContentResult with the correct MIME type for a zip file
            return File(binaryData, "application/zip");
        }
    }

    return StatusCode(404);
}

Supporting video

For the sake of a demo, I simply embedded a video player into the page and referenced the URL of the published media from the CDN:

<video width="100%" style="margin-top: 20px;" controls>
    <source src="@Model.Video.Results[0].FileUrl" type="video/mp4">
    Your browser does not support the video tag.
</video>

Bringing it all together

I built and deployed the demo at https://whisky.martinmiles.net. You can also find the source code of the resulting .NET 7 head application project at this GitHub link.

Now, run it in a browser. All the content seen on the page is editable from Content Hub ONE, as modeled and submitted earlier. Here's what it looks like:


The Content Hub ONE developers did a great job in the shortest time, and there should be no complaints about that. However, from my point of view, there is a number of both minor and major issues that prevent using this platform in its current state for commercial purposes. Let's take a look at them.

1. Lack of official support for media items other than the four image types is a big blocker, especially given that there is no technical barrier to it in principle. Hopefully, that gets sorted out with time.

2. Many times while working with CH1, I got phantom errors without understanding the cause. For example, I wanted to upload media but got Cannot read properties of undefined (reading 'error') in return. Later I realized it was caused by session expiration, which for some reason is not handled well in these cases. What is more frustrating, I got these session issues even after just navigating the site, as if navigation did not reset the session expiration timer. But since this is a SaaS product, these are only my guesses, without access to the internals.

3. Another issue experienced today was CH1 going down, with the UI showing me a Failed to fetch error. That also affected my cloud-deployed head app, which likewise failed to fetch content from CH1. Unannounced/planned maintenance?

4. Not being able to reference more than 10 other records seriously limits platform usage. In my specific example, I had around 50 whisky items to expose through this app but was able to include only a maximum of 10 of them. What is worse, there are no error messages around it, nor does the UI inform me about the limitation in any other way.

5. When playing around with an existing type, I cannot change a field's type, and that limitation is understandable. The obvious workaround would be to delete that field and recreate it with the same name but another type (let's assume there's no content to be affected). Wrong, that's not possible and ends with a Failed entity definition saving with name: 'HC.C.collection' error. You can only recreate the field with a new name, not the same one you've just deleted. If you have lots of queries in your client code, you need to locate and update them correspondingly.

6. Not enough field types. For example, a URL could simply be placed into a small text field, but without proper validation, editors may end up with broken links if they put a faulty URL value on a page.

There is also some UI/UX to be improved:

1. Content Hub ONE demands more clicks for content modeling compared to, let's say, XP. For example, if you publish a content item, related media does not get published automatically. You need to manually click through the media, locate it, and publish it explicitly. On a large volume of content, that annoys and adds unwanted labor.

2. To help with the above, why not add a "Publish" menu item to the context menu of an uploaded item in a Draft state? That would eliminate the unwanted step of clicking into the item to publish it.

3. On big monitors, the name of a record sits isolated in the top left corner; since it is not located within a form field, it is not immediately obvious that it is editable. That is especially important for records that cannot be renamed after creation. Bringing the name closer to the other fields would definitely help!

4. Lack of drag & drop. It would be much easier to upload media by simply dragging files onto a media listbox or any other reasonable control.

5. Speaking of media, the UI does not support selecting multiple files for upload. Users have to click them one after another.

6. Better UI is needed around grouping and managing assets. Currently there are facets, but something more is needed, maybe the ability to group records into folders. I don't have a definite view on that, but I definitely see the need for such a feature, as even my ultra-simple demo case already requires navigational effort.


I don't want to end with criticism only, leaving a negative impression of this product: there are plenty of positives as well. I would mention just a few: decent SDKs; attention to detail where a feature is actually implemented (like the order of referenced items following the order in which you select them); and the nice idea of a modern asynchronous UI that can notify you when a resource gets published to Edge (the session expiration issues just need to be sorted out).

Content Hub ONE is definitely in the early stages of its career. I wish the development team and product managers to eventually overcome the product's "growing pains" and deliver a lightweight but reasonably powerful headless CMS that speeds up the content modeling and content delivery experience. The foot is already in the door, so the team just needs to push on it!

Merry Christmas and happy New Year!

Every year I create a special Christmas postcard to congratulate my readers on the oncoming new year, full of changes and opportunities. Wish you all the best in 2022!

My artwork for the past years (click the label to expand)







The ultimate guide to Sitecore XM Cloud

If you ask anyone in the Sitecore community what the biggest hype of 2022 was, there wouldn't be any opinion other than XM Cloud. Let's take a look at this latest and shiny SaaS offering!


Overview, Architecture, Features


Sitecore XM Cloud is a cloud-based platform that provides tools and features for managing and optimizing digital customer experiences. The platform is a headless SaaS solution initially designed to resolve certain pain points the XP platform previously suffered from, including:

  • Platform upgrades: one of the time-consuming and uncertain activities that eats up a significant budget but on its own brings very few new features. I have written a decent series of blog posts exclusively devoted to Sitecore platform upgrades and the accompanying complexities. With the SaaS model, you get upgrades done for you automatically by Sitecore.
  • Infrastructure management and maintenance, security maintenance, taking care of scalability: all that was previously done by the end clients or partners now is fully managed by Sitecore itself.
  • A bottleneck of Sitecore developers: the limited availability of developers with specific knowledge of Sitecore internals could create a staffing bottleneck when building and maintaining Sitecore-based applications, owing to the complexity of the platform, limited documentation, dependence on the Sitecore support team, and limited resources. Since most of that is now provided by the vendor, development shifts mainly to the "head" and requires generic front-end skills that are much easier to come by.
  • Platform architectural bottleneck: in addition to all the above, scaling up your CD servers in a non-headless environment was previously quite a bottleneck and came at a cost, compared to XM Cloud, which does not have CDs at all and serves content only via highly available Edge API endpoints.

That results in a drastically improved development process, velocity, and reliability, not to mention saved budgets. Clients can mostly focus on building the sites for visitors, while the platform itself is taken care of by the vendor.


Before talking about licensing, it is important to understand how this SaaS offering is structured.

XM Cloud hosting platform includes the concepts of organizations, projects, and environments. An organization is the equivalent of an XM Cloud subscription. To initially access the XM Cloud portal, you must be a member of an “Organization” admin account at Sitecore. Once you have access to this account, you can manage your users, give them the required access or even set them as admins, create projects, set up environments for those projects, and promote your code through these environments. In old XP terminology, each environment is a Sitecore instance.

There are three environments per project: one production and two non-prod – typically for development, staging, testing, or other purposes. With the XM Cloud portal, you can easily manage and access all of your Sitecore instances in the cloud.

Coming back to the licensing model: it is similar to a subscription license for the Sitecore platforms, and its cost model was simplified considerably, being based on traffic and consumption.

Each XM Cloud license includes a certain number of projects and environments, and you can use these to manage your different Sitecore instances. Depending on your specific requirements, you may be able to use different projects to handle different instances. You can use environments to set up and configure your development, staging, testing, and production environments, as well as any other environments you may need. If you have any questions about how to best use projects and environments, or if you have any other licensing questions, it is recommended that you speak with your Account Manager, who will be able to provide more information and guidance to meet your specific needs.

Interestingly, in order to run local Docker containers with XM Cloud, you are required to have a valid Sitecore license. I cannot say whether your existing partner's license file will work for local container development; being a Sitecore MVP, I was given a universal Sitecore license file, which worked well. For building and deploying source code with the built-in Deploy App, you don't need to provide a license file – it is assumed from your organization (subscription).

xm cloud license


In very simplified terms, the architecture could be explained by the below diagram provided by Sitecore:

XM Cloud architecture

Not all the internals of the architecture are shown above (i.e. ACR, Kubernetes, etc. are omitted), but should you really care about anything within the dashed area? All of that is Sitecore-managed, and developers typically focus on the development of the front-end website (also known as a "head"), which is most often built with Next.js. Of course, other frameworks would also work with XM Cloud; however, there's lots of plumbing to be done that Next.js provides out of the box.

One of the most important features of XM Cloud is the Webhook Framework. It is built into XM Cloud in the same way as it is for the 10.3 platform. In a composable world of decoupled SaaS products, webhooks are used to notify external services about changes in XM Cloud, for example, to validate and even cancel workflow state transitions.

For example, in the good old XP platform we used events to notify that publishing to a CD had been completed. One of the possible scenarios was using the Core database to pass the remote events, as that was an architectural feature of a monolithic platform. In a composable world that cannot be the case, as systems cannot share resources that way and can only communicate through APIs. You don't have CDs; you publish to Edge, which reasonably also has its own webhooks. You could also utilize webhooks on the git repository or on the Vercel side, for example.
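As a thought experiment, a workflow-validation webhook receiver boils down to a function over the incoming payload. The payload shape and the decision rule below are pure assumptions for illustration – consult the Webhook Framework documentation for the actual contract:

```typescript
// Hedged sketch of the logic behind a workflow-validation webhook endpoint.
// The payload shape below is an assumption for illustration only.
type WorkflowWebhookPayload = {
  itemId: string;
  nextState: string;
};

// Example rule: block promotion to "Approved" for items on a frozen list.
const frozenItems = new Set(["{0DE95AE4-41AB-4D01-9EB0-67441B7C2450}"]);

function allowTransition(payload: WorkflowWebhookPayload): boolean {
  if (payload.nextState === "Approved" && frozenItems.has(payload.itemId)) {
    return false; // the HTTP endpoint would respond here with a rejection
  }
  return true;
}
```

The actual HTTP plumbing (and how a rejection must be expressed in the response) is defined by the Webhook Framework contract, not by this sketch.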

With some obvious architectural limitations, it is possible to customize XM Cloud in a similar way as we did with XP by applying patches, but the expectation is that developers will customize less and less as the platform grows. From a functionality point of view, these customizations would focus on data and synchronization rather than patching system features.

Speaking about the drawbacks of XM Cloud, I could name the single-region geolocation, which you must specify initially.

Cloud Portal

It is the visual dashboard of your Organization. Here you get all your tools in the same place, based on what you have in your subscription. In addition, there are shortcuts to Documentation, and you can access Support from the Portal as well.

cloud portal

You can create projects and environments right from this portal. Choose between a starter template or setting up your own GitHub repository; if the latter is chosen, then once you grant the XM Cloud Deploy App access to your account and choose the desired repository, it will perform the deployment in the background.

You choose a specific git branch for the desired environment (for example, the main branch deploys to the production environment, while the develop branch deploys to testing) and can also enable auto-deploy upon each commit to the chosen branch. There is nothing else to set up or configure on top of that, like CI/CD pipelines – the Deploy App already knows what to do.

Is GitHub the only way to provide source code for build and deployment? The answer is both yes and no: the GUI indeed currently supports only GitHub, with later plans to add support for other popular version control hosting providers, such as Azure DevOps, GitLab, Bitbucket, etc. But from the CLI one has more configuration options.

And not just that – almost everything you can do on the portal with the GUI, you can also do remotely with the CLI. However, many of the Cloud Portal tools are not available for local development: Pages, Components, Explorer, Deploy App, and the Dashboard itself all run exclusively in the cloud and cannot be run locally.

So, how would I develop on my local rendering host and then test it with Pages or Experience Editor running in the cloud? Currently, there's a workaround: configure tunneling for the local Rendering Host with a reverse proxy like ngrok or localtunnel, so that your local rendering host server becomes reachable from the outside.
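For instance, with ngrok installed, exposing a local rendering host could look like this (port 3000 is just the typical Next.js default; ngrok prints the public https URL that you then register as the rendering host):

```shell
# Tunnel the local rendering host (assumed to listen on port 3000) to a public URL
ngrok http 3000
```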

If using the default GitHub flow is not an option, or you want to customize the automation and/or set up your own CI/CD pipeline, there's another feature of the Cloud Portal named Authentication Clients – an access token generator for XM Cloud.

This is how the documentation describes it:

“When you generate an authentication client, the client creates credentials that include a client ID and a client secret. You can use the credentials to request a JSON Web Token for your CM instance or to request a JWT for Experience Edge XM.”

So, that is an effective tool for creating tokens that authorize custom tools for automation and/or your own CI/CD pipeline.

The third tab, named Status, provides a basic but helpful overview of the deployment process.

I would recommend reading through the official Cloud Portal documentation.


With the XP platform we used to have an Identity Server; however, it is no longer useful in a genuinely composable world. Sitecore had to rethink the authentication approach and implemented a new Unified Identity system, which offers SSO across all applications of the Composable DXP family.


XM Cloud integrates with Content Hub DAM and CMP out of the box: the CMP/DAM connector is now included in the base image.

Sitecore Experience Edge

Experience Edge for XM comes out of the box with XM Cloud and is the default destination for publishing content. It is a content delivery service that provides scalable, globally replicated access to Sitecore Experience Platform items, layout, and media through an API. It uses CDN networks to distribute published content across the globe, ensuring a fast experience at scale.

The Edge for XM connector is required to publish the content from your pages and components in Sitecore XM Cloud to the highly scalable Sitecore Experience Edge delivery platform. It is also included.

The GraphQL schema used by Experience Edge in XM Cloud is the same as the one used for the 10.3 platform (with a minor difference in temporal query complexity limits). However, the XM Cloud schema is different from those used in Content Hub and Content Hub ONE, as they implement different underlying data structures.

Speaking about security: previously on XP, when we dealt with "protected" pages, security data went along with published content to the CD servers, where the Sitecore platform enforced it correctly. With XM Cloud we don't have a CD server any longer: content gets published to Edge regardless of permissions. Of course, it is not immediately available to the outer world – without a valid API key it is not possible to access it. But instead of a fully compliant Sitecore CD server we now have a "head", which is a totally detached piece of technology. Given the API key, the head may or may not respect the security rules – Edge will give the content out anyway. So you should take extra care and test these things while developing the head application on your rendering host.

Familiar XP tools that have gone out of XM Cloud

Since we do not have xDB and xConnect within the XM Cloud architecture, many features from XP did not find their way into the new composable world. We have to say goodbye to:

  • xDB
  • EXM
  • Identity Server
  • FXM
  • Path Analyzer

In addition, because of the exclusively headless architecture of XM Cloud, with no CD servers and operating through Edge, these features also went off this SaaS platform:

  • MVC renderings, including classical SXA
  • Publishing Service
  • Sitecore Forms
  • Custom Search Indexes
  • Universal Tracker

Sitecore Forms is gone simply because there is nowhere to submit the data to. It is expected that Sitecore will come up with some sort of SaaS forms component to be used with the composable family of products. Meanwhile, you can consider using something like Jotform, Marketo, HubSpot Forms, etc.

As for implementing search functionality: Solr is still a part of this platform to support XM needs (the same as it does for XP); however, without CD servers there is no longer a reason to have custom search indexes. Implementing a search feature for your website now requires a composable search component. I will talk through it in more detail below.

Sitecore platform features that still remain

  • Media Library exists and functions exactly the same as with XP
  • The above could be said about Workbox and workflows
  • Content Editor and Experience Editor remain untouched
  • We still have the luxury of poking the system internals with Sitecore PowerShell Extensions; this component is built in, since it is a requirement for the equally built-in Headless SXA
  • Many other less important features stay

As said above, custom search indexes are no longer available, and one has to select from a choice of composable options: Sitecore Search, Coveo, or Algolia are the first that come to mind. All these products are platform-agnostic and will integrate seamlessly with any site regardless of the underlying CMS or web engine.

There might be several approaches for implementing external search with XM Cloud:

  • Using search-based queries from Experience Edge directly. The development team can write searches against the Experience Edge endpoints and call them from their application.
  • Using a crawling search engine. Sitecore Search has been released and has already been proven on several implementations, including the sitecore.com website. Currently, it indexes rendered HTML markup; however, Edge support should come with future releases.
  • There's still the option of setting up and configuring your own external Solr solution and sending content to it. SearchStax Studio for XM Cloud could be another alternative.

Headless SXA

The headless part of SXA is provided with Sitecore XM Cloud. It allows developers to build and deploy headless websites on XM Cloud, taking advantage of all the known existing SXA features, such as Page Layout, Partial Layout and, especially, Rendering Variants.

In addition, new concepts were introduced: Page Branches, and standard values for layout on a per-website basis.

What I found really great was the out-of-box implementation of SEO-related things, such as:

  • sitemaps
  • robots.txt
  • redirect items
  • redirect maps
  • errors (404 and 500)

This version of SXA supports Next.js as a first-class citizen and therefore has a revised list of components, leaving only the most basic ones (here they all are). Over years of SXA practice, cloning components and adding rendering variants to existing ones became the default way of doing things. Likewise, Rendering Variants remain with us in the headless world; however, Scriban is no longer there, replaced with Next.js. Here is an example of how Rendering Variants are defined within a single promo.tsx file in a Next.js implementation – there are two of them, named Default and WithText. It's really simple and similar to Scriban.
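Conceptually, each named export of the component file is a selectable variant. Below is a deliberately simplified TypeScript sketch of that pattern – it returns markup strings instead of JSX so it stays self-contained, and the props shape is illustrative rather than the actual JSS field types:

```typescript
// Simplified illustration of the headless SXA rendering-variant pattern:
// each named export from promo.tsx becomes a selectable variant in Pages.
// Real components receive Sitecore field objects and return JSX; plain
// strings are used here only to keep the sketch self-contained.
type PromoProps = {
  promoIcon: string;
  promoText?: string;
};

export const Default = (props: PromoProps): string =>
  `<div class="promo"><img src="${props.promoIcon}" /></div>`;

export const WithText = (props: PromoProps): string =>
  `<div class="promo"><img src="${props.promoIcon}" /><p>${props.promoText ?? ""}</p></div>`;
```

The variant an editor picks in Pages determines which export is invoked for the rendering.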

When you set up an empty site, you have the default SXA Headless components (on the components tab). Drag and drop works, as does dragging components along the screen into position. The right-hand-side section features component properties, including styling.

component tab

There is a content tab that shows item properties similar to what it was before in Content Editor.

CLI and Serialization

Similar to the XP platform, the Sitecore CLI operates through the built-in Sitecore Management Services.

Almost everything that can be done through the Cloud portal can be also done through the Sitecore CLI. The latest version of CLI allows the creation of Projects, New Environments and Deployments, and much more.
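For illustration, a typical remote session with the CLI's cloud plugin might look like the following. Treat the command names and options as a sketch and verify them against `dotnet sitecore cloud --help` for your CLI version:

```shell
# Log in to the XM Cloud organization (opens a browser for authentication)
dotnet sitecore cloud login

# Inspect projects and their environments
dotnet sitecore cloud project list
dotnet sitecore cloud environment list --project-id <project-id>

# Trigger a deployment to a specific environment
dotnet sitecore cloud deployment create --environment-id <environment-id>
```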

Sitecore CLI allows you to take existing serialized content and move it directly into XM Cloud. Since serialization is done in YAML format, it stays the same as with XP and is also compatible with Unicorn; therefore, correctly formatted content is easily portable to XM Cloud. I want to emphasize that this is only about serialized content: Unicorn itself is an XP module and cannot be installed on XM Cloud.
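For context, with Sitecore Content Serialization the items to serialize are declared in a *.module.json file, and that format is unchanged in XM Cloud. A minimal sketch (the namespace, path, and scope values below are hypothetical):

```json
{
  "namespace": "MySite.Content",
  "items": {
    "includes": [
      {
        "name": "content",
        "path": "/sitecore/content/MySite",
        "scope": "ItemAndDescendants",
        "allowedPushOperations": "CreateUpdateAndDelete"
      }
    ]
  }
}
```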

New Tools and Interfaces

The good old Content Editor and Experience Editor still remain with us, and that's good news for those who habitually stick to the interfaces they are used to. Nevertheless, this blog post will familiarize you with the totally new apps and interfaces coming with XM Cloud.


This application opens by default. It shows all of the websites available in the system, with a predictive-typing search bar to filter them. Clicking any of the sites opens it in the Pages app, as expected.

What is more interesting is the ability to create websites directly from this interface by clicking the “Create website” button.

create website


If you have played around with Horizon in the latest versions of XP, such as 10.2, you will immediately recognize this user interface. Pages is built on the foundation of Horizon:

Pages application

The Pages application has 5 views which you select from the left navigation stripe:

  • Pages
  • Components
  • Content
  • Personalize
  • Analyze

I personally find it a little confusing that some views share names with apps. For example, by clicking the Components view of the above Pages app in the left bar navigation, you will see a component selector from which you can drag and drop components onto the editable page. However, this is totally different from the Components application, which is a turbocharged tool for creating the components to be used on a page – the ones you later choose via the mentioned component selector of the Pages application.

I want to highlight that Pages was built with attention to minor details. It has very pleasant animation effects for many transitions, leaving you with a premium-experience feeling. There is no "Save" button anywhere, as the Pages application autosaves your changes.

It also has support for multiple websites selected from a dropdown. Preview, Devices Experience, Navigation Tree – all these features are located at the same interface you would expect in order to have a productive editing experience.


The tool is still in its early stages, but it has a lot of potential. Its purpose is to enable content creators and marketers to create components without the need for a developer to start from scratch.

There are lots of styling options to specify custom types and custom fonts; at the very least, it looks advanced.

Here we create the component and publish it, so that the component becomes available in the component library of the Pages app. The intention is for us to be able to assemble components from basic page elements, images, links, and so forth: starting with a grid, you simply add columns, set alignment, and apply other UI-related things.

But here is the most impressive feature of the Components app: we not only create and customize individual components similar to the way we do it in SXA – we now create entire components completely from scratch, style them, and hydrate them with data from an external data source. By saying external I mean really external: one of many 3rd-party content providers and headless CMSs, like Content Hub ONE, Contentful, Kontent.ai, etc. – and all that with zero code!

All you need to do is provide an API endpoint and use the mouse to select which fields from the source you want to map. For those who are behind a tough firewall or strict corporate policies, there is an option to directly paste in a sample JSON snippet and map against it. They don't ever have to hit the live API when creating a data source.

What is even more mind-blowing is that you can feed the same single component from various external systems at the same time. Imagine a Promo Block component that you can easily drag and drop onto a page and that consists of:

  • Heading title that sits natively at Sitecore XM Cloud
  • Image taken from Content Hub DAM
  • Localized promo content taken from Contentful CMS
  • Call to action button that is wired up to CDP

In my opinion, that is a superpower of XM Cloud editing capabilities! I very much recommend watching this short video demonstrating the whole power of the new Components app.


How is the data updated and invalidated with Components? It seems that the data is pulled in at the moment the connection is established, and does not get refreshed at the moment of publication.


A cut-down version of Sitecore Personalize comes built into the Pages interface of XM Cloud to allow basic page-view personalization. There isn't a one-to-one match between the old XP rulesets and the new ones, so a deeper analysis and re-evaluation of your personalization strategy is needed.

With XM Cloud there's a new way of doing personalization out of the box: we have a limited set of rules available to us, primarily rules based on the current session, such as the day of the week, a visitor's country, or the operating system of the visitor's current device. However, there's no way to manage the built-in Personalize tenant directly from the XM Cloud interface.

Checking carefully, you can still find some rules based on historical data: an example of a history-based rule is the number of page views a visitor has made within the past (max) 30 days. Of course, once you enable full Sitecore Personalize, its set of tools expands the functionality to a wider range of options.

Unlike XP, where we would personalize each component individually, in XM Cloud we create page variants to achieve that, so we can define the audience that will be exposed to each page variant.

Configuring the audience, you see the default set of rules; I counted 14:

  • Point of sale: The visit is to point(s) of sale
  • Region: The visitor is in region(s) during the current visit
  • Country: The visitor is in country(s) during the current visit
  • Visit day of the month: The visit is on a day of the month that compares to number based on your organization’s time zone
  • Day of the week: The visit is on a day(s) of the week, based on your organization’s time zone
  • Month of visit: The visit is in month(s), based on your organization’s time zone
  • First referrer: The visitor comes from a URL that compares to referrer in the current visit
  • UTM value: The visit includes a UTM type that compares to UTM value
  • Operating system: The visitor is using operating system(s) during the current visit
  • Device: The visitor is using a device type(s) device during the current visit
  • Number of page views: The visitor has visited page within the past x days and the total number of page views compares to a number of views
  • First page: The visit has started on a page that compares to page during the current visit
  • Page view: The visitor has visited page name(s) during the current visit
  • New or returning visitor: The visitor is a specific type to your site

After specifying the audience, you then proceed to customizing components for that page variant.

That is implemented in a manner very similar to how it was back in XP: specify the data source item with the desired content, choose a different component to replace the default one, or simply choose to hide the component.


The bigger question is how personalization works with Edge, without the CD servers that were previously responsible for executing personalization. It is important to understand that the personalization logic now happens at the Vercel/Next.js middleware level.

A middleware package intercepts browser requests at Vercel in order to do audience matching with Personalize, and then serves the relevant content from the Edge, for example by substituting one piece of personalized Edge content for another. This approach does not imply any performance penalties, since every part of this chain is extremely fast at the CDN/Jamstack level at Vercel, and because the content is already cached at Experience Edge. I would recommend spending some time reading through more details about the built-in personalization in XM Cloud by this link.
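To make the middleware flow concrete, here is a minimal TypeScript sketch of the rewrite decision such middleware makes: if Personalize returns a matching variant for the visitor, the request is rewritten to the pre-rendered page variant; otherwise the default page is served. The function name and the variant path prefix are illustrative assumptions, not the actual SDK API:

```typescript
// Conceptual sketch only – names are hypothetical, not the real JSS middleware API.
type VariantDecision = { variantId: string | null };

export function resolveVariantPath(pathname: string, decision: VariantDecision): string {
  // No matching audience: serve the default page as-is
  if (!decision.variantId) return pathname;
  // Matching audience: rewrite to the personalized variant of the same page
  return `/_variantId_${decision.variantId}${pathname}`;
}
```

The real middleware performs this rewrite transparently, so the visitor's URL never changes while a different cached variant is served.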


Another excellent embedded tool is page-level analytics, powered by Boxever: pages built with Pages have an embedded tracking tag, so analytics becomes available automatically. Out of the box it empowers you to see:

  • the browsers and operating systems visitors use
  • the sources they came from
  • the pages: first page, entry page, top pages
  • the top countries visitors came from
  • visits by source
  • visits by time of day

Analytics presents the data with a heat map, where the times of day with the most traffic are shown by darker rectangles.


Similar to the previous example with Personalize, this analytics system is now built on top of Sitecore CDP. Previously, all analytics were drawn from Sitecore xDB, and without it in the SaaS cloud offerings, the only option for providing built-in analytics and personalization was to rely on CDP and Personalize in some way.


In addition to Pages, you can use Explorer for editing content. I think the main idea of this interface is to give editors the ability to switch rapidly between visual and field-level editing interfaces.

Explorer presents the content with some similarity to Horizon running on XP: a kind of Content Editor with navigation and the ability to publish and edit items at the field level. You can do many of the expected content operations here, like modifying items, copying them, uploading and downloading media assets, etc.

Interestingly, Sitecore decided to walk away from a tree-based style of presenting the content structure in favor of drill-in navigation. It definitely makes things clearer, and when we need to look at the entire structure of the content tree, the good old Content Editor is still there to help.


Site Identifier

This is an interface for adding a tracker to your websites if it is missing, so that they become trackable with the Analyze section of the Pages app.

site identifier

That was an overview of the most important new applications and interfaces coming with XM Cloud. The next part of this post will take you through the development experience with XM Cloud.

Development with XM Cloud

In the final post of the XM Cloud series, we’ll talk about the development process with XM Cloud and its nuances. I assume you’re either familiar with the traditional way of Sitecore development or at least have a basic understanding of the process.

In order to identify the differences in development, let’s quickly recap the architectural features of XM Cloud that affect it:

  • there are no more Content Delivery (CD) servers available
  • therefore content gets published to Sitecore Edge
  • that, in turn, enforces headless development only

Sitecore Next.js SDK

That means we now have a “head” web application running on a Rendering Host. It would most likely be built with the Sitecore Next.js SDK – Next.js on top of React with TypeScript, running on a Node-powered server. Next.js is a framework created by Vercel that is built on top of the React library, designed to help developers build production-ready applications with little need for configuration and boilerplate code.

Of course, as a natively headless platform, Sitecore allows using any other framework or technology, like Vue, Angular, or .NET Renderings. However, if React is your choice, then the Next.js SDK offers lots of features out of the box, including:

  • Powerful routing mechanism
  • Layout Service fetching and mapping
  • Placeholder Resolver
  • Multi-language support
  • Field Helper components
  • Components Factory
  • Experience Editor support
  • Sitecore Analytics support

I would recommend going through the relevant documentation in depth, as it covers all of this in detail.

A headless architecture consists of a back end with a layer of services and APIs and a front-end/client/user-facing application. The front-end application, or presentation layer, retrieves data from the CMS using API endpoints and uses that data to populate or hydrate the markup it generates.

Without CD servers, there is no longer a place to execute HTTP request pipeline extensions. Experience Edge just provides raw published data in a headless way; there is no place to apply any logic. But what if you need to modify a request, for example?

In that case, all that logic moves to the “head” application, since the changes will need to be made in the hosting environment of the client application.

If the Next.js application is hosted on Vercel, a Vercel Serverless Function can be set up to process incoming HTTP requests and generate a response. For Cloudflare Pages, one can choose Cloudflare Worker for the same purpose, and so on.
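As an illustration, here is a hedged sketch of the kind of logic that might move from an httpRequestBegin processor into such a serverless function – a simple trailing-slash redirect rule. The function name is hypothetical; a real handler would wrap this decision in the Vercel or Cloudflare request/response API:

```typescript
// Hypothetical example: URL normalization that used to live in a CD pipeline
// processor, expressed as pure logic a serverless handler can call.
export function normalizeUrl(pathname: string): { redirect: boolean; location: string } {
  // Strip a trailing slash from everything except the site root
  if (pathname.length > 1 && pathname.endsWith("/")) {
    return { redirect: true, location: pathname.slice(0, -1) };
  }
  return { redirect: false, location: pathname };
}
```

Keeping such rules as pure functions also makes them trivial to unit test outside the hosting platform.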

Similarly, any other integrations or personalization previously taken place on a CD server should be refactored to work with the “head” application at the Rendering Host. It is recommended to use out-of-process API-based integration as much as possible to maintain loose coupling.

Development Modes

There are two approaches for developers to interact with XM Cloud and implement customer requirements: Edge Mode and Fully Local.

With Edge Mode, you develop a classical JavaScript-based rendering app on a local Node server. This app connects to the GraphQL endpoint of Experience Edge, which XM Cloud publishes to. This option does not require a license, since Edge relates to your Cloud subscription, which is aware of your license. Therefore you can scale and outsource the development as much as you desire. You can also develop on Mac or Linux, as long as the OS can run a Node server.

Fully Local development requires containers running on a Windows-based environment and you need to have a valid license file to run it. If running on a client version of Windows (say, 10 or 11) make sure to reference 1809-based images in your .env file.


Once you run it, you may see a list of containers – it is pretty similar to what you can find with the classical XM platform:

xm cloud containers

There is a Solr container used for internal purposes, a container with SQL Server, and init containers for both of them. The main container is called CM, but please be aware (mainly for patching purposes) that the role name is no longer CM but XMCloud instead – here’s its definition:

<add key="role:define" value="XMCloud" />

Another important container here is Traefik – the one that accepts and distributes external traffic. One of the most frequent errors when setting up local containers is that developers often already have a web server running (for example, IIS) that occupies port 80, preventing Traefik running in a container from exposing its own port 80, so it errors out.

The rest of the containers provide rendering hosts with the “head” applications.

The principal difference of local container development is that instead of using Edge from XM Cloud, your rendering host connects to the GraphQL endpoint of the CM instance, which features the same API as cloud-based Experience Edge (if you’re using Headless SXA, the Edge endpoint is configured in the site settings item).

What can a developer control?

You can apply config patches to configure the CM instance, the same as you did before with XP (here's an example). There’s also a guide on deploying customizations to the XM Cloud environment.
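For illustration, a minimal config patch follows the same shape as on XP; the setting name below is hypothetical and shown only to demonstrate the patch mechanism:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- Hypothetical setting name, for illustration only -->
      <setting name="MyProject.FeatureToggle" value="true" />
    </settings>
  </sitecore>
</configuration>
```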

That means it is possible to turn things on and off, like overriding pipeline processors. Of course, there is no more request manipulation with XM Cloud, as request processing shifts to the rendering host, and the httpRequestBegin and httpRequestEnd pipelines relate only to the CM instance itself. Nevertheless, there are lots of other familiar pipelines to deal with.

For your custom code, the Sitecore NuGet feed provides the relevant packages. All the packages follow a naming pattern starting with Sitecore.XMCloud.*, so they cannot be confused with the packages for the XP/XM platforms. For example, the top-level package is called Sitecore.XMCloud.Platform. Versioning of these packages follows the SemVer version of XM Cloud.

Sitecore expects that developers will customize it less and less over time, with most of the customization happening around data synchronization rather than altering the system itself.

More good news: you still have the PowerShell Extensions, but the module is disabled by default. Enabling it requires setting the SITECORE_SPE_ELEVATION environment variable to either Confirm or Allow, with a new deployment for the change to take effect. Once complete, you get full power, including direct database access. I recorded a short video showing how to enable SPE for your XM Cloud environment; if you also need SPE remoting, read this post for additional instructions.

Sitecore Connect for Content Hub, the CMP/DAM connector provided by XM Cloud, is also included in the base container image. There is a promise of adding connectors for other popular systems and DXPs over time.

Unfortunately, installing familiar modules is no longer available via the easy package installation we were used to.

Build configuration

There is an important new configuration file with XM Cloud – xmcloud.build.json. It configures build targets, editing hosts, XDT transforms (if any), serialization modules to deploy to the XM instance for that environment, etc. Its structure looks as below:

{
  "deployItems": {
    "modules": []
  },
  "buildTargets": [],
  "renderingHosts": {
    "<key>": "<value>"
  },
  "transforms": [],
  "postActions": {
    "actions": {
      "<key>": "<value>"
    }
  }
}

For example, the default build pipeline of XM Cloud is configured to look for a single *.sln file at the root of your repository to process, but with xmcloud.build.json a developer can override this behavior and specify exactly what to build.

Read more about configuring build configuration with XM Cloud.

Sitecore CLI

With Sitecore XP, most of you likely used the CLI for serialization purposes, with some rare developers experimenting with the itemres plugin for creating items-as-resources from serialization modules. With XM Cloud, Sitecore CLI becomes much more useful – lots of management can (and should) be done with it, like creating projects and environments, performing initial deployments, etc.

In order to install it, you must have .NET 6 already installed on your machine. The tool consists of two parts – Sitecore Management Services running on the CM, and the command-line tool itself. Management Services comes preinstalled with XM Cloud (in both the development container and the cloud instance), so all that remains is installing the CLI from the project folder:

dotnet nuget add source -n Sitecore https://sitecore.myget.org/F/sc-packages/api/v3/index.json
dotnet tool install Sitecore.CLI

Here’s an example of using CLI to list projects running against my XM Cloud organization (subscription).

CLI shows list of projects

In XM Cloud terminology, a project is a set of environments. Each environment is in fact its own XM Cloud CM instance. Therefore, we can list a project's environments and perform actions against each individual environment – let’s say, publish its content to Experience Edge:

dotnet sitecore publish --pt Edge -n <environment-name>

Good old serialization also works well with the CLI for synchronizing items between remote and local XM Cloud instances:

dotnet sitecore serialization pull -n development
dotnet sitecore serialization push -n development

Similarly to the 10.x platforms, you may benefit from Sitecore for Visual Studio if you prefer using a GUI for Sitecore Content Serialization. It is not free to use and requires purchasing a TDS license (notably, TDS itself does not support XM Cloud).

One of its greatest features, in my opinion, is the Sitecore Module Explorer, which lets you navigate, create, and modify Sitecore Content Serialization configuration files as items in a tree structure.

Head Application

Configuring the front-end application mainly comes down to these three settings:

  • JSS_APP_NAME – the name of a site
  • GRAPH_QL_ENDPOINT – should point to the Edge GraphQL endpoint
  • SITECORE_API_KEY – the Edge token, typically found under /sitecore/system/Settings/Services/API Keys
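Put together, a hedged .env sketch with placeholder values might look like this (your app name and token will differ; the endpoint shown is the typical Experience Edge GraphQL URL):

```shell
# .env.local sketch – values are placeholders
JSS_APP_NAME=my-site
GRAPH_QL_ENDPOINT=https://edge.sitecorecloud.io/api/graphql/v1
SITECORE_API_KEY=<your-edge-token>
```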

I would recommend watching this video for the front-end developer setup for XM Cloud.

A really impressive productivity feature is running npm run start:connected, which sets up a watch mode against the source code so that changes are immediately reflected in the browser.

You can also configure an external editing host for XM Cloud instances so that you can benefit from the editing experiences in Pages as well.

Edge and GraphQL

There is a common misunderstanding that content published to Experience Edge becomes unprotected. It comes from the official documentation, which says: “Experience Edge for XM does not enforce security constraints on Sitecore content. You must apply publishing restrictions to avoid publishing content that you do not want to be publicly accessible“. In fact, publishing to Edge does not equal making content publicly exposed – no one can access it without providing a valid Edge token, which you typically configure as the SITECORE_API_KEY parameter of your environment. The API key should never be shared publicly: each time you need to pull data out of Edge, delegate that task to your head application in order to obscure the API key, rather than making such calls directly from JavaScript code running in the client’s browser and thus exposing your secrets.
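The server-side part of that delegation can be sketched as below: the browser calls a route in your head application, and only the server attaches the sc_apikey header to the outgoing Edge request. The helper function name is illustrative, not part of the SDK:

```typescript
// Sketch: keep the Edge token server-side. The browser never sees the key;
// only the head application's server code builds the authenticated request.
export function buildEdgeRequest(
  query: string,
  apiKey: string
): { headers: Record<string, string>; body: string } {
  return {
    headers: {
      "Content-Type": "application/json",
      sc_apikey: apiKey, // read from a server-only environment variable
    },
    body: JSON.stringify({ query }),
  };
}
```

A Next.js API route would then `fetch` the Edge endpoint with these headers and return only the data to the client.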

Previously, with XP/XM, it was possible to extend the GraphQL Preview endpoint with additional features like comparison for integers and dates, coordinates, multifield searches, faceting, etc. It is worth mentioning that you cannot make any changes around GraphQL endpoints via pipelines with XM Cloud, because it uses Edge. In addition to searching fields by value, you can still use the following operators: EQ / NEQ, CONTAINS / NCONTAINS, ordering by field names both ways, pagination, and logical AND / OR.
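As a hedged illustration, an Edge search query combining these operators might look like the sketch below; the field values are placeholders you would replace with your own item and template IDs:

```graphql
query {
  search(
    where: {
      AND: [
        { name: "_path", value: "<ancestor-item-id>", operator: CONTAINS }
        { name: "_templates", value: "<template-id>", operator: EQ }
      ]
    }
    first: 10
  ) {
    total
    results {
      name
      url { path }
    }
  }
}
```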

You can read more about the Edge schema at this link. It is also worth mentioning that Edge has its own webhooks, which can be helpful for notifying a caller when publishing is complete.

Use the New-EdgeToken.ps1 script to create one for any desired environment. Upon completion, this script automatically opens the GraphQL IDE and returns an X-GQL-Token for you to use with the IDE straight away.

Note that you can use the GraphQL IDE only if you have Experience Edge. However, the Edge tenant and Edge connector are created automatically upon the creation of an environment with XM Cloud.

Another thing to keep in mind: although you are empowered to use the Media Library, there is a GraphQL limitation of 50 MB per resource. For larger files (most likely videos), consider using a DAM, given that DAMs typically allow wider management options.

XM Cloud Play! Summit Demo

This is an exceptional gift from the Sitecore Demo Team! It allows you to experiment with XM Cloud, its features, and its editing capabilities, both in local containers and by deploying it to your XM Cloud environment.

The demo solution features many of the latest tech working together for us to play with:

  • Sitecore XM Cloud
  • Content Hub DAM and CMP
  • Headless SXA
  • JavaScript Services (JSS)
  • Next.js
  • Vercel
  • Tailwind CSS

The demo is available in a GitHub repository, which means it is easily deployable with the Deploy App in a few clicks, as demonstrated below.

Headless SXA

XM Cloud comes with built-in Headless SXA, included in the base XM Cloud container image, so the question comes up – should I go with it or without it?

Of course, it is possible to build a headless app on XM Cloud without using SXA, but it is not recommended. SXA provides many benefits and is included by default with Sitecore XM Cloud, so it is generally a better option for building a new site. If you are in a migration scenario, you may not have the option to use SXA, but other than that, it is recommended to use SXA and headless services for the best results.

There is an XM Cloud Starter Kit with Next.js to speed up your journey into XM Cloud development.

Rendering Variants have always been one of the most powerful features of SXA; the feature evolved from NVelocity templates to Scriban, and is now powered by Next.js. Here’s an example of a Promo component with two rendering variants – Default and WithText. You can have as many variants besides Default as you want within a *.tsx file, following this syntax:

export const Default = (props: PromoProps): JSX.Element => {
  if (props.fields) {
    return (
      // ... markup merged with props
    );
  }
  return <PromoDefaultComponent {...props} />;
};

SXA is perfect for multi-site implementations as it comes with cross-site capabilities for sharing page and partial designs, renderings, content, and cross-linking. In a previous post, I already mentioned a new feature – Page Branches – that allows setting standard values for layout on a per-website basis. The site-specific standard values feature is another example of managing defaults per site. The development team is also working on implementing Site Templates – the idea is blueprinting entire sites from pre-defined templates.

Speaking about the UI: by default, SXA comes with the Bootstrap 5 grid system, but that is configurable. The default renderings respect both grid and styles through parameter templates; you must take care of that when creating your own custom renderings unless, of course, you clone existing ones.

I remember a while ago, when I had only started working with SXA, there was a lack of proper solutions for renderings with a background image set – we had to invent our own approaches. Slightly later SXA got that feature, and today I was glad to find support for a decent number of stretch modes: parallax, stretch, vertical and horizontal tiles, and fixed. And all of that works headlessly!

Deploy App

Sitecore XM Cloud comes with an integrated Deploy App that does exactly what the name suggests – it deploys your solution from existing source code to XM Cloud via a friendly GUI, as an alternative to using the CLI. You can also create a project using a starter template.

choosing deployment method

To demonstrate its capabilities, let’s use the “Start from your existing XM Cloud code” option with the Play! Summit demo mentioned earlier. Currently, the only provider that works with the Deploy App is GitHub, so let’s go ahead with it.

deployment with github

After providing access to your GitHub account, you choose the branch and configure the specific environment it deploys to, including whether or not it is production. There is also an option to trigger an automatic deployment each time someone pushes to the specified repository branch.

deployment with github parameters

That’s it! The deployment starts and gets things done automatically: provisioning tenants and environments, configuring Edge, pulling and building the codebase, deploying the artifacts, and eventually running post-build actions, if any.

deployment log

The whole process takes around 15 minutes and is impressive – the entire CI/CD pipeline comes out of the box, preconfigured. However, there’s one more thing left to do.

Deploying to Vercel

Since our head application is decoupled from Sitecore, we also need to deploy it and point it at Experience Edge. There are various options, but the best is using Vercel hosting. Vercel is the company that created and maintains Next.js, which is why their powerful hosting platform ideally fits solutions built with it.

Vercel is an all-in-one development platform that combines the best developer experience with an obsessive focus on end-user performance. As a developer-centric stack, Vercel is accessible to developers of all skill sets and removes a historical lock-in to .NET. Vercel provides a scalable solution to the largest of organizations with the newest best practices in content delivery while helping to reduce infrastructure/deployment overhead previously required to deploy Sitecore applications.

Deployment is pretty simple. Upon creating an account and a project, you link your GitHub account and use the same repository used for the XM Cloud deployment previously. Vercel is smart enough to auto-suggest the folder with your Next.js application, highlighting it with an icon. After providing the three familiar environment parameters (app name, Edge endpoint, and API key for the Edge token), the deployment takes place quickly, and then you’re all set!

Tip: do not modify the .env file directly; instead, create an "override" file called .env.local. The difference is that in the starter kits this file is excluded from source control via .gitignore by default, so in addition to not interfering with other developers, you can easily maintain your environment-specific settings in one place.

XM Cloud also allows you to set up a Vercel hosting directly from its interface and add integration.

vercel setup hosting

If you do not have an account, you'll be able to create one quickly and easily; currently, only Personal account types are supported. Also, make sure you choose All Projects access.

vercel link to vercel

You will still need to log in to Vercel and manually add the GitHub integration for the same repository and branch, add a few more parameters, and at least the three environment variables mentioned above. Once done, you can immediately trigger the deployment.

I hope this overview helps you familiarize yourself with the XM Cloud development options and encourages you to try it sooner rather than later. Should you have any questions – please feel free to reach out to me!

Sifon - the easiest way of installing Sitecore XM/XP on your local machine

Hey folks, if you have not heard about Sifon for Sitecore, you should definitely check it out. It is a true Swiss army knife for local Sitecore development, and you'd really like to learn why.

But here's a demo of how straightforward a Sitecore installation is using the latest Sifon 1.3.3 release – you don't need to do anything at all other than click a few buttons in the UI. Below are the new features of this version:

  • added support for 10.3 version of the platform (downloads, Solr, dependencies, etc.)
  • added support for XM topology starting from 10.3
  • added SQL Server smooth installation in a single click
  • added convenient defaults so that you don't need to type at all, should you prefer the default settings
  • tested well on Windows 11

This tool is a gem for marketers, business analysts, and other non-developer groups who may need to set up Sitecore on their local machines but do not want to mess with Docker and containers. A smooth single-click installation is what they want!

The installation itself is simple – either download the installer from the official website or, even easier, install from the Chocolatey gallery with this command:

cinst sifon

This 15-minute-long video shows it all in action: installing Sifon, then downloading and installing Solr, SQL Server, and Sitecore XP 10.3 with Publishing Service 7.0, SXA 10.3, and even Horizon from 10.2 – it works with 10.3 perfectly well and installs in a single click, like everything else with Sifon:

Upon completion, Sifon will also automatically set up and activate the profile for the newly installed Sitecore instance (in the above image it is named xp, which is also shown in the window title). Profiles are used to identify the active environment for the rest of Sifon's functionality and plugins to operate against. One can easily switch active profiles from the dropdown and the Profile editor menu.

I really hope this wonderful tool saves you lots of time and effort. Thanks for watching!

Update: occasionally, some rare systems report errors during prerequisites installation. The error message says it is unable to identify and run an AppPool task, and is caused by the mandatory system restart required by IIS. On such systems, Sifon will work as expected after restarting and re-running it. As an alternative, you may run the command below and restart your computer prior to using Sifon to ensure a smooth installation experience:
Enable-WindowsOptionalFeature -Online -FeatureName "IIS-WebServerManagementTools" -All

Sitecore Edge considerations for sitemap

A quick one today. We recently came across interesting thoughts and concerns about using Sitecore Edge. As you might know (for example, from my previous post), there are no more CD servers when publishing to Sitecore Edge – think of it as just a GraphQL endpoint serving out JSON.

So, how do we implement a sitemap.xml in such a case? Brainstorming brought several approaches to consider:

Approach one

  • Create a custom sitemap Next.js route
  • Use GraphQL to query Edge with the search query; here we would have to iterate through items in increments of 10
  • Cache the result on the Vercel side using SSG
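A minimal sketch of the final step of approach one might look like this: assemble the sitemap XML from the item URLs already collected from Edge. The querying/paging part is omitted; the function name is illustrative:

```typescript
// Sketch: build sitemap.xml content from a list of absolute page URLs
// (e.g. collected by paging through an Edge search query).
export function buildSitemapXml(urls: string[]): string {
  const entries = urls.map((u) => `  <url><loc>${u}</loc></url>`).join("\n");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${entries}\n</urlset>`
  );
}
```

A custom Next.js route would return this string with a `text/xml` content type and let SSG cache the result.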

Approach two

  • Create a service on the CM side that will return all published items/URLs
  • This service will only be accessible by an Azure Function, which will generate a sitemap file and store it in a CDN
  • The front end would then access this file and render its content (or similar)

Approach three

  • Generate all the sitemaps (if more than a single sitemap) on the CM, then store them all in single text fields
  • Return them via Edge, using GraphQL, to the front-end head, which handles sitemap.xml

Then I realized that Headless SXA boasts SEO features OOB, including sitemap.xml. Let's take a look at what they do in order to generate sitemaps.

With SXA 10.3, the team revised the Sitemap feature, providing much more flexibility to cover as many use cases as possible. Looking at the /Sitecore/Content/Tenant/Site/Settings/Sitemap item, you'll find lots of settings for fine-tuning your sitemaps depending on your particular needs. The CM crawls websites and generates the sitemaps. They then get published to Sitecore Edge as a blob, which gets proxied by a Rendering Host via GraphQL. When search engines request the sitemaps of a particular website, the Rendering Host gives them exactly what was asked for. That is actually similar to approach three above, with all the invalidation and updates of sitemaps also provided OOB.

This gives a good number of options, depending on your particular scenario.