
Experience Sitecore !

More than 200 articles about the best DXP by Martin Miles

Rendering Parameters vs. Rendering Variants - when should you use one or the other?

Do you know how to identify when you should create a rendering variant for a component, and when you can simplify effort by setting rendering parameters? Below is the answer and it’s pretty straightforward.

To address this, let's first take a look at both options and identify their key differences.

Rendering Parameters give you additional control over a component/rendering by passing extra parameters into it. A key-value pair is the simplest form, but you can of course use more advanced forms of input by leveraging rendering parameters templates; regardless of the chosen way, the result is the same - you pass some additional parameters into a component. Based on those params, a component can do certain things, for example, show/hide specific blocks or apply more advanced styling tricks. It is important to keep in mind that all the parameters are stored within the holding page. Remember that you should inherit the Base Rendering Parameters template to have full support in Pages Builder.

parameters
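
To make it concrete, here is a minimal sketch of a component reading its rendering parameters, assuming a JSS Next.js head; the component name and the ShowTitle/Styles parameter names are hypothetical, for illustration only:

    import React from 'react';
    import { Text, TextField } from '@sitecore-jss/sitecore-jss-nextjs';

    type PromoProps = {
      fields: { Title: TextField };
      params: { [key: string]: string }; // rendering parameters arrive as plain strings
    };

    export const Default = (props: PromoProps): JSX.Element => {
      // "ShowTitle" and "Styles" are hypothetical parameter names
      const showTitle = props.params?.ShowTitle === '1'; // checkbox params serialize as "1"/"0"
      return (
        <div className={props.params?.Styles ?? ''}>
          {showTitle && <Text tag="h2" field={props.fields.Title} />}
        </div>
      );
    };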


Rendering Variants (a.k.a. Headless Variants) feel more advanced compared to params. The principal difference is that a variant allows you to return fundamentally different HTML output and perform far more complicated manipulations of the HTML structure. Use common sense when choosing variants and leverage them in cases where the same component may present various look-and-feel options: for example, a promo block with two images, with a headless variant where those same images are positionally swapped. Achieving the same with rendering parameters would require bringing ugly presentation logic into the component's code, along with code duplication. Using variants achieves the same result far more elegantly. Note that variants originate from SXA; therefore, when you bring a legacy JSS site to XM Cloud without converting it to SXA, this option isn't available.

variants
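
For the promo example above, here is a sketch of what variants look like on the Next.js side - in headless SXA, each named export of the component file maps to a headless variant, with Default as the fallback (the component and field names are hypothetical):

    import React from 'react';
    import { Image as JssImage, ImageField } from '@sitecore-jss/sitecore-jss-nextjs';

    type TwoImagesProps = {
      fields: { Image1: ImageField; Image2: ImageField };
      params: { [key: string]: string };
    };

    // Default variant: images in their natural order
    export const Default = (props: TwoImagesProps): JSX.Element => (
      <div>
        <JssImage field={props.fields.Image1} />
        <JssImage field={props.fields.Image2} />
      </div>
    );

    // "ImagesSwapped" variant: same datasource, different HTML output
    export const ImagesSwapped = (props: TwoImagesProps): JSX.Element => (
      <div>
        <JssImage field={props.fields.Image2} />
        <JssImage field={props.fields.Image1} />
      </div>
    );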



Both Rendering Variants and Rendering Parameters assume you use the same component receiving the same datasource items (or no datasource at all). You should never leverage datasource items to control the presentation or behavior of components - as their name implies, they are purposed exclusively for storing content.

Hope that clarifies the use cases and removes ambiguity.

Experience Edge: Know Your Limitations

Experience Edge brought us that much-desired Content-Delivery-as-a-Service approach and proved revolutionary in its vision. However, that flexibility of service comes at some expense, with limitations each of us must be aware of. Understanding these is critical when building cloud-hosted Sitecore solutions. The key technical limits include API rate throttling, data payload/query size caps, content/media size limits, caching rules, and XM Cloud platform constraints. In this post, I will cover them all to help you plan better.

API Rate Limits

  • 80 requests/sec. The Experience Edge GraphQL endpoint is rate-limited. Each tenant’s delivery API allows at most 80 requests per second (visible as X-Rate-Limit-Limit: 80). Exceeding this returns HTTP 429 (Too Many Requests) until the 1-second window resets. In practice, Sitecore notes this is a "fair use" cap on uncached requests, so designing with CDN caching via SSG/ISR is essential to stay below the limit.

  • Rate-limit headers. Every Edge response includes headers such as X-Rate-Limit-Remaining (calls left in the current one-second window) and X-Rate-Limit-Reset (the time until the window resets) to help clients throttle their calls. For example, if 5 requests are made in one second, the next response will show 75 remaining.
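
As an illustration, here is a minimal sketch of a rate-limit-aware client in TypeScript: it reads the headers described above and retries once after an HTTP 429 (the endpoint is the standard Edge delivery URL; error handling is deliberately reduced):

    const EDGE_ENDPOINT = 'https://edge.sitecorecloud.io/api/graphql/v1';

    export async function queryEdge(
      query: string,
      apiKey: string,
      variables: Record<string, unknown> = {}
    ): Promise<any> {
      const call = () =>
        fetch(EDGE_ENDPOINT, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json', sc_apikey: apiKey },
          body: JSON.stringify({ query, variables }),
        });

      let response = await call();
      // Every response carries the rate-limit headers described above
      console.log('Remaining this second:', response.headers.get('X-Rate-Limit-Remaining'));

      if (response.status === 429) {
        // Too many requests: the 1-second window resets shortly, so retry once
        await new Promise((resolve) => setTimeout(resolve, 1000));
        response = await call();
      }
      return response.json();
    }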

GraphQL Query & Payload Constraints

  • Max query results: A single GraphQL query returns at most 1,000 items/entities. To fetch more items, you must use cursor-based pagination (see the sketch after this list). For example, any search or multi-item query is capped at 1,000 results per call.

  • Query complexity limit: Edge enforces a complexity budget on GraphQL queries. Very large or deeply nested queries can fail if they exceed the complexity threshold (around 250 in older Sitecore docs). Developers should test complex queries and consider splitting them or trimming fields.

  • No persisted or mixed queries: Experience Edge does not support persisted queries. Also, due to a known schema issue, you cannot mix literal values and GraphQL variables in one query; you must use all variables if any are used. Not knowing this rule once cost me a decent amount of troubleshooting time.

  • Payload request size: Very large GraphQL request payloads can be problematic. By default, Next.js APIs have a 2 MB body size limit, which can cause 413 Payload Too Large errors when submitting huge queries. Sitecore suggests raising this (say, to ~5 MB) if necessary. In practice, keep queries reasonably small to avoid frontend limits.

  • Include/Exclude paths: When querying site routes (siteInfo.routes), the combined number of paths in includedPaths + excludedPaths is limited to 100. This caps how many different route filters you can specify in one request.
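
Here is a sketch of draining a search query with cursor-based pagination, reusing the queryEdge helper from the rate-limit sketch above. The where clause and result fields are illustrative, and the pageInfo field names are as I recall them from the Edge schema - verify against your own endpoint:

    import { queryEdge } from './queryEdge'; // the helper from the rate-limit sketch

    const PAGED_QUERY = `
      query Paged($after: String) {
        search(
          where: { name: "_path", value: "{YOUR-ROOT-ITEM-ID}", operator: CONTAINS }
          first: 100
          after: $after
        ) {
          pageInfo { endCursor hasNext }
          results { id }
        }
      }`;

    export async function fetchAllItemIds(apiKey: string): Promise<string[]> {
      const ids: string[] = [];
      let after: string | null = null;
      let hasNext = true;
      while (hasNext) {
        const json = await queryEdge(PAGED_QUERY, apiKey, { after });
        const page = json.data.search;
        ids.push(...page.results.map((r: { id: string }) => r.id));
        after = page.pageInfo.endCursor;
        hasNext = page.pageInfo.hasNext;
      }
      return ids;
    }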

Content & Delivery Constraints

  • Static snapshot only: Experience Edge provides a static snapshot of published content. It does not apply personalization, A/B testing, or any dynamic/contextual logic at request time. Any logic based on user, session, or query string must be handled client-side. If you change a layout service extension or rendering configuration, you must republish the affected items for Edge to pick up the changes.

  • Security model: Edge does not enforce Sitecore item-level security. All published content on Edge is effectively public, so use publishing restrictions in the CMS to prevent sensitive items from being published.

  • Single content scope: An Edge tenant covers the entire XM Cloud tenant with a single content scope. You cannot scope queries, cache clears, or webhooks to a specific site. For example, when a cache clear or webhook trigger runs, it applies to the whole tenant’s content, not per site.

  • Sites per tenant: Edge supports up to 1,000 sites per tenant. A "site" in this context is a logical group defined by includedPaths/excludedPaths in siteInfo. You cannot define more than 1,000 sites in one Edge environment. In practice, the maximum I have come across was 300 sites per tenant, all served by the multisite add-on on a Next.js front-end.

  • Multi-site rules: You cannot have two different site definitions pointing to the same start item on Edge. Also, virtual folders and item aliases are not supported on Edge. Content must be published in standard items, and all routes are resolved case-sensitively.

  • Locales and device layers: Culture locale codes in queries are case-sensitive (e.g. it-IT works, while it-it does not). In the layout data delivered by Edge, only the Default device layer is supported in presentation data, so multi-device renderings beyond "Default" aren't included.

Media Limits

  • Max media item size: Each media item file published to Edge is limited to 50 MB. Larger media will not be published to Edge; such large assets should be handled via other services like Sitecore Content Hub, or you can self-host them in any blob storage of your choice.

  • Media URL parameters: The built-in Media CDN on Edge supports only the parameters w, h, mw, and mh for image resizing. No other image transformations, like quality or format changes, are yet available out-of-the-box.

  • Case-sensitive URLs: Media item URLs on Edge are case-sensitive. For example, if the item path is Images/Banners/promo-banner.jpg, using lowercase images/banners/promo-banner.jpg will result in a 404. This quirk has caused issues in practice, so be careful with link manager settings that change casing.

  • Delivery: Media is delivered via the same CDN cache as content. There is no per-request payload aggregation for media; each media URL is fetched independently (subject to the CDN and TTL rules below).

Caching Rules & TTL

  • Default TTL: By default Edge caches content and media for 4 hours each (see contentCacheTtl: "04:00:00" and mediaCacheTtl: "04:00:00"). This means cached responses may be served up to 4 hours old unless cleared.

  • Auto-clear: Content and media caches are auto-cleared by default (the contentCacheAutoClear and mediaCacheAutoClear settings are true). In practice, this means a publish or explicit clear will purge the CDN cache so users see new content.

  • Custom TTL: You can adjust the cache TTLs via the Edge Admin API (a hedged sketch follows this list). TTL values are strings in D.HH:MM:SS format. For example, setting contentCacheTtl to "720.00:00:00" yields a 720-day TTL, and "00:15:00" yields 15 minutes. The default 4h can thus be increased or decreased per project needs.

  • Cache clearing: In addition to auto-clear on publish, Edge offers Admin API endpoints to clear the cache or delete content. For instance, you can clear all content or specific items via the API. To use these features, administrators must obtain appropriate Edge API credentials in XM Cloud Deploy.
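
Here is a hedged sketch of adjusting the TTL through the Admin API. The exact endpoint path is an assumption on my part - verify it against the Experience Edge Admin API reference for your tenant; the TTL string follows the D.HH:MM:SS format described above:

    const ADMIN_API = 'https://edge.sitecorecloud.io/api/admin/v1';

    export async function setContentCacheTtl(token: string, ttl = '00:15:00'): Promise<void> {
      // NOTE: '/cachesettings' is an assumed path - confirm it in the Admin API docs
      const response = await fetch(`${ADMIN_API}/cachesettings`, {
        method: 'PUT',
        headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
        body: JSON.stringify({ contentCacheTtl: ttl }), // D.HH:MM:SS format
      });
      if (!response.ok) throw new Error(`Admin API returned ${response.status}`);
    }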

XM Cloud Platform Limits (Impacting Edge)

  • Environment mapping: In XM Cloud, the best practice is a 1:1 mapping of XM environments to Edge tenants. In other words, each XM Cloud environment typically has its own Experience Edge deployment. This means content and API keys are not shared across environments by default.

  • Search index: XM Cloud uses Solr, and there is no option to plug in different search technologies for Edge indexing. The connector will only work with Solr indices configured in XM Cloud.

  • Admin credentials: XM Cloud Deploy limits the number of Experience Edge Admin API credentials per project to 10. Attempts to create more will fail with an error. Project administrators should plan credential usage accordingly, for example, one per dev/CD pipeline.

  • Snapshot publishing: To enable incremental updates, XM Cloud provides snapshot publishing. This ensures that as soon as an item is published, Edge content is updated without a full site rebuild. If snapshot publishing is not enabled, any content changes on Edge require full republishing of affected sites. Developers must enable the Snapshot Publishing feature in XM Cloud to avoid hitting the rate limit on builds.

Based on all the above, let's also think about some deployment & publishing considerations that may affect your project:

  • Static build (SSG) preferred: Since every uncached request to Edge counts toward the rate limit, Sitecore recommends using Static Site Generation (SSG) and Incremental Static Regeneration (ISR) on the frontend (see the sketch after this list). With SSG, pages are built at deploy time and served from the host cache, minimizing live queries to Edge.

  • Build-time pagination: Very large sites can take a long time to generate. The default sitemap plugin fetches all pages across all sites; projects should use included/excluded paths to limit build-time queries. Otherwise, large volumes of pages hitting Edge during a build can approach the rate limit.

  • Publish-time republishing: Because Edge content is static, certain backend changes require republishing. In particular, changes to clones, standard values, or rendering/template configurations won’t reflect on Edge until the dependent items are republished. Plan your release process to include republishes after such changes.
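
For completeness, here is a minimal sketch of what the ISR part looks like in a Next.js page (the 60-second revalidation window is illustrative):

    import type { GetStaticProps } from 'next';

    // Note: a real page also needs a default-exported component;
    // this fragment only shows the ISR wiring.
    export const getStaticProps: GetStaticProps = async () => {
      // ...fetch layout data from Edge here (e.g. via the JSS page-props factory)...
      return {
        props: {},
        revalidate: 60, // regenerate in the background at most once per minute
      };
    };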

Hope knowing the above helps you plan better!

Sitemaps in Sitecore XM Cloud: Automation, Customization, and SEO Best Practices

In Sitecore XM Cloud, sitemaps are generated and served via Experience Edge to inform search engines about all discoverable URLs. XM Cloud uses SXA's built-in sitemap features by default, storing the generated XML as media items in the CMS so they can be published to Experience Edge. Sitemap behavior is controlled by the Sitemap configuration item under /sitecore/content/<SiteCollection>/<Site>/Settings/Sitemap. There are a few important fields: Refresh threshold, which defines the minimum time between regenerations; Cache expiration; Maximum number of pages per sitemap, for splitting into a sitemap index; and Generate sitemap media items, which must be enabled to publish via Edge. The Sitemap media items field of the Site item will list the generated sitemap(s) under /sitecore/media library/Project/<Site>/<Site>/Sitemaps/<Site>, and the default link provider is used unless overridden. Tip: you can configure a custom provider via <linkManager> and choose its name in the Sitemap settings.

Automated Sitemap Generation Workflow

When content authors publish pages, XM Cloud schedules sitemap regeneration automatically based on the refresh threshold. Behind the scenes, an OnPublishEnd pipeline (often the SitemapCacheClearer.OnPublishEnd handler in SXA) checks each site's sitemap settings. If enough time has elapsed since the last build, a Sitemap Refresh job runs. In this job, the old sitemap media item is deleted, and a new one is generated and saved in the Media Library. Once created, the new sitemap item is linked in the Sitemap media items field of the site and then published. This typically triggers two publish actions: one to publish the new media item (/sitecore/media library/Project/.../Sitemaps/<Site>/sitemap) and one to re-publish the Site item so Experience Edge sees the updated link.

For high-volume publishing, it's best to set a reasonable refresh threshold to batch sitemap generation. For example, if you publish many pages daily, avoid forcing a rebuild on every publish (a refresh threshold of 0); instead, raise the threshold or schedule a daily publish so the sitemap is updated once per day. Generating sitemaps can be resource-intensive, especially for large sites, so avoid rebuilding on every small change unless necessary.

Sitemap Filtering: SXA provides pipeline processors to include or exclude pages. By default, items inheriting SXA's base page templates have a Change frequency field; setting it to "do not include" will exclude that page from the sitemap. The SXA sitemap pipelines (sitemap.filterItem) include built-in processors for base-template filtering and change-frequency logic. To exclude a page, simply open it in the Content Editor (or the Experience Editor SEO dialog) and set Change frequency to "do not include".

GraphQL Sitemap Query: Once published, the XM Cloud GraphQL API provides access to the sitemap media URL. For example, the following query returns the sitemap XML URL for a given site name:

query SitemapQuery($site: String!) {
  site {
    siteInfo(site: $site) {
      sitemap
    }
  }
}

This returns the Experience Edge URL of the generated sitemap media item. You can use this in headless code or debugging to verify the sitemap’s existence and freshness.
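
For example, here is a minimal sketch of calling this query against the Edge delivery endpoint (the endpoint URL and sc_apikey header are standard for Experience Edge; the site name is illustrative):

    const query = `
      query SitemapQuery($site: String!) {
        site {
          siteInfo(site: $site) {
            sitemap
          }
        }
      }`;

    export async function getSitemapUrls(apiKey: string, site = 'my-site'): Promise<string[]> {
      const response = await fetch('https://edge.sitecorecloud.io/api/graphql/v1', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', sc_apikey: apiKey },
        body: JSON.stringify({ query, variables: { site } }),
      });
      const json = await response.json();
      return json.data.site.siteInfo.sitemap; // the generated sitemap media URL(s)
    }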

Sitemaps in Local Docker Containers

In a local XM Cloud Docker setup, the /sitemap.xml route often returns an empty file by default because the Experience Edge publish never occurs. There is no web database or Edge target, so the OnPublishEnd process never actually runs, leaving the empty sitemap item. Attempting to publish locally throws an exception (Invalid Authority connection string for Edge). To debug or test sitemap issues locally, you can manually trigger the SXA sitemap pipeline.

I really like the Sitemap Developer Utility approach suggested by Jeff L'Heureux: in your XM Cloud solution’s Docker files, create a page (e.g. generateSitemap.aspx) inside docker\deploy\platform with code that simulates a publish event. For example, one can invoke the SitemapCacheClearer.OnPublishEnd() method manually in C#.

// Simulate a publish event for the "Edge" target.
// Assumed usings (namespaces may vary slightly by SXA version): System;
// System.Collections.Generic; Sitecore.Configuration; Sitecore.Data;
// Sitecore.Events; Sitecore.Globalization; Sitecore.Publishing;
// Sitecore.XA.Feature.SiteMetadata.EventHandlers (SitemapCacheClearer).
Database master = Factory.GetDatabase("master");
List<string> targets = new List<string> { "Edge" };
PublishOptions options = new PublishOptions(master, master, PublishMode.SingleItem,
    Language.English, DateTime.Now, targets);
Publisher publisher = new Publisher(options);
SitecoreEventArgs args = new SitecoreEventArgs("OnPublishEnd", new object[] { publisher }, new EventResult());
new SitemapCacheClearer().OnPublishEnd(null, args);
    

This code triggers the same sitemap build logic as a real publish. Jeff's utility page provides buttons to run various steps (OnPublishEnd, the sitemap.generateSitemapJob pipeline, etc.) and shows output.

Once you run the utility and the cache job completes, the media item is regenerated. Then restart or refresh your Next.js site locally to see the updated sitemap at http://front-end-site.localhost/sitemap.xml. The browser will display the raw XML with <loc>, <lastmod>, <changefreq>, and <priority> entries as it normally should.

Sitemap Customization for Multi-Domain Sites

A common scenario is one XM Cloud instance serving multiple language or regional domains (say, www.siteA.com and www.siteA.fr) with one shared content tree. In SXA this is often handled by a Site Grouping with multiple hostnames. By default, SXA will generate a single sitemap based on the primary hostname. This leads to two issues: the same XML file is returned on both domains, and each page appears several times (once per language) under the same <loc>. For example, a bilingual site without customization might show both English and French URLs under the English domain, duplicating <url> entries.

To fix this, customize the Next.js API route (e.g. pages/api/sitemap.ts) that serves /sitemap.xml. The approach is: detect which host/domain the request is for, fetch the raw sitemap XML via GraphQL, and then filter and rewrite the entries accordingly. For instance, if the host header contains the French domain, only include the French URLs and update the <loc> and hreflang="fr" links to use the French hostname. Pseudocode for the filtering might look like:

if (lang === 'en') {
  // Filter out French URLs and fix alternate links
  urls = urls.filter(u => !u.loc[0].includes(FRENCH_PREFIX))
             .map(updateFrenchAlternateLinks);
} else if (lang === 'fr') {
  // Filter out English URLs and swap French loc to French domain
  urls = urls.filter(u => u.loc[0].includes(FRENCH_PREFIX))
             .map(updateLocToFrenchDomain)
             .map(updateFrenchAlternateLinks);
}
    

Here, FRENCH_PREFIX is something like en.mysite.com/fr, and we replace it with the French hostname. In practice, the XML is parsed (e.g. via xml2js), then the result.urlset.url array is filtered and modified, and rebuilt to XML. There is a great solution suggested by Mike Payne which uses two helper functions, filterUrlsEN and filterUrlsFR, to drop unwanted entries, and updateLoc/updateFrenchXhtmlURLs to replace URL prefixes. Finally, the modified XML is sent in the HTTP response. This ensures that when the sitemap is requested from www.site.ca, all <loc> URLs and alternate links point to site.ca, and when it is requested from www.othersite.com, they point to www.othersite.com.
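
For context, here is a sketch of the plumbing around that pseudocode, assuming xml2js (the filter/map helpers are the hypothetical ones from above):

    import { parseStringPromise, Builder } from 'xml2js';

    export async function rewriteSitemapForHost(rawXml: string, lang: 'en' | 'fr'): Promise<string> {
      const result = await parseStringPromise(rawXml);
      let urls: any[] = result.urlset.url;

      // ...apply the language-specific filter/map logic shown above to `urls`...

      result.urlset.url = urls;
      return new Builder().buildObject(result); // serialize back to XML
    }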

SEO Considerations and Best Practices

  • Include Alternate Languages (hreflang): XM Cloud (via SXA) automatically adds <xhtml:link rel="alternate" hreflang="..."> entries in the sitemap for multi-lingual pages. Ensure these are correct for your domains. After customizing for multiple hostnames, the <xhtml:link> URLs should also be updated to the appropriate domain. This helps Google index the right language version for each region.

  • Set Change Frequency and Priority: Use SXA’s SEO dialog or Content Editor on the page item to set Change frequency and Priority for each page. For example, if a page is static, set a low change frequency. These values are written into <changefreq> and <priority> in the sitemap. Note: Pages can be excluded by setting frequency to "do not include".

  • Maximize Crawling via Sitemap Index: If your site has many pages, configure Maximum number of pages per sitemap so XM Cloud generates a sitemap index with multiple files. This avoids any single sitemap exceeding search engine limits and keeps crawlers from giving up on a very large file.

  • Robots.txt: SXA will append the sitemap link /sitemap.xml to the site's robots.txt automatically. Verify that your robots.txt in production references the correct sitemap and hostname.

  • Media Items and Edge: Always keep Generate sitemap media items enabled: without it, XM Cloud cannot deliver the XML to the front end. After a successful build, the sitemap XML is stored in a media item and served by Experience Edge. You can confirm the published sitemap exists by checking /sitecore/media library/Project/<Site>/<Site>/Sitemaps/<Site> or by running the GraphQL query mentioned above.

  • Link Provider Configuration: If your site uses custom URL routing (e.g. language segments or rewritten paths), you can override the link provider used for sitemap URLs. In a patch config, add something like:

    <linkManager defaultProvider="switchableLinkProvider">
      <providers>
        <add name="customSitemapLinkProvider"
             type="Sitecore.XA.Foundation.Multisite.LinkManagers.LocalizableLinkProvider, Sitecore.XA.Foundation.Multisite"
             lowercaseUrls="true" .../>
      </providers>
    </linkManager>

    Don't forget to set the "Link provider name" field in the Sitemap settings to customSitemapLinkProvider afterwards. This ensures the sitemap uses the correct domain and culture prefixes as needed.

Diagnostics and Troubleshooting

If the sitemap isn’t updating or the XML is wrong, check these:

  • Site Item Settings: On the site’s Settings/Sitemap item, confirm the refresh threshold and expiration are as expected. During debugging you can set threshold to 0 to force immediate rebuilds.

  • Was it published to Edge? Ensure the sitemap media item was published to Edge. You might need to publish the Site item or Media Library manually if it wasn’t picked up.

  • Cache Type: In the SXA Sitemap settings, the Cache Type can be set to "Inactive," "Stored in cache", or "Stored in file". For XM Cloud, the default "Stored in file" is typically used so the XML is persisted. If set to "Inactive", the sitemap generator will not run.

  • Inspect Job History: In the CM admin (/sitecore/admin/Jobs.aspx), look for the "Sitemap refresh" jobs to see if these succeeded or threw errors.

  • Next.js Route Errors: If your Next.js site’s /sitemap.xml endpoint returns an error, inspect its handler. The custom API route uses GraphQLSitemapXmlService.getSitemap(). Ensure the hostnames in your logic match your ENV variables, namely PUBLIC_EN_HOSTNAME. Add logging around the xml2js parsing if the output seems empty or malformed.

By following the above patterns - configuring SXA sitemap settings, automating generation on publish, and customizing for your site topology - you can ensure that XM Cloud serves accurate, SEO-friendly sitemaps. This helps search engines index your content fully and respects the multi-lingual domain structures and refresh logic specific to a headless architecture.

References: one, two, three and four.

Merry Christmas and happy New Year!

Every year I create a special Christmas postcard to congratulate my readers on the oncoming new year, full of changes and opportunities. Wish you all the best in 2025!

My artwork from the past years: 2024, 2023, 2022, 2021, 2020, 2019, 2018, 2017, and 2016.


Reviewing my 2024 Sitecore MVP contributions

Sitecore Technology MVP awards: 2017, 2018, 2019, 2020, 2021, 2022, 2023, and 2024.

The Sitecore MVP program is designed to recognize individuals who have demonstrated advanced knowledge of the Sitecore platform and a commitment to sharing knowledge and technical expertise with community partners, customers, and prospects over the past year. The program is open to anyone who is passionate about Sitecore and has a desire to contribute to the community.

Over the past application year starting from December 1st, 2023, I have been actively involved in the Sitecore community, contributing in a number of ways.

Sitecore Blogs 

  1. This year I have written 18 blog posts on the Perficient site on various topics related to Sitecore, including my Crash Course to Next.js with TypeScript and GraphQL, top-notch findings about XM Cloud and other composable products, best practices, tips and tricks, and case studies. Listing them all as bullets would make this post too long, so instead I leave the link to the entire list, shown in reverse chronological order.
  2. I have also been posting on my very own blog platform, which already contains more than 200 posts about Sitecore accumulated over the past years.
  3. Also, I occasionally create video recordings/walkthroughs and upload them to my YouTube channel.

Sitecore User Groups 

  1. Organized three Los Angeles Sitecore User Group meetups (#19, #20, and #21). This user group has ~480 members!
  2. Last fall I established and organized the most wanted user group of the year - the Sitecore Headless Development User Group - educating its 410+ members. This one is very special, since headless development has become the new normal of delivering sites with Sitecore, while many professionals feel left behind, unable to catch up with the fast-emerging tech. I made it my personal mission to run it twice per quarter, helping the community learn and grow "headlessly". It became the most frequently run and most attended/reviewed of all Sitecore user groups: after events #1 and #2 last fall, eight more events (#3 through #10) were organized over this year, with event #11 scheduled for December 12th. All the recordings are publicly available on YouTube and are also referenced from the individual event pages.
  3. Presented my innovative approach to the Content Migration for XM Cloud solutions.
  4. Another user group presentation covers all the new features arriving with Next.js 15, the breaking API changes, and what it all means for Sitecore.

GitHub

  • The Sifon project keeps being maintained and receives new features; for example, Sifon got support for Sitecore 10.4 platforms.
  • I keep the Awesome Sitecore project up to date. This repository has plenty of stars on GitHub and is an integral part of the big Awesome Lists family; if you haven't heard of Awesome Lists and their significance, I highly recommend reading these articles - the first and the second.
  • There are also a few less significant repositories among my contributions that are still meaningful and helpful.

Sitecore Mentor Program 

  • Got two mentees in 2024 and supported them over the course of the year; I also delivered them both full-scale XM Cloud training along with certification.
  • One of my past-year mentees was recognized as a Sitecore MVP in 2024, resulting from an Exclusive Mentorship Agreement and proving my mentoring approach successful.

MVP Program

  • I participate in most of the webinars and MVP Lunches (often in both time zones per event).
  • I think the MVP Summit is the best perk of the MVP Program, so I never miss it. This year I learned a lot and also provided feedback to the product teams, as usual.
  • I participate in several streams of the Early Access Program, sharing insights with the product team ahead of GA dates.
  • In the past, I have participated in a very honorable activity: helping to review first-time applicants for the MVP Program. This is the first line of evaluation, where we carefully match every first-time applicant against the high Sitecore MVP standards. This year I am taking part in the reviewing as well.

Sitecore Learning

I have collaborated with the Sitecore Learning team for the past 2-3 years, and this year was no exception:

  • I was invited by Sitecore Learning to make a detailed review of a new feature - the XM Cloud Forms Builder - for the Tips & Tricks series.

Sitecore Telegram 

  • I am making Telegram a premium-level channel for delivering Sitecore news and materials. Telegram has a unique set of features that no other software can offer, and I am leveraging these advantages for more convenience to my subscribers.
  • Started in 2017 as a single channel, it expanded rapidly and has now reached a milestone of 1,100 subscribers!
  • Growth did not stop there but escalated further: going composable beyond Sitecore, with a dedicated channel for almost every composable product. Here they all are:

Support Tickets

  • CS0514702 (Content Hub)
  • CS0462816 (SPE for XM Cloud)
  • CS0518934 (Forms Builder)

Other Contributions

  • I created the Sitecore MVP section on Wikipedia, explaining the MVP Program, its significance for Sitecore, and the overall process of determining the winners.
  • I am very active on LinkedIn (7K+ followers) and Twitter, aka X (~1.2K followers), with multiple posts per week, sometimes a few a day.
  • With my dedication to Sitecore's new flagship product, XM Cloud, it was no wonder I launched a new XM Cloud Daily series of tips and tricks on social media (this actually started in December 2024, so it falls into the new application period).
  • That comes in addition to the existing series on LinkedIn - Headless Tips & Tricks - where I share the insights and nuances of modern headless development with Sitecore.

The above is what I recall of my annual contributions so far. Wishing all decent applicants success in joining this elite club in the coming year!

XM Cloud content migration: connecting external database

Historically, when performing content migration with Sitecore, we used to deal with database backups. In the modern SaaS world, we have the luxury of neither managing cloud database backups nor the corresponding UI for doing so. Therefore, we must find an alternative approach.

Technical Challenge

Let's assume we have a legacy Sitecore website - in my case that was XP 9.3 - and we've been provided with only a master database backup containing all the content. The objective is to perform content migration from this master database into new and shiny XM Cloud environment(s).

Without having direct access to the cloud, we can only operate locally. In theory, there could be a few potential ways of doing this:

  1. Set up a legacy XP of the desired version with the legacy content database already attached/restored to it. Then try to attach (or restore) a vanilla XM Cloud database to a local SQL Server as a recipient database in order to perform content migration into it. Unfortunately, this approach does not work due to SQL Server version incompatibility between XM Cloud and XP 9.3. Even if that were possible, running XP 9.3 with the XM Cloud database won't work, as XP 9.3 neither knows about the XM Cloud schema nor is capable of handling the required Items-as-Resources feature, which was introduced later in XP 10.1. Therefore, this option is not possible.

  2. Can we go the other way around by using the old database along with XM Cloud? This is not documented, but let’s assess it:

    1. Definitely won’t work in the cloud since we’re not given any control of DBs and their maintenance or backups.

    2. In a local environment, XM Cloud only works in Docker containers and it is not possible to use it with an external SQL Server where we have a legacy database. But what if we try to plug that legacy database inside of the local SQL Container? Sadly, there are no documented ways of achieving that.

  3. Keep two independent instances side by side (legacy XP and XM Cloud in containers) and use an external tool to connect them both in order to migrate the content. In theory that is possible, but it carries a few drawbacks.
    1. The tool of choice is Razl, but it is not free, requires a paid license, and does not offer a free trial to even test this out.
    2. Connecting to a containerized environment may not be easy and requires some additional preparation.
    3. You may need a high-spec computer (or at least two mid-level machines connected to the same network) to run both instances side by side.

After some consideration, the second approach seems to be reasonable to try so let’s give it a chance and conduct a PoC.

Proof of Concept: local XM Cloud with external content database

To utilize the second approach, we're going to try attaching the given external legacy database to XM Cloud running in a local containerized setup. That will allow using the built-in UI for mass-migrating content between databases (as pictured below), along with Sitecore PowerShell scripts for finalizing and fine-tuning the migrated content.

Control Panel

Step 1: Ensure the SQL Server port is externally exposed

We will connect SQL Server Management Studio from the host through a port of the SQL Server container that is exposed externally. Luckily, that has been done for us already; just make sure docker-compose has:

ports:
  - "14330:1433"

Step 2: Spin up the XM Cloud containers and confirm XM Cloud works fine for you

Nothing extraordinary here, as easy as running .\init.ps1 followed by .\up.ps1.

Step 3: Connect SQL Management Studio to SQL Server running in a container.

After you spin up the containers, run SQL Server Management Studio and connect to the SQL Server running in the SQL container through the exposed port 14330, as set up in step 1:

Connection parameters

Step 4: Restore the legacy database

If you have a Data-Tier Application ("bacpac") file, you may want to do an extra step and convert it into a binary backup for the particular SQL Server version used by XM Cloud before restoring. This step is optional, but in case you want to restore the backup more than once (which is likely to happen), it makes sense to take a binary backup as soon as you restore the bacpac for the first time. Data-tier backups restore much more slowly than binary ones, so that will definitely save time in the future.

Once connected, let's enable contained database authentication. This step is mandatory; otherwise, it will not be possible to restore the database:

EXEC sys.sp_configure N'contained database authentication', N'1'
GO
EXEC ('RECONFIGURE WITH OVERRIDE')
GO

One more challenge lies ahead: when performing backup and restore operations, SQL Server shows paths local to the server engine, not to the host machine. That means our backup must exist "inside" the SQL container. Luckily, we have this covered too. Make sure docker-compose.override.yml contains:

mssql:
  volumes:
    - type: bind
      source: .\docker\data\sql
      target: C:\data

That means one can place legacy database backups into the .\docker\data\sql folder of the host machine, and they will magically appear within the C:\data folder when using the SQL Server Management Studio database restore tool, which you can run now.

Important! Restore the legacy database using the "magic name" in the format Sitecore.<DB_NAME_SUFFIX>; further below I will be using the value RR as DB_NAME_SUFFIX.

Once the database is restored in SQL Server Management Studio under the name Sitecore.RR, we need to plug it into the system. There is a naming convention hidden from our eyes within the CM container.

Step 5: Configure connection strings

Unlike XM/XP, there is no documented way to plug in an external database. The way connection strings are mapped to the actual system is cumbersome; it uses some "magic" hidden within the container itself and obfuscated from our eyes. I only managed to reach it in an experimental way. Here are the steps to reproduce:

  • Add an environment variable to the docker-compose record for CM:

    • Sitecore_ConnectionStrings_RR: Data Source=${SQL_SERVER};Initial Catalog=${SQL_DATABASE_PREFIX}.RR;User ID=${SQL_SA_LOGIN};Password=${SQL_SA_PASSWORD}
  • Add a new connection string record. To do so, you'll need to create a connection strings file within your customization project as .\src\platform\<SITENAME>\App_Config\ConnectionStrings.config, with the content of the connection strings file from the CM container plus one new entry (something like <add name="rr" connectionString="..." /> - hypothetical here; copy the exact content from your CM container):

Please note the difference in the suffix format between the two records above; that is totally fine - the CM container still processes it correctly.

Step 6: Reinstantiating CM container

Simply restarting the CM container is not sufficient: you must remove it and re-create it, as just killing/stopping it is not enough.

For example, the below command will not work for that purpose:

docker-compose restart cm

… nor will this one:

docker-compose kill cm

The reason is that CM will not pick up environment variable changes from the docker-compose file upon restart. Do this instead:

docker-compose kill cm
    docker-compose rm cm --force
    docker-compose up cm -d

Step 7: Validating

  1. Inspecting the CM container for environment variables will show the new connection string, as added:

    "Env": [
        "Sitecore_ConnectionStrings_RR=Data Source=mssql;Initial Catalog=Sitecore.RR;User ID=sa;Password=6I7X5b0r2fbO2MQfwKH"

  2. Inspecting the connection strings config (located at C:\inetpub\wwwroot\App_Config\ConnectionStrings.config in the CM container) confirms that it contains the newly added connection string.

Step 8: Register new database with XM Cloud

This can be done with the below config patch. Save it as docker\deploy\platform\App_Config\Include\ZZZ\z.rr.config for testing, and later do not forget to include it in the platform customization project so that it gets shipped with each deployment:
<?xml version="1.0" encoding="UTF-8"?>
    <configuration
        xmlns:patch="www.sitecore.net/.../">
        <sitecore>
            <eventingdefaultProvider="sitecore">
                <eventQueueProvider>
                    <eventQueuename="rr"patch:after="evertQueue[@name='web']"type="Sitecore.Data.Eventing.$(database)EventQueue, Sitecore.Kernel">
                        <paramref="dataApis/dataApi[@name='$(database)']"param1="$(name)"/>
                        <paramref="PropertyStoreProvider/store[@name='$(name)']"/>
                    </eventQueue>
                </eventQueueProvider>
            </eventing>
            <PropertyStoreProvider>
                <storename="rr"patch:after="store[@name='master']"prefix="rr"getValueWithoutPrefix="true"singleInstance="true"type="Sitecore.Data.Properties.$(database)PropertyStore, Sitecore.Kernel">
                    <paramref="dataApis/dataApi[@name='$(database)']"param1="$(name)"/>
                    <paramresolve="true"type="Sitecore.Abstractions.BaseEventManager, Sitecore.Kernel"/>
                    <paramresolve="true"type="Sitecore.Abstractions.BaseCacheManager, Sitecore.Kernel"/>
                </store>
            </PropertyStoreProvider>
            <databases>
                <databaseid="rr"patch:after="database[@id='master']"singleInstance="true"type="Sitecore.Data.DefaultDatabase, Sitecore.Kernel">
                    <paramdesc="name">$(id)
                    </param>
                    <icon>Images/database_master.png</icon>
                    <securityEnabled>true</securityEnabled>
                    <dataProvidershint="list:AddDataProvider">
                        <dataProviderref="dataProviders/main"param1="$(id)">
                            <disableGroup>publishing</disableGroup>
                            <prefetchhint="raw:AddPrefetch">
                                <sc.includefile="/App_Config/Prefetch/Common.config"/>
                                <sc.includefile="/App_Config/Prefetch/Webdb.config"/>
                            </prefetch>
                        </dataProvider>
                    </dataProviders>
                    <!-- <proxiesEnabled>false</proxiesEnabled> -->
                    <archiveshint="raw:AddArchive">
                        <archivename="archive"/>
                        <archivename="recyclebin"/>
                    </archives>
                    <cacheSizeshint="setting">
                        <data>100MB</data>
                        <items>50MB</items>
                        <paths>2500KB</paths>
                        <itempaths>50MB</itempaths>
                        <standardValues>2500KB</standardValues>
                    </cacheSizes>
                </database>
            </databases>
        </sitecore>
    </configuration>

Step 9: Enabling Sitecore PowerShell Extension

Next, we’d want to enable PowerShell, if that is not yet done. You won’t be able to migrate the content using SPE without performing this step.

<?xml version="1.0" encoding="utf-8"?>
    <configuration
        xmlns:patch="http://www.sitecore.net/xmlconfig/"
        xmlns:role="http://www.sitecore.net/xmlconfig/role/"
        xmlns:set="http://www.sitecore.net/xmlconfig/set/">
        <sitecorerole:require="XMCloud">
            <powershell>
                <userAccountControl>
                    <tokens>
                        <tokenname="Default"elevationAction="Block"/>
                        <tokenname="Console"expiration="00:55:00"elevationAction="Allow"patch:instead="*[@name='Console']"/>
                        <tokenname="ISE"expiration="00:55:00"elevationAction="Allow"patch:instead="*[@name='ISE']"/>
                        <tokenname="ItemSave"expiration="00:55:00"elevationAction="Allow"patch:instead="*[@name='ItemSave']"/>
                    </tokens>
                </userAccountControl>
            </powershell>
        </sitecore>
    </configuration>

Include the above code into a platform customization project as .\docker\deploy\platform\App_Config\Include\ZZZ\z.SPE.config. If everything is done correctly, you can run SPE commands, as below:

SPE results

The Result

After all the above steps are done correctly, you will be able to utilize the legacy content database along with your new shiny local XM Cloud instance:
Result in Sitecore Content Editor
Now you can copy items between databases using the built-in Sitecore UI, preserving their IDs and version history. You can also copy items with SPE from one database to another, since both are visible to the SPE engine.

.NET Core Renderings for XM Cloud finally gets some love

That is not a secret: Sitecore has always prioritized the Next.js framework as the first-class citizen for XM Cloud. All the best and finest features tend to find their way to that framework first. However, recently there has been much activity around the .NET Core Rendering Framework, which makes a lot of sense given that most of us Sitecore tech professionals originate from a Microsoft and .NET background. Even more exciting - it is built on .NET 8, the latest LTS runtime!

Starter Kit

The ASP.NET Core framework has been with us for a while, periodically receiving minor updates and fixes. But let's be honest: having an SDK on its own is one thing, but receiving a decent starter kit on top of that framework is what lets us developers actually create at scale. And that moment has just occurred - without any loud fanfare, the XMC ASP.NET Core Starter Kit went public. Please be aware that this is only a PRE-RELEASE version and has its own temporary shortcomings. I gave it a try and want to share my findings with you.

What are these shortcomings? Just a few:

  • FEaaS and BYOC components are not yet supported; therefore, you also cannot use Forms, since it leverages those
  • Newtonsoft.Json was removed in favor of the built-in System.Text.Json serializer, which is stricter, so some components may fail
  • SITECORE_EDGE_CONTEXT_ID variable is not supported

Everything else seems to work the same. There is also some expectation that XM Cloud will support .NET rendering with a built-in editing host at some point, in the same manner that works today with JSS applications, but I do not work for Sitecore and can only make assumptions and guesses without any certainty.

First Impression

I forked the repo and cloned the forked code into my computer. Let’s take a look at what we have got there.

VS Code

  • the code varies from what we are used to seeing in the XM Cloud Foundation Head starter kit, and that's understandable
  • at the root folder we still have xmcloud.build.json, sitecore.json and the folders - .config and .sitecore
  • xmcloud.build.json is required for cloud deploys, but does not have the renderingHosts root section required for editing host(s), as I explained above
  • there is a headapps folder keeping the solution file along with the .NET project subfolder(s), currently just a single one - aspnet-core-starter
  • there is also a local-containers folder that contains the docker-compose files, .env, docker files, scripts, Traefik, and the rest of the container assets we are used to
  • another difference - the authoring folder contains serialization settings and items, as well as a .NET Framework project for CM customizations
  • however, there are no init.ps1 and up.ps1 files, but those are easy to create yourself by stealing and modifying the ones from XM Cloud Foundation Head

With that in mind, we can start investigating. There is a ReadMe document explaining how to deploy this codebase, but before going ahead with it I of course decided to:

Run Local Containers

There are no instructions on container setup, only for cloud deployment, but after spending a few years with Foundation Head, the very first thing that naturally comes into my mind is running this starter kit in local Docker containers. Why not?

There are a couple of things one should do first before spinning up containers.

1. Modify settings in .ENV file – at least these two:

# Enter the value for SQL Server admin password:
SQL_SA_PASSWORD=SA_PASSWORD
# Provide a folder storing a Sitecore license file:
HOST_LICENSE_FOLDER=C:\Projects
2. We need to generate Traefik SSL Certificates. To do so let’s create .\local-containers\init.ps1 script with the below content:
    [CmdletBinding(DefaultParameterSetName = "no-arguments")]
    Param()
    $ErrorActionPreference = "Stop";
    
    # duplicated in the up.ps1 script
    $envContent = Get-Content .env -Encoding UTF8
    $xmCloudHost = $envContent | Where-Object {$_ -imatch "^CM_HOST=.+"}
    $renderingHost = $envContent | Where-Object {$_ -imatch "^RENDERING_HOST=.+"}
    $xmCloudHost = $xmCloudHost.Split("=")[1]
    $renderingHost = $renderingHost.Split("=")[1]
    
    Push-Location docker\traefik\certs
    try{
        $mkcert = ".\mkcert.exe"
        if ($null -ne (Get-Command mkcert.exe -ErrorAction SilentlyContinue)) {
            # mkcert installed in PATH
            $mkcert = "mkcert"
        } elseif (-not (Test-Path $mkcert)) {
            Write-Host "Downloading and installing mkcert certificate tool..." -ForegroundColor Green
            Invoke-WebRequest "https://github.com/FiloSottile/mkcert/releases/download/v1.4.1/mkcert-v1.4.1-windows-amd64.exe" -UseBasicParsing -OutFile mkcert.exe
            if((Get-FileHash mkcert.exe).Hash -ne "1BE92F598145F61CA67DD9F5C687DFEC17953548D013715FF54067B34D7C3246"){
                Remove-Item mkcert.exe -Force
                throw "Invalid mkcert.exe file"
            }
        }
        Write-Host "Generating Traefik TLS certificate..." -ForegroundColor Green
        & $mkcert -install
        & $mkcert "$xmCloudHost"
        & $mkcert "$renderingHost"
    }
    catch{
        Write-Error "An error occurred while attempting to generate TLS certificate: $_"
    }
    finally{
        Pop-Location
    }

    Write-Host "Adding Windows host"
    Add-HostsEntry "$renderingHost"
    Add-HostsEntry "$xmCloudHost"

    Write-Host "Done!" -ForegroundColor Green

And then execute this script:

Certs

There is no up.ps1 script, so instead let’s run docker-compose directly: docker compose up -d

You may notice some new images show up, and you also see a new container: aspnet-core-starter

Docker

If everything is configured correctly, the script will execute successfully. Then access Sitecore at its default hostname, as configured in the .env file: https://xmcloudcm.localhost/sitecore

From there, you will see no significant changes - the containers just work! Sitecore has no content yet for the head application to interact with; I will add the content from a template, but let's make the cloud deployment first.

Deploy to the Cloud

ReadMe document suggests an inconvenient way of cloud deployment:

1. Create a repository from this template.

2. Log into the Sitecore Deploy Portal.

3. Create a new project using the ‘bring your code’ option, and select the repository you created in step 1.

For the majority of us, who are on the Sitecore Partner side, there are only six environments available grouped into two projects. These allocations are priceless and are carefully shared between all XM Cloud enthusiasts and aspirants who are learning a new platform. We cannot simply “create a new project” because we don’t have that spare project, so in order to create one we have to delete the existing one. Deleting a project requires deleting all (three) of its environments in the first place, which is half of the sandbox capacity, carrying valuable work in progress for many individuals.

That is why I decided to use the CLI instead. Luckily, it works exactly the same as it does with the Next.js starter kits, and from .\.config\dotnet-tools.json you may see that it uses that same version. You deploy with the root folder holding the xmcloud.build.json file as the working directory, so there are no changes in execution.
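
For reference, the commands look roughly like this - quoted from memory, so verify against the Sitecore CLI documentation and substitute your own environment ID:

    dotnet sitecore cloud login
    dotnet sitecore cloud deployment create --environment-id <ENVIRONMENT_ID> --upload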

Eventually, once deployed, we navigate to XM Cloud. I decided to follow the ReadMe and create a Basic site from the Skate Park template, essentially following steps 4-18 from the ReadMe file.

As a side exercise, you will need to remove the Navigation component from the Header partial design item, located at /sitecore/content/Basic/Basic/Presentation/Partial Designs/Header. The Basic site will break in the debugger if you do not delete this currently incompatible rendering, which has a serialization issue.

Building Dev Tunnel in Visual Studio

Next, let's open and build the solution in the Visual Studio IDE, which refers to the .\headapps\aspnet-core-starter.sln file. You may see that it registers three Sitecore dependencies from Sitecore.AspNetCore.SDK.LayoutService.Client:

  • Transient: Sitecore.AspNetCore.SDK.LayoutService.Client.Interfaces.ISitecoreLayoutClient
  • Singleton: Sitecore.AspNetCore.SDK.LayoutService.Client.Serialization.ISitecoreLayoutSerializer
  • Singleton: Sitecore.AspNetCore.SDK.LayoutService.Client.Serialization.Converter.IFieldParser

Modify .\headapps\aspnet-core-starter\appsettings.json with the setting values collected from the previous steps. You will end up with something looking like this:

Appsettings.json

Now let's create a Dev Tunnel in Visual Studio:

Dev Tunnel

There will be at least two security prompts:

Dev Tunnel Authorize Github Dev Tunnel Authorize Notice

If everything goes well, a confirmation message pops up:

Dev Tunnel Created

Now you will be able to run and debug your code in Visual Studio:

Debugger Works

Make a note of the dev tunnel URL, so that we can use it to configure the Rendering Host, as described in step 27 of the ReadMe. You will end up with something like below:

Rendering Hosts

So far so good. You can now run the website by URL and in Experience Editor. Running it in Pages, however, will not work yet due to the below error:

No Pages Without Publish

To explain that: Experience Editor runs as a part of the CM and pulls content from a GraphQL endpoint on that same CM. Pages, in contrast, is a standalone application, so it has access neither to that endpoint nor to the Rendering Hosts settings item. It only has access to Experience Edge, so we must publish first. Make sure you publish the entire Site Collection. Once complete, Pages works perfectly well and displays the site:

Pages Work 1 Pages Work 2

To explain what happens above: the Pages app (which is a SaaS-run editor) queries Experience Edge for the settings of the rendering/editing host (which runs in a debuggable dev tunnel from Visual Studio) and renders HTML right there, with the layout data and content pulled from Experience Edge.

Deploy Rendering Host to Cloud

Without much thinking, I decided to deploy the rendering host as an Azure Web App, with the assumption that a .NET 8 application would be best supported in its native cloud.

Web App Configure

After the Web App is created, add the required environment variables. The modern SITECORE_EDGE_CONTEXT_ID variable is not yet supported with the .NET Core SDK, so we should go the older way:

Azure App Settings

A pleasant bonus of the GitHub integration is that Azure creates a GitHub Actions workflow with a functional default build and deployment. There is almost nothing to change; I only made a single fix, replacing run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp with a hardcoded path, since this variable contains a space (from the "Program Files" part) and gets incorrectly tokenized, breaking the build. After this fix, GitHub Actions built the right way and I started receiving green statuses:

Github Actions

… and the published site shows up from the Azure Web App powered rendering host:

Published Site

Finally, we can get rid of Dev Tunnel, replacing it with the actual “published site” hostnames:

Getting Rid Of Dev Tunnel

After republishing the Rendering Host item to Edge, we can stop the debugger and close Visual Studio. Both the Experience Editor and the Pages app now work with an editing host served by the Azure Web App.

Verdict

Of course, it would be much anticipated for XM Cloud to get built-in .NET editing host capabilities the same way JSS has them. But even without that, I applaud the Sitecore development team for creating and continuing to work on this starter kit, as it is a big milestone for all of us in the .NET community!

With this kit, we can now start building XM Cloud-powered .NET apps at a faster pace. I believe all the missing features will find their way into the product, and maybe later there will be some (semi-)official SSG support for .NET, something like Statiq. That would allow deployments to a wider set of hosting options, such as Azure Static Web Apps, Netlify, and even Vercel, which does not support .NET as of today.

Full guide to enabling Headless Multisite Add-On for your XM Cloud solution

The Sitecore Headless Next.js SDK recently brought a feature that makes it possible to have multiple websites from the Sitecore content tree served by the same rendering host JSS application. It uses Next.js Middleware to serve the correct Sitecore site based on the incoming hostname.

Why and When to Use the Multisite Add-on

The Multisite Add-on is particularly useful in scenarios where multiple sites share common components or when there is a need to centralize the rendering logic. It simplifies deployment pipelines and reduces infrastructure complexity, making it ideal for enterprises managing a portfolio of websites under a unified architecture. This approach saves resources and ensures a consistent user experience across all sites.

How it works

The application fetches site information at build time, not at runtime (for performance reasons). Every request invokes all Next.js middleware. Because new site additions are NOT frequent, fetching site information at runtime (while technically possible) is not the best solution, due to the potential impact on visitor performance. You can automate this process by using a webhook to trigger automatic redeployments of your Next.js app on publish.
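
Conceptually - and this is an illustration only, not the actual @sitecore-jss middleware code - the resolution boils down to matching the request's Host header against the build-time sites array (the SiteInfo shape and matching logic here are simplified assumptions):

    type SiteInfo = { name: string; hostName: string; language: string };

    // In the real app this array is baked into src\temp\config.js at build time
    const sites: SiteInfo[] = JSON.parse(process.env.SITES || '[]');

    export function resolveSite(host: string): SiteInfo | undefined {
      // a site may declare several hostnames separated by "|", with * wildcards
      return sites.find((site) =>
        site.hostName.split('|').some((pattern) => hostMatches(pattern, host))
      );
    }

    function hostMatches(pattern: string, host: string): boolean {
      const regex = new RegExp('^' + pattern.replace(/\./g, '\\.').replace(/\*/g, '.*') + '$', 'i');
      return regex.test(host);
    }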

Sitecore provides a relatively complicated diagram of how it works (pictured below), but if you do not get it from the first look, do not worry as you’ll get the understanding after reading this article.

[Diagram: Sitecore's multisite middleware flow]

Technical Implementation

There are a few steps one must complete to make it work. Let's start with the local environment.

Since multisite involves different sites, we need to configure hostnames. The Main site operates on the main.localhost hostname, and it is already configured as below:

.\.env
    RENDERING_HOST=main.localhost
.\docker-compose.override.yml
    PUBLIC_URL: "https://${RENDERING_HOST}"

For the sake of the experiment, we plan to create a second website served locally at second.localhost. To do so, let's add a new environment variable to the root .env file (SECOND_HOST=second.localhost) and introduce some changes to the init.ps1 script:

$renderingHost = $envContent | Where-Object { $_ -imatch "^RENDERING_HOST=.+" }
$secondHost = $envContent | Where-Object { $_ -imatch "^SECOND_HOST=.+" }
$renderingHost = $renderingHost.Split("=")[1]
$secondHost = $secondHost.Split("=")[1]

Further down the file, we also want to create an SSL certificate for this domain by adding a line at the bottom:

& $mkcert -install
# & $mkcert "*.localhost"
& $mkcert "$xmCloudHost"
& $mkcert "$renderingHost"
& $mkcert "$secondHost"

For Traefik to pick up the generated certificate and route traffic as needed, we need to add two more lines at the end of the .\docker\traefik\config\dynamic\certs_config.yaml file:

tls:
  certificates:
    - certFile: C:\etc\traefik\certs\xmcloudcm.localhost.pem
      keyFile: C:\etc\traefik\certs\xmcloudcm.localhost-key.pem
    - certFile: C:\etc\traefik\certs\main.localhost.pem
      keyFile: C:\etc\traefik\certs\main.localhost-key.pem
    - certFile: C:\etc\traefik\certs\second.localhost.pem
      keyFile: C:\etc\traefik\certs\second.localhost-key.pem

If you try initializing and running, it may seem to work at first glance. But navigating to second.localhost in the browser leads to an infinite redirect loop. Inspecting the cause, I realized it occurs due to a CORS issue, namely that second.localhost does not have CORS configured. Typically, when configuring the rendering host from docker-compose.override.yml, we provide the PUBLIC_URL environment variable to the rendering host container; however, that is a single value, and we cannot pass multiple.

Here’s a more descriptive StackOverflow post I created about this issue.

To resolve it, we must add the second host to the routing rule and define CORS rules as labels on the rendering host, as below:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.rendering-secure.entrypoints=websecure"
  - "traefik.http.routers.rendering-secure.rule=Host(`${RENDERING_HOST}`,`${SECOND_HOST}`)"
  - "traefik.http.routers.rendering-secure.tls=true"
# Add CORS headers to fix CORS issues in Experience Editor with Next.js 12.2 and newer
  - "traefik.http.middlewares.rendering-headers.headers.accesscontrolallowmethods=GET,POST,OPTIONS"
  - "traefik.http.middlewares.rendering-headers.headers.accesscontrolalloworiginlist=https://${CM_HOST}, https://${SECOND_HOST}"
  - "traefik.http.routers.rendering-secure.middlewares=rendering-headers"

Once the above is done, you can run the PowerShell prompt, initialize, and spin up Sitecore containers, as normal, by executing init.ps1 and up.ps1 scripts.

Configuring Sitecore

Wait until Sitecore spins up, navigate to a site collection, right-click it, and add another website, calling it Second and running on the hostname second.localhost.

Make sure the Second site uses the exact same application name as Main; you can configure that from the /sitecore/content/Site Collection/Second/Settings item, in the App Name field. The purpose of this exercise is for both sites to reuse the same rendering host application, and this setting ensures exactly that.

You should also make sure the values of the Predefined application rendering host fields match between the /sitecore/content/Site Collection/Second/Settings/Site Grouping/Second and /sitecore/content/Site Collection/Main/Settings/Site Grouping/Main items.

Another important field here is Hostname; make sure to set it for both websites:

Hostname

Now you are good to edit the Home item of the Second site. Multisite middleware does not affect the editing mode of Sitecore, so there you’ll see no difference.

Troubleshooting

If you’ve done everything right but second.localhost still resolves to the Main website, let’s troubleshoot. The very first location to check is the .\src\temp\config.js file. This file contains a sites variable with the array of sites and related hostnames to be used for site resolution. The important fact is that this file is generated at build time, not runtime.

So, if you open it up and see an empty array (config.sites = process.env.SITES || '[]'), that means you just need to trigger a rebuild, for example by simply removing and recreating the rendering host container:

docker-compose kill rendering
docker-compose rm rendering -f
docker-compose up rendering -d

Also, before running the above, it helps to check SXA Site Manager, which is available under the PowerShell Toolbox in Sitecore Desktop. You should see both sites and the relevant hostnames there, and in the correct order – make sure to move them as high as possible, as this site chain works on a “first come – first served” principle.

Multisite

After the rendering host gets recreated (it may take a while for both the build and spin-up steps), check the sites definition at .\src\temp\config.js again. It should look as below:

config.sites = process.env.SITES || '[{"name":"second","hostName":"second.localhost","language":""},{"name":"main","hostName":"main.localhost","language":""}]'

The number of site records must match the records from SXA Site Manager. Now, opening second.localhost in the browser should show you the rendered home page of the Second site.

Another troubleshooting technique is to inspect the middleware logs. To do so, create a .env.local file at the rendering host app root (if it does not exist yet) and add the debugging parameter:

DEBUG=sitecore-jss:multisite

Now, the rendering host container logs will expose insights into how the multisite middleware processes your requests, resolves site contexts, and sets site cookies. Below is a sample (and correct) output of the log:

sitecore-jss:multisite multisite middleware start: { pathname: '/', language: 'en', hostname: 'second.localhost' } +8h
sitecore-jss:multisite multisite middleware end in 7ms: {
  rewritePath: '/_site_second/',
  siteName: 'second',
  headers: {
    set-cookie: 'sc_site=second; Path=/',
    x-middleware-rewrite: 'https://localhost:3000/_site_second',
    x-sc-rewrite: '/_site_second/'
  },
  cookies: ResponseCookies {"sc_site":{"name":"sc_site","value":"second","path":"/"}}
} +7ms

The above log is crucial to understanding how the multisite middleware works internally. The internal request gets rewritten to https://localhost:3000/_site_second, where ‘second‘ is a tokenized site name parameter. If the .\src\main\src\temp\config.js file contains the corresponding site record, the site gets resolved and the sc_site cookie is set.

If the Second website still resolves to the Main website despite the multisite middleware log outputting the correct site resolution, that is most likely caused by a conflict with other middleware processors. This would be your number one thing to check, especially if you have multiple custom middleware. Multisite middleware is especially sensitive to the execution order, as it sets cookies. In my case, the problem was that the Sitecore Personalize Engage SDK was registered through middleware, programmed to set its own cookie, and ended up conflicting with the multisite middleware.

In that case, you have to play with the order constant within each conflicting middleware (they are all located under the .\src\lib\middleware\plugins folder), followed by a restart of the rendering host.
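
For illustration, a middleware plugin in the starter kit looks roughly like the sketch below – the plugin name and logic here are hypothetical, but the order field is the constant you would tune:

// src/lib/middleware/plugins/custom.ts – a sketch of the plugin shape with the order constant
import { NextRequest, NextResponse } from 'next/server';

class CustomMiddlewarePlugin {
  // Lower values execute earlier; adjust this relative to the multisite plugin
  // if your middleware conflicts with its cookie handling
  order = 2;

  async exec(req: NextRequest, res?: NextResponse): Promise<NextResponse> {
    // Hypothetical pass-through logic; a real plugin would do its work here
    return res || NextResponse.next();
  }
}

export const customMiddlewarePlugin = new CustomMiddlewarePlugin();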

Resources Sharing

Since the multisite add-on leverages the same rendering host app for all the sites that use it, all the components and layout TSX markup, middleware, and the rest of the app resources automatically become available to the second site. However, that is not true by default for Sitecore resources. We must assign at least one website that will share its assets, such as renderings, partials, layouts, etc., across the entire Site Collection. Luckily, that is pretty easy to do at the site collection level:

Site Collection

For the sake of the experiment, let’s make the Second website use Page Designs from the Main site. Once the above sharing is enabled, they automatically become available at the /sitecore/content/Site Collection/Second/Presentation/Page Designs item.

Configuring the cloud

Now that the multisite add-on functions well locally, let’s make the same work in the cloud.

  1. First of all, check in all the code changes to the repository and trigger the deployment, either from the Deploy App, the CLI, or your preferred CI/CD
  2. After your changes arrive at the XM Cloud environment, bring the required changes to Vercel, starting with defining the hostnames for the sites:

Vercel

  3. After you define the hostnames in Vercel, change the hostname under the Site Grouping item for this particular cloud environment to match the hostname configured earlier in Vercel.
  4. Save the changes and smart publish the Site Collection. Publishing takes some time, so please stay patient.
  5. Finally, you must also do a full Vercel re-deployment to force regeneration of the \src\temp\config.js file with the hostnames from Experience Edge published at the previous step.

Testing

If everything was done right in the previous steps, you can access the published Second website from Vercel by its hostname; in our case, that would be https://dev-second.vercel.app. Make sure all the shared partial layouts render as expected.

When testing 404 and 500 pages, you will see that these are shared between both sites automatically, and you cannot configure them individually per site. This occurs because both are located at .\src\pages\404.tsx and .\src\pages\500.tsx of the same rendering host app and use different internal mechanics rather than generic content served from Sitecore through the .\src\pages\[[...path]].tsx route. These error pages could, however, be made multi-lingual, if needed.
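
For reference, here is a minimal sketch of what such a page could look like – the markup below is purely illustrative, as the starter kit ships its own implementation:

// src/pages/404.tsx – an illustrative custom Not Found page (a sketch)
import Link from 'next/link';

const Custom404 = (): JSX.Element => (
  <div>
    <h1>404 – Page Not Found</h1>
    {/* Rendered by the same rendering host app for every site using it */}
    <Link href="/">Go back home</Link>
  </div>
);

export default Custom404;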

Hope you find this article useful!

Sitecore 10.4 is out and here’s all you need to know about it

There was a decent gap: 1.5 years ago, on December 1st, 2022, Sitecore released the previous feature-full version of their XM/XP platform, namely 10.3. That is why I was very excited to look through the newest release of the vendor’s self-hosted platforms and familiarize myself with its changes.

First and foremost, the 10.4 platforms can be obtained exclusively from a new download page, which has moved to its new home at the Sitecore Developer Portal. I recommend bookmarking it for the current and all future releases.

Release Notes

There is an impressive list of 200 changes and improvements coming along with the official Release Notes. I recommend going through it, paying particular attention to the Deprecated and Removed sections.

So, what’s New?

Of the important features and changes, I’d focus on a few:

  • XM to XM Cloud Migration Tool for migrating content, media, and users from a source XM instance to an XM Cloud environment. This tool provides an aid for the routine and sometimes recurring back-end migrations, so our customers/partners can focus on migrating and developing new front-end sites.
  • xDB to CDP Migration Tool for transferring site visitor contact facets to Sitecore’s CDP and Personalize products, and also via Sitecore Connect to external systems. This provides the ability to interwork with or eventually adopt other SaaS-based innovations.
  • A new /sitecore/admin/duplicates.aspx admin page addressing the change in media duplication behavior (now the blobs are in fact also duplicated) – run it upon migration to 10.4 in order to update the media items accordingly.
  • A new Codeless Schema Extension module, enabling business users to extend the xConnect schema without requiring code development. Had it been available earlier, it could have significantly boosted xDB usage by marketers. It will be generally available in mid-May 2024.
  • Improved accessibility to help content authors with disabilities.
  • Sitecore Client Content Reader role allows access into CM without the risk of breaking something – it was a frequently requested feature.
  • It is now possible to extract data from xDB and transform the schema for external analytics tools such as Power BI.
  • GraphQL is enabled by default on the CM container instance in the local dev – which totally makes sense to me.
  • Underlying dependencies updated to the latest – SQL Server 2022, latest Azure Kubernetes Service, Solr 8.11, etc.

Containers

Spinning up Sitecore in local Docker containers used to be the easiest way of getting started. However, the most important fact to consider for a containerized setup is that base images are only available for the ltsc2022 platform, at least for now. If you are lucky enough to use a Windows 11 machine, you get the best possible performance running Sitecore in Process isolation mode. Otherwise, you may struggle with Hyper-V compatibility issues.

The other thing I noticed is that SitecoreDockerTools is simply set to pull the latest version, which is 10.3.40 at the time of writing.

Also, the Traefik image remains on an older version (not Traefik 3.x, but 2.9.8 – previously it was even older, v2.2.0) that does not support ltsc2022 and therefore still uses Hyper-V isolation. You can, however, fix that manually to have each and every image running fast in Process isolation mode. As always, it helps a lot to examine the list of available published images as your own exercise, as some were standardized.

Compared to previous versions, this one seems lightweight, with no helpful PowerShell scripts for bringing containers up and down (so we use docker-compose directly), nor clean-up and other scripts. As before, it supports all three default topologies – XP0, XM1, and XP1.

Sitecore Gallery Tips:

  • Tip 1: Sitecore Gallery has recently moved from MyGet https://sitecore.myget.org/F/sc-powershell/api/v2 to Sitecore hosted NuGet https://nuget.sitecore.com/resources/v2.
  • Tip 2: don’t forget to update the PackageManagement and PowerShellGet modules from PSGallery if needed, as below:
Install-Module -Name PackageManagement -Repository PSGallery -Force -AllowClobber
Install-Module -Name PowerShellGet -Repository PSGallery -Force -AllowClobber

Alternatives to Containers

If for some reason you cannot or are unwilling to use containers, there are other options: SIA and manual installation from a zip archive. Over the past years, I have created a tool called Sifon that is effectively better than SIA, because it can also install all the prerequisites, such as Solr and SQL Server of the required versions, along with downloading the necessary resources from the developer portal. I will add support for 10.4 in the next few days.

10.4 dashboard

Upon installation, you will see the Sitecore Dashboard:

Sitecore 10.4 Dashboard

Version 10.4 operates under revision 010422:

Version 10.4

SXA

This crucial module comes in the corresponding version 10.4, along with a newer 7.0 version of the Sitecore PowerShell Extensions module. The biggest news about this module is that it now supports Tailwind, in the same way XM Cloud does:

Tailwind

Conclusion

In general, time will prove whether this version is what I expect it to be – the most mature version of Sitecore, working faster and more reliably with the updated underlying JavaScript-dependent libraries. I am impatiently waiting for the hot things, such as AI integrations and the delayed feature set promised to appear later in May 2024, to explore and share about.

Cypress: a new generation of end-to-end testing

What is Cypress

Cypress is a modern JavaScript-based end-to-end (e2e) testing framework designed to automate web testing by running tests directly in the browser. Cypress has become a popular tool for testing web applications due to a number of distinctive advantages, such as a user-friendly interface, fast test execution, ease of debugging, ease of writing tests, etc.

Those who have already had any experience with this testing framework probably know about its advantages, which make it possible to cover projects with high-quality and reliable autotests. Cypress has well-developed documentation, one of the best in the industry, with helpful recommendations for beginners, which is constantly being improved, as well as an extensive user community. However, despite the convenience, simplicity, and quick start, when we talk about Cypress tests, we still mean code. In this regard, working effectively with Cypress requires not only an understanding of software testing as such but also the basics of programming, being more or less confident with JavaScript/TypeScript.

Why Cypress

Typically, to test your applications, you’ll need to take the following steps:

  • Launch the application
  • Wait until the server starts
  • Conduct manual testing of the application (clicking the buttons, entering random text in input fields, or submitting a form)
  • Validate the result of your test being correct (such as changes in title, part of the text, etc.)
  • Repeat these steps again after simple code changes.

Repeating these steps over and over becomes tedious and takes up too much of your time and energy. What if we could automate this testing process? Then you could focus on more important things and not waste time re-testing the UI over and over again.

This is where Cypress comes into play. When using Cypress, the only thing you need to do is:

  • Write the code for your test (clicking a button, entering text in input fields, etc.)
  • Start the server
  • Run or rerun the test

That’s it! The Cypress library takes care of all the testing for you. It not only tells you whether all your tests passed or not, but also points to which test failed and why exactly.

How about Selenium

Wait, but we already have Selenium – is it still relevant?

Selenium remained The King of automated testing for more than a decade. I remember myself back in 2015 creating a powerful UI wrapper for Selenium WebDriver to automate and simplify its operations for non-technical users. That application is named Onero and is still available along with its source code. But Cypress offers a powerful UI straight out of the box, plus many more useful tools and integrations – just keep reading to find them below.

Cypress is a next-generation web testing platform. It was developed on top of Mocha and is a JavaScript-based end-to-end testing framework. That’s how it differs from Selenium, which is a testing framework used for web browser automation: Selenium WebDriver controls the browser locally or remotely and is used to test UI automation.

The principal difference is that Cypress runs directly in the browser, while Selenium is external to the browser and controls it via WebDriver. That alone lets Cypress handle async operations and waits far more gracefully – these were constant pain points for Selenium, requiring clumsy error-handling scaffolding.

With that in mind, let’s compare Cypress vs. Selenium line by line:

  • Types of testing. Cypress: front end with APIs, end-to-end. Selenium: end-to-end; doesn’t support API testing.
  • Supported languages. Cypress: JavaScript/TypeScript. Selenium: multiple languages are supported, such as Java, JavaScript, Perl, PHP, Python, Ruby, C#, etc.
  • Audience. Cypress: developers as well as testers. Selenium: automation engineers, testers.
  • Ease of use. Cypress: an easy walk for those familiar with JavaScript, and a bit tricky otherwise, but still developer-friendly, being designed with developers in mind; it also has a super helpful “travel back in time” feature. Selenium: as it supports multiple languages, people can quickly start writing tests, but it’s more time-consuming than Cypress as you have to learn specific syntax.
  • Speed. Cypress: a different architecture that doesn’t utilize a web driver and is therefore faster; Cypress is also written in JavaScript, which is native to the browsers where it executes. Selenium: because of its architecture, it’s hard to create simple, quick tests; however, the platform itself is fast, and you can run many tests at scale, in parallel, and cross-browser.
  • Ease of setup. Cypress: just run npm install cypress --save-dev; it requires no other component installation (unlike Selenium’s web driver), you don’t even have to have a browser as it can use Electron, and everything is well-bundled. Selenium: it has two components (language bindings and a web driver), so installation is more complicated and time-consuming.
  • Integrations & plugins. Cypress: fewer integrations, which is compensated by a rich set of plugins; runs perfectly in Docker containers and supports GitHub Actions. Selenium: integrates with CI, CD, visual testing, cloud vendors, and reporting tools.
  • Supported browsers. Cypress: all Chromium-based browsers (Chrome, Edge, Brave) and Firefox. Selenium: all browsers – Chrome, Opera, Firefox, Edge, Internet Explorer, etc., along with the “scriptable headless” browser PhantomJS.
  • Documentation. Cypress: helpful code samples and excellent documentation in general. Selenium: average documentation.
  • Online community & support. Cypress: a growing community, but smaller than the one Selenium gained over a decade. Selenium: a mature online community.

Selenium is aimed more at QA automation specialists, while Cypress is aimed mostly at developers to improve TDD efficiency. Selenium was introduced in 2004, so it has more ecosystem support than Cypress, which was developed in 2015 and continues to expand.

Installation and first run

You need to have Node.js installed, as Cypress is shipped as an npm module.

npm init
npm install cypress --save-dev

Along with Cypress itself, you will likely want to install the XPath plugin; otherwise, you’re limited to CSS locators only.

npm install -D cypress-xpath

Once ready, you may run it:

npx cypress open

From there you'll see two screens: E2E Testing and Component Testing.

Main Screen

Most of the time you will likely be dealing with E2E testing. That’s where you choose your desired browser and execute your tests:

E2e

By default, you’ll find live documentation in the form of a bunch of helpful pre-written tests exposing the best of the Cypress API in action. Feel free to modify, copy, and paste them as per your needs.

Tests
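
For instance, a minimal spec of your own might look like the sketch below (the URL and selector are placeholders to replace with your application’s):

// cypress/e2e/smoke.cy.js – a minimal first test (sketch)
describe('Smoke test', () => {
  it('loads the home page and finds the header', () => {
    cy.visit('https://example.cypress.io');  // replace with your app URL
    cy.get('h1').should('be.visible');       // CSS locator
    // with cypress-xpath installed, XPath locators work too:
    // cy.xpath('//h1').should('exist');
  });
});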

Here’s how Cypress executes tests from the UI, shown on a sample test run:

Sample Run

But of course, in the most basic scenario, you can run it from the console. You can even pass a specific test spec file to execute:

npx cypress run --spec .\cypress\e2e\JumpStart\JumpStart.cy.js

Regardless of the execution mode, the results are persisted:

From Console

Component Testing

This feature was added recently and stayed in preview for a long time. Now that it is out of beta, let’s take a look at what Component Testing is.

Instead of working with the entire application, component testing lets you mount a component in isolation. This saves time by loading only the parts you’re interested in and allows you to test much faster. You can also test different properties of the same component and see how they display. This can be very useful in situations where small changes affect a large part of the application.

Component

In addition to initializing the settings, Cypress will create several support files; one of the most important is component.ts, located in the cypress/support folder.

import { mount } from 'cypress/react18'

declare global {
  namespace Cypress {
    interface Chainable {
      // Registers a custom cy.mount() command for mounting components in isolation
      mount: typeof mount
    }
  }
}

Cypress.Commands.add('mount', mount)

// Example of usage:
// cy.mount(MyComponent)

This file contains the component mount function for the framework being used. Cypress supports React, Angular, Svelte, Vue, and even frameworks like Next.js and Nuxt.
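
Assuming a simple React Button component exists in your project (the component and its props are hypothetical), a component test could look like this sketch:

// src/components/Button.cy.tsx – a component test sketch
import React from 'react';
import Button from './Button'; // hypothetical component under test

describe('<Button />', () => {
  it('renders the label and reacts to clicks', () => {
    const onClick = cy.stub().as('onClick');
    // cy.mount() comes from the component.ts support file shown above
    cy.mount(<Button label="Save" onClick={onClick} />);
    cy.contains('Save').click();
    cy.get('@onClick').should('have.been.calledOnce');
  });
});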

Cypress features

  1. Time travel
  2. Debuggability
  3. Automatic waits (built-in waits)
  4. Consistent results
  5. Screenshots and videos
  6. Cross browser testing – locally or remotely

I want to focus on some of these features.

Time Travel. This is an impressive feature that allows you to see the current state of your application at any time while it is being tested.

Debuggability. Your Cypress test code runs in the same run loop as your application. This means you have access to the code running on the page, as well as the things the browser makes available to you, like document, window, and debugger. You can also leverage .debug() function to quickly inspect any part of your app right while running a test. Just attach it to any Cypress chain of commands to have a look at the system’s state at that moment:

it('allows debugging like a pro', ()=>{
    cy.visit('/location/page')
    cy.get('[data-id="selector"]').debug()
})

Automatic waits. As a key advantage over Selenium, Cypress is smart enough to know how fast an element is animating and will wait for it to stop animating before acting on it. It will also automatically wait until an element becomes visible, becomes enabled, or is no longer covered by another element.
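
For example, the following chain needs no explicit sleeps – cy.get() keeps retrying until the element appears and each assertion passes, up to the configured timeout (the selector and text are placeholders):

// No explicit waits needed – Cypress retries until assertions pass or time out
cy.get('[data-id="status"]', { timeout: 10000 }) // waits for the element to exist
  .should('be.visible')                          // then waits for visibility
  .and('contain.text', 'Ready');                 // then waits for the text to appear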

Consistent results. Due to its architecture and runtime nuances, Cypress fully controls the entire automation process from top to bottom, which puts it in the unique position of being able to understand everything happening in and outside of the browser. This means Cypress is capable of delivering more consistent results than any other external testing tool.

Screenshots and videos. Cypress can capture both screenshots and videos. One can take a complete-page or a particular-element screenshot with the screenshot command. Cypress also has a built-in feature to capture screenshots of failed tests.

describe('Test with a screenshot', function(){
    it("Test case 1", function(){
        //navigate URL
        cy.visit("https://microsoft.com/windows")

        //complete page screenshot with filename - CompletePage
        cy.screenshot('CompletePage')

        //screenshot of the particular element
        cy.get(':nth-child(3) > section').screenshot()
    });
});

Produced screenshots appear inside the screenshots folder of the project, but that’s configurable from the global configuration.

Cypress can also capture video of test runs. Enable it from cypress.config.ts:

import { defineConfig } from 'cypress'

export default defineConfig({
    video: true,
})

Please refer to the official documentation that explains how to use screenshots and videos with Cypress.

GitHub Actions Integration

Cypress integrates nicely with GitHub Actions, allowing you to run your tests in CI.

To do this on the GitHub Actions server, you first need to install everything necessary. We also need to determine when we want to run the tests (for example, on demand, or every time new code is pushed). This is how we gradually define what our GitHub Action will look like. In GitHub Actions, these plans are called “workflows”. Workflow files live under the .github/workflows folder. Each file is a YAML with a set of rules configuring what will get executed and how:

name: e2e-tests

on: [push]

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Cypress run
        uses: cypress-io/github-action@v5
        with:
          start: npm start

Let’s look at what’s going on in this file. In the first line, we give the action a name. It can be anything, but it is better to be descriptive.

In the second line, we define the event on which this script should be executed. There are many different events, such as push, pull_request, schedule, or workflow_dispatch (which allows you to trigger an action manually).

The third line specifies the task or tasks to be performed. Here we must determine what needs to be done. If we were starting from scratch, this is where we would run npm install to install all the dependencies, start the application, and run tests against it. But, as you can see, we are not starting from scratch – we are using predefined actions, re-using previously created macros instead. For example, cypress-io/github-action@v5 will run npm install, correctly cache Cypress (so installation is faster next time), start the application with npm start, and run npx cypress run. And all this with just four lines in a YAML file.

Run Cypress in containers

In modern automated testing, setting up and maintaining a test environment can often be a time-consuming task, especially when working with multiple dependencies and their configurations, different operating systems, libraries, tools, and versions. Often one may encounter dependency conflicts, inconsistency of environments, limitations in scalability and error reproduction, etc., which ultimately leads to unpredictability and unreliability of testing results.

Using Docker greatly helps prevent most of these problems, and the good news is that you can do exactly that. In particular, using Cypress in Docker can be useful because:

  1. It ensures that Cypress autotests run in an isolated test environment. The tests are essentially independent of what is outside the container, which ensures reliable and uninterrupted operation every time they are launched.
  2. For running tests locally, this means the absence of Node.js, Cypress, or any exotic browser on the host computer won’t become an obstacle. This not only allows running Cypress locally on different host computers, but also deploying it in CI/CD pipelines and to cloud services, ensuring uniformity and consistency in the test environment. When moving a Docker image from one server to another, containers with the application itself and the tests will work the same regardless of the operating system used or the presence of Node.js, Cypress, browsers, etc. This makes Cypress autotests reproducible and their results predictable across different underlying systems.
  3. Docker allows you to quickly deploy the necessary environment for running Cypress autotests, so you do not need to install operating system dependencies, the necessary browsers, and test frameworks each time.
  4. It speeds up the testing process by reducing the total time for test runs. This is achieved through scaling, i.e. increasing the number of containers, running Cypress autotests in different containers in parallel, parallel cross-browser testing using Docker Compose, etc.

The official images of Cypress

Today, the public Docker Hub image repository, as well as the corresponding cypress-docker-images repository on GitHub, hosts 4 official Cypress Docker images: cypress/base, cypress/browsers, cypress/included, and cypress/factory.
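
For example, with the cypress/included image (which bundles Node.js, Cypress, and browsers, and runs cypress run as its entry point), executing the whole suite from the project folder takes a single command – the version tag below is just an example:

docker run -it --rm -v ${PWD}:/e2e -w /e2e cypress/included:13.6.0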

Limitations of Cypress

Nothing is ideal on Earth, so Cypress also has some limitations mostly caused by its unique architecture:

  1. One cannot use Cypress to drive two browsers at the same time
  2. It doesn’t provide support for multi-tabs
  3. Cypress only supports JavaScript for creating test cases
  4. Cypress doesn’t provide support for browsers like Safari and IE at the moment
  5. Reading or writing data into files is difficult
  6. Limited support for iFrames

Conclusion

Testing is a key step in the development process, as it ensures that your application works correctly. Some programmers prefer to test their programs manually because writing tests requires a significant amount of time and energy. Fortunately, Cypress solves this problem by allowing developers to write tests in a short amount of time.