
The ultimate guide to Sitecore XM Cloud

If you ask anyone in the Sitecore community what the biggest hype of 2022 was, there wouldn't be any opinion other than XM Cloud. Let's take a look at this latest and shiny SaaS offering!

Content

Overview, Architecture, Features

Introduction

Sitecore XM Cloud is a cloud-based platform that provides tools and features for managing and optimizing digital customer experiences. The platform is a headless SaaS solution initially designed to resolve certain pain points the XP platform previously suffered from, including:

  • Platform upgrades: one of those time-consuming and uncertain activities that eat up a significant budget but on their own bring very few new features. I have written a decent series of blog posts exclusively devoted to Sitecore platform upgrades and the accompanying complexities. With the SaaS model, upgrades are done for you automatically by Sitecore.
  • Infrastructure management and maintenance, security maintenance, taking care of scalability: all that was previously done by the end clients or partners now is fully managed by Sitecore itself.
  • A bottleneck of Sitecore developers: teams building and maintaining Sitecore-based applications often hit a staffing bottleneck caused by the scarcity of developers with specific knowledge of Sitecore internals, the complexity of the platform, limited documentation, dependence on the Sitecore support team, and limited resources. Since most of that is now provided by the vendor, development shifts mainly to the "head" and requires generic front-end skills that are much easier to come by.
  • Platform architectural bottleneck: in addition to all the above, scaling up your CD servers in a non-headless environment was previously quite a bottleneck and came at a cost, compared to XM Cloud which does not have CDs at all and only serves content via highly available Edge API endpoints.

That results in a drastically improved development process, velocity, and reliability, not to mention saved budgets. Clients can mostly focus on building the sites for visitors, while the platform itself is taken care of by the vendor.

Licensing

Before talking about licensing, it is important to understand how this SaaS offering is structured.

XM Cloud hosting platform includes the concepts of organizations, projects, and environments. An organization is the equivalent of an XM Cloud subscription. To initially access the XM Cloud portal, you must be a member of an “Organization” admin account at Sitecore. Once you have access to this account, you can manage your users, give them the required access or even set them as admins, create projects, set up environments for those projects, and promote your code through these environments. In old XP terminology, each environment is a Sitecore instance.

There are three environments per project: one production and two non-prod – typically for development, staging, testing, or other purposes. With the XM Cloud portal, you can easily manage and access all of your Sitecore instances in the cloud.

Coming back to the licensing model, it is similar to a subscription license for the Sitecore platforms, but its cost model was simplified significantly to be based on traffic and consumption.

Each XM Cloud license includes a certain number of projects and environments, and you can use these to manage your different Sitecore instances. Depending on your specific requirements, you may be able to use different projects to handle different instances. You can use environments to set up and configure your development, staging, testing, and production environments, as well as any other environments you may need. If you have any questions about how to best use projects and environments, or if you have any other licensing questions, it is recommended that you speak with your Account Manager, who will be able to provide more information and guidance to meet your specific needs.

Interestingly, in order to run local Docker containers with XM Cloud, you need a valid Sitecore license. I cannot say whether your existing partner's license file will work for local container development; being a Sitecore MVP, I was given a universal Sitecore license file which worked well. For building and deploying source code with the built-in Deploy App you don't need to provide a license file – it is assumed from your organization (subscription).

xm cloud license

Architecture

In very simplified terms, the architecture could be explained by the below diagram provided by Sitecore:

XM Cloud architecture

Not all the internals of the architecture are shown above (e.g. ACR, Kubernetes, etc. are omitted), but should you really care about anything within the dashed area? All of that is Sitecore-managed, and developers typically focus on the development of the front-end website (also known as a "head"), which is most often built with Next.js. Of course, other frameworks would also work with XM Cloud; however, there's lots of plumbing to be done that Next.js provides out of the box.

One of the most important features of XM Cloud is the Webhook Framework. It is built into XM Cloud in the same way as it is for the 10.3 platform. In a composable world of decoupled SaaS products, webhooks are used to notify external services about changes in XM Cloud – for example, to validate and even cancel workflow state transitions.

For example, in the good old XP platform, we used events to notify that publishing to CD had been completed. One of the possible scenarios was using the Core database to pass the remote events, as that was an architectural feature of a monolithic platform. In a composable world that cannot be the case: systems cannot share resources that way and can only communicate through APIs. You don't have CDs – you publish to Edge, which reasonably also has its own webhooks. You could also utilize webhooks on the git repository or on the Vercel side, for example.
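To make this more tangible, below is a minimal sketch of a Next.js API route on the rendering host that could receive such a webhook call. The route path, the payload shape and the shared-secret header are my own assumptions, not a documented XM Cloud contract:

// pages/api/webhooks/publish-completed.ts – hypothetical webhook receiver
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== 'POST') {
    res.status(405).end();
    return;
  }

  // Assumed shared secret agreed between sender and receiver; not a documented header name
  if (req.headers['x-webhook-secret'] !== process.env.WEBHOOK_SECRET) {
    res.status(401).end();
    return;
  }

  // The payload shape depends on how the webhook handler is defined in XM Cloud
  console.log('Publish notification received', req.body);

  // React to the notification, e.g. trigger revalidation of cached pages
  // await res.revalidate('/');

  res.status(200).json({ received: true });
}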

With some obvious architectural limitations, it is possible to customize XM Cloud in a similar way as we did with XP by applying patches, but the expectation is that developers will customize less and less as time passes and the platform grows. From the functionality point of view, these customizations would focus on data and synchronization rather than patching system features.

Speaking about the drawbacks of XM Cloud, I could name single-region geolocation, which you must specify initially.

Cloud Portal

It is a visual dashboard of your Organization. Here you can get all your tools in the same place, based on what you have in your subscription. In addition – there are shortcuts to Documentation and you can access Support from the Portal as well.

cloud portal

You can create projects and environments right from this portal. Choose between a starter template or setting up your own GitHub repository; if the latter is chosen, then once you grant the XM Cloud Deploy App access to your account and choose the desired repository, it will perform the deployment in the background.

You choose a specific git branch for the desired environment (for example, the main branch deploys to the production environment, while the develop branch deploys to testing) and can also enable auto-deploy upon each commit to the chosen branch. There is nothing else to set up or configure on top of the above, such as CI/CD pipelines – the Deploy App already knows what to do.

Is GitHub the only way to provide source code for build and deployment? The answer is both yes and no: from the GUI it indeed currently only supports GitHub, with later plans to add support for other popular version control hosting providers, such as Azure DevOps, GitLab, Bitbucket, etc. But from the CLI you get more configuration options.

And not just that – almost everything you can do on the portal with the GUI you can do remotely with the CLI. However, many of the Cloud Portal tools are not available for local development: Pages, Components, Explorer, the Deploy App, and the Dashboard itself all run exclusively in the cloud and cannot be run locally.

So, how would I develop on my local rendering host and then test it with Pages or Experience Editor running in the cloud? Currently, there's a workaround: configure tunneling for the local Rendering Host with a reverse proxy like ngrok or localtunnel, so that your local rendering host server becomes reachable from outside.

If using default GitHub is not an option or you want to customize the automation and/or set your own CI/CD pipeline, there’s another feature of Cloud Portal named Authentication Clients available – an access token generator for XM Cloud.

This is how the documentation describes it:

“When you generate an authentication client, the client creates credentials that include a client ID and a client secret. You can use the credentials to request a JSON Web Token for your CM instance or to request a JWT for Experience Edge XM.”

So, that is an effective tool for issuing tokens that authorize custom tools for automation and/or for setting up your own CI/CD pipeline.
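For illustration, a client-credentials token request could look roughly like the sketch below. The token endpoint URL and the audience value are assumptions on my part – take the exact values from the Authentication Client details shown in the Cloud Portal:

// Hypothetical client-credentials flow for an XM Cloud Authentication Client.
// The endpoint and audience are assumptions – use the real values from the Cloud Portal.
async function requestXmCloudToken(): Promise<string> {
  const response = await fetch('https://auth.sitecorecloud.io/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      client_id: process.env.XMCLOUD_CLIENT_ID,
      client_secret: process.env.XMCLOUD_CLIENT_SECRET,
      audience: 'https://api.sitecorecloud.io',
      grant_type: 'client_credentials',
    }),
  });

  if (!response.ok) {
    throw new Error(`Token request failed: ${response.status}`);
  }

  const { access_token } = await response.json();
  return access_token; // pass it as a Bearer token to the API you are automating
}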

The third tab named Status provides a basic but helpful overview of a deployment process.

I would recommend reading through the official Cloud Portal documentation.

Identity

With the XP platform we used to have Identity Server; however, it is no longer useful in a genuinely composable world. Sitecore had to rethink the authentication approach and implemented a new unified identity system, so that it offers SSO across all applications of the composable DXP family.

Integrations

XM Cloud integrates with Content Hub DAM and CMP out of the box; the CMP/DAM connector is now included in the base image.

Sitecore Experience Edge

Experience Edge for XM comes out of the box with XM Cloud and is the default destination for publishing content. It is a content delivery service that provides scalable, globally replicated access to Sitecore Experience Platform items, layout, and media through an API. It uses CDN networks to distribute published content across the globe, ensuring a fast experience at scale.

The Edge for XM connector is required to publish the content from your pages and components in Sitecore XM Cloud to the highly scalable Sitecore Experience Edge delivery platform. It is also included.

The GraphQL schema used by Experience Edge in XM Cloud is the same as the one used for the 10.3 platform (with a minor difference in temporal query complexity limits). However, the XM Cloud schema is different from those used in Content Hub and Content Hub One, as they implement different underlying data structures.

Speaking about security: previously on XP, when we dealt with "protected" pages, security data went along with the published content to the CD servers, where the Sitecore platform enforced it correctly. With XM Cloud we don't have a CD server any longer – content gets published to Edge regardless of those permissions. Of course, it is not immediately available to the outside world: without a valid API key it is not possible to access it. But instead of a fully compliant Sitecore CD server we now have a "head", which is a totally detached piece of technology. Given an API key, the head may or may not respect the security rules – Edge will give the content out anyway. So, you should take extra care and test these things while developing the head application on your rendering host.

Familiar XP tools that are gone from XM Cloud

Since we do not have xDB and xConnect within XM Cloud architecture, many features from XP did not find their way to the new composable world. We have to say goodbye to:

  • XDB
  • EXM
  • Identity Server
  • FXM
  • Path Analyzer

In addition, because of the exclusively headless architecture of XM Cloud – no CD servers and operating through Edge – these features also went away from this SaaS platform:

  • MVC renderings, including classical SXA
  • Publishing Service
  • Sitecore Forms
  • Custom Search Indexes
  • Universal Tracker

Sitecore Forms is gone simply because there is nowhere to submit the data back to. It is expected that Sitecore will come up with some sort of SaaS forms component to be used with the composable family of products. Meanwhile, you can consider using something like Jotform, Marketo, HubSpot Forms, etc.

As for implementing search functionality: Solr is still a part of this platform to support XM's internal needs (the same as it does for XP), but without CD servers there is no longer a reason to have custom search indexes. Implementing a search feature for your website now requires a composable search component. I will talk through it in more detail below.

Sitecore platform features that still remain

  • Media Library exists and functions exactly the same as with XP
  • The above could be said about Workbox and workflows
  • Content Editor and Experience Editor remain untouched
  • We still have the luxury of poking the system internals with Sitecore PowerShell Extensions; this component is built in since it is a requirement for the likewise built-in headless SXA
  • Many other less important features stay

As said above, custom search indexes are no longer available and one has to select from a choice of composable options: Sitecore Search, Coveo, or Algolia are the first that come to my mind. All these products are platform-agnostic and will integrate seamlessly with any site regardless of the underlying CMS or web engine.

There might be several approaches for implementing external search with XM Cloud:

  • Using search-based queries from Experience Edge directly. The development team can write searches against the Experience Edge endpoints and call them from their application (see the sketch after this list).
  • Using a crawling search engine. Sitecore Search has been released and has already been proven on several implementations, including the sitecore.com website. Currently it indexes rendered HTML markup; however, Edge support should come with future releases.
  • There’s still an option of setting up and configuring your own external Solr solution to send content to it. Also, SearchStax Studio for XM Cloud could be another alternative option. 
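To illustrate the first option, here is a rough sketch of querying Edge's search field from a head application. The template ID is a placeholder, and the query follows the Experience Edge schema as I understand it, so verify it against your own endpoint:

// Hypothetical search against Experience Edge from the head application
const searchQuery = `
  query SearchArticles($template: String!, $first: Int!, $after: String) {
    search(
      where: { name: "_templates", value: $template, operator: CONTAINS }
      first: $first
      after: $after
    ) {
      total
      pageInfo { endCursor hasNext }
      results { id name url { path } }
    }
  }
`;

export async function searchArticles(after = '') {
  const response = await fetch(process.env.GRAPH_QL_ENDPOINT as string, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      sc_apikey: process.env.SITECORE_API_KEY as string, // Edge token, keep it server-side
    },
    body: JSON.stringify({
      query: searchQuery,
      variables: { template: '<article-template-id>', first: 10, after },
    }),
  });

  const { data } = await response.json();
  return data.search;
}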

Headless SXA

The headless part of SXA is provided with Sitecore XM Cloud. It allows developers to build and deploy headless websites on XM Cloud, taking advantage of all the known existing SXA features, such as Page Designs, Partial Designs, and especially Rendering Variants.

In addition, new concepts were introduced: Page Branches, and standard values for layout on a per-website basis.

What I found really great was the out-of-box implementation of SEO-related things, such as:

  • sitemaps
  • robots.txt
  • redirect items
  • redirect maps
  • errors (404 and 500)

This version of SXA supports Next.js as a first-class citizen and therefore has a revised list of components, leaving only the most basic ones (here they all are). With years of SXA practice, cloning components and adding rendering variants to existing ones became the default way of doing things. Likewise, Rendering Variants remain with us in the headless world; however, Scriban is no longer there, being replaced with Next.js. Here is an example of how Rendering Variants are defined within a single promo.tsx file with the Next.js implementation – there are two of them, named Default and WithText. It's really simple and similar to Scriban.

When you set up an empty site you get the default SXA Headless components (on the components tab). Drag and drop works, including dragging components around the screen into position. The right-hand-side section features component properties, including styling.

component tab

There is a content tab that shows item properties, similar to what you saw before in the Content Editor.

CLI and Serialization

Similar to the XP platform, the Sitecore CLI operates through the built-in Sitecore Management Services module.

Almost everything that can be done through the Cloud Portal can also be done through the Sitecore CLI. The latest version of the CLI allows the creation of projects, environments, and deployments, and much more.

Sitecore CLI allows you to take existing serialized content and move it directly into XM Cloud. Since serialization is done in the YAML format, it stays the same as with XP and is also compatible with Unicorn; therefore, correctly formatted content is easily portable to XM Cloud. I want to emphasize that this is only about serialized content – Unicorn itself is an XP module and cannot be installed on XM Cloud.


New Tools and Interfaces

The good old Content Editor and Experience Editor still remain with us, and that's good news for those who habitually stick to the interfaces they are used to. Nevertheless, this blog post will familiarize you with the totally new apps and interfaces coming with XM Cloud.

Sites

This application opens by default. It shows all of the websites available in the system, with a predictive-typing search bar to filter them. Clicking any of the sites opens it in the Pages app, as expected.

What is more interesting is the ability to create websites directly from this interface by clicking the “Create website” button.

create website

Pages

If you played around with Horizon in the latest versions of XP, such as 10.2, you will immediately recognize this user interface. Pages is built on the foundation of Horizon:

Pages application

The Pages application has 5 views which you select from the left navigation stripe:

  • Pages
  • Components
  • Content
  • Personalize
  • Analyze

I personally find it a little confusing that some views have the same names as apps. For example, by clicking the Components view of the above Pages app from the left navigation bar, you will see a component selector from which you can drag and drop components onto the editable page. However, this is totally different from the Components application, which is a turbocharged tool for creating components to be used on a page – the ones you later pick with the mentioned component selector of the Pages application.

I want to highlight that Pages was built with attention to minor details. It has very pleasant animation effects for many transitions, leaving you with a premium-experience feeling. There is no "Save" button anywhere, as the Pages application autosaves your changes.

It also has support for multiple websites, selected from a dropdown. Preview, device experiences, the navigation tree – all these features are located in the same interface, as you would expect for a productive editing experience.

Components

The tool is still in its early stages, but it has a lot of potential. Its purpose is to enable content creators and marketers to create components without the need for a developer to start from scratch.

There are lots of styling options for specifying custom types and custom fonts – it looks advanced, to say the least.

Here we create the component and publish it, so that the component becomes available in the component library of the Pages app. The intention is for us to be able to put components together from basic page elements, images, links, and so forth. Starting with a grid, you simply start adding columns, set alignment, and apply other UI-related things.

But the most impressive feature of the Components app is this: we not only create and customize individual components similar to the way we do it in SXA, but we now create entire components completely from scratch, style them, and hydrate them with data from an external data source. By saying external I mean really external: one of many 3rd-party content providers and headless CMSs, like Content Hub One, Contentful, Kontent.ai, etc. – and all that with zero code!

All you need to do is to provide an API endpoint and use a mouse to select what fields from the source you want to map. For those who are behind a tough firewall or strict corporate policies, there is an option to directly paste in a sample JSON snippet to map against it. They don’t ever have to hit the live API when creating a datasource.

What is even more mind-blowing is that you can feed the same single component from various external systems at the same time. Imagine a Promo Block component that you can easily drag and drop onto a page and that consists of:

  • Heading title that sits natively at Sitecore XM Cloud
  • Image taken from Content Hub DAM
  • Localized promo content taken from Contentful CMS
  • Call to action button that is wired up to CDP

In my opinion, that is a superpower of XM Cloud editing capabilities! I very much recommend watching this short video demonstrating the whole power of the new Components app.

Components

How is the data updated and invalidated with Components? It seems that the data is pulled in at the moment the connection is established and does not get refreshed at the moment of publication.

Personalize

A cut-down version of Sitecore Personalize comes built into the Pages interface of XM Cloud to allow for basic view-event personalization. There isn't a one-to-one match between the old XP rule sets and the new ones, so a deeper analysis and re-evaluation of your personalization strategy is needed.

With XM Cloud there's a new way of doing personalization out of the box: we have a limited set of rules available to us, and those are primarily rules based on the current session, such as the day of the week, a visitor's country, or the operating system of the visitor's current device. However, there's no way to manage the built-in Personalize tenant directly from the XM Cloud interface.

Checking carefully, you can still find some rules based on historical data: an example of a history-based rule is the number of pages a visitor has viewed within the past (at most) 30 days. Of course, once you enable full Sitecore Personalize, the set of tools expands to a much wider range of options.

Unlike XP, where we would personalize each component individually, in XM Cloud we create page variants to achieve that, and we define the audience that will be exposed to each page variant.

When configuring the audience, you see the default set of rules – I counted 14:

  • Point of sale: The visit is to point(s) of sale
  • Region: The visitor is in region(s) during the current visit
  • Country: The visitor is in country(s) during the current visit
  • Visit day of the month: The visit is on a day of the month that compares to number based on your organization’s time zone
  • Day of the week: The visit is on a day(s) of the week, based on your organization’s time zone
  • Month of visit: The visit is in month(s), based on your organization’s time zone
  • First referrer: The visitor comes from a URL that compares to referrer in the current visit
  • UTM value: The visit includes a UTM type that compares to UTM value
  • Operating system: The visitor is using operating system(s) during the current visit
  • Device: The visitor is using a device type(s) device during the current visit
  • Number of page views: The visitor has visited page within the past x days and the total number of page views compares to a number of views
  • First page: The visit has started on a page that compares to page during the current visit
  • Page view: The visitor has visited page name(s) during the current visit
  • New or returning visitor: The visitor is a specific type to your site

After specifying the audience, you then proceed to customize components for that page variant.

That is implemented in a very similar manner to how it was back in XP: specify the data source item with the desired content, choose a different component to replace the default one, or simply choose to hide the component as another option.

Personalize

The bigger question is how personalization works with Edge, without the CD servers previously responsible for executing personalization. It is important to understand that the personalization logic now happens at the Vercel/Next.js middleware level.

A middleware package intercepts the browser request at Vercel in order to do audience matching with Personalize, and then serves the relevant content from Edge, for example by substituting one piece of personalized Edge content for another. This approach does not imply any performance penalties at all, since all the parts of this chain are super-fast at the CDN/Jamstack level at Vercel, and also because the content is already cached at Experience Edge. I would recommend spending some time reading through more details about the built-in personalization in XM Cloud at this link.
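Conceptually, the middleware behaves along the lines of the sketch below. This is not Sitecore's actual middleware package – just a simplified illustration of a rewrite driven by an audience decision, and the variant path convention is my own assumption:

// middleware.ts – simplified illustration of edge-level personalization
import { NextRequest, NextResponse } from 'next/server';

export async function middleware(req: NextRequest) {
  // Ask the personalization logic which variant this visitor should see
  const variantId = await resolveAudience(req);

  if (variantId && variantId !== 'default') {
    // Rewrite to a personalized version of the page; the content itself
    // still comes from Experience Edge and stays cacheable per variant.
    const url = req.nextUrl.clone();
    url.pathname = `/_variant/${variantId}${url.pathname}`;
    return NextResponse.rewrite(url);
  }

  return NextResponse.next();
}

// Hypothetical helper: in reality this would call Sitecore Personalize
// with the visitor context (geo, UTM, device) and return a matching variant.
async function resolveAudience(req: NextRequest): Promise<string> {
  return req.geo?.country === 'CA' ? 'canada-visitors' : 'default';
}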

Analyze

Another excellent embedded tool is page-level analytics powered by Boxever: pages built with Pages have an embedded tracking tag, so analytics become available automatically. Just out of the box it will empower you to see:

  • the browsers and operating systems
  • the sources visitors came from
  • the pages: first page, entry page, top pages
  • top countries visited
  • visits by source
  • visits by time of day

Analytics presents data with a heat map for the time of day, with the most traffic shown by darker rectangles.

analyze

Similar to the previous example of Personalize, this analytics system is now built on top of Sitecore CDP. Previously all the analytics were taken from Sitecore xDB, and without having that in a SaaS cloud offering, the only option for providing built-in analytics and personalization was to rely on CDP and Personalize in some way.

Explorer

In addition to Pages, you can use Explorer for editing content. I think the main idea of this interface is to give editors the ability to switch rapidly between visual and field-based editing interfaces.

Explorer presents the content with some similarity to Horizon on XP: a kind of Content Editor with navigation and the ability to publish and edit items at the field level. You can do many of the expected content operations here, like modifying or copying items and uploading or downloading media assets, etc.

Interestingly, Sitecore decided to walk away from a tree-based style of presenting the content structure towards a drill-in way of navigation. It definitely makes things clearer, and when we need to take a look at the entire structure of the content tree, the good old Content Editor is still there to help.

explorer

Site Identifier

This is an interface for adding a tracker to your websites if it is missing, so that they become trackable with the Analyze section of the Pages app.

site identifier

That was an overview of the most important new applications and interfaces coming with XM Cloud. The next part of this post will take you through the development experience with XM Cloud.


Development with XM Cloud

In the final post of the XM Cloud series, we'll talk about the development process with XM Cloud and its nuances. I assume you're either familiar with the traditional way of Sitecore development or at least have some basic understanding of the process.

In order to identify the differences in development, let's quickly recap the architectural features of XM Cloud that affect it:

  • there are no more Content Delivery (CD) servers available
  • therefore content gets published to Sitecore Edge
  • that, in turn, enforces headless development only

Sitecore Next.js SDK

That means we now have a "head" web application running on a Rendering Host. It would most likely be built with the Sitecore Next.js SDK – using Next.js on top of React with TypeScript, running on a Node-powered server. Next.js is a framework created by Vercel that is built on top of the React library, designed to help developers build production-ready applications with little need for configuration and boilerplate code.

Of course, as a natively headless platform, Sitecore allows using any other framework or technology, like Vue, Angular, or .NET renderings. However, if React is your choice, then the Next.js SDK will offer you lots of features available out of the box, including:

  • Powerful routing mechanism
  • Layout Service fetching and mapping
  • Placeholder Resolver
  • Multi-language support
  • Field Helper components
  • Components Factory
  • Experience Editor support
  • Sitecore Analytics support

I would recommend going deeply through the relevant documentation, as it covers all of this in detail.

A headless architecture consists of a back end with a layer of services and APIs and a front-end/client/user-facing application. The front-end application, or presentation layer, retrieves data from the CMS using API endpoints and uses that data to populate or hydrate the markup it generates.

Without CD servers, there is no longer a place to execute HTTP request pipeline extensions. Experience Edge provides just raw published data in a headless way, with no place to apply any logic. But what if you need to modify a request, for example?

In that case, all that logic gets moved to a “head” application, since changes will need to be made in the hosting environment for the client application.

If the Next.js application is hosted on Vercel, a Vercel Serverless Function can be set up to process incoming HTTP requests and generate a response. For Cloudflare Pages, one can choose a Cloudflare Worker for the same purpose, and so on.
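As a hedged example, a redirect that used to live in an httpRequestBegin processor could become a small serverless handler in the head application. The route, the short-link codes, and the targets below are illustrative only:

// pages/api/go/[code].ts – illustrative replacement for CD-side request logic
import type { NextApiRequest, NextApiResponse } from 'next';

// Hypothetical lookup table; in a real solution this could come from
// Edge content, a redirect map item, or an external service.
const redirects: Record<string, string> = {
  promo: '/campaigns/spring-sale',
  docs: 'https://doc.sitecore.com',
};

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  const code = String(req.query.code ?? '');
  const target = redirects[code];

  if (!target) {
    res.status(404).json({ error: 'Unknown short link' });
    return;
  }

  res.redirect(301, target);
}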

Similarly, any other integrations or personalization previously taken place on a CD server should be refactored to work with the “head” application at the Rendering Host. It is recommended to use out-of-process API-based integration as much as possible to maintain loose coupling.

Development Modes

There are two approaches for developers to interact with XM Cloud and implement customer requirements: Edge Mode and Fully Local.

With Edge Mode, you develop a classical JavaScript-based rendering app on a local Node server. This app connects to the GraphQL endpoint of the Experience Edge that XM Cloud publishes to. This option does not require a license, as Edge relates to your cloud subscription, which is aware of your license. Therefore you can scale and outsource the development as much as you desire. You can also develop on Mac or Linux, as long as the OS can run a Node server.

Fully Local development requires containers running on a Windows-based environment and you need to have a valid license file to run it. If running on a client version of Windows (say, 10 or 11) make sure to reference 1809-based images in your .env file.

SOLUTION_BASE_IMAGE=mcr.microsoft.com/windows/nanoserver:1809
NETCORE_BUILD_IMAGE=mcr.microsoft.com/dotnet/sdk:6.0-nanoserver-1809
NETCORE_RELEASE_IMAGE=mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-1809

Once you run it, you may see a list of containers – it is pretty similar to what you can find with the classical XM platform:

xm cloud containers

There is a Solr container used for internal purposes, a container with a SQL server, and init containers for both of them. The main container is called CM, but please be aware (mainly for patching purposes) that the role name is not CM any longer but is XMCloud instead – here’s its definition:

<add key="role:define" value="XMCloud" />

Another important container here is Traefik – the one that accepts and distributes external traffic. One of the most frequent errors when setting up local containers is that developers often already have a running web server (for example, IIS) that occupies port 80, preventing Traefik running in a container from exposing its own port 80, so it errors out.

The rest of the containers provide rendering hosts with "head" applications.

The principal difference of local container development is that instead of using Edge from XM Cloud, your rendering host connects to the GraphQL endpoint of the CM instance, which features the same API as the cloud-based Experience Edge (if you're using Headless SXA, the Edge endpoint is configured in the site settings item).

What can a developer control?

You can apply config patches for configuring the CM instance, the same as you did before with XP (here's an example). There's also a guide on deploying customizations to the XM Cloud environment.

That means it is possible to turn things on and off, like overriding pipeline processors. Of course, there is no longer any request manipulation with XM Cloud, as request processing shifts to the rendering host, while the httpRequestBegin and httpRequestEnd pipelines relate only to the CM instance itself. Nevertheless, there are lots of other familiar pipelines to deal with.

For your custom code, the Sitecore NuGet feed provides the relevant packages. All the packages follow a naming pattern starting with Sitecore.XMCloud.*, so one cannot mix them up with the packages for the XP/XM platforms. For example, the top-level package is called Sitecore.XMCloud.Platform. Versioning of these packages follows the SemVer version of XM Cloud.

Sitecore expects that developers will customize it less and less with time, with most of the customization happening around data synchronization rather than altering the system itself.

One piece of good news is that you still have the PowerShell Extensions, but they are disabled by default. Enabling them requires setting the SITECORE_SPE_ELEVATION environment variable to either Confirm or Allow, with a new deployment for it to take effect. Once complete, you get full power, including direct database access. I recorded a short video showing how to enable SPE for your XM Cloud environment; if you also need SPE remoting, read this post for additional instructions.

Sitecore Connect for Content Hub – the CMP/DAM connector provided for XM Cloud – is also included in the base container image. There is a promise of adding connectors to other popular systems and DXPs with time.

Unfortunately, installing familiar modules through the easy package installation we are used to is no longer available.

Build configuration

There is a new important configuration file with XM Cloud – xmcloud.build.json. It configures build targets, editing hosts, XDT transforms (if any), serialization modules to deploy to the XM instance for that environment, etc. Its structure looks like this:

{
  "deployItems": {
    "modules": []
  },
  "buildTargets": [],
  "renderingHosts": {
    "<key>": <value>
  },
  "transforms": [],
  "postActions": {
    "actions": {
      "<key>": <value>
    }
  }
}

For example, the default build pipeline of XM Cloud is configured to look at a single *.sln file at the root of your repository to process, but with xmcloud.build.json a developer can override this behavior and specify what exactly to build.
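For instance, a hypothetical override that builds a specific solution file and deploys a single serialization module might look roughly like the snippet below – treat the property values as an illustration rather than a verified reference, and check the official documentation for the full schema:

{
  "deployItems": {
    "modules": ["MyProject.Content"]
  },
  "buildTargets": ["./src/MyProject.sln"],
  "renderingHosts": {},
  "transforms": [],
  "postActions": {
    "actions": {}
  }
}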

Read more about the build configuration with XM Cloud.

Sitecore CLI

With Sitecore XP most of you likely used the CLI for serialization purposes, with some rare developers experimenting with the itemres plugin for creating items as resources from serialization modules. With XM Cloud, the Sitecore CLI becomes much more useful – lots of management can (and should) be done using it, like creating projects and environments, performing initial deployments, etc.

In order to install it, you must have .NET 6 already installed on your machine. The tool consists of two parts – the Sitecore Management Services running on CM and the command line tool itself. Management Services comes preinstalled with XM Cloud (in both the development container and the cloud instance), so it only remains to install the CLI from the project folder:

dotnet nuget add source -n Sitecore https://sitecore.myget.org/F/sc-packages/api/v3/index.json
dotnet tool install Sitecore.CLI

Here’s an example of using CLI to list projects running against my XM Cloud organization (subscription).

CLI shows list of projects

In XM Cloud terminology, a project is a set of environments. Each environment is in fact its own XM Cloud CM instance. Therefore, we can list a set of its environments and take actions against each individual environment – let's say, publish its content to Experience Edge:

dotnet sitecore publish --pt Edge -n <environment-name>

Old good serialization also works well with CLI in order to synchronize items between remote and local XM Cloud instances:

dotnet sitecore serialization pull -n development
dotnet sitecore serialization push -n development

Similarly to the 10.x platforms, you may benefit from Sitecore for Visual Studio if you prefer using a GUI for Sitecore Content Serialization. It is not free to use and requires purchasing a TDS license (notably, TDS itself does not support XM Cloud).

One of its greatest features, in my opinion, is the Sitecore Module Explorer, which lets you navigate, create, and modify Sitecore Content Serialization configuration files as items in a tree structure.

Head Application

Configuring front-end applications mainly comes down to these three settings (a usage sketch follows the list):

  • JSS_APP_NAME – the name of a site
  • GRAPH_QL_ENDPOINT should be pointing to Edge GraphQL
  • SITECORE_API_KEY for Edge Token, typically found under /sitecore/system/Settings/Services/API Keys
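These three values are what the head application uses whenever it talks to Edge. With the JSS SDK, the wiring looks roughly like the sketch below – verify the exact import path and options against the SDK version in your starter kit:

// Hypothetical helper tying the three environment settings together
import { GraphQLRequestClient } from '@sitecore-jss/sitecore-jss-nextjs';

export function createEdgeClient(): GraphQLRequestClient {
  const endpoint = process.env.GRAPH_QL_ENDPOINT;
  const apiKey = process.env.SITECORE_API_KEY;

  if (!endpoint || !apiKey) {
    throw new Error('GRAPH_QL_ENDPOINT and SITECORE_API_KEY must be configured');
  }

  return new GraphQLRequestClient(endpoint, { apiKey });
}

// JSS_APP_NAME is typically passed as the site name in layout and dictionary queries
export const siteName = process.env.JSS_APP_NAME || 'my-site';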

I would recommend watching this video for the front-end developer setup for XM Cloud.

A really impressive productivity feature is running npm run start:connected, which sets up a watch mode against the source code so that changes get immediately reflected in the browser.

You can also configure an external editing host for XM Cloud instances, so that you can also benefit from the editing experience in Pages.

Edge and GraphQL

There is a common misunderstanding about content published to Experience Edge becoming unprotected. It comes from the official documentation, which says: "Experience Edge for XM does not enforce security constraints on Sitecore content. You must apply publishing restrictions to avoid publishing content that you do not want to be publicly accessible." In fact, publishing to Edge is not equal to making content publicly exposed – no one can access it without providing a valid Edge token, which you typically configure as the SITECORE_API_KEY parameter for your environment. The API key should never be shared publicly: each time you need to pull data out of Edge, you should delegate that task to your head application in order to keep the API key hidden, rather than making such calls directly from JavaScript code running in the client's browser and thus exposing your secrets.
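In practice that means any Edge call triggered from the browser should go through your own server-side route, roughly as in this sketch (the route name and query are illustrative):

// pages/api/content/[path].ts – proxy Edge queries so the browser never sees the token
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const { path } = req.query;

  const response = await fetch(process.env.GRAPH_QL_ENDPOINT as string, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The Edge token stays on the server as an environment variable
      sc_apikey: process.env.SITECORE_API_KEY as string,
    },
    body: JSON.stringify({
      query: `query ($path: String!) { item(path: $path, language: "en") { id name } }`,
      variables: { path: String(path) },
    }),
  });

  const { data } = await response.json();
  res.status(200).json(data?.item ?? null);
}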

Previously with XP/XM it was possible to extend the GraphQL preview endpoint with additional features like comparisons for integers and dates, coordinates, multi-field searches, faceting, etc. It is worth mentioning that now you cannot make any changes to the GraphQL endpoints via pipelines with XM Cloud, because it uses Edge. In addition to searching fields by value, you can still use the following operators: compare with EQ / NEQ and CONTAINS / NCONTAINS, order by field names in both directions, do pagination, and combine conditions with logical AND and OR.

You can read more about Edge Schema at this link. Also, it is worth mentioning that Edge has its own webhooks. This can be helpful for notifying a caller when publishing is complete.

Use the New-EdgeToken.ps1 script to create an Edge token for any desired environment. Upon completion, this script automatically opens the GraphQL IDE and returns an X-GQL-Token for you to use with the IDE straight away.

Note that you can use the GraphQL IDE only if you have Experience Edge installed. However, the Edge tenant and Edge connector are created automatically upon the creation of an environment with XM Cloud.

Another thing to keep in mind: despite being empowered to use the Media Library, there's a GraphQL limitation of 50 MB per resource. For larger files (most likely videos), consider using a DAM, given they typically allow wider management options.

XM Cloud Play! Summit Demo

This is an exceptional gift from the Sitecore Demo Team to us! It allows you to experiment with XM Cloud, its features, and its editing capabilities, both in local containers and by deploying it to your XM Cloud environment.

The demo solution features many of the latest tech working together for us to play with:

  • Sitecore XM Cloud
  • Content Hub DAM and CMP
  • Headless SXA
  • JavaScript Services (JSS)
  • Next.js
  • Vercel
  • Tailwind CSS

The demo is available in a GitHub repository, which means it is easily deployable with the Deploy App in a few clicks, as demonstrated below.

Headless SXA

XM Cloud comes with headless SXA built in (it is also included in the base XM Cloud container image), so the question comes up – should I go with it or without it?

Of course, it is possible to build a headless app on XM Cloud without using SXA, but it is not recommended. SXA provides many benefits and is included by default with Sitecore XM Cloud, so it is generally a better option for building a new site. If you are in a migration scenario, you may not have the option to use SXA, but other than that, it is recommended to use SXA and headless services for the best results.

There is an XM Cloud Starter Kit with Next.js for a faster journey into XM Cloud development.

Rendering Variants have always been one of the most powerful features of SXA; they took their own evolution from NVelocity templates to Scriban and have now become powered by Next.js. Here's an example of a Promo component having two rendering variants – Default and WithText. You can have as many variants other than Default as you want within a *.tsx file, following this syntax:

export const Default = (props: PromoProps): JSX.Element => {
  if (props.fields) {
    return (
    // ... markup merged with props 
    );
  }
  return <PromoDefaultComponent {...props} />;
};
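A second variant is then just another named export from the same file. Here is a hedged sketch of what WithText could look like – the field names (PromoIcon, PromoText) are illustrative, so align them with the fields on your Promo datasource template:

// Hypothetical WithText variant exported from the same promo.tsx file
import { Image, Text } from '@sitecore-jss/sitecore-jss-nextjs';

export const WithText = (props: PromoProps): JSX.Element => {
  if (props.fields) {
    return (
      <div className={`component promo ${props.params?.styles ?? ''}`}>
        <Image field={props.fields.PromoIcon} />
        <Text tag="p" field={props.fields.PromoText} />
      </div>
    );
  }
  return <PromoDefaultComponent {...props} />;
};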

SXA is perfect for multi-site implementations as it comes with cross-site capabilities for sharing page and partial designs, renderings, content, and cross-linking. In a previous post, I already mentioned a new feature – Page Branches – that allows setting standard values for layout on a per-website basis. The site-specific standard values feature is another example of managing defaults on a per-site basis. The development team is also working on implementing Site Templates – the idea is about blueprinting entire sites from pre-defined templates.

Speaking about the UI: by default, SXA comes with the Bootstrap 5 grid system, but that is configurable. Default renderings respect both grid and styles through parameter templates; you must take care of that when creating your own custom renderings unless, of course, you clone existing ones.

I remember a while ago, when I had only started working with SXA, there was a lack of proper solutions for renderings having a background image set – we had to invent our own approaches to that. Slightly later SXA got that feature, and today I was glad to find support for a decent number of stretch modes: parallax, stretch, vertical and horizontal tiles, fixed. And all that works headlessly!

Deploy App

Sitecore XM Cloud comes with an integrated Deploy App that performs exactly what is called for – deploying your solution using existing source code to XM Cloud from a friendly GUI as an alternative to using CLI. You can also create a project using a starter template.

choosing deployment method

In order to demonstrate its capabilities, let's choose "Start from your existing XM Cloud code", using the Play! Summit demo mentioned earlier. Currently, the only provider that works with the Deploy App is GitHub, so let's go ahead with it.

deployment with github

After providing access to your GitHub account, you choose the branch and the specific environment it deploys to, and whether that environment is production or not. There is also an option to trigger an automatic deployment each time someone pushes a commit to the specified repository branch.

deployment with github parameters

That's it! The deployment starts and gets things done automatically: provisioning tenants and environments, configuring Edge, pulling and building the codebase, deploying the artifacts, and eventually running post-build actions, if any.

deployment log

The whole process takes around 15 minutes and is impressive – the whole CI/CD pipeline comes out of the box, preconfigured. However, there’s one more thing left to do.

Deploying to Vercel

Since our head application is decoupled from Sitecore, we also need to deploy it and point it at Experience Edge. There are various options, but the best would be using Vercel hosting. Vercel is the company that invented and maintains Next.js, which is why their powerful hosting platform ideally fits solutions built with it.

Vercel is an all-in-one development platform that combines the best developer experience with an obsessive focus on end-user performance. As a developer-centric stack, Vercel is accessible to developers of all skill sets and removes a historical lock-in to .NET. Vercel provides a scalable solution to the largest of organizations with the newest best practices in content delivery while helping to reduce infrastructure/deployment overhead previously required to deploy Sitecore applications.

Deployment is pretty simple. Upon creating an account and a project, you link your GitHub account and use the same repository used for the XM Cloud deployment previously. Vercel is smart enough to autosuggest the folder with your Next.js application, highlighting it with an icon. After providing three familiar environment parameters (app name, Edge endpoint, and API key for the Edge token), deployment takes place quickly, and then you're all set!

Tip: do not modify the .env file directly; instead, create an "override" file called .env.local. In the starter kits that file is excluded from source control with .gitignore by default, so in addition to preventing a mess for other developers, you can easily maintain your specific environment settings in the same place.

XM Cloud also allows you to set up a Vercel hosting directly from its interface and add integration.

vercel setup hosting

If you do not have an account, you'll also be able to create one quickly and easily; currently, there's only support for personal account types. Also, make sure you choose All Projects access.

vercel link to vercel

You will still need to log in to Vercel and manually add the GitHub integration to the same repository and branch, add a few more parameters, and at least the three environment variables mentioned above. Once done, you can immediately trigger the deployment.

I hope this overview helps you get familiar with the XM Cloud development options and encourages you to try it sooner rather than later. Should you have any questions – please feel free to reach out to me!

Sitecore Edge considerations for sitemap

A quick one today. We recently came across interesting thoughts and concerns about using Sitecore Edge. As you might know (for example, from my previous post), there are no more CD servers when publishing to Sitecore Edge – think of it as just a GraphQL endpoint serving out JSON.

So, how do we implement a sitemap.xml in such a case? Brainstorming brought several approaches to consider:

Approach one

  • Create a custom sitemap NextJS route
  • Use GraphQL to query Edge using the search query. Here we would have to iterate through items in increments of 10 (see the sketch after this list)
  • Cache the result on Vercel side using SSG
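A rough sketch of this first approach with a custom Next.js sitemap route and incremental Edge queries might look like the following. The search criteria, the host name, and the caching choice (CDN cache headers rather than strict SSG) are my assumptions:

// pages/sitemap.xml.tsx – hypothetical sketch of approach one
import type { GetServerSideProps } from 'next';

// The page component never renders; the XML is written directly to the response
const Sitemap = (): null => null;

export const getServerSideProps: GetServerSideProps = async ({ res }) => {
  const paths: string[] = [];
  let after = '';
  let hasNext = true;

  // Iterate over Edge search results in increments of 10, as described above
  while (hasNext) {
    const response = await fetch(process.env.GRAPH_QL_ENDPOINT as string, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        sc_apikey: process.env.SITECORE_API_KEY as string,
      },
      body: JSON.stringify({
        query: `query ($after: String) {
          search(where: { name: "_hasLayout", value: "true", operator: EQ }, first: 10, after: $after) {
            pageInfo { endCursor hasNext }
            results { url { path } }
          }
        }`,
        variables: { after },
      }),
    });
    const { data } = await response.json();
    paths.push(...data.search.results.map((r: { url: { path: string } }) => r.url.path));
    after = data.search.pageInfo.endCursor;
    hasNext = data.search.pageInfo.hasNext;
  }

  const xml = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${paths.map((p) => `  <url><loc>https://www.example.com${p}</loc></url>`).join('\n')}
</urlset>`;

  // Let the CDN cache the generated sitemap instead of regenerating it per request
  res.setHeader('Cache-Control', 's-maxage=3600, stale-while-revalidate');
  res.setHeader('Content-Type', 'text/xml');
  res.write(xml);
  res.end();

  return { props: {} };
};

export default Sitemap;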

Approach two

  • Create a service on the CM side that will return all published items/URLs
  • This service will only be accessible by an Azure Function, which will generate a sitemap file and store it in a CDN
  • The front end would then access this file and render its content (or similar)

Approach three

  • Generate all the sitemaps (if there is more than a single sitemap) on CM, then store them all in text fields
  • Return them via Edge, using GraphQL, to the front-end head, which handles sitemap.xml

Then I realized that Headless SXA boasts SEO features OOB, including sitemap.xml. Let's take a look at what it does in order to generate sitemaps.

With SXA 10.3, the team has revised the Sitemap feature, providing much more flexibility to cover as many use cases as possible. Looking at the /Sitecore/Content/Tenant/Site/Settings/Sitemap item, you'll find lots of settings for fine-tuning your sitemaps depending on your particular needs. CM crawls the websites and generates the sitemaps. They then get published to Sitecore Edge as a blob, and that blob gets proxied by a Rendering Host via GraphQL. When search engines request the sitemap of a particular website, the Rendering Host gives them exactly what was asked for. That is actually similar to approach three above, with all the invalidation and updates of sitemaps also provided OOB.

This gives a good amount of options, depending on your particular scenario.

The easiest way of installing Solr cores for SXA search

A few years ago, I made a walkthrough on setting up search for an SXA-based website. However, it does not cover one important thing: out of the box, SXA uses two indexes of its own. Following the good practice of "one index – one core", it is assumed you create an additional core for each of those indexes.

It is also assumed that you follow the same principle used for the existing Sitecore Solr cores, naming them <INSTANCEPREFIX>_sxa_master_index and <INSTANCEPREFIX>_sxa_web_index. But how do you create those cores?

I would like to introduce the easiest way of achieving that, using Sifon and its existing plugin for creating SXA Solr cores. All you need to do is update the plugins from the public repository by executing the "Plugins" – "Get Sifon plugins" command.

Once updated, you will see the corresponding plugin in the menu:


And that's it! After a bit of waiting, Sifon will complete the installation of both new cores into the Solr instance referenced by the selected profile. You'll see the confirmation:


As it prompts, the cores have been created at the named location and the managed schema has also been published, but you still need to rebuild these new indexes. Sifon also has a plugin for that (the fastest way, but SPE remoting must be enabled for that instance):


Alternatively, you can use Control Panel as you normally do:


Confirmation:


Hope this helps you enjoy search with SXA, which has really been taken to the next level!

Implementing blogs index page with filters and paging: SXA walkthrough

Objective.

Initially I had a template called Blog, and several pages based on it, which are the actual blog posts. Now I have created a page called Blog Index, implementing a template, partial design, and page design of the same name. The obvious purpose of this page is to show the list of blog posts, with the ability to filter it by certain criteria, including a new custom one – series. Content authors want to group these blog posts into series, so that's grouping by a logical criterion. They also want to display the most recent posts higher, but give readers an option to select the order, as well as the page size.


Implementation plan

  1. Switch to SXA "Live mode" (optional)
  2. Create taxonomy categories
  3. Create Series interface template
  4. Use interface template and update blog posts
  5. Create search scope for page template
  6. Create computed field
  7. Publish, redeploy and re-build indexes
  8. Create facets
  9. Create filter datasources
  10. Make rendering variant for search filters
  11. Make rendering variant for search results
  12. Place component to partial design
  13. Configuring Search Results component
  14. Enjoy result!


IMPLEMENTATION

1. Before starting, switch web to master. This can be done at /sitecore/content/Tenant/Platform/Settings/Site Grouping/Platform in the Database field by setting it to master (don't forget to publish that particular item, however). Once done, the published site will use not only the master database but also the master indexes.


2. Firstly, let's create a Series taxonomy folder under Taxonomy (/sitecore/content/Tenant/Platform/Data/Taxonomy) and populate it with the actual series categories that will be used for filtering:


3. Now I can create an interface template to implement the series selection. This template will later be used not just with Blogs but also with a few other page types; that's why I make it an interface and put it into Shared – /sitecore/templates/Project/Tenant/Platform/Interfaces/Shared/_Series.

Make sure the Source column has the datasource set correctly, so that you will later be able to pick the right category under the site's Data/Taxonomy/Series folder, as in the example below:

Datasource=query:$site/*[@@name='Data']/Taxonomy/Series&IncludeTemplatesForDisplay=Taxonomy folder,Category&IncludeTemplatesForSelection=Category


4. Once done, add the _Series interface template to the actual page template (Blog in my case). Then one can go through the existing blog posts and assign them to series (best done with Sitecore PowerShell):

$rootItem = Get-Item master:/sitecore/content/Tenant/Platform;
$sourceTemplate = Get-Item "/sitecore/templates/Project/Tenant/Platform/Pages/Blog";
$selectedSeries = "{1072C536-0EC2-4EAB-8D98-DC9BF441F30A}";

Get-ChildItem $rootItem.FullPath -Recurse | Where-Object { $_.TemplateName -eq $sourceTemplate.Name } | ForEach-Object {
    $_.Editing.BeginEdit()
    $_.Fields["Series"].Value = $selectedSeries;
    $_.Editing.EndEdit()
}

Now selecting an item displays which series it belongs to:


5. Create a scope under /sitecore/content/Tenant/Platform/Settings/Scopes, call it Blogs, and set its Scope Query field to filter by the Blog template ID:


6. Create a computed field contentseries in your project to store the actual name of the Series in the index. That is in addition to another index field called series, which is automatically indexed from the template and stores the GUID of the series. This is how I implemented it in Platform.Website.ContentSearch.config:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <contentSearch>
      <indexConfigurations>
        <defaultSolrIndexConfiguration>
          <fieldMap>
            <fieldNames>
              <field fieldName="contentseries" returnType="stringCollection" patch:after="*[0]" />
            </fieldNames>
          </fieldMap>
          <documentOptions>
            <fields hint="raw:AddComputedIndexField">
              <field fieldId="{ID-of-Series-field-within-_Series_template}" fieldName="contentseries" returnType="stringCollection" patch:after="*[0]">
                Tenant.Site.Website.ComputedFields.CategoriesField,Tenant.Site.Website
              </field>
            </fields>
          </documentOptions>          
        </defaultSolrIndexConfiguration>
      </indexConfigurations>
    </contentSearch>
  </sitecore>
</configuration>


7. Publish this configuration into the web folder, then clean up the index data (you can run the PS script below) and rebuild the indexes.

stop-service solrServiceName
get-childitem -path c:\PathTo\Solr\server\solr\Platform*\Data -recurse | remove-item -force -recurse
start-service solrServiceName
iisreset /stop
iisreset /start


8. Once the computed field makes it into the index, the next step is to create the facets. Please note that the facet name should be different from the field name automatically indexed from the template, as the facet would otherwise overwrite it:

  • For Series – under /sitecore/content/Tenant/Platform/Settings/Facets. The most important field here is Field Name; the value would be contentseries, matching the name of the field we created at the previous step
  • Also create one for Published date that relies on the already existing published_date_tdt field, which is a custom date field present in all of my content page templates.


9. Create new datasource items:

  • a checklist filter called Series under the /sitecore/content/Tenant/Platform/Data/Search/Checklist Filter folder, with its first field set to point to the Series facet created at the previous step
  • an item for Publication Date under /sitecore/content/Tenant/Platform/Data/Search/Date Filter/Publication Date.


10. Implement a Search Filter rendering variant that will contain the actual filters. I create it under my custom component Search Content, make two columns, and add a component variant field into each of them. Assign Filter (Checklist) to the first and Filter (Date) to the second. Reference the datasource items from the previous step for each component correspondingly:


11. Implement a Search Result rendering variant that will define the presentation for each item shown/found:

Noticed the Series reference field? It switches the context to the item referenced by the Series field, so that I can get the value of the actual category under the Taxonomy folder.


12. In the partial design for Blog Index, drop the following renderings onto the canvas: Search Content, Sort Results, Page Size, and Search Results.


13. Finally, for the Search Results component, go to Edit component properties; under the SearchCriteria section, set the Search results signature to search-results and also select the Search scope to match Blogs.

The result:

Uploading scheduled auto-backups of editors' content for your Helix / SXA website into Azure Blob storage

INTRODUCTION

I assume you are following good practices and developing Helix-powered websites (or SXA, which follows Helix by definition). In that case, you separate actual user-generated content items from definition items, which are normally shipped along with each release and created by developers (if not, then please refer to an article I've previously written on that), so you end up with a Unicorn configuration that stores all of your author-edited content:

We are going to split the process into 3 major steps:

  1. Re-serialize Unicorn configuration for content items
  2. Archive serialized content
  3. Upload an archive into Azure Blob Storage

IMPLEMENTATION

1. Re-serialize the Unicorn configuration for content items. Luckily, Unicorn provides us with the MicroCHAP.dll library that helps automate the sync process as a part of your deployment process (note that the DLL and the corresponding PowerShell module Unicorn.psm1 should be referenced from your code). The good news is that any verb can be passed to it, not just sync, so one can use 'Reserialize'. That call will look like:

$ErrorActionPreference = 'Stop'
$ScriptPath = Split-Path $MyInvocation.MyCommand.Path

Import-Module $ScriptPath\Unicorn.psm1
Sync-Unicorn -Verb 'Reserialize' -Configurations @('Platform.Website.Content') -ControlPanelUrl 'https://platform.dev.local/unicorn.aspx' -SharedSecret '$ecReT!'

2. Archiving the serialized content would be the next step. If you click the Show Config button for the Platform.Website.Content configuration on the Unicorn configuration page, you will find all the relevant information about it, including the physical folder where items are serialized. We need this folder to be archived. The step comes as:

$resource = "Content_$(get-date -f yyyy.MM.dd)"
$archiveFile = "d:\$resource.7z"
$contentFolder = "C:\inetpub\wwwroot\Platform.dev.local\App_Data\serialization\Project\serialization\Content"

7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on $archiveFile $contentFolder

I am using 7zip as the archiver, as my content is slightly more than 4 GB and a traditional zip cannot handle that. As a hidden bonus, I get the best compression ratio that comes with 7zip. It would also be worth checking whether a file with that name already exists at the target and deleting it before archiving, especially if you run the script more often than daily.
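
A minimal sketch of such a guard, reusing the $archiveFile variable from above:

if (Test-Path $archiveFile) {
    # remove the stale archive so 7z does not append to the existing one
    Remove-Item $archiveFile -Force
}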

3. Uploading to Azure Blob Storage concludes the routine. To make this happen, you should have a subscription and, ideally, a connection string to your Storage account. Then you may use the following code to achieve the result:
$containerName = "qa-serialization"
$ctx = New-AzureStorageContext -ConnectionString "DefaultEndpointsProtocol=https;AccountName=lab1234;AccountKey=y9CFAKE6PQYYf/vVDSFAKEzxOgl/RFv03PwAgcj8K80mSfQFDojdnKfakeaLMva0S9DbrQTzNjDMdGCp7rseRw==;EndpointSuffix=core.windows.net"

Set-AzureStorageBlobContent -File $archiveFile -Container $containerName -Blob $resource -Force -Context $ctx

It is also assumed that you already have the blob container created; if not, you need to create it upfront:
New-AzureStorageContainer -Name $containerName -Context $ctx -Permission blob
Optionally, you may want to delete the temporary archive once it's uploaded to blob storage.


RUNNING

Here's the entire script that works for me:
# Step 1: re-serialize user-generated content
$ErrorActionPreference = 'Stop'
$ScriptPath = Split-Path $MyInvocation.MyCommand.Path

Import-Module $ScriptPath\Unicorn.psm1
Sync-Unicorn -ControlPanelUrl 'https://platform.dev.local/unicorn.aspx' -Configurations @('Platform.Website.Content') -Verb 'Reserialize' -SharedSecret '$ecReT!'
# Step 2: archive serialized user-generated content with 7zip using best compression
$resource = "Content_$(get-date -f yyyy.MM.dd)"
$archiveFile = "d:\$resource.7z"
$contentFolder = "C:\inetpub\wwwroot\Platform.dev.local\App_Data\serialization\Project\serialization\Content"
7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on $archiveFile $contentFolder

# Step 3: upload generated content into Azure Blob Storage
$containerName = "qa-serialization"
$ctx = New-AzureStorageContext -ConnectionString "DefaultEndpointsProtocol=https;AccountName=lab1234;AccountKey=y9CFAKE6PQYYf/vVDSFAKEzxOgl/RFv03PwAgcj8K80mSfQFDojdnKfakeaLMva0S9DbrQTzNjDMdGCp7rseRw==;EndpointSuffix=core.windows.net"
Set-AzureStorageBlobContent -File $archiveFile -Container $containerName -Blob $resource -Force -Context $ctx

# Step 4: clean-up after yourself
Remove-Item $archiveFile -Force
I run it on a daily basis via Windows Task Scheduler in order to get a daily snapshot of the editors' activity (a scheduling sketch follows at the end of this section). The script produces the following output:


As a result of running the script, I get an archive appearing in the Azure Blob Storage:



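As for scheduling the daily run itself, a minimal sketch using the built-in ScheduledTasks cmdlets could look like the following; the script path, task name and run time are assumptions, so adjust them to your setup:

# assumed location of the backup script above - adjust to where you keep it
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\Backup-UnicornContent.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "Unicorn content backup" -Action $action -Trigger $trigger
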
RESTORING

There's no sense in making backups unless you confirm that restoring the data from them works well. For content items, download an archive, extract it and substitute the content serialization folder with what you've extracted, then sync the content configuration. As simple as that!
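
A minimal sketch of that restore flow, assuming the same variables and folder layout as in the script above (the snapshot name below is hypothetical, pick the one you need):

# download the chosen snapshot from blob storage
$resource = "Content_2020.01.01"   # hypothetical snapshot name
Get-AzureStorageBlobContent -Container $containerName -Blob $resource -Destination "d:\$resource.7z" -Context $ctx
# extract it over the serialization folder, then sync the content configuration back into Sitecore
7z x "d:\$resource.7z" -o"C:\inetpub\wwwroot\Platform.dev.local\App_Data\serialization\Project\serialization" -y
Sync-Unicorn -Verb 'Sync' -Configurations @('Platform.Website.Content') -ControlPanelUrl 'https://platform.dev.local/unicorn.aspx' -SharedSecret '$ecReT!'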

Note that your content should be aligned with the definition items, or it may not work well!

Hope this post helps!

Yet another SXA rendering variant - Script Reference Tag coming to improve your SEO

Note! The code used in this post can be cloned from the GitHub repository: SXA.Foundation.Variants

I previously wrote a post about a rendering variant holding inline JavaScript, which one might need when adding some basic JS functionality to your components.

This is useful early in development, when you have no possibility or capacity to recompile the entire frontend and update the Creative Exchange package in your solution just because a few lines were added or changed; however, this approach is not SEO-friendly, as search engines penalize sites for excessive inline scripts and styles. So treat it as technical debt that should be addressed prior to going to production.

The very minimal change one can do is to replace the inline script with a reference to that same script stored in the Media Library, the same thing SXA itself does with themes. The blog post below reveals the approach:

Firstly, create a template:

Then reference the given template IDs within the Constants.cs file:

using Sitecore.Data;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public static partial class Constants
    {
        public static partial class RenderingVariants
        {
            public static partial class Templates
            {
                public static ID ScriptReferenceTag { get; } = new ID("{0EC036D7-384D-4CF6-AD1F-FE949E96126A}");
            }

            public static partial class Fields
            {
                public static class ScriptReferenceTag
                {
                    public static ID ScriptMedia { get; } = new ID("{F1497AF9-7DD3-4B38-BE22-5F092007F929}");
                }
            }
        }
    }
}
The model class has just one property that stores the GUID of the referenced script from the Media Library:
using Sitecore.Data.Items;
using Sitecore.XA.Foundation.RenderingVariants.Fields;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class VariantScriptReferenceTag : RenderingVariantFieldBase
    {
        public string ScriptMedia { get; set; }

        public VariantScriptReferenceTag(Item variantItem) : base(variantItem)
        {
        }
    }
}
Parser:
using Sitecore.Data;
using Sitecore.XA.Foundation.Variants.Abstractions.Pipelines.ParseVariantFields;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class ParseScriptReferenceTag : ParseVariantFieldProcessor
    {
        public override ID SupportedTemplateId =>  Constants.RenderingVariants.Templates.ScriptReferenceTag;
        
        public override void TranslateField(ParseVariantFieldArgs args)
        {
            ParseVariantFieldArgs variantFieldArgs = args;

            var variantHtmlTag = new VariantScriptReferenceTag(args.VariantItem) { Tag = "script" };
            variantHtmlTag.ScriptMedia = args.VariantItem[Constants.RenderingVariants.Fields.ScriptReferenceTag.ScriptMedia];
            variantFieldArgs.TranslatedField = variantHtmlTag;
        }
    }
}
Renderer:
using System;
using Sitecore.Data;
using System.Web.UI.HtmlControls;
using Sitecore.XA.Foundation.RenderingVariants.Pipelines.RenderVariantField;
using Sitecore.XA.Foundation.Variants.Abstractions.Pipelines.RenderVariantField;
using Sitecore.Resources.Media;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class RenderScriptReferenceTag : RenderVariantField
    {
        public override Type SupportedType => typeof(VariantScriptReferenceTag);

        public override void RenderField(RenderVariantFieldArgs args)
        {
            var variantField = args.VariantField as VariantScriptReferenceTag;
            if (variantField != null)
            {
                var id = variantField?.ScriptMedia;
                if (string.IsNullOrWhiteSpace(id))
                {
                    return;
                }

                var scriptItem = Context.Database.GetItem(new ID(id));
                if(scriptItem == null)
                {
                    return;
                }

                var url = MediaManager.GetMediaUrl(scriptItem);

                var tag = new HtmlGenericControl(variantField.Tag);
                tag.Attributes.Add("type", "text/javascript");
                tag.Attributes.Add("defer", String.Empty);
                tag.Attributes.Add("src", url);

                args.ResultControl = tag;
                args.Result = RenderControl(args.ResultControl);
            }
        }
    }
}

Example of usage:

This rendering variant field generates the following output:

<script src="/-/media/Project/Platform/Other/Scripts/Header-script.js" type="text/javascript" defer="" ></script>

This approach works perfectly well. But once again, pause for a second: have you ever considered moving such scripts into a Theme along with the related component (if any), instead of leaving it like that? Hope this helps!

Walkthrough: creating a footer for an SXA website implementing a precisely demanded front-end markup

Note! This is the second walkthrough explaining the implementation of a real-life scenario with Sitecore SXA.
It reveals best practices and several powerful techniques, such as:

  • structuring data for complex components to be both easy to maintain and editor-friendly
  • referencing other renderings using the Component field, setting a datasource and a rendering variant
  • reusing existing components and their templates via the Clone Rendering PowerShell script
  • nesting rendering variants and looping through them
  • using the Query variant field for accessing child items
  • restricting rendering variants to certain page templates

When I started working with SXA, I faced a lack of good guidance and walkthroughs (with the exception of the excellent series by Adam Najmanowicz). Today I am going to close this gap by adding my own walkthrough of implementing a footer. What could be simpler, I thought? After reading some official tutorials I expected it to be an exercise of dropping structure components (i.e. splitters with rows and columns) and assigning link lists into them, so that it could later be styled by the front-end team. Wrong! Things appeared to be not that easy.

To start with, my strict front-end team came to me with quite a precise requirement for a footer, and below is what they demanded from me.

Requirements

They sent me an image with all the blocks assigned:


It was accompanied by the HTML output I was expected to achieve:


Minimal and effective; nice job, front-enders! Now the ball is in my court to get all of that implemented.

Implementation

1. As per Sitecore SXA recommendations, I create a Footer partial design and open it in Experience Editor for editing. The Footer component will be created for that partial design, which itself will be used to construct the resulting page.

2. Not to mess with any OOB components, I create the Footer rendering by cloning one of the existing renderings with a datasource, which ensures that a copy of the datasource template and folder is created. Also make sure this rendering stays outside of the Experience Accelerator folder, in a Feature-layer folder where serialization is enabled.

3. Assign the new rendering to Available Renderings (/sitecore/content/Tenant/Site/Presentation/Available Renderings/Module) so that it appears in my custom components section and also in the Toolbox.

4. Now it is a good time to adjust the template for Footer. It was created automatically by cloning, but this is how I defined it:


What is important to explain here: the Elements field points to a Footer Elements Folder. At first glance, that folder contains just a collection of link lists. But that's not right! There are at least 3 types of footer blocks: the first four blocks are indeed Link List blocks; however, they are followed by Rich Text blocks, and finally there is a Social Presence block, and all three can be added into the Footer Elements Folder. These three types are defined along with the Footer template, as you may see on the screenshots below.

5. Elements templates. This is how I defined them. Link list footer element:

Rich text footer element:

Social presence footer element:


6. Insert Options need to be configured for the Footer folder to accept both Footer items and the Footer Elements Folder. Configure the Footer Elements Folder to accept these 3 types. Now one can insert the data. Once done, this is how the data folder looks:


The first 4 items (About Us, Hot topics, Other sites, Help and Support) reference the corresponding Link Lists, as defined below:


The next two items point to reusable Rich Text items under the /Data folder. And the last one is a reference to the Social Presence component I have implemented previously.

After items of all three types are created, we can assign them to the footer datasource item itself:



7. Rendering variants. In the previous steps, we have defined the rendering, templates, folders and actual data. Now it is time to make it all work together to produce the output by creating rendering variants. This is the trickiest part of the current walkthrough.


Default is the main and only rendering variant to be called for the footer. The other three variants are "service" variants designed to be called from Default internally in a loop, being assigned to a Component rendering field with personalisation applied (see image above).

Here's how these three other variants look:


The Link list footer element switches to the referenced Link List item and uses a Query variant field to iterate its children.

The Rich Text footer element simply references a Rich Text (reusable) item under the /Data folder in the same manner.

As for the Social presence footer element rendering variant, it defines a component that references the Social buttons rendering with the Social presence rendering variant, which I have described in one of my previous posts.

Lastly, footer-info__copyright field renders copyright lines at the very bottom.

8. In order to avoid confusion for your editors, it makes sense to restrict the rendering variants via the Allowed in templates field, leaving only the Default variant available, since the other three variants are internally called from Default using the Component variant field.

9. Apply the new rendering to the Partial Design in Experience Editor. Do not forget to configure the footer placeholder to accept only the Footer component by creating a placeholder setting.

Result

Save the page and enjoy the result:


Not a single line of back-end code!

How to add id and data-attributes to a Rendering Variant in SXA?

When dealing with a rendering variant field, it is not a big deal to set a few data-attributes on it: those inputs are located at the very bottom of the Variant Details section. You can do it like this:


But what if you need data attributes on the top level of the component, which is the Rendering Variant item itself? There isn't such an option!

The requirements are:

  1. An id attribute (e.g. section-1, section-2, ... section-N)
  2. One or many data-attributes (e.g. "User-friendly title", "Another user-friendly title", etc.)
  3. A CSS class section-with-anchor on those instances which have both previous requirements implemented

All of the above should be set on the top node of a rendering, outside of the control of the Rendering Variant. Thinking logically: if we could ever add the above to the Rendering Variant item itself, then it would be present on every single instance of that given rendering variant. We do have a CSS Class field on the Rendering Variant item, but as I said, we need this class to be present only occasionally, for some individual instances, as per the requirement, so we cannot use that field.

Solution

That is where Rendering Parameters come into play, as they apply to each individual rendering usage. Let's take a look!

1. ID of a component. That was the easiest, as luckily the default rendering parameters do support a field for that:


2. Data-attributes do not exist in the Rendering Parameters control, unlike the id attribute. But since it is just a collection of key-value pairs, why not convert those pairs into a set of data-attributes on the component node? Not all of them, of course, but only those that start with data-, as on the image below:


In order to pick them up and assign them to a rendering view, I wrote a simple extension method:
using System;
using System.Collections.Generic;
using System.Web.Mvc;

// extension methods must live in a static class; the class name here is arbitrary
public static class RenderingParametersHtmlExtensions
{
    public static MvcHtmlString RenderAllDataAttributes(this HtmlHelper helper)
    {
        var rendering = Sitecore.Mvc.Presentation.RenderingContext.Current.Rendering;

        string additionalAttributes = String.Empty;
        if (rendering?.Parameters != null)
        {
            // only rendering parameters whose keys start with "data-" become attributes
            foreach (KeyValuePair<string, string> parameter in rendering.Parameters)
            {
                if (parameter.Key.ToLower().StartsWith("data-"))
                {
                    additionalAttributes += $"{parameter.Key}=\'{parameter.Value}\' ";
                }
            }
        }

        return new MvcHtmlString(additionalAttributes.Trim());
    }
}
It can be called like that:
<div @Html.RenderAllDataAttributes() @Html.Sxa().Component(Model.Rendering.RenderingCssClass ?? "default-class", Model.Attributes)>
    <div class="component-content">
        ...
    </div>
</div>


3. Setting a style class. As mentioned before, do not misuse the CSS Class field of the rendering variant definition for styling a specific implementation of a rendering variant. Styles that are applied individually to each instance of a rendering (regardless of the variant selected) can be found in that same Rendering Parameters window, in the Styling section.


Of course, you may need to create this style beforehand, if it is not yet done. To do so, create a new Style item underneath the Styles grouping item and restrict it to the renderings where the given style can be shown. That is a part of your style theme and is located under the /sitecore/content/Tenant/Site/Presentation/Styles node:

Result

Finally, I got it all rendered as expected:


This blog post shows how useful Rendering Parameters are; hope you find it helpful!

Script rendering variant field in SXA - why would one need it?

Note! The code used in this post can be cloned from the GitHub repository: SXA.Foundation.Variants

Yet another rendering variant field came to my to-do list for implementation: the Script rendering variant field. Why would I need one at all?

Just recently I came across two use cases where implementing this field type got the job done. Some developers have (reasonable) biases against having JavaScript code inline instead of referencing a JS file at the bottom of a page, and I do understand their point: Google may penalise your site for overusing inline JS. So use it responsibly and don't overuse it. Keeping in mind that the given field comes up as a part of a component dynamically added to a page, there isn't much choice on how to extend a running website with some additional client-side functionality. I am presenting both cases below; let's take a look at them. Should you know a better way of achieving these goals, please let me know via Slack or Twitter. Also, the code of the Script variant field is at the very bottom of the page.


Use case 1: implementing in-page navigation panel

As usual, I got a precise requirement from my strict front-end team to implement such a piece of code as a component. It has a UL-tag that will keep a list of links to the other components of this same page (prefixed with #), created dynamically on the client side, and a script that does the actual job. When the in-page navigation component is rendered by the backend, we are not aware of the other components and their attributes, so the workaround was to handle the page-loaded event and identify all the components that have IDs set and the class section-with-anchor.

<div class="component content col-12 content-section">
    <div class="component-content">
        <div class="anchor-panel">
            <ul class="anchor-panel__list"></ul>
            <script defer>
                document.addEventListener('DOMContentLoaded', function(){
                    if (!$('.section-with-anchor').length) return;
                    $('.section-with-anchor').each(function(index, el) {
                        var anchor = '#' + $(el).attr('id');
                        var text = $(el).attr('data-text');
                        var $anchorList = $('.anchor-panel__list');
                        var $anchorItem = $('<li class="anchor-panel__item"></li>');
                        var $anchorLink = $('<a href=""></a>')
                        $anchorItem.append($anchorLink.text(text).attr('href', anchor));
                        $anchorList.append($anchorItem);
                    });
                });
            </script>
        </div>
    </div>
</div>

Quite obviously, my rendering variant will contain 2 fields: a UL-tag section field for the in-page navigation that takes the links dynamically, and a Script variant field containing the client-side logic. You can test this script in action via this link. This is how it looks in Content Editor:


Since I have 5 other sections qualifying for the script's requirements, they all have been identified by the script and added to the in-page navigation panel. Here's how the result looks on a styled page for me:


Once again, I decided to implement a new Script variant field because the component is subject to occasional minor JavaScript changes, yet should remain configurable. Also, in the given use case it can be dropped only once onto a page, so there's no problem with multiple instances of the same script for me; when using it, you might need to check whether that applies to your scenarios. But even with that in mind, I'd probably not have bothered creating yet another rendering variant field if not for a few more potential usages I have in the backlog.


Use case 2: accessing a client-side URL hash-key parameter and presenting it on a page

Recently I implemented a URL query string parameter variant field, where one can set any parameter and the field presents its URL-decoded value on a page in any given tag and style. A colleague of mine who develops a search results page with SXA asked whether it is doable to extract a hash-key URL parameter and show it on a page along with other content; however, that is a fully client-side parameter that never gets posted to the server.

I decided to give it a try and managed to confirm it with a small proof of concept. I quickly wrote this script (so please be forgiving: it is just a quick PoC and I am not a proper FED).

window.addEventListener("hashchange", function () {
    var h1 = document.getElementsByClassName("updatedHashValue");
    if (h1.length > 0) {
        h1[0].innerHTML = getHashValue("param");
    }
    function getHashValue(parameter) {
        var hashValues = window.location.hash.substr(1);
        var result = hashValues.split('&').reduce(function (result, item) {
                var parts = item.split('=');
                result[parts[0]] = parts[1];
                return result;
        }, {});
        return result[parameter];
    };
});

That's how it looks implemented as part of a rendering variant. It creates an H1 element to store the value, which is extracted from the hash parameters in the URL bar and updated without any postback to the server, and, of course, the Script variant field:


Testing. After adding the component to a page, saving and selecting the above rendering variant, I got the page reloaded as anticipated with no visual changes. Then I open a console in the browser dev tools and enter:

window.location.hash = "param=Successful!"

From the dev tools it was easy to confirm that there was no postback to the server; however, the browser navigation bar predictably changed, appending the new hash parameter pair:

And guess what? The H1 element immediately got that value displayed. Love that magic!


The code is very simple, similar to the other variant fields I've blogged about previously. Create the model and implement a property for the field:

public class VariantScript : VariantField
{
    public string Script { get; set; }
}

Reference the ID of that field from the template, and the ID of the template itself:

public static class Constants
{
    public static class RenderingVariants
    {
        public static class Templates
        {
            public static ID Script = new ID("...");
        }
        public static class Fields
        {
            public static class Script
            {
                public static ID ScriptField { get; } = new ID("...");
            }
        }
    }
}

Here's the parser:

public class ParseScript : ParseField
{
    public override ID SupportedTemplateId => Constants.RenderingVariants.Templates.Script;

    public override void TranslateField(ParseVariantFieldArgs args)
    {
        ParseVariantFieldArgs variantFieldArgs = args;

        variantFieldArgs.TranslatedField = new VariantScript
        {
            Script = args.VariantItem[Constants.RenderingVariants.Fields.Script.ScriptField]
        };
    }
}

and a renderer:

public class RenderScript : RenderVariantField
{
    public override Type SupportedType => typeof(VariantScript);

    public override RendererMode RendererMode => RendererMode.Html;

    public override void RenderField(RenderVariantFieldArgs args)
    {
        var variantField = args.VariantField as VariantScript;
        if (variantField != null)
        {
            args.ResultControl = RenderScriptField(variantField, args);
            args.Result = RenderControl(args.ResultControl);
        }
    }

    protected virtual Control RenderScriptField(VariantScript variantScript, RenderVariantFieldArgs args)
    {
        if (!string.IsNullOrWhiteSpace(variantScript.Script))
        {
            var tag = new HtmlGenericControl("script") { InnerHtml = variantScript.Script };
            tag.Attributes.Add("defer", String.Empty);
            return tag;
        }

        return new LiteralControl();
    }
}

This new Script rendering variant field enhances your tooling for implementing more modern-looking websites. And as usual, please use it responsibly!

Welcome Item Reference - a rendering variant field missing out of the box in SXA

Note! The code used in this post can be cloned from the GitHub repository: SXA.Foundation.Variants
The majority of folks working with SXA are aware of the Reference variant field, which allows switching the context of a rendering variant from the context item (the current page, or the datasource item if set) to one or many items referenced by a link-type field of the given context item. It works like a charm, but in some cases setting a datasource is not applicable, or you may need your component to show something other than the provided datasource item. I have already described how to implement that using the Query variant field; let's evaluate it:
Pros
  • it comes out of the box and can be used straight away
  • you may query in a more complex way rather than just providing an ID
Cons
  • it relies on the SXA search index being up to date, so you need to have it rebuilt
  • writing a query is less user-friendly than just picking an item.
So why not implement a dedicated Item Reference variant field? It definitely pays off once used often.

This time I picked the built-in Reference variant field as a donor. Instead of the PassThrough field, I added an Item field of the Droptree type:



Code-wise, I implemented one property named PassThroughItem. There's also another one to store the child fields nested underneath the given reference item field; those will be executed and rendered in the switched context:
public class VariantItemReference : BaseVariantField
{
    public string PassThroughItem { get; set; }

    public IEnumerable<BaseVariantField> NestedFields { get; set; }
}
We need to reference the IDs of the template and its single field:
public static class Constants
{
    public static class RenderingVariants
    {
        public static class Templates
        {
            public static ID ItemReference = new ID("...");
        }

        public static class Fields
        {
            public static class ItemReference
            {
                public static ID Item { get; } = new ID("...");
            }
        }
    }
}
Implement a parser:
public class ParseItemReference : ParseVariantFieldProcessor
{
    public override ID SupportedTemplateId => Constants.RenderingVariants.Templates.ItemReference;

    public override void TranslateField(ParseVariantFieldArgs args)
    {
        ParseVariantFieldArgs variantFieldArgs = args;

        var reference = new VariantItemReference();
        reference.ItemName = args.VariantItem.Name;
        reference.PassThroughItem = args.VariantItem[Constants.RenderingVariants.Fields.ItemReference.Item];

        reference.NestedFields = args.VariantItem.Children.Count > 0
            ? ((IVariantFieldParser)ServiceLocator.ServiceProvider.GetService(typeof(IVariantFieldParser))).ParseVariantFields(args.VariantItem, args.VariantRootItem, false)
            : new List<BaseVariantField>();

        variantFieldArgs.TranslatedField = reference;
    }
}
And a renderer:
public class RenderItemReference : RenderVariantFieldProcessor
{
    public override Type SupportedType => typeof(VariantItemReference);

    public override RendererMode RendererMode => RendererMode.Html;
      
    public override void RenderField(RenderVariantFieldArgs args)
    {
        var control = new PlaceHolder();

        var variantItemReference = args.VariantField as VariantItemReference;
        if (!string.IsNullOrWhiteSpace(variantItemReference?.PassThroughItem))
        {
            var newContextItem = Sitecore.Context.Database.GetItem(new ID(variantItemReference.PassThroughItem));
            if (newContextItem != null)
            {
                foreach (BaseVariantField referencedItem in variantItemReference.NestedFields)
                {
                    RenderVariantFieldArgs variantFieldArgs = new RenderVariantFieldArgs
                    {
                        VariantField = referencedItem,
                        Item = newContextItem,
                        HtmlHelper = args.HtmlHelper,
                        IsControlEditable = args.IsControlEditable,
                        IsFromComposite = args.IsFromComposite,
                        RendererMode = args.RendererMode,
                        Model = args.Model
                    };

                    CorePipeline.Run("renderVariantField", variantFieldArgs);
                    if (variantFieldArgs.ResultControl != null)
                    {
                        control.Controls.Add(variantFieldArgs.ResultControl);
                    }
                }
            }
        }

        args.ResultControl = control;
        args.Result = RenderControl(args.ResultControl);
    }
}
Here is the usage:


Don't forget to add it into the Insert Options, and I hope you'll enjoy it!