Experience Sitecore ! | More than 200 articles about the best DXP by Martin Miles


Sitecore Edge considerations for sitemap

A quick one today. We recently came across interesting thoughts and concerns about using Sitecore Edge. As you might know (for example, from my previous post), there are no more CD servers when publishing to Sitecore Edge - think of it as just a GraphQL endpoint serving out JSON.

So, how do we implement a sitemap.xml in such a case? Brainstorming brought several approaches to consider:

Approach one

  • Create a custom sitemap NextJS route
  • Use GraphQL to query Edge with a search query; here we would have to iterate through items in increments of 10
  • Cache the result on Vercel side using SSG
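To make Approach one more concrete, here is a minimal TypeScript sketch of its two building blocks: a pure helper that renders collected URLs into sitemap XML, and a loop that pages through search results. The `queryEdge` callback, its result shape, and the field names are assumptions for illustration only - the actual Edge search schema, page size, and cursor fields will depend on your setup.

```typescript
// Sketch of Approach one: build sitemap.xml from paginated Edge results.
// The query/result shapes below are illustrative assumptions, not the
// exact Edge schema.

type PageEntry = { url: string; lastModified?: string };

type EdgePage = {
  results: PageEntry[];
  endCursor: string | null;
  hasNext: boolean;
};

// Pure helper: render a list of URLs as a sitemap.xml document.
function buildSitemapXml(host: string, pages: PageEntry[]): string {
  const urls = pages
    .map((p) => {
      const lastmod = p.lastModified ? `<lastmod>${p.lastModified}</lastmod>` : "";
      return `<url><loc>${host}${p.url}</loc>${lastmod}</url>`;
    })
    .join("");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">${urls}</urlset>`
  );
}

// Iterate Edge search results page by page (the post mentions increments
// of 10) until the cursor is exhausted.
async function fetchAllPages(
  queryEdge: (after: string | null) => Promise<EdgePage>
): Promise<PageEntry[]> {
  const all: PageEntry[] = [];
  let cursor: string | null = null;
  let hasNext = true;
  while (hasNext) {
    const page = await queryEdge(cursor);
    all.push(...page.results);
    cursor = page.endCursor;
    hasNext = page.hasNext;
  }
  return all;
}
```

A Next.js route for `/sitemap.xml` would then call `fetchAllPages` with a real GraphQL client and return `buildSitemapXml(...)` with an `application/xml` content type, cached via SSG/ISR as the post suggests.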

Approach two

  • Create a service on the CM side that returns all published items/URLs
  • This service would only be accessible by an Azure Function, which would generate a sitemap file and store it in a CDN
  • The front-end would then access this file and render its content (or similar)

Approach three

  • Generate all the sitemaps (if there is more than a single sitemap) on CM, then store them all in single text fields
  • Return them via Edge using GraphQL to the front-end head, which handles sitemap.xml

Then I realized: SXA Headless boasts SEO features OOB, including sitemap.xml. Let's take a look at what it does in order to generate sitemaps.

With SXA in 10.3, the team has revised the Sitemap feature, providing much more flexibility to cover as many use cases as possible. Looking at the /Sitecore/Content/Tenant/Site/Settings/Sitemap item, you'll find lots of settings for fine-tuning your sitemaps depending on your particular needs. CM crawls the websites and generates sitemaps, which then get published to Sitecore Edge as a blob and proxied by a Rendering Host via GraphQL. When search engines request the sitemap of a particular website, the Rendering Host gives them exactly what was asked for. That is actually similar to approach three above, with all the invalidation and updates of sitemaps also provided OOB.

This gives out a good amount of options, depending on your particular scenario.

Sitecore 10.3 is out! What's new?

On December 1st, after more than a year of hard work, Sitecore released version 10.3 of its XM and XP platforms.

Please note that Experience Commerce sales were discontinued after version 10.2, so it is unclear whether there will be any more XC releases. Historically, XC releases have followed the platform releases with a lag of several weeks.

Let's take a look at what Sitecore put into the latest release.

With version 10.3, Sitecore moved in the direction of unifying its XM/XP platforms with XM Cloud. The two biggest proofs of that are SXA Headless and the integrated webhook architecture being part of 10.3 - similar to XM Cloud.

Headless SXA

As you may have heard, Headless SXA became a first-class citizen of XM Cloud. Now we get Headless SXA with 10.3 and new Next.js Headless SXA components, such as Container, Image, LinkList, Navigation, PageContent, Promo, RichText, Title, etc. The SXA development team did an incredible job aiming to achieve feature parity for their product between XM Cloud and the XM/XP platforms.

Because of that, the team sadly had to retire several features that do not fall nicely into the new concept - that's why Headless SXA no longer uses Creative Exchange. The same holds for Forms - you will not be able to use them with Headless SXA out-of-the-box; there is, however, documentation on how to use forms with Next.js, and one can also consider a dedicated forms builder. At the same time, SXA Headless brings some new concepts, like Page Branches and site-specific standard values. You may also want to leverage the nextjs-sxa starter template (installed with npx create-sitecore-jss --templates nextjs,nextjs-sxa).

Among the new features, I like the ability to duplicate pages without subpages by right-clicking a page, which may be helpful for cloning landing pages that have multiple subpages without the unwanted routine of manually deleting the cloned subpages afterward. Headless SXA also works well with SEO concepts such as sitemaps, robots.txt files, redirect items and maps, as well as error handling (for generating static 404 and 500 pages) - all extremely useful for almost any headless site.

In general, if you are planning a new implementation today and feel positive about using SXA, the best advice would be to download 10.3 and use the new Headless SXA with it. That immediately brings you into the headless world of 2023 and drastically simplifies further upgrades, not to mention a potential migration to XM Cloud.

Webhooks

Webhooks are a new introduction to the XM/XP platforms, while other Sitecore SaaS products - XM Cloud, Content Hub, OrderCloud, etc. - already use them. But first, what are webhooks? A webhook is just an HTTP request, triggered by some event in a source system, sent to any destination you specify and carrying a useful payload of data. Webhooks are automatically sent out when their event fires in the source system. Basically, they are user-defined HTTP callbacks triggered by specific events. As per the documentation, we are given three types of webhooks: webhook event handlers, webhook validators, and webhook submit actions.

A good example of webhook usage is validating and, if needed, canceling workflow transitions.
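As an illustration of that example, here is a hedged TypeScript sketch of the decision logic such a validator endpoint might run. The payload and response shapes below are assumptions for illustration only, not the exact Sitecore webhook contract - check the official webhook documentation for the real schema.

```typescript
// Hypothetical payload a workflow-validation webhook might carry.
// Field names here are assumptions, not the documented Sitecore contract.
type WorkflowPayload = {
  itemId: string;
  commandName: string; // e.g. "Approve"
  userName: string;
};

type ValidationResult = { valid: boolean; message?: string };

// Pure rule: only members of a (hypothetical) approvers list may run
// the "Approve" transition; everything else passes through.
function validateTransition(p: WorkflowPayload, approvers: string[]): ValidationResult {
  if (p.commandName === "Approve" && !approvers.includes(p.userName)) {
    return { valid: false, message: `${p.userName} is not allowed to approve` };
  }
  return { valid: true };
}
```

An HTTP handler (e.g. a Next.js API route) would deserialize the POSTed payload, call `validateTransition`, and return the result so the source system can cancel or allow the transition.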

GraphQL Authoring and Management API

Another great new feature is the GraphQL Authoring and Management API. It provides a GraphQL endpoint for managing Sitecore content and performing custom authoring tasks that previously could only be done through the Sitecore user interface - almost any function. That means we can now automate operations around items (including media), templates, and search, as well as manage sites. Unfortunately, user management is not yet supported.
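To give a feel for how such automation might look, here is a hedged TypeScript sketch that composes a create-item mutation and posts it. The mutation shape, endpoint path, and auth scheme are illustrative assumptions modeled on the general GraphQL authoring pattern - verify the field names against your instance's schema browser before relying on them.

```typescript
// Sketch: composing a create-item call against the Authoring API.
// The mutation/field names below are assumptions for illustration.

// Pure helper: build the mutation text for creating an item.
function createItemMutation(name: string, templateId: string, parentId: string): string {
  return `mutation {
  createItem(input: {
    name: "${name}",
    templateId: "${templateId}",
    parent: "${parentId}",
    language: "en"
  }) {
    item { itemId name path }
  }
}`;
}

// Posting the mutation (endpoint URL and bearer-token auth are
// deployment-specific assumptions).
async function callAuthoringApi(endpoint: string, token: string, query: string): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify({ query }),
  });
  return res.json();
}
```

The same pattern extends to templates, media, and site management operations exposed by the endpoint.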

Sitecore Forms

Forms is a feature used on almost every solution I have worked on, so it is a pleasure to see the new Embeddable Forms Framework. Using it, one can add a Sitecore Form to any webpage, including pages that are not running on a Sitecore application - similar to what FXM allowed. The good news is that an embedded form supports custom form elements and will not mess with any existing styles on a page, as it is powered by Tailwind CSS. However, to benefit from Embeddable Forms you must have at least Headless Services 21.0.0 in place, to deal with the Layout Service and the endpoint for data submission.

xConnect

There is a new Data Export Tool that exports both contacts and interactions from the xDB into files. It supports both Azure Blob and File Storage providers for your deployments, and can also write to a network folder, which is helpful for local instances.

Database Encryption

At the storage level, Transparent Data Encryption can be used with MS SQL Server to protect critical data through data-at-rest encryption. In simple words, the data gets encrypted before it is written to the databases, so the physical SQL tables contain already-encrypted data. On read access, the data gets transparently decrypted for authorized SQL users. This significantly protects the stored information, prevents data breaches, and helps comply with regulatory requirements for sensitive data.

What raised the event?

An interesting new feature helps us identify which database raised the publish:end / publish:end:remote events, which simplifies updating the cache on remote CD instances.

Sitecore CLI

Version 5.0 of the CLI had been around for a while since the XM Cloud release; now, as version 5.1.25, it has also become an integral part of the 10.3 platforms. It supports Linux-based environments, features publishing to Edge, and adds a few more new commands. It also employs integrated telemetry so that the developers can improve the CLI even further; be aware, however, that telemetry can raise security compliance concerns in governed environments.

What are the additional features we will see with the 10.3 release?

  • With version 10.3 of the platforms, Headless Services v21 comes into play. You may find a new starter kit for your new projects on Next.js 12.3.x over React 18.
  • Sitecore Host (along with components relying on it, such as Publishing Service and Identity Server 7) was updated to .NET 6.0, which is an LTS version of the framework and has improved performance.
  • The supported version of Solr is now 8.11.2.
  • Those using EXM may now benefit from OAuth authentication with third-party services for custom SMTP.
  • Horizon, unfortunately, won't get any update beyond version 10.2. Although it technically still works with the 10.3 platforms, Sitecore discourages using it with 10.3 or later.
  • Management Services 5.0, which offers publishing to Experience Edge, is now capable of publishing a single item, along with a few more improvements.
  • Search has received numerous improvements, like searching by ID and path; also, searching for terms not enclosed in quotes returns both exact and possible matches.
  • Windows Server 2022 support was promised but is slightly delayed, until January 2023. I assume the support relates primarily to 2022-based containers rather than the underlying infrastructure.
  • More than 160 other issues submitted by customers were fixed and released in 10.3!


You can download and install Sitecore 10.3 right now - please feel free to share your thoughts on it!


Sitecore 10.3 dashboard

My 2022 Sitecore contributions

One more year has passed so quickly! I have just submitted my MVP application for the coming year, and it actually took me a while to manage my contributions in the new format with its mandatory date selector. Not everything got submitted well there for me, so I decided to duplicate my submission in an open format - maybe that will in some way help other new and potential applicants.

To start with, I need to mention that this year was very special for me - I relocated from England, UK to the USA for family reasons. But since my family is non-US, getting work authorization took an insane amount of time and effort: it took me more than half a year to find a great place to work that could sponsor an O-1 visa, the only possible option for me to start working. Luckily, the visa was granted, and I recently started my new career path.


Learning and certification

So, having lots of time on the bench without any income, I still used my time wisely. I started the year by collaborating with the Learning@Sitecore team (they made really good progress this year!) as well as learning new things myself. Between January and May, I cleared the certification exams for each of the new Sitecore SaaS offerings. I also shared my learning and preparation in these blog posts: Content Hub, OrderCloud, and earlier there was also XP10.


SUGCON

But my biggest contribution came in the form of a SUGCON presentation in Budapest at the end of March. I picked a difficult topic that lay in a shadow area - performing Sitecore XP/XM upgrades. There was a lot of controversial information on that subject, and I had previously suffered through it myself on several projects. So I gathered all my knowledge, along with what I learned from colleagues and other MVPs, and made a universal guide to approaching and performing platform upgrades. All the traps are mentioned, and following my guide may save at least 2-3x of the cumulative effort.

I also lodged good proposals for SUGCON ANZ and Symposium; however, those weren't chosen, for whatever reasons.


Sifon

If you haven't heard about it, Sifon is a must-have open-source tool for any Sitecore developer that simplifies most of your day-to-day DevOps activities. Beyond its OOB "install-backup-restore" features, empowered by a plugin system, Sifon turns into a real Swiss army knife. Plugins reside in a separate repository that Sifon pulls with one click; everyone is welcome to create and pull-request their own plugins.

After XP 10.2 was released, I updated Sifon to support this latest version. It took me a few weeks in December-January to troubleshoot installing it on Windows 11; eventually it all got implemented, and a new version 1.2.6 became available (along with updated plugins). Thus, Sifon became the only GUI tool that can download and install XP on Win11 in just a few clicks - even easier than in containers.


Speaking at usergroups

I was invited to speak at three Sitecore usergroups.


Sitecore Discussion Club

I organized the Sitecore Discussion Club only three times this year. Despite being a wonderful event format compared to traditional Sitecore user groups, it showed most of its power in an offline, in-person setting. With both founders having moved out of the UK and now living in totally different time zones, maintaining online events has become a challenge. Much to my sadness, we're thinking of closing it next year unless someone from the UK community takes it over from me.


Blogging

Because I did not have a work-permit visa for such a long period, I was less exposed to actual challenges and had fewer topics to blog about. There are still ~10 posts over the past year, and my blog homepage nicely indexes all the posts as they were written, in reverse chronological order.


Sitecore Telegram

This year, more effort went into Sitecore Telegram - a great channel with regular insights about Sitecore products: useful tips, approaches, ideas, and concepts that are sometimes difficult to approach for Sitecore technology professionals - mostly developers, architects, and strategists - with around 800 subscribers in total. In 2022, I decided to expand Sitecore Telegram with four additional channels exclusively dedicated to the new SaaS products, to promote them directly and channel the audience's attention.


Podcasts

In the summertime, I was invited to (and took part in) two podcasts.


Sitecore Link project
When it comes to the Sitecore.Link project - it could seem semi-abandoned at first glance. That is partly true, but only partly. I still keep a fat bill in my pocket for the underlying infrastructure and anticipate changes to save a lot on ownership costs. At the moment, I am only adding some new content to the existing instance running on Sitecore 9.3 JSS. Big housekeeping is in scope to revise existing material that is no longer current. For quite a long time I was thinking of rebuilding it with the proper technology, but nothing was quite suitable... until now! Next.js is a 100% ideal technology for this project. The backend is to be rebuilt for XM Cloud and to be interchangeable with 10.3 + Edge, and one of my biggest ambitions for 2023 is to document the whole path as a series of tutorials for typical Sitecore backend developers to "convert" into the new headless world.


Awesome Sitecore
Awesome Lists are the legendary curated lists about almost everything in the world - if you have never come across them, please spend some time to see what they feature. I have been the creator and curator of the Awesome Sitecore list for the past 3 years, classifying all the important and valuable open-source repositories we (as the community) have created so far. This is the best place to look up code for almost any Sitecore aspect one may be coding. The repository has gained 54 stars so far, which is a great indicator of its value!


Organizing Los Angeles Sitecore User Group

After relocating to Southern California and committing to (plus presenting at) the Los Angeles Sitecore UserGroup, I was invited to become a co-organizer of this event. I proudly accepted and am now the person in charge of this quarterly event. The next big event takes place on December 1st, just a few hours after the MVP application closes, so technically it does not count as a contribution for 2022.



MVP Summit
That is, in my opinion, the best perk of being a Sitecore MVP, so I simply couldn't miss it! I shared some of my thoughts and vision (and some concerns as well) with the development teams, and generally had a great time learning lots of valuable information from other MVPs and Sitecore product teams.



Other MVP-related activities

Over the past several years I have had a pattern of using my half-hour 1-to-1 with the MVP Program to share my vision on platform development, business adoption, some Sitecore-related hot topics, and of course feedback for the Sitecore teams. I really hope it has been shared with the teams, helping them improve. This past December was no exception, and we had a very productive conversation, as usual.

This year I took part in the Sitecore Mentor program (as a mentor). I wish my mentee and I had spent more time together. My biggest takeaway is that unless you set up a recurring meeting invite that works for both of you, planning stays extremely difficult (given that we're both grown-ups with families and tons of responsibilities).

I also participated in MVP webinars and monthly MVP Lunch events as much as I could (given that my new timezone is a bit restrictive).

Lastly, as in all previous years, I helped out MVP Program with new applications and was happy to spot a few superstars on Sitecore horizon (no, not that one).


Future plans

Speaking about the plans for the next year, my new position requires me to get deeper with Sitecore and its products, with much more interaction and some shift-n-drift into strategy. I feel extremely positive about it!

Other than that:

  1. With no doubt, XM Cloud will be the headliner of 2023, not just for me but for all of us. Now, after finally getting access to the system, I am so eager to start blogging about it backwards and forwards.
  2. SmartHub (especially the CDP part of it) has huge undiscovered potential, which I am anticipating coming across, luckily my new employer specializes in it.
  3. Content Hub is another product (also well-practiced at my company) for me to master, discover and blog about.
  4. The XP platform is still with us, with lots of support required. Learning and documenting upgrade paths will be a hot topic in 2023.

My learnings from project management and lead experience mistakes I came across

Recently I went through a series of interviews with almost every one of the top 10 Sitecore Platinum partners in the US. That was a wonderful experience: I learned a lot about all these companies and their ways of doing business. Moreover, I can now definitely say that organizations, in their culture and temper, are like humans - some are proactive extroverts, while others are very pedantic, process-oriented nerds.

One of the companies put me through a set of a few ~1.5-hour-long interviews with a vice president who grilled me with lots of challenging but interesting questions, mostly management- and business-related. As usually happens, I kept thinking about it in the background well beyond the interviews, and those thoughts resulted in this post. I compiled it all into a single solid dump of my experience, so here we go.


Presales

Take it as a rule of thumb: you have to spend resources on presales! The evaluation should be well detailed, so as not to end up with a "negative profit" later. I remember a project where the assessment was done by an architect who did not allocate enough time to study the Customer's processes in detail. He conducted most of the work "according to the standard", using "best practices". As a result of the poor-quality assessment, the project ended with a "negative margin".

At the pre-sales evaluation stage of the project, it is necessary to provide a list of possible additional work (volume, cost, time). For example, a change in some indicators may entail restructuring the entire model, which can take a significant amount of time, require the involvement of significant resources, and cost decent money.


Integrations

Integration evaluation also belongs to the pre-sales evaluation of an upcoming project, but I am emphasizing this point intentionally.

First and foremost, you need to make sure that the systems do integrate with each other. Moreover, at the presale stage it is crucial to find out as many details about the upcoming integration as possible: what systems, what protocols, buses, etc.

In another project I took part in, the incompatibility of the external systems was discovered only at the integration stage, after work had started. That resulted in an unhappy customer and additional labor costs for developing and configuring an alternative. Don't underestimate this!


Agreement

Probably the most important stage with all the parties to be involved. Mistakes in the agreement may cost a lot!

1. It is crucial to ensure that the terms in the Agreement match your resource plan. Why? It often happens that, even at the presale stage of negotiating with a potential Customer, some additional clauses get added to the Agreement, and these changes are forgotten and never reflected in the resource plan. This is what happened on our project. The PM did not take part in signing the Agreement, and different participants in the process each added something of their own, but no one checked the compliance of the Agreement with the actual resource plan. As a result, the misalignment between the Agreement and the resource plan led to time and labor losses that had not been taken into account. Just doing an extra check would have eliminated that loss, saved time, and retained customer satisfaction!

2. When it comes to client training, it makes sense to describe the learning process in detail in the Agreement. You may include the number of hours, topics, the number and type of users - also specifying business and/or technical users - as well as the list of attached instructions (including their number, titles, etc.). No need to be ultra-precise here, but the scope should be agreed upon and signed. Once, our contract stated that we would provide instructions and provide training. But it turned out that the list of instructions and the amount of training planned by an Architect (why him, BTW?) mismatched the Client's expectations, and they demanded much more. This case is in fact a subset of the following point.

3. Try to avoid wording that could be interpreted differently. The presence of inaccurate wording may lead to ambiguity when the Customer insists on his own understanding of the Agreement's inaccuracies. According to the Agreement, during the data collection phase the team was supposed to hold an introductory workshop. However, it turned out that no one knew what exactly should be shown. The customer expected that workshop to demonstrate the system in action with their actual data, so that potential users would understand how the system would work. As a result, preparation took a lot of extra time not included in the initial project estimation. The lesson learned: when there is some uncertain clause in your contract with an unclear meaning or unknown implementation, it is better to immediately clarify what exactly this clause implies.

4. One more point relates to customer data or content. It is wise to specify in your Agreement that the Customer provides data in the required templates and formats. If that is not the case, then formatting/conversion work must be paid for additionally. Once, we had to load a large amount of data into the system. The Customer provided the data in a non-classified, loose, raw format. As a result, a large amount of labor was spent on cleansing the provided data. That wasn't originally included in the estimate calculations and resulted in extra time and cost.

5. This clause can likely be found in most contract templates; however, few people pay attention to it, sometimes letting deadlines slip. It pays to specify the deadlines or the number of iterations for approval. Any approval delays caused by the Customer should be recorded. By recording these delays, you can win some extra time to adjust the deadlines if needed.

6. When it comes to change requests, it helps to cap your maximum effort in the Agreement - for example, stating that changes that require adjusting the architecture by, say, more than 10% are provided for an additional fee. Most Agreement templates have this clause stated in one way or another, though.

7. It goes without saying, but I'll spell it out just in case: you must keep all communication with the Customer until project completion and beyond the warranty period. Always note and record any additional work outside the Agreement's terms that had to be done to complete the project - at least the time spent, the reason, the resources taken, etc.


Testing and Acceptance

1. All the acceptance criteria should be clear and known well in advance. 

2. When it comes to testing, the amount of effort should also be defined and signed well in advance. At least, types of tests, their order, and required resources from both sides.

3. It is often the case that testing is due to start but the required access is missing, or the testing environment is not ready or not even provided (I am writing here about things to be provided by the Client) - this should be recorded, as mentioned before.

4. There may be even less predictable cases: once, the Client demanded that most of their staff take part in the delivery testing, and the Agreement did not cover that case. The issue for us was that the Client's employees were in totally different time zones, which forced us to adjust. Had that been foreseen in the Agreement, a solution could have been negotiated instead of it hitting us unexpectedly at the cost of overtime for our Dev and Ops teams.

The above are just a few of the project issues I remembered, but they were still worth sharing.

My speech proposal for SUGCON ANZ: Developers' guide to XM Cloud

Developers' guide to XM Cloud

Over the last months, we've heard lots of insights about the newest cloud product from Sitecore - XM Cloud. Many developers have wondered what the changes to their scope and responsibilities would be, how they would work with this new SaaS solution, or whether they would even become redundant.

There is nothing to worry about! This session answers most of these questions and in fact serves as the most crucial developers' guide to XM Cloud, as of now. It explains the caveats of changes to the content lifecycle and the local development routine, introduces the new Headless SXA for XM Cloud, and covers new options for personalization. It will also highlight changes to the security model and site search, and give the best advice on legacy strategies and content migration. Finally, it shares some practical experience with Headstart for XM Cloud and the new deployment model, so that it all goes live!

Getting started
  •     why SaaS cloud? what is the change?
  •     a brief overview of XM Cloud for developers    
Familiar tools that remain with you
  •     review of the process and deployment tools available to a developer
  •     local development for XM Cloud in containers
  •     customizing pipelines with XM Cloud
  •     leveraging SPE functions
  •     Sitecore CLI becomes "The Tool"
Editing Experience and Content Considerations
  •     using Site Builder
  •     dealing with Pages & Components
  •     extensions catalog
  •     diversity of datasources: where can my content reside?
  •     migrating content from legacy Sitecore platforms
Changes in the security model
  •     Sitecore Unified Identity
  •     integrating 3-rd party services
Changes in search
  • where's my Solr?
  • what are the options?
  • plugging an external search technology
Dealing with the legacy
  •     are my legacy sites still compatible with XM Cloud?
  •     migrating headless site from XP to XM Cloud guidance
  •     EDGE considerations
  •     is my legacy module for XP compatible with XM Cloud?

SXA for XM Cloud
  •     new old Headless SXA - what's the difference
  •     new old rendering variants
  •     can we use headless forms on XM Cloud?
  •     a bare minimum to build Headless SXA site for Next.js
Hands-On
  •     starter kits available for you straight away
  •     leveraging Headstart basic starter kit foundation built for XM Cloud
  •     make your own module compatible with XM Cloud
Personalization
  •     are the built-in rules enough to go?
  •     two ways of leveraging CDP/Personalize for a better experience
Deploying into XM Cloud
  •     Single-location? Will that affect my GEO-distributed authors team?
  •     Understanding terminology: deployment, project, environment
  •     Understanding Build and Deployment Service (how to trigger and its lifecycle)
  •     CLI along with DevEx plugins
  •     GUI-powered Deploy App tool
  •     auto deployments from connected GitHub
It looks to me like an excellent topic shining a spotlight on the new Sitecore SaaS-based platform. Keep your fingers crossed!

Infrastructure-as-Code: best practices you have to comply with

Infrastructure as Code (IaC) is an approach that involves describing infrastructure as code and then applying that code to make the necessary changes. IaC does not dictate how exactly to write the code; it just provides tools. Good examples are Terraform, Ansible, and Kubernetes itself, where you don't spell out what to do - instead, you declare what state you want your infrastructure to reach.

Keep the infrastructure code readable. Your colleagues should be able to easily understand it and, if necessary, extend or test it. Though this looks like an obvious point, it is quite often forgotten, resulting in "write-only code" - code that can only be written but cannot be read. Even its author is unlikely to understand what he wrote and figure out how it all works just a few days later.

An example of a good practice is keeping all variables in a separate file. This is convenient because you do not have to search for them throughout the code - just open the file and immediately get what you need.


Adhere to a certain code-writing style. As a good example, you may want to keep the line length between 80 and 120 characters. If lines are very long, the editor starts wrapping them; line breaks destroy the overall view and interfere with understanding the code. One has to spend a lot of time just figuring out where a line starts and where it ends.

It's nice to have the coding-style check automated - at the very least by using the CI/CD pipeline for this. Such a pipeline could have a Lint step: a static analysis of what is written, helping to identify potential problems before the code is applied.


Utilize git repositories the same way developers do. By that I mean creating new branches, linking branches to tasks, reviewing what has already been written, sending Pull Requests before making changes, etc.

As a solo maintainer, one may consider the listed actions redundant - it is a common practice for people to just come and start committing. However, even if you have a small team, it can be difficult to understand who made some correction, when, and why. As the project grows, such practices will increasingly help you understand what is happening and avoid messing up the work. Therefore, it is worth investing some time into adopting development practices for working with repositories.


Infrastructure as Code tools are typically associated with DevOps. We know DevOps engineers as specialists who not only deal with maintenance but also help developers work: setting up pipelines, automating test launches, etc. All of the above also applies to IaC.

In Infrastructure as Code, automation should be applied: Lint rules, testing, automatic releases, etc. Having repositories with, say, Ansible or Terraform that are rolled out manually (by an engineer starting a task by hand) is not that good. Firstly, it is difficult to track who launched it, why, and at what moment. Secondly, it is impossible to understand how it worked out and draw conclusions.

With everything kept in the repository and controlled by an automatic CI/CD pipeline, we can always see when the pipeline was launched and how it performed. We can also control the parallel execution of pipelines, identify the causes of failures, quickly find errors, and much more.

You can often hear from maintainers that they do not test the code at all, or just run it first somewhere on dev. That is not the best practice, because it gives no guarantee that dev matches prod. In the case of Ansible or other configuration tools, the "standard testing" often goes like this:

  • launched a test on dev;
  • rolled on dev, but crashed with an error;
  • fixed this error;
  • once again, the test was not run because dev is already in the state to which they tried to bring it.

It seems that the error has been corrected, so you can roll to prod. What will happen on prod? It is always a matter of luck - hit or miss. If somewhere in the middle something falls over again, the error gets corrected and everything is restarted.

But infrastructure code can and should be tested. At the same time, even if specialists know about different testing methods, they often cannot apply them. The reason is that Ansible roles or Terraform files are written without any initial focus on the fact that they will need to be tested somehow.

In an ideal world, at the moment of writing code the developer is aware of what (else) needs to be tested. Accordingly, before starting to write the code, the developer plans how to test it - an approach commonly known as TDD. Untested code is low-quality code.

Exactly the same applies to infrastructure code: once written, you should be able to test it. Decent testing reduces the number of errors and makes life easier for the colleagues who will finalize your Ansible roles or Terraform files.


A few words about automation. A common issue when working with Ansible is that even where something could be tested, there is no automation around it. Usually it goes like this: someone creates a virtual machine, takes a role written by colleagues, and launches it. Then that person realizes the need to add certain new things to it - appends them and launches again on the virtual machine. Then they realize that even more changes are required, and also that the current virtual machine has already been brought to some kind of state, so it needs to be killed, a new virtual machine instantiated, and the role rolled over it. If something does not work, this algorithm has to be repeated until all errors are eliminated.

Usually the human factor comes into play, and after the N-th repetition one becomes too lazy to delete the VM and re-create it again. Once everything seems to work exactly as it should (this time), the changes get frozen and rolled out to the prod environment. But in reality errors can still occur - that is why automation is needed. When everything works through automated pipelines and pull requests are used, bugs are identified faster and prevented from re-appearing.

Sitecore Edge and XM Cloud - explain it to me as I was 5

Explain it to me as if I were 5 years old.

Well, I am not sure this can be explained to a 5-year-old, but I will instead explain it as if you had not been around the changes for, let's say, the past 5 years. There is a lot to go through. Before explaining the most interesting concepts like XM Cloud and Sitecore Edge, I need to briefly touch on some terminology they rely on.


Headless

Previously, you used Sitecore to render HTML using ASP.NET MVC. All of that happened server-side: Sitecore pulled up the data, your controllers built views with that content, and the resulting HTML was sent back to the calling browser by a CD server. That meant that if you needed just raw content or data not wrapped in HTML, the only way was to set up a duplicating WebAPI, which could be clumsy at addressing the correct data, or too verbose, returning much more data than you need. In any case - too exhausting!

So a logical question comes up: why not return the raw data universally through an API? It could then be consumed by various callers - mobiles, other systems, even content aggregators, not just browsers (that is what is called "omnichannel"). This is why the approach is called "headless": there is no HTML (or any other "head") returned along with your data.


Rendering Host

When it comes to browsers that still need HTML: it makes sense to merge the content with HTML later in the request lifecycle - after it leaves the universal platform endpoint that you still have on your CD. A webserver is still required to serve all the web requests. It receives a request, pulls the required raw data from the universal endpoint, and then renders the output HTML with that data. This is why such a webserver is known as a "rendering host": we have now clearly separated serving the raw data from rendering the HTML returned to the browser. Previously, both steps were done at a single point on the CD.


GraphQL

Reading the above, you could think that serving all the content through WebAPI would be some sort of overkill, which is especially valid for large and complicated data. There is also adequate caching to be considered. Even with a headless approach, imagine pulling a large list of books stored in some database, with a reference to authors by an authorId field.

So you either do lots of JOIN-like operations and expose lots of custom API endpoints to fit the data the way your client needs it, or you pull all the data from the database, cache it somewhere in memory, and keep "merging" books to authors on the fly (an in-memory JOIN per request). Neither is a nice solution. For really large data, there won't be an elegant one.
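To see what that second option means in practice, here is a tiny sketch of the per-request in-memory JOIN, with hypothetical book/author data (all names here are made up for illustration):

```python
# Hypothetical data: books reference their author by authorId, as in the example above.
authors = {
    1: {"id": 1, "name": "Ray Bradbury"},
    2: {"id": 2, "name": "Frank Herbert"},
}
books = [
    {"title": "Dandelion Wine", "authorId": 1},
    {"title": "Dune", "authorId": 2},
]

def merge_books_with_authors(books, authors):
    """The per-request in-memory JOIN: attach the full author record to each book."""
    return [{**book, "author": authors[book["authorId"]]} for book in books]
```

This is fine for two small lists, but with really large data the cache grows unbounded and every request pays for the merge.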

So there was a clear need for some sort of flexibility, and that flexibility should be driven by the client application, addressing its immediate need for data. Moreover, clients often want a specific set of data returned and nothing beyond what was requested - mobile apps typically operate over expensive and potentially slow mobile connections, compared to the superfast inter-datacenter networks between a CD and a rendering host. Also, a headless CD always returns meaningful, structured data of certain type(s), which means it can be strongly typed. And where there are several types, those can relate to each other. We clearly need a schema for the data.

That is how GraphQL was invented to address all of the above. Instead of lots of API endpoints, we now have a universal endpoint serving all our data needs in a single request. It provides a schema of all the data types it can return. So now it is the client who defines which type(s) of data to request, how those relate together, and the amount of data it needs - no more than it should consume. Another benefit of a predefined schema is that, knowing it in advance, writing code for client apps is quicker thanks to autocomplete, likely provided by your IDE. It also respects primitive types, supporting all the relevant operations (comparison, orderBy, etc.).
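As an illustration (the books/authors schema here is hypothetical, not an actual Sitecore Edge one), a GraphQL client describes exactly the fields it wants and POSTs that query, together with its variables, as a JSON body to the single endpoint:

```python
import json

# The client asks only for what it needs: book titles plus the related
# author's name - no other fields travel over the wire.
BOOKS_QUERY = """
query Books($first: Int!) {
  books(first: $first) {
    title
    author {
      name
    }
  }
}
"""

def build_graphql_request(query: str, variables: dict) -> str:
    """Build the JSON body that gets POSTed to the single GraphQL endpoint."""
    return json.dumps({"query": query, "variables": variables})
```

The schema lets the server validate such a query up front, and lets an IDE autocomplete field names while you type it.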


Sitecore Edge

Previously, with XP, you had a complex setup, the most important parts of which were CM and CD instances fed by the corresponding databases - commonly known as master and web. Editors logged into CM, created some content, and published it from master to the web database to be used by CD.

Now imagine you only keep the CM part of the above example. When you publish, it publishes "into a cloud". By "cloud" is meant a globally distributed database with a CDN for media, along with an API (GraphQL) to expose the content to your front-end.

In fact, content can reach Edge and be served from it not only from CM - Content Hub could be another tool feeding it, performing just as XM does.

Previously, you had a CD instance with a site deployed there consuming data from a web database; now you neither have those, nor does Sitecore provide them for you. That means you should build a front-end site that consumes data from the given GraphQL endpoint. That is what is called headless, so you could use JSS with or without Next.js, or ASP.NET Core renderings. Or anything else - any front-end of your choice, although with more effort. Or it could be not a website at all, but a smart device consuming your data - the choice is unlimited. Effectively, we've got something like CD-data-as-a-service, provided, maintained, and geo-scaled by Sitecore.


XM Cloud

From the previous explanation you've learned that Experience Edge is "when we remove the CD instance and replace the web database with a cloud service". Now we want to do exactly the same with XM. Provided as a service, it always runs the latest and least error-prone version, maintained and scaled by the vendor. Please welcome XM Cloud, and let's decouple all the things!

Before going ahead, let's answer what a typical XM was in Sitecore as we knew it, and what it was expected to do:

  • create, edit and store content
  • set up layout and presentation for a page
  • apply personalization
  • publishing to CD
  • lots of other minor things

Publishing has already been optionally decoupled from XM in the form of the Sitecore Publishing Service. It works as a standalone web app or an individual container. Its only duty is copying the required content from CM to CD, and it does that perfectly well and fast.

Another thing that can be decoupled is the content itself. Previously it was stored in the CM database in the form of the Sitecore item abstraction. What if we could have something like Content-as-a-Service, where the data could be supplied from any source at all that exposes it through GraphQL - any other headless CMS, or professional platforms such as Content Hub? That has a very "composable" look and feel to me! As for total flexibility: after setting up the data endpoint, authors can benefit from autocomplete suggestions coming from the GraphQL schema when wiring up their components.

Personalization also comes as a composable SaaS service - Personalize. Without it, XM Cloud will still offer you some basic personalization options.

Speaking about layout, it can also be decoupled. We already have Horizon as a standalone webapp/container, so whatever its cloud reincarnation turns out to be (i.e. Symphony?), it gets decoupled from the XM engine. The good old Content Editor will remain anyway, but its ability to edit content is limited to Sitecore items from the master database, unlike Symphony, which is universal.


Sitecore Managed Cloud

Question: is XM Cloud something similar to Managed Cloud, and what is the difference between them?

No, not at all. Sitecore Managed Cloud hosts, monitors, manages, and maintains your installation of the platform on your behalf. They provide the infrastructure and the default technology stack that suits it best. Previously you had to care for the infrastructure yourself, which took lots of effort - that is the main thing that changes with Managed Cloud. Managed Cloud supports XM, XP, and XC (on the premium tier, however).

XM Cloud, in contrast, is a totally SaaS offering. It will be an important part of the Sitecore Composable DXP, where you will architect and "compose" an analog of what XP was from various other Sitecore products (mostly SaaS, but not necessarily).


That is what XM Cloud is expected to be, in the composable spirit of a modern DXP for 2022. I hope we all enjoy it!

OrderCloud Certification - tips on preparation and successful pass

Finally, I am happy to complete my Sitecore learning and certification by successfully passing the OrderCloud Certification exam. In this post, I will share some thoughts and insights on how you could also progress with it.

This exam was relatively easy, especially compared with other Sitecore certifications. I have to note that I had read almost all the available documentation, taken the eLearning course, and made notes on every minor but important point - so I did not find it tough.

I managed to score 90%, being hesitant on 6 questions out of a total of 30, which is exactly 10% above the 80% required for a successful "pass" result. Basically, in that case, gambling out at least one of those questions takes the score above the pass level. I must admit the test questions seem very reasonably chosen and mature, as the product itself is.

In order to succeed, you'll definitely need to progress through the following materials:


The competencies under test are:
  • OrderCloud Architecture and Conventions
  • Integration
  • User Management and Access Control
  • Environments
  • Product Management
  • Order and Fulfillment Management
  • Troubleshooting

Please note: the eLearning is really good but does not cover all the competencies. It covers the Security and Product areas really well but has nothing about Order and Fulfillment (at the moment of writing this blog - there is a "coming soon" promise, however). That means you must fill the gaps with your own learning path.


Exam in numbers:
  • 60 minutes
  • 30 questions
  • 80% to pass
  • Costs $350 (some categories of test takers may qualify for a discount)

Today, while my memory is still fresh, I will try to recall some of the questions I personally had on the exam and share some thoughts on what to emphasize. Without going into nuances, you must definitely know:
  • features of OrderCloud architecture
  • environments and their purpose
  • the UI and how to switch context between marketplaces
  • types of webhooks and their purpose
  • products and variants
  • price schedules
  • order flows and their statuses
  • general error codes returned by OrderCloud and their meaning
  • querying and filtering through API
  • in general, lots of API which is fully available from API Reference

While initially doing due diligence on this technology and diving into it, I became full of excitement about OrderCloud from what I saw. For me, it feels like a very mature product (it is, in fact), with decent documentation, great training, and very good architecture. It is a proper MACH architecture which you can fit into pretty much anything. You can make a client storefront with zero backend coding, purely front-end!
From a feature-set point of view, it is also unbelievably flexible for both B2B and B2C.

I decided to invest more time into learning OrderCloud and plan to make this platform one of my main technologies for the coming year, or at least part of the Sitecore Triangle: XM Cloud - OrderCloud - Content Hub.

I created a Sitecore OrderCloud Telegram channel where I share everything related to this platform. If you use the Telegram messenger, you'll definitely want to join by following this link. Otherwise, it is still possible to read it in a Twitter-like format in the browser using another link.


My SUGCON Presentation: The Mastery of Sitecore Upgrades

I am proud to have been chosen as a presenter at SUGCON 2022, which took place in Budapest.

This blog post contains the supporting material for my topic "The Mastery of Sitecore Upgrades".

Content


Why upgrade?

Why do we upgrade Sitecore given it is not that quick and easy? The answer is simple – a carrot and a stick motivation!

The Stick: every new year, mainstream support expires for one or a few versions. This means that while Sitecore still provides security updates and fixes for them, it no longer supports development and compatibility queries. Read more details at Sitecore Product Support Lifecycle.

Now, the Carrot: companies update due to the new features or new experiences. For example, you may read a detailed article I wrote about those in the latest version - New features in 10.2 you must be aware of.

Also, having the latest version is especially valuable in light of the Composable DXP, which is already taking shape:


Planning the upgrade

Proper planning is the key to success. How do we perform the planning and what should be considered?

1. Upgrading vanilla Sitecore would cost you minimal effort, but the more custom code you have, the more labor you should expect to commit. How much is it customized? This will affect your timescale and impose additional risks that are better known in advance.

2. If you get access to any of the existing documentation - that would be a perfect place to start. And it will likely give some answers to the previous point.

Find the details about all environments, code branches, and which code branch is deployed to which environment. Also, you will be interested in the details about existing CI/CD pipelines.

3. Find the custom configurations made into the Sitecore.

4. Finally, talk to the stakeholders! Who, if not they, knows the details and is most interested in success!

5. It sounds logical that the more versions you jump through – the more complicated the upgrade process will be.

However, that isn’t a rule of thumb - some versions have very minor changesets and could be updated with minimal effort. Others - the opposite, could be difficult as hell.

For example, that is the exact reason 9.3 stands out from this version chain. It happened to take on more breaking changes, deprecations, and internal housekeeping improvements than any other version.

6. Because of that, one of the most valuable planning activities is investigating the Release Notes for every single version on your upgrade path. Pay special attention to two sections: Deprecated/Removed and Breaking changes.

7. Identify the functionalities that are no longer supported, including any third-party add-ons/modules, and find their alternatives.

8. With new platform features, the license file format changes over time. Even with the most permissive license, your old license file may be incompatible with a newer platform. That, for example, happened when version 9 was released.

It is good to care about that in advance, as the process takes some time. Please check that with your account manager.

9. Every solution is unique, and so is every team. Estimates must consider these factors, along with the relevant prior experience of your team. Make sure you will have the resources for the whole duration of the upgrade, and also take care of a fallback plan in case someone gets ill or leaves (also known as the "bus factor").

This diagram shows a very optimistic view of a team of two experienced Sitecore professionals performing the upgrade in 4 typical sprints. While sharing a pool of tasks, one of them primarily focuses on the codebase, while the other cares more about the DevOps side.

Once again, this is not guidance or any sort of assessment - just a high-level view of the team's activity through the sprints.

10. Also, take a look at the Sitecore compatibility guide to ensure all your planned tech specs stay valid.


Recap legacy tools

Before heading to the upgrade tactics, let's quickly recap some upgrade tools we have had over time and how they performed.

1. Express Migration Tool

You can use Sitecore's Express Migration Tool to move data from your old instance to a new one. It supports migrating from older versions up to the Sitecore 9 initial release. The tool copies items and files from one Sitecore instance at a time and supports migrating from remote servers.

2. The Update Center

The Update Center was introduced in Sitecore 9.0 Update 2 and is valid up to Sitecore 10.1.

Through it, you can find, download, install, and manage updates and hotfixes for the Sitecore platform and Sitecore modules. The Update Center accesses a package management service provided by Sitecore, or you can install and provide such a service yourself.

The Sitecore Update Center uses this URL of the Package Management Service to check for the Sitecore updates:

<add name="PackageManagementServiceUrl" connectionString="https://updatecenter.cloud.sitecore.net/" />


Upgrade tactics

Next, let’s talk through some upgrade tactics that can significantly help your upgrade process.

1. Capturing states and being able to switch between them as quickly as possible is crucial for productivity when working on instance upgrades. Small mistakes are inevitable, and we actually need some sort of Undo button for the whole process.

We already have this for the codebase - git, with its ability to switch between branches.

But what about the published sites, indexes, certificates, and the rest of the minor things that matter? Even database backups are not that fast and straightforward.

The one and only solution that comes to mind is capturing the whole state of a working machine - something one could have with a virtual machine. Luckily, we have at least one suitable technology:

2. Hyper-V

Actually, this ticks all the boxes:
  • free and included in Win Pro
  • extremely fast with SSD
  • maximum OS integration
  • move backups between hosts
  • perfect networking options
  • remote management
  • universal virtual drives

However, you will need a relatively fast SSD, and you should mind the disk space - it easily gets eaten up by multiple snapshots as you progress. A 1TB drive is probably the minimum you'd need for productive work on an upgrade.

3. Leverage PowerShell

Now we are confident about reverting bad things quickly. But what about redoing successful steps? With a lot of monotonous and repetitive actions, one can lose a lot of valuable time on things that will be re-done again and again.

Let’s think about a case when someone spent the last 2 hours upgrading NuGet references in a Helix solution with 100 projects in it. Unfortunately, at the last minute something went totally wrong. Restoring the latest Hyper-V checkpoint will take less than a minute, but should one repeat those monotonous steps again and again?

One could think: why not make restore points more often? Well, creating a restore point for every minor step seems overkill here, and Hyper-V would also quickly eat up all of your disk space.

It is fairly difficult to perform an upgrade of everything on the first attempt. We will have to repeat the successful steps again and again, so the only remaining matter is automating them. PowerShell, built into the OS, is the best and most natural fit for such activities. You can leverage PowerShell for:

  • Any sort of automation, system tasks and jobs
  • Mass replace configuration and code with RegEx
  • Backup and restore databases and web application
  • Managing your infrastructure: either local or cloud
  • Leveraging SPE Remote to operate your instance: Managing content, security, publishing, and indexing
So what would it be a good idea to write these scripts for? In addition to the above points, consider scripting activities that take long to complete and/or result in a minor textual change in files.

Returning to the above case of upgrading NuGet references in a large solution: it took 2 hours to complete, yet on disk it ends up as just a few lines of difference in a project or packages file. That means that after comparing the "before and after" difference (with a diff tool), it is fairly easy to use PowerShell to process all such files with a regular expression replace. Not being a master of regular expressions, you're unlikely to succeed on the first attempt, but with a quick and easy checkpoint restore option you'll master it quickly. You will also end up with useful artifacts: successful scripts that can be reused for future upgrades with minimal or no alterations.
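As a sketch of such a regular-expression replace (shown in Python for brevity - in practice it would typically be a PowerShell loop using the -replace operator; the package name and versions below are just examples):

```python
import re
from pathlib import Path

def bump_package_version(root: str, package: str, new_version: str) -> int:
    """Regex-replace the Version attribute of one PackageReference across all
    .csproj files under root; returns how many files changed. Assumes the
    standard SDK-style <PackageReference Include="..." Version="..." /> shape."""
    pattern = re.compile(
        r'(<PackageReference\s+Include="%s"\s+Version=")[^"]+(")' % re.escape(package)
    )
    changed = 0
    for csproj in Path(root).rglob("*.csproj"):
        text = csproj.read_text(encoding="utf-8")
        new_text = pattern.sub(r"\g<1>%s\g<2>" % new_version, text)
        if new_text != text:
            csproj.write_text(new_text, encoding="utf-8")
            changed += 1
    return changed
```

Run it, diff the result against a checkpoint, and keep the script - the same call with different arguments serves the next upgrade.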

I wrote a good long article about PowerShell best practices and all the aspects of leveraging this built-in scripting language. It is a long read, but highly valuable.

4. Quick Proof-of-Concept

In some cases - when you are performing a jump to the following version (or maybe a few) and your solution is relatively simple, staying close to vanilla Sitecore - it makes sense to pitch a PoC. It will either give you a quick win, or at least identify most of the traps and things to improve. Knowing those opens up better opportunities for planning.

Ideally, it should be a one-day job for a single professional.

5. Try to get a recent backup of a master database

It is highly desirable to obtain a relatively recent backup of the master database. I do realize it is not always possible: sometimes due to the large size, or more often because of the organization's security policy, especially when you're working for a partner agency or as an external contractor.

But why do you need that?

First of all, you restore it along with the existing solution to verify that the codebase you've obtained performs exactly the same on the published website(s). Of course, if you work for the given organization and have been in charge of the project under upgrade from day one, this step can be skipped, as you already know the codebase, the infrastructure, the history of decisions taken over the project lifetime, and most of the potential traps.

But more importantly, you will need to upgrade and use this database with the new solution instance to ensure it also looks and works as it did before. That will help you eliminate most of the unobvious bugs and errors before regression testing starts, which saves lots of time.

A good example from my own experience: I performed an upgrade from 8.2.7 to 10.0.1 and everything seemed to be successful. Using an upgraded full master database, I was visually comparing the old sites running on the existing production with those I had locally upgraded to the latest Sitecore. By accident, I spotted that some small SVG icons were missing from the upgraded site. Escalating this issue led me to one of the breaking changes that would otherwise be difficult to spot: the codebase builds, there are no runtime errors, and no obvious issues in the logs.

6. What if a newer Sitecore release comes while you’re upgrading?

The upgrade process takes a decent amount of time, and it may happen that a new Sitecore version is released in the meantime. My best advice is to stick to the newer version. Just take it!

First of all, it may not be an easy task from an organizational point of view, especially if you are working for a partner rather than the end client and the version has already been signed off. It will take lots of effort to persuade stakeholders to consider the latest version instead, and you have to motivate your argument pretty well.

Upgrading the solution to the newer build when you are halfway done upgrading to the previous one adds some time, but costs several times less than upgrading after you go live. It is a good idea in most cases, with the only exception being a major component getting deprecated in the newer version (for example, I was once upgrading to 9.0.2 when 9.1 was released, but due to heavy usage of WFFM it was not possible to easily jump to 9.1, as WFFM was deprecated there). Otherwise, we would have chosen the newer version.

Choosing a newer version could in fact cost you even less. That happened to me while upgrading to 10.0.1 when the newer 10.1 got released. Switching from 10.0.1 to 10.1 would have cost me very little effort and would also have reduced operational effort thanks to the specific advantages of Sitecore 10.1 - but unfortunately, the decision chain to the stakeholders was too long (I was subcontracted by an implementing partner), so it never happened.

7. Take comprehensive notes

Last but not least, this advice may seem obvious, but please take it seriously. Document even minor steps, decisions, and successful solutions. You will likely go through them again, maybe several times. Save your scripts as well.

An even better job would be wrapping your findings into a blog post and sharing the challenge and solution with the rest of the world. Because our blogs get indexed, you'll help many other people. Several times in my career, while facing a challenge and googling it, I have found my own blog posts with the solution.


Content migration

When asking my colleagues what the biggest challenge of their upgrade experience was, almost everyone responded: migrating the content.

1. Content maintenance

Depending on your level of ownership, it may be a good idea to undertake content maintenance. This is optional but desirable.

Remove legacy versions - it is a known good practice to keep fewer than 10 versions per item, which improves content editing performance. I once saw 529 versions of a site's Home item and doubt the authors needed them all.

You can remove legacy versions manually by running an SPE script (Rules Engine Actions to Remove Old Versions in the Sitecore), or do it on a regular basis by creating a rule in the Rules Engine to automatically keep the number of versions below the desired limit (PowerShell Extensions script to delete unused media Items older than 30 days).

Clean up the Media Library, as it is typically the most abused area of content. Untrained or simply rushing editors often place content without any care. Almost every solution I've seen had issues with the media library: either a messed-up structure, lots of unused and uncertain media, or both. These items are heavyweight and bloat the database and content tree without bringing any benefit. There is a PowerShell Extensions script to list all media items that are not linked from other items, so you can revise the list and get rid of the unwanted ones.

You may also want to clean up broken links prior to the upgrade. Sitecore has an admin page for removing broken links; you can find it in the Admin folder: RemoveBrokenLinks.aspx - just select the database and execute the action.

From 9.1, the default Helix folders come OOB with the databases, so their IDs became universal. But lots of Helix solutions were implemented before that, and their folder IDs do not match those coming OOB in later versions.

For example, an existing solution may have folders like /sitecore/Layout/Renderings/Feature that did not come out of the box and were therefore serialized - and, even worse, they have IDs different from the now-universal OOB ones.

You’ll need to rework serialization to match the correct parent folders coming OOB.


2. Content migration options

In fact, you've got plenty of options to migrate the content. Let's take a look at what they are.

Sitecore Packaging

This is the default way of occasionally exchanging items between instances, known to every developer. However, it is extremely slow and does not work well with large packages of a few hundred megabytes.

Sidekick

It is free and has a Content Migrator module that uses the Rainbow serialization format to copy items between instances in a multi-threaded way. Unlike packages, it is super fast. Keep in mind that both servers need to have Content Migrator installed.
Sitecore Sidekick (by Jeff Darchuk)

Razl

Offers quite an intelligent way of copying items between the servers. However, this software is not free and requires a license to run.

Sitecore Razl: Tool for Compare and Merge


Sitecore PowerShell Migration

There is a script to migrate content between Sitecore instances using Sitecore PowerShell Extensions (by Michael West), which also leverages Unicorn and Rainbow serialization. This script is extremely fast, but it requires SPE with Remoting enabled on both instances.

Move an item to another database from Control Panel

If none of the above options works for you, there is a built-in tool for copying items between databases that is part of the Control Panel. The trick is to plug the target database in as an external database to the instance so that it becomes visible to Sitecore; then you'll be able to copy items using this option. It has limited functionality and works reasonably slowly, but it allows copying data both ways.

If you are migrating content to version 10.1 or newer, there is a much better way of dealing with a content and database upgrade, which I will explain in detail below.


3. Content Security

There are a few options to transfer Users and Roles from one instance to another.

Sitecore Packages

You can use the standard Package Designer from the Development Tools menu in the Sitecore Desktop. You can then add the Roles and Users that you want to package up for migration, generate the package, download it and then install it at the target instance in the same way you would do for content.

Serialization

An alternative is to use the Serialization option from within the User Manager and Role Manager applications. The users and roles will be serialized to (data)/serialization/security folder. You can copy this from the source instance to the target instance and then use the revert option.

For both of these options, the user's password is not transferred and instead reset (to a random value when using Sitecore Packages or to "b" when using serialization).

How to deal with passwords?

You can then either reset the passwords for the users manually (from the User Manager), or the users themselves can reset their passwords via the "forgot your password" option on the login screen, assuming the mail server has been configured and they receive the password recovery email.

You also have the option to use the admin folder TransferUserPasswords.aspx tool to transfer the passwords between the source and target databases. To do so, you will need both connection strings, and these connections must have access enabled. Read more about Transferring user passwords between Sitecore instances with TransferUserPasswords.aspx tool.

Raw SQL?

Without the required access, you could do that manually with SQL. The role and user data are stored by the ASP.NET Membership provider in SQL Server tables in the Core database. Please note that in Sitecore 9.1 and newer, the Membership tables can be extracted from Core into their own isolated Security database (moving Sitecore membership data away from the Core db into an isolated Security database).

So it is possible to transfer the roles and users with a tool such as Redgate SQL. You will need to ensure you migrate user data from the following tables:

aspnet_Membership
aspnet_Profile
aspnet_Roles
aspnet_Users
aspnet_UsersInRoles
RolesInRoles

You may adjust and use a SQL script to migrate content security when both the source and target DBs are on the same SQL server. Also, follow up on the StackExchange answer about Moving users/roles/passwords to a new core database (by Kamruz Jaman).


4. Dynamic placeholders format

From version 9.0, dynamic placeholders became an integral part of the platform, whereas previously we had to use third-party modules. However, Sitecore has implemented its own format for addressing components: {placeholder key}-{rendering unique suffix}-{unique suffix within rendering}

That means your previous layouts will not be shown on the page unless you update the presentation details of the affected renderings to the new format. For example, if you have a component on a page in the old dynamic placeholder format, you need to change it from:

main_9d32dee9-17fd-4478-840b-06bab02dba6c

to the new format, so it becomes:

main-{9d32dee9-17fd-4478-840b-06bab02dba6c}-0

There are a few solutions for approaching that:

  • with PowerShell Extensions (by Rich Seal)
  • by creating a service (or admin folder) page: one, two

Also, you no longer need the third-party implementation module, so remove it (both the DLL and its config patch file). Its functionality was "moved" into the built-in Sitecore MVC libraries, which you of course need to reference from web.config:
<add namespace="Sitecore.Mvc"/>
<add namespace="Sitecore.Mvc.Presentation"/> 

Read more: an official guide on dynamic placeholders


Upgrading codebase

The central part of my presentation is about upgrading the actual codebase and the challenges that come with it. To start with, I want to share two useful tips for identifying the customizations of a given project.

Trick 1 on how to identify customization

In order to estimate the amount of customization in a solution, I use the following quick trick.

First of all, I need to have a vanilla version of Sitecore that the existing solution is using.

After it is up and running, I go to the webroot, run git init, and then commit everything. It's just a local git repository initialized within the webroot - I do not push it anywhere, and I delete the git files immediately after completing this exercise.

Next, I build and publish the codebase as normal, so that all the artifacts land on top of vanilla Sitecore in the webroot.

Finally, my git tool of choice shows the entire delta of all the customizations in that solution. With this trick I can also easily see all web.config transforms and any mismatched or altered DLLs, for example those provided with hotfixes.

That trick gives me the overall feel of how much that instance varies from the vanilla one.

Trick 2 on how to find customization with ShowConfig

While trick 1 mostly aims to show the overall amount of customization, this one is more specific about changes in configuration. It will show all the patches and config alterations made on top of the vanilla instance, so that a more precise estimate can be made.

We already have a vanilla Sitecore instance of the legacy version from the previous step. What we need to do is run the showconfig.aspx admin folder tool to generate the combined configuration, and save it into an XML file.

Next, once again perform a build and publish as normal. Once the instance is up and running, run the showconfig tool again and save the output into a different file.

Now, having the actual solution configuration, we can compare it against the vanilla combined configuration for the same version of the platform using any diff tool. I recommend the advanced Beyond Compare.

For your convenience, you could also extract the whole delta between both into a single config file – it will be a valid patch!

3. Incorrect DLL(s) coming from the deployment

While upgrading the codebase you might also find yourself facing dependency hell, with referenced assemblies not playing well with each other. This can take an age to fix.

Every Sitecore release contains an Assembly List in the Release Information section, featuring the complete list of assemblies shipped with that release (for example, exactly 400 assemblies are shipped with the 10.2 platform).

DLLs must exactly match their counterpart vanilla DLLs in version and also in size, unless it is a hotfix DLL.

With the first trick, you can identify all the mismatched DLLs, and with the second trick - where those are referenced from.

You may also perform a solution-wide search for a specific faulty assembly and review all the results along with their sizes in bytes. At least one will mismatch - that will point you to the specific project the faulty one is coming from. I would recommend my file manager of choice, Total Commander, which is perfect for such operations.

If the previous step reveals not just one but lots of faulty referenced assemblies, you can rely on PowerShell to RegEx-replace all the invalid references with the correct ones. It may have a steep learning curve initially, but very soon this technique will start saving you lots of time.

4. Troubleshoot the assembly bindings

This issue occurs when the dependencies used in your solution projects mismatch the DLL versions actually shipped with vanilla Sitecore. For example, your feature module uses the Sitecore.Mvc dependency, which in turn relies on System.Web.Mvc. When you add Sitecore.Mvc using the NuGet package manager, it will pull the latest version of that dependency - not the one that was current and referenced at the time the Sitecore build was released. Or, if you add a library ignoring dependencies, you may end up picking the latest dependency yourself.

There are two ways out of this. One is strictly hard-referencing your project dependencies to match those in the webroot. Troubleshooting this way becomes really annoying when upgrading a solution with a large number of projects (the most I have seen was 135), and in rare cases it is not possible at all.

Another approach is to adjust the particular assembly binding to the latest version. Since you should not modify the vanilla config manually, you may employ a config transform step as part of your local build & deploy script (see using web.config transforms on assembly binding redirects).
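
As an illustration, the resulting binding redirect in web.config typically looks like this (the assembly and version numbers here are just an example - match them to the DLLs actually present in your webroot):

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <!-- force all older references to resolve to the version shipped in \bin -->
      <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-11.0.0.0" newVersion="11.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```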

The situation has improved with 10.1 as libraries have been updated to the latest versions and bindings became less strict.

5. NoReference packages

Sitecore no longer publishes the NoReferences NuGet packages. For example, if you install the 9.3 packages you will get a lot of dependencies installed. Not what we wanted.

The correct way, in this case, is to use the Dependency Behavior option of the NuGet Package Manager.

Setting the dependency behavior to "IgnoreDependencies" will install only the package itself, without any of its dependencies.

This function is also available as a parameter switch in PowerShell.
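
For example, in the Package Manager Console (the package name and version are illustrative):

```
Install-Package Sitecore.Kernel -Version 9.3.0 -IgnoreDependencies
```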

6. Migrate from packages.config to PackageReference

Just like project-to-project references and assembly references, PackageReferences are managed directly within project files, rather than in a separate packages.config file.

Unlike packages.config, PackageReference lists only those NuGet packages you directly installed in the project.

Using PackageReference, packages are maintained in the global-packages folder rather than in a packages folder within the solution. That results in faster performance and less disk space consumption.

MSBuild allows you to conditionally reference a NuGet package and choose package references per target framework, configuration, platform, or other pivots.
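
A minimal sketch of what this looks like in a project file (package names and versions are illustrative):

```xml
<ItemGroup>
  <!-- direct dependencies only; transitive packages are resolved automatically -->
  <PackageReference Include="Sitecore.Kernel" Version="10.2.0" />
  <!-- a conditional reference, applied only for a specific target framework -->
  <PackageReference Include="Microsoft.Extensions.DependencyInjection"
                    Version="2.1.1"
                    Condition="'$(TargetFramework)' == 'net48'" />
</ItemGroup>
```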

So if you are convinced - perform the update either manually from the right-click context menu in Visual Studio, or by running the migrate from packages.config to PackageReference script by Nick Wesselman.

7. Update Target Framework

.NET Framework 4.8 is the final version of the Framework and has been used from Sitecore 9.3 onwards.

Migrating from earlier solutions will require you to update Target Framework for every project within a solution. I personally prefer to do that in PowerShell (PowerShell way to upgrade Target Framework), but there is a Target Framework Migrator Visual Studio extension that also bulk-updates Target Framework for the whole solution.

8. Update Visual Studio

But of course, to benefit from this extension you may need to update Visual Studio itself first, so that it has the required Target Framework. Last but not least, install the latest VS Build Tools, which can be updated on their own.

You will still likely be keen to get the latest VS; 2022 worked well for me, although it was not yet officially supported. Failing that, get at least VS 2019, which also has nice features.

Helix solutions may have up to a hundred projects, which affects overall performance. Solution filters let you load only the projects you're working with, keeping the rest unloaded from VS but still operable. It works via an SLNF file that references your solution together with an array of whitelisted projects.
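
A solution filter is just a small JSON file next to the .sln; a sketch with hypothetical project paths:

```json
{
  "solution": {
    "path": "MySitecoreSolution.sln",
    "projects": [
      "src\\Foundation\\DependencyInjection\\code\\Foundation.DependencyInjection.csproj",
      "src\\Feature\\Navigation\\code\\Feature.Navigation.csproj"
    ]
  }
}
```

Opening the .slnf in Visual Studio loads only the whitelisted projects, while build and navigation still work against the full solution.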

You can also now perform one-click code cleanups and also search for Objects and Properties in the Watch, Autos, and Locals Windows.

9. Unsupported third-party libraries

The solution you're upgrading may reference some third-party libraries that have been discontinued. In that case, you need to investigate each of those libraries and decide what to do.

An example from my experience: I came across the Sitecore.ContentSearch.Spatial library, which performed geo-spatial searches. Its source code is available, but it has not been updated for five years. Meanwhile, Lucene search has been removed from Sitecore, and this library became irrelevant, as it hard-references Lucene.

As an outcome, the related feature of the solution was temporarily disabled, to be rewritten later using Solr spatial search.

10. Dependency injection

Microsoft.Extensions.DependencyInjection has been built into Sitecore since version 8.2.

It is fast and reliable and supports pretty much everything you may need. Just use this one!

Read more: Sitecore Dependency Injection and scoped Lifetime (by Corey Smith)
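
Registering your own services is done by patching a configurator into the <services> node; the class and assembly names below are hypothetical, and the class implements Sitecore.DependencyInjection.IServicesConfigurator:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <services>
      <!-- the configurator's Configure(IServiceCollection) method does the registrations -->
      <configurator type="MyProject.DependencyInjection.RegisterDependencies, MyProject" />
    </services>
  </sitecore>
</configuration>
```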

11. Glass Mapper

You will need to update Glass Mapper to version 5, unless that is already done. The package naming has changed to reflect the version of Sitecore it works with, and there are three related packages:

  • Glass.Mapper.Sc.{Version}
  • Glass.Mapper.Sc.{Version}.Mvc
  • Glass.Mapper.Sc.{Version}.Core

Glass has lots of changes in v5, the most significant of which is a new way of accessing content from Sitecore. GlassController becomes obsolete, and the same is true for GlassView, which was dropped in favor of a more elegant syntax:

<!-- instead of GlassView -->
@inherits GlassView<ModelClass>

<!-- it becomes -->
@model ModelClass

Instead of ISitecoreContext, use the new IMvcContext, IRequestContext, and IWebFormsContext abstractions:

public class FeatureController : Controller
{
    private readonly IMvcContext _mvcContext;

    public FeatureController()
    {
        _mvcContext = new MvcContext();
    }

    public ActionResult Index()
    {
        var parameters = _mvcContext.GetRenderingParameters();
        var dataFromItem = _mvcContext.GetDataSourceItem();
        return View(dataFromItem);
    }
}

Glass now lazy-loads by default, and the [IsLazy] attribute on model classes has been removed. Other removed attributes are [SitecoreQuery] and [NotNull].

You will need to remove these model properties and re-implement them as methods, getting rid of the attributes (see Lazy loading after upgrading to Glass Mapper 5).

After upgrading Glass I faced a bunch of runtime errors that were difficult to identify. Eventually, I realized they happened due to a lack of virtual property modifiers in the model classes. With old versions of Glass Mapper, model properties would still map correctly even without the virtual modifier - but not any longer.

[SitecoreChildren(InferType = true)]
public virtual IEnumerable<Model> InnerChildren { get; set; }

Tip: when installing a higher version of Glass Mapper from NuGet, make sure the files App_Start\GlassMapperSc.cs and App_Start\GlassMapperScCustom.cs are not overwritten with the defaults. Also, they should not appear in feature modules, but only in the Foundation module responsible for Glass Mapper.

Before upgrading Glass Mapper, I would also recommend reading through the great blog posts about the changes.

12. Custom Pipelines

Sitecore tries to reduce breaking changes where possible, but sometimes they are unavoidable. As with all Sitecore upgrades, some custom code and configuration will need to be updated to be compatible with the new version.

The HttpRequestArgs.Context property has been removed in favor of HttpRequestArgs.HttpContext - in fact, it has just been renamed. That resulted in a breaking change for all of your custom pipeline processors, which you need to rewrite. I used my preferred PowerShell RegEx-replace one-liner to update them all in one go:

gci -r -include "*.cs" | foreach-object {$a = $_.fullname; (Get-Content -Raw $a -Encoding UTF8) | `
foreach-object {$_ -replace 'args\.Context\.','args.HttpContext.' } | `
set-content -NoNewLine $a -Encoding UTF8}

Custom processors patched into a pipeline now require the mandatory resolve attribute:

<CustomPipeline>
    <processor type="ProcessorType, Library.DLL" 
        patch:instead="processor[@type='Sitecore.Mvc.Pipelines.Response.RenderRendering.GenerateCacheKey, Sitecore.Mvc']" 
        resolve="true" />
</CustomPipeline>

13. Link Manager

One more change in version 9.3 concerns LinkManager and the way we get the default options for building a URL:
var options = LinkManager.GetDefaultUrlBuilderOptions();
var url = LinkManager.GetItemUrl(item, options);

Long story short, its internals have been rewritten for the better and are mostly hidden from our eyes. What we need to know is the updated way of patching the default URL options. The old way of patching:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <linkManager defaultProvider="sitecore">
      <providers>
        <add name="sitecore">
          <patch:attribute name="languageEmbedding">never</patch:attribute>
          <patch:attribute name="lowercaseUrls">true</patch:attribute>
        </add>
      </providers>
    </linkManager>
  </sitecore>
</configuration>

now became more elegant:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <links>
      <urlBuilder>
        <languageEmbedding>never</languageEmbedding>
        <lowercaseUrls>true</lowercaseUrls>
      </urlBuilder>
    </links>
  </sitecore>
</configuration>

Read more about LinkManager changes to link generation in 9.3 (by Volodymyr Hil).

14. Caching changes

The solution I was upgrading had a publish:end event configured to clear the HTML cache for different site instances:
<events>
  <event name="publish:end">
    <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
      <sites hint="list">
        <site hint="apple">apple</site>
      </sites>
    </handler>
  </event>
  <event name="publish:end:remote">
    <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
      <sites hint="list">
        <site hint="apple">apple</site>
      </sites>
    </handler>
  </event>
</events>

Since 9.3, this behavior became the default after a publish. It now actually works the other way around: you have to explicitly disable HTML cache clearing if you need to.

The previous configuration will break, so you have to remove that whole section and use the site attribute instead:
<site name="apple" cacheHtml="true" preventHtmlCacheClear="true" … />

15. Forms

WebForms for Marketers, one of the most questionable modules, was deprecated in Sitecore 9.1 in favor of the built-in Sitecore Experience Forms. That raised the question of how to keep the data and migrate the existing forms while performing a platform upgrade.

Thankfully, there is a tool to do exactly that: convert WFFM forms and data to Sitecore Forms. The WFFM Conversion Tool (by Alessandro Faniuolo) is a console application that provides an automated solution for converting and migrating Sitecore Web Forms For Marketers (WFFM) form items and their data to Sitecore Forms. It takes WFFM data from a SQL or MongoDB source database and writes it into the destination - the Sitecore Experience Forms SQL database.

16. Support Patches & Hotfixes

You will likely find some hotfixes or support patches that have been applied to your solution over time. Sitecore Support addresses issues by providing support DLLs and releasing hotfixes.

Each of them sorts out a certain issue, and you need to investigate whether it has been resolved in the version you're upgrading to. Use the 6-digit support code to find more details via the search on the Sitecore Knowledge Base website.

For hotfixes that keep the original name of the DLL they replace, the knowledge base code can be found in the file properties dialog.

Once an issue is confirmed as resolved, you may remove the corresponding DLLs along with any related configuration. This post aims to give an idea of how to identify hotfix DLLs within the Sitecore \bin folder, so that you can approach each of them individually.

17. SXA

Is not an integral part of the platform and has its own upgrade guidance released along with each version, so I will only touch on it briefly.

As with the rest of the Sitecore platform, most of the SXA changes took place in the 9.3 release. For me, the biggest change was the deprecation of NVelocity in 9.3 in favor of Scriban templates, which affects Rendering Variants - one of the most powerful features of SXA.

Also, since that version it has version parity with the XP or XM release it ships with.

18. Solr

Both Lucene and Azure Search became obsolete and were removed. With Sitecore 10, Solr became the recommended technology for managing all of the search infrastructure and indexes for Sitecore. Managed Solr allows your developers to implement faster and spend more time building a better search experience, and less time supporting search infrastructure.

Read more: SearchStax and Sitecore: The top integration benefits for search optimization and personalization

Please migrate your solution; it might take significant effort.

Consider using SearchStax, which takes away the pain of installation, failover, security, maintenance, and scaling. SearchStax provides teams with solutions to the most challenging issues of developing with Solr.

There are also a few common issues met while upgrading Solr. Sometimes you come across the case of the core name mismatching the index name. This is a fairly simple issue and can be fixed with a config patch:

<configuration>
  <indexes>
    <index id="sitecore_testing_index">
      <param desc="core">$(id)</param>
    </index>
  </indexes>
</configuration>

The second issue relates to the configuration that defines a Solr index. If you are indexing all fields, that setting must be wrapped in a documentOptions tag of the correct type:

<defaultSolrIndexConfiguration>
  <documentOptions type="Sitecore.ContentSearch.SolrProvider.SolrDocumentBuilderOptions, Sitecore.ContentSearch.SolrProvider">
    <indexAllFields>true</indexAllFields>
  </documentOptions>
</defaultSolrIndexConfiguration>

19. Custom Content Databases

Custom Sitecore databases now require a Blob Storage setting: you have to configure Blob Storage for every custom database you use, in the same manner as it is done for the databases coming out of the box.

20. Upgrading xDB and analytics

After the introduction of xConnect, the question arose of what to do with all the existing analytics data in MongoDB when upgrading to 9.X or 10.X. To address that, Sitecore created the xDB Data Migration Tool, which works on top of the Data Exchange Framework: it reads from MongoDB and writes to the xConnect server.

Keep in mind that if you have years of data in production, you could be looking at gigabytes of data to migrate. Be sure to take the appropriate precautions, like preventing regular app pool recycles, as the process can take days. If possible, discuss trimming the MongoDB data, or not importing it at all if it's not used. The tool uses a custom collection model that has to be deployed to both the xConnect service and the xConnect indexer service. After rebuilding the xDB search index, you will get the data in the Experience Profile.
It has an optional verification feature, which is in fact a standalone database that keeps a record of each entity submitted to xConnect.


Changes in Sitecore 10.X

1. Containers

Containers are immutable, which means that any changes to the file system within a container will be lost as soon as the container restarts.

When switching to containers your codebase will remain the same; however, there are minor changes. Debugging now works a little differently: we need to choose the relevant container and then pick a process within it. And instead of publishing artifacts into a web folder, we now build images and run containers from them.

With Docker, you no longer deploy just your application code. A container is now the unit of deployment, which includes the whole environment: OS and application dependencies. Therefore, your build process will have to be extended to build Docker containers and push them to the container registry.
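
A minimal sketch of such a build step (the base image tag and paths are assumptions; a real solution image is usually built in several stages):

```dockerfile
# escape=`
ARG BASE_IMAGE=scr.sitecore.com/sxp/sitecore-xm1-cm:10.2-ltsc2019
FROM ${BASE_IMAGE}
# lay the published solution artifacts on top of the vanilla CM webroot
COPY .\artifacts\webroot\ C:\inetpub\wwwroot
```

The CI pipeline then runs docker build with a tag pointing at your container registry, followed by docker push - shipping the whole environment, not just the code.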

Both Azure and AWS are suitable for running Sitecore in containers. They both provide managed Kubernetes services and other container hosting options.

Besides containers, you need to consider other system components: SQL, Solr, and optionally Redis.

Luckily, Sitecore comes with health check endpoints (/healthz/live and /healthz/ready) out of the box, which you can use for this purpose.
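
In Kubernetes, those endpoints plug straight into the pod spec probes; a sketch for a CM container (the image reference, port, and timings are illustrative):

```yaml
containers:
  - name: cm
    image: myregistry.azurecr.io/my-cm:latest
    # restart the container if the role stops responding
    livenessProbe:
      httpGet:
        path: /healthz/live
        port: 80
      timeoutSeconds: 300
      periodSeconds: 30
    # only route traffic once the role is fully initialized
    readinessProbe:
      httpGet:
        path: /healthz/ready
        port: 80
      timeoutSeconds: 300
```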

Kubernetes on its own will impose quite a steep learning curve.

You can learn more tips on migrating your Sitecore solution to Docker.

2. Upgrade the database

Databases were not 100% compatible between versions. Previously (before 10.1), one had to run an upgrade script against Core and Master in order to attach both to a vanilla target instance and progress with the rest of the upgrade.

The update script ensured the schema and default OOB content got updated by applying all the intermediate changes between the two versions. This article explains in more detail how we upgraded databases before 10.1.

The idea under consideration was that shipping empty databases with the vanilla platform would eliminate the need for everyone who updates the DB to operate at an SQL admin level. But every version has its own unique set of default OOB items - where do we keep them?

Because of the container-first way of thinking, there was a clear need to store those somewhere at the filesystem level, so it was a matter of choosing and adopting a suitable data format. Protobuf from Google was a perfect choice, ticking all the boxes; moreover, it is a very mature technology.


3. Items as Resources

With that in mind, now having a database full of content, you can upgrade the version without even touching it. The SQL databases stay as they are, and the default content gets updated by simply substituting the Protobuf resource files.

Sitecore called this approach Items as Resources.

I would strongly recommend looking into the Items as Resources plugin for the Sitecore CLI - instead of going through all the trouble of making asset images, you can create Protobuf files that contain all your items and bake them directly into your container image.

I wrote a comprehensive explanation about everything you wanted to ask about "Items-as-Resources" coming with the new Sitecore 10.1 - hope it answers most if not all questions.

You can create your own resource files from *.item serialization using the Sitecore CLI Items as Resources plugin (CLI version 4.0 or newer). You cannot delete items from a resource file - it is read-only - but this trick (by Jeroen De Groot) helps you "remove" items at the provider level.
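
For reference, creating a resource file from serialized items with the CLI looks roughly like this (the plugin name and output path should be verified against your CLI version):

```
dotnet sitecore plugin add -n Sitecore.DevEx.Extensibility.ResourcePackage
dotnet sitecore itemres create -o ./MyModule.Items
```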


4. Sitecore UpdateApp

But how do we upgrade, let's say, 8.2 databases to 10.2?

From 10.1 onwards, databases ship with no default content, so it becomes a matter of removing the default items from the old SQL databases while leaving the rest of the content untouched. The default content now comes from the item resource files placed in the App_Data\Items folder.

For upgrading to Sitecore 10.1 with the UpdateApp Tool, it comes down to purely replacing resource files, which naturally fits the container filesystem way of doing business.

That is where a new tool comes into play, please welcome Sitecore UpdateApp Tool!

This tool updates the Core, Master, and Web databases of the Sitecore Experience Platform. You must download and use the version of the tool that is appropriate for the version and topology of the Sitecore Experience Platform that you are upgrading from.

It also works with the official modules' resource files for upgrading Core, Master, and Web (SXA, SPE, DEF, Horizon, both CRM connectors, etc.).

Sitecore UpdateApp Tool 1.2.0

5. Asset images

To containerize modules, we now use Sitecore asset images instead of packages as we did before. But why?

First of all, you won't be able to install a package that drops DLLs, due to the instance's \bin folder being locked by the IIS user.

Secondly, containers are immutable and are assumed to be killed and re-instantiated, so any changes made inside a container will go away with it.

Also, since a module is a logical unit of functionality with its own lifecycle and assets, it should be treated by the "works together, ships together" principle. As a module maintainer, you also provide sample usage Dockerfiles for the required roles, so that your users can pick it up with ease.

Sitecore asset images are, in fact, storage for your module's assets in the relevant folders, based on the smallest Windows image - nanoserver. You can create them manually, as I show in this diagram.

With the recent platform, there are two ways of managing the items: either creating an items resource file, or converting the package zip file into a WebDeploy package, then extracting its DACPAC file and placing it into the db folder. In either case, the file structure stays as pictured in the diagram above.
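
Consuming an asset image from your role's Dockerfile is then a matter of COPY --from layers (the image references and folder layout below are hypothetical):

```dockerfile
# escape=`
ARG BASE_IMAGE
ARG MODULE_IMAGE
FROM ${MODULE_IMAGE} AS module
FROM ${BASE_IMAGE}
# the module's CM assets (DLLs, configs, items) land on top of the platform webroot
COPY --from=module C:\module\cm\content C:\inetpub\wwwroot
```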


There is a tool called Docker Asset Image for a given Sitecore module (by Robbert Hock) which aims to automate Asset Image creation for you.

I would recommend going through these materials for better understanding:

Testing and going live

You will definitely need to conduct one round of regression testing (or, if things go wrong, a series of them). The first question to clarify: what exactly should be tested?

Depending on the upgrade scope, you will need to at least ensure the following:

  • Overall site(s) health and look & feel
  • Third-party integrations aren't broken
  • Editing experience is not broken
  • Most recent tasks/fixes function well

Another question: manual, automated, or a mix of both?

There is no exact answer, but without automation, regression testing becomes effort-consuming, especially when you need to repeat it again and again. At the same time, it does not make sense to automate everything (and sometimes it is not even possible).


Since you do the upgrade on a new Sitecore instance running in parallel with the existing one, you definitely need to conduct load testing before the new instance goes live. The metrics will not necessarily be better than before, as new Sitecore releases carry many new features and the number of assemblies in the \bin folder always grows. But as long as the instance meets the projected SLA, that should be fine.


Monitoring is also crucial when going live. Once the new Sitecore instance is up and running, the first thing to check is the Sitecore logs - inspect and fix any errors being logged. We have a comprehensive set of automated UI tests that cover the majority of business use cases and are executed overnight. Watch the metrics for anomalies.


Another exercise to undertake before going live is security hardening - the process of securing a system by reducing its surface of vulnerability, which is large for systems that perform many functions, as Sitecore does. Since older versions, Sitecore has shipped a Security Hardening Guide with considerations to be followed.


If you have the luxury of organizing a content freeze for the go-live period - great. If not, you will need to pick up and take care of the content delta produced in the meantime. The best advice here is to use PowerShell Extensions to identify and package all the content produced from a specific timestamp onwards. After installing this package, the delta gets applied to the updated instance.
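
A rough SPE sketch of collecting that delta (cmdlet parameters are simplified; verify them against your SPE version before use):

```powershell
# collect everything edited since the cut-off date
$since = [datetime]"2022-06-01"
$items = Get-ChildItem "master:\sitecore\content" -Recurse |
    Where-Object { $_."__Updated" -gt $since }

# wrap the delta into an installable package
$package = New-Package "Content delta"
$package.Sources.Add((New-ExplicitItemSource -Name "Delta" -Item $items -InstallMode Overwrite))
Export-Package -Project $package -Path "content-delta.zip" -Zip
```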

The actual moment of going live is when you switch the DNS records from the previous instance to the new one. Once done, the traffic will be served by the updated instance. There are a few considerations, however:

  • you will need to reduce the TTL values of the domain DNS records well in advance, to make the switch immediate; then bring them back once done.
  • you may consider routing a small share of traffic (i.e. 5% of visitors) to the new instance and monitoring the logs, rather than switching over entirely.

If everything went well and the updated instance works fine, you still need to keep both instances running in parallel for a certain amount of time - a maturing period. If something goes totally wrong, you can still revert to the old instance and address the issues.

That's it!

Hope my presentation helps your future Sitecore upgrades!
Finally, I want to thank all the event organizers for making it happen and giving me the opportunity to share the experience with you.