Experience Sitecore ! | More than 200 articles about the best DXP by Martin Miles


My speech proposal for SUGCON ANZ: Developers' guide to XM Cloud

Developers' guide to XM Cloud

Over the last months, we've heard plenty of insights about the newest cloud product from Sitecore - XM Cloud. Many developers have wondered how their scope and responsibilities will change, how they will work with this new SaaS solution, or whether they will even become redundant.

There is nothing to worry about! This session answers most of these questions and serves as the most comprehensive developers' guide to XM Cloud available so far. It explains the changes to the content lifecycle and the local development routine, introduces the new Headless SXA for XM Cloud, and covers the new options for personalization. It will also highlight changes in the security model and site search, and give the best advice on legacy strategies and content migration. Finally, it shares some practical experience with Headstart for XM Cloud and the new deployment model, so that it all goes live!

Getting started
  •     why SaaS cloud? what is the change?
  •     a brief overview of XM Cloud for developers    
Familiar tools that remain with you
  •     review of the process and deployment tools available to a developer
  •     local development for XM Cloud in containers
  •     customizing pipelines with XM Cloud
  •     leveraging SPE functions
  •     Sitecore CLI becomes "The Tool"
Editing Experience and Content Considerations
  •     using Site Builder
  •     dealing with Pages & Components
  •     extensions catalog
  •     diversity of datasources: where can my content reside?
  •     migrating content from legacy Sitecore platforms
Changes in the security model
  •     Sitecore Unified Identity
  •     integrating third-party services
Changes in search
  • where's my Solr?
  • what are the options?
  • plugging an external search technology
Dealing with the legacy
  •     are my legacy sites still compatible with XM Cloud?
  •     guidance on migrating a headless site from XP to XM Cloud
  •     EDGE considerations
  •     is my legacy module for XP compatible with XM Cloud?

SXA for XM Cloud
  •     new old Headless SXA - what's the difference
  •     new old rendering variants
  •     can we use headless forms on XM Cloud?
  •     a bare minimum to build Headless SXA site for Next.js
Hands-On
  •     starter kits available for you straight away
  •     leveraging Headstart basic starter kit foundation built for XM Cloud
  •     make your own module compatible with XM Cloud
Personalization
  •     are the built-in rules enough to go?
  •     two ways of leveraging CDP/Personalize for a better experience
Deploying into XM Cloud
  •     Single location? Will that affect my geo-distributed authoring team?
  •     Understanding terminology: deployment, project, environment
  •     Understanding Build and Deployment Service (how to trigger and its lifecycle)
  •     CLI along with DevEx plugins
  •     GUI-powered Deploy App tool
  •     auto deployments from connected GitHub
It looks to me like an excellent topic, shining a spotlight on the new Sitecore SaaS-based platform. Fingers crossed!

Infrastructure-as-Code: best practices you have to comply with

Infrastructure as Code (IaC) is an approach that involves describing infrastructure as code and then applying that code to make the necessary changes. IaC does not dictate how exactly to write code, it just provides tools. Good examples are Terraform, Ansible, and Kubernetes itself, where you don't spell out what to do; instead, you declare what state you want your infrastructure to reach.

Keep the infrastructure code readable, so that your colleagues can easily understand it and, if necessary, extend or test it. This may look like an obvious point, but it is quite often forgotten, resulting in "write-only code" - code that can only be written, but not read. Even its author is unlikely to understand what he wrote and figure out how it all works just a few days later.

An example of a good practice is keeping all variables in a separate file. This is convenient because you do not have to search for them throughout the code. Just open the file and you immediately get what you need.
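As an illustration, here is how this might look in Terraform (the variable names and values are hypothetical, just a sketch): a `variables.tf` that declares every input in one place, while the main configuration only references them via `var.region` and `var.instance_count`.

```hcl
# variables.tf - every tunable value lives here, not scattered through the code
variable "region" {
  type        = string
  description = "Cloud region to deploy into"
  default     = "westeurope"
}

variable "instance_count" {
  type        = number
  description = "Number of web nodes"
  default     = 2
}
```

Anyone reviewing the code then finds all the knobs in a single file instead of hunting through resource definitions.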


Adhere to a certain style of writing code. As a good example, you may want to keep the code line length between 80 and 120 characters. If the lines are very long, the editor starts wrapping them. Line breaks destroy the overall view and interfere with understanding the code. One has to spend a lot of time just figuring out where a line starts and where it ends.

It's nice to have the coding style check automated, at the very least by using the CI/CD pipeline for this. Such a pipeline could have a Lint step: static analysis of what is written, helping to identify potential problems before the code is applied.
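A sketch of what such a Lint step could look like as a GitHub Actions workflow - the linters themselves (`terraform fmt`, `ansible-lint`) are real tools, but the workflow layout and the `playbooks/` path are illustrative assumptions:

```yaml
name: iac-lint
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Terraform's own formatter doubles as a style check
      - run: terraform fmt -check -recursive
      # ansible-lint statically analyses playbooks and roles
      - run: pip install ansible-lint && ansible-lint playbooks/
```

With this in place, a Pull Request that violates the agreed style fails before anyone even reviews it.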


Utilize git repositories the same way developers do. By that I mean creating new branches, linking branches to tasks, reviewing what has already been written, sending Pull Requests before merging changes, etc.

As a solo maintainer, you may find the listed actions redundant - it is common practice for people to just come and start committing. However, even in a small team, it can be difficult to understand who made some change, when, and why. As the project grows, such practices will increasingly help you understand what is happening and keep the work from turning into a mess. Therefore, it is worth investing some time into adopting these development practices for working with repositories.


Infrastructure as Code tools are typically associated with DevOps. We know DevOps engineers as specialists who not only handle maintenance but also help developers work: setting up pipelines, automating test launches, etc. All of the above also applies to IaC.

In Infrastructure as Code, automation should be applied: Lint rules, testing, automatic releases, etc. Having repositories with, let's say, Ansible or Terraform code that is rolled out manually (by an engineer starting a task by hand) is not that good. Firstly, it is difficult to track who launched it, why, and at what moment. Secondly, it is impossible to understand how it worked out and draw conclusions.

With everything kept in the repository and controlled by an automatic CI/CD pipeline, we can always see when the pipeline was launched and how it performed. We can also control the parallel execution of pipelines, identify the causes of failures, quickly find errors, and much more.

You can often hear from maintainers that they do not test the code at all, or just run it first somewhere on dev. That is not the best practice, because it gives no guarantee that dev matches prod. In the case of Ansible or other configuration tools, the standard "testing" flow looks something like this:

  • launched a test on dev;
  • rolled on dev, but crashed with an error;
  • fixed this error;
  • once again, the test was not run because dev is already in the state to which they tried to bring it.

It seems that the error has been corrected, and you can roll on prod. What will happen to prod? It is always a matter of luck - hit or miss. If something fails again somewhere in the middle, the error gets corrected and everything gets restarted.

But infrastructure code can and should be tested. At the same time, even if specialists know about different testing methods, they still cannot apply them. The reason is that Ansible roles or Terraform files are written without any initial focus on the fact that they will need to be tested somehow.

In an ideal world, at the moment of writing code, the developer is aware of what (else) needs to be tested. Accordingly, before starting to write the code, the developer plans how to test it - commonly known as TDD. Untested code is low-quality code.

Exactly the same applies to infrastructure code: once written, you should be able to test it. Decent testing allows you to reduce the number of errors and makes life easier for the colleagues who will later finalize your Ansible roles or Terraform files.


A few words about automation. A common practice when working with Ansible is that even if something could be tested, there is no automation around it. Usually, this is the case when someone creates a virtual machine, takes some role written by colleagues, and launches it. Afterward, that person realizes the need to add certain new things to it - appends them and launches again on the virtual machine. Then he realizes that even more changes are required, and also that the current virtual machine has already been brought to some kind of state, so it needs to be killed, a new virtual machine instantiated, and the role rolled over it. If something does not work, this algorithm has to be repeated until all errors are eliminated.

Usually, the human factor comes into play, and after the N-th repetition, one becomes too lazy to delete the VM and re-create it again. Once everything seems to work exactly as it should (this time), one could freeze the changes and roll them into the prod environment. But the reality is that errors can still occur, and that is why automation is needed. When everything works through automated pipelines and Pull Requests are used, bugs are identified faster and prevented from re-appearing.

Sitecore Edge and XM Cloud - explain it to me as if I were 5

Explain it to me as if I were 5 years old.

Well, I am not sure this could be explained to a 5-year-old, but I will instead explain it as if you had not been around for the changes of, let's say, the past 5 years. There is a lot to go through. Before explaining the most interesting concepts like XM Cloud and Sitecore Edge, I need to briefly touch on some terminology they rely on.


Headless

Previously, you used Sitecore to render HTML using ASP.NET MVC. All of that happened server-side: Sitecore pulled up the data, your controllers built views with that content, and the resulting combined HTML was sent back to the calling browser by a CD server. That meant that if you needed just the raw content or data, not wrapped in HTML, the only way was to set up a duplicating WebAPI, which could be clumsy in addressing the correct data, or too verbose, returning much more data than you need. In any case - too exhausting!

So the logical question comes up: why not return the raw data universally through an API? It could then be consumed by various callers like mobile apps or other systems, even content aggregators - not just browsers (that is what is called "omnichannel"). This is why the approach is called "headless" - there is no HTML (or any other "head") returned along with your data.


Rendering Host

When it comes to browsers that still need HTML, it makes sense to merge the content with HTML somewhere later in the request lifecycle - after it has left the universal platform endpoint that you still have on your CD. There is still a webserver required to serve all the web requests. It receives a request, pulls the required raw data from the universal endpoint, and then renders the output HTML with that data. This is why such a webserver is known as a "rendering host" - we have now clearly separated serving the raw data from rendering the HTML returned to the browser. Previously, both steps were done at a single point on the CD.
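The two-step flow can be sketched in a few lines of TypeScript. This is not Sitecore's actual implementation - just a minimal illustration of the split, where a stub stands in for the HTTP call to the universal content endpoint:

```typescript
// Raw content, as the universal endpoint on CD would return it
interface PageData {
  title: string;
  body: string;
}

// In real life this would be an HTTP call to the headless content endpoint
async function fetchContent(route: string): Promise<PageData> {
  return { title: `Page for ${route}`, body: "Raw content, no markup" };
}

// The rendering host's job: merge raw data into HTML for the browser
function renderHtml(data: PageData): string {
  return `<html><head><title>${data.title}</title></head>` +
         `<body><p>${data.body}</p></body></html>`;
}

// Step 1: pull raw data; step 2: render HTML from it
async function handleRequest(route: string): Promise<string> {
  const data = await fetchContent(route);
  return renderHtml(data);
}
```

A mobile app would call `fetchContent` (the raw endpoint) directly and skip `renderHtml` entirely - that is the omnichannel idea in miniature.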


GraphQL

Having read the above, you could think that serving all the content through WebAPI would be some sort of overkill, especially for large and complicated data sets - and there is even more to consider around adequate caching. Even with a headless approach, imagine pulling a large list of books stored in some database, each referencing its author by an authorId field.

So you either do lots of JOIN-like operations and expose lots of custom API endpoints to fit the data the way your client needs it, or pull all the data from the database, cache it in memory, and keep "merging" books to authors on the fly (an in-memory JOIN per request). Neither is a nice solution. In the case of really large data, there won't be an elegant one.

So there was a clear need for some sort of flexibility, and that flexibility should be requested by the client application, addressing its immediate need for data. Moreover, clients often want to receive a specific set of data and nothing beyond what is requested - mobile apps typically operate over expensive and potentially slow mobile connections, compared to the super-fast inter-data-center networks between CD and rendering hosts. Also, headless CDs always return meaningful, structured data of certain type(s), which means it can be strongly typed. And where there are several types, those can relate to each other. We clearly need a schema for the data.

That is how GraphQL was invented to address all of the above. Instead of lots of API endpoints, we now have a universal endpoint serving all our data needs in a single request. It provides a schema of all the data types it can return. So now it is the client that defines what type(s) of data to request, how those relate together, and the amount of data it needs - no more than it should consume. Another benefit of the predefined schema is that, knowing it in advance, writing code for client apps is quicker thanks to autocompletion, likely provided by your IDE. It also respects primitive types, supporting all the relevant operations (comparison, orderBy, etc.).
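Returning to the books example above: a client that only needs titles and author names sends one query and gets back exactly that shape - the server resolves the book-to-author relation, so no custom JOIN endpoint is required. The field names here are hypothetical, purely for illustration:

```graphql
query BooksWithAuthors {
  books(first: 10) {
    title          # only the fields the client asked for...
    author {       # ...and the related type, resolved server-side
      name
    }
  }
}
```

A mobile client on a slow connection could drop `author` from the query and receive an even smaller payload - same endpoint, different shape.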


Sitecore Edge

Previously, with XP, you had a complex setup, the most important parts of which were CM and CD instances fed by the corresponding databases - commonly known as master and web. Editors logged into CM, created some content, and published it from master to the web database to be used by CD.

Now imagine you only kept the CM part of the above setup. When you publish, it publishes "into a cloud". By "cloud" I mean a globally distributed database with a CDN for media, along with an API (GraphQL) to expose content to your front-end.

In fact, content can reach Edge not only from CM - Content Hub could be another tool feeding it, performing like XM does.

Previously, you had a CD instance with a site deployed on it that consumed data from a web database, but now you neither have those, nor does Sitecore provide them for you. That means you should build a front-end site that consumes data from the given GraphQL endpoint. That is what is called headless, so you could use JSS with or without Next.js, or ASP.NET Core renderings. Or anything else - any front-end of your choice, however with more effort. Or it could be not a website at all, but a smart device consuming your data - the choice is unlimited. Effectively, we've got something like CD-data-as-a-service, provided, maintained, and geo-scaled by Sitecore.
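To make the front-end side concrete, here is a sketch of building such a request in TypeScript. The endpoint URL, the `sc_apikey` header, and the query fields are assumptions for illustration - check your own tenant's delivery settings and schema before relying on them:

```typescript
// Illustrative Experience Edge delivery endpoint - verify against your tenant
const EDGE_ENDPOINT = "https://edge.sitecorecloud.io/api/graphql/v1";

// Build the GraphQL request a front-end would POST to Edge
function buildEdgeRequest(apiKey: string, itemPath: string) {
  const query = `
    query GetItem($path: String!) {
      item(path: $path, language: "en") {
        name
      }
    }`;
  return {
    url: EDGE_ENDPOINT,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      sc_apikey: apiKey, // Edge authenticates with an API key, not a Sitecore login
    },
    body: JSON.stringify({ query, variables: { path: itemPath } }),
  };
}
```

Any HTTP client - `fetch` in a Next.js page, a mobile app, a smart device - can then send this request and consume the JSON that comes back.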


XM Cloud

From the previous explanation, you've learned that Experience Edge is "when we remove the CD instance and replace the web database with a cloud service". Now we want to do exactly the same with XM. Provided as a service, it always runs the latest, bug-fixed version, maintained and scaled by the vendor. Please welcome XM Cloud, and let's decouple all-the-things!

Before going ahead, let's answer: what was a typical XM in Sitecore as we knew it, and what was it expected to do?

  • create, edit and store content
  • set up layout and presentation for a page
  • apply personalization
  • publishing to CD
  • lots of other minor things

Publishing has already been optionally decoupled from XM in the form of the Sitecore Publishing Service. It works as a standalone web app or an individual container. Its only duty is copying the required content from CM to CD, and it does that perfectly well and fast.

Another thing that can be decoupled is the content itself. Previously, it was stored in the CM database in the form of the Sitecore item abstraction. What if we could have something like Content-as-a-Service, where the data could be supplied from any source at all that exposes it through GraphQL - any other headless CMS, or professional platforms such as Content Hub? That has a very "composable" look and feel to me! Then comes total flexibility: after setting up the data endpoint, authors could benefit from autocomplete suggestions coming from the GraphQL schema when wiring up their components.

Personalization also comes as a composable SaaS service - Personalize. Without it, XM Cloud will still offer you some basic personalization options.

Speaking about layout, it can also be decoupled. We already have Horizon as a standalone webapp/container, so whatever its cloud reincarnation turns out to be (i.e. Symphony?) - it gets decoupled from the XM engine. The good old Content Editor will still be there anyway, but its ability to edit content is limited to Sitecore items from the master database, unlike Symphony, which is universal.


Sitecore Managed Cloud

Question: so is XM Cloud something similar to Managed Cloud, and what is the difference between those?

No, not at all. Sitecore Managed Cloud hosts, monitors, manages, and maintains your installation of the platform on your behalf. It provides the infrastructure and the default technology stack that suits it best. Previously, you had to take care of the infrastructure yourself, which took lots of effort - that is the main thing that changes with Managed Cloud. Managed Cloud supports XM, XP, and XC (the latter on a premium tier, however).

XM Cloud, by contrast, is a pure SaaS offering. It will be an important part of the Sitecore Composable DXP, where you architect and "compose" an analog of what XP was from various other Sitecore products (mostly SaaS, but not necessarily).


That is what XM Cloud is expected to be, in the composable spirit of a modern DXP for 2022. Hope we all enjoy it!

OrderCloud Certification - tips on preparation and successful pass

Finally, I am happy to complete Sitecore learning and certification by successfully passing the OrderCloud Certification exam. In this post, I will try to share some thoughts and insights on how you could also progress with it.

This exam was relatively easy, especially compared with other Sitecore certifications. I have to note that I'd read almost all of the available documentation, taken the eLearning course, and made notes on every important point, however minor - so I did not find it tough.

I managed to score 90%, being hesitant on 6 questions out of a total of 30 - the remaining 24 alone account for exactly the 80% required for a successful "pass" result. Basically, in that case, guessing at least one more question takes the score above the pass level. I also want to note that the test questions seem very reasonably chosen and mature, as the product itself is.

In order to succeed, you'll definitely need to progress through the following materials:


The competencies under test are:
  • OrderCloud Architecture and Conventions
  • Integration
  • User Management and Access Control
  • Environments
  • Product Management
  • Order and Fulfillment Management
  • Troubleshooting

Please note: the eLearning is really good but does not cover all the competencies. It covers the Security and Product areas really well but has nothing about Order and Fulfillment (at the moment of writing this blog - there is a 'coming soon' promise, however). That means you must set out on your own learning path.


Exam in numbers:
  • 60 minutes
  • 30 questions
  • 80% to pass
  • Costs $350 (some categories of test takers may qualify for a discount)

Today, while my memory is still fresh, I'll try to recall some of the questions I personally had on the exam and share some thoughts on what to emphasize. Without going into nuances, you must definitely know:
  • features of OrderCloud architecture
  • environments and their purpose
  • the UI and how to switch context between marketplaces
  • types of webhooks and their purpose
  • products and variants
  • price schedules
  • order flows and their statuses
  • general error codes returned by OrderCloud and their meaning
  • querying and filtering through API
  • in general, lots of API which is fully available from API Reference

While initially doing due diligence on this technology and diving into it, I became full of excitement about OrderCloud - from what I have seen so far. To me, it feels like a very mature product (and it is, in fact), with decent documentation, great training, and very good architecture. Proper MACH architecture that you can fit into pretty much anything. You can make a client storefront with zero backend coding, purely FE!
From a feature-set point of view - also unbelievably flexible, for both B2B and B2C.

I decided to invest more time into learning OrderCloud and plan to make this platform one of my main technologies for the coming year, or at least part of the Sitecore Triangle: XM Cloud - OrderCloud - Content Hub.

I created a Sitecore OrderCloud Telegram channel where I am sharing everything related to this platform. If you're using Telegram messenger - you'll definitely want to join by following this link. Otherwise, it is still possible to read it in a Twitter-like format in the browser using another link.


My SUGCON Presentation: The Mastery of Sitecore Upgrades

I am proud to be chosen as a presenter at SUGCON 2022 which took place in Budapest. 

This blog post contains the supporting material for my topic "The Mastery of Sitecore Upgrades".

Content


Why upgrade?

Why do we upgrade Sitecore given it is not that quick and easy? The answer is simple – a carrot and a stick motivation!

The Stick: every New Year, mainstream support expires for one or a few versions. This means that while Sitecore still provides security updates and fixes for them, it does not support development and compatibility queries. Read more details about it at Sitecore Product Support Lifecycle.

Now, the Carrot: companies upgrade for the new features and new experiences. For example, you may read a detailed article I wrote about those in the latest version - New features in 10.2 you must be aware of.

Also, having the latest version is especially valuable in light of the Composable DXP, which is already taking shape:


Planning the upgrade

Proper planning is the key to success. How do we perform the planning, and what should be considered?

1. Upgrading vanilla Sitecore costs you minimal effort, but the more custom code you have, the more labor is expected. How much is it customized? This will affect your timescale and impose additional risks, which are better known in advance.

2. If you get access to any of the existing documentation - that would be a perfect place to start. And it will likely give some answers to the previous point.

Find the details about all environments, code branches, and which code branch is deployed to which environment. Also, you will be interested in the details about existing CI/CD pipelines.

3. Find the custom configurations made into the Sitecore.

4. Finally, talk to the stakeholders! Who, if not they, knows the details and is the most interested in success?

5. It sounds logical that the more versions you jump through – the more complicated the upgrade process will be.

However, that isn’t a rule of thumb - some versions have very minor changesets and could be updated with minimal effort. Others - the opposite, could be difficult as hell.

For example, that is the exact reason 9.3 stands out from this version chain. It happened to take on more breaking changes, deprecations, and internal housekeeping improvements than any other version did.

6. Because of that, one of the most valuable planning activities is investigating the Release Notes for every single version on your upgrade path. Pay special attention to two sections: Deprecated/Removed and Breaking changes.

7. Identify the functionalities that are no longer supported, including any third-party add-ons/modules, and find their alternatives.

8. As new platform features arrive, the license file format changes over time. Even with the most permissive license, your old license file may be incompatible with a newer platform. That, for example, happened when version 9 was released.

It is good to care about that in advance, as the process takes some time. Please check that with your account manager.

9. Every solution is unique, and so is every team. Estimates must consider these factors, along with the previous relevant experience of your team. Make sure you will have the resources for the whole duration of the upgrade, and also take care of a fallback plan in case someone gets ill or leaves (also known as the "bus factor").

This diagram shows a very optimistic view of a team of two experienced Sitecore professionals performing the upgrade in 4 typical sprints. While sharing the pool of tasks with each other, one of them is primarily focused on the codebase, while the other cares more about the DevOps side.

Once again, this is not guidance or any sort of assessment - just a high-level view of the team's activity through the sprints.

10. Also take a look at Sitecore compatibility guide to ensure all your planned tech spec stays valid.


Recap legacy tools

Before heading to the upgrade tactics, let’s quickly recap some upgrade tools we did have over time and how they performed.

1. Express Migration Tool

You can use Sitecore's Express Migration Tool to move data from your old instance to the new one. It supports migrating from older versions up to the initial Sitecore 9 release. The tool copies items and files from one Sitecore instance at a time and supports the migration of remote servers.

2. The Update Center

The Update Center was introduced in Sitecore 9.0 Update 2 and remains valid up to Sitecore 10.1.

With it, you can find, download, install, and manage updates and hotfixes for the Sitecore platform and Sitecore modules. The Update Center accesses a package management service provided by Sitecore, or you can host such a service yourself.

The Sitecore Update Center uses this URL of the Package Management Service to check for the Sitecore updates:

<add name="PackageManagementServiceUrl" connectionString="https://updatecenter.cloud.sitecore.net/" />


Upgrade tactics

Next, let’s talk through some upgrade tactics that can significantly help your upgrade process.

1. Capturing states and being able to switch between them as quickly as possible is crucial for productivity when working on instance upgrades. Small mistakes are inevitable, and we actually need some sort of Undo button for the whole process.

We already have this for the codebase - git, with its ability to switch between the branches.

But what about the published sites, indexes, certificates, and the rest of the minors that matter? Even database backups are not that fast and straightforward.

The one and only solution that comes to mind is freezing the whole state of a working machine - something one could have with a virtual machine. Luckily, we have at least one suitable technology:

2. Hyper-V

Actually, this ticks all the boxes:
  • free and included in Win Pro
  • extremely fast with SSD
  • maximum OS integration
  • move backups between hosts
  • perfect networking options
  • remote management
  • universal virtual drives

However, you will need a relatively fast SSD, and you should mind the disk space - it easily gets eaten up by multiple snapshots as you progress. A 1TB drive is probably the minimum you'd need for productive work on an upgrade.
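The checkpoint workflow boils down to three Hyper-V cmdlets - the VM and snapshot names below are illustrative:

```powershell
# Freeze the current state before a risky upgrade step
Checkpoint-VM -Name "SC-Upgrade-VM" -SnapshotName "before-nuget-bump"

# List the restore points accumulated so far
Get-VMSnapshot -VMName "SC-Upgrade-VM"

# Something went wrong? Roll the whole machine back in seconds
Restore-VMSnapshot -VMName "SC-Upgrade-VM" -Name "before-nuget-bump" -Confirm:$false
```

That restore is the "Undo button" for the entire machine state: published sites, indexes, certificates, and databases all come back together.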

3. Leverage PowerShell

Now we are confident in reverting bad things quickly. But what about redoing successful steps? With a lot of monotonous and repetitive actions, one could lose a lot of valuable time on things they will re-do again and again.

Let's consider the case when someone has spent the last 2 hours upgrading NuGet references for a Helix solution with 100 projects in it. Unfortunately, at the last minute something went totally wrong - restoring the latest Hyper-V checkpoint will take less than a minute, but should one really repeat those monotonous steps again and again?

One could think - why not create restore points more often? Well, creating a restore point for every minor step seems to be overkill here, and Hyper-V would also quickly eat up all of your disk space.

It is fairly difficult to perform an upgrade of everything on the first attempt. We will have to repeat the successful steps again and again, so the only question is automating them. PowerShell, built into the OS, is the best and most natural fit for such activities. You can leverage PowerShell for:

  • Any sort of automation, system tasks and jobs
  • Mass replace configuration and code with RegEx
  •     Backup and restore databases and web applications
  •     Managing your infrastructure: either local or cloud
  • Leveraging SPE Remote to operate your instance: Managing content, security, publishing, and indexing
So what would it be a good idea to write these scripts for? In addition to the above points, consider scripting activities that take long to complete and/or result in only a minor textual change in files.
Returning to the above case of upgrading NuGet references in a large solution, which took 2 hours to complete - on disk it ends up as just a few lines of difference in a project or packages file. That means, by comparing the "before and after" difference (with a diff tool), it is fairly easy to use PowerShell to process all such files with a regular-expression replace. If you are not a master of regular expressions, you're unlikely to succeed on the first attempt, but with a quick and easy checkpoint-restore option you'll quickly master it. Also, you'll end up with artifacts: successful scripts that can be reused for your future upgrades with minimal or no alterations.
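A sketch of that regex-replace approach in PowerShell - the version numbers, package name, and `src` path are examples only; diff your own files first to find the exact pattern to substitute:

```powershell
# Bump a package version across every project/packages file in the solution
Get-ChildItem -Path .\src -Recurse -Include *.csproj, packages.config |
    ForEach-Object {
        (Get-Content $_.FullName) -replace 'Sitecore\.Kernel" version="10\.0\.1"',
                                           'Sitecore.Kernel" version="10.1.0"' |
            Set-Content $_.FullName
    }
```

Two hours of clicking through Visual Studio collapse into a script that re-runs in seconds after every checkpoint restore.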

I wrote a good long article about PowerShell best practices and all the aspects of leveraging this built-in scripting language. It is a long read, but highly valuable.

4. Quick Proof-of-Concept

In some cases, when you are performing a jump to the following version (or maybe a few), and if your solution is relatively simple, staying close to vanilla Sitecore, it makes sense to pitch a PoC. It will either give you a quick win or, if not, at least identify most of the traps and things to improve. Having those opens up better opportunities for planning.

Ideally, it should be a one-day job for a single professional.

5. Try to get a recent backup of a master database

It's highly desirable to obtain a relatively recent backup of the master database. I realize it is not always possible: sometimes due to the large size, or more often because of the organization's security policy, especially when you're working for a partner agency or as an external contractor.

But why do you need that?

First of all, you restore it along with the existing solution to verify that the codebase you've obtained performs exactly the same on the published website(s). Of course, if you work for the given organization and have been in charge of the project under upgrade from day one, this step can be skipped, as you already know the codebase, the infrastructure, the history of decisions taken over the project's lifetime, and most of the potential traps.

But what is more important, you will need to upgrade and use this database with the new solution instance to ensure that it also looks and works as it did before. That will help you eliminate most of the unobvious bugs and errors before regression testing even starts, which saves lots of time.

A good example from my own experience: I performed an upgrade from 8.2.7 to 10.0.1 and everything seemed successful. Using an upgraded full master database, I visually compared the old sites running on the existing production with those I had upgraded locally to the latest Sitecore. I accidentally spotted that some small SVG icons were missing from an upgraded site. Escalating this issue brought me to one of those breaking changes that are difficult to spot - the codebase builds, there are no runtime errors, and no obvious issues in the logs.

6. What if a newer Sitecore release comes while you’re upgrading?

The upgrade process takes a decent amount of time, and it may well happen that a new Sitecore version is released in the meantime. My best advice is to stick to the newer version. Just take it!

First of all, it may not be an easy task from an organizational point of view, especially if you are working for a partner rather than the end client and the version has already been signed off. It will take lots of effort to persuade stakeholders to consider the latest version instead, and you have to motivate your argument pretty well.

Upgrading the solution to the newer build when you are halfway done upgrading to the previous one adds some time, but costs times less than upgrading again once you go live. It is a good idea in most cases, with the only exclusion being a major component getting deprecated in the newer version (for example, once I was upgrading to 9.0.2 when 9.1 was released, but due to heavy usage of WFFM it was not possible to easily jump to 9.1, as WFFM was deprecated there). Otherwise, we would have chosen the newer version.

Choosing a newer version can in fact cost you even less. That happened to me while upgrading to 10.0.1 when the newer 10.1 got released. Switching from 10.0.1 to 10.1 would have cost very little effort and would also have reduced operational effort thanks to the specific advantages of Sitecore 10.1 - but unfortunately, the decision chain to the stakeholders was too long (I was subcontracted by an implementing partner), and that never happened.

7. Take comprehensive notes

The last piece of advice may seem obvious, but please take it seriously. Document even minor steps, decisions, and successful solutions. You will likely go through them again, maybe several times. Save your scripts as well.

An even better job would be to wrap your findings into a blog post and share the challenge and its solution with the rest of the world. Because our blogs are indexed, you'll help many other people. Several times in my career, while facing a challenge and googling it, I found my own blog posts with the solution.


Content migration

When asking my colleagues what was the biggest challenge with their upgrade experience, almost everyone responded - migrating the content.

1. Content maintenance

Depending on your level of ownership, it may be a good idea to undertake content maintenance. This is optional but desirable.

Remove legacy versions - it is a known good practice to keep fewer than 10 versions per item, which improves content editing performance. Once I saw 529 versions of a site's Home item and doubted the authors needed them all.

You can remove legacy versions manually by running an SPE script (Rules Engine Actions to Remove Old Versions in Sitecore) or do it on a regular basis by creating a Rule in the Rules Engine to automatically keep the number of versions below the desired threshold (PowerShell Extensions script to delete unused media items older than 30 days).
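For illustration, the version pruning above can be sketched with SPE, assuming its Remove-ItemVersion cmdlet with the -MaxRecentVersions parameter; the path and threshold are illustrative:

```powershell
# Hedged SPE sketch: keep only the 10 most recent versions per language.
# Run from the SPE console; test on a non-production database first.
Get-ChildItem -Path "master:\content" -Recurse |
    Remove-ItemVersion -MaxRecentVersions 10
```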

Clean up the Media Library, as it is typically the most abused area of content. Untrained or simply rushing editors often place content without much care. Almost every solution I’ve seen had issues with the media library: either a messed-up structure, lots of unused and unidentified media, or both. These items are heavyweight and bloat the database and content tree without bringing any benefit. There is a PowerShell Extensions script to list all media items that are not linked to other items, so you can revise the list and get rid of the unwanted ones.
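A hedged sketch of what such an unused-media report can look like in SPE; the LinkDatabase call is the standard Sitecore API, and the path is illustrative:

```powershell
# Hedged SPE sketch: report media items that nothing links to.
Get-ChildItem -Path "master:\media library" -Recurse |
    Where-Object { [Sitecore.Globals]::LinkDatabase.GetReferrerCount($_) -eq 0 } |
    Select-Object Name, ItemPath
```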

You may also want to clean up broken links prior to the upgrade. Sitecore has an admin page that allows removing broken links. You can find it in the admin folder: RemoveBrokenLinks.aspx - just select the database and execute the action.

Starting from 9.1, the default Helix folders come OOB in the databases, so their IDs became universal. But lots of Helix solutions were implemented before that, and their folder IDs do not match those coming OOB in later versions.

For example, an existing solution may also have folders like /sitecore/Layout/Renderings/Feature, but since they did not come out of the box they were serialized - and what is even worse, they have IDs different from the now-universal OOB ones.

You’ll need to rework serialization to match the correct parent folders coming OOB.


2. Content migration options

In fact, you've got plenty of options to migrate the content. Let's take a look at what they are.

Sitecore Packaging

This is the default way of occasionally exchanging items between instances, known to every developer. However, it is extremely slow and does not work well with large packages of a few hundred megabytes.

Sidekick

Sidekick is free and has a Content Migrator module that uses the Rainbow serialization format to copy items between instances in a multi-threaded way. Unlike packages, it is super-fast. Keep in mind that both servers need to have the Content Migrator installed on them.
Sitecore Sidekick (by Jeff Darchuk)

Razl

Razl offers quite an intelligent way of copying items between servers. However, this software is not free and requires a license to run.

Sitecore Razl: Tool for Compare and Merge


Sitecore PowerShell Migration

We've got a script to migrate content between Sitecore instances using Sitecore PowerShell Extensions (by Michael West) which also leverages Unicorn and Rainbow serialization. This script is extremely fast, but it requires SPE with Remoting enabled on both instances.
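For reference, a minimal hedged sketch of SPE Remoting usage, assuming the SPE module is installed locally and remoting is enabled on the remote instance; the credentials and URL are illustrative:

```powershell
# Hedged sketch: run a script block on a remote Sitecore instance via SPE Remoting.
Import-Module -Name SPE
$session = New-ScriptSession -Username "admin" -Password "b" -ConnectionUri "https://source.local"
Invoke-RemoteScript -Session $session -ScriptBlock {
    Get-Item -Path "master:\content\Home" | Select-Object Name, ID
}
Stop-ScriptSession -Session $session
```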

Move an item to another database from Control Panel

If none of the above options work for you, there is a built-in tool to copy items between databases as part of the Control Panel. The trick is to plug the target database into the instance as an additional database so that Sitecore can see it; then you’ll be able to copy items using this option. It has limited functionality and works reasonably slowly, but allows copying data both ways.

If you are migrating content to version 10.1 or newer, there is a much better way of dealing with a content and database upgrade, which I will explain in detail below.


3. Content Security

There are a few options to transfer Users and Roles from one instance to another.

Sitecore Packages

You can use the standard Package Designer from the Development Tools menu in the Sitecore Desktop. You can then add the Roles and Users that you want to package up for migration, generate the package, download it and then install it at the target instance in the same way you would do for content.

Serialization

An alternative is to use the Serialization option from within the User Manager and Role Manager applications. The users and roles will be serialized to (data)/serialization/security folder. You can copy this from the source instance to the target instance and then use the revert option.

For both of these options, the user's password is not transferred; instead it is reset (to a random value when using Sitecore Packages, or to "b" when using serialization).

How to deal with passwords?

You can then either reset the password for the users manually (from the User Manager), or the users themselves can reset it via the "forgot your password" option on the login screen, assuming the mail server is configured and they receive the password recovery email.

You also have the option to use the admin folder TransferUserPasswords.aspx tool to transfer the passwords from the source and target databases. To do so you will need both connection strings and these connections must have access enabled. Read more about Transferring user passwords between Sitecore instances with TransferUserPasswords.aspx tool.

Raw SQL?

Without the required access, you could do that manually with SQL. The role and user data are stored via the ASP.NET Membership provider in SQL Server tables in the Core database. Please note that from Sitecore 9.1, the membership tables can be extracted from Core into their own isolated Security database (moving Sitecore membership data away from the Core db into an isolated Security database).

So it is possible to transfer the roles and users with a tool such as Redgate SQL. You will need to ensure you migrate user data from the following tables:

aspnet_Membership
aspnet_Profile
aspnet_Roles
aspnet_Users
aspnet_UsersInRoles
RolesInRoles

You may adjust and use a SQL script to migrate content security when both source and target DBs are on the same SQL server. Also, follow up with the StackExchange answer about Moving users/roles/passwords to a new core database (by Kamruz Jaman).


4. Dynamic placeholders format

From version 9.0, dynamic placeholders became an integral part of the platform, whereas previously we used third-party modules. However, Sitecore implemented its own format for addressing components: {placeholder key}-{rendering unique suffix}-{unique suffix within rendering}

That means your previous layouts will not be shown on the page unless you update the presentation details of the affected renderings to match the new format. For example, if you have a component on a page in the old dynamic placeholder format, you need to change it from:

main_9d32dee9-17fd-4478-840b-06bab02dba6c

to the new format, so it becomes:

main-{9d32dee9-17fd-4478-840b-06bab02dba6c}-0

There are a few solutions for approaching that:

  • with PowerShell Extensions (by Rich Seal)
  • by creating a service (or admin folder) page: one, two
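To illustrate what such a conversion does, here is a hedged SPE sketch that rewrites old-format placeholder keys in the final layout field; the regex and scope are illustrative, so test it on a copy first:

```powershell
# Hedged sketch: convert main_<guid> placeholder keys to main-{<guid>}-0.
Get-ChildItem -Path "master:\content" -Recurse | ForEach-Object {
    $layout = $_.Fields[[Sitecore.FieldIDs]::FinalLayoutField]
    if ($layout -and $layout.Value -match '_[0-9a-fA-F-]{36}') {
        $_.Editing.BeginEdit()
        $layout.Value = [regex]::Replace($layout.Value,
            '(\w+)_([0-9a-fA-F-]{36})', '$1-{$2}-0')
        $_.Editing.EndEdit() | Out-Null
    }
}
```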
Also, you no longer need the third-party implementation module, so remove it (both the DLL and its config patch file). Its functionality was "moved" into the built-in Sitecore MVC libraries, which you of course need to reference from web.config:
<add namespace="Sitecore.Mvc"/>
<add namespace="Sitecore.Mvc.Presentation"/> 

Read more: an official guide on dynamic placeholders


Upgrading codebase

The most central part of my presentation is about upgrading the actual codebase and the challenges that come with it. To start with, I want to share two useful tips for identifying the customizations of a given project.

Trick 1 on how to identify customization

In order to estimate the amount of customization in a solution, I do the following quick trick.

First of all, I need to have a vanilla version of Sitecore that the existing solution is using.

After it is up and running, I go to the webroot and do git init, then commit everything. It’s just a local git initialized within the webroot - I do not push it anywhere and delete the git files immediately after completing this exercise.

Next, I build and publish the codebase as normal, so that all the artifacts land on top of vanilla Sitecore in the webroot.

Finally, my git tool of choice shows the entire delta of all the customizations of that solution. With that trick I can also easily see all web.config transforms and mismatched or altered DLLs, for example those provided with hotfixes.

That trick gives me the overall feel of how much that instance varies from the vanilla one.
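The whole trick can be sketched as a few shell commands; here a temp folder stands in for the webroot and two dummy files simulate the deployment (a hedged sketch, not Sitecore-specific):

```shell
# Hedged sketch of the webroot git trick, using a temp folder to stand in
# for the Sitecore webroot; the file names are illustrative.
webroot=$(mktemp -d)
cd "$webroot"
echo "vanilla" > Web.config
git init -q
git add -A
git -c user.email=me@example.com -c user.name=me commit -qm "vanilla baseline"
# ...building & publishing the solution would now overlay files here...
echo "customized" > Web.config    # simulates a config transform
echo "solution" > Feature.dll     # simulates a deployed assembly
git status --short                # shows the entire customization delta
```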

Trick 2 on how to find customization with ShowConfig

While trick 1 mostly aims to show the overall amount of customization, this one is more specific about changes in configuration. It will show all the patches and config alterations done on top of the vanilla instance, so a more precise estimation can be made.

We already have vanilla Sitecore of the legacy instance version from the previous step. What we need to do is run the showconfig.aspx admin tool to generate the combined configuration, and save the result into an XML file.

Next, once again build and publish as normal. Once the instance is up and running, run the showconfig tool again and save the output into a different file.

Now, having the actual solution configuration, we can compare it against the vanilla combined configuration for that same version of the platform using any diff tool. I recommend the advanced Beyond Compare.

For your convenience, you could also extract the whole delta between both into a single config file – it will be a valid patch!

3. Incorrect DLL(s) coming from the deployment

While upgrading the codebase you might also find yourself facing dependency hell, with referenced assemblies not playing well with each other. This can take ages to fix.

Every Sitecore release page contains an Assembly List in the Release Information section, featuring a complete list of assemblies shipped with that release (for example, exactly 400 assemblies are shipped with the 10.2 platform).

DLLs must exactly match their vanilla counterparts in version, and also in size, unless it is a hotfix DLL.

With the first trick, you can identify all the mismatching DLLs, and with the second trick - where those are referenced from.

You may also perform a solution-wide search for a specific faulty assembly and then review all the results along with their sizes in bytes. At least one will mismatch - that points you to the specific project the faulty one is coming from. I would recommend my file manager of choice - Total Commander, which is perfect for such operations.

If the previous step gives you not one but lots of faulty referenced assemblies, you can rely on PowerShell to RegEx-replace all the invalid references with the correct ones. It may have a steep learning curve initially, but very soon this technique will start saving you lots of time.
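A hedged sketch of that technique; the assembly name and version numbers are illustrative, so adjust them to your target release and verify the result with a diff:

```powershell
# Hedged sketch: RegEx-bump an assembly reference across all .csproj files.
Get-ChildItem -Recurse -Include "*.csproj" | ForEach-Object {
    $path = $_.FullName
    (Get-Content -Raw $path) -replace `
        'Sitecore\.Kernel, Version=8\.2\.\d+\.\d+', `
        'Sitecore.Kernel, Version=10.2.0.0' |
        Set-Content -NoNewline -Path $path
}
```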

4. Troubleshoot the assembly bindings

This issue occurs when the dependencies used in your solution projects mismatch the DLL versions actually shipped with vanilla Sitecore. For example, your feature module uses the Sitecore.Mvc dependency, which in turn relies on System.Web.Mvc. When you add Sitecore.Mvc using the NuGet package manager, it pulls the latest dependency, not the one that was current and referenced at the time the Sitecore build was released. Or, if you add a library ignoring dependencies, you could end up on the latest dependency yourself.

There are two ways out of this. One is strictly hard-referencing your project dependencies to match those in the webroot. Troubleshooting becomes really annoying when upgrading a solution with a large number of projects (the max I have seen was 135), and in rare cases it is not possible at all.

The other approach is to adjust the particular assembly binding to the latest version. Since you should not modify the vanilla config manually, you may employ a config transform step as part of your local build & deploy script (see using web.config transforms on assembly binding redirects).
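For reference, this is what an assembly binding redirect looks like in web.config - a hedged example, with the assembly and version range being illustrative:

```xml
<!-- Hedged example: redirect all older versions to the one in \bin. -->
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Newtonsoft.Json"
                        publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-13.0.0.0" newVersion="13.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```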

The situation has improved with 10.1 as libraries have been updated to the latest versions and bindings became less strict.

5. NoReference packages

Sitecore no longer publishes the NoReferences NuGet packages. For example, if you install the 9.3 packages you will get a lot of dependencies installed - not what we wanted.

The correct way, in this case, is to use the Dependency Behavior option of the NuGet Package Manager.

Choosing the dependency behavior to “IgnoreDependencies” will only install the package without any of the dependencies.

This function is also available as a parameter switch in PowerShell.
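In the Package Manager Console the same behavior is available as a switch; a hedged example, with the package id and version being illustrative:

```powershell
# Hedged example: install the package without pulling its dependency tree.
Install-Package Sitecore.Kernel -Version 10.2.0 -IgnoreDependencies
```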

6. Migrate from packages.config to PackageReference

Just like project-to-project references and assembly references, PackageReferences are managed directly within project files rather than in a separate packages.config file.

Unlike packages.config, PackageReference lists only those NuGet packages you directly installed in the project.

With PackageReference, packages are maintained in the global-packages folder rather than in a packages folder within the solution. That results in faster performance and less disk space consumption.

MSBuild allows you to conditionally reference a NuGet package and choose package references per target framework, configuration, platform, or other pivots.
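Put together, a hedged example of the PackageReference style in a .csproj, including a conditional reference per target framework; package names and versions are illustrative:

```xml
<!-- Hedged example: PackageReference items replace packages.config. -->
<ItemGroup>
  <PackageReference Include="Sitecore.Kernel" Version="10.2.0" />
  <PackageReference Include="System.Memory" Version="4.5.5"
                    Condition="'$(TargetFramework)' == 'net48'" />
</ItemGroup>
```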

So if you are convinced, please perform the update either manually from the right-click context menu in Visual Studio or by running the migrate from packages.config to PackageReference script by Nick Wesselman.

7. Update Target Framework

.NET Framework 4.8 is the terminal version of the Framework and is used from Sitecore 9.3 onwards.

Migrating from earlier solutions will require you to update Target Framework for every project within a solution. I personally prefer to do that in PowerShell (PowerShell way to upgrade Target Framework), but there is a Target Framework Migrator Visual Studio extension that also bulk-updates Target Framework for the whole solution.

8. Update Visual Studio

But of course, to benefit from this extension you may need to update Visual Studio itself first, so that it has the required Target Framework. Last but not least - install the latest VS Build Tools, which you can update on your own.

You will still likely be keen to get the latest VS; 2022 worked well for me but is not yet officially supported. In that case, you can get at least VS 2019, which also has nice features.

Helix solutions may have up to a hundred projects, which affects overall performance. Solution filters allow loading only those projects you’re working with, keeping the rest unloaded from VS but still operable. It works as an SLNF file that references your solution with an array of whitelisted projects.
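A solution filter is just a small JSON file; a hedged example with illustrative solution and project paths:

```json
{
  "solution": {
    "path": "Platform.sln",
    "projects": [
      "src\\Foundation\\Serialization\\code\\Foundation.Serialization.csproj",
      "src\\Feature\\Navigation\\code\\Feature.Navigation.csproj"
    ]
  }
}
```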

You can also now perform one-click code cleanups and also search for Objects and Properties in the Watch, Autos, and Locals Windows.

9. Unsupported third-party libraries

The solution you’re upgrading may reference some third-party libraries that are discontinued. In that case, you need to investigate each of those libraries and decide what to do.

An example from my experience: I came across the Sitecore.ContentSearch.Spatial library that performed geo-spatial searches. Its source code is available, but it has not been updated for 5 years. Meanwhile, Lucene search was removed from Sitecore, and this library became irrelevant, as it hard-referenced Lucene.

As an outcome, the whole related feature at the solution was temporarily disabled to be later rewritten using Solr Spatial search.

10. Dependency injection

Microsoft.Extensions.DependencyInjection became built into Sitecore from version 8.2.

It is fast and reliable and supports pretty much everything you may need. Just use this one!
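As a quick illustration, registering your own service with the built-in container can be sketched via the IServicesConfigurator pattern; the service types and assembly name below are illustrative:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Sitecore.DependencyInjection;

// Hedged sketch: a configurator that Sitecore discovers through config.
public class CustomServicesConfigurator : IServicesConfigurator
{
    public void Configure(IServiceCollection serviceCollection)
    {
        serviceCollection.AddTransient<IPriceService, PriceService>();
    }
}
```

It is then registered with a config patch along the lines of `<services><configurator type="MySite.CustomServicesConfigurator, MySite" /></services>`.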

Read more: Sitecore Dependency Injection and scoped Lifetime (by Corey Smith)

11. Glass Mapper

You will need to update Glass Mapper to version 5 unless that is already done. Package naming has changed to reflect the version of Sitecore it works with, and there are three related packages:

  • Glass.Mapper.Sc.{Version}
  • Glass.Mapper.Sc.{Version}.Mvc
  • Glass.Mapper.Sc.{Version}.Core

Glass has lots of changes in v5, the most significant of which is a new way of accessing content from Sitecore. GlassController becomes obsolete, and the same is true for GlassView, which was dropped in favor of elegance:

<!-- instead of GlassView -->
@inherits GlassView<ModelClass>

<!-- it becomes -->
@model ModelClass

Instead of ISitecoreContext, use the new IMvcContext, IRequestContext, and IWebFormsContext abstractions.

public class FeatureController : Controller
{
    private readonly IMvcContext _mvcContext;

    public FeatureController()
    {
        _mvcContext = new MvcContext();
    }

    public ActionResult Index()
    {
        // context calls must live inside an action method
        var parameters = _mvcContext.GetRenderingParameters<ParametersClass>();
        var dataFromItem = _mvcContext.GetDataSourceItem<ModelClass>();
        return View(dataFromItem);
    }
}

Glass now lazy loads by default, and the [IsLazy] attributes on model classes were removed. Other removed attributes are [SitecoreQuery] and [NotNull].

You will need to remove these model properties in order to re-implement them as methods, getting rid of the attributes (see Lazy loading after upgrading to Glass Mapper 5).

After upgrading Glass I faced a bunch of runtime errors that were difficult to identify. Eventually I realized they happened due to a lack of virtual property modifiers in model classes. With old versions of Glass Mapper, model properties still mapped correctly even without the virtual modifier, but not any longer.

[SitecoreChildren(InferType = true)]
public virtual IEnumerable<Model> InnerChildren { get; set; }

Tip: when installing a higher version of Glass Mapper via NuGet, make sure the files App_Start\GlassMapperSc.cs and App_Start\GlassMapperScCustom.cs are not overwritten with the defaults. Also, they should not appear in feature modules, but only in the Foundation module responsible for Glass Mapper.

Before upgrading Glass Mapper I would recommend reading through the great blog posts about the changes.

12. Custom Pipelines

Sitecore tries to reduce breaking changes where possible, but sometimes they are unavoidable, so as with all Sitecore upgrades, some custom code and configuration will need to be updated to stay compatible.

The HttpRequestArgs.Context property has been removed in favor of HttpRequestArgs.HttpContext - in fact, it was just renamed. This results in a breaking change for all of your custom pipeline processors, which you need to rewrite. I used my preferred PowerShell RegEx replace one-liner to update them all in one go:

gci -r -include "*.cs" | foreach-object {$a = $_.fullname; (Get-Content -Raw $a -Encoding UTF8) | `
foreach-object {$_ -replace 'args\.Context\.','args.HttpContext.' } | `
set-content -NoNewLine $a -Encoding UTF8}

Custom processors patched into a pipeline now require the mandatory resolve attribute.

<CustomPipeline>
    <processor type="ProcessorType, Library.DLL" 
        patch:instead="processor[@type='Sitecore.Mvc.Pipelines.Response.RenderRendering.GenerateCacheKey, Sitecore.Mvc']" 
        resolve="true" />
</CustomPipeline>

13. Link Manager

One more change in version 9.3 takes place with LinkManager, in the way we get the default options for building a URL:
var options = LinkManager.GetDefaultUrlBuilderOptions();
var url = LinkManager.GetItemUrl(item, options);

Long story short - its internals have been rewritten for the better and are mostly hidden from our eyes. What we need to know is the updated way of patching the default URL options. The old way of patching:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <linkManager defaultProvider="sitecore">
      <providers>
        <add name="sitecore">
          <patch:attribute name="languageEmbedding">never</patch:attribute>
          <patch:attribute name="lowercaseUrls">true</patch:attribute>
        </add>
      </providers>
    </linkManager>
  </sitecore>
</configuration>

now became more elegant:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <links>
      <urlBuilder>
        <languageEmbedding>never</languageEmbedding>
        <lowercaseUrls>true</lowercaseUrls>
      </urlBuilder>
    </links>
  </sitecore>
</configuration>

Read more about LinkManager changes to link generation in 9.3 (by Volodymyr Hil).

14. Caching changes

The solution I was upgrading had a publish:end event configuration to do HTML cache clearing for different site instances:
<events>
  <event name="publish:end">
    <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
      <sites hint="list">
        <site hint="apple">apple</site>
      </sites>
    </handler>
  </event>
  <event name="publish:end:remote">
    <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
      <sites hint="list">
        <site hint="apple">apple</site>
      </sites>
    </handler>
  </event>
</events>

Since 9.3, this behavior became the default after a publish. It actually works the other way around now - you have to explicitly disable HTML cache clearing if you need to, using the preventHtmlCacheClear site attribute:

<site name="apple" cacheHtml="true" preventHtmlCacheClear="true" … />

The previous configuration will break, so remove that whole section.

15. Forms

WebForms For Marketers, one of the most questionable modules, was deprecated in Sitecore 9.1 in favor of the built-in Sitecore Experience Forms. That raised questions on how to keep the data and migrate existing forms while performing the platform upgrade.

Thankfully, there is a tool to do exactly that: convert WFFM forms and data to Sitecore Forms. The WFFM Conversion Tool (by Alessandro Faniuolo) is a console application that provides an automated solution for converting and migrating Sitecore Web Forms For Marketers (WFFM) forms items and their data to Sitecore Forms. It takes WFFM data from a SQL or MongoDB database as a source and writes to the destination - the Sitecore Experience Forms SQL database.

16. Support Patches & Hotfixes

You will likely find some hotfixes or support patches that have been applied to your solution over time. Sitecore Support addresses issues by providing support DLLs and releasing hotfixes.

Each of them sorts out a certain issue, and you need to investigate whether it was resolved in the version you’re upgrading to. Use the 6-digit support code to find more details by searching the Sitecore Knowledge Base website.

When it comes to hotfixes, if they keep the original name of the DLL they replace, the knowledge base code can be found in the file properties dialog.

Once you have confirmed the issue is resolved, you may remove those DLLs along with any related configuration. This post aims to give an idea of how to identify hotfix DLLs within the Sitecore \bin folder so that you can approach them individually.

17. SXA

SXA is not an integral part of the platform, and it has its own upgrade guidance released along with each version, so I only touch on it briefly.

As is true for the rest of the Sitecore platform, most of the SXA changes took place with the 9.3 release. For me, the biggest change was the deprecation of NVelocity from 9.3 in favor of Scriban templates, which affects Rendering Variants - one of the most powerful features of SXA.

Also, since that version it has version parity with the hosting XP or XM platform.

Read more:

SearchStax and Sitecore: The top integration benefits for search optimization and personalization

18. Search

Both Lucene and Azure Search became obsolete and were removed. With Sitecore 10, Solr became the recommended technology for managing all of the search infrastructure and indexes for Sitecore. Managed Solr allows your developers to implement faster, spending more time building a better search experience and less time supporting search infrastructure.

Please migrate your solution; it might take significant effort.

Consider using SearchStax, which takes away the pain of installation, failover, security, maintenance, and scaling. SearchStax provides solutions for teams facing the most challenging issues of developing with Solr.

There are also a few commonly met issues when upgrading Solr. Sometimes you can come across a case of the core name mismatching the index name. This is a fairly simple issue and can be fixed with a config patch:

<configuration>
  <indexes>
    <index id="sitecore_testing_index">
      <param desc="core">$(id)</param>
    </index>
  </indexes>
</configuration>

The second issue relates to the configuration that defines a Solr index. If you are indexing all fields, that must be wrapped in a documentOptions tag of the correct type:

<defaultSolrIndexConfiguration>
  <documentOptions type="Sitecore.ContentSearch.SolrProvider.SolrDocumentBuilderOptions, Sitecore.ContentSearch.SolrProvider">
    <indexAllFields>true</indexAllFields>
  </documentOptions>
</defaultSolrIndexConfiguration>

19. Custom Content Databases

Custom Sitecore databases now require a Blob Storage setting, so you have to append the Blob Storage configuration for every custom database you use, in a similar manner to how it is done for the out-of-the-box ones (the relevant section can be copied from an OOB database definition).

20. Upgrading xDB and analytics

After the introduction of xConnect, there was a question of what to do with all the existing analytics data in MongoDB when upgrading to 9.X or 10.X. To address that, Sitecore created the xDB Migration Tool, which works on top of the Data Exchange Framework: it reads from MongoDB and writes to the xConnect server.

Keep in mind that if you have years of data on production, you can be looking at gigabytes of data to migrate. Be sure to take the appropriate precautions, like preventing regular app pool recycles, as the process can take days. If possible, discuss trimming the MongoDB data, or not importing it at all if it is not used. The tool uses a custom collection model that has to be deployed to both the xConnect service and the xConnect indexer service. After rebuilding the xDB search index you will get the data in the Experience Profile.
It has an optional Verification feature, which is in fact a standalone database that keeps a record of each entity submitted to xConnect.


Changes in Sitecore 10.X

1. Containers

Containers are immutable, which means any changes to the file system within the container will be lost as soon as the container restarts.

When switching to containers, your codebase will remain the same; however, there are minor changes. Debugging now works a little differently: you need to choose the relevant container and then pick a process within it. Instead of publishing artifacts into a web folder, we now build images and run containers from them.

With Docker, you no longer deploy just your application code. A container is now the unit of deployment, which includes the whole environment: OS and application dependencies. Therefore, your build process will have to be extended to build Docker containers and push them to the container registry.

Both Azure and AWS are suitable for running Sitecore in containers. They both provide managed Kubernetes services and other container hosting options.

Besides containers, you need to consider the other system components: SQL, Solr, and optionally Redis.

Luckily, Sitecore comes with health check endpoints (/healthz/live and /healthz/ready) out of the box, which you can use for this purpose.
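These endpoints map naturally onto Kubernetes probes; a hedged example, with the port and timings being illustrative:

```yaml
# Hedged example: wire the Sitecore health endpoints into K8s probes.
livenessProbe:
  httpGet:
    path: /healthz/live
    port: 80
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 80
  initialDelaySeconds: 60
```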

Kubernetes on its own will impose quite a steep learning curve.

You can learn more tips on migrating your Sitecore solution to Docker.

2. Upgrade the database

Databases were not 100% compatible between versions. Previously (before 10.1), one had to run an upgrade script against Core and Master, then attach both to a vanilla target instance and progress with the rest of the upgrade.

The update script ensured the schema and default OOB content got updated by applying all the intermediate changes between the two versions. This article explains in more detail how we upgraded databases before 10.1.

The idea that came into consideration was that shipping empty databases with the vanilla platform would eliminate the need for everyone who updates the DB to operate at an SQL admin level. But every version has its own unique set of default OOB items - where do we keep them?

Because of the container-first way of thinking, there was a clear need to store those somewhere at the filesystem level, so it was a matter of choosing and adopting a suitable data provider. Protobuf from Google was a perfect choice ticking all the boxes; moreover, it is a very mature technology.


3. Items as Resources

With that in mind, now having a database full of content, you can upgrade the version without even touching it. SQL databases stay up to date, and the rest of the default content gets updated by simply substituting the Protobuf resource files.

Sitecore called this approach Items as Resources.

I would strongly recommend looking into the Items as Resources plugin for the Sitecore CLI - instead of going through all the trouble of making asset images, you can create Protobuf files that contain all your items and bake them directly into your container image.

I wrote a comprehensive explanation about everything you wanted to ask about "Items-as-Resources" coming with the new Sitecore 10.1 - hope it answers most if not all questions.

You can create your own resource files from *.item serialization using the Sitecore CLI Items as Resources plugin (CLI version 4.0 or newer). You cannot delete items from a resource file - it is read-only - but this trick (by Jeroen De Groot) helps you "remove" items at the provider level.
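For illustration, creating a resource file with the CLI may look like the sketch below. The exact verbs and flags are an assumption from memory and depend on your CLI version, so check the plugin's --help output; the plugin name and output path are illustrative.

```shell
# Hedged sketch: add the plugin, then pack serialized items into a resource file.
dotnet sitecore plugin add -n Sitecore.DevEx.Extensibility.ResourcePackage
dotnet sitecore itemres create -o ./App_Data/items/master/custom.items.master
```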


4. Sitecore UpdateApp

But how do we upgrade let’s say 8.2 databases to 10.2?

From 10.1 onwards, databases come with no default content - so the upgrade becomes a matter of removing the default items from the SQL database, leaving the rest of the content untouched. That default content now comes from the actual items resource files placed in the App_Data\Items folder.

With the UpdateApp Tool, upgrading to Sitecore 10.1 comes down to purely replacing resource files, which naturally fits the container filesystem way of doing business.

That is where a new tool comes into play, please welcome Sitecore UpdateApp Tool!

This tool updates the Core, Master, and Web databases of the Sitecore Experience Platform. You must download and use the version of the tool that is appropriate for the version and topology of the Sitecore Experience Platform that you are upgrading from.

It also works with official modules resource files for upgrading Core, Master, and Web. (SXA, SPE, DEF, Horizon, both CRMs, etc.)

Sitecore UpdateApp Tool 1.2.0
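A hedged sketch of the workflow (the executable name and verb below are from memory and may differ between tool versions - check the readme bundled with your download):

```powershell
# Connection strings for Core, Master, and Web live in the tool's App_Config\ConnectionStrings.config.
# Running the update removes the outdated default items from the databases,
# leaving all your own content untouched.
.\Sitecore.UpdateApp.exe update
```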

5. Asset images

To containerize modules we now use Sitecore Asset images, instead of packages as we did before. But why?

First of all, you won't be able to install a package that drops DLLs, due to the instance's \bin folder being locked by the IIS user.

Secondly, containers are immutable and are assumed to be killed and instantiated - any changes to a container will go away with it.

Also, since a module is a logical unit of functionality with its own lifecycle and assets, it should be treated by the "works together, ships together" principle. Furthermore, as a package maintainer you provide sample usage Dockerfiles for the required roles, so that your users can pick the module up with ease.

Sitecore Asset images are in fact storage for your module's assets in the relevant folders, based on the smallest Windows image - nanoserver. You can create those manually, as I show on this diagram.
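A minimal sketch of such a Dockerfile (the module name and folder layout here are hypothetical, following the per-role structure described above):

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/nanoserver:1809

# Module assets laid out per role, ready to be COPY'd --from this image
# by the CM/CD role Dockerfiles of the consuming solution
COPY .\src\cm\content C:\module\cm\content
COPY .\src\cd\content C:\module\cd\content
COPY .\src\db C:\module\db
```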

With the recent platform, there are two ways of managing the items: either creating an Items Resource file or converting the package zip file into a WebDeploy package, then extracting its DACPAC file and placing it into the DB folder. In either case, the file structure will stay as pictured in the diagram above.


There is a tool called Docker Asset Image for a given Sitecore module (by Robbert Hock) which aims to automate Asset Image creation for you.

I would recommend going through these materials for better understanding:

Testing and going live

You will definitely need to conduct one round (or, if things go wrong, a series) of regression testing. The first question to clarify is: what exactly to test?

Depending on the upgrade scope, you will need to at least ensure the following:

  • Overall site(s) health, look & feel
  • Third-party integrations aren't broken
  • Editing experience is not broken
  • Most recent tasks/fixes function well

Another question: manual, automated, or a mix of both?

There is no exact answer - but without automation, regression testing becomes effort-consuming, especially when you need to repeat it again and again. At the same time, it does not make sense to automate everything (and sometimes it is not even possible).


Since you do the upgrade into a new Sitecore instance running in parallel to the existing one, you definitely need to conduct load testing before this new instance goes live. The metrics will not necessarily be better than what you previously had, as new Sitecore releases bring many new features and the number of assemblies within the \bin folder always grows. But as long as it meets the projected SLA - that should be fine.


Monitoring is also crucial when going live. Once the new Sitecore instance is up and running, the first thing to check would be the Sitecore logs - inspect and fix any errors that are being logged. We have a comprehensive set of automated UI tests that cover the majority of business use cases and are executed overnight. Also watch the metrics for anomalies.


Another exercise to undertake before going live would be security hardening - the process of securing a system by reducing its surface of vulnerability, which is large for feature-rich systems like Sitecore. Since older versions, Sitecore has shipped a Security Hardening Guide with considerations to be followed.


In case you have the luxury of organizing a content freeze for the go-live period - that is great. If not, you will need to pick up and take care of the content delta produced while going live. The best advice would be to use PowerShell Extensions to identify and package all the content changed from a specific timestamp onwards. Installing this package applies the delta to the updated instance.
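A hedged SPE sketch of that delta-packaging idea (cmdlets per Sitecore PowerShell Extensions; the cutoff date and package name are illustrative):

```powershell
# The moment content was last synced to the upgraded instance
$cutoff = [datetime]"2022-01-10T00:00:00Z"

# Find all content items updated after the cutoff
$delta = Get-ChildItem -Path "master:\sitecore\content" -Recurse |
    Where-Object {
        $_.Fields["__Updated"].Value -and
        [Sitecore.DateUtil]::IsoDateToDateTime($_.Fields["__Updated"].Value) -gt $cutoff
    }

# Pack the delta into an installable package to apply on the updated instance
$package = New-Package "ContentDelta"
$package.Sources.Add((New-ExplicitItemSource -Item $delta -Name "Delta" -InstallMode Overwrite))
Export-Package -Project $package -Path "content-delta.zip" -Zip
```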

The actual moment of going live is when you switch DNS records from the previous instance to the new one. Once done, traffic will be served by the updated instance. There are a few considerations, however:

  • you will need to reduce TTL values for the domain's DNS record well in advance to make the switch immediate; then bring them back once done.
  • you may consider gradually directing traffic (e.g. 5% of visitors) to the new instance and monitoring the logs, rather than switching it over entirely.

If everything went well and the updated instance works fine, you still need to keep both instances running in parallel for a certain amount of time - a maturing period. If something goes totally wrong, you can still revert to the old instance and address the issues.

That's it!

Hope my presentation helps your future Sitecore upgrades!
Finally, I want to thank all the event organizers for making it happen and giving me the opportunity to share the experience with you.

Content Hub Administrator and Developer Certification - tips on preparation and successful pass

To start with, I am happy to have completed the learning and successfully passed both the Content Hub Administrator and Developer certification exams. In this post, I will share some thoughts and insights on how you could also progress with these two.


Everything started for me with this tweet from the official Sitecore account:

With more and more large clients choosing Content Hub, I thought it would be a great time investment to learn how to work with it. One could start with the Content Hub sandbox, which is intended exactly for getting acquainted with the product and playing around with it.

Documentation is free and available here: Sitecore Content Hub's Functional documentation.

There is an official learning collection for Content Hub 4, which comes as a 12-month on-demand digital learning subscription - however, it is costly ($2k, unless you're a partner). Instead, I referred to the one for version 3, which became free at some point, as well as to lots of third-party resources and blogs.

Since Content Hub is a SaaS solution, one cannot install it locally in order to get familiar with it. Instead, you may need to spin up your own sandbox environment in order to practice. You can read about the options at this link (also, Sitecore MVPs get $50 off the bill).

When it comes to the actual exams, it is not possible to book either of them directly from Kryterion Webassessor - it will prompt you that both exams are available as voucher-only exams. But no worries: that just adds another loop through the official study guides, which you purchase instead of the desired exam, at the same price as the exam ($350), and you'll be given an exam voucher valid for 3 months at the end of the study guide. This is an additional step, but it makes sure you're on the right path and prevents an unsuccessful attempt: there is a 10-question quiz at the end that mimics the exam questions (in fact, much simpler than the real exam), of which you have to answer at least 8 out of 10 correctly in order to obtain a voucher code. I would strongly discourage you from taking the exam unless you score 10 out of 10 and are strong in the competencies below.

If you're lucky enough to be a Sitecore MVP (as I am), you'll get a generous discount of 75% off the price. Yes, you pay only a quarter:

You will have to answer 50 and 59 questions for the Administrator and Developer exams, respectively, with a pass rate of 80%. This is a "closed book" proctored exam, which means that during the whole exam you cannot refer to any materials, notes, browser tabs, etc. A built-in laptop camera is enough, but the proctor will be monitoring you the whole time. They are often alerted when test-takers look away from the monitor and can pause the exam, asking you to turn the camera around and show the whole environment to ensure no cheating takes place. Another thing they are cautious about is glasses (especially tinted ones) - those could be smart glasses, and the proctor must ensure they aren't.

The Sitecore Content Hub Administrator Certification Study Guide covers the competencies and the expectations for each area of the exam. If you follow this guide and cover all the areas mentioned, you should be able to pass the exam.

  • Schema Design
  • UI Configuration: Search Component and Mass Edit
  • Branding and Theme, Custom Home Pages
  • Media Processing
  • Digital Rights Management
  • Data Import and Export
  • Security: Basic and Advanced
  • Reporting
  • Enterprise Domain Model: Schema and Metadata Management
  • UI and Advanced Pages
  • Entity Printing
  • Create and Configure a New Workflow

In addition to the above, there are Developer competencies added at Sitecore Content Hub Developer Certification Study Guide:

  • Metadata Processing Scripts
  • Develop External Page Components
  • Develop Web-enabled Action Scripts
  • Develop Triggers, Actions, and Action Scripts to Implement Custom Business Logic in Response to Entity Changes
  • Implement User Sign-in Scripts 
  • Develop LINQ Queries in Combination with Action Scripts that Run In-Process and Out-of-Process

Outside of the official guidance, there are lots of helpful third-party resources:

Finally, last but not least, I'll share my own ideas and some interesting facts.

Firstly, both exams share a large proportion of the same competencies - up to 70% - and at times it looks like the Administrator test is a subset of the Developer one. Personally, I met a fair number of identical questions in both exams. That means if you want to cover them both, it would be highly beneficial to start with the Administrator exam and, once complete, switch to the Developer one. But you don't have to - just choose the one that suits you best.

Next, the topics and competencies. Learn and understand the schema! Questions about it were widespread across both tests. Without a clear understanding of it, passing these exams is impossible.

You must also know:

  • how to define relation types and cardinality for the desired use case
  • work with metadata and add new metadata properties
  • option lists and how to use them with M.Asset
  • taxonomies and all about them
  • Search relevance and boosting
  • full-text search and how to configure new metadata for it
  • how to set up security for the properties
  • rules and conditionally displayed fields, and an understanding of scopes (i.e. Apply All vs Apply Any)
  • exporting and importing (from Excel)
  • what are DRM contracts and right profiles, how to apply those to assets
  • anything related to workflows: state flow manager, etc.
  • how to work with media: converting, streaming, or download, how to work with renditions
  • how to implement custom logic triggered by entity changes
  • context and getting data out of it 
  • the difference between action scripts that run in-process and out of the process
  • how to implement authentication-related code (e.g. checking how a user was authenticated, locally or externally)
I found nice guides from Navan in the form of a walkthrough:

That is just a small subset of what you are expected to be confident with in order to successfully pass the exam (the last four bullets relate to the Developer exam only). Speaking of the Developer exam, please pay attention to the Scripting API Examples section, as some of the code-based questions draw on the provided code. Read and understand the code!

Hope this helps, and I wish you all the best, both in passing the exams and in working with this powerful SaaS solution!

Sifon 1.2.6 released with Sitecore 10.2 and Windows 11 support

Supports Sitecore 10.2. Supports Windows 11
SUPPORTS 10.2 ON Windows 11

It took me a month longer to conduct all the required testing of the software and the plugins I supply (in fact, I was also stuck in Africa over the festive break), but finally the new version of Sifon is available!

You can download it from the Sifon.UK website, but the easier option would be using the Chocolatey package manager:

cinst sifon

For the moment it is the only software that can install Sitecore on Windows 11. Let's take a look at Release Notes for this version:

  • supports Sitecore 10.2, supports Windows 11. Supports Sitecore 10.2 ON Windows 11!
  • comprehensive testing has been done - many bugs fixed and much code refactored
  • plugins were updated, consolidated with main functionality and each other
  • it is now possible to mark a plugin with ### Requires Profile: false to make it run even without an active profile
  • a few features were temporarily suspended until they get improved
  • lots of new plugins including installers for the latest Horizon, SXA, Publishing Service 6.0 etc
  • added support for Solr 8.8.2 and fixed a minor bug in Solr (un)installation script, making it genuinely universal
  • added syntax to present (double)clickable URLs right in the output to help users with supporting info
  • added a Get-SitecoreVersion function that returns either an object or a string with the current XP or XM release version
  • added a Verify-NetCoreHosting function to ensure the minimum required .NET Core version (passed as a parameter) is present on the target system
  • since all SQL Server activity now goes through the SqlServer PowerShell module (as opposed to SQLPS), it has been added to Sifon's prerequisites
  • and many more minor issues were tested and fixed
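For illustration, a plugin opting out of the profile requirement might start like this (the marker comes from the release notes above; the body is a hypothetical example):

```powershell
### Requires Profile: false

# Since this plugin does not touch a Sitecore instance,
# it can run even without an active profile selected in Sifon
Write-Output "Checking local prerequisites - no Sitecore profile required..."
```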

I also recorded a video showing how easily one can install Sitecore without having anything installed on the target machine (except SQL Server, which could of course be accessed over the network):

Hope you find it useful!

Applying vulnerability fix to containerized environments

This advice was originally proposed by Peter Nazarov (Twitter, LinkedIn), who kindly asked me to give it a bigger spread.

The biggest question of the day is whether the fix has already been applied to all official Sitecore container images, so that we can just pull the new Sitecore images and rebuild our own container images to apply the patch.

The KB article offers WDP and ZIP package fixes but says nothing about containers, as if containers were not supported by Sitecore:

Critical vulnerability applicable to all Sitecore versions related to XSS. 

This issue is related to a Cross Site Scripting (XSS) vulnerability which might allow authenticated Sitecore users to execute custom JS code within Sitecore Experience Platform (XP) and Sitecore Managed Cloud.

We encourage Sitecore customers and partners to familiarize themselves with the information below and apply the Solution to all affected Sitecore instances. We also recommend that customers maintain their environments in security-supported versions and apply all available security fixes without delay.


So below are some findings:


So to apply the fix for your Docker images you need to copy the patch files from the following Sitecore Docker assets images:

  • for XM1: scr.sitecore.com/sxp-pre/sitecore-xm1-assets:10.2.1.007064.169-10.0.17763.2366-1809
  • for XP0: scr.sitecore.com/sxp-pre/sitecore-xp0-assets:10.2.1.007064.169-10.0.17763.2366-1809
  • for XP1: scr.sitecore.com/sxp-pre/sitecore-xp1-assets:10.2.1.007064.169-10.0.17763.2366-1809

For example, if you are running XM 10.2.0, you would take the XM1 assets image listed above.


Inside this Sitecore Docker assets image you will find the C:\platform\ directory, which contains the directories for the corresponding Docker images that you need to patch:

  • \platform\cd
  • \platform\cm
  • \platform\id (it is empty and can be ignored)
You will need to copy the content of those directories to the filesystem root C:\ of the corresponding container.

For your \docker\build\cm\Dockerfile you would need a couple of new lines:
...
FROM scr.sitecore.com/sxp-pre/sitecore-xm1-assets:10.2.1.007064.169-10.0.17763.2366-1809 as kb1001489
...

FROM ${BASE_IMAGE}
...
WORKDIR C:\
COPY --from=kb1001489 /platform/cm/ ./


You would need to make similar changes to your \docker\build\cd\Dockerfile, with the only difference that you copy the CD patch files instead of the CM ones in the last line:
...
COPY --from=kb1001489 /platform/cd/ ./


Of course, you can introduce an .env variable for scr.sitecore.com/sxp-pre/sitecore-xm1-assets:10.2.1.007064.169-10.0.17763.2366-1809 and pass it to your Dockerfiles as an ARG.
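A sketch of that wiring (the variable name ASSETS_PATCH_IMAGE is my own choice):

```dockerfile
# escape=`
# .env:           ASSETS_PATCH_IMAGE=scr.sitecore.com/sxp-pre/sitecore-xm1-assets:10.2.1.007064.169-10.0.17763.2366-1809
# docker-compose: pass it through as a build arg, e.g.  args: { ASSETS_PATCH_IMAGE: "${ASSETS_PATCH_IMAGE}" }

ARG ASSETS_PATCH_IMAGE
FROM ${ASSETS_PATCH_IMAGE} as kb1001489

ARG BASE_IMAGE
FROM ${BASE_IMAGE}
WORKDIR C:\
COPY --from=kb1001489 /platform/cm/ ./
```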


Note: this patch changes the version of your Sitecore 10.2.0 instance to 10.2.1: Sitecore.NET 10.2.1 (rev. 007064 PRE) (see the screenshot below). Seeing this happen, it sadly feels that Sitecore is unlikely to release 10.2.0 Docker images that include this patch - it would cause versioning issues:


The example above is useful for learning how to apply a Docker assets image-based patch to your containers.

However, this Cumulative fix for Sitecore XP 10.2 patch changes a lot of DLLs to new versions that are not exposed via the NuGet feed, and it changes your Sitecore version to a pre-release version (which does not exist). This brings several challenges. Therefore, in this specific case I would prefer to apply just the standard .zip file-based fix, as per the Notes section on the page:

For Sitecore XP 10.1 and later, if it is not possible to apply the cumulative fix (pre-release update), the following patch can be applied alternatively: Sitecore.Support.500712.zip.

Cumulative fix for Sitecore XP 10.2

  • Changes multiple DLL versions (they are not available in the NuGet feed)
  • Changes the Sitecore version to a pre-release version (the next version, which is not released yet)

Sitecore.Support.500712.zip

  • Deploys new Sitecore.Support.500712.dll
  • Overwrites the vulnerable \sitecore\shell\Applications\Content Manager\Execute.aspx page file so that it runs from Sitecore.Support.500712.dll, which contains the fix.

Merry Christmas and happy New Year!

Every year I create a special Christmas postcard to congratulate my readers on the oncoming new year, full of changes and opportunities. I wish you all the best in 2022!

My artwork for the past years (click to expand)
2021


2020


2019


2018


2017


2016


Troubleshooting Marketing Automation service start for XP 10.2 installation on Windows 11

The problem

When installing Sitecore XP 10.2 on IIS 10.0.2*** (like the one supplied with Windows 11), you will likely hit an error with the Marketing Automation service being unable to start. The SIF error starts with:

Install-SitecoreConfiguration : Failed to start service 'Sitecore Marketing Automation Engine'

That occurs due to a TLS 1.3 failure which prevents the service from communicating with xConnect.

The solution

Just download and run this script prior to executing the XP0-SingleDeveloper.ps1 main installation script. It will disable TLS 1.3 over TCP for the local IIS (beware: it may affect other, non-Sitecore sites).

Alternatively, here's the code:

# Create the TLS 1.3 server protocol key if it does not already exist
New-Item `
   'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.3\Server' `
   -Force | Out-Null

# Disable TLS 1.3 for inbound (server-side) connections
New-ItemProperty `
   -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.3\Server' `
   -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null

# Ensure TLS 1.3 also stays disabled by default for this protocol
New-ItemProperty `
   -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.3\Server' `
   -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null

Hope that helps!