Experience Sitecore ! | More than 200 articles about the best DXP by Martin Miles


Sifon - the easiest way to install Sitecore XM/XP on your local machine

Hey folks, if you have not heard about Sifon for Sitecore, you must definitely check it out. It is a veritable Swiss army knife for local Sitecore development, and you'd really like to learn why.

But here's a demo of how straightforward Sitecore installation is using the latest Sifon 1.3.3 release - you don't need to do anything at all other than click a few buttons in the UI. Below are the new features of this version:

  • added support for 10.3 version of the platform (downloads, Solr, dependencies, etc.)
  • added support for XM topology starting from 10.3
  • added SQL Server smooth installation in a single click
  • added convenient defaults so that you don't need to type anything at all, should you prefer the default settings
  • tested well on Windows 11
This tool is a gem for marketers, business analysts, and other non-developer groups who may need to set up Sitecore on their local machines but do not want to mess with Docker and containers. Single-click smooth installation is what they want!
The installation itself is simple - either download the installer from the official website or, even easier, install it from the Chocolatey gallery with this command:

cinst sifon
This 15-minute-long video shows it all in action - installing Sifon, then downloading and installing Solr, SQL Server, and Sitecore XP 10.3 with Publishing Service 7.0, SXA 10.3, and even Horizon from 10.2 - it works with 10.3 perfectly well and installs in a single click, like everything else with Sifon:

Upon completion, Sifon will also automatically set up and activate the profile for the newly installed Sitecore instance (in the above image it is named xp and is also shown in the window title). Profiles identify the active environment for the rest of Sifon's functionality and plugins to operate against. One can easily switch active profiles from the dropdown and the Profile editor menu.

I really hope this wonderful tool saves you lots of time and effort. Thanks for watching!

Update: occasionally, some rare systems report errors during prerequisites installation. The error message complains about being unable to identify and run an AppPool task and is caused by the mandatory system restart required by IIS. On such systems, Sifon will work as expected after a restart and re-run. As an alternative, you may run the command below and restart your computer prior to using Sifon to ensure a smooth installation experience:
Enable-WindowsOptionalFeature -Online -FeatureName "IIS-WebServerManagementTools" -All

Sitecore Edge considerations for sitemap

A quick one today. We recently came across interesting thoughts and concerns about using Sitecore Edge. As you might know (for example, from my previous post), there are no more CD servers when publishing to Sitecore Edge - think of it as just a GraphQL endpoint serving out JSON.

So, how do we implement a sitemap.xml in such a case? Brainstorming brought several approaches to consider:

Approach one

  • Create a custom sitemap NextJS route
  • Use GraphQL to query Edge with a search query; here we would have to paginate through items in increments of 10
  • Cache the result on Vercel side using SSG
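As a rough sketch of approach one - note that the query function signature, page shape, and URL field below are my own illustration under assumptions, not the exact Edge search schema:

```typescript
// Sketch of a custom sitemap route for a Next.js app querying Sitecore Edge.
// ASSUMPTIONS: the SearchPage shape and QueryFn contract are illustrative;
// the real Edge search query returns a different structure.

type SearchPage = { urls: string[]; hasNext: boolean; endCursor: string | null };
type QueryFn = (first: number, after: string | null) => Promise<SearchPage>;

// Page through Edge search results in increments of 10, as described above.
async function fetchAllUrls(query: QueryFn, pageSize = 10): Promise<string[]> {
  const urls: string[] = [];
  let cursor: string | null = null;
  let hasNext = true;
  while (hasNext) {
    const page = await query(pageSize, cursor); // one GraphQL round-trip per page
    urls.push(...page.urls);
    hasNext = page.hasNext;
    cursor = page.endCursor;
  }
  return urls;
}

// Render the collected URLs as a sitemap.xml document.
function buildSitemapXml(urls: string[]): string {
  const entries = urls.map((u) => `  <url><loc>${u}</loc></url>`).join("\n");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${entries}\n</urlset>`
  );
}
```

In a real route you would implement the query function as a fetch call to your Edge GraphQL endpoint (authenticated with your API key), serve the XML from a sitemap.xml route, and let Vercel cache the result via static generation.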

Approach two

  • Create a service on the CM side that returns all published items/URLs
  • This service will only be accessible by an Azure function, which will generate a sitemap file and store it in a CDN
  • The front-end would then access this file and serve its content (or similar)

Approach three

  • Generate all the sitemaps (if there is more than one) on CM, then store each of them in a single text field
  • Return them via Edge, using GraphQL, to the front-end, which handles sitemap.xml

Then I realized that SXA Headless boasts SEO features OOB, including sitemap.xml. Let's take a look at what it does in order to generate sitemaps.

With 10.3 of SXA, the team has revised the Sitemap feature, providing much more flexibility to cover as many use cases as possible. Looking at the /Sitecore/Content/Tenant/Site/Settings/Sitemap item, you'll find lots of settings for fine-tuning your sitemaps depending on your particular needs. The CM crawls websites and generates sitemaps, which then get published to Sitecore Edge as blobs and proxied by a Rendering Host via GraphQL. When search engines request the sitemap of a particular website, the Rendering Host gives them exactly what has been asked for. That is actually similar to approach three above, with all the invalidation and updates of sitemaps also provided OOB.

This gives out a good amount of options, depending on your particular scenario.

Sitecore 10.3 is out! What's new?

On December 1st, after more than a year of hard work, Sitecore released version 10.3 of the XM and XP platforms.

Please note that Experience Commerce sales were discontinued after version 10.2, so it is unclear whether there will be any more XC releases. Historically, XC releases followed the platform releases with a lag of several weeks.

Let's take a look at what Sitecore put into the latest release.

With version 10.3, Sitecore moved in the direction of unifying its XM/XP platforms with XM Cloud. The two biggest proofs of that are Headless SXA and the integrated webhook architecture being part of 10.3 - similar to XM Cloud.

Headless SXA

As you may have heard, Headless SXA became a first-class citizen for XM Cloud. Now we get Headless SXA with 10.3 and new Next.js Headless SXA components, such as Container, Image, LinkList, Navigation, PageContent, Promo, RichText, Title, etc. The SXA development team did an incredible job aiming to achieve feature parity for their product between XM Cloud and the XM/XP platforms.

Because of that, the team sadly had to retire several features that do not fit nicely into the new concept - that's why Headless SXA no longer uses Creative Exchange. The same is true for Forms - you will not be able to use them with Headless SXA out-of-the-box; there is, however, documentation on how to use forms with Next.js, and one can also consider a dedicated forms builder. At the same time, SXA Headless brings some new concepts, like Page Branches and site-specific standard values. You may also want to leverage the nextjs-sxa starter template (installs with npx create-sitecore-jss --templates nextjs,nextjs-sxa).

Among the new features, I like the ability to duplicate pages without subpages by right-clicking a page, which may be helpful for cloning landing pages that have multiple subpages without the unwanted routine of manually deleting the cloned subpages afterward. It also works well with SEO concepts such as sitemaps, robots.txt files, redirect items and maps, as well as error handling (for generating static 404 and 500 pages) - all extremely useful for almost any headless site.

In general, if you are planning a new implementation today and feel positive about using SXA, the best advice would be to download 10.3 and use the new headless SXA with it. That immediately brings you into the headless world of 2023 and drastically simplifies the further upgrade options, not to mention the potential migration to the XM Cloud.

Webhooks

Webhooks are a new introduction to the XM/XP platforms; other Sitecore SaaS products - XM Cloud, Content Hub, OrderCloud, etc. - already use them. But first, what are webhooks? A webhook is just an HTTP request, triggered by some event in a source system and sent to any destination you specify, carrying a useful payload of data. Webhooks are automatically sent out when their event fires in the source system. Basically, they are user-defined HTTP callbacks triggered by specific events. As per the documentation, there are three types of webhooks.

A good example of webhook usage is validating and, if needed, canceling workflow transitions.
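To make that example concrete, here is a minimal sketch of the decision logic a webhook validator endpoint might run. The payload and result shapes are my own simplified assumptions for illustration, not the documented Sitecore contract:

```typescript
// Sketch of validation logic for a webhook that can block workflow transitions.
// ASSUMPTIONS: WorkflowTransitionPayload and ValidationResult are simplified,
// hypothetical shapes - check the Sitecore 10.3 webhook docs for the real contract.

interface WorkflowTransitionPayload {
  itemName: string;       // name of the item being moved through workflow
  previousState: string;  // workflow state before the transition
  nextState: string;      // workflow state the editor is moving the item to
}

interface ValidationResult {
  allow: boolean;  // false cancels the transition
  message: string; // reason surfaced back to the editor
}

// Example rule: items still named "Untitled" must not be approved.
function validateTransition(p: WorkflowTransitionPayload): ValidationResult {
  if (p.nextState === "Approved" && p.itemName.toLowerCase().startsWith("untitled")) {
    return { allow: false, message: "Rename the item before approving it." };
  }
  return { allow: true, message: "OK" };
}
```

The source system would POST the payload to your endpoint, and the endpoint would respond with the validation result, canceling the transition when `allow` is false.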

GraphQL Authoring and Management API

Another great new feature is the GraphQL Authoring and Management API. This API provides a GraphQL endpoint for managing Sitecore content and performing custom authoring tasks - almost any function that previously could only be done through the Sitecore user interface. That means we can now automate operations around items (including media), templates, and search, as well as manage sites. Unfortunately, user management is not yet supported.
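For illustration only, here is a tiny helper composing a create-item mutation document. The mutation and input field names follow the general shape of such an API but are assumptions on my part - verify them against the actual Authoring API schema before use:

```typescript
// Sketch: composing a GraphQL mutation document for creating an item via the
// Authoring API. ASSUMPTIONS: "createItem" and its input fields are illustrative
// names - confirm against the published schema.

interface NewItemRequest {
  name: string;       // item name to create
  templateId: string; // GUID of the template
  parentPath: string; // path (or ID) of the parent item
  language: string;   // language version, e.g. "en"
}

// Build the GraphQL mutation text; sending it is left to your HTTP client.
function buildCreateItemMutation(r: NewItemRequest): string {
  return `mutation {
  createItem(input: {
    name: "${r.name}"
    templateId: "${r.templateId}"
    parent: "${r.parentPath}"
    language: "${r.language}"
  }) {
    item { itemId path }
  }
}`;
}
```

You would POST the resulting document to the Authoring GraphQL endpoint with a bearer token (obtainable via Sitecore CLI / Identity Server), which is what makes scripted content automation possible.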

Sitecore Forms

Forms is a feature used on almost every solution I have worked on, so it is a pleasure to see the new Embeddable Forms Framework. Using it, one can add a Sitecore Form to any webpage, including pages that are not running on a Sitecore application - similar to what FXM allowed. The good news is that an embedded form supports custom form elements and will not mess with any existing styles on a page, as it is powered by Tailwind CSS. However, to benefit from Embeddable Forms you must have at least Headless Services 21.0.0 in place, to provide the Layout Service and the endpoint for data submission.

xConnect

There is a new Data Export Tool that exports both contacts and interactions from the data database into files. It supports both Azure Blob and File Storage providers for your deployments, and can also write into a network folder, which is helpful for local instances.

Database Encryption

At the storage level, Transparent Data Encryption can now be used with Microsoft SQL Server to protect critical data with data-at-rest encryption. In simple words, the data gets encrypted prior to being written to the databases, so the physical SQL tables contain already-encrypted data. On read access, the data gets transparently decrypted for authorized SQL users. This significantly protects stored information, prevents data breaches, and helps comply with regulatory requirements for sensitive data.

What raised the event?

An interesting new feature helps us identify which database raised the publish:end / publish:end:remote events, which will simplify updating the cache on remote CD instances.

Sitecore CLI

Version 5.0 of the CLI has been around for a while since the XM Cloud release; now, as version 5.1.25, it has also become an integral part of the 10.3 platforms. It supports Linux-based environments, features publishing to Edge, and adds a few more new commands. It also employs integrated telemetry so that the developers can improve the CLI even further; note, however, that telemetry can raise security compliance concerns in governed environments.

What are the additional features we will see with the 10.3 release?

  • With version 10.3 of the platforms, Headless Services v21 comes into play. You may find a new starter kit for your new projects on Next.js 12.3.x over React 18.
  • Sitecore Host (along with components relying on it, such as Publishing Service and Identity Server 7) was updated to .NET 6.0, an LTS version of the framework with improved performance.
  • The supported version of Solr is now 8.11.2.
  • Those using EXM may now benefit from OAuth authentication with third-party services for custom SMTP.
  • Horizon, unfortunately, won't get any updates beyond version 10.2. Although it technically still works with the 10.3 platforms, Sitecore discourages using it with 10.3 or later.
  • Management Services 5.0, which offers publishing to Experience Edge, is now capable of publishing a single item, among a few more improvements.
  • Search has received numerous improvements: for example, searching by ID and path, and searching for terms not enclosed in quotes now returns both exact and possible matches.
  • Windows Server 2022 support was promised but is slightly delayed, until January 2023. I assume support also relates to 2022-based containers in the first place, rather than underlying infrastructure.
  • More than 160 other issues submitted by customers were fixed and released in 10.3!


You can download and install Sitecore 10.3 right now - please feel free to share your thoughts on it!


Sitecore 10.3 dashboard

My 2022 Sitecore contributions

One more year has passed so quickly! I have just submitted my MVP application for the upcoming year, and it actually took me a while to manage my contributions using the new format with its mandatory date selector. Not everything submitted well there for me, so I decided to duplicate my submission in an open format - maybe that will in some way help other new and potential applicants.

To start with, I need to mention that this year was very special for me - I relocated from England to the USA for family reasons. But since my family is non-US, getting work authorization took an insane amount of time and effort: it took me more than half a year to find a great place to work that could sponsor an O-1 visa for me, the only possible way to start working. Luckily, the visa was granted, and I recently started my new career path.


Learning and certification

So, having lots of time on the bench without any income, I still used my time wisely. I started the year by collaborating with the Learning@Sitecore team (they made really good progress this year!) as well as learning new things myself. Between January and May, I cleared the certification exams for each of the new Sitecore SaaS offerings. I also shared my learning and preparation in these blog posts: Content Hub, OrderCloud, and earlier there was also XP10.


SUGCON

But my biggest contribution came in the form of a SUGCON presentation in Budapest at the end of March. I picked a difficult topic that lay in a shadow area - performing Sitecore XP/XM upgrades. There was a lot of contradictory information on that subject, and I had previously suffered through it myself on several projects. So I gathered all my knowledge, as well as what I learned from colleagues and other MVPs, and made a universal guide to approaching and performing platform upgrades. All the traps are mentioned, and following my guide may cut the cumulative effort by a factor of 2-3.

I also lodged good proposals for SUGCON ANZ and Symposium; however, those weren't chosen, for whatever reason.


Sifon

If you haven't heard about it, Sifon is a must-have open-source tool for any Sitecore developer that simplifies most of your day-to-day DevOps activities. Beyond its OOB "install-backup-restore" features, empowered by a plugin system, Sifon turns into a real Swiss army knife. Plugins reside in a separate repository that Sifon pulls with one click, and everyone is welcome to create and pull-request their own plugins.

After XP 10.2 got released, I updated Sifon to support this latest version. It took me a few weeks over December and January to troubleshoot installing it on Windows 11; eventually it all got implemented, and a new version 1.2.6 became available (along with updated plugins). Thus, Sifon became the only GUI tool that could download and install XP on Win11 in just a few clicks - even easier than in containers.


Speaking at usergroups

I was invited to speak at three Sitecore user groups.


Sitecore Discussion Club

I organized the Sitecore Discussion Club only three times this year. Despite being a wonderful event format compared to traditional Sitecore user groups, it unfolds most of its power in an offline setting. With both founders having moved outside of the UK and living in totally different time zones, maintaining online events has become a challenge. So, much sadly, we're thinking of closing it next year unless someone from the UK community takes it over from me.


Blogging

Because I did not have a work visa for such a long period, I was less exposed to actual challenges and had fewer topics to blog about. There are still ~10 posts over the past year, and my blog homepage nicely indexes all the posts as they were written, in reverse chronological order.


Sitecore Telegram

This year, more effort went into Sitecore Telegram - a great channel with regular insights about Sitecore products: useful tips, approaches, ideas, and concepts that are sometimes difficult to come across for Sitecore technology professionals - mostly developers, architects, and strategists - with around 800 subscribers in total. In 2022 I decided to expand Sitecore Telegram with 4 additional channels exclusively dedicated to the new SaaS products, to promote those directly and channel the audience's attention.


Podcasts

In the summertime, I was invited to (and took part in) two podcasts.


Sitecore Link project
When it comes to the Sitecore.Link project - it could seem semi-abandoned at first glance. That is true, but only partially. I still keep a fat bill in my pocket for the underlying infrastructure and anticipate changes to save a lot on ownership costs. At this moment I am only adding some new content to the existing instance running on Sitecore 9.3 JSS. Big housekeeping is in scope to revise existing material that is no longer current. For quite a long time I was thinking of rebuilding it with the proper technology, but nothing was quite suitable... until now! Next.js is a 100% ideal technology for this project. The backend is to be rebuilt for XM Cloud and to be interchangeable with 10.3+Edge, and one of my biggest ambitions for 2023 is to document the whole path as a series of tutorials for typical Sitecore backend developers to "convert" into the new headless world.


Awesome Sitecore
Awesome Lists are the legendary curated lists about almost everything in the world; if you have never come across them, please spend some time to see what they feature. I have been the creator and curator of the Awesome Sitecore list for the past 3 years, classifying all the important and valuable open-source repositories we (as the community) have created so far. This is the best place to look up code for almost any Sitecore aspect one may be coding. The repository has gained 54 stars so far, which is a great indicator of its value!


Organizing Los Angeles Sitecore User Group

After relocating to Southern California, and following my commitments to (plus a presentation at) the Los Angeles Sitecore User Group, I was invited to become a co-organizer of this event. I proudly accepted and am now the person in charge of this quarterly event. The next big event we run takes place on December 1st, just a few hours after the MVP application closes, so technically it does not count as a contribution for 2022.



MVP Summit
That is, in my opinion, the best perk of being a Sitecore MVP, so I simply couldn't miss it! I shared some of my thoughts and vision (and some concerns as well) with the development teams and generally spent a great time learning lots of valuable information from other MVPs and the Sitecore product teams.



Other MVP-related activities

Over the past several years, I have had a pattern of using my half-hour 1-to-1 with the MVP Program to share my vision on platform development, business adoption, some Sitecore-related hot topics, and, of course, feedback for the Sitecore teams. I really hope it has been shared with the teams, helping them improve. This past December was no exception, and we had a very productive conversation, as usual.

This year I took part in the Sitecore Mentor program (as a mentor). I wish my mentee and I had spent more time together. My biggest takeaway is that unless you set up a recurring meeting invite that works for both of you, planning stays extremely difficult (given that we're both grown-ups with families and tons of responsibilities).

I also participated in MVP webinars and monthly MVP Lunch events as much as I could (given that my new timezone is a bit restrictive).

Lastly, as in all previous years, I helped out MVP Program with new applications and was happy to spot a few superstars on Sitecore horizon (no, not that one).


Future plans

Speaking about the plans for the next year, my new position requires me to get deeper with Sitecore and its products, with much more interaction and some shift-n-drift into strategy. I feel extremely positive about it!

Other than that:

  1. With no doubt, XM Cloud will be the headliner of 2023 - not just for me, but for all of us. Now, after finally getting access to the system, I am eager to start blogging it backwards and forwards.
  2. SmartHub (especially the CDP part of it) has huge undiscovered potential that I am anticipating exploring; luckily, my new employer specializes in it.
  3. Content Hub is another product (also well-practiced at my company) for me to master, discover and blog about.
  4. The XP platform is still with us, with lots of support required. Learning and documenting upgrade paths will be a hot topic in 2023.

My learnings from project management and lead experience mistakes I came across

Recently, I went through a series of interviews with almost every one of the top 10 Sitecore Platinum partners in the US. That was a wonderful experience; I learned lots about all these companies and their way of doing business. Moreover, I can definitely say now that organizations, in their culture and temper, are like humans - some are proactive extroverts, while others are very pedantic, process-oriented nerds.

One of the companies put me through a set of ~1.5-hour-long interviews with a vice president peppering me with lots of challenging but interesting questions, mostly management- and business-related. As usually happens, I kept thinking about it well beyond the interviews, and those thoughts resulted in this post. I compiled them into a single solid dump of my experience, so here we go.


Presales

Take it as a rule of thumb: you have to spend resources on presales! The evaluation should be well detailed, so as not to receive a "negative profit" later. I remember a project where the assessment was done by an architect who did not allocate enough time to study the Customer's processes in detail. He conducted most of the work "according to the standard", using the "best practices". As a result of the poor-quality assessment, the project received a "negative margin".

At the pre-sales evaluation stage, it is necessary to provide a list of possible additional work (volume, cost, time). For example, a change in some indicators may entail a restructuring of the entire model, which can take a significant amount of time, require the involvement of significant resources, and cost decent money.


Integrations

Integration evaluation also refers to the pre-sales evaluation of an upcoming project, but I emphasized this point intentionally.

First and foremost, you need to make sure that the systems do integrate with each other. Moreover, at the presale stage, it is crucial to find out as many details about the upcoming integration as possible: what systems, what protocols, buses, etc.

In another project I took part in, the incompatibility of the external systems was discovered only at the integration stage, after work had started. That resulted in an unhappy customer and additional labor costs for developing and configuring an alternative. Don't underestimate that!


Agreement

Probably the most important stage with all the parties to be involved. Mistakes in the agreement may cost a lot!

1. It is crucial to ensure that the terms in the Agreement match your resource plan. Why? It often happens that, even at the presale stage of negotiating with a potential Customer, some additional clauses get added to the Agreement, and these changes are forgotten and never reflected in the resource plan. This is what happened in our project. The PM did not take part in the process of signing the Agreement, different participants each added something of their own, and no one checked the Agreement against the actual resource plan. As a result, the misalignment between the Agreement and the resource plan led to time and labor losses that had not been taken into account. Just doing an extra check would have eliminated that loss, saved time, and retained customer satisfaction!

2. When it comes to client training, it makes sense to describe the learning process in detail in the Agreement. You may include the number of hours, topics, the number and type of users - also specifying business and/or technical users - as well as the list of attached instructions (including their number, titles, etc.). No need to be ultra-precise here, but the scope should be agreed upon and signed. Once, our contract stated that we would provide instructions and training. But it turned out that the list of instructions and the amount of training planned by an Architect (why him, BTW?) mismatched the Client's expectations, and they demanded much more. This case is in fact a subset of the following point.

3. Try to avoid wording that could be interpreted differently. Inaccurate wording may lead to ambiguity, with the Customer insisting on their own understanding of the Agreement's inaccuracies. According to the Agreement, during the data collection phase the team was supposed to hold an introductory workshop. However, it turned out that no one knew what exactly should be shown. The customer expected that workshop to demonstrate the system in action with their actual data, so that potential users would understand how the system would work. As a result, preparation took a lot of extra time not included in the initial project estimation. The lesson learned: when there is an uncertain clause in your contract with an unclear meaning or unknown implementation, it is better to immediately clarify what exactly this clause implies.

4. One more point relates to customer data or content. It is wise to specify in your Agreement that the Customer provides data in the required templates and formats. If that is not the case, then formatting/conversion work must be paid for additionally. Once, we had to load a large amount of data into the system. The Customer provided the data in a non-classified, loose, raw format. As a result, a large amount of labor was spent on cleansing the provided data. That wasn't originally included in the estimate calculations and resulted in extra time and cost.

5. This clause is likely to be found in most contract templates; however, few people pay attention to it, sometimes letting deadlines slip. It pays to specify the deadlines or the number of iterations for approval. Any approval delays caused by the Customer should be recorded. By recording these delays, you can win some extra time to adjust the deadlines if needed.

6. When it comes to change requests, it helps to cap your maximum effort in the Agreement - for example, stating that changes requiring architecture adjustments of, say, more than 10% are provided for an additional fee. Most Agreement templates have this clause stated in one way or another, though.

7. It goes without saying; however, I spell it out just in case. You must keep all communication with the Customer until project completion and beyond the warranty period. Always take note and record any additional work outside the Agreement terms needed to complete the project - at least the time spent, reason, resources taken, etc.


Testing and Acceptance

1. All the acceptance criteria should be clear and known well in advance. 

2. When it comes to testing, the amount of effort should also be defined and signed well in advance. At least, types of tests, their order, and required resources from both sides.

3. It is often the case that testing is due to start, but the required access is missing, or the testing environment is not ready or not even provided (I am writing here about things to be provided by the Client) - this should be recorded, as mentioned before.

4. There may be more unpredicted cases. Once, the Client demanded that most of their staff take part in the delivery testing, and the Agreement did not cover that case. The issue for us was that the Client's employees were in totally different time zones, which forced us to adjust. Had that been anticipated in the Agreement, a solution could have been negotiated rather than absorbing the surprise at the cost of overtime for our Dev and Ops teams.

The above are just a few of the project issues I remembered, but they were still worth sharing.

My speech proposal for SUGCON ANZ: Developers' guide to XM Cloud

Developers' guide to XM Cloud

Over the last few months, we've heard lots of insights about the newest cloud product from Sitecore - XM Cloud. Many developers have wondered how their scope and responsibilities would change, how they would work with this new SaaS solution, or whether they would even become redundant.

There is nothing to worry about! This session answers most of these questions and, in fact, serves as the most crucial developers' guide to XM Cloud, as of now. It explains the caveats of the changes in the content lifecycle and local development routine, introduces the new Headless SXA for XM Cloud, and covers new options for personalization. It will also highlight changes in the security model and site search, and give the best advice on legacy strategies and content migration. Finally, some practical experience with Headstart for XM Cloud and the new deployment model, to bring it all live!

Getting started
  •     why SaaS cloud? what is the change?
  •     a brief overview of XM Cloud for developers    
Familiar tools that remain with you
  •     review of the process and deployment tools available to a developer
  •     local development for XM Cloud in containers
  •     customizing pipelines with XM Cloud
  •     leveraging SPE functions
  •     Sitecore CLI becomes "The Tool"
Editing Experience and Content Considerations:
  •     using Site Builder
  •     dealing with Pages & Components
  •     extensions catalog
  •     diversity of datasources: where can my content reside?
  •     migrating content from legacy Sitecore platforms
Changes in the security model
  •     Sitecore Unified Identity
  •     integrating 3-rd party services
Changes in search
  • where's my Solr?
  • what are the options?
  • plugging an external search technology
Dealing with the legacy
  •     are my legacy sites still compatible with XM Cloud?
  •     migrating headless site from XP to XM Cloud guidance
  •     EDGE considerations
  •     is my legacy module for XP compatible with XM Cloud?

SXA for XM Cloud
  •     new old Headless SXA - what's the difference
  •     new old rendering variants
  •     can we use headless forms on XM Cloud?
  •     a bare minimum to build Headless SXA site for Next.js
Hands-On
  •     starter kits available for you straight away
  •     leveraging Headstart basic starter kit foundation built for XM Cloud
  •     make your own module compatible with XM Cloud
Personalization
  •     are the built-in rules enough to go?
  •     two ways of leveraging CDP/Personalize for a better experience
Deploying into XM Cloud
  •     Single-location? Will that affect my GEO-distributed authors team?
  •     Understanding terminology: deployment, project, environment
  •     Understanding Build and Deployment Service (how to trigger and its lifecycle)
  •     CLI along with DevEx plugins
  •     GUI-powered Deploy App tool
  •     auto deployments from connected GitHub
It looks to me like an excellent topic, shining a spotlight on the new Sitecore SaaS-based platform. Keep your fingers crossed!

Infrastructure-as-Code: best practices you have to comply with

Infrastructure as Code (IaC) is an approach that involves describing infrastructure as code and then applying that code to make the necessary changes. IaC does not dictate how exactly to write the code; it just provides tools. Good examples are Terraform, Ansible, and Kubernetes itself, where you don't say what to do step by step - rather, you declare what state you want your infrastructure to reach.

Keep the infrastructure code readable. Your colleagues should be able to easily understand it and, if necessary, extend or test it. Although it looks like an obvious point, it is quite often forgotten, resulting in "write-only code" - code that can only be written, but cannot be read. Even its author is unlikely to be able to understand what he wrote and figure out how it all works a few days afterward.

An example of a good practice is keeping all variables in a separate file. This is convenient because you do not have to search for them throughout the code. Just open the file and immediately get what you need.
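In Terraform that would typically be a dedicated variables file; the same idea in a hypothetical TypeScript-flavored sketch (all names below are made up):

```typescript
// All tunable values live in one place (the names below are hypothetical).
// In Terraform this would be a variables.tf file; the idea is the same.
const config = {
  region: "westeurope",
  vmSize: "Standard_D2s_v3",
  instanceCount: 2,
} as const;

// The rest of the code reads from config instead of hard-coding values:
function describeDeployment(): string {
  return `${config.instanceCount} x ${config.vmSize} in ${config.region}`;
}

console.log(describeDeployment()); // "2 x Standard_D2s_v3 in westeurope"
```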


Adhere to a certain style of writing code. As a good example, you may want to keep the code line length between 80 and 120 characters. If the lines are very long, the editor starts wrapping them. Line breaks destroy the overall view and interfere with the understanding of the code. One has to spend a lot of time just figuring out where a line starts and where it ends.

It's nice to have the coding style check automated, at least by using the CI/CD pipeline for this. Such a pipeline could have a lint step: a process of static analysis of what is written, helping to identify potential problems before the code is applied.
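As a toy illustration of what a single lint rule does (a hand-rolled sketch, not any real linter), here is a check for the line-length rule mentioned above:

```typescript
// Hand-rolled sketch of one lint rule: report the numbers of lines that
// exceed the maximum allowed length (real linters bundle many such rules).
function longLines(source: string, max = 120): number[] {
  return source
    .split("\n")
    .map((line, i) => (line.length > max ? i + 1 : -1))
    .filter((n) => n !== -1);
}

const sample = "short line\n" + "x".repeat(130) + "\nanother short line";
console.log(longLines(sample)); // [ 2 ]
```

A CI pipeline would run checks like this on every push and fail the build before the code ever reaches the infrastructure.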


Utilize git repositories the same way developers do. By that I mean creating new branches, linking branches to tasks, reviewing what has already been written, sending Pull Requests before making changes, etc.

As a solo maintainer, you may find the listed actions redundant; it is a common practice for people to just come and start committing. However, even in a small team it can be difficult to understand who made some correction, when, and why. As the project grows, the lack of such practices will increasingly obscure what is happening and mess up the work. Therefore, it is worth investing some time into adopting the development practices for working with repositories.


Infrastructure as Code tools are typically associated with DevOps. We know DevOps engineers as specialists who not only deal with maintenance but also help developers work: set up pipelines, automate test runs, etc. All of the above also applies to IaC.

In Infrastructure as Code, automation should be applied: lint rules, testing, automatic releases, etc. Having repositories with, let's say, Ansible or Terraform code that is rolled out manually (by an engineer starting a task by hand) is not much good. Firstly, it is difficult to track who launched it, why, and at what moment. Secondly, it is impossible to understand how it worked out and draw conclusions.

With everything kept in the repository and controlled by an automatic CI/CD pipeline, we can always see when the pipeline was launched and how it performed. We can also control the parallel execution of pipelines, identify the causes of failures, quickly find errors, and much more.

You can often hear from maintainers that they do not test the code at all, or just run it first somewhere on dev. That is not the best practice, because it gives no guarantee that dev matches prod. In the case of Ansible or other configuration tools, the typical "testing" routine looks something like this:

  • launched a test on dev;
  • rolled on dev, but crashed with an error;
  • fixed this error;
  • once again, the test was not re-run, because dev is already in the state they were trying to bring it to.

It seems that the error has been corrected, and you can roll on prod. What will happen to prod? It is always a matter of luck: hit or miss. If somewhere in the middle something fails again, the error gets corrected and everything is restarted.

But infrastructure code can and should be tested. At the same time, even if specialists know about different testing methods, they still cannot apply them. The reason is that Ansible roles or Terraform files are written without any initial focus on the fact that they will need to be tested somehow.

In an ideal world, at the moment of writing code the developer is aware of what (else) needs to be tested. Accordingly, before starting to write the code, the developer plans how to test it, an approach commonly known as TDD. Untested code is low-quality code.

Exactly the same applies to infrastructure code: once written, you should be able to test it. Decent testing allows you to reduce the number of errors and makes life easier for the colleagues who will later finalize your Ansible roles or Terraform files.
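One of the simplest automated checks for infrastructure code is idempotency: applying the same desired state twice should report zero changes the second time. A toy sketch of that check (the `apply` function is a made-up stand-in, not a real tool's API):

```typescript
// Idempotency sketch: "apply" is a made-up stand-in for a config tool.
// Applying the same desired state twice must report zero changes on the
// second run - a cheap automated check for infrastructure code.
type Resources = Record<string, string>;

function apply(
  desired: Resources,
  actual: Resources
): { state: Resources; changes: number } {
  const state: Resources = { ...actual };
  let changes = 0;
  for (const [key, value] of Object.entries(desired)) {
    if (state[key] !== value) {
      state[key] = value;
      changes++;
    }
  }
  return { state, changes };
}

const desired = { nginx: "installed", firewall: "enabled" };
const first = apply(desired, {});
const second = apply(desired, first.state);
console.log(first.changes, second.changes); // 2 0
```

Ansible's "changed=0" on a repeated run expresses the same property; a pipeline can fail the build whenever a second apply still reports changes.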


A few words about automation. A common situation when working with Ansible is that even if something could be tested, there is no automation for it. Usually, this is the case when someone creates a virtual machine, takes some role written by colleagues, and launches it. Afterward, that person realizes the need to add certain new things to it, appends them, and launches the role again on the virtual machine. Then he realizes that even more changes are required, and also that the current virtual machine has already been brought to some kind of state, so it needs to be killed, a new virtual machine instantiated, and the role rolled over it. In case something does not work, this algorithm has to be repeated until all errors are eliminated.

Usually, the human factor comes into play, and after the N-th repetition, one becomes too lazy to delete the VM and re-create it yet again. Once everything seems to work exactly as it should (this time), one freezes the changes and rolls them into the prod environment. But the reality is that errors can still occur, and that is why automation is needed. When everything works through automated pipelines and Pull Requests are used, bugs get identified faster and are prevented from re-appearing.

Sitecore Edge and XM Cloud - explain it to me as if I was 5

Explain it to me as if I was 5 years old.

Well, I am not sure this could be explained to a 5-year-old, but I will instead explain it as if you were not around for the changes of, let's say, the past 5 years. There is a lot to go through. Before explaining the most interesting concepts, like XM Cloud and Sitecore Edge, I need to briefly touch on some terminology they rely on.


Headless

Previously you used Sitecore to render HTML using ASP.NET MVC. All of that happened server-side: Sitecore pulled up the data and your controllers built views with that content, with the resulting combined HTML sent back to the calling browser by a CD server. That meant that if you needed just raw content, or data not wrapped in HTML, the only way was to set up a duplicating WebAPI, which could be clumsy at addressing the correct data. Or it could be too verbose, returning much more data than you need. In any case: too exhausting!

So the logical question comes up: why not return the raw data universally through an API? It could then be consumed by various callers like mobiles or other systems, even some content aggregators, not just browsers (that is what is called "omnichannel"). This is why the approach is called "headless": there is no HTML (or any other "head") returned along with your data.


Rendering Host

When it comes to browsers that still need HTML, it makes sense to merge the content with HTML somewhere later in the request lifecycle, after it leaves the universal platform endpoint that you still have on your CD. A webserver is still required to serve all the web requests. It receives a request, pulls the required raw data from the universal endpoint, and then renders the output HTML with that data. This is why such a webserver is known as a "rendering host": we have now clearly separated serving the actual raw data from rendering the HTML returned to the browser. Previously both steps were done at a single point, on the CD.
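A minimal sketch of a rendering host in TypeScript (the `fetchLayout` helper and its fields are hypothetical; a real host would make an HTTP call to the universal endpoint instead of returning a stub):

```typescript
// Minimal rendering-host sketch: pull raw JSON, then merge it into HTML.
// fetchLayout and its fields are hypothetical stand-ins for a real call.
type PageData = { title: string; body: string };

async function fetchLayout(route: string): Promise<PageData> {
  // real life: return (await fetch(`${endpoint}?route=${route}`)).json()
  return { title: "Home", body: "Welcome!" };
}

async function renderPage(route: string): Promise<string> {
  const data = await fetchLayout(route);
  return `<html><head><title>${data.title}</title></head>` +
    `<body><h1>${data.title}</h1><p>${data.body}</p></body></html>`;
}

renderPage("/").then((html) => console.log(html));
```

Next.js with JSS plays exactly this role in a headless Sitecore setup: it is the webserver that turns raw layout data into the HTML a browser sees.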


GraphQL

Having read the above, you could think that serving all the content through a WebAPI would be some sort of overkill, which is especially valid for large and complicated data. Adequate caching needs even more consideration. Even with a headless approach, imagine pulling a large list of books stored in some database, each referencing its author by an authorId field.

So you either do lots of JOIN-like operations and expose lots of custom API endpoints to fit the data the way your client needs it, or pull all the data from the database, cache it somewhere in memory, and keep "merging" books to authors on the fly (an in-memory JOIN per request). Neither is a nice solution. In the case of really large data, there won't be an elegant solution at all.
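The per-request in-memory JOIN described above might look like this (all data and names are made up for illustration):

```typescript
// The in-memory JOIN from the text: merge books with their authors by
// authorId. All data here is made up for illustration.
type Author = { id: number; name: string };
type Book = { title: string; authorId: number };

const authors: Author[] = [
  { id: 1, name: "Ann" },
  { id: 2, name: "Bob" },
];
const books: Book[] = [
  { title: "Headless 101", authorId: 1 },
  { title: "GraphQL Deep Dive", authorId: 2 },
];

// Index the authors once, then merge on the fly for every request:
const byId = new Map(authors.map((a) => [a.id, a]));
const merged = books.map((b) => ({
  title: b.title,
  author: byId.get(b.authorId)?.name,
}));

console.log(merged);
// [ { title: 'Headless 101', author: 'Ann' },
//   { title: 'GraphQL Deep Dive', author: 'Bob' } ]
```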

So there was a clear need for some sort of flexibility, and that flexibility should be requested by the client application, addressing its immediate need for data. Moreover, clients often want to receive a specific set of data and nothing beyond what is being requested: mobile apps typically operate over expensive and potentially slow mobile connections, compared to the superfast inter-datacenter networks between CDs and rendering hosts. Also, headless CDs always return meaningful and structured data of certain type(s), which means it can be strongly typed. And where there are several types, those types can relate to each other. We clearly need a schema for the data.

That is how GraphQL was invented to address all the above. Instead of having lots of API endpoints, we now have a universal endpoint that serves all our data needs in a single request. It provides a schema of all the data types it can return. So now it is the client that defines what type(s) of data to request, how those types relate together, and the amount of data it needs: not more than it should consume. Another benefit of a predefined schema is that, knowing it in advance, writing code for client apps is quicker thanks to autocompletion, likely provided by your IDE. GraphQL also respects primitive types, supporting all the relevant operations (comparison, orderBy, etc.).
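As an illustration of the client shaping its own response (the field names are hypothetical, not a real schema), a GraphQL request is just a POST with a query inside a JSON body:

```typescript
// The client names exactly the fields it needs, nothing more. The field
// names below are hypothetical, not taken from a real Sitecore schema.
const query = `
  query {
    books {
      title
      author { name }
    }
  }
`;

// A GraphQL request is simply a POST with the query in a JSON body:
const payload = JSON.stringify({ query });
// real life: await fetch(endpoint, { method: "POST", body: payload, ... })

console.log(payload.includes("author { name }")); // true
```

Note that the query above asks for the title and the author's name only; whatever other fields books carry on the server never travel over the wire.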


Sitecore Edge

Previously, with XP, you had a complex setup, the most important parts of which were the CM and CD instances fed by their corresponding databases, commonly known as master and web. Editors logged into CM, created some content, and published it from master to the web database to be used by CD.

Now imagine you only keep the CM part from the above example. When you publish, it publishes "into a cloud". By "cloud" is meant a globally distributed database with a CDN for media, along with an API (GraphQL) to expose the content to your front-end.

In fact, CM is not the only source from which content can reach Edge and be served from it: Content Hub is another tool that can feed Edge, performing like XM does.

Previously you had a CD instance with a site deployed on it that consumed data from a web database; now you have neither of those, nor does Sitecore provide them for you. That means you should build a front-end site that consumes data from the given GraphQL endpoint. That is what is called headless, so you could use JSS with or without Next.js, or ASP.NET Core renderings. Or anything else: any front-end of your choice, however with more effort. Or it could be not a website at all, but a smart device consuming your data; the choice is unlimited. Effectively, we've got something like CD-data-as-a-service, provided, maintained, and geo-scaled by Sitecore.
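Here is a hedged sketch of how a front-end might query Edge. The endpoint URL, the `sc_apikey` header, and the query shape are assumptions based on public Experience Edge documentation; verify them against the docs for your tenant. The code only builds the request and does not send it:

```typescript
// Building (not sending) a request to Experience Edge. The endpoint URL
// and the sc_apikey header are assumptions - check the Edge docs.
const endpoint = "https://edge.sitecorecloud.io/api/graphql/v1"; // assumed
const apiKey = "<your delivery API key>";

const query = `
  query {
    item(path: "/sitecore/content/Home", language: "en") {
      name
    }
  }
`;

const request = {
  method: "POST",
  headers: { "Content-Type": "application/json", sc_apikey: apiKey },
  body: JSON.stringify({ query }),
};
// real life: const data = await (await fetch(endpoint, request)).json();

console.log(request.method, endpoint);
```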


XM Cloud

From the previous explanation you've learned that Experience Edge is "when we remove the CD instance and replace the web database with a cloud service". Now we want to do exactly the same with XM. Provided as a service, it always runs the latest version, maintained and scaled by the vendor. Please welcome XM Cloud, and let's decouple all-the-things!

Before going ahead, let's answer what a typical XM was in Sitecore as we knew it, and what it was expected to do:

  • create, edit and store content
  • set up layout and presentation for a page
  • apply personalization
  • publishing to CD
  • lots of other minor things

Publishing has already been optionally decoupled from XM in the form of Sitecore Publishing Service. It works as a standalone web app or an individual container. Its only duty is copying the required content from CM to CD, and it does that perfectly well and fast.

Another thing that could be decoupled is the content itself. Previously it was stored in the CM database in the form of the Sitecore item abstraction. What if we could have something like Content-as-a-Service, where the data could come from any source at all that exposes it through GraphQL: any other headless CMS, or professional platforms such as Content Hub? That is very much a "composable" look and feel to me! And then it comes to total flexibility: after setting up the data endpoint, authors could benefit from autocomplete suggestions coming from the GraphQL schema when wiring up their components.

Personalization also comes as a composable SaaS service: Personalize. Even without it, XM Cloud will offer you some basic personalization options.

Speaking about layout, it could also be decoupled. We already have Horizon as a standalone webapp/container, so whatever its cloud reincarnation turns out to be (i.e. Symphony?), it gets decoupled from the XM engine. The good old Content Editor will be there anyway, but its ability to edit content is limited to Sitecore items from the master database, unlike Symphony, which is universal.


Sitecore Managed Cloud

Question: so is XM Cloud something similar to Managed Cloud, and what is the difference between those?

No, not at all. Sitecore Managed Cloud hosts, monitors, manages, and maintains your installation of the platform on your behalf. They provide the infrastructure and the default technology stack that suits it in the best way. Previously you had to take care of the infrastructure yourself, which took lots of effort, and that is the main thing that changes with Managed Cloud. Managed Cloud supports XM, XP, and XC (the latter on a premium tier, however).

XM Cloud, on the opposite side, is a totally SaaS offering. It will be an important part of the Sitecore Composable DXP, where you will architect and "compose" an analog of what XP used to be from various other Sitecore products (mostly SaaS, but not mandatory).


That is what XM Cloud is expected to be, in the composable spirit of a modern DXP for 2022. Hope we all enjoy it!

OrderCloud Certification - tips on preparation and successful pass

Finally, I am happy to complete Sitecore learning and certification by successfully passing the OrderCloud Certification exam. In this post, I will try to share some thoughts and insights on how you could also progress with it.

This exam was relatively easy, especially compared with other Sitecore certifications. I have to note that I had read almost all the documentation available, taken the eLearning course, and made notes on every minor important point, so I did not find it tough.

I managed to score 90%, being hesitant on 6 questions out of a total of 30; those 6 questions make exactly 20% of the exam, so even if all of them had gone wrong, I would have landed right at the required 80% "pass" level. Basically, in that case, gambling out at least one question takes the score above the pass level. I want to admit that the test questions seem very reasonably chosen and mature, as the product itself is.

In order to succeed, you'll definitely need to progress through the following materials:


The competencies under test are:
  • OrderCloud Architecture and Conventions
  • Integration
  • User Management and Access Control
  • Environments
  • Product Management
  • Order and Fulfillment Management
  • Troubleshooting

Please note: the eLearning is really good but does not cover all the competencies. It covers the Security and Product areas really well but has nothing about Order and Fulfillment (at the moment of writing this blog; there is a 'coming soon' promise, however). That means you must build your own learning path for the rest.


Exam in numbers:
  • 60 minutes
  • 30 questions
  • 80% to pass
  • Costs $350 (some categories of test takers may qualify for a discount)

Today, while my memory is still fresh, I am trying to recall some of the questions I personally had on the exam and share some thoughts on what to emphasize. Without going into nuances, you must definitely know:
  • features of OrderCloud architecture
  • environments and their purpose
  • the UI and how to switch context between marketplaces
  • types of webhooks and their purpose
  • products and variants
  • price schedules
  • order flows and their statuses
  • general error codes returned by OrderCloud and their meaning
  • querying and filtering through API
  • in general, lots of the API, which is fully available in the API Reference

While doing initial due diligence on this technology and diving into it, I became full of excitement about OrderCloud from what I saw so far. To me, it feels like a very mature product (it is, in fact), with decent documentation, great training, and a very well-architected design. It is a proper MACH architecture which you can fit into pretty much anything. You can make a client storefront with zero backend coding, purely front-end!
From a feature-set point of view, it is also unbelievably flexible for both B2B and B2C.

I decided to invest more time into learning OrderCloud and plan to make this platform one of my main technologies for the upcoming year, or at least part of the Sitecore triangle: XM Cloud - OrderCloud - Content Hub.

I created a Sitecore OrderCloud Telegram channel where I share everything related to this platform. If you're using the Telegram messenger, you'll definitely want to join by following this link. Otherwise, it is still possible to read it in a Twitter-like format in the browser using another link.