Experience Sitecore ! | May 2022

More than 200 articles about the best DXP by Martin Miles

Infrastructure-as-Code: best practices you have to comply with

Infrastructure as Code (IaC) is an approach where you describe infrastructure as code and then apply that code to make the necessary changes. IaC does not dictate exactly how to write that code; it just provides the tools. Good examples are Terraform, Ansible and Kubernetes itself, where you don't spell out what to do - you declare what state you want your infrastructure to end up in.
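
To make the declarative idea concrete, here is a minimal sketch in TypeScript. It is not tied to any real tool - the resource names and the plan() helper are made up purely for illustration: you describe the state you want, and the tool compares it with what currently exists and works out the actions by itself.

```typescript
// A made-up illustration of the declarative idea behind IaC tools:
// you describe the desired state; the tool diffs it against reality and
// decides which actions to take. Terraform, Ansible and Kubernetes all
// follow this reconcile-to-desired-state pattern internally.

type DesiredState = { virtualMachines: string[] };

type Action =
  | { kind: "create"; vm: string }
  | { kind: "delete"; vm: string };

// Compute the actions needed to move from the current state to the desired one.
export function plan(current: DesiredState, desired: DesiredState): Action[] {
  const toCreate = desired.virtualMachines
    .filter((vm) => !current.virtualMachines.includes(vm))
    .map((vm): Action => ({ kind: "create", vm }));

  const toDelete = current.virtualMachines
    .filter((vm) => !desired.virtualMachines.includes(vm))
    .map((vm): Action => ({ kind: "delete", vm }));

  return [...toCreate, ...toDelete];
}

// You only declare WHAT you want...
const desired: DesiredState = { virtualMachines: ["web-01", "web-02"] };
const current: DesiredState = { virtualMachines: ["web-01", "web-legacy"] };

// ...and the tool figures out HOW to get there.
console.log(plan(current, desired));
// [ { kind: "create", vm: "web-02" }, { kind: "delete", vm: "web-legacy" } ]
```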

Keep the infrastructure code readable. Your colleagues should be able to understand it easily and, if necessary, extend or test it. This looks like an obvious point, yet it is quite often forgotten, resulting in "write-only code" - code that can only be written, but not read. Even its author is unlikely to understand what he wrote and figure out how it all works just a few days later.

An example of a good practice is keeping all variables in a separate file. This is convenient because you don't have to search for them throughout the code: just open the file and immediately get what you need.
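
In Terraform that usually means a dedicated variables.tf (plus .tfvars files); in Ansible, group_vars and host_vars. The same idea expressed in TypeScript, just as an illustration - file names and values below are made up:

```typescript
// --- variables.ts --- everything tunable lives in one place...
export const variables = {
  environment: "dev",
  region: "westeurope",
  vmCount: 2,
  vmSize: "Standard_D2s_v3",
};

// --- deploy.ts --- ...and the rest of the code only imports it.
import { variables } from "./variables";

console.log(
  `Deploying ${variables.vmCount} x ${variables.vmSize} to ${variables.region} (${variables.environment})`
);
```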


Adhere to a certain code style. As a good example, you may want to keep the line length between 80 and 120 characters. If lines are very long, the editor starts wrapping them; line breaks destroy the overall view and get in the way of understanding the code. One has to spend a lot of time just figuring out where a line starts and where it ends.

It's nice to have the code style check automated, at least by using the CI/CD pipeline for it. Such a pipeline could have a lint step: a static analysis of what is written, helping to identify potential problems before the code is applied.


Use git repositories the same way developers do. By that I mean creating new branches, linking branches to tasks, reviewing what has already been written, sending pull requests before merging changes, etc.

To a solo maintainer the listed actions may seem redundant - it is common practice for people to just come and start committing. However, even with a small team it can be difficult to understand who made a change, when, and why. As the project grows, the lack of such practices increasingly messes up the work and makes it harder to understand what is happening. Therefore, it is worth investing some time into adopting these development practices for working with repositories.


Infrastructure as Code tools are typically associated with DevOps. DevOps specialists not only deal with maintenance but also help developers work: they set up pipelines, automate test runs, etc. - and all of the above also applies to IaC.

In Infrastructure as Code, automation should be applied: lint rules, testing, automatic releases, etc. Having repositories with, let's say, Ansible or Terraform, but rolling them out manually (by an engineer starting a task by hand) is not that good. Firstly, it is difficult to track who launched it, why, and at what moment. Secondly, it is impossible to understand how it went and draw conclusions.

With everything kept in the repository and controlled by an automatic CI/CD pipeline, we can always see when the pipeline was launched and how it performed. We can also control the parallel execution of pipelines, identify the causes of failures, quickly find errors, and much more.

You can often hear from maintainers that they do not test the code at all, or just run it somewhere on dev first. That's not the best practice, because it gives no guarantee that dev matches prod. In the case of Ansible or other configuration tools, the standard "testing" often looks like this:

  • a test run is launched on dev;
  • it rolls out on dev, but crashes with an error;
  • the error gets fixed;
  • the test is not run again, because dev is already in the state it was being brought to.

It seems that the error has been corrected and you can roll it onto prod. What will happen to prod? It is always a matter of luck - hit or miss. If something fails again somewhere in the middle, the error will be corrected and everything restarted.

But infrastructure code can and should be tested. At the same time, even if specialists know about the different testing methods, they still cannot use them. The reason is that Ansible roles or Terraform files are written without any initial focus on the fact that they will need to be tested.

In an ideal world, at the moment of writing code the developer is aware of what (else) needs to be tested. Accordingly, before starting to write the code, the developer plans how to test it - commonly known as TDD (test-driven development). Untested code is low-quality code.

Exactly the same applies to infrastructure code: once written, it should be possible to test it. Decent testing reduces the number of errors and makes life easier for the colleagues who will later finalize your Ansible roles or Terraform files.
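
As a tiny illustration of what "testable by design" can look like: the plan() helper from the earlier sketch is a pure function, so it can be verified without touching any real infrastructure. The sketch below uses Node's built-in test runner and assumes the earlier code was saved as plan.ts - both the file layout and the scenario are made up.

```typescript
// plan.test.ts - a unit test for the pure plan() function from the earlier sketch.
// No VMs are created or destroyed: we only check the computed actions.
import { test } from "node:test";
import assert from "node:assert/strict";
import { plan } from "./plan"; // assumes the earlier sketch was saved as plan.ts

test("plans creation of missing VMs and deletion of extra ones", () => {
  const current = { virtualMachines: ["web-01", "web-legacy"] };
  const desired = { virtualMachines: ["web-01", "web-02"] };

  assert.deepEqual(plan(current, desired), [
    { kind: "create", vm: "web-02" },
    { kind: "delete", vm: "web-legacy" },
  ]);
});
```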


A few words about automation. A common situation when working with Ansible is that even where something could be tested, there is no automation for it. Usually it goes like this: someone creates a virtual machine, takes a role written by colleagues, and launches it. Afterward that person realizes they need to add certain new things to it - appends them and launches it on the virtual machine again. Then they realize that even more changes are required, and also that the current virtual machine has already been brought to some kind of state, so it needs to be killed, a new virtual machine instantiated, and the role rolled over it. If something still does not work, this algorithm has to be repeated until all errors are eliminated.

Usually the human factor comes into play, and after the N-th repetition one becomes too lazy to delete the VM and re-create it again. Once everything seems to work exactly as it should (this time), it is tempting to freeze the changes and roll them into the prod environment. But in reality errors can still occur, and that is why automation is needed. When everything runs through automated pipelines and pull requests are used, it helps to identify bugs faster and prevent them from reappearing.

Sitecore Edge and XM Cloud - explain it to me like I'm 5

Explain it to me like I'm 5 years old.

Well, I'm not sure this can be explained to a 5-year-old, but I will instead explain it as if you had not been around the changes for, let's say, the past 5 years. There is a lot to go through. Before explaining the most interesting concepts, like XM Cloud and Sitecore Edge, I need to briefly touch on some terminology they rely on.


Headless

Previously you used Sitecore to render HTML with ASP.NET MVC. Everything happened server-side: Sitecore pulled up the data, your controllers built views with that content, and the resulting combined HTML was sent back to the calling browser by a CD server. That meant that if you needed just raw content or data not wrapped in HTML, the only way was to set up a duplicating WebAPI, which could be clumsy at addressing the correct data, or too verbose, returning much more data than you need. In any case - too much effort!

So the logical question comes up: why not return the raw data universally through an API? It could then be consumed by various callers - mobiles, other systems, even content aggregators, not just browsers (that is what is called "omnichannel"). This is why the approach is called "headless" - there is no HTML (or any other "head") returned along with your data.


Rendering Host

When it comes to browsers that still need HTML: it makes sense to merge the content with HTML somewhere later in the request lifecycle - after it has left the universal platform endpoint that you still have on your CD. There is still a web server required to serve all the web requests. It receives a request, pulls the required raw data from the universal endpoint, and then renders the output HTML with that data. This is why such a web server is also known as a "rendering host" - serving the actual raw data is now clearly separated from rendering the HTML returned to the browser. Previously both steps were done at a single point, on the CD.
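
A very rough sketch of what a rendering host does, in TypeScript. The endpoint URL, route and field names are made up; in real life this role is usually played by a Next.js/JSS or ASP.NET Core rendering application:

```typescript
// A deliberately simplified rendering host: receive a request, pull raw
// content from the universal endpoint, merge it into HTML, return the page.
import { createServer } from "node:http";

const CONTENT_API = "https://content.example.com/api"; // made-up endpoint

createServer(async (req, res) => {
  // 1. Ask the headless endpoint for raw data describing the requested page.
  const response = await fetch(
    `${CONTENT_API}/page?path=${encodeURIComponent(req.url ?? "/")}`
  );
  const page = (await response.json()) as { title: string; body: string };

  // 2. Render HTML from that data - this is the "head" that was taken away
  //    from the CD server and moved here.
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(
    `<html><head><title>${page.title}</title></head><body>${page.body}</body></html>`
  );
}).listen(3000);
```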


GraphQL

Having read the above, you might think that serving all the content through a WebAPI would be some sort of overkill - especially for large and complicated data - and adequate caching also needs to be considered. Even with a headless approach, imagine pulling a large list of books stored in some database, each referencing its author by an authorId field.

So you either do lots of JOIN-like operations and expose lots of custom API endpoints to shape the data the way each client needs it, or you pull all the data from the database, cache it somewhere in memory, and keep "merging" books to authors on the fly (an in-memory JOIN per request). Neither is a nice solution, and in the case of really large data there won't be an elegant one.

So there was a clear need for some sort of flexibility, and that flexibility should be driven by the client application, addressing its immediate need for data. Moreover, clients often want to receive a specific set of data and nothing beyond what was requested - mobile apps typically operate over expensive and potentially slow mobile connections, compared to the superfast inter-datacenter networks between CD and rendering hosts. Also, headless CDs always return meaningful, structured data of certain type(s), which means it can be strongly typed. And where there are several types, they can relate to each other. We clearly need a schema for the data.

That is how GraphQL came to address all of the above. Instead of having lots of API endpoints, we now have a universal endpoint serving all our data needs in a single request. It provides a schema of all the data types it can return. So now it is the client that defines which type(s) of data to request, how those relate together, and the amount of data it needs - no more than it should consume. Another benefit of a predefined schema is that, knowing it in advance, writing code for client apps is quicker thanks to autocomplete, likely provided by your IDE. It also respects primitive types, supporting all the relevant operations (comparison, orderBy, etc.).
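
To illustrate with the books-and-authors example from above: the client sends a single query to the single GraphQL endpoint and gets back exactly the shape it asked for - the nested author comes "joined" for free, and no extra fields travel over the wire. The endpoint URL and field names here are invented for the illustration:

```typescript
// One request, one endpoint, exactly the fields the client asked for.
const query = `
  query RecentBooks {
    books(first: 10) {
      title
      author {      # the server resolves authorId -> author for us
        name
      }
    }
  }
`;

const response = await fetch("https://api.example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

const { data } = await response.json();
console.log(data.books); // [{ title: "...", author: { name: "..." } }, ...]
```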


Sitecore Edge

Previously, with XP, you had a complex setup, the most important parts of which were CM and CD instances fed by the corresponding databases - commonly known as master and web. Editors logged into CM, created some content, and published it from master to the web database to be used by CD.

Now imagine you only keep the CM part from the above example. When you publish, it publishes "into a cloud". By "cloud" I mean a globally distributed database with a CDN for media, along with an API (GraphQL) to expose the content to your front-end.

In fact, content can reach Edge and be served from it not only from CM - Content Hub is another tool that can do it, performing the same way XM does.

Previously you had a CD instance with a site deployed on it that consumed data from a web database; now you have neither of those, nor does Sitecore provide them for you. That means you should build a front-end site that consumes data from the given GraphQL endpoint. That is what is called headless, so you could use JSS with or without Next.js, or ASP.NET Core renderings. Or anything else - any front-end of your choice, just with more effort. Or it could be not a website at all, but a smart device consuming your data - the choice is unlimited. Effectively, we get something like CD-data-as-a-service, provided, maintained, and geo-scaled by Sitecore.
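
In practice the front-end talks to Edge through its GraphQL delivery endpoint. A hedged sketch follows - the endpoint URL, the sc_apikey header and the exact query shape should be taken from your own Edge connection details, the ones below are only indicative:

```typescript
// Fetching a published item from Experience Edge - roughly what a headless
// front-end (JSS/Next.js, ASP.NET Core renderings, a mobile app, ...) does.
// NOTE: endpoint, header name and query fields are illustrative; check the
// details of your own Edge tenant and its schema.
const EDGE_ENDPOINT = "https://edge.sitecorecloud.io/api/graphql/v1";
const EDGE_API_KEY = process.env.SITECORE_EDGE_API_KEY ?? "";

const query = `
  query HomePage {
    item(path: "/sitecore/content/MySite/Home", language: "en") {
      name
      field(name: "Title") {
        value
      }
    }
  }
`;

const response = await fetch(EDGE_ENDPOINT, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    sc_apikey: EDGE_API_KEY,
  },
  body: JSON.stringify({ query }),
});

console.log(await response.json());
```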


XM Cloud

From the previous explanation you've learned that Experience Edge is "when we remove the CD instance and replace the web database with a cloud service". Now we want to do exactly the same with XM. Provided as a service, it always runs the latest version, maintained and scaled by the vendor. Please welcome XM Cloud, and let's decouple all-the-things!

Before going ahead, let's recall what a typical XM was in Sitecore as we knew it, and what it was expected to do:

  • create, edit and store content
  • set up layout and presentation for a page
  • apply personalization
  • publish to CD
  • lots of other minor things

Publishing has already been optionally decoupled from XM in the form of Sitecore Publishing Service. It works as a standalone web app or an individual container. Its only duty is copying the required content from CM to CD, and it does that perfectly well and fast.

Another thing that could be decoupled is the content itself. Previously it was stored in the CM database in the form of the Sitecore item abstraction. What if we could have something like Content-as-a-Service, where the data could come from any source at all that exposes it through GraphQL - any other headless CMS or professional platforms, such as Content Hub? That is very much a "composable" look and feel to me! It also brings total flexibility: after setting up the data endpoint, authors could benefit from autocomplete suggestions coming from the GraphQL schema when wiring up their components.

Personalization also comes as a composable SaaS service - Sitecore Personalize. Even without it, XM Cloud will offer you some basic personalization options.

Speaking of layout, it can also be decoupled. We already have Horizon as a standalone web app/container, so whatever its cloud reincarnation turns out to be (i.e. Symphony?) - it gets decoupled from the XM engine. The good old Content Editor will still be there anyway, but its ability to edit content is limited to Sitecore items from the master database, unlike Symphony, which is universal.


Sitecore Managed Cloud

Question: so is XM Cloud something similar to Managed Cloud, and what is the difference between them?

No, not at all. Sitecore Managed Cloud hosts, monitors, manages, and maintains your installation of the platform on your behalf. They provide the infrastructure and the default technology stack that suits it best. Previously you had to take care of the infrastructure yourself, which took a lot of effort, and that is the main thing that changes with Managed Cloud. Managed Cloud supports XM, XP and XC (however, on a premium tier).

XM Cloud, on the contrary, is a fully SaaS offering. It will be an important part of the Sitecore composable DXP, where you will architect and "compose" an analog of what used to be XP from various other Sitecore products (mostly SaaS, but not necessarily).


That is what XM Cloud is expected to be, in the composable spirit of a modern DXP for 2022. Hope we all enjoy it!