
Experience Sitecore !

More than 200 articles about the best DXP by Martin Miles


Merry Christmas and happy New Year!

Every year I create a special Christmas postcard to congratulate my readers on the oncoming new year, full of changes and opportunities. I wish you all the best in 2025!

My artwork from the past years:

2024 · 2023 · 2022 · 2021 · 2020 · 2019 · 2018 · 2017 · 2016

Reviewing my 2024 Sitecore MVP contributions

Sitecore Technology MVP: 2024 · 2023 · 2022 · 2021 · 2020 · 2019 · 2018 · 2017

The Sitecore MVP program is designed to recognize individuals who have demonstrated advanced knowledge of the Sitecore platform and a commitment to sharing knowledge and technical expertise with community partners, customers, and prospects over the past year. The program is open to anyone who is passionate about Sitecore and has a desire to contribute to the community.

Over the past application year, starting December 1st, 2023, I have been actively involved in the Sitecore community, contributing in a number of ways.

Sitecore Blogs 

  1. This year I have written 18 blog posts on the Perficient site on various topics related to Sitecore, including my Crash Course to Next.js with TypeScript and GraphQL, notable findings about XM Cloud and other composable products, best practices, tips and tricks, and case studies. Listing them all as bullets would make this post too long, so instead I leave a link to the entire list, shown in reverse chronological order.
  2. I have also been posting on my very own blog platform, which already contains more than 200 posts about Sitecore accumulated over the past years.
  3. I also occasionally create video recordings/walkthroughs and upload them to my YouTube channel.

Sitecore User Groups 

  1. Organized three Los Angeles Sitecore User Groups (#19, #20, and #21). This user group has ~480 members!
  2. Last fall I established and organized the most wanted user group of the year – the Sitecore Headless Development User Group, educating its 410+ members. This one is very special, since headless development has become the new normal for delivering sites with Sitecore, while many professionals feel left behind, unable to catch up with the fast-emerging tech. I made it my personal mission to run it twice per quarter, helping the community learn and grow “headlessly”. It became the most frequently run and the most attended/reviewed of all Sitecore user groups, with events organized over this year (#1 and #2) (#3, #4, #5, #6, #7, #8, #9, #10) along with event #11 scheduled for December 12th. All the recordings are publicly available on YouTube and are also referenced from the individual event pages.
  3. Presented my innovative approach to Content Migration for XM Cloud solutions.
  4. Another user group presentation covered all the new features in Next.js 15, its breaking API changes, and what it all means for Sitecore.

GitHub

  • The Sifon project remains maintained and receives new features; most recently, Sifon gained support for Sitecore 10.4 platforms.
  • I keep the Awesome Sitecore project up to date. This repository has plenty of stars on GitHub and is an integral part of the big Awesome Lists family; if you haven’t heard of Awesome Lists and their significance, I highly recommend reading these articles – the first and the second.
  • There are also a few less significant repositories among my contributions that are still meaningful and helpful.

Sitecore Mentor Program 

  • I took on two mentees in 2024, supported them over the course of the year, and delivered both of them full-scale XM Cloud training along with certification.
  • One of my past-year mentees was recognized as a Sitecore MVP in 2024, resulting from an Exclusive Mentorship Agreement – proof that my mentoring approach was successful.

MVP Program

  • I participate in most of the webinars and MVP Lunches (often in both time zones per event).
  • I think the MVP Summit is the best perk of the MVP Program, so I never miss it. This year I learned a lot and also provided feedback to the product teams, as usual.
  • I participate in several streams of the Early Access Program, sharing insights with the product team ahead of GA dates.
  • In the past, I have participated in a very honorable activity: helping to review first-time applicants for the MVP Program. This is the first line of evaluation, where we carefully match every first-time applicant against the high Sitecore MVP standards. This year I am taking part in the review as well.

Sitecore Learning

I have collaborated with the Sitecore Learning team for the past 2-3 years, and this year was no exception:

  • I was invited by Sitecore Learning to record a detailed review of a new feature – XM Cloud Forms Builder – for the Tips & Tricks series.

Sitecore Telegram 

  • I am making Telegram a premium-level channel for delivering Sitecore news and materials. Telegram has a unique set of features that no other software offers, and I leverage these advantages for my subscribers’ convenience.
  • Started in 2017 as a single channel, it has expanded rapidly and has now reached a milestone of 1,100 subscribers!
  • Growth did not stop there but escalated further as Sitecore went composable, with a dedicated channel for almost every composable product. Here they all are:

Support Tickets

  • CS0514702 (Content Hub)
  • CS0462816 (SPE for XM Cloud)
  • CS0518934 (Forms Builder)

Other Contributions

  • I created the Sitecore MVP section on Wikipedia, explaining the MVP Program, its significance for Sitecore, and the overall process of determining the winners.
  • I am very active on LinkedIn (7K+ followers) and Twitter aka X (~1.2K followers), with multiple posts per week, sometimes a few a day.
  • With my dedication to Sitecore’s new flagship product, XM Cloud, it was no wonder I launched a new XM Cloud Daily series of tips and tricks on social media (this actually started in December 2024, so it falls into the new application period).
  • That comes in addition to the existing Headless Tips & Tricks series on LinkedIn, where I share the insights and nuances of modern headless development with Sitecore.

The above is what I can recall of my annual contributions so far. I wish all worthy applicants success in joining this elite club in the coming year!

XM Cloud content migration: connecting external database

Historically, when performing content migration with Sitecore, we used to deal with database backups. In the modern SaaS world, we have the luxury of neither managing cloud database backups nor a corresponding UI for doing so. Therefore, we must find an alternative approach.

Technical Challenge

Let’s assume we have a legacy Sitecore website – in my case that was XP 9.3 – and we’ve been provided with only a master database backup holding all the content. The objective is to perform content migration from this master database into new and shiny XM Cloud environment(s).

Without having direct access to the cloud, we can only operate locally. In theory, there could be a few potential ways of doing this:

  1. Set up a legacy XP of the desired version with the legacy content database already attached/restored to it. Then try to attach (or restore) a vanilla XM Cloud database to a local SQL Server as a recipient database in order to perform content migration into it. Unfortunately, this approach would not work due to SQL Server version incompatibility between XM Cloud and XP 9.3. Even if that were possible, running XP 9.3 with the XM Cloud database won’t work, as XP 9.3 neither knows about the XM Cloud schema nor is capable of handling the Items as Resources feature, which was introduced later in XP 10.1. Therefore, this option is not possible.

  2. Can we go the other way around by using the old database along with XM Cloud? This is not documented, but let’s assess it:

    1. Definitely won’t work in the cloud since we’re not given any control of DBs and their maintenance or backups.

    2. In a local environment, XM Cloud only works in Docker containers and it is not possible to use it with an external SQL Server where we have a legacy database. But what if we try to plug that legacy database inside of the local SQL Container? Sadly, there are no documented ways of achieving that.

  3. Keep two independent instances side by side (legacy XP and XM Cloud in containers) and use an external tool to connect them both in order to migrate the content. In theory that is possible, but it carries a few drawbacks.
    1. The tool of choice is Razl, but this tool is not free, requires a paid license, and does not have a free trial to ever test this out.
    2. Connecting to a containerized environment may not be easy and requires some additional preparation.
    3. You may need a high-spec computer (or at least two mid-level machines connected to the same network) to have both instances running side by side.

After some consideration, the second approach seems reasonable to try, so let’s give it a chance and conduct a PoC.

Proof of Concept: local XM Cloud with external content database

Following the second approach, we’re going to try attaching the given external legacy database to XM Cloud running in a local containerized setup. That will allow using the built-in UI for mass-migrating the content between the databases (as pictured below), along with Sitecore PowerShell scripts for finalizing and fine-tuning the migrated content.

Control Panel

Step 1: Ensure the SQL Server port is externally exposed

We will connect SQL Server Management Studio from the host through a port of the SQL Server container that is exposed externally. Luckily, that has been done for us already; just make sure docker-compose has:

ports:
      - "14330:1433"

Step 2: Spin up the XM Cloud containers and confirm XM Cloud works fine for you

Nothing extraordinary here, as easy as running .\init.ps1 followed by .\up.ps1.

Step 3: Connect SQL Server Management Studio to the SQL Server running in a container

After you spin up the containers, run SQL Server Management Studio and connect to the SQL Server running in the SQL container through the exposed port 14330, as configured in step 1:

Connection parameters

Step 4: Restore the legacy database

If you have a Data-Tier “backpack” (.bacpac) file, you may want to take an extra step and convert it into a binary backup for the particular SQL Server version used by XM Cloud before restoring. This step is optional, but if you plan to restore the backup more than once (which is likely to happen), it makes sense to take a binary backup as soon as you restore the data-tier “backpack” for the first time. Data-tier backups restore much more slowly than binary ones, so that will definitely save time in the future.

Once connected, let’s enable contained database authentication. This step is mandatory; otherwise, it will not be possible to restore the database:

EXEC sys.sp_configure N'contained database authentication', N'1'
go
exec ('RECONFIGURE WITH OVERRIDE')
go

One more challenge ahead: when performing backup and restore operations, SQL Server shows a path local to the server engine, not to the host machine. That means our backup should exist “inside” the SQL container. Luckily, we have this covered as well. Make sure docker-compose.override.yml contains:

mssql:
  volumes:
    - type: bind
      source: .\docker\data\sql
      target: c:\data

That means one can place legacy database backups into the .\docker\data\sql folder of the host machine, and they will magically appear within the C:\data folder when using the SQL Server Management Studio database restore tool, which you can now perform.

Important! Restore the legacy database using the “magic name” format Sitecore.<DB_NAME_SUFFIX>; further below I will be using the value RR as DB_NAME_SUFFIX.

Once the restored database shows up in SQL Server Management Studio under the name Sitecore.RR, we need to plug it into the system. There is a naming convention hidden from our eyes within the CM container.

Step 5: Configure connection strings

Unlike in XM/XP, there is no documented way to plug in an external database. The way connection strings are mapped to the actual system is cumbersome: it relies on some “magic” hidden within the container itself and obfuscated from our eyes. It took an experimental approach to figure it out. Here are the steps to reproduce:

  • Add an environment variable to the docker-compose record for CM:

    • Sitecore_ConnectionStrings_RR: Data Source=${SQL_SERVER};Initial Catalog=${SQL_DATABASE_PREFIX}.RR;User ID=${SQL_SA_LOGIN};Password=${SQL_SA_PASSWORD}
  • Add a new connection string record. To do so, you’ll need to create a connection strings file within your customization project as .\src\platform\<SITENAME>\App_Config\ConnectionStrings.config, with the content of the connection strings file from the CM container plus one new record, along the lines of the example below.
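Since the exact record is not shown here, it should look something like the below (illustrative – the values mirror the environment variable above, with the password coming from your environment):

<add name="rr" connectionString="Data Source=mssql;Initial Catalog=Sitecore.RR;User ID=sa;Password=<SQL_SA_PASSWORD>" />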

Please note the difference in the suffix casing between the two records above; that is totally fine – the CM container still processes it correctly.

Step 6: Reinstantiating CM container

Simply restarting the CM container is not sufficient, and neither is killing/stopping it. You must remove the container and re-create it.

For example, the below command will not work for that purpose:

docker-compose restart cm

… nor will this one:

docker-compose kill cm

The reason is that CM will not re-read the environment variables from the docker-compose file upon restart. Do this instead:

docker-compose kill cm
docker-compose rm cm --force
docker-compose up cm -d

Step 7: Validating

  1. Inspecting the CM container for environment variables will show you the new connection string, as added:

    1. "Env": [
                  "Sitecore_ConnectionStrings_RR=Data Source=mssql;Initial Catalog=Sitecore.RR;User ID=sa;Password=6I7X5b0r2fbO2MQfwKH"
  2. Inspecting the connection strings config (located at C:\inetpub\wwwroot\App_Config\ConnectionStrings.config in the CM container) confirms it contains the newly added connection string.

Step 8: Register new database with XM Cloud

It can be done with the below config patch that does the job. Save it as docker\deploy\platform\App_Config\Include\ZZZ\z.rr.config for testing, and later do not forget to include it in the platform customization project so that it gets shipped with each deployment:
<?xml version="1.0" encoding="UTF-8"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
	<sitecore>
		<eventing defaultProvider="sitecore">
			<eventQueueProvider>
				<eventQueue name="rr" patch:after="eventQueue[@name='web']" type="Sitecore.Data.Eventing.$(database)EventQueue, Sitecore.Kernel">
					<param ref="dataApis/dataApi[@name='$(database)']" param1="$(name)" />
					<param ref="PropertyStoreProvider/store[@name='$(name)']" />
				</eventQueue>
			</eventQueueProvider>
		</eventing>
		<PropertyStoreProvider>
			<store name="rr" patch:after="store[@name='master']" prefix="rr" getValueWithoutPrefix="true" singleInstance="true" type="Sitecore.Data.Properties.$(database)PropertyStore, Sitecore.Kernel">
				<param ref="dataApis/dataApi[@name='$(database)']" param1="$(name)" />
				<param resolve="true" type="Sitecore.Abstractions.BaseEventManager, Sitecore.Kernel" />
				<param resolve="true" type="Sitecore.Abstractions.BaseCacheManager, Sitecore.Kernel" />
			</store>
		</PropertyStoreProvider>
		<databases>
			<database id="rr" patch:after="database[@id='master']" singleInstance="true" type="Sitecore.Data.DefaultDatabase, Sitecore.Kernel">
				<param desc="name">$(id)</param>
				<icon>Images/database_master.png</icon>
				<securityEnabled>true</securityEnabled>
				<dataProviders hint="list:AddDataProvider">
					<dataProvider ref="dataProviders/main" param1="$(id)">
						<disableGroup>publishing</disableGroup>
						<prefetch hint="raw:AddPrefetch">
							<sc.include file="/App_Config/Prefetch/Common.config" />
							<sc.include file="/App_Config/Prefetch/Webdb.config" />
						</prefetch>
					</dataProvider>
				</dataProviders>
				<!-- <proxiesEnabled>false</proxiesEnabled> -->
				<archives hint="raw:AddArchive">
					<archive name="archive" />
					<archive name="recyclebin" />
				</archives>
				<cacheSizes hint="setting">
					<data>100MB</data>
					<items>50MB</items>
					<paths>2500KB</paths>
					<itempaths>50MB</itempaths>
					<standardValues>2500KB</standardValues>
				</cacheSizes>
			</database>
		</databases>
	</sitecore>
</configuration>

Step 9: Enabling Sitecore PowerShell Extension

Next, we want to enable Sitecore PowerShell Extensions (SPE), if that is not yet done. You won’t be able to migrate the content using SPE without performing this step.

<?xml version="1.0" encoding="utf-8"?>
<configuration
	xmlns:patch="http://www.sitecore.net/xmlconfig/"
	xmlns:role="http://www.sitecore.net/xmlconfig/role/"
	xmlns:set="http://www.sitecore.net/xmlconfig/set/">
	<sitecore role:require="XMCloud">
		<powershell>
			<userAccountControl>
				<tokens>
					<token name="Default" elevationAction="Block" />
					<token name="Console" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='Console']" />
					<token name="ISE" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='ISE']" />
					<token name="ItemSave" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='ItemSave']" />
				</tokens>
			</userAccountControl>
		</powershell>
	</sitecore>
</configuration>

Include the above code into a platform customization project as .\docker\deploy\platform\App_Config\Include\ZZZ\z.SPE.config. If everything is done correctly, you can run SPE commands, as below:

SPE results

The Result

After all the above steps are done correctly, you will be able to utilize the legacy content database along with your new shiny local XM Cloud instance:
Result in Sitecore Content Editor
Now you can copy items between databases just by using the built-in Sitecore UI, preserving their IDs and version history. You can also copy items with SPE from one database to another, since both are visible to the SPE engine.

Full guide to enabling Headless Multisite Add-On for your XM Cloud solution

The Sitecore Headless Next.js SDK recently brought a feature that makes it possible to have multiple websites from the Sitecore content tree served by the same rendering host JSS application. It uses Next.js middleware to serve the correct Sitecore site based on the incoming hostname.

Why and When to Use the Multisite Add-on

The Multisite Add-on is particularly useful in scenarios where multiple sites share common components or when there is a need to centralize the rendering logic. It simplifies deployment pipelines and reduces infrastructure complexity, making it ideal for enterprises managing a portfolio of websites under a unified architecture. This approach saves resources and ensures a consistent user experience across all sites.

How it works

The application fetches site information at build time, not at runtime (for performance reasons). Every request invokes all Next.js middleware. Because new sites are NOT added frequently, fetching site information at runtime (while technically possible) is not the best solution due to the potential impact on visitor performance. Instead, you can automate the process with a webhook that triggers automatic redeployments of your Next.js app on publish. Conceptually, the resolution boils down to a hostname lookup over that build-time site list, as sketched below.
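Here is a minimal illustrative sketch (not the actual JSS source; the SiteInfo shape mirrors the config.js records shown later in this article):

// A conceptual sketch of resolving a site record from the build-time site list
type SiteInfo = { name: string; hostName: string; language: string };

// In the real app this array is baked into src/temp/config.js at build time
const sites: SiteInfo[] = JSON.parse(process.env.SITES || '[]');

// The first matching hostname wins, which is why the order of records matters
function resolveSite(hostname: string): SiteInfo | undefined {
  return sites.find((site) => site.hostName === hostname);
}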

Sitecore provides a relatively complicated diagram of how it works (pictured below), but if you do not get it at first glance, do not worry – you’ll gain the understanding after reading this article.


Technical Implementation

There are a few steps one must encounter to make it work. Let’s start with the local environment.

Since multisite refers to different sites, we need to configure hostnames. The Main site operates on main.localhost hostname, and it is already configured as below:

.\.env
    RENDERING_HOST=main.localhost
.\docker-compose.override.yml
    PUBLIC_URL: "https://${RENDERING_HOST}"

For the sake of the experiment, we plan to create a second website served locally at second.localhost. To do so, let’s add a new environment variable to the root .env file (SECOND_HOST=second.localhost) and introduce some changes to the init.ps1 script:

$renderingHost = $envContent | Where-Object { $_ -imatch "^RENDERING_HOST=.+" }
$secondHost = $envContent | Where-Object { $_ -imatch "^SECOND_HOST=.+" }
$renderingHost = $renderingHost.Split("=")[1]
$secondHost = $secondHost.Split("=")[1]

Further down the file, we also want to create an SSL certificate for this domain by adding a line at the bottom:

& $mkcert -install
# & $mkcert "*.localhost"
& $mkcert "$xmCloudHost"
& $mkcert "$renderingHost"
& $mkcert "$secondHost"

For Traefik to pick up the generated certificate and route traffic as needed, we need to add two more lines at the end of the .\docker\traefik\config\dynamic\certs_config.yaml file:

tls:
  certificates:
    - certFile: C:\etc\traefik\certs\xmcloudcm.localhost.pem
      keyFile: C:\etc\traefik\certs\xmcloudcm.localhost-key.pem
    - certFile: C:\etc\traefik\certs\main.localhost.pem
      keyFile: C:\etc\traefik\certs\main.localhost-key.pem
    - certFile: C:\etc\traefik\certs\second.localhost.pem
      keyFile: C:\etc\traefik\certs\second.localhost-key.pem

If you try initializing and running, it may seem to work at first glance. But navigating to second.localhost in the browser will lead to an infinite redirect loop. Inspecting the cause, I realized it occurs due to a CORS issue: second.localhost does not have CORS configured. Typically, when configuring the rendering host from docker-compose.override.yml, we provide the PUBLIC_URL environment variable to the rendering host container; however, that is a single value and we cannot pass multiple.

Here’s a more descriptive StackOverflow post I created to address the given issue.

To resolve it, we must pass the second host using the syntax below, as well as define CORS rules as labels on the rendering host:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.rendering-secure.entrypoints=websecure"
  - "traefik.http.routers.rendering-secure.rule=Host(`${RENDERING_HOST}`,`${SECOND_HOST}`)"
  - "traefik.http.routers.rendering-secure.tls=true"
# Add CORS headers to fix CORS issues in Experience Editor with Next.js 12.2 and newer
  - "traefik.http.middlewares.rendering-headers.headers.accesscontrolallowmethods=GET,POST,OPTIONS"
  - "traefik.http.middlewares.rendering-headers.headers.accesscontrolalloworiginlist=https://${CM_HOST}, https://${SECOND_HOST}"
  - "traefik.http.routers.rendering-secure.middlewares=rendering-headers"

Once the above is done, you can run the PowerShell prompt, initialize, and spin up Sitecore containers, as normal, by executing init.ps1 and up.ps1 scripts.

Configuring Sitecore

Wait until Sitecore spins up, navigate to the site collection, right-click it, and add another website, calling it Second and running on the hostname second.localhost.

Make sure the second site uses the exact same application name as the main one; you can configure that in the /sitecore/content/Site Collection/Second/Settings item, App Name field. The whole purpose of this exercise is for both sites to reuse the same rendering host application.

You should also make sure to match the values of the Predefined application rendering host fields of /sitecore/content/Site Collection/Second/Settings/Site Grouping/Second and /sitecore/content/Site Collection/Main/Settings/Site Grouping/Main items.

Another important field here is Hostname; make sure to set it for both websites:

Hostname

 

Now you are good to edit the Home item of the Second site. The multisite middleware does not affect Sitecore’s editing mode, so you’ll see no difference there.

Troubleshooting

If you’ve done everything right but second.localhost still resolves to the Main website, let’s troubleshoot. The very first location to check is the .\src\temp\config.js file. This file contains a sites variable with the array of sites and related hostnames used for site resolution. The important fact is that this file is generated at build time, not runtime.

So, if you open it up and see an empty array (config.sites = process.env.SITES || '[]'), that means you need to trigger a rebuild, for example by simply removing and recreating the rendering host container:

docker-compose kill rendering
docker-compose rm rendering -f
docker-compose up rendering -d

Also, before running the above, it helps to check SXA Site Manager, which is available under the PowerShell Toolbox in Sitecore Desktop. You must see both sites and the relevant hostnames there, in the correct order – make sure to move them as high as possible, as this site chain works on a “first come, first served” basis.

Multisite

After the rendering host gets recreated (it may take a while for both the build and spin-up steps), check the sites definition in .\src\temp\config.js again. It should look as below:

config.sites = process.env.SITES || '[{"name":"second","hostName":"second.localhost","language":""},{"name":"main","hostName":"main.localhost","language":""}]'

The set of site records must match the records from SXA Site Manager. Now, opening second.localhost in the browser should show you the rendered home page of the second site.

Another troubleshooting technique is to inspect the middleware logs. To do so, create an .env.local file at the rendering host app root (if it does not exist yet) and add the debugging parameter:

DEBUG=sitecore-jss:multisite

Now, the rendering host container logs will expose insights into how the multisite middleware processes your requests, resolves site contexts, and sets site cookies. Below is a sample (and correct) output of the log:

sitecore-jss:multisite multisite middleware start: { pathname: '/', language: 'en', hostname: 'second.localhost' } +8h
sitecore-jss:multisite multisite middleware end in 7ms: {
  rewritePath: '/_site_second/',
  siteName: 'second',
  headers: {
  set-cookie: 'sc_site=second; Path=/',
  x-middleware-rewrite: 'https://localhost:3000/_site_second',
  x-sc-rewrite: '/_site_second/'
},
  cookies: ResponseCookies {"sc_site":{"name":"sc_site","value":"second","path":"/"}}
} +7ms

The above log is crucial to understanding how the multisite middleware works internally. The internal request gets rewritten to https://localhost:3000/_site_second, where ‘second‘ is the tokenized site name parameter. If the .\src\main\src\temp\config.js file contains the corresponding site record, the site gets resolved and the sc_site cookie is set.

If the Second website still resolves to the Main website despite the multisite middleware log showing correct site resolution, that is most likely caused by a conflict with other middleware processors. This should be your number one thing to check, especially if you have multiple custom middleware. The multisite middleware is especially sensitive to the execution order, as it sets cookies. In my case, the problem was that the Sitecore Personalize Engage SDK was registered through middleware, programmed to set its own cookie, and ended up conflicting with the multisite middleware.

In that case, you have to play with the order constant within each conflicting middleware (they are all located under the .\src\lib\middleware\plugins folder), followed by a restart of the rendering host. A sketch of such a plugin is shown below.
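For illustration, below is a minimal hedged sketch of such a plugin, assuming the order/exec plugin shape used by the JSS Next.js sample (all names are illustrative):

// Plugins under .\src\lib\middleware\plugins run in ascending 'order'
import type { NextRequest, NextResponse } from 'next/server';

class CustomPlugin {
  // Lower values run earlier; keep cookie-dependent logic behind the
  // multisite plugin that sets the sc_site cookie
  order = 2;

  async exec(req: NextRequest, res?: NextResponse): Promise<NextResponse> {
    // custom per-request logic would go here
    return res as NextResponse;
  }
}

export const customPlugin = new CustomPlugin();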

Resources Sharing

Since the multisite add-on leverages the same rendering host app for all the sites using it, the components and layout TSX markup, the middleware, and the rest of the app resources automatically become available to the second site. However, that is not true by default for Sitecore resources. We must assign at least one website to share its assets, such as renderings, partials, layouts, etc., across the entire Site Collection. Luckily, that is pretty easy to do at the site collection level:

Site Collection

For the sake of the experiment, let’s make the Second website use Page Designs from the Main site. With the above sharing enabled, they automatically become available at the /sitecore/content/Site Collection/Second/Presentation/Page Designs item.

Configuring the cloud

Once the multisite add-on functions well locally, let’s make the same work in the cloud.

  1. First of all, commit all the changes to the code repository and trigger the deployment, either from the Deploy App, the CLI, or your preferred CI/CD.
  2. After your changes arrive at the XM Cloud environment, let’s bring the required changes to Vercel, starting with defining the hostnames for the sites:

Vercel

  3. After you define the hostnames in Vercel, change the hostname under the Site Grouping item for this particular cloud environment to match the hostname configured earlier in Vercel.
  4. Save the changes and smart publish the Site Collection. Publishing takes some time, so please stay patient.
  5. Finally, you must also do a full Vercel re-deployment to force regenerating the \src\temp\config.js file with the hostnames from Experience Edge published in the previous step.

Testing

If everything was done right in the previous steps, you can access the published Second website from Vercel by its hostname – in our case that would be https://dev-second.vercel.app. Make sure all the shared partial layouts show up as expected.

When testing 404 and 500 pages, you will see that these are shared between both sites automatically, and you cannot configure them individually per site. This occurs because both are located at .\src\pages\404.tsx and .\src\pages\500.tsx of the same rendering host app and use different internal mechanics than the generic content served from Sitecore through the .\src\pages\[[...path]].tsx route. These error pages could, however, be made multi-lingual if needed, as sketched below.
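Here is a minimal hedged sketch of what such a shared .\src\pages\404.tsx might look like (purely illustrative):

// Every site served by this rendering host renders the same component,
// since error pages bypass the [[...path]].tsx catch-all route
import type { GetStaticProps } from 'next';

const NotFound = (): JSX.Element => <h1>Page not found</h1>;

// getStaticProps could fetch per-locale dictionary phrases to make this page multi-lingual
export const getStaticProps: GetStaticProps = async () => ({ props: {} });

export default NotFound;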

Hope you find this article useful!

Using conditions with XM Cloud Form Builder

If you follow the release news from Sitecore, you’ve already noted the release of the highly awaited XM Cloud Forms builder, which, however, did not have a feature for adding conditional logic. Until now.

The good news is that now we can do it, and here’s how.

See it in action

Imagine you have a registration form and want to ask whether your clients would like to receive emails. To do so, you add two additional fields at the bottom:

  • a checkbox for users to define if they want to receive these emails
  • a dedicated email input field for leaving the email address

Obviously, you want to validate the email address input as normal and make it required. At the same time, you want this field to only come into play when the checkbox is checked, and otherwise be ignored and ideally hidden.

Once we set both the Required and Hidden properties, a hint appears saying that we cannot have them both together, as that simply creates a deadlock – something I mentioned in my earlier review of the XM Cloud Forms builder.

01

So, how do we achieve that?

From now on, there is an additional Logic tab that you can leverage to add conditional logic to your forms.

02

Let’s see what we can do with it:

  • you can add new pieces of logic
  • apply multiple conditions within a piece of logic and define whether all must comply or just any of them (logical “AND” and “OR”)
  • create groups of conditions to combine multiple OR clauses against the same single AND condition
  • for each condition, select a particular field of application from a dropdown
  • define what requirements you want it to meet from another dropdown, which is content-specific:
    • strict match
    • begins or ends with
    • contains
    • checked / unchecked
    • etc.
  • beyond having multiple conditional logic rules, you can also have multiple field conditions within a single rule, with the same “and”/“or”

Once all conditions are met, you execute the desired action against a specified field:

04

Coming back to a form with an optional email subscription defined by a checkbox, I created the following rule:

05

The conditional logic rules engine is very intuitive and human-readable. I do not even need to explain the above screenshot, as it is naturally self-explanatory. So, how does it perform? Let’s run Preview and see it in action.

When running the form as normal, we only see the checkbox, unchecked by default, and can submit the form straight away:

06

But ticking Email me updates and promotions enables the Email field, which is Required. The form won’t submit with an invalid email address while the checkbox is checked.

07

My expectation for the conditional logic was to apply conditions at the page level for multi-page forms. Say, the first page has some sort of branching condition, so that if the user meets it, they get pages 2 and 3, but if not – only page 4, with both branches ending up at page 5. Unfortunately, I did not find a way of doing that. Hopefully, the development teams will add this in the future, given how progressively and persistently they introduce new features. In any case, what we’ve been given today is already a very powerful tool that allows us to create really complicated streams of user input data.

GraphQL: not an ideal one!

You’ll find plenty of articles about how amazing GraphQL is (including mine), but after some time of using it, I’ve developed some reservations about the technology and want to share some bitter thoughts about it.

GraphQL

History of GraphQL

How did it all start? The best way to answer this question is to go back to the original problem Facebook faced.

Back in 2012, we began an effort to rebuild Facebook’s native mobile applications. At the time, our iOS and Android apps were thin wrappers around views of our mobile website. While this brought us close to a platonic ideal of the “write once, run anywhere” mobile application, in practice, it pushed our mobile web view apps beyond their limits. As Facebook’s mobile apps became more complex, they suffered poor performance and frequently crashed. As we transitioned to natively implemented models and views, we found ourselves for the first time needing an API data version of News Feed — which up until that point had only been delivered as HTML.

We evaluated our options for delivering News Feed data to our mobile apps, including RESTful server resources and FQL tables (Facebook’s SQL-like API). We were frustrated with the differences between the data we wanted to use in our apps and the server queries they required. We don’t think of data in terms of resource URLs, secondary keys, or join tables; we think about it in terms of a graph of objects.

Facebook came across a specific problem and created its own solution: GraphQL. To represent data in the form of a graph, the company designed a hierarchical query language. In other words, GraphQL naturally follows the relationships between objects. You can request nested objects and return them all in a single HTTPS request. Back in the day, this was crucial, since global users did not always have cheap/unlimited mobile data plans, so the GraphQL protocol was optimized to transmit only what users needed.

Therefore, GraphQL solves Facebook’s problems. Does it solve yours?

First, let’s recap the advantages

  • Single request, multiple resources: compared to REST, which requires multiple network requests to separate endpoints, GraphQL lets you request all resources with a single call.
  • Receive accurate data: GraphQL minimizes the amount of data transferred over the wire by selecting it based on the needs of the client application. Thus, a mobile client with a small screen may receive less information.
  • Strong typing: every request, input, and response object has a type. In web browsers, the lack of types in JavaScript has become a weakness that various tools (Google’s Dart, Microsoft’s TypeScript) try to compensate for. GraphQL allows you to exchange types between the backend and frontend.
  • Better tooling and developer friendliness: the introspective server can be queried about the types it supports, enabling API explorers, autocompletion, and editor warnings. No more relying on backend developers to document their APIs – simply explore the endpoints and get the data you need.
  • Version independence: the shape of the data returned is determined solely by the client request, so servers become simpler. When new server-side features are added to the product, new fields can be added without affecting existing clients.

Thanks to the “single request, multiple resources” principle, front-end code has become much simpler with GraphQL. Imagine a situation where a user wants to get details about a specific writer (name, id, books, etc.). In a traditional REST pattern, this would require multiple cross-requests between the two endpoints /writers and /books, which the frontend would then have to merge. Thanks to GraphQL, however, we can define all the necessary data in a single request, as shown below:

{
    writers(id: "1") {
        id
        name
        avatarUrl
        books(limit: 2) {
            name
            urlSlug
        }
    }
}

The main advantage of this pattern is simplified client code. However, some developers expected to use it to optimize network calls and speed up application startup. You don’t make the code faster; you simply transfer the complexity to the backend, which has more computing power. Moreover, for many scenarios, metrics show that the REST API turns out faster than GraphQL.

This is mostly relevant for mobile apps. If you’re working with a desktop app or a machine-to-machine API, there’s no added value in terms of performance.

Another point is that you may indeed save some kilobytes with GraphQL, but if you really want to optimize loading times, it’s better to focus on serving lower-quality images to mobile; and as we’ll see, GraphQL doesn’t work very well with documents.

But let’s see what actually is wrong or could be better with GraphQL.

Strong Typing

GraphQL defines all API types, commands, and queries in the graphql.schema file. However, I’ve found that typing with GraphQL can be confusing. First of all, there is a lot of duplication: GraphQL defines the types in the schema, yet we need to re-declare those types for our backend (TypeScript with Node.js). You have to spend additional effort to make it all work with Zod, or set up some cumbersome code generation for types. A sketch of the Zod route follows below.
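For illustration, here is one hedged way to reduce that duplication with Zod; the Writer/Book shapes are hypothetical, mirroring the writers query shown earlier:

// Keep runtime validation and static types in sync: declare the schema once,
// infer the static type from it
import { z } from 'zod';

const BookSchema = z.object({
  name: z.string(),
  urlSlug: z.string(),
});

const WriterSchema = z.object({
  id: z.string(),
  name: z.string(),
  books: z.array(BookSchema),
});

// Inferred once instead of being re-declared by hand
type Writer = z.infer<typeof WriterSchema>;

// Validate an untyped GraphQL payload at runtime
function parseWriter(payload: unknown): Writer {
  return WriterSchema.parse(payload); // throws if the shape drifts from the schema
}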

Debugging

It’s hard to find what you’re looking for in the Chrome inspector because all the endpoints look the same. In REST you can tell what data you’re getting just by looking at the URL:

Devtools 

compared to:

  Devtools

Do you see the difference?

No support for status codes

REST allows you to use HTTP status codes like “404 Not Found” or “500 Server Error”, but GraphQL does not. GraphQL forces a 200 status code, with errors carried in the response payload. To understand which query failed, you need to check each payload (see the sketch below). The same applies to monitoring: HTTP error monitoring is very easy compared to GraphQL, because REST endpoints each return their own status code, while troubleshooting GraphQL requires parsing JSON objects.

Additionally, some objects may be empty either because they cannot be found or because an error occurred. It can be difficult to distinguish the difference at a glance.
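For illustration, here is a minimal hedged TypeScript sketch of the payload-inspection dance this forces on clients (the /graphql endpoint and response shape follow the common convention):

// A GraphQL response returns HTTP 200 even on failure, so errors must be
// detected by inspecting the JSON payload itself
type GraphQLResponse<T> = {
  data?: T;
  errors?: { message: string; path?: (string | number)[] }[];
};

async function fetchGraphQL<T>(query: string): Promise<T> {
  const res = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  // res.ok is almost always true here, even when the query failed
  const json = (await res.json()) as GraphQLResponse<T>;
  if (json.errors?.length) {
    throw new Error(json.errors.map((e) => e.message).join('; '));
  }
  return json.data as T;
}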

Versioning

Everything has its price. When modifying a GraphQL API, you can mark some fields as deprecated, but you are forced to maintain backward compatibility: the fields must remain there for older clients who use them. GraphQL spares you from versioning, at the price of maintaining every deprecated field.

To be fair, REST versioning is also a pain point, but it does provide an interesting capability for expiring functionality. In REST, everything is an endpoint, so you can easily block legacy endpoints for new users and measure who is still using the old ones. Redirects can also simplify migrating from older versions to newer ones in some cases.

Pagination

GraphQL Best Practices suggests the following:

The GraphQL specification is deliberately silent on several important API-related issues, such as networking, authorization, and pagination.

How “convenient” (not!). In general, as it turns out, pagination in GraphQL is very painful.

Caching

The point of caching is to receive a server response faster by storing the results of previous computations. In REST, URLs are unique identifiers of the resources users are trying to access. Therefore, you can perform caching at the resource level. Caching is part of the HTTP specification. Additionally, browsers and mobile devices can use these URLs and cache the resources locally (the same way they do with images and CSS).

In GraphQL this gets tricky, because each query can be different even though it operates on the same entity. It requires field-level caching, which is not easy to do with GraphQL, since it uses a single endpoint. Libraries like Prisma and DataLoader have been developed to help with such scenarios, but they still fall short of REST capabilities.

Media types

GraphQL does not support uploading documents to the server, for which multipart/form-data is normally used. Apollo developers have been working on a file-uploads solution, but it is difficult to set up. Additionally, GraphQL does not support media type headers when retrieving a document, which would allow the browser to display the file correctly.

I previously wrote a post about the steps one must take in order to upload an image to the Sitecore Media Library (either XM Cloud or XM 10.3 and newer) using the Authoring GraphQL API.

Security

When working with GraphQL, you can query exactly what you need, but you should be aware that this comes with complex security implications. If an attacker sends costly, deeply nested requests to overload the server, it may effectively become a DDoS attack.

They may also be able to access fields that are not intended for public access. When using REST, you can control permissions at the URL level. With GraphQL, this must happen at the field level:

user {
    username  <-- anyone can see that
    email     <-- private field
    post {
        title <-- some of the posts are private
    }
}
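In resolver code, that field-level gating might look like the minimal hedged TypeScript sketch below (the user shape and context are hypothetical):

// Field-level access control: REST would instead gate this per URL
type Ctx = { isOwner: boolean };

const userFieldResolvers = {
  username: (user: { username: string }) => user.username, // public field
  email: (user: { email: string }, _args: unknown, ctx: Ctx) => {
    if (!ctx.isOwner) {
      throw new Error("Not authorized to read 'email'"); // private field
    }
    return user.email;
  },
};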

Conclusion

REST became the new SOAP; now GraphQL is the new REST. History repeats itself. It’s hard to say whether GraphQL is just a popular trend that will gradually be forgotten, or whether it will truly change the rules of the game. One thing is certain: it still needs some development to reach more maturity.

A crash course of Next.js: TypeScript (part 5)

This series is my Next.js study resume, and although it focuses on vanilla Next.js, all the features are applicable with the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – layouts, styles, and fonts, powerful features, the Image and Script components, and of course – TypeScript.
  • We went through the nuances of Next.js routing and explained middleware in part 3
  • Part 4 was all about caching, authentication, and going live tasks

In this post, we are going to talk about TypeScript. It was already mentioned a lot in the previous parts, but this time we’ll put it under the spotlight.

Intro

TypeScript has become the industry standard for strong typing in the current state of the JavaScript world. There is currently no other solution that allows us to implement typing as effectively. One can, of course, use some kind of contracts, conventions, or JSDoc to describe types, but all of these are far worse for code readability than typical TypeScript annotations. Annotations save your eyes from darting up and down: you simply read the signature and immediately understand everything.

Another point concerns JavaScript support in editors and IDEs, which is usually based on TypeScript. JavaScript support in VS Code is implemented using the TypeScript Language Service, and WebStorm’s JavaScript support largely relies on the TypeScript framework and uses its standard library. One more reason to learn TypeScript is that when the editor complains about a JavaScript type mismatch, you’ll have to read declarations written in TypeScript.

However, TypeScript is not a replacement for other code-quality tools. It is just one of the tools that allows you to maintain conventions in a project and guarantee strong types. You are still expected to write tests, do code reviews, and design the architecture correctly.

Learning TypeScript, even in 2024, can be difficult, for many different reasons. Folks like myself who came from C# may find things that don’t work the way they would expect. People who have programmed in JavaScript for most of their lives get scared when the compiler yells at them.

TypeScript is a superset of JavaScript: JavaScript is part of TypeScript, which is in fact built on top of it. Adopting TypeScript does not mean throwing out JavaScript with its specific behavior – TypeScript helps us understand why JavaScript behaves the way it does. Let’s look at some mistakes people make when getting started with TypeScript. Take error handling as an example. It would be a natural, logical expectation to handle errors similarly to how we are used to doing it in other programming languages:

try {
    // let's pull some API with Axios, and some error occurred, say the Axios fetch failed
} catch (e: AxiosError) {
    //      ^^^^^^^^^^ Error: a catch clause variable may only be annotated as 'any' or 'unknown'
}

The above syntax is not possible because that’s not how errors work in JavaScript. Code that would be logical to write in TypeScript cannot be written that way in JavaScript; the idiomatic alternative is sketched below.
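For reference, the working pattern catches the error as unknown and narrows it at runtime; a minimal sketch assuming Axios is installed (the URL is illustrative):

import axios from 'axios';

async function fetchData(): Promise<void> {
  try {
    await axios.get('https://example.com/api');
  } catch (e: unknown) {
    // e is narrowed to AxiosError only after the runtime check
    if (axios.isAxiosError(e)) {
      console.error('Request failed with status:', e.response?.status);
    } else {
      throw e; // not an Axios error – rethrow
    }
  }
}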

A quick recap of TypeScript

  • Strongly typed language developed by Microsoft.
  • Code written in TypeScript compiles to native JavaScript
  • TypeScript extends JavaScript with the ability to statically assign types.

TypeScript supports modern editions of the ECMAScript standards, and code written using them is compiled with the possibility of execution on platforms that support older versions of the standards. This means a TS programmer can take advantage of ES2015 and newer features, such as modules, arrow functions, classes, spread, and destructuring, while still targeting existing environments that do not yet support these standards.

The language is backward compatible with JavaScript: if you feed the compiler pure JavaScript, it will accept it without complaining – that is valid TypeScript code, just with no type benefits. Therefore, one can write mixed code, using strongly typed TypeScript syntax in some places and implementing methods in plain JS without strong typing in others – and that will also be valid.

So, what benefits does the language provide?

  • System for working with modules/classes – you can create interfaces, modules, classes
  • You can inherit interfaces (including multiple inheritance), classes
  • You can describe your own data types
  • You can create generic interfaces
  • You can describe the type of a variable (or the properties of an object), or describe what interface the object to which the variable refers should have
  • You can describe the method signature
  • Using TypeScript (as opposed to JavaScript) significantly improves the development process, because the IDE receives type information from the TS compiler in real time.

On the other hand, there are some disadvantages:

  • In order to use an external tool (say, a library or framework) with TypeScript, the signature of each method of each module of that tool must be described so that the compiler does not throw errors; otherwise, it simply won’t know about your tool. For the majority of popular tools, the interface description can most likely be found in a repository; otherwise, you’ll have to describe it yourself.
  • Probably the biggest disadvantage is the learning curve and building the habit of thinking and writing in TypeScript.
  • At least in my experience, more time is spent on development compared to vanilla JavaScript: in addition to the actual implementation, it is also necessary to describe all the involved interfaces and method signatures.

One of the serious advantages of TS over JS is the ability to set up, in various IDEs, a development environment that identifies common errors directly as you type. Using TypeScript in large projects increases the reliability of programs, which can still be deployed in the same environments where regular JS applications run.

Types

There is an expectation that it is enough to learn a few types to start writing TypeScript and automatically get good code. Indeed, one can simply write types in the code, explaining to the compiler that in this place we expect a variable of a certain type. The compiler will then prompt you on whether you can do this or not:

let helloFunction = () => { "hello" };  // a block body without return, so the function returns void
let text: string = helloFunction();
// TS2322: Type 'void' is not assignable to type 'string'.

However, the reality is that you cannot outsource the entire work to the compiler. Let’s see what types you need to learn to progress with TypeScript:

  • basic primitives from JavaScript: boolean, number, string, symbol, bigint, undefined and object.
  • instead of the function type, TypeScript has Function and a separate syntax similar to the arrow function, but for defining types. The object type will mean that the variable can be assigned any object literals in TypeScript.
  • TypeScript-specific primitives: null, unknown, any, void, unique symbol, never, this.
  • Next are the types standard for many object-oriented languages: array, tuple, generic
  • Named types in TypeScript refer to the practice of giving a specific, descriptive name to a type for variables, function parameters, and return types: type User = { name: string; age: number; }
  • TypeScript doesn’t stop there: it offers union and intersection. Special literal types often work in conjunction with string, number, boolean, template string. These are used when a function takes not just a string, but a specific literal value, like “foo” or “bar”, and nothing else. This significantly improves the descriptive power of the code.
  • TypeScript also has typeof, keyof, indexed, conditional, mapped, import, await, const, predicate.

These are just the basic types; many others are built on their basis: for example, a composite Record<T>, or the internal types Uppercase<T> and Lowercase<T>, which are not defined in any way: they are intrinsic types.
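To make several of these constructs concrete, here is a short sampler (all names are illustrative):

type Direction = 'left' | 'right';                              // literal union
type Point = { x: number; y: number };                          // named object type
type PointKeys = keyof Point;                                   // "x" | "y"
type ReadonlyPoint = { readonly [K in keyof Point]: Point[K] }; // mapped type
type Pair<T> = [T, T];                                          // generic tuple
type Registry = Record<string, Point>;                          // composite built-in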

P.S. – do not use Function; pass a predefined type, or use arrow notation instead:

// replace unknown with the types you're using with the function
F extends (...args: unknown[]) => unknown
// example of a function with 'a' and 'b' arguments that returns 'Hello'
const func = (a: number, b: number): string => 'Hello'

Map and d.ts files

TypeScript comes with its own set of unique file types that can be puzzling at first glance. Among these are the *.map and *.d.ts files. Let’s demystify these file types and understand their roles in TypeScript development.

What are .map Files in TypeScript?

.map files, or source map files, play a crucial role in debugging TypeScript code. These files are generated alongside the JavaScript output when TypeScript is compiled. The primary function of a .map file is to create a bridge between the original TypeScript code and the compiled JavaScript code. This linkage is vital because it allows developers to debug their TypeScript code directly in tools like browser developer consoles, even though the browser is executing JavaScript.

When you’re stepping through code or setting breakpoints, the .map file ensures that the debugger shows you the relevant TypeScript code, not the transpiled JavaScript. This feature is a game-changer for developers, as it simplifies the debugging process and enhances code maintainability.

Understanding .d.ts Files in TypeScript

On the other side, we have *.d.ts files, known as declaration files in TypeScript. These files are pivotal for using JavaScript libraries in TypeScript projects. Declaration files act as a bridge between the dynamically typed JavaScript world and the statically typed TypeScript world. They don’t contain any logic or executable code but provide type information about the JavaScript code to the TypeScript compiler.

For instance, when you use a JavaScript library like Lodash or jQuery in your TypeScript project, the *.d.ts files for these libraries describe the types, function signatures, class declarations, etc., of the library. This allows TypeScript to understand and validate the types being used from the JavaScript library, ensuring that the integration is type-safe and aligns with TypeScript’s static typing system.

The Key Differences

The primary difference between *.map and *.d.ts files lies in their purpose and functionality. While .map files are all about enhancing the debugging experience by mapping TypeScript code to its JavaScript equivalent, .d.ts files are about providing type information and enabling TypeScript to understand JavaScript libraries.

In essence, .map files are about the developer’s experience during the debugging process, whereas .d.ts files are about maintaining type safety and smooth integration when using JavaScript libraries in TypeScript projects.

Cheatsheet

I won’t get into the details of the basics and operations which can be visualized in this cheat sheet instead:

Better than a thousand words!

Let’s take a look at some other great features not mentioned in the above cheatsheet.

What the heck is Any?

In TypeScript you can use the any data type. This allows you to work with any type of data without errors, just like regular JavaScript – in fact, what you do by using any is downgrade your TypeScript to plain JavaScript by eliminating type safety. The best way to understand is to look at it in action:

let car: any = 2024;
console.log(typeof car)   // number

car = "Mercedes";
console.log(typeof car)   // string

car = false;
console.log(typeof car)   // boolean

car = null;
console.log(typeof car)   // object

car = undefined;
console.log(typeof car)   // undefined

The car variable can be assigned any data type. Any is an evil data type, indeed! If you are going to use the any data type everywhere, then TypeScript immediately becomes unnecessary – just write code in JavaScript.

TypeScript can also infer what data type is to be used if we do not specify it. We can replace the code from the first example with this one:

const caterpie01: number = 2021;     // number
const caterpie001 = 2021;            // number  - that was chosen by typescript for us

const Metapod01: string = "sleepy";  // string
const Metapod001 = "sleepy";         // string  - that was chosen by typescript for us

const Wartortle01: boolean = true;   // boolean
const Wartortle001 = true;           // boolean - that was chosen by typescript for us

This is a more readable and shorter way of writing. And of course, we won’t be able to assign any other data type to a variable.

let caterpie = 2021;            // in TypeScript this variable becomes number after assignment
caterpie = "text";              // type error, as the type was already defined upon the assignment

On the other hand, if we don’t specify a data type for the function’s arguments, TypeScript will use any type. Let’s look at the code:

const sum = (a, b) => {
    return a + b;
}
sum(2021, 9);

In strict mode, the above code will error out with “Parameter 'a' implicitly has an 'any' type; Parameter 'b' implicitly has an 'any' type”, but it will work perfectly outside of strict mode, in the same manner as JavaScript code would. I assume that was done intentionally for compatibility.

Null checks and undefined

As simple as that:

if(value){
  ...
}

The expression in parentheses will be evaluated as true if it is not one of the following:

  • null
  • undefined
  • NaN
  • an empty string
  • 0
  • false

TypeScript supports the same type conversion rules as JavaScript does.
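A quick demonstration of the list above (each value logs "falsy"):

// The same coercion rules as JavaScript apply at runtime
for (const value of [null, undefined, NaN, '', 0, false]) {
  console.log(value ? 'truthy' : 'falsy');
}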

Type checking

Surprise-surprise, TypeScript is all about types and annotations. Therefore, the next piece of advice may seem weird, but it is legit: avoid explicit type checking, if you can.

Instead, always prefer to specify the types of variables, parameters, and return values to harness the full power of TypeScript. This makes future refactoring easier.

function travelToTexas(vehicle: Bicycle | Car) {
    if (vehicle instanceof Bicycle) {
        vehicle.pedal(currentLocation, newLocation('texas'));
    } else if (vehicle instanceof Car) {
        vehicle.drive(currentLocation, newLocation('texas'));
    }
}

The rewrite below is much more readable and therefore easier to maintain, with no ugly type-checking clauses:

type Vehicle = Bicycle | Car;

function travelToTexas(vehicle: Vehicle) {
    vehicle.move(currentLocation, newLocation('texas'));
}
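
Note that this refactor only compiles if both classes expose a common move() method. A minimal sketch of what that shared contract could look like (the Movable interface and Point type are my assumptions, not part of the original example):

type Point = { lat: number; lng: number };

interface Movable {
    move(from: Point, to: Point): void;
}

class Bicycle implements Movable {
    move(from: Point, to: Point) { /* pedal there */ }
}

class Car implements Movable {
    move(from: Point, to: Point) { /* drive there */ }
}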

Generics

Use them wherever possible – they help you better express the types used in your code. Unfortunately, generics are often underused, which is a shame.

function returnType<T>(arg: T): T {
    return arg;
}

returnType<string>('MJ knows Sitecore'); // works well
returnType<number>('MJ knows Sitecore'); // errors out
// ^ Argument of type 'string' is not assignable to parameter of type 'number'.

If your generic should only accept a specific type, be sure to constrain it with extends:

type AddDot<T extends string> = `${T}.`; // receives only strings, otherwise errors out

Ternary operators with extends

extends is very useful: it helps to determine whether a type is derived from another type in the type hierarchy (any -> number -> …) and make a comparison. Thanks to the combination of extends and ternary operators, you can create awesome conditional constructs like this:

type IsNumber<T> = T extends number ? true : false

type A = IsNumber<5>     // true
type B = IsNumber<'lol'> // false

Readonly and Consts

Use readonly by default to avoid accidentally overwriting the properties described by your interface.

interface User {
    readonly name: string;
    readonly surname: string;
}

Let’s say you have an array that comes from the backend [1, 2, 3, 4] and you need to use only these four numbers, that is, make the array immutable. The as const construction can easily handle this:

const arr = [1, 2, 3, 4];                // current type is number[], so any number can be assigned
arr[3] = 5;                              // [1, 2, 3, 5]

const lockedArr = [1, 2, 3, 4] as const; // now the type is locked as readonly [1, 2, 3, 4]
lockedArr[3] = 5;                        // errors out

Satisfies

That is a relatively recent feature (available since version 4.9), but it is so helpful, as it allows you to impose restrictions without changing the inferred type. This can be very useful when you manage union types that do not share common methods. For example:

type Numbers = readonly [1, 2, 3];
type Val = { value: Numbers | string };

// the valid accepted values are the numbers 1, 2, 3, or a string
const myVal: Val = { value: 'a' };

So far so good. Let’s say we have a string and must convert it to capital letters. Intuitively trying the code below without satisfies gets you an error, because the declared type widens value back to the union:

myVal.value.toUpperCase()
// ^ Property 'toUpperCase' does not exist on type 'Numbers'.

The right way to deal with it is to use satisfies – then everything is fine:

const checkedVal = { value: 'a' } satisfies Val;
checkedVal.value.toUpperCase()   // works well and returns 'A'

Unions

Sometimes you can see code like this:

interface User {
    loginData: "login" | "username";
    getLogin(): void;
    getUsername(): void;
}

This code is bad because you can use a username but still call getLogin(), and vice versa. To prevent this, it is better to use a union of narrower interfaces instead:

interface UserWithLogin {
    loginData: "login";
    getLogin(): void;
}
interface UserWithUsername {
    loginData: "username";
    getUsername(): void;
}

type User = UserWithLogin | UserWithUsername;
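
The loginData literal now acts as a discriminant: checking it lets TypeScript narrow the union, so only the matching method can be called. A quick sketch:

function identify(user: User) {
    if (user.loginData === 'login') {
        user.getLogin();    // narrowed to UserWithLogin
    } else {
        user.getUsername(); // narrowed to UserWithUsername
    }
}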

What is even more impressive – unions can be mapped over, meaning you can iterate their members to produce new types:

type Numbers = 1 | 2 | 3
type NumberFlags = { [N in Numbers]: boolean }
// {1: boolean, 2: boolean, 3: boolean}

Utility Types

TypeScript Utility Types are a set of built-in types that can be used to manipulate data types in code.

  • Required<T> – makes all properties of an object of type T required.
  • Partial<T> – makes all properties of an object of type T optional.
  • Readonly<T> – makes all properties of an object of type T read-only.
  • NonNullable<Type> – constructs a type from Type, excluding null and undefined.
  • Parameters<Type> – retrieves the types of the arguments of function type Type.
  • ReturnType<Type> – retrieves the return type of function type Type.
  • InstanceType<Type> – retrieves the instance type of class Type.
  • Record<Keys, Type> – creates a record type with the keys defined by the first parameter and values of the type defined by the second.
  • Pick<T, K extends keyof T> – selects the properties of an object of type T with the keys specified in K.
  • Omit<T, K extends keyof T> – selects the properties of an object of type T, excluding those specified in K.
  • Exclude<UnionType, ExcludedMembers> – excludes certain members from a union type.
  • Uppercase<StringType>, Lowercase<StringType>, Capitalize<StringType>, Uncapitalize<StringType> – string manipulation utility types that change the case of the string according to their name.

I won’t get into the details on all of them but will showcase just a few for better understanding:

// 1. Required
interface Person {
    name?: string;
    age?: number;
}

let requiredPerson: Required<Person>;   // now requiredPerson could be as { name: string; age: number; }

// 2. Partial
interface Person {
    name: string;
    age: number;
}

let partialPerson: Partial<Person>;    // now partialPerson could be as { name?: string; age?: number; }

// 3. NonNullable
let value: string | null | undefined;
let nonNullableValue: NonNullable<typeof value>;    // now nonNullableValue is a string

// 4. Awaited
async function getData(): Promise<string> {
    return 'hello';
}
let awaitedData: Awaited<ReturnType<typeof getData>>;     // now awaitedData could be 'hello'

// 5 Case management
type Uppercased = Uppercase<'hello'>;       // 'HELLO'
type Lowercased = Lowercase<'Hello'>;       // 'hello'
type Capitalized = Capitalize<'hello'>;     // 'Hello'
type Uncapitalized = Uncapitalize<'Hello'>; // 'hello'

The above are just a few examples of utility types in TypeScript. To find out more, please refer to the official documentation.

Interface vs types

This is one of the questions that causes the most confusion for a C# developer trying TypeScript. What’s the difference between the two declarations below?

interface X {
    a: number
    b: string
}

type X = {
    a: number
    b: string
};

You can use both to describe the shape of an object or a function signature – it’s just that the syntax differs.

Unlike interface, the type alias can also be used for other types such as primitives, unions, and tuples:

// primitive
type Name = string;

// object
type PartialPointX = { x: number; };
type PartialPointY = { y: number; };

// union
type PartialPoint = PartialPointX | PartialPointY;

// tuple
type Data = [number, string];

Both can be extended, but again, the syntax differs. Also, an interface can extend a type alias, and vice versa:

// Interface extends interface
interface PartialPointX { x: number; }
interface Point extends PartialPointX { y: number; }

// Type alias extends type alias
type PartialPointX = { x: number; };
type Point = PartialPointX & { y: number; };

// Interface extends type alias
type PartialPointX = { x: number; };
interface Point extends PartialPointX { y: number; }

// Type alias extends interface
interface PartialPointX { x: number; }
type Point = PartialPointX & { y: number; };

A class can implement an interface or a type alias, in the exact same way. However, classes and interfaces are considered static blueprints, therefore they cannot implement or extend a type alias that names a union type.
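
To illustrate that last point, a class can implement an object-shaped type alias just fine, but not an alias that names a union – a minimal sketch:

type Point = { x: number; y: number };

class ConcretePoint implements Point { // works: the alias describes a static shape
    x = 0;
    y = 0;
}

type PartialPoint = { x: number } | { y: number };

// class MixedPoint implements PartialPoint {} // errors out:
// ^ A class can only implement an object type or intersection
//   of object types with statically known members.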

Unlike a type alias, an interface can be defined multiple times and will be treated as a single interface (with members of all declarations being merged):

// These two declarations become:
// interface Point { x: number; y: number; }
interface Point { x: number; }
interface Point { y: number; }

const point: Point = { x: 1, y: 2 };

So, when should I use one over the other? If simplified: use types when you might need a union or intersection; use interfaces when you want to use extends or implements. There is no hard and fast rule though – use whatever works for you. I admit that may still be confusing, so please read this discussion for more understanding.

Error Handling

Throwing errors is always good: if something goes wrong at runtime, you can terminate the execution at the right moment and investigate the error using the stack trace in the console.

Always reject with errors

JavaScript and TypeScript allow you to throw any object. A promise can also be rejected with any reason object. It is recommended to use the throw syntax with the Error type, because then your error can be caught at a higher level of code with the catch syntax. Instead of this incorrect block:

function calculateTotal(items: Item[]): number {
    throw 'Not implemented.';
}

function get(): Promise<Item[]> {
    return Promise.reject('Not implemented.');
}

use the below:

function calculateTotal(items: Item[]): number {
    throw new Error('Not implemented.');
}

function get(): Promise<Item[]> {
    return Promise.reject(new Error('Not implemented.'));
}

// the above Promise could be rewritten with an async equivalent:
async function get(): Promise<Item[]> {
    throw new Error('Not implemented.');
}

The advantage of using Error types is that they are supported by the try/catch/finally syntax, and all Error instances have a stack property, which is very powerful for debugging. There is an alternative: don’t use the throw syntax at all and always return custom error objects instead. TypeScript makes this approach even easier, and it works like a charm!
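
Here is a minimal sketch of that alternative: a discriminated Result type (the Item shape with a price field is my assumption for the sake of the example):

type Result<T> = { ok: true; value: T } | { ok: false; error: Error };

interface Item { price: number }

function calculateTotal(items: Item[]): Result<number> {
    if (items.length === 0) {
        return { ok: false, error: new Error('No items provided.') };
    }
    return { ok: true, value: items.reduce((sum, item) => sum + item.price, 0) };
}

const result = calculateTotal([{ price: 10 }, { price: 32 }]);
if (result.ok) {
    console.log(result.value);           // 42, narrowed to the success branch
} else {
    console.error(result.error.message); // narrowed to the failure branch
}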

Dealing with Imports

Last but not least, I want to share some tips and best practices for using imports. With simple, clear, and logical import statements, you can inspect the dependencies of your current code much faster.

Make sure you use the following good practices for import statements:

  • Import statements should be in alphabetical order and grouped.
  • Unused imports must be removed (linter will come to help you)
  • Named imports must be in alphabetical order, i.e. import {A, B, C} from 'mod';
  • Import sources should be in alphabetical order in groups, i.e.: import * as foo from 'a'; import * as bar from 'b';
  • Import groups are indicated by blank lines.
  • Groups must follow the following order:
    • Polyfills (i.e. import 'reflect-metadata';)
    • Node build modules (i.e. import fs from 'fs';)
    • External modules (i.e. import { query } from 'itiriri';)
    • Internal modules (i.e. import { UserService } from 'src/services/userService';)
    • Modules from the parent directory (i.e. import foo from '../foo'; import qux from '../../foo/qux';)
    • Modules from the same or related directory (i.e. import bar from './bar'; import baz from './bar/baz';)

These rules are not obvious to beginners, but they come with time once you start paying attention to the minor details. Just compare the ugly block of imports:

import { TypeDefinition } from '../types/typeDefinition';
import { AttributeTypes } from '../model/attribute';
import { ApiCredentials, Adapters } from './common/api/authorization';
import fs from 'fs';
import { ConfigPlugin } from './plugins/config/configPlugin';
import { BindingScopeEnum, Container } from 'inversify';
import 'reflect-metadata';

against the same imports, nicely structured as below:

import 'reflect-metadata';

import fs from 'fs';
import { BindingScopeEnum, Container } from 'inversify';

import { AttributeTypes } from '../model/attribute';
import { TypeDefinition } from '../types/typeDefinition';

import { ApiCredentials, Adapters } from './common/api/authorization';
import { ConfigPlugin } from './plugins/config/configPlugin';

Which one reads faster?

Conclusion

In my opinion, TypeScript’s superpower is that it provides feedback as you write the code, not at runtime: the IDE nicely prompts which arguments to use when calling a function, and types are defined and navigable. All that comes at the cost of a minor compilation overhead and a slightly increased learning curve. That is a fair deal: TypeScript will stay with us for a long time and won’t go away – the earlier you learn it, the more time and effort it will save you later.

Of course, I intentionally left plenty of raw TypeScript features aside, so as not to overload the readers. If most of you are coming as professionals, either starting with Next.js development in general or switching from other programming languages and tech stacks (like .NET with C#), where I was myself a year ago – this is definitely the right volume and agenda to start with. There are, of course, a lot of powerful TypeScript features to explore beyond today’s post, such as:

  1. Decorators
  2. Namespaces
  3. Type Guards and Differentiating Types
  4. Type Assertions
  5. Ambient Declarations
  6. Advanced Types (e.g., Conditional Types, Mapped Types, Template Literal Types)
  7. Module Resolution and Module Loaders
  8. Project References and Build Optimization
  9. Declaration Merging
  10. Using TypeScript with WebAssembly

But I think that is enough for this post. Hope you’ll enjoy writing strongly typed code with TypeScript!


XM Cloud Forms Builder

XM Cloud Forms Builder released

The composable Forms Builder is now available with Sitecore XM Cloud. Let’s take a look at one of the most anticipated modules for Sitecore’s flagship hybrid headless cloud platform.

Historically, we had an external module called Web Forms for Marketers that one could install on top of their Sitecore instance to gain the desired functionality of collecting user input. This module was later reconsidered and reworked, finding its reincarnation as Sitecore Forms, an integral part of the Sitecore platform since version 9. Customers enjoyed this built-in solution provided along with their primary DXP; however, with the headless architecture of XM Cloud there are no CD servers any longer, and therefore no suitable place for saving the collected user input. There was clearly a need for a SaaS forms solution, and this gap is finally filled!

An interesting fact: until the release of Forms with XM Cloud, the relevant composable solution for interacting with visitors was Sitecore Send, so Sitecore logically decided to derive the XM Cloud Forms module from the Sitecore Send codebase (as it already had plenty of the desired features), rather than from legacy Sitecore Forms.

Sitecore XM Cloud Forms

So, what have we got?

The main goal was to release the new Forms product as a SaaS solution that integrates with any custom web front-end. The actual challenge was to combine the ultimate simplicity of creating and publishing forms for the majority of marketing professionals with tailoring this offering to typical headless projects. In my opinion, despite the complexities, that was well achieved!

Let’s first take a look at its desired/expected capabilities:

  • Template Library
  • Work with Components Builder
  • Use of external datasources for pre-populating forms
  • Reporting and analytics
  • Ability to create multi-step and multi-page forms
  • Conditional logic (not available yet)

One would ask: if there’s no CD server, or any managed backend at all, where do the submissions go? There might be some SaaS-provided storage along with the required interface to manage the collected input, right? Incorrect – there’s none. It was actually a smart move by the Sitecore developers, who decided to kill two birds with one stone. First, they saved the effort of building a universal UI/UX that would hardly satisfy the variable needs of such a diverse range of customers and industries. The second reason is even more legit: avoiding the storage of any Personally Identifiable Information, so that it won’t be processed within XM Cloud, leaving the particular implementation decisions to customers’ discretion.

That is done in a very composable SaaS manner: you configure a webhook that passes a payload of the collected data to the desired system of your choice.

Webhooks

Upon form submission, the webhook is triggered to submit the gathered data to the configured system – it could be a database, CRM, CDP, or whatever backend suits a given form. Even more, you can have shared webhooks, so that multiple forms use the same configured webhook. Similarly to the legacy forms that submitted their data into xDB, the most logical choice would be using the powerful Sitecore CDP for processing this data. However, with webhooks the use case of XM Cloud Forms becomes truly universal, and if you combine it with Sitecore Connect – it could span whichever integration is provided by it. Webhooks come with multiple authentication options, covering any potential backend requirement.
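
Just to illustrate the receiving end, below is a minimal sketch of a webhook endpoint implemented as a Next.js API route. The route path and payload handling are my assumptions – inspect the actual JSON shown in the Test Form Submission dialog for your form:

// pages/api/forms-webhook.ts
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
    if (req.method !== 'POST') {
        return res.status(405).end(); // the webhook only POSTs data
    }

    // the submitted fields arrive as JSON in the request body
    const submission = req.body;

    // forward the data to a system of your choice: database, CRM, CDP, etc.
    console.log('Form submission received:', submission);

    return res.status(200).json({ received: true });
}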

Let’s see it in action!

The editor looks and feels like XM Cloud Pages – similar styling, layout, and UI elements:

First, let’s pick the layout by simply dragging and dropping it onto the canvas. For simplicity, I selected Full Width Layout. Once there, you can start dropping fields onto the chosen layout:

Available Fields:
  • Action Button
  • Email
  • Phone
  • Short Text
  • Long Text
  • Select (single dropdown)
  • Multi Select (where you can define how many options may be selected, say 3 of 9)
  • Date
  • Number
  • Radio
  • Checkbox (single)
  • Checkbox Group
  • Terms & Conditions
  • reCAPTCHA
Besides input-capturing elements, you can also define “passive” UI elements from under the Basic expander:
  • Text
  • Spacer
  • Social Media – a set of clickable buttons to socials you define
  • Image, which has a pretty strong set of source options:
Media Upload options
Look at the variety of configuration options on the right panel. You can define:
  • Background Color within that field – transparent is the default one. You can even put a background image instead!
  • Settings for the field box shadows which also define Horizontal and Vertical Lengths, Blur and Spread Radiuses, and of course – the shadow color
  • Help text that is shown below and prompts some unobvious guidance you’d like a user to follow
  • For text boxes, you can set placeholder and prefill values
  • The field can be made mandatory and/or hidden with the corresponding checkboxes
  • Validation is controlled by a regex pattern and character length limit
  • Additionally, you can style pretty much everything: the field itself, label, placeholder, and help text, as well as set the overall padding

Please note that at this stage the edited form is in a Draft state. Clicking the Save button suggests running your form in Preview before saving, and that was very helpful: in my case, I left the Full Name field hidden by mistake, and Preview mode immediately showed me that. After fixing the visibility, I was good to go with saving.

The Forms home screen shows all the available forms. To activate a form, I need to create a webhook first, then assign it to the form. In addition, you define the action to take upon webhook submission – redirect to a URL, display a success message, or maybe do nothing – as well as configure the failure submission message.

This time the Activate button works well and my form is listed as Active. From now on, you cannot edit the fields anymore, and you cannot change the status back from Active. Therefore, always verify your form in Preview before publishing.

Weirdly, you cannot even delete a form in Active status. What you can do, however, is duplicate an active form into a draft one and continue editing its fields from there.

Forms listing

Testing Form

The most obvious desire at this stage is to test your form for real before using it. Luckily, the developers took care of that as well.

Testing webhook
I also created a webhook catcher with Pipedream RequestBin. On my first go, I faced a deadlock, being unable to submit the form for the test. The reason was that I had mistakenly checked both the Hidden and Required checkboxes on a field and could not progress from there – even the validation message did not show on a hidden field. Another mistake was that I overlooked this in Preview and published the form into the Active state. Hopefully, the developers will find a solution to this sooner rather than later.

I gave it another try to test how validation works:

Validation in action

Once validation passes, the Test Form Submission dialog shows you the JSON payload as it goes out, along with the HTTP headers supplied with this webhook request. Let’s hit the Submit button and see the confirmation – I chose to have a confirmation message show up.

The webhook catcher shows all my submitted data along with the HTTP headers – everything was sent and received successfully!

webhook catcher

Multipage Forms

Multipage forms are supported, and that is impressive. Pay attention to the Add Page link button between the page canvases. Once activated, it also enforces a Next button on non-final page canvases that triggers switching to the next form page:

What’s Next? Pages Editor!

Let’s use this newly created form from XM Cloud Pages. Please note a new section called Forms under the Components tab – that is where all of your active forms reside. You can simply drag and drop a form to the desired placeholder, as you normally do in the Pages editor.

Consume Forms from Pages Editor

Please note: you must have your site deployed to an editing host running Headless (JSS) SDK version 21.6 or newer to make it work – that is when XM Cloud Forms support was added. Otherwise, you face this error:

BYOC error before SDK 21.6

Experience Editor and Components Builder

Surprisingly, the created forms are also available from the Components Builder:
Forms in Components Builder
However, Experience Editor does not have a direct way of consuming XM Cloud Forms. I tried the following chain of steps to make it work:
  1. Create and Activate a new form from Forms editor
  2. Consume it from the Components builder into a new component using BYOC, then publish this component
  3. Open Pages app, find the component with an embedded form you’ve built at step (2) and drop it to a page, then publish
  4. Open that same page in Experience Editor

Live Demo in Action

As you know, a video is often worth a thousand words, so here it is below. I’ve recorded the whole walkthrough, from explaining to showcasing it all in action, up to the most extreme example: creating and publishing a form, then consuming it from the XM Cloud Components builder as part of a composite component, which in turn is used in the Pages editor to put down on a page, which also opens up successfully in the Experience Editor. Unbelievable – and it all functions well. Just take a look yourself:

Developer Experience

As developers, how would we integrate forms into our “head” applications? That works with a Forms BYOC component for your Next.js app, coming out of the box with the SDK. I spotted some traces of XM Cloud Forms as part of Headless JSS SDK 21.6.0 a while ago, when it was still in a “Canary” build state. Now it has been released, and among the features one can see an import of the SitecoreForm component into the sample Next.js app, as part of a pull request merged into this release.

The documentation is available here, but… everything is so absolutely intuitive that you hardly need it, do you?

Template Library

It is worth mentioning that XM Cloud Forms contains a Template Library – a handful of pre-configured forms you can use straight away or slightly modify to your needs. The expectation is that it will grow over time, covering any potential scenario one could ever have.
Template Library

Licensing

Since Forms is bundled into XM Cloud, it is included with every XM Cloud subscription.

What is missing?

  • The file upload feature is not supported – webhooks alone are not sufficient to handle it
  • There is no ability for customization and extension yet – hopefully it comes, as there’s already an empty section for custom fields

Hopefully, the product developers will implement these and more features in the upcoming releases. But even with what has been released today, I really enjoyed the XM Cloud Forms builder!

A crash course of Next.js: Caching, Authentication and Going Live tasks (part 4)

This series is my Next.js study résumé, and although it focuses on vanilla Next.js, all the features are applicable with the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming to OOB with Next.js – layouts, styles, and fonts powerful features, Image and Script components, and of course – TypeScript.
  • In part 3 we went through the nuances of Next.js routing and explained middleware

In this post we are going to talk about pre-go-live optimizations, such as caching and reducing the bundle size, as well as authentication.

Going-live considerations

  • use caching wherever possible (see below)
  • make sure that the server and database are located (deployed) in the same region
  • minimize the amount of JavaScript code
  • delay loading heavy JS until you actually use it
  • make sure logging is configured correctly
  • make sure error handling is correct
  • configure 500 (server error) and 404 (page not found) pages
  • make sure the application meets the best performance criteria
  • run Lighthouse to test performance, best practices, accessibility, and SEO. Use an incognito mode to ensure the results aren’t distorted
  • make sure that the features used in your application are supported by modern browsers
  • improve performance by using the following:
    • next/image and automatic image optimization
    • automatic font optimization
    • script optimization

Caching

Caching reduces response time and the number of requests to external services. Next.js automatically adds caching headers to static assets served from _next/static, including JS, CSS, images, and other media:

Cache-Control: public, max-age=31536000, immutable

To revalidate the cache of a page that was previously rendered into static markup, use the revalidate setting in the getStaticProps function.
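
A minimal sketch of such incremental static regeneration (fetchData is a placeholder for your own data source):

export async function getStaticProps() {
    const data = await fetchData(); // placeholder for a real data fetch

    return {
        props: { data },
        // re-generate the page in the background at most once every 60 seconds
        revalidate: 60
    }
}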

Please note: running the application in development mode using next dev disables caching:

Cache-Control: no-cache, no-store, max-age=0, must-revalidate

Caching headers can also be used in getServerSideProps and the routing interface for dynamic responses. An example of using stale-while-revalidate:

// The value is considered fresh for 10 seconds (s-maxage=10).
// If the request is repeated within those 10 seconds, the previously cached
// value is still considered fresh. If the request is repeated within the next
// 59 seconds, the cached value is considered stale, but is still used for
// rendering (stale-while-revalidate=59).
// The request is then executed in the background and the cache is filled with fresh data.
// After the update, the page will display the new value.
export async function getServerSideProps({ req, res }){
    res.setHeader(
        'Cache-Control',
        'public, s-maxage=10, stale-while-revalidate=59'
    )
    return {
        props: {}
    }
}

Reducing the JavaScript bundle size

To identify what’s included in each JS bundle, you can use the following tools:

  • Import Cost – a VSCode extension showing the size of an imported package
  • Package Phobia – a service for determining the “cost” of adding a new development dependency (dev dependency) to a project
  • Bundle Phobia – a service for determining how much adding a dependency will increase the size of the build
  • Webpack Bundle Analyzer – a Webpack plugin for visualizing the bundle in the form of an interactive, zoomable tree structure

Each file in the pages directory is emitted as a separate bundle during the next build command. You can use dynamic import to lazily load components and libraries.
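
Here is a sketch of such lazy loading with next/dynamic (the HeavyChart component and its path are made up for illustration):

import dynamic from 'next/dynamic';

// the component is split into its own chunk and loaded only when rendered
const HeavyChart = dynamic(() => import('../components/HeavyChart'), {
    loading: () => <p>Loading chart...</p>,
    ssr: false // skip server-side rendering for this chunk
});

export default function Dashboard() {
    return <HeavyChart />;
}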

Authentication

Authentication is the process of identifying who a user is, while authorization is the process of determining their permissions (or “authority”, in other words), i.e. what the user has access to. Next.js supports several authentication patterns.

Authentication Patterns

Each authentication pattern determines the strategy for obtaining data. Next, you need to select an authentication provider that supports the selected strategy. There are two main authentication patterns:

  • using static generation to render a loading state on the server, then retrieving the user data on the client side
  • receiving user data from the server to avoid “flashing” unauthenticated content (in the sense of application state switches being visible to a user)

Authentication when using static generation

Next.js automatically detects that a page is static if the page does not have blocking methods to retrieve data, such as getServerSideProps. In this case, the page renders the initial state received from the server and then requests the user’s data on the client side.

One of the advantages of using this pattern is the ability to deliver pages from a global CDN and preload them using next/link. This results in a reduced Time to Interactive (TTI).

Let’s look at an example of a user profile page. On this page, the template (skeleton) is first rendered, and after executing a request to obtain user data, this data is displayed:

// pages/profile.js
import useUser from '../lib/useUser'
import Layout from './components/Layout'

export default function Profile(){
    // get user data on the client side
    const { user } = useUser({ redirectTo: '/login' })
    // loading status received from the server
    if (!user || user.isLoggedIn === false) {
        return <Layout>Loading...</Layout>
    }
    // after the request is completed, the user data is displayed
    return (
        <Layout>
            <h1>Your profile</h1>
            <pre>{JSON.stringify(user, null, 2)}</pre>
        </Layout>
    )
}

Server-side rendering authentication

If a page has an asynchronous getServerSideProps function, Next.js will render that page on every request using the data from that function.

export async function getServerSideProps(context){
    return {
        props: {} // will get passed down to the component as props
    }
}

Let’s rewrite the above example. If there is a session, the Profile component will receive the user prop. Note the absence of a loading template (skeleton):

// pages/profile.js
import withSession from '../lib/session'
import Layout from '../components/Layout'

export const getServerSideProps = withSession(async (req, res) => {
    const user = req.session.get('user')
    if (!user) {
        return {
            redirect: {
                destination: '/login',
                permanent: false
            }
        }
    }
    return {
        props: {
            user
        }
    }
})
export default function Profile({ user }){
    // display user data, no loading state required
    return (
        <Layout>
            <h1>Your profile</h1>
            <pre>{JSON.stringify(user, null, 2)}</pre>
        </Layout>
    )
}

The advantage of this approach is preventing the display of unauthenticated content before the redirect is performed. Note that requesting user data in getServerSideProps blocks rendering until the request is resolved. Therefore, to avoid creating bottlenecks and increasing Time to First Byte (TTFB), you should ensure that the authentication service performs well.

Authentication Providers

For integrating with a users database, consider using one of the following solutions:

  • next-iron-session – a low-level encrypted stateless session
  • next-auth – a full-fledged authentication system with built-in providers (Google, Facebook, GitHub, and similar), JWT, JWE, email/password, magic links, etc.
  • with-passport-and-next-connect – the good old Node Passport also works for this case