
Experience Sitecore !

More than 200 articles about the best DXP by Martin Miles

Sitecore 10.4 is out and here’s all you need to know about it

It has been a decent gap: almost 1.5 years since Sitecore last released a feature-rich version of their XM/XP platform – 10.3 came out on December 1st, 2022. That is why I was very excited to look through the vendor's newest self-hosted platform release and familiarize myself with its changes.

First and foremost, the 10.4 platform can only be obtained from the new download page, which has moved to its new home at the Sitecore Developer Portal. I recommend bookmarking it for this and all future releases.

Release Notes

The official Release Notes come with an impressive list of 200 changes and improvements. I recommend going through it, paying particular attention to the Deprecated and Removed sections.

So, what’s New?

Of the important features and changes, I'd focus on a few:

  • XM to XM Cloud Migration Tool for migrating content, media, and users from a source XM instance to an XM Cloud environment. This tool helps with the routine and sometimes recurring back-end migrations, so customers/partners can focus on migrating and developing new front-end sites.
  • xDB to CDP Migration Tool for transferring site visitor contact facets to Sitecore's CDP and Personalize products, and also, via Sitecore Connect, to external systems. This provides the ability to interwork with, or eventually adopt, other SaaS-based innovations.
  • A new /sitecore/admin/duplicates.aspx admin page addressing the change in media duplication behavior (blobs are now in fact also duplicated) – run it upon migrating to 10.4 to update the media items accordingly.
  • A new Codeless Schema Extension module, enabling business users to extend the xConnect schema without requiring code development. Had it been available earlier, it could have significantly boosted xDB usage by marketers. It will be generally available in mid-May 2024.
  • Improved accessibility to help content authors with disabilities.
  • The Sitecore Client Content Reader role allows read-only access to the CM without the risk of breaking something – a frequently requested feature.
  • It is now possible to extract data from xDB and transform the schema for external analytics tools such as Power BI.
  • GraphQL is enabled by default on the CM container instance in local development – which totally makes sense to me.
  • Underlying dependencies updated to the latest versions – SQL Server 2022, the latest Azure Kubernetes Service, Solr 8.11, etc.

Containers

Spinning up Sitecore in local Docker containers used to be the easiest way to get started. However, the most important thing to consider for a containerized setup is that base images are currently only available for the ltsc2022 platform. If you are lucky enough to be on a Windows 11 machine, you get the best possible performance running Sitecore in Process isolation mode; otherwise, you may struggle with Hyper-V compatibility issues.

The other thing I noticed is that SitecoreDockerTools is simply set to pull the latest version which is 10.3.40 at the time of writing.

Also, the Traefik image remains on an older version (not Traefik 3.x, but 2.9.8; previously it was even older, v2.2.0) that does not support ltsc2022 and therefore still uses Hyper-V isolation. You can, however, fix that manually to have each and every image running fast in Process isolation mode. As always, it helps a lot to examine the list of available published images yourself, as some have been standardized.
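For illustration, here is a sketch of such a manual fix in docker-compose.override.yml, assuming you have built (or found) a Traefik image based on ltsc2022 – the image tag below is hypothetical:

# docker-compose.override.yml (sketch): point Traefik at an ltsc2022-based
# image and force fast Process isolation instead of Hyper-V
services:
  traefik:
    image: custom/traefik:v2.9.8-ltsc2022   # hypothetical, locally built tag
    isolation: process                      # requires the image OS to match the host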

Compared to previous versions, this one seems lightweight, with no helpful PowerShell scripts for bringing containers up and down (so we use docker-compose directly), nor clean-up scripts and the like. As before, it supports all three default topologies – XP0, XM1, and XP1.

Sitecore Gallery Tips:

  • Tip 1: the Sitecore Gallery has recently moved from MyGet (https://sitecore.myget.org/F/sc-powershell/api/v2) to Sitecore-hosted NuGet (https://nuget.sitecore.com/resources/v2).
  • Tip 2: don't forget to update the PackageManagement and PowerShellGet modules from PSGallery if needed, as below:
Install-Module -Name PackageManagement -Repository PSGallery -Force -AllowClobber
Install-Module -Name PowerShellGet -Repository PSGallery -Force -AllowClobber
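And if the old MyGet feed is still registered on your machine, you can re-point it to the new location. A minimal sketch, assuming the repository was registered under the name SitecoreGallery:

# Re-register the Sitecore PowerShell gallery at its new NuGet home
Unregister-PSRepository -Name SitecoreGallery -ErrorAction SilentlyContinue
Register-PSRepository -Name SitecoreGallery `
    -SourceLocation https://nuget.sitecore.com/resources/v2 `
    -InstallationPolicy Trusted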

Installing without containers

If for some reason you cannot or do not want to use containers, there are other options: SIA and manual installation from a zip archive. Over the past years, I have been developing a tool called Sifon, which is effectively better than SIA because it can also install all the prerequisites, such as Solr and SQL Server of the required versions, along with downloading the necessary resources from the developer portal. I will add support for 10.4 within the next week or so.

10.4 dashboard

Upon installation, you will see the Sitecore Dashboard:

Sitecore 10.4 Dashboard

Version 10.4 now runs revision 010422:

Version 10.4

SXA

This crucial module comes in the corresponding version 10.4, along with a newer 7.0 version of the Sitecore PowerShell Extensions module. The biggest news about this module is that it now supports Tailwind, in the same way as XM Cloud does:

Tailwind

Conclusion

In general, time will tell, but I expect this to be the most mature version of Sitecore, working faster and more reliably with the updated underlying dependencies. I am impatiently waiting for the hot items, such as AI integrations and the delayed feature set promised to appear later in May 2024, to explore and write about.

Cypress: a new generation of end-to-end testing

What is Cypress

Cypress is a modern JavaScript-based end-to-end (e2e) testing framework designed to automate web testing by running tests directly in the browser. Cypress has become a popular tool for testing web applications due to a number of distinctive advantages: a user-friendly interface, fast test execution, ease of debugging, ease of writing tests, and so on.

Those who have already had some experience with this testing framework probably know the advantages that make it possible to cover projects with high-quality, reliable autotests. Cypress has well-developed documentation, one of the best in the industry, with helpful recommendations for beginners; it is constantly being improved and backed by an extensive user community. However, despite the convenience, simplicity, and quick start, Cypress tests are still code. Working effectively with Cypress therefore requires not only an understanding of software testing as such but also the basics of programming, being more or less confident with JavaScript/TypeScript.

Why Cypress

Typically, to test your applications, you’ll need to take the following steps:

  • Launch the application
  • Wait until the server starts
  • Conduct manual testing of the application (clicking buttons, entering random text in input fields, or submitting a form)
  • Validate the result of your test being correct (such as changes in title, part of the text, etc.)
  • Repeat these steps after every code change.

Repeating these steps over and over again becomes tedious and takes up too much of your time and energy. What if we could automate this testing process? Then you could focus on more important things and not waste time testing the UI over and over again.

This is where Cypress comes into play. When using Cypress, the only things you need to do are:

  • Write the code for your test (clicking a button, entering text in input fields, etc.)
  • Start the server
  • Run or rerun the test

That's it! The Cypress library takes care of all the testing for you. It not only tells you whether all your tests passed, but also points to which test failed and why exactly.

How about Selenium

Wait, but we already have Selenium – is it still relevant?

Selenium remained the king of automated testing for more than a decade. I remember back in 2015 creating a powerful UI wrapper for Selenium WebDriver to automate and simplify its operation for non-technical users. That application is named Onero and is still available along with its source code. Cypress, however, offers a powerful UI straight out of the box, plus many more useful tools and integrations – just keep reading to find them below.

Cypress is a next-generation web testing platform. Built on top of Mocha, it is a JavaScript-based end-to-end testing framework. That is how it differs from Selenium, a testing framework used for web browser automation: Selenium WebDriver controls the browser locally or remotely and is used to test UI automation.

The principal difference is that Cypress runs directly in the browser, while Selenium is external to the browser and controls it via WebDriver. That alone lets Cypress handle async operations and waits far more gracefully; these were perennial issues for Selenium and required clumsy scaffolding around error handling.

With that in mind, let’s compare Cypress vs. Selenium line by line:


  • Types of testing – Cypress: front end with APIs, end-to-end. Selenium: end-to-end, doesn't support API testing.
  • Supported languages – Cypress: JavaScript/TypeScript. Selenium: multiple languages, such as Java, JavaScript, Perl, PHP, Python, Ruby, C#, etc.
  • Audience – Cypress: developers as well as testers. Selenium: automation engineers, testers.
  • Ease of use – Cypress: an easy walk for those familiar with JavaScript, a bit tricky otherwise; still developer-friendly by design, with the super helpful "travel back in time" feature. Selenium: supporting multiple languages lets people start writing tests quickly, but it's more time-consuming than Cypress as you have to learn a specific syntax.
  • Speed – Cypress: a different architecture without a web driver makes it faster, and Cypress is written in JavaScript, which is native to the browsers where it executes. Selenium: its architecture makes simple, quick tests hard to create, although the platform itself is fast and can run many tests at scale, in parallel, and cross-browser.
  • Ease of setup – Cypress: just run npm install cypress --save-dev; it requires no other components (unlike a web driver), you don't even need a browser as it can use Electron, and everything is well-bundled. Selenium: consists of two components (language bindings and a web driver), so installation is more complicated and time-consuming.
  • Integrations & plugins – Cypress: fewer integrations, compensated by a rich set of plugins; runs perfectly in Docker containers and supports GitHub Actions. Selenium: integrates with CI, CD, visual testing, cloud vendors, and reporting tools.
  • Supported browsers – Cypress: all Chromium-based browsers (Chrome, Edge, Brave) and Firefox. Selenium: all browsers – Chrome, Opera, Firefox, Edge, Internet Explorer, etc., along with the "scriptable headless" PhantomJS browser.
  • Documentation – Cypress: helpful code samples and excellent documentation in general. Selenium: average documentation.
  • Community & support – Cypress: a growing community, but smaller than the one Selenium gained over a decade. Selenium: a mature online community.

Selenium is aimed more at QA automation specialists, while Cypress is aimed mainly at developers to improve TDD efficiency. Selenium was introduced in 2004, so it has more ecosystem support than Cypress, which appeared in 2015 and continues to expand.

Installation and first run

You need to have Node.js installed, as Cypress ships as an npm module:

npm init -y
npm install cypress --save-dev

Along with Cypress itself, you will likely want to install the XPath plugin; otherwise, you are limited to CSS locators only.

npm install -D cypress-xpath

Once ready, you may run it:

npx cypress open

From there you'll see two screens: E2E Testing and Component Testing.

Main Screen

Most of the time you will likely be dealing with E2E testing. That’s where you choose your desired browser and execute your tests:

E2e

By default, you'll find live documentation in the form of a set of helpful pre-written tests showing the best of the Cypress API in action. Feel free to modify, copy, and paste them as you need.

Tests

Here's how Cypress executes tests from the UI, using a sample test run as an example:

Sample Run

But of course, in the most basic scenario, you can run it from the console. You can even pass a specific test spec file to execute:

npx cypress run --spec .\cypress\e2e\JumpStart\JumpStart.cy.js
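For reference, a spec file like the one referenced above could look as follows – a minimal sketch where the URL and selectors are purely illustrative:

// cypress/e2e/JumpStart/JumpStart.cy.js - an illustrative smoke test
describe('JumpStart smoke test', () => {
  it('loads the home page and shows the header', () => {
    cy.visit('https://example.com');         // illustrative URL
    cy.get('h1').should('be.visible');       // retried until visible or timeout
    cy.contains('More information').click(); // interact like a user would
  });
});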

Regardless of the execution mode, the results remain consistent:

From Console

Component Testing

This feature was added relatively recently and stayed in preview for a long time. Now that it is out of beta, let's take a look at what Component Testing is.

Instead of working with the entire application, component testing lets you mount a component in isolation. This saves time by building only the parts you're interested in and allows you to test much faster. You can also test different properties of the same component and see how they display. This can be very useful in situations where small changes affect a large part of the application.

Component

In addition to initializing the settings, Cypress will create several support files; one of the most important is component.ts, located in the cypress/support folder.

import { mount } from 'cypress/react18'

declare global {
  namespace Cypress {
    interface Chainable {
      mount: typeof mount
    }
  }
}

Cypress.Commands.add('mount', mount)

// Example of usage:
// cy.mount(MyComponent)

This file contains the component mount function for the framework being used. Cypress supports React, Angular, Svelte, Vue, and even frameworks like Next.js and Nuxt.
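To illustrate, here is a minimal sketch of a component test built on that mount command, assuming a hypothetical React Badge component with a label prop:

// Badge.cy.tsx - a sketch of testing a component in isolation
import React from 'react';
import Badge from '../../src/components/Badge'; // hypothetical component

describe('<Badge />', () => {
  it('renders the provided label', () => {
    cy.mount(<Badge label="New" />);        // mount without the full app
    cy.contains('New').should('be.visible');
  });
});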

Cypress features

  1. Time travel
  2. Debuggability
  3. Automatic waits (built-in waits)
  4. Consistent results
  5. Screenshots and videos
  6. Cross-browser testing – locally or remotely

I want to focus on some of these features.

Time Travel. This is an impressive feature that allows you to see the current state of your application at any time while it is being tested.

Debuggability. Your Cypress test code runs in the same run loop as your application. This means you have access to the code running on the page, as well as the things the browser makes available to you, like document, window, and debugger. You can also leverage the .debug() function to quickly inspect any part of your app while running a test. Just attach it to any Cypress chain of commands to look at the system's state at that moment:

it('allows debugging like a pro', ()=>{
    cy.visit('/location/page')
    cy.get('[data-id="selector"]').debug()
})

Automatic waits. As a key advantage over Selenium, Cypress is smart enough to know how fast an element is animating and will wait for it to stop animating before acting on it. It will also automatically wait until an element becomes visible, becomes enabled, or is no longer covered by another element.
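For illustration, the chain below keeps retrying each step until it succeeds or times out, with no manual sleeps (the selector is illustrative):

// Cypress retries this whole chain automatically
cy.get('[data-id="save-button"]') // re-queried until the element exists
  .should('be.visible')           // waits until visible and not covered
  .click();                       // only then performs the click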

Consistent results. Due to its architecture and runtime model, Cypress fully controls the entire automation process from top to bottom, which puts it in a unique position to understand everything happening in and outside of the browser. This means Cypress is capable of delivering more consistent results than any other external testing tool.

Screenshots and videos. Cypress can capture both screenshots and videos. You can take a screenshot of the complete page or of a particular element with the screenshot command; Cypress also has a built-in feature to capture screenshots of failed tests. To capture a screenshot of a particular scenario, we use the screenshot command:

describe('Test with a screenshot', function(){
    it("Test case 1", function(){
        //navigate URL
        cy.visit("https://microsoft.com/windows")

        //complete page screenshot with filename - CompletePage
        cy.screenshot('CompletePage')

        //screenshot of the particular element
        cy.get(':nth-child(3) > section').screenshot()
    });
});

Produced screenshots appear inside the cypress/screenshots folder of the project by default, but that is configurable globally.
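For example, here is a sketch of overriding those defaults in cypress.config.ts (the folder name is illustrative):

import { defineConfig } from 'cypress'

export default defineConfig({
  screenshotsFolder: 'artifacts/screenshots', // where screenshots are written
  screenshotOnRunFailure: true,               // capture failed tests (on by default)
})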

Cypress can also capture video of test runs. Enable it in cypress.config.ts:

import { defineConfig } from 'cypress'

export default defineConfig({
    video: true,
})

Please refer to the official documentation that explains how to use screenshots and videos with Cypress.

GitHub Actions Integration

Cypress integrates nicely with GitHub Actions, allowing you to run your tests in CI.

To do this on the GitHub Actions server, you first need to install everything necessary. You also need to determine when you want to run the tests (for example, on demand, or every time new code is pushed). This is how you gradually define what GitHub Actions will do. In GitHub Actions, these plans are called "workflows". Workflow files live under the .github/workflows folder. Each file is a YAML document with a set of rules configuring what gets executed and how:

name: e2e-tests

on: [push]

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Cypress run
        uses: cypress-io/github-action@v5
        with:
          start: npm start

Let's look at what's going on in this file. In the first line, we give the action a name. It can be anything, but it is better to be descriptive.

In the second line, we define the event on which this script should be executed. There are many different events, such as push, pull_request, schedule, or workflow_dispatch (which allows you to trigger an action manually).

The jobs section specifies the task or tasks to be performed. Here we determine what needs to be done. If we were starting from scratch, this is where we would run npm install to install all the dependencies, start the application, and run tests against it. But, as you can see, we are not starting from scratch: instead of doing all that ourselves, we can reuse predefined actions. For example, cypress-io/github-action@v5 will run npm install, correctly cache Cypress (so installation will be faster next time), start the application with npm start, and run npx cypress run. And all this with just four lines in a YAML file.

Run Cypress in containers

In modern automated testing, setting up and maintaining a test environment can often be a time-consuming task, especially when working with multiple dependencies and their configurations, different operating systems, libraries, tools, and versions. Often one may encounter dependency conflicts, inconsistency of environments, limitations in scalability and error reproduction, etc., which ultimately leads to unpredictability and unreliability of testing results.

Using Docker greatly helps prevent most of these problems, and the good news is that Cypress runs in Docker just fine. In particular, using Cypress in Docker is useful because:

  1. It ensures that Cypress autotests run in an isolated test environment. In this case, the tests are essentially independent of what is outside the container, which ensures the reliability and uninterrupted operation of the tests every time they are launched.
  2. For local runs, the absence of Node.js, Cypress, or some exotic browser on the host computer is no longer an obstacle. This not only allows you to run Cypress locally on different host computers but also to deploy it in CI/CD pipelines and to cloud services, ensuring uniformity and consistency in the test environment. When moving a Docker image from one server to another, containers with the application and the tests will work the same regardless of the operating system used or the presence of Node.js, Cypress, browsers, etc. This ensures Cypress autotests are reproducible and their results predictable across different underlying systems.
  3. Docker allows you to quickly deploy the necessary environment for running Cypress autotests, and therefore you do not need to install operating system dependencies, the necessary browsers, and test frameworks each time.
  4. It speeds up the testing process by reducing the total time for test runs. This is achieved through scaling, i.e. increasing the number of containers, running Cypress autotests in different containers in parallel, parallel cross-browser testing with Docker Compose, etc.

The official images of Cypress

Today, the public Docker Hub image repository, as well as the corresponding cypress-docker-images repository on GitHub, hosts four official Cypress Docker images:

  • cypress/base – operating system dependencies plus Node.js, without browsers
  • cypress/browsers – the base dependencies with browsers pre-installed
  • cypress/included – everything above plus the Cypress binary itself, so the container can run tests out of the box
  • cypress/factory – a factory image for generating custom variants with the exact versions you need
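As a quick illustration of the all-inclusive image, the following one-liner mounts the current project and runs its tests inside the container – the tag is illustrative, so pin the version you actually need:

# run the project's Cypress tests inside the official all-in-one image
docker run -it --rm -v $PWD:/e2e -w /e2e cypress/included:13.6.0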

Limitations of Cypress

Nothing on Earth is ideal, so Cypress also has some limitations, mostly caused by its unique architecture:

  1. One cannot use Cypress to drive two browsers at the same time
  2. It doesn’t provide support for multi-tabs
  3. Cypress only supports JavaScript for creating test cases
  4. Cypress doesn’t provide support for browsers like Safari and IE at the moment
  5. Reading or writing data into files is difficult
  6. Limited support for iFrames

Conclusion

Testing is a key step in the development process as it ensures that your application works correctly. Some programmers prefer to manually test their programs because writing tests requires a significant amount of time and energy. Fortunately, Cypress has solved this problem by allowing the developer to write tests in a short amount of time.

Storybook

You may have never heard of Storybook, or maybe you've caught just a glimpse that left you feeling Storybook is an unnecessary tool – in that case, this article is for you. I used to share that opinion, but it changed once I saw Storybook in action while building the XM Cloud starter kit with Next.js.

Storybook

Why

With the advent of responsive design, the uniqueness of user interfaces has increased significantly – with the majority of them having bespoke nuances. New requirements have emerged for devices, browser interfaces, accessibility, and performance. We started using JavaScript frameworks, adding different types of rendering to our applications (CSR, SSR, SSG, and ISR) and breaking the monolith into micro-frontends. Ultimately, all this complicated the front end and created the need for new approaches to application development and testing.

The results of a 2020 study showed that 77% of developers consider current development to be more complex than 10 years ago. Despite advances in JavaScript tools, professionals continue to face more complex challenges. The component-based approach used in React, Vue, and Angular helps break complex user interfaces into simple components, but it's not always enough. As the application grows, the number of components increases; in serious projects, there can be hundreds of them, which gives thousands of permutations. To complicate matters even further, interfaces are difficult to debug because they are entangled with business logic, interactive states, and application context.

This is where Storybook comes to the rescue.

What Storybook is

Storybook is a tool for the rapid development of UI components. It allows you to browse a component library and track the state of each component. With Storybook, one can develop components separately from the application, making it easier to reuse and test UI components.

Storybook promotes the Component-Driven Development (CDD) approach, where every part of the user interface is a component. These are the basic building blocks of an application. Each of them is developed, tested, and documented separately from the others, which simplifies the process of developing and maintaining the application as a whole.

A component is an independent fragment of the application interface. In Sitecore, in most cases, a component is equal to a rendering, for example, CTA, input, badge, and so on. If we understand the principles of CDD and know how to apply this approach in development, we can use components as the basis for creating applications. Ideally, they should be designed independently from each other and be reusable in other parts of the application. You can approach creating components in different ways: start with smaller ones and gradually combine them into larger ones, and vice versa. You can create them both within the application itself and in a separate project – in the form of a library of components.

With Storybook’s powerful functionality, you can view your interfaces the same way users do. It provides the ability to run automated tests, analyze various interface states, work with mock data, create documentation, and even conduct code reviews. All these tasks are performed within the framework of the so-called Story, which allows you to effectively use Storybook for development.

What is a Story

A story is the basic unit of Storybook; it demonstrates different states of a component to test its appearance and behavior. Each component can have multiple stories, and each one can be treated as a separate test case for the component's functionality.

You write stories for specific states of UI components and then use them to demonstrate the appearance during development, testing, and documentation.

Using the Storybook control panel, you can edit each of the story function arguments in real time. This allows your team to dynamically change components in Storybook to test and validate different edge cases.
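Here is a minimal sketch of a story in Component Story Format, assuming a hypothetical React Button component with label and primary props:

// Button.stories.tsx - each named export is one story, i.e. one component state
import type { Meta, StoryObj } from '@storybook/react';
import { Button } from './Button'; // hypothetical component

const meta: Meta<typeof Button> = {
  title: 'Components/Button',
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

export const Primary: Story = {
  args: { label: 'Sign up', primary: true },
};

export const Secondary: Story = {
  args: { label: 'Cancel', primary: false },
};

The args declared here are exactly what the control panel lets you tweak at runtime.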

Storybook explained

Storybook Capabilities

Creating documentation

Storybook provides the ability to create documentation along with components, making the process more convenient. With its help, you can generate automatic documentation based on code comments, as well as create separate pages with examples of use and descriptions of component properties. This allows you to maintain up-to-date and detailed documentation that will be useful not only for developers but also for designers, testers, and users.

User Interface Testing

Another good use of Storybook – UI Tests identify visual changes to interfaces. For example, if you use Chromatic, the service takes a snapshot of each story in a cloud browser environment. Each time you push the code, Chromatic creates a new set of snapshots to compare existing snapshots with those from previous builds. The list of visual changes is displayed on the build page in the web application so that you can check if these changes are intentional. If they are not, that may be a bug or glitch to be corrected.

Accessibility Compliance

As The State of Frontend 2022 study found, respondents pay close attention to accessibility, with 63% predicting the trend will gain more popularity in the coming years. Accessibility in Storybook can be tested using the storybook-addon-a11y addon. Once it is installed, an "Accessibility" tab appears where you can see the results of the current audit.

Mocking the data

When developing components for Storybook, one should use realistic data to demonstrate the capabilities of the components and simulate real-life use. For this purpose, mock data is often used: fictitious data with a structure and data types similar to the real ones but carrying no real information. In Storybook, you can use various libraries to create mock data, and you can also create your own mocks for each story. If a component itself performs network calls to pull data, you can use the msw library.
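A minimal sketch of such a mock with msw (v1-style API), assuming a hypothetical /api/products endpoint that the component fetches from:

// handlers.ts - msw intercepts the fetch call and returns canned data
import { rest } from 'msw';

export const handlers = [
  rest.get('/api/products', (req, res, ctx) =>
    res(ctx.status(200), ctx.json([{ id: 1, name: 'Mock product' }]))
  ),
];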

Simulating context and API

Storybook addons can help you simulate different component usage scenarios, such as API requests or different context values. This allows you to quickly test components in realistic scenarios. If your component uses a provider to pass data, you can use a decorator that wraps the story and provides a mocked version of the provider. This is especially useful if you are using Redux or context.

Real-life advantages

Wasting resources on the user journey

Building a landing page may seem like a simple exercise, especially in development mode when changes appear in the browser immediately. However, most cases are not that straightforward. Imagine a site with a backend entirely responsible for routing, where one may need to log in first, answer security questions, and then navigate through a complex menu structure. Say you only need to "change the color of a button" on the final screen of the application; the developer then needs to launch the application in its initial state, log in, get to the desired screen, fill out all the forms along the way, and only after that check whether the new style has been applied to the button.

If the changes have not been applied, the entire sequence of actions must be repeated. Storybook solves this problem. With it, a developer can open any application screen and instantly see how it looks, taking into account the applied styles and the desired state. This allows you to significantly speed up the process of developing and testing components since they can be tested and verified independently of the backend and other parts of the application.

Development without having actual data

Often, UI development takes place before the API is ready from the backend developers. Storybook allows you to create components that stub the data that will later be retrieved from the real API. This lets us prototype and test the user interface regardless of the presence or readiness of the backend, using mock data to demonstrate components.

Frequently changing UI

On a project, we often encounter changes in layout design, and it is very important for us to quickly adapt our components to these changes. Storybook allows you to quickly create and compare different versions of components, helping you save time and make your development process more efficient.

Infrastructural issues

The team may encounter problems when a partner's test environment or dependencies stop working, which leads to delays and lost productivity. With Storybook, however, it is possible to continue developing components in isolation without waiting for the service to recover. Storybook also helps you quickly switch between application versions and test components in different contexts. This significantly reduces downtime and increases productivity.

Knowledge transfer

In large projects, onboarding may take a lot of time and resources. Storybook allows new developers to quickly become familiar with components and how they work, understand the structure of the project, and start working on specific components without having to learn everything from scratch. This makes the development process easier and more intuitive, even for those not familiar with a particular framework.

Application build takes a long time

Webpack is a powerful tool for building JavaScript applications. However, when developing large applications, building a project can take a long time. Storybook automatically compiles and assembles components whenever changes occur. This way, developers quickly receive updated versions of components without needing to rebuild the entire project. In addition, Storybook supports additional plugins and extensions for Webpack to improve performance and optimize project build time.

Installation

First, install Storybook using the following commands:

cd nextjs-app-folder
npx storybook@latest init

 

Once installed, execute it:

npm run storybook

This will run Storybook locally, by default on port 6006; if the port is occupied, it will pick an alternative one.

Storybook

Storybook is released under the MIT license; you can access its source code in the GitHub repository.

Making it with Sitecore

When developing headless projects with Sitecore, everything works in much the same manner. As part of our monorepository, we set up Storybook with Next.js so that front-end developers don't have to run an instance of Sitecore to do their part of the development work.

Upon installation, you'll find a .storybook folder at the root of your Next.js application (also used as a rendering host), which contains configuration and customization files for your Storybook setup. This folder is crucial for tailoring Storybook to your specific needs, such as setting up addons, Webpack configurations, and the overall behavior of Storybook in your project.

  1. main.js (or main.ts): the core configuration file for Storybook. It includes settings for loading stories, adding addons, and custom Webpack configurations. You can specify the locations of your story files, list the addons you're using, and customize the Webpack and Babel configs as needed.
  2. preview.js (or preview.tsx): used to customize the rendering of your stories. You can globally add decorators and parameters here, affecting all stories. This file is often used for setting up global contexts like themes and internationalization, and for configuring the layout or backgrounds for your stories.

02

One of the best integrations (found via Jeff L'Heureux) allows you to use your own Sitecore context mock and any placeholder with any of your components (see the lines below decorators).

import React from 'react';
import { LayoutServicePageState, SitecoreContext } from '@sitecore-jss/sitecore-jss-nextjs';
import { componentBuilder } from 'temp/componentBuilder';
import type { Preview } from '@storybook/react';
import 'src/assets/main.scss';

export const mockLayoutData = {
  sitecore: {
    context: {
      pageEditing: false,
      pageState: LayoutServicePageState.Normal,
    },
    setContext: () => {
      // nothing
    },
    route: null,
  },
};

const preview: Preview = {
  parameters: {
    actions: { argTypesRegex: '^on[A-Z].*' },
    controls: {
      matchers: {
        color: /(background|color)$/i,
        date: /Date$/,
      },
    },
  },
  decorators: [
    (Story) => (
      <SitecoreContext
        componentFactory={componentBuilder.getComponentFactory({
          isEditing: mockLayoutData.sitecore.context.pageEditing,
        })}
        layoutData={mockLayoutData}
      >
        <Story />
      </SitecoreContext>
    ),
  ],
};

export default preview;

It is important to understand that getServerSideProps/getStaticProps are not executed when using Storybook. You are responsible for providing all the required data as well as the context, so you need to wrap your story or component accordingly.

Component-level fetching works nicely with Sitecore headless components using MSW – you can simply mock the fetch API to return the required data from inside the story file.

Useful tips for running Storybook for Headless Sitecore

  • use next-router-mock to mock the Next.js router in Storybook (or upgrade to version 7 with @storybook/nextjs)
  • exclude stories from the componentFactory / componentBuilder file
  • make sure to run npm run bootstrap before starting Storybook, or add it to package.json as something like "prestorybook": "npm-run-all --serial bootstrap" – when the storybook script is invoked, prestorybook will automatically run just before it, using a default npm feature

Conclusion

Integrating Storybook into a Sitecore headless project requires investing some time to dig into it, but it offers numerous benefits, including improved component visualization and isolation for development and testing.

Using conditions with XM Cloud Form Builder

If you follow the release news from Sitecore, you've already noticed the release of the highly awaited XM Cloud Forms, which, however, lacked a way to add conditional logic. Until now.

The good news is that now we can do it, and here’s how.

See it in action

Imagine you have a registration form and want to ask whether your clients would like to receive emails. To do so, you add two additional fields at the bottom:

  • a checkbox for users to define if they want to receive these emails
  • a dedicated email input field for leaving the email address

Obviously, you want to validate the email address input as usual and make it required. At the same time, you want this field to come into play only when the checkbox is checked; otherwise it should be ignored and ideally hidden.

Once we set both the Required and Hidden properties, a hint appears saying that we cannot have them both together, as that simply creates a deadlock – something I mentioned in my earlier review of the XM Cloud Forms builder.

01

So, how do we achieve that?

From now on, there is an additional Logic tab that you can leverage to add some additional logic to your forms.

02

Let's see what we can do with it:

  • you can add new pieces of logic
  • apply multiple conditions within a piece of logic and define whether all must match or just any of them (logical "AND" and "OR")
  • create groups of conditions to combine multiple OR clauses against a single AND condition
  • for each condition, select the field it applies to from a dropdown
  • define the requirement it must meet from another dropdown, which is context-specific:
    • strict match
    • begins or ends with
    • contains
    • checked / unchecked
    • etc.
  • beyond having multiple pieces of conditional logic, you can also apply logic to multiple fields within a single condition, with the same "and" / "or"

Once all conditions are met, you execute the desired action against a specified field:

04

Coming back to a form with an optional email subscription defined by a checkbox, I created the following rule:

05

The conditional logic rules engine is intuitive and human-readable. I do not even need to explain the above screenshot, as it is naturally self-explanatory. So, how does it perform? Let's run Preview and see it in action.

When running the form as normal, we only see the checkbox, unchecked by default, and can submit the form straight away:

06

But ticking the "Email me updates and promotions" checkbox enables the Email field, which is required: with the checkbox checked, the form won't submit with an invalid email address.

07

My expectation for the conditional logic was to apply conditions at the page level for multipage forms. Say the first page has some sort of branching condition, so that if the user meets it, they get pages 2 and 3, but if not, only page 4, with both branches ending at page 5. Unfortunately, I did not find a way of doing that. Hopefully, the development teams will add this in the future, given how progressively and persistently they introduce new features. In any case, what we've been given today is already a very powerful tool that allows us to create really complicated streams of user input data.

GraphQL: not an ideal one!

You'll find plenty of articles about how amazing GraphQL is (including mine), but after some time using it, I've developed some reservations about the technology and want to share some bitter thoughts.

GraphQL

History of GraphQL

How did it all start? The best way to answer this question is to go back to the original problem Facebook faced.

Back in 2012, we began an effort to rebuild Facebook’s native mobile applications. At the time, our iOS and Android apps were thin wrappers around views of our mobile website. While this brought us close to a platonic ideal of the “write once, run anywhere” mobile application, in practice, it pushed our mobile web view apps beyond their limits. As Facebook’s mobile apps became more complex, they suffered poor performance and frequently crashed. As we transitioned to natively implemented models and views, we found ourselves for the first time needing an API data version of News Feed — which up until that point had only been delivered as HTML.

We evaluated our options for delivering News Feed data to our mobile apps, including RESTful server resources and FQL tables (Facebook’s SQL-like API). We were frustrated with the differences between the data we wanted to use in our apps and the server queries they required. We don’t think of data in terms of resource URLs, secondary keys, or join tables; we think about it in terms of a graph of objects.

Facebook came across a specific problem and created its own solution: GraphQL. To represent data in the form of a graph, the company designed a hierarchical query language. In other words, GraphQL naturally follows the relationships between objects. You can request nested objects and get them all back in a single HTTP request. Back in the day, it was crucial that global users did not always have cheap or unlimited mobile data plans, so the GraphQL protocol was optimized to transmit only what clients actually needed.

Therefore, GraphQL solves Facebook’s problems. Does it solve yours?

First, let’s recap the advantages

  • Single request, multiple resources: compared to REST, which requires a network request per endpoint, GraphQL lets you request all resources with a single call.
  • Receive exactly the data you need: GraphQL minimizes the amount of data transferred over the wire by selecting it based on the needs of the client application. Thus, a mobile client with a small screen may receive less information.
  • Strong typing: every request, input, and response object has a type. In web browsers, the lack of types in JavaScript has become a weakness that various tools (Google's Dart, Microsoft's TypeScript) try to compensate for. GraphQL allows you to share types between the backend and frontend.
  • Better tooling and developer friendliness: the introspective server can be queried about the types it supports, enabling API explorers, autocompletion, and editor warnings. No more relying on backend developers to document their APIs – simply explore the endpoints and get the data you need.
  • Version independence: the shape of the returned data is determined solely by the client request, so servers become simpler. When new server-side features are added to the product, new fields can be added without affecting existing clients.

Thanks to the "single request, multiple resources" principle, front-end code has become much simpler with GraphQL. Imagine a situation where a user wants details about a specific writer (name, id, books, etc.). In a traditional REST pattern, this would require multiple cross-requests between the two endpoints /writers and /books, which the frontend would then have to merge. Thanks to GraphQL, however, we can request all the necessary data in a single query, as shown below:

{
    writers(id: "1") {
        id
        name
        avatarUrl
        books(limit: 2) {
            name
            urlSlug
        }
    }
}

The main advantage of this pattern is simplified client code. However, some developers expected to use it to optimize network calls and speed up application startup. You don't make the code faster; you simply transfer the complexity to the backend, which has more computing power. Also, for many scenarios, metrics show the REST API to be faster than GraphQL.

This is mostly relevant for mobile apps. If you’re working with a desktop app or a machine-to-machine API, there’s no added value in terms of performance.

Another point is that you may indeed save some kilobytes with GraphQL, but if you really want to optimize loading times, it's better to focus on serving lower-quality images to mobile; as we'll see, GraphQL doesn't work very well with documents anyway.

But let’s see what actually is wrong or could be better with GraphQL.

Strong Typing

GraphQL defines all API types, commands, and queries in the graphql.schema file. However, I've found that typing with GraphQL can be confusing. First of all, there is a lot of duplication: GraphQL defines a type in the schema, yet we have to redeclare the types for our backend (TypeScript with Node.js). You have to spend additional effort making it all work with Zod, or set up some cumbersome code generation for types.
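Here is a small sketch of that duplication, with an illustrative Writer type declared once in the schema and once again for the backend code:

// The shape defined in the GraphQL schema (as an SDL string here)...
const typeDefs = /* GraphQL */ `
  type Writer {
    id: ID!
    name: String!
    avatarUrl: String
  }
`;

// ...and the same shape declared again in TypeScript, kept in sync by hand
// unless you introduce a code-generation step
interface Writer {
  id: string;
  name: string;
  avatarUrl?: string;
}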

Debugging

It’s hard to find what you’re looking for in the Chrome inspector because all the endpoints look the same. In REST you can tell what data you’re getting just by looking at the URL:

Devtools 

compared to:

  Devtools

Do you see the difference?

No support for status codes

REST lets you use HTTP status codes like "404 Not Found" or "500 Server Error", but GraphQL does not: it returns 200 even for failures, with the errors embedded in the response payload. To understand which query failed, you need to inspect each payload. The same applies to monitoring: HTTP error monitoring is very easy because each failure carries its own status code, while troubleshooting GraphQL requires parsing JSON objects.

Additionally, some objects may be empty either because they cannot be found or because an error occurred. It can be difficult to tell the difference at a glance.
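For illustration, a failed GraphQL call still arrives as HTTP 200, with the failure buried in the payload, roughly like this:

{
  "data": { "writer": null },
  "errors": [
    { "message": "Writer not found", "path": ["writer"] }
  ]
}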

Versioning

Everything has its price. When modifying a GraphQL API, you can mark some fields as deprecated, but you are forced to maintain backward compatibility: the fields must remain for older clients that still use them. You avoid explicit API versioning, at the price of maintaining every field indefinitely.

To be fair, REST versioning is also a pain point, but it does provide an interesting path for expiring functionality. In REST, everything is an endpoint, so you can easily block legacy endpoints for new users and measure who is still using the old ones. Redirects can also simplify migrating from older versions to newer ones in some cases.

Pagination

GraphQL Best Practices suggests the following:

The GraphQL specification is deliberately silent on several important API-related issues, such as networking, authorization, and pagination.

How “convenient” (not!). In general, as it turns out, pagination in GraphQL is very painful.

Caching

The point of caching is to serve responses faster by storing the results of previous computations. In REST, URLs are unique identifiers of the resources users are trying to access, so you can cache at the resource level. Caching is part of the HTTP specification. Additionally, the browser or mobile device can use the URL to cache resources locally (the same way it does with images and CSS).

In GraphQL this gets tricky because each query can be different even though it operates on the same entity. This calls for field-level caching, which is not easy to do with GraphQL because it uses a single endpoint. Libraries like Prisma and Dataloader have been developed to help with such scenarios, but they still fall short of REST's capabilities.
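As a sketch of the batching approach, DataLoader collapses duplicate entity lookups within a single request – db here stands for a hypothetical data-access layer:

import DataLoader from 'dataloader';

// One loader per request: identical writer ids are fetched only once
const writerLoader = new DataLoader(async (ids: readonly string[]) => {
  const rows = await db.writers.findByIds([...ids]); // hypothetical helper
  // DataLoader requires results in the same order as the requested keys
  return ids.map((id) => rows.find((w) => w.id === id) ?? null);
});

// Usage inside a resolver: await writerLoader.load(parent.writerId);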

Media types

GraphQL does not support uploading documents to the server, which is normally done with multipart/form-data. Apollo developers have been working on a file-upload solution, but it is difficult to set up. Additionally, GraphQL does not return a media-type header when serving a document, which would allow the browser to display the file correctly.

I previously made a post about the steps one must take in order to upload an image to Sitecore Media Library (either XM Cloud or XM 10.3 or newer) by using Authoring GraphQL API.

Security

When working with GraphQL, you can query exactly what you need, but you should be aware that this comes with complex security implications. If an attacker crafts a costly, deeply nested request to overload the server, you may experience it as a DDoS attack.

They may also be able to access fields that are not intended for public access. When using REST, you can control permissions at the URL level; with GraphQL, this has to happen at the field level:

user {
    username <-- anyone can see that
    email <-- private field
    post {
        title <-- some of the posts are private
    }
}

Conclusion

REST became the new SOAP; now GraphQL is the new REST. History repeats itself. It's hard to say whether GraphQL is just a popular trend that will gradually be forgotten, or whether it will truly change the rules of the game. One thing is certain: it still needs some development to reach full maturity.

Going Beyond the Sitecore Mentor Program: Pushing Boundaries with Exclusive Mentorship

By now, many of you may have heard about the official Sitecore Mentor Program, run by Nicole Montero from the Sitecore Technical Marketing team. The Mentor Program celebrates its second year with significant growth in positive outcomes and success stories from both mentors and mentees. I am going to share mine with you, and this one is special.

As I sit down to reflect on my journey as a mentor over the past year, I am filled with a sense of accomplishment – the same exercise I did a year ago, after my initial year of experience as a mentor. You see, back in 2022, I dabbled in mentoring for the first time. It was a toe in the water, a test drive. I was figuring out what it meant to be a mentor, what I wanted from this experience, and how I could truly make a difference. I've always been a bit of an "all-or-nothing" guy, never happy with intermediate results or skimming the surface. So, when I decided to take on mentoring again in 2023, I wanted it to be totally different: much deeper in terms of communication, more personal in building the relationship, and, most importantly, ultimately expanding the mentee's potential.

Finding the right mentee

The search for the right mentee wasn't about ticking boxes or fulfilling criteria. Being cognizant of time, I wanted to make sure my efforts delivered maximum value:

  • I did not want someone who was not yet ready to be part of the Sitecore community, in both knowledge and mentality or state of mind. I'd be happy to help them at the later stages of their career, of course, but it had to be the right time.
  • At the same time, I also felt uncomfortable picking a person of my caliber, or even someone probably more skillful than I am, who could definitely find their own way to success and did not need my help.

Both of the above cases happened to me in 2022, and here's what I learned: it was about finding someone with that unignorable spark – a burning desire to grow and achieve, but perhaps unsure of the path. Someone who can bring big value to the Sitecore community (remember what the letter 'V' in the abbreviation MVP stands for?) and who really needs my hand to break the "glass ceiling". Who would understand that better than me, having strived for recognition with the MVP status for many years before it finally became a reality?

Meet Tiffany Laster

That's where I stumbled upon Tiffany Laster. It wasn't some grand plan; it just happened during a routine call where she was discussing a Content Hub implementation. Her passion for the subject and the level of detail and nuance in her presentation caught me straight away – something you don't come across every day. It struck me immediately! We had never met before, but I knew right then she had something special. Some time after that call, I checked some references, and it became absolutely clear that Tiffany was the right fit for the Mentor Program.

Later, I remember telling her, "As far as I'm concerned, you've already got all the makings of an MVP, but let's prove it to the rest of the world. It's not going to be a walk in the park. Are you in?" That was our starting line.


Tiffany Laster’s photo from her LinkedIn profile

Exclusive Mentorship Program

As I said above, I learned a lot from my first year of mentorship in 2022, so 2023 was about rewriting my own mentorship playbook. One of the things I wanted to change was picking just a single individual with the right potential and committing myself exclusively to mentoring that person. That's where the term "exclusive" in "Exclusive Mentorship Program" comes from.

So, I launched what I called the Exclusive Mentorship Program, a personal initiative that went far beyond any formal structure. It meant being available for Tiffany whenever she needed, breaking the mold of regular, scheduled meetings. I’ve always believed real progress doesn’t follow a timetable; a genuine workload is always spontaneous and unpredictable.

Another thing I learned from year one of mentorship was my total dislike of conducting regular meetings – the one thing most other mentors typically do. A typical contribution load is neither linear nor granular. You don't need to wait for a meeting just to talk some things through; nor, the other way around, to attend one without having achieved anything in the meantime. In the latter case, people tend to simulate progress rather than create actual progress, which may need more time to do properly. What to choose instead: meetings on demand. Any time I am needed, I'll be there!

At the same time, I wanted to formalize this professional engagement with a written agreement covering plenty of terms, such as our declaration of contribution, acceptance of scope, etc. Since the Exclusive Mentorship Program is a superset of the Sitecore Mentor Program, it went in full compliance with the latter's terms. It also happened that we both work for the same employer, so the agreement also respects all the terms of our employer's code of conduct. As for other formalities, Telegram was chosen as the main communication channel, having proven exceptionally effective for this format of communication; we even committed, in writing, for both parties to stay fully engaged with the program in the unlikely case of either of us losing a job. So, we figured out the formalities – agreed, signed, and focused on real, meaningful interactions.

Our partnership was intense. I shared everything I could: resources, opportunities, insights. But it was never just about passing down knowledge. It was about sharing a journey, learning from each other, and growing together. Seeing Tiffany evolve and make her mark in the community was the real reward.

I endeavored to offer the best assistance I could:

  • as Tiffany is an expert in Content Hub and CDP/Personalize, I offered her the role of editor for the corresponding Sitecore Telegram channels.
  • I used all my connections in the Sitecore community to find speaking opportunities: user groups, meetups, etc.
  • it also worked well to feature Tiffany's expertise in the Content Hub round of the Sitecore Tips & Tricks series – who knows it better?
  • I encouraged her to apply to SUGCON conferences and helped tailor and position the proposals, which resulted in TWO(!) accepted sessions – impressive!
  • as a Technology MVP working with Tiffany, a genuine strategist, I provided some of the missing technology enablement – to get the best of both worlds.
  • over the whole year, I explained the Sitecore community, the MVP Program's goals and expectations, and the best routes and options.
  • without any doubt, there were many more tactical steps we took over the year – dozens of them – which have simply slipped my mind with time.

Sitecore MVP Award

And then came the day of the MVP announcements. I was a bundle of nerves, probably more anxious than Tiffany herself. When she received that email, it wasn’t just a win for her; it felt like a shared triumph, a testament to our shared journey. A newly recognized Sitecore MVP has emerged, and I’m pleased that my assistance was well received.

This experience has been a revelation. It taught me so much about myself, about mentoring, and about the power of genuine connections. I owe a lot to the Sitecore Mentor Program and Nicole Montero for their guidance. But most of all, I owe it to Tiffany for reminding me what mentorship is really about.

Four more mentees!

Initially, I aimed to mentor a single individual exclusively; however, life sometimes has bigger plans for us. Over the year, I encountered four other experts with keen intellects, a strong desire for personal and professional development, and unmistakable enthusiasm for their field. Recognizing my potential to assist them, and drawn to their magnetic personalities, I decided to extend an invitation to join the Sitecore Mentor Program to them too.

We were all geographically distributed across Canada, India, and the USA. I often found myself scheduling meetings with overseas mentees at midnight or even 1 AM my local time. That's fine, as long as it helps. Regrettably, they joined the Mentor Program mid-year, limiting our opportunity for more contributions, yet we achieved significant success: presenting at SUGCON and producing numerous blogs, webinars, and other valuable contributions. I am confident that each of these four possesses everything necessary to make the coming year a success.

Looking forward

As I look ahead, I feel excited and ready. Ready to meet more passionate individuals, ready to embark on new journeys. If you’re out there, eager to learn and grow, and ready to put your all into it, know that I’m here, ready to dive in with you. Let’s make something amazing happen, together.

Do Something Great

A crash course of Next.js: TypeScript (part 5)

This series is my Next.js study summary, and although it leans toward vanilla Next.js, all the features are applicable with the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – powerful features for layouts, styles, and fonts, the Image and Script components, and of course – TypeScript.
  • We went through the nuances of Next.js routing and explained middleware in part 3
  • Part 4 was all about caching, authentication, and going live tasks

In this post, we are going to talk about TypeScript. It was already mentioned in the previous parts, but this time we’ll put it under the spotlight.

Intro

TypeScript has become the industry standard for strong typing in the current JavaScript world. There is currently no other solution that implements typing into a project as effectively. One can, of course, use contracts, conventions, or JSDoc to describe types, but all of these are far worse for code readability than typical TypeScript annotations. Annotations save your eyes from darting up and down: you simply read the signature and immediately understand everything.

Another point is that JavaScript support in editors and IDEs is usually built on TypeScript. JavaScript support in VS Code is implemented using the TypeScript Language Service, and WebStorm’s JavaScript support largely relies on the TypeScript framework and uses its standard library. One more reason to learn TypeScript: when the editor complains about a JavaScript type mismatch, you’ll end up reading declarations written in TypeScript anyway.

However, TypeScript is not a replacement for other code quality tools. TypeScript is just one of the tools that allows you to maintain some conventions in a project and make sure that there are strong types. You are still expected to write tests, do code reviews, and be able to design the architecture correctly.

Learning TypeScript, even in 2024, can be difficult for many various reasons. Folks like myself who grew up with C# may find things that don’t work the way they’d expect, while people who have programmed in JavaScript for most of their lives get scared when the compiler yells at them.

TypeScript is a superset of JavaScript; this means that JavaScript is part of TypeScript: TypeScript is in fact made up of JavaScript. Going with TypeScript does not mean throwing out JavaScript with its specific behavior – TypeScript helps us understand why JavaScript behaves the way it does. Let’s look at some mistakes people make when getting started with TypeScript. As an example, take error handling. It would be natural to expect to handle errors similarly to how we are used to doing it in other programming languages:

try {
    // let's pull some API with Axios, and some error occurred, say the Axios fetch failed
} catch (e: AxiosError) {
    //         ^^^^^^^^^^ Error 
}

The above syntax is not possible because that’s not how errors work in JavaScript: anything can be thrown, so a catch variable may only be annotated as any or unknown. Code that would be logical to write in TypeScript cannot be written that way in JavaScript.
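
The idiomatic approach is to catch the value without an annotation and narrow it inside the block. Here is a minimal sketch, assuming Axios and a hypothetical /api/items endpoint:

import axios from 'axios';

async function fetchItems() {
    try {
        return await axios.get('/api/items');   // hypothetical endpoint
    } catch (e) {
        // a catch variable may only be annotated as any or unknown,
        // so narrow it instead:
        if (axios.isAxiosError(e)) {
            console.error(e.response?.status);   // e is AxiosError in this branch
        }
        throw e;
    }
}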

A quick recap of TypeScript

  • Strongly typed language developed by Microsoft.
  • Code written in TypeScript compiles to native JavaScript
  • TypeScript extends JavaScript with the ability to statically assign types.

TypeScript supports modern editions of the ECMAScript standards, and code written using them is compiled so it can execute on platforms supporting older versions of the standard. This means a TS programmer can take advantage of ES2015+ features, such as modules, arrow functions, classes, spread, and destructuring, while still targeting environments that do not yet support them natively.

The language is backward compatible with JavaScript. If you feed the compiler pure JavaScript, it will happily accept it as valid TypeScript code, just without the type benefits. Therefore, one can mix strongly typed TypeScript syntax with plain untyped JavaScript implementations in the same codebase – and that will also be valid.

So, what benefits does the language provide?

  • System for working with modules/classes – you can create interfaces, modules, classes
  • You can inherit interfaces (including multiple inheritance), classes
  • You can describe your own data types
  • You can create generic interfaces
  • You can describe the type of a variable (or the properties of an object), or describe what interface the object to which the variable refers should have
  • You can describe the method signature
  • Using TypeScript (as opposed to JavaScript) significantly improves the development process because the IDE receives type information from the TS compiler in real time.

On the other hand, there are some disadvantages:

  • In order to use some external tool (say, a “library” or “framework”) with TypeScript, the signature of each method of each module of that tool must be described so that the compiler does not throw errors – otherwise, it simply won’t know about your tool. For the majority of popular tools, type declarations can most likely be found in the DefinitelyTyped repository (the @types packages); otherwise, you’ll have to describe them yourself
  • Probably the biggest disadvantage is the learning curve and building a habit of thinking and writing TypeScript
  • At least in my experience, more time is spent on development compared to vanilla JavaScript: in addition to the actual implementation, you also need to describe all the involved interfaces and method signatures.

One of the serious advantages of TS over JS is that, in various IDEs, it enables a development environment that identifies common errors as you type. Using TypeScript in large projects increases the reliability of programs, which can still be deployed in the same environments where regular JS applications run.

Types

There is an expectation that it is enough to learn a few types to start writing in TypeScript and automatically get good code. Indeed, one can simply write types in the code, telling the compiler that in this place we expect a variable of a certain type, and the compiler will verify whether you can do this or not:

let helloFunction = () => { "hello" };
let text: string = helloFunction();
// TS2322: Type 'void' is not assignable to type 'string'.

However, the reality is that it will not be possible to outsource the entire work to the compiler. And let’s see what types you need to learn to progress with TypeScript:

  • basic primitives from JavaScript: boolean, number, string, symbol, bigint, undefined and object.
  • instead of the function type, TypeScript has Function and a separate syntax similar to the arrow function, but for defining types. The object type will mean that the variable can be assigned any object literals in TypeScript.
  • TypeScript-specific primitives: unknown, any, void, never, this, unique symbol, plus null (which TypeScript models as a type of its own).
  • Next are the types standard for many object-oriented languages: array, tuple, generic
  • Named types in TypeScript refer to the practice of giving a specific, descriptive name to a type for variables, function parameters, and return types: type User = { name: string; age: number; }
  • TypeScript doesn’t stop there: it offers union and intersection. Special literal types often work in conjunction with string, number, boolean, template string. These are used when a function takes not just a string, but a specific literal value, like “foo” or “bar”, and nothing else. This significantly improves the descriptive power of the code.
  • TypeScript also has typeof, keyof, indexed, conditional, mapped, import, await, const, predicate.

These are just the basic types; many others are built on their basis: for example, a composite Record<T>, or the internal types Uppercase<T> and Lowercase<T>, which are not defined in any way: they are intrinsic types.

P.S. – do not use Function; pass a predefined type or use arrow notation instead:

// replace unknown with the types you're using with the function
F extends (...args: unknown[]) => unknown
// example of a function with 'a' and 'b' arguments that returns 'Hello'
const func = (a: number, b: number): string => 'Hello'

Map and d.ts files

TypeScript comes with its own set of unique file types that can be puzzling at first glance. Among these are the *.map and *.d.ts files. Let’s demystify these file types and understand their roles in TypeScript development.

What are .map Files in TypeScript?

.map files, or source map files, play a crucial role in debugging TypeScript code. These files are generated alongside the JavaScript output when TypeScript is compiled. The primary function of a .map file is to create a bridge between the original TypeScript code and the compiled JavaScript code. This linkage is vital because it allows developers to debug their TypeScript code directly in tools like browser developer consoles, even though the browser is executing JavaScript.

When you’re stepping through code or setting breakpoints, the .map file ensures that the debugger shows you the relevant TypeScript code, not the transpiled JavaScript. This feature is a game-changer for developers, as it simplifies the debugging process and enhances code maintainability.
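
Source map generation is controlled from the compiler options; a minimal tsconfig.json sketch:

{
  "compilerOptions": {
    "sourceMap": true,        // emit .js.map files alongside the compiled JavaScript
    "declaration": true,      // emit .d.ts declaration files
    "declarationMap": true    // map the .d.ts files back to the original .ts sources
  }
}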

Understanding .d.ts Files in TypeScript

On the other side, we have *.d.ts files, known as declaration files in TypeScript. These files are pivotal for using JavaScript libraries in TypeScript projects. Declaration files act as a bridge between the dynamically typed JavaScript world and the statically typed TypeScript world. They don’t contain any logic or executable code but provide type information about the JavaScript code to the TypeScript compiler.

For instance, when you use a JavaScript library like Lodash or jQuery in your TypeScript project, the *.d.ts files for these libraries describe the types, function signatures, class declarations, etc., of the library. This allows TypeScript to understand and validate the types being used from the JavaScript library, ensuring that the integration is type-safe and aligns with TypeScript’s static typing system.
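
When a library ships without typings, you can also hand-write a minimal declaration file yourself. A sketch for a hypothetical plain-JS greeter package:

// greeter.d.ts – hand-written declarations for a hypothetical 'greeter' JS package
declare module 'greeter' {
    export function greet(name: string): string;
    export const version: string;
}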

The Key Differences

The primary difference between *.map and *.d.ts files lies in their purpose and functionality. While .map files are all about enhancing the debugging experience by mapping TypeScript code to its JavaScript equivalent, .d.ts files are about providing type information and enabling TypeScript to understand JavaScript libraries.

In essence, .map files are about the developer’s experience during the debugging process, whereas .d.ts files are about maintaining type safety and smooth integration when using JavaScript libraries in TypeScript projects.

Cheatsheet

I won’t get into the details of the basics and operations, which are visualized in this cheat sheet instead:

Better than a thousand words!

Let’s take a look at some other great features not mentioned in the above cheatsheet.

What the heck is Any?

In TypeScript you can use the any data type. It allows you to work with any type of data without errors – just like regular JavaScript. In fact, what you do by using any is downgrade your TypeScript to plain JavaScript by eliminating type safety. The best way is to look at it in action:

let car: any = 2024;
console.log(typeof car)// number

car = "Mercedes";
console.log(typeof car)// string

car = false;
console.log(typeof car)// boolean

car = null;
console.log(typeof car)// object

car = undefined;
console.log(typeof car)// undefined

The car variable can be assigned any data type. any is an evil data type, indeed! If you are going to use the any data type everywhere, then TypeScript immediately becomes unnecessary – just write code in JavaScript.

TypeScript can also infer which data type to use if we do not specify it. We can replace the code from the first example with this one:

const caterpie01: number = 2021;     // number
const caterpie001 = 2021;            // number  - that was chosen by typescript for us

const Metapod01: string = "sleepy";  // string
const Metapod001 = "sleepy";         // string  - that was chosen by typescript for us

const Wartortle01: boolean = true;   // boolean
const Wartortle001 = true;           // boolean - that was chosen by typescript for us

This is a more readable and shorter way of writing. And of course, we won’t be able to assign any other data type to a variable.

let caterpie = 2021;            // in typescript this variable becomes number after assignment
caterpie = "text";              // type error, as the type was already defined upon the assignment

On the other hand, if we don’t specify a data type for the function’s arguments, TypeScript will use any type. Let’s look at the code:

const sum = (a, b) => {
    return a + b;
}
sum(2021, 9);

In strict mode, the above code will error out with “Parameter 'a' implicitly has an 'any' type; Parameter 'b' implicitly has an 'any' type” messages, but will work perfectly outside of strict mode, in the same manner as JavaScript code would. I assume that was done intentionally for compatibility.

Null checks and undefined

As simple as that:

if(value){
  ...
}

The expression in parentheses will be evaluated as true if it is not one of the following:

  • null
  • undefined
  • NaN
  • an empty string
  • 0
  • false

TypeScript supports the same type conversion rules as JavaScript does.
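
Keep this in mind when 0 or an empty string are legitimate values; a small sketch of the pitfall and one way around it:

const count = 0;
if (count) {
    // never runs: 0 is falsy even though it is a perfectly valid count
}

// nullish coalescing treats only null/undefined as "missing", so 0 survives:
const effective = count ?? 10;   // 0, not 10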

Type checking

Surprise-surprise, TypeScript is all about types and annotations. Therefore, the next piece of advice may seem weird but is legit: avoid explicit type checking if you can.

Instead, always prefer to specify the types of variables, parameters, and return values to harness the full power of TypeScript. This makes future refactoring easier.

function travelToTexas(vehicle: Bicycle | Car){
    if (vehicle instanceof Bicycle) {
        vehicle.pedal(currentLocation, newLocation('texas'));
    } else if (vehicle instanceof Car) {
        vehicle.drive(currentLocation, newLocation('texas'));
    }
}

The rewrite below looks much more readable and is therefore easier to maintain. And there are no ugly type-checking clauses:

type Vehicle = Bicycle | Car;

function travelToTexas(vehicle: Vehicle){
    vehicle.move(currentLocation, newLocation('texas'));
}

Generics

Use them wherever possible. This will help you better identify the types being used in your code. Unfortunately, they are not used as often as they deserve.

function returnType<T>(arg: T): T {
    return arg;
}

returnType<string>('MJ knows Sitecore')   // works well
returnType<number>('MJ knows Sitecore')   // errors out
// ^ Argument of type 'string' is not assignable to parameter of type 'number'.

If you are using a specific type, be sure to use extends:

type AddDot<T extends string> = `${T}.`// receives only strings, otherwise errors out

Ternary operators with extends

extends is very useful: it helps to determine whether one type is assignable to another in the type hierarchy (any -> number -> …) and make a comparison. Thanks to the combination of extends and ternary operators, you can create awesome conditional constructs like this:

type IsNumber<T> = T extends number ? true : false

type A = IsNumber<5>     // true
type B = IsNumber<'lol'> // false

Readonly and Consts

Use readonly by default to avoid accidentally overwriting types in your interface.

interface User {
    readonly name: string;
    readonly surname: string;
}

Let’s say you have an array that comes from the backend, [1, 2, 3, 4], and you need to use only these four numbers, i.e. make the array immutable. The as const construct can easily handle this:

const arr = [1, 2, 3, 4]            // current type is number[], so any element can be reassigned
arr[3] = 5  // [1, 2, 3, 5]

const arr = [1, 2, 3, 4] as const   // now the type is locked as readonly [1, 2, 3, 4]
arr[3] = 5  // errors out

Satisfies

That is a relatively recent feature (since version 4.9), but it is so helpful, as it allows you to impose restrictions without changing the inferred type. This can be very useful when you manage union types that do not share common methods. For example:

type Numbers = readonly [1, 2, 3];
type Val = { value: Numbers | string };

// the valid accepted values could be numbers 1, 2, 3, or a string
const myVal: Val = { value: 'a' };

So far so good. Now let’s say we have a string and must convert it to capital letters. Intuitively trying the below code without satisfies gets you an error:

myVal.value.toUpperCase()
// ^ Property 'toUpperCase' does not exist on type 'Numbers'.

So the right way to deal with it is to use satisfies – then everything works fine:

const myVal = { value: 'a' } satisfies Val;
myVal.value.toUpperCase()   // works well and outputs 'A'

Unions

Sometimes you can see code like this:

interface User {
    loginData: "login" | "username";
    getLogin(): void;
    getUsername(): void;
}

This code is bad because you can use a username but still call getLogin() and vice versa. To prevent this, it is better to use unions instead:

interface UserWithLogin {
    loginData: "login";
    getLogin(): void;
}
interface UserWithUsername {
    loginData: "username";
    getUsername(): void;
}

type User = UserWithLogin | UserWithUsername;

What is even more impressive – unions can be mapped over, meaning you can iterate their members to build new types:

type Numbers = 1 | 2 | 3
type OnlyRuKeys = { [R in Numbers]: boolean }
// {1: boolean, 2: boolean, 3: boolean}
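
As a bonus, the loginData literal in the example above acts as a discriminant, so TypeScript narrows the User union automatically:

function printIdentity(user: User) {
    if (user.loginData === 'login') {
        user.getLogin();      // user is narrowed to UserWithLogin here
    } else {
        user.getUsername();   // user is narrowed to UserWithUsername here
    }
}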

Utility Types

TypeScript Utility Types are a set of built-in types that can be used to manipulate data types in code.

  • Required<T> – makes all properties of an object of type T required
  • Partial<T> – makes all properties of an object of type T optional
  • Readonly<T> – makes all properties of an object of type T read-only
  • NonNullable<Type> – retrieves a type from Type, excluding null and undefined
  • Parameters<Type> – retrieves the types of the arguments of function Type
  • ReturnType<Type> – retrieves the return type of function Type
  • InstanceType<Type> – retrieves the type of an instance of class Type
  • Record<Keys, Type> – creates a type that is a record with keys defined by the first parameter and values of the type defined by the second parameter
  • Pick<T, K extends keyof T> – selects the properties of an object of type T with the keys specified in K
  • Omit<T, K extends keyof T> – selects the properties of an object of type T, excluding those specified in K
  • Exclude<UnionType, ExcludedMembers> – excludes certain types from the union type
  • Uppercase<StringType>, Lowercase<StringType>, Capitalize<StringType>, Uncapitalize<StringType> – string manipulation utility types that change the case of the string according to their name

I won’t get into the details on all of them but will showcase just a few for better understanding:

// 1. Required
interface Person {
    name?: string;
    age?: number;
}

let requiredPerson: Required<Person>;   // now requiredPerson could be as { name: string; age: number; }

// 2. Partial
interface Person {
    name: string;
    age: number;
}

let partialPerson: Partial<Person>;    // now partialPerson could be as { name?: string; age?: number; }

// 3. NonNullable
let value: string | null | undefined;
let nonNullableValue: NonNullable<typeof value>;    // now nonNullableValue is a string

// 4. Awaited
async function getData(): Promise<string> {
    return 'hello';
}
let awaitedData: Awaited<ReturnType<typeof getData>>;     // now awaitedData is a string

// 5 Case management
type Uppercased = Uppercase<'hello'>;       // 'HELLO'
type Lowercased = Lowercase<'Hello'>;       // 'hello'
type Capitalized = Capitalize<'hello'>;     // 'Hello'
type Uncapitalized = Uncapitalize<'Hello'>; // 'hello'

These above are just a few examples of utility types in TypeScript. To find out more please refer to the official documentation.

Interface vs types

This is one of the questions that causes the most confusion for a C# developer trying TypeScript. What’s the difference between the two signatures below:

interface X {
    a: number
    b: string
}

type X = {
    a: number
    b: string
};

You can use both to describe the shape of an object or a function signature; it’s just that the syntax differs.

Unlike interface, the type alias can also be used for other types such as primitives, unions, and tuples:

// primitive
type Name = string;

// object
type PartialPointX = { x: number; };
type PartialPointY = { y: number; };

// union
type PartialPoint = PartialPointX | PartialPointY;

// tuple
type Data = [number, string];

Both can be extended, but again, the syntax differs. Also, an interface can extend a type alias, and vice versa:

// Interface extends interface
interface PartialPointX { x: number; }
interface Point extends PartialPointX { y: number; }

// Type alias extends type alias
type PartialPointX = { x: number; };
type Point = PartialPointX & { y: number; };

// Interface extends type alias
type PartialPointX = { x: number; };
interface Point extends PartialPointX { y: number; }

// Type alias extends interface
interface PartialPointX { x: number; }
type Point = PartialPointX & { y: number; };

A class can implement an interface or a type alias, both in the exact same way. However, a class and an interface are considered static blueprints. Therefore, they cannot implement or extend a type alias that names a union type.

Unlike a type alias, an interface can be defined multiple times and will be treated as a single interface (with members of all declarations being merged):

// These two declarations become:
// interface Point { x: number; y: number; }
interface Point { x: number; }
interface Point { y: number; }

const point: Point = { x: 1, y: 2 };

So, when should I use one over the other? If simplified: use types when you might need a union or intersection; use interfaces when you want to use extends or implements. There is no hard and fast rule though – use what works for you. I admit that may still be confusing, so please read this discussion for more understanding.

Error Handling

Throwing errors is always good: if something goes wrong at runtime, you can terminate the execution at the right moment and investigate the error using the stack trace in the console.

Always use rejects with errors

JavaScript and TypeScript allow you to throw any object, and a promise can be rejected with any reason object. It is recommended to use the throw syntax with the Error type, because your error can then be caught at a higher level of code with the catch syntax. Instead of this incorrect block:

function calculateTotal(items: Item[]): number{
    throw 'Not implemented.';
}

function get(): Promise<Item[]> {
    return Promise.reject('Not implemented.');
}

use the below:

function calculateTotal(items: Item[]): number{
    throw new Error('Not implemented.');
}
function get(): Promise <Item[]> {
    return Promise.reject(new Error('Not implemented.'));
}
// the above Promise could be rewritten with an async equivalent:
async function get(): Promise <Item[]> {
    throw new Error('Not implemented.');
}

The advantage of using Error types is that they are supported by the try/catch/finally syntax, and all errors implicitly have a stack property, which is very powerful for debugging. There are alternatives: don’t use the throw syntax and always return custom error objects instead. TypeScript makes this even easier, and it works like a charm!
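
For completeness, here is one possible sketch of that returning-errors style – the Result shape below is my illustration, not a built-in type:

type Result<T> =
    | { ok: true; value: T }
    | { ok: false; error: Error };

function parseNumber(input: string): Result<number> {
    const n = Number(input);
    return Number.isNaN(n)
        ? { ok: false, error: new Error(`Not a number: ${input}`) }
        : { ok: true, value: n };
}

const r = parseNumber('42');
if (r.ok) {
    console.log(r.value);   // the union is narrowed to the success branch
}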

Dealing with Imports

Last but not least, I want to share some tips and best practices for using imports. With simple, clear, and logical import statements, you can inspect the dependencies of your current code much faster.

Make sure you use the following good practices for import statements:

  • Import statements should be in alphabetical order and grouped.
  • Unused imports must be removed (linter will come to help you)
  • Named imports must be in alphabetical order, i.e. import {A, B, C} from 'mod';
  • Import sources should be in alphabetical order in groups, i.e.: import * as foo from 'a'; import * as bar from 'b';
  • Import groups are indicated by blank lines.
  • Groups must follow the following order:
    • Polyfills (i.e. import 'reflect-metadata';)
    • Node build modules (i.e. import fs from 'fs';)
    • External modules (i.e. import { query } from 'itiriri';)
    • Internal modules (i.e. import { UserService } from 'src/services/userService';)
    • Modules from the parent directory (i.e. import foo from '../foo'; import qux from '../../foo/qux';)
    • Modules from the same or related directory (i.e. import bar from './bar'; import baz from './bar/baz';)

These rules are not obvious to beginners, but they come with time once you start paying attention to the details. Just compare the ugly block of imports:

import { TypeDefinition } from '../types/typeDefinition';
import { AttributeTypes } from '../model/attribute';
import { ApiCredentials, Adapters } from './common/api/authorization';
import fs from 'fs';
import { ConfigPlugin } from './plugins/config/configPlugin';
import { BindingScopeEnum, Container } from 'inversify';
import 'reflect-metadata';

against the same imports nicely structured, as below:

import 'reflect-metadata';

import fs from 'fs';
import { BindingScopeEnum, Container } from 'inversify';

import { AttributeTypes } from '../model/attribute';
import { TypeDefinition } from '../types/typeDefinition';

import { ApiCredentials, Adapters } from './common/api/authorization';
import { ConfigPlugin } from './plugins/config/configPlugin';

Which one is faster to read?

Conclusion

In my opinion, TypeScript’s superpower is that it provides feedback as you write the code, not at runtime: the IDE nicely prompts which argument to use when calling a function, and types are defined and navigable. All that comes at the cost of a minor compilation overhead and a slightly increased learning curve. That is a fair deal; TypeScript will stay with us for a long time and won’t go away – the earlier you learn it, the more time and effort it will save you later.

Of course, I left plenty of raw TypeScript features out, and did that intentionally so as not to overload the readers. If the majority of you are coming as professionals, either starting with Next.js development in general or switching from other programming languages and tech stacks (like .NET with C#) where I was myself a year ago – that is definitely the right volume and agenda to start with. There are of course a lot of powerful TypeScript features for exploration beyond today’s post, such as:

  1. Decorators
  2. Namespaces
  3. Type Guards and Differentiating Types
  4. Type Assertions
  5. Ambient Declarations
  6. Advanced Types (e.g., Conditional Types, Mapped Types, Template Literal Types)
  7. Module Resolution and Module Loaders
  8. Project References and Build Optimization
  9. Declaration Merging
  10. Using TypeScript with WebAssembly

But I think that is enough for this post. Hope you’ll enjoy writing strongly typed code with TypeScript!

XM Cloud Forms Builder

XM Cloud Forms Builder released

Composable Forms Builder is now available with Sitecore XM Cloud. Let’s take a look at one of the most anticipated modules for Sitecore’s flagship hybrid headless cloud platform.

Historically, we had an external module called Web Forms for Marketers that one could install on top of their Sitecore instance to gain the desired functionality of collecting user input. This module was later reconsidered and reworked, finding its reincarnation as Sitecore Forms, an integral part of the Sitecore platform since version 9. Customers enjoyed this built-in solution provided along with their primary DXP; however, with the headless architecture of XM Cloud there were no CD servers any longer, and therefore no suitable place for saving the collected user input. There was clearly a need for a SaaS forms solution, and this gap is finally filled!

An interesting fact: until the release of Forms with XM Cloud, the relevant composable solution for interacting with visitors was Sitecore Send, and because of that Sitecore logically decided to derive the XM Cloud Forms module from the Sitecore Send codebase (as it already had plenty of the desired features), rather than from legacy Sitecore Forms.

Sitecore XM Cloud Forms

So, what have we got?

The main goal was to release a new Forms product as a SaaS solution that integrates with any custom web front-end. The actual challenge was to combine the ultimate simplicity of creating and publishing forms for the majority of marketing professionals with tailoring this offering for typical headless projects. In my opinion, despite the complexities, that was well achieved!

Let’s first take a look at its desired/expected capabilities:

  • Template Library
  • Work with Components Builder
  • Use external datasources for pre-populating forms
  • Reporting and analytics
  • Ability to create multi-step and multi-page forms
  • Conditional logic (not available yet)

One would ask: if there’s no CD server or any managed backend at all, where do the submissions go? Surely there must be some SaaS-provided storage along with an interface to manage the collected input? Incorrect! There’s none. It was actually a smart move by Sitecore’s developers, who decided to kill two birds with one stone. First, they saved the effort of building a universal UI/UX that would hardly satisfy the variable needs of such a diverse range of customers and industries. The second reason is even more legit: avoiding the storage of any Personally Identifiable Information, so that it won’t be processed within XM Cloud, leaving the particular implementation decisions to customers’ discretion.

That is done in a very composable SaaS manner, offering you to configure a webhook, passing a payload of collected data to the desired system of your choice.

Webhooks

Upon form submission, the webhook is triggered to submit the gathered data to the configured system – it could be a database, CRM, CDP, or whichever backend suits a given form. Even more, you can have shared webhooks so that multiple forms use the same configured webhook. Similarly to the legacy forms that submitted their data into xDB, the most logical choice would be using the powerful Sitecore CDP for processing this data. However, with webhooks the use case of XM Cloud Forms becomes truly universal, and if you combine it with Sitecore Connect – it could span whichever integration Sitecore Connect provides.
Webhooks come with multiple authentication options, covering any potential backend requirement.
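
To illustrate the receiving side, here is a minimal sketch of a Next.js API route acting as a webhook target – the endpoint name and payload handling are my assumptions, not a documented XM Cloud contract:

// pages/api/form-webhook.ts – hypothetical webhook target for form submissions
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
    if (req.method !== 'POST') {
        return res.status(405).end();
    }
    // the payload shape depends on your form fields – validate before trusting it
    const submission = req.body as Record<string, unknown>;
    // forward the submission to a CRM/CDP/database of your choice here
    console.log('Form submission received:', submission);
    res.status(200).json({ received: true });
}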

Let’s see it in action!

The editor looks and feels like XM Cloud Pages – similar styling, layout, and UI elements:

First, let’s pick up the layout by simply dragging and dropping it on a canvas. For simplicity, I selected Full Width Layout. Once there, you can start dropping fields to a chosen layout:

Available Fields:
  • Action Button
  • Email
  • Phone
  • Short Text
  • Long Text
  • Select (single dropdown)
  • Multi Select (where you can define the number of selectable options, say 3 of 9)
  • Date
  • Number
  • Radio
  • Checkbox (single)
  • Checkbox Group
  • Terms & Conditions
  • reCAPTCHA
Besides input-capturing elements, you can also define “passive” UI elements from under the Basic expander:
  • Text
  • Spacer
  • Social Media – a set of clickable buttons to socials you define
  • Image, which has a pretty strong set of source options:
Media Upload options
Look at the variety of configuration options on the right panel. You can define:
  • Background Color within that field – transparent is the default one. You can even put a background image instead!
  • Settings for the field box shadows, which also define horizontal and vertical lengths, blur and spread radii, and of course – the shadow color
  • Help text that is shown below and prompts some unobvious guidance you’d like a user to follow
  • For text boxes, you can set placeholder and prefill values
  • The field can be made mandatory and/or hidden via the corresponding checkboxes
  • Validation is controlled by a regex pattern and character length limit
  • Additionally, you can style pretty much everything: the field itself, label, placeholder, and help text, as well as set the overall padding

Please note that at the current stage the edited form is in a Draft state. Clicking the Save button suggests running your form in Preview before saving, and that was very helpful – in my case, I had left the Full Name field hidden by mistake, and preview mode immediately showed me that. After fixing the visibility, I was good to go with saving.

The Forms home screen shows all the available forms. To activate a form, I need to create a Webhook first, then assign it to the form. In addition, you define the action to perform upon submission – redirect to a URL, display a success message, or maybe do nothing – as well as configure the failure message.

This time the Activate button works well and my form is listed as Active. From now on you cannot edit fields anymore, and you cannot change the status back from Active. Therefore, always verify your form in Preview before publishing.

Weirdly, you cannot even delete a form in Active status. What you can do, however, is duplicate an active form into a draft one and continue editing fields from there.

Forms listing

Testing Form

The most obvious desire at this stage is to test your form for real before using it. Luckily, the developers took care of that as well.

Testing webhook
I also created a webhook catcher with Pipedream RequestBin. On my first go, I faced a deadlock, being unable to submit the form for the test. The reason was that I had mistakenly checked both the Hidden and Required checkboxes on a field and could not progress from there – the validation message did not even show on a hidden field. Another mistake was that I overlooked this in Preview and published the form into the active state. Hopefully, the developers will find a solution to this sooner rather than later.

I gave it another try to test how validation works:

Validation in action

Once validation passes, the Test Form Submission dialog shows you the JSON payload as it goes out, along with the HTTP headers supplied with the webhook request. Let’s hit the Submit button and see the confirmation – I chose to display a confirmation message, and it shows up.

The webhook catcher shows all my submitted data along with HTTP headers; everything was sent and received successfully!

webhook catcher

Multipage Forms

Multipage Forms are supported, and that is impressive. Pay attention to the Add Page link button in between page canvases. Once activated, it also enforces a Next button on non-final page canvases that triggers switching to the next form page:

What’s Next? Pages Editor!

Let’s use this newly created form from XM Cloud Pages. Please note a new section called Forms under the Components tab – that is where all of your active forms reside. You can simply drag and drop a form onto a desired placeholder, as you normally do in the Pages editor.

Consume Forms from Pages Editor

Please note: you must have your site deployed to an editing host running Headless (JSS) SDK version 21.6 or newer to make it work – that is when XM Cloud Forms support was added. Otherwise, you’ll face this error:

BYOC error before SDK 21.6

Experience Editor and Components Builder

Surprisingly, created forms are available from Components Builder:
Forms in Components Builder
However, Experience Editor does not have a direct way of consuming XM Cloud Forms. I tried the following chain of steps in order to make it work:
  1. Create and Activate a new form from Forms editor
  2. Consume it from the Components builder into a new component using BYOC, then publish this component
  3. Open Pages app, find the component with an embedded form you’ve built at step (2) and drop it to a page, then publish
  4. Open that same page in Experience Editor

Live Demo in Action

As you know, a video is often worth a thousand words, so here it is below. I’ve recorded the whole walkthrough, from explaining to showcasing it all in action, up to the most extreme example – creating and publishing a form, then consuming it from the XM Cloud Components builder as part of a composite component, which in turn is used in the Pages editor to put down on a page, which also opens up successfully in the Experience Editor. Unbelievable, and it all functions well. Just take a look yourself:

Developers Experience

As developers, how would we integrate forms into our “head” applications? That works via a Forms BYOC component for your Next.js app, coming out of the box with the SDK. I spotted some traces of XM Cloud Forms as part of Headless JSS SDK 21.6.0 a while ago, when it was still in the state of a “Canary” build. Now it has been released, and among the features one can see an import of the SitecoreForm component into the sample Next.js app, as part of a pull request merged into this release.

The documentation is available here, but everything is so absolutely intuitive that you hardly need it, do you?

Template Library

It is worth mentioning that XM Cloud Forms contains a Template Library – a handful of pre-configured forms you can use straight away or slightly modify to your needs. There is an expectation it will grow with time, covering any potential scenario one could ever have.
Template Library

Licensing

Since Forms are bundled into XM Cloud they’re included with every XM Cloud subscription.

What is missing?

  • the file upload feature is not supported – webhooks alone are not sufficient to handle it
  • the ability for customization and extension – hopefully, it comes later, as there is already an empty section for custom fields

Hopefully, the product developers will implement these and more features in the upcoming releases. But even with what was released today, I really enjoyed XM Cloud Forms builder!

A crash course of Next.js: Caching, Authentication and Going Live tasks (part 4)

This series is my Next.js study summary, and although it leans toward vanilla Next.js, all the features are applicable with the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – powerful features for layouts, styles, and fonts, the Image and Script components, and of course – TypeScript.
  • In part 3 we went through the nuances of Next.js routing and explained middleware

In this post we are going to talk about pre-going live optimizations such as caching and reducing bundle size as well as authentication.

Going live consideration

  • use caching wherever possible (see below)
  • make sure that the server and database are located (deployed) in the same region
  • minimize the amount of JavaScript code
  • delay loading heavy JS until you actually use it
  • make sure logging is configured correctly
  • make sure error handling is correct
  • configure 500 (server error) and 404 (page not found) pages
  • make sure the application meets the best performance criteria
  • run Lighthouse to test performance, best practices, accessibility, and SEO. Use an incognito mode to ensure the results aren’t distorted
  • make sure that the features used in your application are supported by modern browsers
  • improve performance by using the following:
    • next/image and automatic image optimization
    • automatic font optimization
    • script optimization

Caching

Caching reduces response times and the number of requests to external services. Next.js automatically adds caching headers to static assets served from _next/static, including JS, CSS, images, and other media.

Cache-Control: public, max-age=31536000, immutable

To revalidate the cache of a page that was previously rendered into static markup, use the revalidate setting in the getStaticProps function.
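
A minimal sketch – fetchData() below stands in for your own data source:

export async function getStaticProps() {
    const data = await fetchData();   // hypothetical data source
    return {
        props: { data },
        revalidate: 60,   // the page is re-generated at most once every 60 seconds
    };
}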

Please note: running the application in development mode using next dev disables caching:

Cache-Control: no-cache, no-store, max-age=0, must-revalidate

Caching headers can also be used in getServerSideProps and the routing interface for dynamic responses. An example of using stale-while-revalidate:

// The value is considered fresh for 10 seconds (s-maxage=10).
// If the request is repeated within those 10 seconds, the previously
// cached value is still considered fresh. If the request is repeated
// within the following 59 seconds, the cached value is considered stale,
// but is still used for rendering (stale-while-revalidate=59).
// The request is then executed in the background and the cache is
// filled with fresh data. Once updated, the page will display the new value.
export async function getServerSideProps({ req, res }){
    res.setHeader(
        'Cache-Control',
        'public, s-maxage=10, stale-while-revalidate=59'
    )
    return {
        props: {}
    }
}

Reducing the JavaScript bundle volume/size

To identify what’s included in each JS bundle, you can use the following tools:

  • Import Cost – extension for VSCode showing the size of the imported package
  • Package Phobia – a service for determining the “cost” of adding a new development dependency (devDependency) to a project
  • Bundle Phobia – a service for determining how much adding a dependency will increase the size of the build
  • Webpack Bundle Analyzer – Webpack plugin for visualizing the bundle in the form of an interactive, scalable tree structure

Each file in the pages directory is compiled into a separate bundle during the next build command. You can use dynamic import to lazily load components and libraries.
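
A sketch of such a lazy load – the HeavyChart component path is an assumption:

import dynamic from 'next/dynamic'

// the component is split into its own chunk and loaded only when rendered
const HeavyChart = dynamic(() => import('../components/HeavyChart'), {
    loading: () => <p>Loading...</p>,
    ssr: false,   // skip server-side rendering for this chunk
})

export default function Dashboard() {
    return <HeavyChart />
}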

Authentication

Authentication is the process of identifying who a user is, while authorization is the process of determining their permissions (or “authority”, in other words), i.e. what the user has access to. Next.js supports several authentication patterns.

Authentication Patterns

Each authentication pattern determines the strategy for obtaining data. Next, you need to select an authentication provider that supports the selected strategy. There are two main authentication patterns:

  • using static generation to load state on the server and retrieve user data on the client side
  • receiving user data from the server to avoid “flashing” unauthenticated content (in the sense of application state switches being visible to the user)

Authentication when using static generation

Next.js automatically detects that a page is static if the page does not have blocking methods to retrieve data, such as getServerSideProps. In this case, the page renders the initial state received from the server and then requests the user’s data on the client side.

One of the advantages of using this pattern is the ability to deliver pages from a global CDN and preload them using next/link. This results in a reduced Time to Interactive (TTI).

Let’s look at an example of a user profile page. On this page, the template (skeleton) is first rendered, and after executing a request to obtain user data, this data is displayed:

// pages/profile.js
import useUser from '../lib/useUser'
import Layout from '../components/Layout'

export default function Profile(){
    // get user data on the client side
    const { user } = useUser({ redirectTo: '/login' })
    // loading status received from the server
    if (!user || user.isLoggedIn === false) {
        return <Layout>Loading...</Layout>
    }
    // after the request completes, user data is displayed
    return (
        <Layout>
            <h1>Your profile</h1>
            <pre>{JSON.stringify(user, null, 2)}</pre>
        </Layout>
    )
}

Server-side rendering authentication

If a page has an asynchronous getServerSideProps function, Next.js will render that page on every request using the data from that function.

export async function getServerSideProps(context){
    return {
        props: {}// will get passed down to a component as props
    }
}

Let’s rewrite the above example. If there is a session, the Profile component will receive the user prop. Note the absence of a template:

// pages/profile.js
import withSession from '../lib/session'
import Layout from '../components/Layout'

export const getServerSideProps = withSession(async ({ req, res }) => {
    const user = req.session.get('user')
    if (!user) {
        return {
            redirect: {
                destination: '/login',
                permanent: false
            }
        }
    }
    return {
        props: {
            user
        }
    }
})
export default function Profile({ user }){
    // display user data, no loading state required
    return (
        <Layout>
            <h1>Your profile</h1>
            <pre>{JSON.stringify(user, null, 2)}</pre>
        </Layout>
    )
}

The advantage of this approach is preventing the display of unauthenticated content before the redirect happens. Note that requesting user data in getServerSideProps blocks rendering until the request is resolved. Therefore, to avoid creating bottlenecks and increasing Time to First Byte (TTFB), you should ensure that the authentication service performs well.

Authentication Providers

For integrating with a user database, consider using one of the following solutions:

  • next-iron-session – a low-level encrypted stateless session
  • next-auth is a full-fledged authentication system with built-in providers (Google, Facebook, GitHub, and similar), JWT, JWE, email/password, magic links, etc.
  • with-passport-and-next-connect – good old Node Passport also works for this case

A crash course of Next.js: Routing and Middleware (part 3)

This series is my Next.js study summary, and although it leans toward vanilla Next.js, all the features are applicable with the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – layouts, styles and fonts powerful features, Image and Script components, and of course – TypeScript.

In this post we are going to talk about routing with Next.js – pages, API Routes, layouts, and Middleware.

Routing

Next.js routing is based on the concept of pages. A file located within the pages directory automatically becomes a route. index.js files map to the root of their directory:

  • pages/index.js -> /
  • pages/blog/index.js -> /blog

The router supports nested files:

  • pages/blog/first-post.js -> /blog/first-post
  • pages/dashboard/settings/username.js -> /dashboard/settings/username

You can also define dynamic route segments using square brackets:

  • pages/blog/[slug].js -> /blog/:slug (for example: blog/first-post)
  • pages/[username]/settings.js -> /:username/settings (for example: /johnsmith/settings)
  • pages/post/[...all].js -> /post/* (for example: /post/2021/id/title)

Navigation between pages

You should use the Link component for client-side routing:

import Link from 'next/link'

export default function Home(){
    return (
        <ul>
            <li>
                <Link href="/">
                    Home
                </Link>
            </li>
            <li>
                <Link href="/about">
                    About
                </Link>
            </li>
            <li>
                <Link href="/blog/first-post">
                    First post
                </Link>
            </li>
        </ul>
    )
}

So we have:

  • / -> pages/index.js
  • /about -> pages/about.js
  • /blog/first-post -> pages/blog/[slug].js

For dynamic segments feel free to use interpolation:

import Link from 'next/link'

export default function Post({ posts }){
    return (
        <ul>
            {posts.map((post) => (
                <li key={post.id}>
                    <Link href={`/blog/${encodeURIComponent(post.slug)}`}>
                        {post.title}
                    </Link>
                </li>
            ))}
        </ul>
    )
}

Or leverage URL object:

import Link from 'next/link'

export default function Post({ posts }){
    return (
        <ul>
            {posts.map((post) => (
                <li key={post.id}>
                    <Link
                        href={{
                            pathname: '/blog/[slug]',
                            query: { slug: post.slug },
                        }}
                    >
                        {post.title}
                    </Link>
                </li>
            ))}
        </ul>
    )
}

Here we pass:

  • pathname is the page name under the pages directory (/blog/[slug] in this case)
  • query is an object having a dynamic segment (slug in this case)

To access the router object within a component, you can use the useRouter hook or the withRouter utility, and it is recommended practice to use useRouter.

Dynamic routes

If you want to create a dynamic route, you need to add [param] to the page path.

Let’s consider a page pages/post/[id].js having the following code:

import { useRouter } from 'next/router'

export default function Post(){
    const router = useRouter()
    const { id } = router.query
    return <p>Post: {id}</p>
}

In this scenario, routes /post/1, /post/abc, etc. will match pages/post/[id].js. The matched parameter is passed to a page as a query string parameter, along with other parameters.

For example, for the route /post/abc the query object will look as: { "id": "abc" }

And for the route /post/abc?foo=bar like this: { "id": "abc", "foo": "bar" }

Route parameters overwrite query string parameters, so the query object for the /post/abc?id=123 route will look like this: { "id": "abc" }

For routes with several dynamic segments, the query is formed in exactly the same way. For example, the page pages/post/[id]/[cid].js will match the route /post/123/456, and the query will look like this: { "id": "123", "cid": "456" }

Navigation between dynamic routes on the client side is handled using next/link:

import Link from 'next/link'

export default function Home(){
    return (
        <ul>
            <li>
                <Link href="/post/abc">
                    Leads to `pages/post/[id].js`
                </Link>
            </li>
            <li>
                <Link href="/post/abc?foo=bar">
                    Also leads to `pages/post/[id].js`
                </Link>
            </li>
            <li>
                <Link href="/post/123/456">
                    Leads to `pages/post/[id]/[cid].js`
                </Link>
            </li>
        </ul>
    )
}

Catch All routes

Dynamic routes can be extended to catch all paths by adding an ellipsis (...) in square brackets. For example, pages/post/[...slug].js will match /post/a, /post/a/b, /post/a/b/c, etc.

Please note: slug is not hard-defined, so you can use any name of choice, for example, [...param].

The matched parameters are passed to the page as query string parameters (slug in this case) with an array value. For example, a query for /post/a will have the following form: {"slug": ["a"]} and for /post/a/b this one: {"slug": ["a", "b"]}

Routes for intercepting all the paths can be optional – for this, the parameter must be wrapped in one more square bracket ([[...slug]]). For example, pages/post/[[...slug]].js will match /post, /post/a, /post/a/b, etc.

Catch-all routes are what Sitecore uses by default, and can be found at src\[your_nextjs_app_name]\src\pages\[[...path]].tsx.

The main difference between the regular and optional “catchers” is that the optional ones match a route without parameters (/post in our case).

Examples of query object:

{}// GET `/post` (empty object)
{"slug": ["a"]}// `GET /post/a` (single element array)
{"slug": ["a", "b"]}// `GET /post/a/b` (array with multiple elements)

Please note the following features:

  • static routes take precedence over dynamic ones, and dynamic routes take precedence over catch-all routes, for example:
    • pages/post/create.js – will match /post/create
    • pages/post/[id].js – will match /post/1, /post/abc, etc., but not /post/create
    • pages/post/[...slug].js – will match /post/1/2, /post/a/b/c, etc., but not /post/create and /post/abc
  • pages processed using automatic static optimization will be hydrated without route parameters, i.e. query will be an empty object ({}). After hydration, Next.js will trigger an update to fill the query object.

Imperative approach to client-side navigation

As I mentioned above, in most cases the Link component from next/link is sufficient to implement client-side navigation. However, you can also leverage the router from next/router directly:

import { useRouter } from 'next/router'

export default function ReadMore(){
    const router = useRouter()
    return (
        <button onClick={() => router.push('/about')}>
            Read about
        </button>
    )
}

Shallow Routing

Shallow routing allows you to change the URL without re-running data fetching methods, including the getServerSideProps and getStaticProps functions. We receive the updated pathname and query through the router object (obtained using useRouter() or withRouter()) without losing the component’s state.

To enable shallow routing, set { shallow: true }:

import { useEffect } from 'react'
import { useRouter } from 'next/router'

// current `URL` is `/`
export default function Page(){
    const router = useRouter()
    useEffect(() => {
        // perform navigation after first rendering
        router.push('?counter=1', undefined, { shallow: true })
    }, [])
    useEffect(() => {
        // value of `counter` has changed!
    }, [router.query.counter])
}

When updating the URL, only the state of the route will change.

Please note: shallow routing only works within a single page. Let’s say we have a pages/about.js page and we do the following:

router.push('?counter=1', '/about?counter=1', { shallow: true })

In this case, the current page is unloaded, a new one is loaded, and the data fetching methods are rerun (regardless of the presence of { shallow: true }).

API Routes

Any file located under the pages/api folder maps to /api/* and is considered to be an API endpoint, not a page. Because of its non-UI nature, the routing code remains server-side and does not increase the client bundle size. The below example pages/api/user.js returns a status code of 200 and data in JSON format:

export default function handler(req, res){
    res.status(200).json({ name: 'Martin Miles' })
}

The handler function receives two parameters:

  • req – an instance of http.IncomingMessage + several built-in middlewares (explained below)
  • res – an instance of http.ServerResponse + some helper functions (explained below)

You can use req.method for handling various methods:

export default function handler(req, res){
    if (req.method === 'POST') {
        // handle POST request
    } else {
        // handle other request types
    }
}

Use Cases

The entire API can be built using a routing interface so that the existing API remains untouched. Other cases could be:

  • hiding the URL of an external service
  • using environment variables (stored on the server) for accessing external services safely and securely

Nuances

  • The routing interface does not process CORS headers by default. This is done with the help of middleware (see below)
  • routing interface cannot be used with next export

As for dynamic routing segments, they are subject to the same rules as the dynamic parts of page routes I explained above.
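
For example, a hypothetical pages/api/post/[id].js receives its dynamic segment through req.query:

// pages/api/post/[id].js
export default function handler(req, res) {
    const { id } = req.query
    res.status(200).json({ id })
}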

Middlewares

The routing interface includes the following middlewares that transform the incoming request (req):

  • req.cookies – an object containing the cookies included in the request (default value is {})
  • req.query – an object containing the query string (default value is {})
  • req.body – an object containing the request body parsed according to the Content-Type header, or null

Middleware Customizations

Each route can export a config object with Middleware settings:

export const config = {
    api: {
        bodyParser: {
            sizeLimit: '2mb'
        }
    }
}
  • bodyParser: false – disables response parsing (returns raw data stream as Stream)
  • bodyParser.sizeLimit – the maximum request body size, in any format supported by the bytes library (e.g. '1mb')
  • externalResolver: true – tells the server that this route is being processed by an external resolver, such as express or connect

Adding Middlewares

Let’s consider adding cors middleware. Install the module using npm install cors and add cors to the route:

import Cors from 'cors'
// initialize middleware

const cors = Cors({
    methods: ['GET', 'HEAD']
})
// helper function waiting for a successful middleware resolve
// before executing some other code
// or to throw an exception if middleware fails
const runMiddleware = (req, res, fn) =>
    new Promise((resolve, reject) => {
        fn(req, res, (result) =>
            result instanceof Error ? reject(result) : resolve(result)
        )
    })
export default async function handler(req, res){
    // this actually runs middleware
    await runMiddleware(req, res, cors)
    // the rest `API` logic
    res.json({ message: 'Hello world!' })
}

Helper functions

The response object (res) includes a set of methods to improve the development experience and speed up the creation of new endpoints.

This includes the following:

  • res.status(code) – function for setting the status code of the response
  • res.json(body) – to send a response in JSON format, body should be any serializable object
  • res.send(body) – to send a response, body could be a string, object or Buffer
  • res.redirect([status,] path) – to redirect to the specified page, status defaults to 307 (temporary redirect)
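
A short sketch combining these helpers (the id parameter is just an illustration):

export default function handler(req, res) {
    if (req.method !== 'GET') {
        return res.status(405).send('Method Not Allowed')
    }
    if (!req.query.id) {
        return res.redirect(307, '/')   // temporary redirect back home
    }
    res.status(200).json({ id: req.query.id })
}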

This concludes part 3. In part 4 we’ll talk about caching, authentication, and considerations for going live.