
A crash course of Next.js: TypeScript (part 5)

This series is my Next.js study summary. Although it focuses on vanilla Next.js, all the features are applicable to the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – layouts, styles, and fonts, the powerful Image and Script components, and of course – TypeScript.
  • We went through the nuances of Next.js routing and explained middleware in part 3
  • Part 4 was all about caching, authentication, and going live tasks

In this post, we are going to talk about TypeScript. It was mentioned more than once in the previous parts, but this time we’ll put it under the spotlight.

Intro

TypeScript has become the industry standard for strong typing in the current state of the JavaScript world. There is currently no other solution that implements typing in a project as effectively. One can, of course, use some kind of contracts, conventions, or JSDoc to describe types, but all of these are far worse for code readability than typical TypeScript annotations. Annotations keep your eyes from darting up and down: you simply read the signature and immediately understand everything.

Another point is that JavaScript support in editors and IDEs is usually based on TypeScript. JavaScript support in VS Code is implemented using the TypeScript Language Service, and WebStorm’s JavaScript support largely relies on the TypeScript framework and uses its standard library. One more reason to learn TypeScript: when the editor complains about a JavaScript type mismatch, you’ll have to read declarations written in TypeScript.

However, TypeScript is not a replacement for other code quality tools. TypeScript is just one of the tools that allows you to maintain some conventions in a project and make sure that there are strong types. You are still expected to write tests, do code reviews, and be able to design the architecture correctly.

Learning TypeScript, even in 2024, can be difficult for various reasons. Folks like myself who came from C# may find things that don’t work the way they’d expect. People who have programmed in JavaScript for most of their lives get scared when the compiler yells at them.

TypeScript is a superset of JavaScript: JavaScript is part of TypeScript, and valid JavaScript is valid TypeScript. Going with TypeScript does not mean throwing out JavaScript with its specific behavior – TypeScript helps us understand why JavaScript behaves the way it does. Let’s look at some mistakes people make when getting started with TypeScript. Take error handling as an example. It would be a natural, logical expectation to handle errors similarly to how we are used to doing it in other programming languages:

try {
    // let's pull some API with Axios, and some error occurred, say the Axios fetch failed
} catch (e: AxiosError) {
    //         ^^^^^^^^^^ Error 
}

The above syntax is not possible because that’s not how errors work in JavaScript: anything can be thrown, so TypeScript only allows the catch variable to be typed as any or unknown. Code that would be logical to write in TypeScript cannot be written that way in JavaScript.
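
A working pattern, as a minimal sketch: type the caught value as unknown and narrow it – axios ships an isAxiosError type guard for exactly this purpose (the endpoint below is hypothetical):

import axios from 'axios';

async function fetchData() {
    try {
        await axios.get('https://example.com/api');  // hypothetical endpoint
    } catch (e: unknown) {
        if (axios.isAxiosError(e)) {                 // narrows e to AxiosError
            console.error('Request failed:', e.response?.status);
        } else {
            throw e;                                 // not an Axios error – rethrow
        }
    }
}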

A quick recap of TypeScript

  • Strongly typed language developed by Microsoft.
  • Code written in TypeScript compiles to native JavaScript
  • TypeScript extends JavaScript with the ability to statically assign types.

TypeScript supports modern editions of the ECMAScript standards, and code written using them is compiled with the possibility of execution on platforms that support older versions of the standards. This means that a TS programmer can take advantage of ES2015 and newer features, such as modules, arrow functions, classes, spread, and destructuring, while the output still runs in existing environments that do not yet support those standards.

The language is backward compatible with JavaScript and supports modern editions of the ECMAScript standards. If you feed the compiler pure JavaScript, it will accept your JS without complaint – that is valid TypeScript code, just with no type benefits. Therefore one can write mixed code: ES2015 and newer features, such as modules, arrow functions, classes, spread, and destructuring, using TypeScript syntax, with some methods implemented in plain, untyped JS – and that will also be valid.

So, what benefits does the language provide?

  • System for working with modules/classes – you can create interfaces, modules, classes
  • You can inherit interfaces (including multiple inheritance), classes
  • You can describe your own data types
  • You can create generic interfaces
  • You can describe the type of a variable (or the properties of an object), or describe what interface the object to which the variable refers should have
  • You can describe the method signature
  • Using TypeScript (as opposed to JavaScript) significantly improves the development process because the IDE receives type information from the TS compiler in real time.

On the other hand, there are some disadvantages:

  • In order to use some external tool (say, a library or framework) with TypeScript, the signature of each method of each module of this tool must be described so that the compiler does not throw errors; otherwise, it simply won’t know about your tool. For the majority of popular tools, the declarations can be found in the DefinitelyTyped repository (the @types packages). Otherwise, you’ll have to describe them yourself (see the declaration file sketch in the d.ts section below)
  • Probably the biggest disadvantage is the learning curve and building a habit of thinking and writing TypeScript
  • At least in my experience, more time is spent on development compared to vanilla JavaScript: in addition to the actual implementation, it is also necessary to describe all the involved interfaces and method signatures.

One of the serious advantages of TS over JS is the ability to create, in various IDEs, a development environment that allows you to identify common errors directly in the process of entering code. Using TypeScript in large projects can lead to increased reliability of programs that can still be deployed in the same environments where regular JS applications run.

Types

There are expectations that it is enough to learn a few types to start writing in TypeScript and automatically get good code. Indeed, one can simply write types in the code, explaining to the compiler that in this place we are expecting a variable of a certain type. The compiler will prompt you on whether you can do this or not:

let helloFunction = () => { "hello" };  // braces make this a statement body, so nothing is returned
let text: string = helloFunction();
// TS2322: Type 'void' is not assignable to type 'string'.

However, the reality is that it will not be possible to outsource the entire work to the compiler. And let’s see what types you need to learn to progress with TypeScript:

  • basic types from JavaScript: the primitives boolean, number, string, symbol, bigint, and undefined, plus object.
  • instead of the function type, TypeScript has Function and a separate syntax similar to the arrow function, but for defining types. The object type will mean that the variable can be assigned any object literals in TypeScript.
  • TypeScript-specific primitives: null, unknown, any, void, unique symbol, never, this.
  • Next are the types standard for many object-oriented languages: array, tuple, generic
  • Named types in TypeScript refer to the practice of giving a specific, descriptive name to a type for variables, function parameters, and return types: type User = { name: string; age: number; }
  • TypeScript doesn’t stop there: it offers union and intersection. Special literal types often work in conjunction with string, number, boolean, template string. These are used when a function takes not just a string, but a specific literal value, like “foo” or “bar”, and nothing else. This significantly improves the descriptive power of the code.
  • TypeScript also has typeof, keyof, indexed, conditional, mapped, import, await, const, predicate.

These are just the basic types; many others are built on top of them: for example, the composite Record<K, T>, or the internal types Uppercase<T> and Lowercase<T>, which are not defined in TypeScript itself: they are intrinsic types implemented by the compiler.
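
A quick sketch of a few operators from that list – typeof, keyof, and indexed access – over a hypothetical config object:

const config = { host: 'localhost', port: 3000 };

type Config = typeof config;       // { host: string; port: number }
type ConfigKey = keyof Config;     // 'host' | 'port'
type PortType = Config['port'];    // number – indexed access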

P.S. – do not use Function; pass a predefined type or use arrow notation instead:

// replace unknown with the types you're using with the function
F extends (...args: unknown[]) => unknown
// example of a function with 'a' and 'b' arguments that returns 'Hello'
const func = (a: number, b: number): string => 'Hello'

Map and d.ts files

TypeScript comes with its own set of unique file types that can be puzzling at first glance. Among these are the *.map and *.d.ts files. Let’s demystify these file types and understand their roles in TypeScript development.

What are .map Files in TypeScript?

.map files, or source map files, play a crucial role in debugging TypeScript code. These files are generated alongside the JavaScript output when TypeScript is compiled. The primary function of a .map file is to create a bridge between the original TypeScript code and the compiled JavaScript code. This linkage is vital because it allows developers to debug their TypeScript code directly in tools like browser developer consoles, even though the browser is executing JavaScript.

When you’re stepping through code or setting breakpoints, the .map file ensures that the debugger shows you the relevant TypeScript code, not the transpiled JavaScript. This feature is a game-changer for developers, as it simplifies the debugging process and enhances code maintainability.

Understanding .d.ts Files in TypeScript

On the other side, we have *.d.ts files, known as declaration files in TypeScript. These files are pivotal for using JavaScript libraries in TypeScript projects. Declaration files act as a bridge between the dynamically typed JavaScript world and the statically typed TypeScript world. They don’t contain any logic or executable code but provide type information about the JavaScript code to the TypeScript compiler.

For instance, when you use a JavaScript library like Lodash or jQuery in your TypeScript project, the *.d.ts files for these libraries describe the types, function signatures, class declarations, etc., of the library. This allows TypeScript to understand and validate the types being used from the JavaScript library, ensuring that the integration is type-safe and aligns with TypeScript’s static typing system.
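
If a library ships with no declarations at all, you can write a minimal one yourself. A sketch, assuming a hypothetical untyped package called legacy-greeter:

// typings/legacy-greeter.d.ts – a hand-written declaration file
// 'legacy-greeter' is a made-up module name for illustration
declare module 'legacy-greeter' {
    export function greet(name: string): string;
}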

The Key Differences

The primary difference between *.map and *.d.ts files lies in their purpose and functionality. While .map files are all about enhancing the debugging experience by mapping TypeScript code to its JavaScript equivalent, .d.ts files are about providing type information and enabling TypeScript to understand JavaScript libraries.

In essence, .map files are about the developer’s experience during the debugging process, whereas .d.ts files are about maintaining type safety and smooth integration when using JavaScript libraries in TypeScript projects.
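
Both kinds of files are switched on from tsconfig.json. A minimal excerpt (sourceMap produces the *.map files, declaration produces the *.d.ts files):

{
    "compilerOptions": {
        "sourceMap": true,      // emit *.js.map next to the compiled JS
        "declaration": true,    // emit *.d.ts declaration files
        "declarationMap": true  // optional: lets editors jump from .d.ts back to the .ts source
    }
}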

Cheatsheet

I won’t get into the details of the basics and operations, which are visualized in this cheat sheet instead:

Better than a thousand words!

Let’s take a look at some other great features not mentioned in the above cheatsheet.

What the heck is Any?

In TypeScript you can use the any data type. It allows you to work with any type of data without errors – just like regular JavaScript. In fact, what you do by using any is downgrade your TypeScript to plain JavaScript by eliminating type safety. The best way is to look at it in action:

let car: any = 2024;
console.log(typeof car)// number

car = "Mercedes";
console.log(typeof car)// string

car = false;
console.log(typeof car)// boolean

car = null;
console.log(typeof car)// object

car = undefined;
console.log(typeof car)// undefined

The car variable can be assigned any data type. any is an evil data type, indeed! If you are going to use the any data type everywhere, then TypeScript immediately becomes unnecessary – just write code in JavaScript.

TypeScript can also determine what data type will be used if we do not specify it. We can replace the code from the first example with this one.

const caterpie01: number = 2021;     // number
const caterpie001 = 2021;            // number  - that was chosen by typescript for us

const Metapod01: string = "sleepy";  // string
const Metapod001 = "sleepy";         // string  - that was chosen by typescript for us

const Wartortle01: boolean = true;   // boolean
const Wartortle001 = true;           // boolean - that was chosen by typescript for us

This is a more readable and shorter way of writing. And of course, we won’t be able to assign any other data type to a variable.

let caterpie = 2021;            // in TypeScript this variable becomes number after assignment
caterpie = "text";              // type error, as the type was already inferred upon assignment

On the other hand, if we don’t specify a data type for the function’s arguments, TypeScript will use any type. Let’s look at the code:

const sum = (a, b) => {
    return a + b;
}
sum(2021, 9);

In strict mode, the above code will error out with a “Parameter 'a' implicitly has an 'any' type; Parameter 'b' implicitly has an 'any' type” message, but it will work perfectly outside of strict mode, in the same manner as JavaScript code would. I assume that was done intentionally for compatibility.
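
The fix is simply to annotate the parameters; it is the "strict": true (or at least "noImplicitAny") compiler option in tsconfig.json that surfaces the error in the first place:

const sum = (a: number, b: number): number => {
    return a + b;
}
sum(2021, 9);   // 2030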

Null checks and undefined

As simple as that:

if(value){
  ...
}

The expression in parentheses will be evaluated as true if it is not one of the following:

  • null
  • undefined
  • NaN
  • an empty string
  • 0
  • false

TypeScript supports the same type conversion rules as JavaScript does.
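
One subtle consequence, as a sketch: a truthiness check also filters out legitimate values like 0 or an empty string, so prefer an explicit null/undefined comparison when those values are valid:

function printCount(count?: number){
    if (count) {
        console.log(count)       // never prints 0 – filtered out by truthiness
    }
    if (count != null) {
        console.log(count)       // prints 0 too: only null/undefined are excluded
    }
}

printCount(0)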

Type checking

Surprise-surprise, TypeScript is all about types and annotations. Therefore, the next piece of advice may seem weird, but it is legit: avoid explicit type checking if you can.

Instead, always prefer to specify the types of variables, parameters, and return values to harness the full power of TypeScript. This makes future refactoring easier.

function travelToTexas(vehicle: Bicycle | Car){
    if (vehicle instanceof Bicycle) {
        vehicle.pedal(currentLocation, newLocation('texas'));
    } else if (vehicle instanceof Car) {
        vehicle.drive(currentLocation, newLocation('texas'));
    }
}

The below rewrite looks much more readable and is therefore easier to maintain. And there are no ugly type-checking clauses:

type Vehicle = Bicycle | Car;

function travelToTexas(vehicle: Vehicle){
    vehicle.move(currentLocation, newLocation('texas'));
}

Generics

Use them wherever possible. This will help you better identify the types being used in your code. Unfortunately, they are often underused – and in vain.

function returnType<T>(arg: T): T {
    return arg;
}

returnType<string>('MJ knows Sitecore') // works well
returnType<number>('MJ knows Sitecore') // errors out
// ^ Argument of type 'string' is not assignable to parameter of type 'number'.

If you are using a specific type, be sure to use extends:

type AddDot<T extends string> = `${T}.` // receives only strings, otherwise errors out

Ternary operators with extends

extends is very useful: it helps to determine whether one type derives from another in the type hierarchy (any -> number -> …) and make a comparison. Thanks to combining extends with the ternary operator, you can create awesome conditional constructs like this:

type IsNumber<T> = T extends number ? true : false

type A = IsNumber<5>        // true
type B = IsNumber<'lol'>    // false

Readonly and Consts

Use readonly by default to avoid accidentally overwriting types in your interface.

interface User {
    readonly name: string;
    readonly surname: string;
}

Let’s say you have an array that comes from the backend [1, 2, 3, 4] and you need to use only these four numbers, that is, make the array immutable. The as const construction can easily handle this:

const arr = [1, 2, 3, 4]            // current type is number[], so you can assign any number
arr[3] = 5  // [1, 2, 3, 5]

const arrConst = [1, 2, 3, 4] as const   // now the type is locked as readonly [1, 2, 3, 4]
arrConst[3] = 5  // errors out

Satisfies

That is a relatively recent feature (since version 4.9), but it is very helpful as it allows you to impose restrictions without changing the inferred type. This can be very useful when you manage different types that do not share common methods. For example:

type Numbers = readonly [1, 2, 3];
type Val = { value: Numbers | string };

// the valid accepted values could be the numbers 1, 2, 3, or a string
const myVal: Val = { value: 'a' };

So far so good. Let’s say we have a string and must convert it to capital letters. Intuitively trying the code below without satisfies will get you an error:

myVal.value.toUpperCase()
// ^ Property 'toUpperCase' does not exist on type 'Numbers'.

So the right way to deal with it is to use satisfies, and then everything works fine:

const myVal2 = { value: 'a' } satisfies Val; // checked against Val, while value keeps its inferred string type
myVal2.value.toUpperCase()                   // works well and returns 'A'

Unions

Sometimes you can see code like this:

interface User {
    loginData: "login" | "username";
    getLogin(): void;
    getUsername(): void;
}

This code is bad because you can use a username but still call getLogin() and vice versa. To prevent this, it is better to use unions instead:

interface UserWithLogin {
    loginData: "login";
    getLogin(): void;
}
interface UserWithUsername {
    loginData: "username";
    getUsername(): void;
}

type User = UserWithLogin | UserWithUsername;

What is even more impressive – unions can be iterated over in mapped types, producing a key for each member:

type Numbers = 1 | 2 | 3
type OnlyRuKeys = { [R in Numbers]: boolean }
// {1: boolean, 2: boolean, 3: boolean}

Utility Types

TypeScript Utility Types are a set of built-in types that can be used to manipulate data types in code.

  • Required<T> – makes all properties of an object of type T required
  • Partial<T> – makes all properties of an object of type T optional
  • Readonly<T> – makes all properties of an object of type T read-only
  • NonNullable<Type> – retrieves a type from Type, excluding null and undefined
  • Parameters<Type> – retrieves the types of the arguments of function Type
  • ReturnType<Type> – retrieves the return type of function Type
  • InstanceType<Type> – retrieves the type of an instance of class Type
  • Record<Keys, Type> – creates a type that is a record with keys defined by the first parameter and values of the type defined by the second parameter
  • Pick<T, K extends keyof T> – selects properties of an object of type T with the keys specified in K
  • Omit<T, K extends keyof T> – selects properties of an object of type T, excluding those specified in K
  • Exclude<UnionType, ExcludedMembers> – excludes certain types from the union type
  • Uppercase<StringType>, Lowercase<StringType>, Capitalize<StringType>, Uncapitalize<StringType> – string manipulation utility types that change the case of the string according to their name

I won’t get into the details on all of them but will showcase just a few for better understanding:

// 1. Required
interface Person {
    name?: string;
    age?: number;
}

let requiredPerson: Required<Person>;   // now requiredPerson could be as { name: string; age: number; }

// 2. Partial
interface Person {
    name: string;
    age: number;
}

let partialPerson: Partial<Person>;    // now partialPerson could be as { name?: string; age?: number; }

// 3. NonNullable
let value: string | null | undefined;
let nonNullableValue: NonNullable<typeof value>;    // now nonNullableValue is a string

// 4. Awaited
async function getData(): Promise<string> {
    return 'hello';
}
let awaitedData: Awaited<ReturnType<typeof getData>>;     // now awaitedData could be 'hello'

// 5 Case management
type Uppercased = Uppercase<'hello'>;       // 'HELLO'
type Lowercased = Lowercase<'Hello'>;       // 'hello'
type Capitalized = Capitalize<'hello'>;     // 'Hello'
type Uncapitalized = Uncapitalize<'Hello'>; // 'hello'

These above are just a few examples of utility types in TypeScript. To find out more please refer to the official documentation.

Interface vs types

This is one of the questions that causes the most confusion for C# developers trying TypeScript. What’s the difference between the two signatures below?

interface X {
    a: number
    b: string
}

type X = {
    a: number
    b: string
};

You can use both to describe the shape of an object or a function signature, it’s just the syntax differs.

Unlike interface, the type alias can also be used for other types such as primitives, unions, and tuples:

// primitive
type Name = string;

// object
type PartialPointX = { x: number; };
type PartialPointY = { y: number; };

// union
type PartialPoint = PartialPointX | PartialPointY;

// tuple
type Data = [number, string];

Both can be extended, but again, the syntax differs. Also, an interface can extend a type alias, and vice versa:

// Interface extends interface
interface PartialPointX { x: number; }
interface Point extends PartialPointX { y: number; }

// Type alias extends type alias
type PartialPointX = { x: number; };
type Point = PartialPointX & { y: number; };

// Interface extends type alias
type PartialPointX = { x: number; };
interface Point extends PartialPointX { y: number; }

// Type alias extends interface
interface PartialPointX { x: number; }
type Point = PartialPointX & { y: number; };

A class can implement an interface or a type alias, both in exactly the same way. However, classes and interfaces are considered static blueprints, therefore they cannot implement or extend a type alias that names a union type.
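
A short sketch of that restriction (the type and class names are illustrative):

type Movable = { move(): void };

class Robot implements Movable {        // OK: a non-union type alias can be implemented
    move() { console.log('moving'); }
}

type Shape = { kind: 'circle' } | { kind: 'square' };

// class MyShape implements Shape {}    // Error: a class can only implement an object type
//                                      // or intersection of object types, not a union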

Unlike a type alias, an interface can be defined multiple times and will be treated as a single interface (with members of all declarations being merged):

// These two declarations become:
// interface Point { x: number; y: number; }
interface Point { x: number; }
interface Point { y: number; }

const point: Point = { x: 1, y: 2 };

So, when should I use one over the other? If simplified: use types when you might need a union or intersection; use interfaces when you want to use extends or implements. There is no hard and fast rule though – use what works for you. I admit that may still be confusing, so please read this discussion for more understanding.

Error Handling

Throwing errors is always good: if something goes wrong at runtime, you can terminate the execution at the right moment and investigate the error using the stack trace in the console.

Always use rejects with errors

JavaScript and TypeScript allow you to throw any object, and a promise can also be rejected with any reason object. It is recommended to use the throw syntax with the Error type, because then your error can be caught at a higher level of code with catch syntax. Instead of this incorrect block:

function calculateTotal(items: Item[]): number{
    throw 'Not implemented.';
}

function get(): Promise<Item[]> {
    return Promise.reject('Not implemented.');
}

use the below:

function calculateTotal(items: Item[]): number{
    throw new Error('Not implemented.');
}
function get(): Promise<Item[]> {
    return Promise.reject(new Error('Not implemented.'));
}
// the above Promise could be rewritten with an async equivalent:
async function get(): Promise<Item[]> {
    throw new Error('Not implemented.');
}

The advantage of using Error types is that they are supported by try/catch/finally syntax, and all errors implicitly have a stack property, which is very powerful for debugging. There is an alternative: don’t use the throw syntax and always return custom error objects instead. TypeScript makes this even easier, and it works like a charm!
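
A minimal sketch of that alternative – a hand-rolled Result type (the names are illustrative, not a library API):

type Result<T> = { ok: true; value: T } | { ok: false; error: Error };

function parsePrice(input: string): Result<number> {
    const n = Number(input);
    return Number.isNaN(n)
        ? { ok: false, error: new Error(`Not a number: ${input}`) }
        : { ok: true, value: n };
}

const result = parsePrice('42');
if (result.ok) {
    console.log(result.value);            // narrowed to the success branch
} else {
    console.error(result.error.message);  // narrowed to the failure branch
}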

Dealing with Imports

Last but not least, I want to share some tips and best practices for using imports. With simple, clear, and logical import statements, you can inspect the dependencies of your current code much faster.

Make sure you use the following good practices for import statements:

  • Import statements should be in alphabetical order and grouped.
  • Unused imports must be removed (linter will come to help you)
  • Named imports must be in alphabetical order, i.e. import {A, B, C} from 'mod';
  • Import sources should be in alphabetical order in groups, i.e.: import * as foo from 'a'; import * as bar from 'b';
  • Import groups are indicated by blank lines.
  • Groups must follow the following order:
    • Polyfills (i.e. import 'reflect-metadata';)
    • Node built-in modules (i.e. import fs from 'fs';)
    • External modules (i.e. import { query } from 'itiriri';)
    • Internal modules (i.e. import { UserService } from 'src/services/userService';)
    • Modules from the parent directory (i.e. import foo from '../foo'; import qux from '../../foo/qux';)
    • Modules from the same or related directory (i.e. import bar from './bar'; import baz from './bar/baz';)

These rules are not obvious to beginners, but they come with time once you start paying attention to the minor details. Just compare the ugly block of imports:

import { TypeDefinition } from '../types/typeDefinition';
import { AttributeTypes } from '../model/attribute';
import { ApiCredentials, Adapters } from './common/api/authorization';
import fs from 'fs';
import { ConfigPlugin } from './plugins/config/configPlugin';
import { BindingScopeEnum, Container } from 'inversify';
import 'reflect-metadata';

against the same imports, nicely structured as below:

import 'reflect-metadata';

import fs from 'fs';
import { BindingScopeEnum, Container } from 'inversify';

import { AttributeTypes } from '../model/attribute';
import { TypeDefinition } from '../types/typeDefinition';

import { ApiCredentials, Adapters } from './common/api/authorization';
import { ConfigPlugin } from './plugins/config/configPlugin';

Which one is faster to read?

Conclusion

In my opinion, TypeScript’s superpower is that it provides feedback as you write the code, not at runtime: the IDE nicely prompts what argument to use when calling a function, and types are defined and navigable. All that comes at the cost of a minor compilation overhead and a slightly increased learning curve. That is a fair deal: TypeScript will stay with us for a long time and won’t go away – the earlier you learn it, the more time and effort it will save later.

Of course, I intentionally left plenty of raw TypeScript features overboard, so as not to overload the readers. If the majority of you are coming as professionals, either starting with Next.js development in general or switching from other programming languages and tech stacks (like .NET with C#), where I was myself a year ago – this is definitely the right volume and agenda to start with. There are, of course, a lot of powerful TypeScript features for exploration beyond today’s post, such as:

  1. Decorators
  2. Namespaces
  3. Type Guards and Differentiating Types
  4. Type Assertions
  5. Ambient Declarations
  6. Advanced Types (e.g., Conditional Types, Mapped Types, Template Literal Types)
  7. Module Resolution and Module Loaders
  8. Project References and Build Optimization
  9. Declaration Merging
  10. Using TypeScript with WebAssembly

But I think that is enough for this post. Hope you’ll enjoy writing strongly typed code with TypeScript!


A crash course of Next.js: Caching, Authentication and Going Live tasks (part 4)

This series is my Next.js study summary. Although it focuses on vanilla Next.js, all the features are applicable to the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – layouts, styles, and fonts, the powerful Image and Script components, and of course – TypeScript.
  • In part 3 we went through the nuances of Next.js routing and explained middleware

In this post we are going to talk about pre-go-live optimizations, such as caching and reducing bundle size, as well as authentication.

Going live considerations

  • use caching wherever possible (see below)
  • make sure that the server and database are located (deployed) in the same region
  • minimize the amount of JavaScript code
  • delay loading heavy JS until you actually use it
  • make sure logging is configured correctly
  • make sure error handling is correct
  • configure 500 (server error) and 404 (page not found) pages
  • make sure the application meets the best performance criteria
  • run Lighthouse to test performance, best practices, accessibility, and SEO. Use an incognito mode to ensure the results aren’t distorted
  • make sure that the features used in your application are supported by modern browsers
  • improve performance by using the following:
    • next/image and automatic image optimization
    • automatic font optimization
    • script optimization

Caching

Caching reduces response time and the number of requests to external services. Next.js automatically adds caching headers to statics from _next/static, including JS, CSS, images, and other media.

Cache-Control: public, max-age=31536000, immutable

To revalidate the cache of a page that was previously rendered into static markup, use the revalidate setting in the getStaticProps function.

Please note: running the application in development mode using next dev disables caching:

Cache-Control: no-cache, no-store, max-age=0, must-revalidate

Caching headers can also be used in getServerSideProps and API routes for dynamic responses. An example of using stale-while-revalidate:

// The value is considered fresh and actual for 10 seconds (s-maxage=10).
// If the request is repeated within 10 seconds, the previous cached value
// considered fresh. If the request is repeated within 59 seconds,
// cached value is considered stale, but is still used for rendering
// (stale-while-revalidate=59)
// The request is then executed in the background and the cache is filled with fresh data.
// After the update, the page will display the new value
export async function getServerSideProps({ req, res }){
    res.setHeader(
        'Cache-Control',
        'public, s-maxage=10, stale-while-revalidate=59'
    )
    return {
        props: {}
    }
}

Reducing the JavaScript bundle volume/size

To identify what’s included in each JS bundle, you can use the following tools:

  • Import Cost – extension for VSCode showing the size of the imported package
  • Package Phobia – a service for determining the “cost” of adding a new development dependency (devDependency) to a project
  • Bundle Phobia – a service for determining how much adding a dependency will increase the size of the build
  • Webpack Bundle Analyzer – Webpack plugin for visualizing the bundle in the form of an interactive, scalable tree structure

Each file in the pages directory is compiled into a separate bundle during the next build command. You can use dynamic imports to lazily load components and libraries.
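
A minimal sketch using next/dynamic (the component path is hypothetical):

import dynamic from 'next/dynamic'

// the heavy component goes into its own chunk, fetched only when rendered
const HeavyChart = dynamic(() => import('../components/HeavyChart'), {
    ssr: false,                       // skip server-side rendering for this chunk
    loading: () => <p>Loading...</p>  // shown while the chunk is being fetched
})

export default function Dashboard(){
    return <HeavyChart />
}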

Authentication

Authentication is the process of identifying who a user is, while authorization is the process of determining their permissions (or “authority”, in other words), i.e. what the user has access to. Next.js supports several authentication patterns.

Authentication Patterns

Each authentication pattern determines the strategy for obtaining data. Next, you need to select an authentication provider that supports the selected strategy. There are two main authentication patterns:

  • using static generation to load state on the server and retrieve user data on the client side
  • receiving user data from the server to avoid a flash of unauthenticated content (a visible switch of application states)

Authentication when using static generation

Next.js automatically detects that a page is static if the page does not have blocking methods to retrieve data, such as getServerSideProps. In this case, the page renders the initial state received from the server and then requests the user’s data on the client side.

One of the advantages of using this pattern is the ability to deliver pages from a global CDN and preload them using next/link. This results in a reduced Time to Interactive (TTI).

Let’s look at an example of a user profile page. On this page, the template (skeleton) is first rendered, and after executing a request to obtain user data, this data is displayed:

// pages/profile.js
import useUser from '../lib/useUser'
import Layout from './components/Layout'

export default function Profile(){
    // get user data on the client side
    const { user } = useUser({ redirectTo: '/login' })
    // loading status received from the server
    if (!user || user.isLoggedIn === false) {
        return <Layout>Loading...</Layout>
    }
    // after the request is completed, user data is displayed
    return (
        <Layout>
            <h1>Your profile</h1>
            <pre>{JSON.stringify(user, null, 2)}</pre>
        </Layout>
    )
}

Server-side rendering authentication

If a page has an asynchronous getServerSideProps function, Next.js will render that page on every request using the data from that function.

export async function getServerSideProps(context){
    return {
        props: {} // will get passed down to the component as props
    }
}

Let’s rewrite the above example. If there is a session, the Profile component will receive the user prop. Note the absence of a template:

// pages/profile.js
import withSession from '../lib/session'
import Layout from '../components/Layout'

export const getServerSideProps = withSession(async (req, res) => {
    const user = req.session.get('user')
    if (!user) {
        return {
            redirect: {
                destination: '/login',
                permanent: false
            }
        }
    }
    return {
        props: {
            user
        }
    }
})
export default function Profile({ user }){
    // display user data, no loading state required
    return (
        <Layout>
            <h1>Your profile</h1>
            <pre>{JSON.stringify(user, null, 2)}</pre>
        </Layout>
    )
}

The advantage of this approach is preventing the display of unauthenticated content before the redirect is performed. Note that requesting user data in getServerSideProps blocks rendering until the request is resolved. Therefore, to avoid creating bottlenecks and increasing Time to First Byte (TTFB), you should ensure that the authentication service performs well.

Authentication Providers

When integrating with a user database, consider using one of the following solutions:

  • next-iron-session – a low-level, encrypted, stateless session
  • next-auth – a full-fledged authentication system with built-in providers (Google, Facebook, GitHub, and similar), JWT, JWE, email/password, magic links, etc.
  • with-passport-and-next-connect – good old Node Passport also works for this case

A crash course of Next.js: Routing and Middleware (part 3)

This series is my Next.js study summary. Although it focuses on vanilla Next.js, all the features are applicable to the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – layouts, styles, and fonts, the powerful Image and Script components, and of course – TypeScript.

In this post we are going to talk about routing with Next.js – pages, API Routes, layouts, and Middleware.

Routing

Next.js routing is based on the concept of pages. A file located within the pages directory automatically becomes a route. index.js files map to the root of their directory:

  • pages/index.js -> /
  • pages/blog/index.js -> /blog

The router supports nested files:

  • pages/blog/first-post.js -> /blog/first-post
  • pages/dashboard/settings/username.js -> /dashboard/settings/username

You can also define dynamic route segments using square brackets:

  • pages/blog/[slug].js -> /blog/:slug (for example: blog/first-post)
  • pages/[username]/settings.js -> /:username/settings (for example: /johnsmith/settings)
  • pages/post/[...all].js -> /post/* (for example: /post/2021/id/title)

Navigation between pages

You should use the Link component for client-side routing:

import Link from 'next/link'

export default function Home(){
    return (
        <ul>
            <li>
                <Link href="/">
                    Home
                </Link>
            </li>
            <li>
                <Link href="/about">
                    About
                </Link>
            </li>
            <li>
                <Link href="/blog/first-post">
                    First post
                </Link>
            </li>
        </ul>
    )
}

So we have:

  • / → pages/index.js
  • /about → pages/about.js
  • /blog/first-post → pages/blog/[slug].js

For dynamic segments feel free to use interpolation:

import Link from 'next/link'

export default function Post({ posts }){
    return (
        <ul>
            {posts.map((post) => (
                <li key={post.id}>
                    <Link href={`/blog/${encodeURIComponent(post.slug)}`}>
                        {post.title}
                    </Link>
                </li>
            ))}
        </ul>
    )
}

Or leverage a URL object:

import Link from 'next/link'

export default function Post({ posts }){
    return (
        <ul>
            {posts.map((post) => (
                <li key={post.id}>
                    <Link
                        href={{
                            pathname: '/blog/[slug]',
                            query: { slug: post.slug },
                        }}
                    >
                        <a>{post.title}</a>
                    </Link>
                </li>
            ))}
        </ul>
    )
}

Here we pass:

  • pathname is the page name under the pages directory (/blog/[slug] in this case)
  • query is an object having a dynamic segment (slug in this case)

To access the router object within a component, you can use the useRouter hook or the withRouter utility, and it is recommended practice to use useRouter.

Dynamic routes

If you want to create a dynamic route, you need to add [param] to the page path.

Let’s consider a page pages/post/[id].js having the following code:

import { useRouter } from 'next/router'

export default function Post(){
    const router = useRouter()
    const { id } = router.query
    return <p>Post: {id}</p>
}

In this scenario, routes /post/1, /post/abc, etc. will match pages/post/[id].js. The matched parameter is passed to a page as a query string parameter, along with other parameters.

For example, for the route /post/abc the query object will look as: { "id": "abc" }

And for the route /post/abc?foo=bar like this: { "id": "abc", "foo": "bar" }

Route parameters overwrite query string parameters, so the query object for the /post/abc?id=123 route will look like this: { "id": "abc" }

For routes with several dynamic segments, the query is formed in exactly the same way. For example, the page pages/post/[id]/[cid].js will match the route /post/123/456, and the query will look like this: { "id": "123", "cid": "456" }

Navigation between dynamic routes on the client side is handled using next/link:

import Link from 'next/link'

export default function Home(){
    return (
        <ul>
            <li>
                <Link href="/post/abc">
                    Leads to `pages/post/[id].js`
                </Link>
            </li>
            <li>
                <Link href="/post/abc?foo=bar">
                    Also leads to `pages/post/[id].js`
                </Link>
            </li>
            <li>
                <Link href="/post/123/456">
                    <a>Leads to `pages/post/[id]/[cid].js`</a>
                </Link>
            </li>
        </ul>
    )
}

Catch All routes

Dynamic routes can be extended to catch all paths by adding an ellipsis (...) in square brackets. For example, pages/post/[...slug].js will match /post/a, /post/a/b, /post/a/b/c, etc.

Please note: slug is not hard-defined, so you can use any name of choice, for example, [...param].

The matched parameters are passed to the page as query string parameters (slug in this case) with an array value. For example, a query for /post/a will have the following form: {"slug": ["a"]} and for /post/a/b this one: {"slug": ["a", "b"]}

Routes for intercepting all the paths can be optional – for this, the parameter must be wrapped in one more square bracket ([[...slug]]). For example, pages/post/[[...slug]].js will match /post, /post/a, /post/a/b, etc.

Catch-all routes are what Sitecore uses by default, and can be found at src\[your_nextjs_app_name]\src\pages\[[...path]].tsx.

The main difference between the regular and optional “catchers” is that the optional ones match a route without parameters (/post in our case).

Examples of query object:

{}                      // GET `/post` (empty object)
{ "slug": ["a"] }       // GET `/post/a` (single-element array)
{ "slug": ["a", "b"] }  // GET `/post/a/b` (array with multiple elements)

Please note the following features:

  • static routes take precedence over dynamic ones, and dynamic routes take precedence over catch-all routes, for example:
    • pages/post/create.js – will match /post/create
    • pages/post/[id].js – will match /post/1, /post/abc, etc., but not /post/create
    • pages/post/[...slug].js – will match /post/1/2, /post/a/b/c, etc., but not /post/create and /post/abc
  • pages processed using automatic static optimization will be hydrated without route parameters, i.e. query will be an empty object ({}). After hydration, Next.js will run an update of the application to fill query with the route parameters.

Imperative approach to client-side navigation

As I mentioned above, in most cases the Link component from next/link is sufficient for client-side navigation. However, you can also leverage the router from next/router:

import { useRouter } from 'next/router'

export default function ReadMore(){
    const router = useRouter()
    return (
        <button onClick={() => router.push('/about')}>
            Read about
        </button>
    )
}

Shallow Routing

Shallow routing allows you to change the URL without re-running data fetching methods, including the getServerSideProps and getStaticProps functions. We receive the updated pathname and query through the router object (obtained via useRouter() or withRouter()) without losing the component’s state.

To enable shallow routing, set { shallow: true }:

import { useEffect } from 'react'
import { useRouter } from 'next/router'

// current `URL` is `/`
export default function Page(){
    const router = useRouter()
    useEffect(() => {
        // perform navigation after first rendering
        router.push('?counter=1', undefined, { shallow: true })
    }, [])
    useEffect(() => {
        // value of `counter` has changed!
    }, [router.query.counter])
}

When updating the URL, only the state of the route will change.

Please note: shallow routing only works within a single page. Let’s say we have a pages/about.js page and we do the following:

router.push('?counter=1', '/about?counter=1', { shallow: true })

In this case, the current page is unloaded, a new one is loaded, and the data fetching methods are rerun (regardless of the presence of { shallow: true }).

API Routes

Any file located under the pages/api folder maps to /api/* and is considered an API endpoint, not a page. Because of its non-UI nature, this code remains server-side and does not increase the client bundle size. The below example pages/api/user.js returns a status code of 200 and data in JSON format:

export default function handler(req, res){
    res.status(200).json({ name: 'Martin Miles' })
}

The handler function receives two parameters:

  • req – an instance of http.IncomingMessage + several built-in middlewares (explained below)
  • res – an instance of http.ServerResponse + some helper functions (explained below)
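
For TypeScript projects, Next.js exports types for both parameters. A typed variant of the handler above, as a sketch:

import type { NextApiRequest, NextApiResponse } from 'next'

type Data = { name: string }

export default function handler(req: NextApiRequest, res: NextApiResponse<Data>){
    res.status(200).json({ name: 'Martin Miles' })
}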

You can use req.method for handling various methods:

export default function handler(req, res){
    if (req.method === 'POST') {
        // handle POST request
    } else {
        // handle other request types
    }
}

Use Cases

An entire API can be built using API routes, leaving an existing API untouched. Other cases could be:

  • hiding the URL of an external service
  • using environment variables (stored on the server) for accessing external services safely and securely

Nuances

  • API routes do not process CORS headers by default. This is done with the help of middleware (see below)
  • API routes cannot be used with next export

As for dynamic routing segments, they are subject to the same rules as the dynamic parts of page routes I explained above.

Middlewares

API routes include the following built-in middlewares that transform the incoming request (req):

  • req.cookies – an object containing the cookies included in the request (default value is {})
  • req.query – an object containing the query string (default value is {})
  • req.body – an object containing the request body parsed according to the Content-Type header, or null

Middleware Customizations

Each route can export a config object with Middleware settings:

export const config = {
    api: {
        bodyParser: {
            sizeLimit: '2mb'
        }
    }
}

  • bodyParser: false – disables body parsing (the body is returned as a raw Stream)
  • bodyParser.sizeLimit – the maximum request body size, in any format supported by the bytes library
  • externalResolver: true – tells the server that this route is being processed by an external resolver, such as express or connect

Adding Middlewares

Let’s consider adding cors middleware. Install the module using npm install cors and add cors to the route:

import Cors from 'cors'
// initialize middleware

const cors = Cors({
    methods: ['GET', 'HEAD']
})
// helper function that waits for a middleware to resolve successfully
// before executing further code,
// or rejects if the middleware fails
const runMiddleware = (req, res, fn) =>
    new Promise((resolve, reject) => {
        fn(req, res, (result) =>
            result instanceof Error ? reject(result) : resolve(result)
        )
    })
export default async function handler(req, res){
    // this actually runs middleware
    await runMiddleware(req, res, cors)
    // the rest `API` logic
    res.json({ message: 'Hello world!' })
}

Helper functions

The response object (res) includes a set of methods to improve the development experience and speed up the creation of new endpoints.

This includes the following:

  • res.status(code) – function for setting the status code of the response
  • res.json(body) – to send a response in JSON format, body should be any serializable object
  • res.send(body) – to send a response, body could be a string, object or Buffer
  • res.redirect([status,] path) – to redirect to the specified page, status defaults to 307 (temporary redirect)
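
A small sketch combining these helpers in one handler (the route and field names are illustrative):

export default function handler(req, res){
    if (req.method !== 'GET') {
        return res.status(405).send('Method Not Allowed')
    }
    if (!req.query.id) {
        return res.redirect(307, '/')
    }
    res.status(200).json({ id: req.query.id })
}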

This concludes part 3. In part 4 we’ll talk about caching, authentication, and considerations for going live.

A crash course of Next.js: rendering strategies and data fetching (part 1)

This series is my Next.js study summary. Although it focuses on vanilla Next.js, all the features are applicable to the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

Next.js is a React-based framework designed for developing web applications with functionality beyond SPA. As you know, the main disadvantage of SPA is problems with indexing pages of such applications by search robots, which negatively affects SEO. Recently the situation has begun to change for the better, at least the pages of my small SPA-PWA application started being indexed as normal. However, Next.js significantly simplifies the process of developing multi-page and hybrid applications. It also provides many other exciting features I am going to share with you over this course.

Pages

In Next.js a page is a React component that is exported from a file with a .js, .jsx, .ts, or .tsx extension located in the pages directory. Each page is associated with a route by its name. For example, the page pages/about.js will be accessible at /about. Please note that the page should be exported by default using export default:

export default function About(){
  return <div>About us</div>
}

The route for pages/posts/[id].js will be dynamic, i.e. such a page will be available at posts/1, posts/2, etc.

By default, all pages are pre-rendered. This results in better performance and SEO. Each page is associated with a minimum amount of JS. When the page loads, JS code runs, which makes it interactive (this process is called hydration).

There are 2 forms of pre-rendering: static generation (SSG, which is the recommended approach) and server-side rendering (SSR, which is more familiar to those working with other server-side frameworks such as ASP.NET). The first involves generating HTML at build time and reusing it on every request; the second generates markup on the server for each request. Generating static markup is the recommended approach for performance reasons.

In addition, you can use client-side rendering, where certain parts of the page are rendered by client-side JS.

SSG

SSG can generate pages both with and without data.

Without data case is pretty obvious:

export default function About(){
  return <div>About us</div>
}

There are 2 potential scenarios to generate a static page with data:

  1. Page content depends on external data: getStaticProps is used
  2. Page paths depend on external data: getStaticPaths is used (usually in conjunction with getStaticProps)

1. Page content depends on external data

Let’s assume that the blog page receives a list of posts from an external source, like a CMS:

// TODO: requesting `posts`
export default function Blog({ posts }){
    return (
        <ul>
            {posts.map((post) => (
                <li key={post.id}>{post.title}</li>
            ))}
        </ul>
    )
}

To obtain the data needed for pre-rendering, the asynchronous getStaticProps function must be exported from the file. This function is called during build time and allows you to pass the received data to the page in the form of props:

export default function Blog({ posts }){
    // ...
}

export async function getStaticProps(){
    const posts = await (await fetch('https://site.com/posts'))?.json()
    // the below syntax is important
    return {
        props: {
            posts
        }
    }
}

2. Page paths depend on external data

To handle the pre-rendering of a static page whose paths depend on external data, an asynchronous getStaticPaths function must be exported from a dynamic page (for example, pages/posts/[id].js). This function is called at build time and allows you to define the pre-rendered paths:

export default function Post({ post }){
    // ...
}
export async function getStaticPaths(){
    const posts = await (await fetch('https://site.com/posts'))?.json()
    // pay attention to the structure of the returned array
    const paths = posts.map((post) => ({
        params: { id: post.id }
    }))
    // `fallback: false` means any path not returned here results in a 404 page
    return {
        paths,
        fallback: false
    }
}

pages/posts/[id].js should also export the getStaticProps function to retrieve the post data with the specified id:

export default function Post({ post }){
    // ...
}
export async function getStaticPaths(){
    // ...
}
export async function getStaticProps({ params }){
    const post = await (await fetch(`https://site.com/posts/${params.id}`)).json()
    return {
        props: {
            post
        }
    }
}

SSR

To handle server-side page rendering, the asynchronous getServerSideProps function must be exported from the file. This function will be called on every page request.

function Page({ data }){
    // ...
}
export async function getServerSideProps(){
    const data = await (await fetch('https://site.com/data'))?.json()
    return {
        props: {
            data
        }
    }
}

Data Fetching

There are 3 functions to obtain the data needed for pre-rendering:

  • getStaticProps (SSG): Retrieving data at build time
  • getStaticPaths (SSG): Define dynamic routes to pre-render pages based on data
  • getServerSideProps (SSR): Get data on every request

getStaticProps

A page that exports an asynchronous getStaticProps function is pre-rendered using the props returned by that function.

export async function getStaticProps(context){
    return {
        props: {}
    }
}

context is an object with the following properties:

    • params – route parameters for pages with dynamic routing. For example, if the page title is [id].js, the params will be { id: … }
    • preview – true if the page is in preview mode
    • previewData – data set using setPreviewData
    • locale – current locale (if enabled)
    • locales – supported locales (if enabled)
    • defaultLocale – default locale (if enabled)

getStaticProps returns an object with the following properties:

  • props – optional object with props for the page
  • revalidate – an optional number of seconds after which the page is regenerated. The default value is false – regeneration is performed only with the next build
  • notFound is an optional boolean value that allows you to return a 404 status and the corresponding page, for example:
export async function getStaticProps(context) {
    const res = await fetch('/data')
    const data = await res.json()
    if (!data) {
        return {
            notFound: true
        }
    }
    return {
        props: {
            data
        }
    }
}

Note that notFound is not required in fallback: false mode, since in this mode only the paths returned by getStaticPaths are pre-rendered.

Also, note that notFound: true means a 404 is returned even if the previous page was successfully generated. This is designed to support cases where user-generated content is removed.

  • redirect is an optional object that allows you to perform redirections to internal and external resources, which must have the form {destination: string, permanent: boolean}:
export async function getStaticProps(context){
    const res = await fetch('/data')
    const data = await res.json()
    if (!data) {
        return {
            redirect: {
                destination: '/',
                permanent: false
            }
        }
    }
    return {
        props: {
            data
        }
    }
}

Note 1: Build-time redirects are not currently allowed. Such redirects must be declared at next.config.js.

Note 2: Modules imported at the top level for use within getStaticProps are not included in the client bundle. This means that server code, including reads from the file system or from a database, can be written directly in getStaticProps.

Note 3: fetch() in getStaticProps should only be used when fetching resources from external sources.

Use Cases

  • rendering data is available at build time and does not depend on the user request
  • data comes from a headless CMS
  • data can be cached in plain text (and not user-specific data)
  • the page must be pre-rendered (for SEO purposes) and must be very fast – getStaticProps generates HTML and JSON files that can be cached using a CDN

Use it with TypeScript:

import { GetStaticProps } from 'next'
export const getStaticProps: GetStaticProps = async (context) => { }

To get the desired types for props you should use InferGetStaticPropsType<typeof getStaticProps>:

import { InferGetStaticPropsType } from 'next'
type Post = {
    author: string
    content: string
}
export const getStaticProps = async () => {
    const res = await fetch('/posts')
    const posts: Post[] = await res.json()
    return {
        props: {
            posts
        }
    }
}
export default function Blog({ posts }: InferGetStaticPropsType<typeof getStaticProps>){
    // posts will be strongly typed as `Post[]`
}

ISR: Incremental static regeneration

Static pages can be updated after the application is built. Incremental static regeneration allows you to use static generation at the individual page level without having to rebuild the entire project.

Example:

const Blog = ({ posts }) => (
    <ul>
        {posts.map((post) => (
            <li key={post.id}>{post.title}</li>
        ))}
    </ul>
)
// Executes while building on a server.
// It can be called repeatedly as a serverless function when invalidation is enabled and a new request arrives
export async function getStaticProps(){
    const res = await fetch('/posts')
    const posts = await res.json()
    return {
        props: {
            posts
        },
        // `Next.js` will try regenerating a page:
        // - when a new request arrives
        // - at least once every 10 seconds
        revalidate: 10 // in seconds
    }
}
// Executes while building on a server.
// It can be called repeatedly as a serverless function if the path has not been previously generated
export async function getStaticPaths(){
    const res = await fetch('/posts')
    const posts = await res.json()
    // Retrieving paths for posts pre-rendering
    const paths = posts.map((post) => ({
        params: { id: post.id }
    }))
    // Only these paths will be pre-rendered at build time
    // `{ fallback: 'blocking' }` will render pages serverside in the absence of a corresponding path
    return { paths, fallback: 'blocking' }
}
export default Blog

When requesting a page that was pre-rendered at build time, the cached page is displayed.

  • The response to any request to such a page before 10 seconds have elapsed is also instantly returned from the cache
  • After 10 seconds, the next request also receives a cached version of the page in response
  • After this, page regeneration starts in the background
  • After successful regeneration, the cache is invalidated and a new page is displayed. If regeneration fails, the old page remains unchanged

Technical nuances

getStaticProps

  • Since getStaticProps runs at build time, it cannot use data from the request, such as query params or HTTP headers.
  • getStaticProps only runs on the server, so it should not be used to call your own internal API routes – write that server code directly inside getStaticProps instead
  • when using getStaticProps, not only HTML is generated, but also a JSON file. This file contains the results of getStaticProps and is used by the client-side routing mechanism to pass props to components
  • getStaticProps can only be used in a page component. This is because all the data needed to render the page must be available
  • in development mode getStaticProps is called on every request
  • in preview mode, the page is rendered on every request

getStaticPaths

A page with dynamic routing that exports an asynchronous getStaticPaths function will be pre-generated for all the paths returned by that function.

export async function getStaticPaths(){
    return {
        paths: [
            { params: {} }
        ],
        fallback: true | false | 'blocking'
    }
}

Paths

paths defines which paths will be pre-rendered. For example, if we have a page with dynamic routing called pages/posts/[id].js, and the getStaticPaths exported on that page returns paths as below:

return {
    paths: [
        { params: { id: '1' } },
        { params: { id: '2' } },
    ]
}

Then the posts/1 and posts/2 pages will be statically generated based on the pages/posts/[id].js component.

Please note that the name of each params must match the parameters used on the page:

  • if the page is named pages/posts/[postId]/[commentId], then params should contain postId and commentId
  • if the page uses a catch-all route, for example pages/[...slug], params must contain slug as an array. For example, if such an array looks like ['foo', 'bar'], then the page /foo/bar will be generated
  • if the page uses an optional catch-all route, passing null, [], undefined, or false will cause the top-level route to be rendered. For example, applying slug: false to pages/[[...slug]] will generate the page / (see the sketch below)
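
A minimal sketch illustrating the catch-all cases above, assuming a hypothetical pages/[[...slug]].js page:

export async function getStaticPaths() {
    return {
        paths: [
            // pre-renders /foo/bar
            { params: { slug: ['foo', 'bar'] } },
            // for an optional catch-all, `false` pre-renders the root route /
            { params: { slug: false } },
        ],
        fallback: false
    }
}
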
Fallback
  • if fallback is false, the missing path will be resolved by a 404 page
  • if fallback is true, the behavior will be as follows:
      • the paths from getStaticPaths will be generated at build time using getStaticProps
      • a missing path will not be resolved by a 404 page. Instead, a fallback page will be returned in response to the request
      • the requested HTML and JSON are generated in the background. This includes calling getStaticProps
      • the browser receives JSON for the generated path. This JSON is used to automatically render the page with the required props. From the user's perspective, this looks like switching between the fallback and full pages
      • the new path is added to the list of pre-rendered pages

Please note: fallback: true is not supported when using next export.

Fallback pages

In the fallback version of the page:

  • the page props will be empty
  • You can determine that a fallback page is being rendered using the router: router.isFallback will be true
// pages/posts/[id].js
import { useRouter } from 'next/router'
function Post({ post }){
    const router = useRouter()
    // This will be displayed if the page has not yet been generated, 
    // Until `getStaticProps` finishes its work
    if (router.isFallback) {
        return <div>Loading...</div>
    }
    // post rendering
}
export async function getStaticPaths(){
    return {
        paths: [
            { params: { id: '1' } },
            { params: { id: '2' } }
        ],
        fallback: true
    }
}
export async function getStaticProps({ params }){
    const res = await fetch(`/posts/${params.id}`)
    const post = await res.json()
    return {
        props: {
            post
        },
        revalidate: 1
    }
}
export default Post

In what cases might fallback: true be useful? It can be useful with a truly large number of static pages that depend on data (for example, a very large e-commerce storefront). We want to pre-render all the pages, but we know the build will take forever.

Instead, we generate a small set of static pages and use fallback: true for the rest. When requesting a missing page, the user will see a loading indicator for a while (while getStaticProps is doing its job) and then see the page itself. After that, the generated page will be served in response to each subsequent request.

Please note: fallback: true does not refresh the generated pages. Incremental static regeneration is used for this purpose instead.

If fallback is set to 'blocking', a missing path will likewise not be resolved by the 404 page, but there will be no transition between the fallback and normal pages. Instead, the requested page will be generated on the server and sent to the browser, so the user, after a short wait, will immediately see the finished page.

Use cases for getStaticPaths

getStaticPaths is used to pre-render pages with dynamic routing. Use it with TypeScript:

import { GetStaticPaths } from 'next'

export const getStaticPaths: GetStaticPaths = async () => { }
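
To keep the params shape consistent between getStaticPaths and getStaticProps, both generics can be typed against the same interface – a minimal sketch, assuming a pages/posts/[id].js page:

import { GetStaticPaths, GetStaticProps } from 'next'
import { ParsedUrlQuery } from 'querystring'

// the shape of `params` for pages/posts/[id].js
interface PostParams extends ParsedUrlQuery {
    id: string
}

export const getStaticPaths: GetStaticPaths<PostParams> = async () => ({
    paths: [{ params: { id: '1' } }, { params: { id: '2' } }],
    fallback: false
})

export const getStaticProps: GetStaticProps<{ id: string }, PostParams> = async ({ params }) => ({
    // with fallback: false, params is always defined for the generated paths
    props: { id: params!.id }
})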

Technical nuances:

  • getStaticPaths must be used in conjunction with getStaticProps. It cannot be used in conjunction with getServerSideProps
  • getStaticPaths only runs on the server at build time
  • getStaticPaths can only be exported from a page component
  • in development mode getStaticPaths runs on every request

getServerSideProps

The page from which the asynchronous getServerSideProps function is exported will be rendered on every request using the props returned by this function.

export async function getServerSideProps(context){
    return {
        props: {}
    }
}

context is an object with the following properties (a short usage sketch follows the list):

  • params: see getStaticProps
  • req: HTTP IncomingMessage object (incoming message, request)
  • res: HTTP response object
  • query: object representation of the query string
  • preview: see getStaticProps
  • previewData: see getStaticProps
  • resolvedUrl: a normalized version of the requested URL, with the _next/data prefix removed and the original query string values included
  • locale: see getStaticProps
  • locales: see getStaticProps
  • defaultLocale: see getStaticProps
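
For example, a minimal sketch using req, res, and query – the cookie and header values are illustrative:

export async function getServerSideProps({ req, res, query }) {
    // read a cookie from the incoming request
    const token = req.cookies.session ?? null
    // set a caching header on the response
    res.setHeader('Cache-Control', 's-maxage=60, stale-while-revalidate')
    return {
        props: {
            token,
            search: query.q ?? ''
        }
    }
}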

getServerSideProps should return an object with the following fields:

  • props – see getStaticProps
  • notFound – see getStaticProps
    export async function getServerSideProps(context){
        const res = await fetch('/data')
        const data = await res.json()
        if (!data) {
            return {
                notFound: true
            }
        }
        return {
            props: {}
        }
    }
  • redirect — see getStaticProps
    export async function getServerSideProps(context){
        const res = await fetch('/data')
        const data = await res.json()
        if (!data) {
            return {
                redirect: {
                    destination: '/',
                    permanent: false
                }
            }
        }
        return {
            props: {}
        }
    }

getServerSideProps has the same features and limitations as getStaticProps.

Use cases for getServerSideProps

getServerSideProps should only be used when you need to pre-render the page based on request-specific data. Use it with TypeScript:

import { GetServerSideProps } from 'next'
export const getServerSideProps: GetServerSideProps = async () => { }

To get the expected types for props you should use InferGetServerSidePropsType<typeof getServerSideProps>:

import { InferGetServerSidePropsType } from 'next'
type Data = {}
export async function getServerSideProps(){
    const res = await fetch('/data')
    const data = await res.json()
    return {
        props: {
            data
        }
    }
}
function Page({ data }: InferGetServerSidePropsType<typeof getServerSideProps>){
    // ...
}
export default Page

Technical nuances:

  • getServerSideProps only runs on the server
  • getServerSideProps can only be exported from a page component

Client-side data fetching

If a page has frequently updated data but doesn't need to be pre-rendered (for SEO purposes), it is perfectly possible to fetch its data directly on the client side.

The Next.js team recommends using the useSWR hook for this purpose, which provides features such as data caching, cache revalidation, focus tracking, periodic refetching, etc.

import useSWR from 'swr'
const fetcher = (url) => fetch(url).then((res) => res.json())
function Profile(){
    const { data, error } = useSWR('/api/user', fetcher)
    if (error) return <div>Error while retrieving the data</div>
    if (!data) return <div>Loading...</div>
    return <div>Hello, {data.name}!</div>
}

However, you’re not limited to it, old good React query fetch() functions also perfectly work for this purpose.

This concludes part 1. In part 2 we’ll talk about UI-related things coming to OOB with Next.js – layouts, styles, and fonts powerful features, Image and Script components, and of course – TypeScript.

The correct way of creating own components with XM Cloud

I occasionally see folks creating components in XM Cloud incorrectly, so I decided to create and share this guidance with you.

So, there are five steps involved in creating your own component:

  1. Create an SXA module that will serve as a pluggable container for all the necessary assets for your components, if not done yet.
  2. The easiest way to create a component is to clone a mostly matching existing one. If you need to rely on datasource items, clone the one that already leverages datasource. SPE scaffolding script will do the rest of the job for you. Make sure you assign a newly created component to a module from Step 1 above.
  3. Now, having a module with component(s), you need to make it visible to your website by adding the module to a chosen site. This empowers your site to show a corresponding section in the toolbox with the newly created component(s), ready to use straight away in both Experience Editor and Pages.
  4. You need to ensure the Component Name field references the corresponding TSX file in the codebase, i.e. /src/<jss_app>/src/components/<your component>.tsx, or one level deeper inside a folder carrying the same name as the component. Since the component is fully cloned from an existing one, you can also copy the original TSX file under a new name and it will work straight away. A minimal sketch of such a file follows this list.
  5. Don’t forget to add all the new locations to the serialization, and check it into source control along with its codebase. Here are the locations to keep in mind:
    • /sitecore/layout/Placeholder Settings/Feature/Tailwind Components
    • /sitecore/templates/Feature/Tailwind Components
    • /sitecore/layout/Layouts/Feature/Tailwind Components
    • /sitecore/layout/Renderings/Feature/Tailwind Components
    • /sitecore/media library/Feature/Tailwind Components
    • /sitecore/templates/Branches/Feature/Tailwind Components
    • /sitecore/system/Settings/Feature/Tailwind Components
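
For reference, a minimal sketch of what such a component file could look like – the component and field names are illustrative, not taken from any particular starter kit:

import { Text, Field } from '@sitecore-jss/sitecore-jss-nextjs';

// illustrative datasource shape – your cloned component defines its own fields
type MyPromoProps = {
  fields: {
    Title: Field<string>;
  };
};

// the Default rendering variant picked up by the components factory
export const Default = (props: MyPromoProps): JSX.Element => (
  <div className="my-promo">
    <Text field={props.fields.Title} />
  </div>
);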

To make things simple, I recorded a walkthrough (and actually a “talkthrough”) showing the entire process:

Hope you find this video helpful!

The ultimate guide to Sitecore XM Cloud

If you ask anyone in the Sitecore community what the biggest hype of 2022 was, there wouldn't be any answer other than XM Cloud. Let's take a look at this shiny new SaaS offering!

Content

Overview, Architecture, Features

Introduction

Sitecore XM Cloud is a cloud-based platform that provides tools and features for managing and optimizing digital customer experiences. The platform is a headless SaaS solution initially designed to resolve certain pain points the XP platform previously suffered from, including:

  • Platform upgrades: one of those time-consuming and uncertain activities that eat up a significant budget but bring very few new features on their own. I have written a decent series of blog posts exclusively devoted to Sitecore platform upgrades and the accompanying complexities. With the SaaS model, upgrades are done for you automatically by Sitecore.
  • Infrastructure management and maintenance, security maintenance, and taking care of scalability: everything that was previously done by end clients or partners is now fully managed by Sitecore itself.
  • A bottleneck of Sitecore developers: the scarcity of developers with specific knowledge of Sitecore internals could create a staffing bottleneck when building and maintaining Sitecore-based applications, aggravated by the complexity of the platform, limited documentation, dependence on the Sitecore support team, and limited resources. Since most of that is now provided by the vendor, development shifts mainly to the “head” and requires generic front-end skills that are much easier to come by.
  • Platform architectural bottleneck: in addition to all the above, scaling up your CD servers in a non-headless environment used to be quite a bottleneck and came at a cost, compared to XM Cloud which has no CDs at all and only serves content via highly available Edge API endpoints.

That results in a drastically improved development process, velocity, and reliability, not to mention saved budgets. Clients can mostly focus on building sites for visitors, while the platform itself is taken care of by the vendor.

Licensing

Before talking about licensing, it is important to understand how this SaaS offering is structured.

XM Cloud hosting platform includes the concepts of organizations, projects, and environments. An organization is the equivalent of an XM Cloud subscription. To initially access the XM Cloud portal, you must be a member of an “Organization” admin account at Sitecore. Once you have access to this account, you can manage your users, give them the required access or even set them as admins, create projects, set up environments for those projects, and promote your code through these environments. In old XP terminology, each environment is a Sitecore instance.

There are three environments per project: one production and two non-prod – typically for development, staging, testing, or other purposes. With the XM Cloud portal, you can easily manage and access all of your Sitecore instances in the cloud.

Coming back to the licensing model: it is similar to a subscription license for the Sitecore platforms, but its cost model has been simplified considerably and is based on traffic and consumption.

Each XM Cloud license includes a certain number of projects and environments, and you can use these to manage your different Sitecore instances. Depending on your specific requirements, you may be able to use different projects to handle different instances. You can use environments to set up and configure your development, staging, testing, and production environments, as well as any other environments you may need. If you have any questions about how to best use projects and environments, or if you have any other licensing questions, it is recommended that you speak with your Account Manager, who will be able to provide more information and guidance to meet your specific needs.

Interestingly, in order to run local Docker containers with XM Cloud, you are required to have a valid Sitecore license. I cannot say whether your existing partner's license file will work for local container development; being a Sitecore MVP, I was given a universal Sitecore license file which worked well. For building and deploying source code with the built-in Deploy App you don't need to provide a license file – it is assumed from your organization (subscription).

xm cloud license

Architecture

In very simplified terms, the architecture could be explained by the below diagram provided by Sitecore:

XM Cloud architecture

Not all the internals of the architecture are shown above (e.g. ACR, Kubernetes, etc. are omitted), but should you really care about anything within the dashed area? All of that is Sitecore-managed, and developers typically focus on developing the front-end website (also known as the “head”), which is most often built with Next.js. Of course, other frameworks would also work with XM Cloud; however, there is a lot of plumbing to be done that Next.js provides out of the box.

One of the most important features of XM Cloud is the Webhook Framework. It is built into XM Cloud in the same way as it is in the 10.3 platform. In a composable world of decoupled SaaS products, webhooks are used to notify external services about changes in XM Cloud – for example, to validate and even cancel workflow state transitions.

For example, in the good old XP platform we used events to get notified that publishing to CD had completed. One possible scenario was using the Core database to pass remote events, as that was an architectural feature of a monolithic platform. In a composable world that cannot be the case: systems cannot share resources that way and can only communicate through APIs. You don't have CDs; you publish to Edge, which reasonably also has its own webhooks. You could also utilize webhooks on a git repository or on the Vercel side, for example.

With some obvious architectural limitations, it is possible to customize XM Cloud in a similar way as we did with XP by applying patches, but the expectation is that developers will customize less and less as the platform grows. Functionally, these customizations would focus on data and synchronization rather than patching system features.

Speaking about the drawbacks of XM Cloud, I could name single-region geolocation, which you must specify upfront.

Cloud Portal

It is a visual dashboard of your organization. Here you get all your tools in one place, based on what your subscription includes. In addition, there are shortcuts to the documentation, and you can access Support from the portal as well.

cloud portal

You can create projects and environments right from this portal. Choose between a starter template or your own GitHub repository; if the latter is chosen, then once you grant the XM Cloud Deploy App access to your account and pick the desired repository, it will perform the deployment in the background.

You choose a specific git branch for each environment (for example, the main branch deploys to the production environment, while the develop branch deploys to testing) and can also enable auto-deploy upon each commit to the chosen branch. There is nothing else to set up or configure on top of that, such as CI/CD pipelines – the Deploy App already knows what to do.

Is GitHub the only way to provide source code for build and deployment? The answer is both yes and no: the GUI indeed currently supports only GitHub, with later plans to add support for other popular version control hosting providers, such as Azure DevOps, GitLab, Bitbucket, etc. But from the CLI you have more configuration options.

And not just that: almost everything you can do on the portal with the GUI, you can also do remotely with the CLI. However, many of the Cloud Portal tools are not available for local development: Pages, Components, Explorer, Deploy App, and the Dashboard itself all run exclusively in the cloud.

So, how would I develop on my local rendering host and then test it with Pages or Experience Editor running in the cloud? Currently, there's a workaround: configure tunneling for the local rendering host with a reverse proxy like ngrok or localtunnel, so that your local rendering host server becomes reachable from outside.

If using default GitHub is not an option, or you want to customize the automation and/or set up your own CI/CD pipeline, there's another Cloud Portal feature named Authentication Clients – an access token generator for XM Cloud.

This is how the documentation describes it:

“When you generate an authentication client, the client creates credentials that include a client ID and a client secret. You can use the credentials to request a JSON Web Token for your CM instance or to request a JWT for Experience Edge XM.”

So, that is an effective tool for creating tokens to authorize custom tooling for automation and/or your own CI/CD pipeline.
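
As a sketch of how such credentials might be exchanged for a JWT via the standard OAuth client-credentials flow – note that the token endpoint and audience values below are assumptions, so verify them against the documentation:

// a sketch of the standard OAuth client-credentials exchange; the endpoint and
// audience values are assumptions – check the official docs for the exact ones
const res = await fetch('https://auth.sitecorecloud.io/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
        client_id: process.env.XMC_CLIENT_ID,       // from your Authentication Client
        client_secret: process.env.XMC_CLIENT_SECRET,
        audience: 'https://api.sitecorecloud.io',   // assumed audience value
        grant_type: 'client_credentials'
    })
})
const { access_token } = await res.json()
// use access_token as a Bearer token when calling the CM or automation APIs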

The third tab, named Status, provides a basic but helpful overview of the deployment process.

I would recommend reading through the official Cloud Portal documentation.

Identity

With the XP platform we used to have an Identity Server; however, it is no longer useful in a genuinely composable world. Sitecore had to rethink the authentication approach and implemented a new unified identity system that offers SSO across all applications of the composable DXP family.

Integrations

XM Cloud integrates with Content Hub DAM and CMP out of the box – the CMP/DAM connector is now included in the base image.

Sitecore Experience Edge

Experience Edge for XM comes out of the box with XM Cloud and is the default destination for publishing content. It is a content delivery service that provides scalable, globally replicated access to Sitecore Experience Platform items, layout, and media through an API. It uses CDN networks to distribute published content across the globe, ensuring a fast experience at scale.

The Edge for XM connector is required to publish the content from your pages and components in Sitecore XM Cloud to the highly scalable Sitecore Experience Edge delivery platform. It is also included.

GraphQL schema used by Experience Edge in XM Cloud is the same as used for 10.3 platforms (with a minor difference in temporal query complexity limits). However, XM Cloud schema is different from those used in Content Hub and Content Hub One as they implement different underlying data structures.

Speaking about security: previously on XP, when we dealt with “protected” pages, security data traveled along with published content to the CD servers, where the Sitecore platform enforced it correctly. With XM Cloud we no longer have CD servers – content gets published to Edge regardless of those permissions. Of course, it is not immediately available to the outer world: without a valid API key it cannot be accessed. But instead of a fully compliant Sitecore CD server we now have a “head”, which is a totally detached piece of technology. Holding the API key, the head may or may not respect the security rules – Edge will serve the content anyway. So you should take extra care and test these things while developing the head application on your rendering host.

Familiar XP tools that have gone out of XM Cloud

Since we do not have xDB and xConnect within XM Cloud architecture, many features from XP did not find their way to the new composable world. We have to say goodbye to:

  • XDB
  • EXM
  • Identity Server
  • FXM
  • Path Analyzer

In addition, because of the exclusively headless architecture of XM Cloud, not having CD servers, and operating through Edge these features also went off this SaaS platform:

  • MVC renderings, including classical SXA
  • Publishing Service
  • Sitecore Forms
  • Custom Search Indexes
  • Universal Tracker

Sitecore Forms has gone simply because there is nowhere to submit the data back to. It is expected that Sitecore will come up with a sort of SaaS forms component to be used with the composable family of products. Meanwhile, you can consider using something like Jotforms, Marketo, Hubspot Forms, etc.

As for implementing search functionality: Solr is still a part of this platform to support XM's internal needs (the same as it does for XP), but without CD servers there is no longer a reason to have custom search indexes. Implementing a search feature for your website now requires a composable search component. I will talk through this in more detail below.

Sitecore platform features that still remain

  • Media Library exists and functions exactly the same as with XP
  • The above could be said about Workbox and workflows
  • Content Editor and Experience Editor remain untouched
  • We still have the luxury of poking the system internals with Sitecore PowerShell Extensions – this component comes built in, since it is a requirement for the equally built-in headless SXA
  • Many other less important features stay

As said above, custom search indexes are no longer available, and one has to select from a choice of composable options: Sitecore Search, Coveo, or Algolia are the first that come to mind. All these products are platform-agnostic and will integrate seamlessly with any site regardless of the underlying CMS or web engine.

There might be several approaches for implementing external search with XM Cloud:

  • Using search-based queries from Experience Edge directly. The development team can write searches against the Experience Edge endpoints and call them from their application.
  • Using a crawling search engine. Sitecore Search has been released and has already proven itself on several implementations, including the sitecore.com website. Currently it indexes rendered HTML markup; however, Edge support should arrive in future releases.
  • There's still the option of setting up and configuring your own external Solr solution and sending content to it. SearchStax Studio for XM Cloud could be another alternative.

Headless SXA

The headless part of SXA is provided with Sitecore XM Cloud, a cloud-based CMS platform. It allows developers to build and deploy headless websites using XM Cloud, taking advantage of all the known existing SXA features, such as Page Layout, Partial Layout and especially Rendering Variants.

In addition, new concepts were introduced: Page Branches, and standard values for layout on a per-website basis.

What I found really great was the out-of-box implementation of SEO-related things, such as:

  • sitemaps
  • robots.txt
  • redirect items
  • redirect maps
  • errors (404 and 500)

This version of SXA supports Next.js as a first-class citizen and therefore has a revised list of components, leaving only the most basic ones (here they are all). With years of SXA practice, cloning components and adding rendering variants to existing ones became the default way of doing things. Likewise, Rendering Variants remain with us in the headless world; however, Scriban is no longer there, replaced with Next.js. Here is an example of how rendering variants are defined within a single promo.tsx file in a Next.js implementation – there are two of them, named Default and WithText. It's really simple and similar to Scriban.

When you set up an empty site you get the default SXA headless components (on the components tab). Drag and drop works, including dragging components around the screen into position. The right-hand-side section features component properties, including styling.

component tab

There is a content tab that shows item properties similar to what it was before in Content Editor.

CLI and Serialization

Similar to the XP platform, XM Cloud includes built-in Sitecore Management Services in order to support the Sitecore CLI.

Almost everything that can be done through the Cloud Portal can also be done through the Sitecore CLI. The latest version of the CLI allows the creation of projects, new environments, deployments, and much more.

Sitecore CLI allows you to take existing serialized content and move it directly into XM Cloud. Since serialization is done in YAML format, it stays the same as with XP and is also compatible with Unicorn, so correctly formatted content is easily portable to XM Cloud. I want to emphasize that this only concerns serialized content: Unicorn itself is an XP module and cannot be installed on XM Cloud.


New Tools and Interfaces

The good old Content Editor and Experience Editor still remain with us, and that's good news for those who habitually stick to the interfaces they are used to. Nevertheless, this blog post will familiarize you with the totally new apps and interfaces coming with XM Cloud.

Sites

This application opens by default. It shows all of the websites available in the system, with a predictive-typing search bar to filter them. Clicking any of the sites opens it in the Pages app, as expected.

What is more interesting is the ability to create websites directly from this interface by clicking the “Create website” button.

create website

Pages

If you played around with Horizon in the latest versions of XP, such as 10.2, you will immediately recognize this user interface. Pages is built on the foundation of Horizon:

Pages application

The Pages application has 5 views which you select from the left navigation stripe:

  • Pages
  • Components
  • Content
  • Personalize
  • Analyze

I personally find it a little confusing that some views share names with whole apps. For example, by clicking the Components view of the Pages app in the left navigation bar, you will see a component selector from which you can drag and drop components onto the editable page. However, this is totally different from the Components application, which is a turbocharged tool for creating the components to be used on a page – the ones you later pick via the mentioned component selector of the Pages application.

I want to highlight that Pages was built with attention to minor details. It has very pleasant animation effects for many transitions, leaving you with a premium-experience feeling. There is no “Save” button anywhere, as the Pages application autosaves your changes.

It also supports multiple websites, selected from a dropdown. Preview, device experiences, the navigation tree – all these features sit in the same interface, as you would expect for a productive editing experience.

Components

The tool is still in its early stages, but it has a lot of potential. Its purpose is to enable content creators and marketers to create components without the need for a developer to start from scratch.

There are lots of styling options for specifying custom types and custom fonts – at the very least, it looks advanced.

Here we create the component and publish it, so that it becomes available in the component library of the Pages app. The intention is for us to be able to assemble entire components from basic page elements: images, links, and so forth. Starting with a grid, you simply add columns, set alignment, and apply other UI-related things.

But the most impressive feature of the Components app is that we not only create and customize individual components similar to the way we do in SXA – we now create entire components completely from scratch, style them, and hydrate them with data from an external data source. By saying external I mean really external: one of many 3rd-party content providers and headless CMSs, like Content Hub One, Contentful, Kontent.ai, etc. – and all that with zero code!

All you need to do is provide an API endpoint and use the mouse to select which fields from the source you want to map. For those behind a tough firewall or strict corporate policies, there is an option to paste in a sample JSON snippet directly and map against it. They don't ever have to hit the live API when creating a datasource.

What is even more mind-blowing is that you can feed the same single component from various external systems at the same time. Imagine a Promo Block component that you can easily drag and drop onto a page and that consists of:

  • Heading title that sits natively at Sitecore XM Cloud
  • Image taken from Content Hub DAM
  • Localized promo content taken from Contentful CMS
  • Call to action button that is wired up to CDP

In my opinion, that is a superpower of XM Cloud editing capabilities! I very much recommend watching this short video demonstrating the whole power of the new Components app.

Components

How is the data updated and invalidated with Components? It seems that the data is pulled in at the moment the connection is established and does not get refreshed at the moment of publication.

Personalize

A cut-down version of Sitecore Personalize comes built into the Pages interface of XM Cloud to allow basic view-event personalization. There isn't a one-to-one match between the old XP rulesets and the new ones, so a deeper analysis and re-evaluation of your personalization strategy is needed.

With XM Cloud there's a new way of doing personalization out of the box: we have a limited set of rules available, primarily rules based on the current session – from the day of the week and a visitor's country to the operating system of the visitor's current device. However, there's no way to manage the built-in Personalize tenant directly from the XM Cloud interface.

Checking carefully, you can still find some rules based on historical data: an example of a history-based rule is the number of pages a visitor has viewed within the past (at most) 30 days. Of course, once you enable full Sitecore Personalize, the toolset expands to a much wider range of options.

Unlike XP, where we would personalize each component individually, in XM Cloud we create page variants instead, so that we can define the audience that will be exposed to each page variant.

When configuring the audience, you see the default set of rules; I counted 14:

  • Point of sale: The visit is to point(s) of sale
  • Region: The visitor is in region(s) during the current visit
  • Country: The visitor is in country(s) during the current visit
  • Visit day of the month: The visit is on a day of the month that compares to number based on your organization’s time zone
  • Day of the week: The visit is on a day(s) of the week, based on your organization’s time zone
  • Month of visit: The visit is in month(s), based on your organization’s time zone
  • First referrer: The visitor comes from a URL that compares to referrer in the current visit
  • UTM value: The visit includes a UTM type that compares to UTM value
  • Operating system: The visitor is using operating system(s) during the current visit
  • Device: The visitor is using a device type(s) device during the current visit
  • Number of page views: The visitor has visited page within the past x days and the total number of page views compares to a number of views
  • First page: The visit has started on a page that compares to page during the current visit
  • Page view: The visitor has visited page name(s) during the current visit
  • New or returning visitor: The visitor is a specific type to your site

After specifying the audience, you then proceed to customize components for that page variant.

That is implemented in a very similar manner to how it was back in XP: specify the datasource item with the desired content, choose a different component to replace the default one, or simply choose to hide the component.

Personalize

The bigger question is how personalization works with Edge, without the CD servers previously responsible for executing personalization. It is important to understand that personalization logic now happens at the Vercel/Next.js middleware level.

A middleware package intercepts browser requests at Vercel in order to do audience matching with Personalize, and then serves the relevant content from the Edge, for example by substituting one piece of personalized Edge content for another. This approach does not imply any performance penalties, since all the parts of this chain are super-fast at the CDN/Jamstack level at Vercel, and also because the content is already cached at Experience Edge. I would recommend spending some time reading more details about the built-in personalization in XM Cloud by this link. A conceptual sketch of the mechanism follows.
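
To illustrate the idea only – this is NOT the actual Sitecore Personalize middleware, just a generic Next.js middleware sketch showing how a request can be rewritten to a page variant based on an audience match (the cookie name and variant route are hypothetical):

import { NextRequest, NextResponse } from 'next/server'

// conceptual sketch: match the audience, then rewrite the request so a
// personalized page variant is served from the already-cached Edge content
export function middleware(request: NextRequest) {
  // hypothetical audience flag, e.g. set after an earlier Personalize call
  const audience = request.cookies.get('audience')?.value
  if (audience === 'returning-visitor') {
    const url = request.nextUrl.clone()
    url.pathname = `/_variant/returning${url.pathname}` // hypothetical variant route
    return NextResponse.rewrite(url)
  }
  return NextResponse.next()
}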

Analyze

Another excellent embedded tool is page-level analytics powered by Boxever: pages built with Pages carry an embedded tracking tag, so analytics become available automatically. Just out of the box it will empower you to see:

  • the browsers and operating systems
  • the sources visitors came from
  • the pages: first page, entry page, top pages
  • the top visited countries
  • visits by source
  • visits by time of day

Analytics presents the data with a heat map for the time of day, with the most traffic shown by darker rectangles.

analyze

Similar to the previous example with Personalize, this analytics system is now built on top of Sitecore CDP. Previously all the analytics came out of Sitecore xDB, and without it in a SaaS cloud offering, the only option for providing built-in analytics and personalization was to rely on CDP and Personalize in some way.

Explorer

In addition to Pages, you can use Explorer for editing content. I think the main idea of this interface was to give editors the ability to switch rapidly between the visual and field-based editing interfaces.

Explorer presents the content with some similarity to Horizon running on XP: a kind of Content Editor with navigation and the ability to publish and edit items at the field level. You can do many expected content operations here, like modifying, copying, uploading and downloading media assets, etc.

Interestingly, Sitecore decided to walk away from a tree-based style of presenting the content structure towards a drill-in way of navigation. It definitely makes things clearer, and when we need to look at the entire structure of the content tree, the good old Content Editor is still there to help.

explorer

Site Identifier

This is an interface for adding a tracker to your websites if it is missing, so that they become trackable in the Analyze section of the Pages app.

site identifier

That was an overview of the most important new applications and interfaces coming with XM Cloud. The next part of this post will take you through the development experience with XM Cloud.


Development with XM Cloud

In the final post of the XM Cloud series, we'll talk about the development process with XM Cloud and its nuances. I assume you're either familiar with the traditional way of Sitecore development or at least have some basic understanding of the process.

In order to identify the differences in development, let's quickly recap the architectural features of XM Cloud that affect it:

  • there are no more Content Delivery (CD) servers available
  • therefore content gets published to Sitecore Edge
  • that, in turn, enforces headless development only

Sitecore Next.js SDK

That means we now have a “head” web application running on a rendering host. It would most likely be built with the Sitecore Next.js SDK – using Next.js on top of React with TypeScript, running on a Node-powered server. Next.js is a framework created by Vercel and built on top of the React library, designed to help developers build production-ready applications with little need for configuration and boilerplate code.

Of course, as a natively headless platform, Sitecore allows using any other framework or technology, like Vue, Angular, or .NET renderings; however, if React is your choice, then the Next.js SDK offers lots of features out of the box, including:

  • Powerful routing mechanism
  • Layout Service fetching and mapping
  • Placeholder Resolver
  • Multi-language support
  • Field Helper components
  • Components Factory
  • Experience Editor support
  • Sitecore Analytics support

I would recommend going through the relevant documentation thoroughly, as it covers all of this in detail.

A headless architecture consists of a back end with a layer of services and APIs and a front-end/client/user-facing application. The front-end application, or presentation layer, retrieves data from the CMS using API endpoints and uses that data to populate or hydrate the markup it generates.

Without CD servers, there is no longer a place to execute HTTP request pipeline extensions. Experience Edge provides just the raw published data in a headless way; there is no place to apply any logic. But what if you need to modify a request, for example?

In that case, all that logic moves to the “head” application, since the changes will need to be made in the hosting environment of the client application.

If the Next.js application is hosted on Vercel, a Vercel Serverless Function can be set up to process incoming HTTP requests and generate a response. For Cloudflare Pages, one can choose Cloudflare Worker for the same purpose, and so on.

Similarly, any other integrations or personalization previously taken place on a CD server should be refactored to work with the “head” application at the Rendering Host. It is recommended to use out-of-process API-based integration as much as possible to maintain loose coupling.

Development Modes

There are two approaches for developers to interact with XM Cloud and implement customer requirements: Edge Mode and Fully Local.

With Edge Mode, you develop a classical JavaScript-based rendering app on a local Node server. This app connects to the GraphQL endpoint of the Experience Edge to which XM Cloud publishes. This option does not require a license, as Edge relates to your cloud subscription, which is aware of your license. Therefore you can scale and outsource the development as much as you desire. You can also develop on Mac or Linux, as long as the OS can run a Node server.

Fully Local development requires containers running on a Windows-based environment and you need to have a valid license file to run it. If running on a client version of Windows (say, 10 or 11) make sure to reference 1809-based images in your .env file.

SOLUTION_BASE_IMAGE=mcr.microsoft.com/windows/nanoserver:1809
NETCORE_BUILD_IMAGE=mcr.microsoft.com/dotnet/sdk:6.0-nanoserver-1809
NETCORE_RELEASE_IMAGE=mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-1809

Once you run it, you may see a list of containers – it is pretty similar to what you can find with the classical XM platform:

xm cloud containers

There is a Solr container used for internal purposes, a container with a SQL server, and init containers for both of them. The main container is called CM, but please be aware (mainly for patching purposes) that the role name is not CM any longer but is XMCloud instead – here’s its definition:

<add key="role:define" value="XMCloud" />

Another important container here is Traefik – the one that accepts and distributes external traffic. One of the most frequent errors when setting up local containers is that developers often already have a running web server (for example, IIS) occupying port 80, preventing Traefik running in a container from exposing its own port 80, so it errors out.

The rest of the containers provide rendering hosts with “head” applications.

The principal difference of local container development is that instead of using Edge from XM Cloud, your rendering host connects to the GraphQL endpoint of the CM instance, which features the same API as the cloud-based Experience Edge (if you're using headless SXA, then the Edge endpoint is configured in the site settings item).

What can a developer control?

You can apply config patches to configure the CM instance, the same as you did before with XP (here's an example). There's also a guide on deploying customizations to the XM Cloud environment.

That means it is possible to turn things on and off, like overriding pipeline processors. Of course, there is no longer request manipulation with XM Cloud, as request processing shifts to the rendering host, while the httpRequestBegin and httpRequestEnd pipelines relate only to the CM instance itself. Nevertheless, there are lots of other familiar pipelines to deal with.

For your custom code, the Sitecore NuGet feed provides the relevant packages. All the packages follow a naming pattern starting with Sitecore.XMCloud.*, so one cannot mix them up with the packages for the XP/XM platforms. For example, the top-level package is called Sitecore.XMCloud.Platform. Versioning of these packages follows the SemVer version of XM Cloud.

Sitecore expects that developers will customize it less and less over time, with most of the customization happening around data synchronization rather than altering the system itself.

One piece of good news is that you still have PowerShell Extensions, though it is disabled by default. Enabling it requires setting the SITECORE_SPE_ELEVATION environment variable to either Confirm or Allow, with a new deployment for it to take effect. Once complete, you get full power, including direct database access. I recorded a short video showing how to enable SPE for your XM Cloud environment; if you also need SPE remoting, read this post for additional instructions.

Sitecore Connect for Content Hub, the CMP/DAM connector provided by XM Cloud, is also included in the base container image. There is a promise of adding connectors to other popular systems and DXPs over time.

Unfortunately, installing familiar modules is no longer available in the easy package-installation way we are used to.

Build configuration

There is a new important configuration file with XM Cloud – xmcloud.build.json. It configures build targets, editing hosts, XDT transforms (if any), serialization modules to deploy to the XM instance for a given environment, etc. Its structure looks as below:

{
  "deployItems": {
    "modules": []
  },
  "buildTargets": [],
  "renderingHosts": {
    "<key>": <value>
  },
  "transforms": [],
  "postActions": {
    "actions": {
      "<key>": <value>
    }
  }
}

For example, the default build pipeline of XM Cloud is configured to look at a single *.sln file at the root of your repository to process, but with xmcloud.build.json a developer can override this behavior and specify what exactly to build.

Read more about configuring build configuration with XM Cloud.

Sitecore CLI

With Sitecore XP, most of you likely used the CLI for serialization purposes, with some rare developers experimenting with the itemres plugin for creating items-as-resources from serialization modules. With XM Cloud, the Sitecore CLI becomes much more useful – lots of management can (and should) be done with it, like creating projects and environments, performing initial deployments, etc.

In order to install it, you must already have .NET 6 installed on your machine. The tooling consists of two parts – Sitecore Management Services running on the CM, and the command line tool itself. Management Services comes preinstalled for XM Cloud (in both the development container and the cloud instance), so it only remains to install the CLI from the project folder:

dotnet nuget add source -n Sitecore https://sitecore.myget.org/F/sc-packages/api/v3/index.json
dotnet tool install Sitecore.CLI

Here’s an example of using CLI to list projects running against my XM Cloud organization (subscription).

CLI shows list of projects

In XM Cloud terminology, a project is a set of environments. Each environment is in fact its own XM Cloud CM instance. Therefore, we can list a project's environments and take actions against each individual environment – let's say, publish its content to Experience Edge:

dotnet sitecore publish --pt Edge -n <environment-name>

The good old serialization also works well with the CLI in order to synchronize items between remote and local XM Cloud instances:

dotnet sitecore serialization pull -n development
dotnet sitecore serialization push -n development

Similarly to the 10.x platforms, you may benefit from Sitecore for Visual Studio if you prefer using a GUI for Sitecore Content Serialization. It is not free and requires purchasing a TDS license (notably, TDS itself does not support XM Cloud).

One of its greatest features, in my opinion, is the Sitecore Module Explorer, which lets you navigate, create, and modify Sitecore Content Serialization configuration files as items in a tree structure.

Head Application

Configuring the front-end application mainly comes down to these 3 settings (see the sketch after this list):

  • JSS_APP_NAME – the name of a site
  • GRAPH_QL_ENDPOINT should be pointing to Edge GraphQL
  • SITECORE_API_KEY for Edge Token, typically found under /sitecore/system/Settings/Services/API Keys
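
For example, a sketch of these settings in an .env.local file – the values are placeholders, and the endpoint shown is the publicly documented Experience Edge GraphQL URL, so verify it for your setup:

JSS_APP_NAME=my-sitecore-app
GRAPH_QL_ENDPOINT=https://edge.sitecorecloud.io/api/graphql/v1
SITECORE_API_KEY={YOUR-EDGE-TOKEN}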

I would recommend watching this video for the front-end developer setup for XM Cloud.

A really impressive productivity feature is running npm run start:connected, which sets up a watch mode against the source code so that changes are immediately reflected in the browser.

You can also configure an external editing host for XM Cloud instances so that you could also benefit from editing experiences in Pages.

Edge and GraphQL

There is a common misunderstanding that content published to Experience Edge becomes unprotected. It comes from the official documentation, which says: “Experience Edge for XM does not enforce security constraints on Sitecore content. You must apply publishing restrictions to avoid publishing content that you do not want to be publicly accessible.” In fact, publishing to Edge is not equal to making content publicly exposed – no one can access it without providing a valid Edge token, which you typically configure as the SITECORE_API_KEY parameter of your environment. The API key should never be shared publicly: each time you need to pull data out of Edge, delegate that task to your head application in order to keep the key hidden, rather than making such calls directly from JavaScript running in the client's browser and thus exposing your secrets. A minimal sketch of such a proxy follows.
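
A minimal sketch of proxying an Edge query through the head application so the Edge token never reaches the browser – the file location is a typical Next.js API route, and the query itself is illustrative (consult the Edge schema for the real shape):

// pages/api/content.ts – endpoint and key come from server-side env vars only
import type { NextApiRequest, NextApiResponse } from 'next'

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const response = await fetch(process.env.GRAPH_QL_ENDPOINT as string, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      sc_apikey: process.env.SITECORE_API_KEY as string, // the Edge token stays server-side
    },
    body: JSON.stringify({
      // illustrative query – adjust to your content structure
      query: '{ item(path: "/sitecore/content/Home", language: "en") { name } }',
    }),
  })
  res.status(200).json(await response.json())
}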

Previously with XP/XM it was possible to extend the GraphQL Preview endpoint with additional features like comparisons for integers and dates, coordinates, multi-field searches, faceting, etc. It is worth mentioning that with XM Cloud you cannot make any changes around the GraphQL endpoints via pipelines, because it uses Edge. In addition to searching fields by value, you can still compare with EQ / NEQ and CONTAINS / NCONTAINS, order by field names both ways, paginate, and combine predicates with logical AND and OR.

You can read more about Edge Schema at this link. Also, it is worth mentioning that Edge has its own webhooks. This can be helpful for notifying a caller when publishing is complete.

Use New-EdgeToken.ps1 script to create one for any desired environment. Upon completion, this script opens up GraphQL IDE automatically as well as returns X-GQL-Token for you to use with IDE straight away.

Note that you can use the GraphQL IDE only if you have Experience Edge installed. However, the Edge tenant and Edge connector are created automatically upon the creation of an environment with XM Cloud.

Another thing to keep in mind: even though you're empowered to use the Media Library, there's a GraphQL limitation of 50 MB per resource. For larger files (most likely videos), consider using a DAM, given that DAMs typically offer wider management options.

XM Cloud Play! Summit Demo

This is an exceptional gift from the Sitecore Demo Team to us! It allows you to experiment with XM Cloud, its features, and its editing capabilities, both in local containers and by deploying it to your XM Cloud environment.

The demo solution features many of the latest tech working together for us to play with:

  • Sitecore XM Cloud
  • Content Hub DAM and CMP
  • Headless SXA
  • JavaScript Services (JSS)
  • Next.js
  • Vercel
  • Tailwind CSS

The demo is available in a GitHub repository, which means it is easily deployable with the Deploy App in a few clicks, as demonstrated below.

Headless SXA

XM Cloud comes with built-in headless SXA, included in the base XM Cloud container image, so the question comes up: should I go with it or without it?

Of course, it is possible to build a headless app on XM Cloud without using SXA, but it is not recommended. SXA provides many benefits and is included by default with Sitecore XM Cloud, so it is generally a better option for building a new site. If you are in a migration scenario, you may not have the option to use SXA, but other than that, it is recommended to use SXA and headless services for the best results.

There is an XM Cloud Starter Kit with Next JS for your faster journey to XM Cloud development.

Rendering Variants have always been one of the most powerful features of SXA; they took their own evolution from NVelocity templates to Scriban, and are now powered by Next.js. Here's an example of a Promo component having two rendering variants – Default and WithText. You can have as many variants besides Default as you want within a *.tsx file, following this syntax:

export const Default = (props: PromoProps): JSX.Element => {
  if (props.fields) {
    return (
    // ... markup merged with props 
    );
  }
  return <PromoDefaultComponent {...props} />;
};
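
For completeness, a sketch of what the second WithText variant might look like in the same file – the markup is illustrative, and PromoProps and PromoDefaultComponent come from the same file as above:

// selecting the "WithText" rendering variant on the rendering item
// makes the components factory pick this export instead of Default
export const WithText = (props: PromoProps): JSX.Element => {
  if (props.fields) {
    return (
      <div className="promo with-text">
        {/* ... markup merged with props, plus an additional text field ... */}
      </div>
    );
  }
  return <PromoDefaultComponent {...props} />;
};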

SXA is perfect for multi-site implementations, as it comes with cross-site capabilities for sharing page and partial designs, renderings, content, and cross-linking. In a previous post I already mentioned a new feature – Page Branches – which allows setting standard values for layout on a per-website basis. The site-specific standard values feature is another example of managing defaults per site. The development team is also working on implementing Site Templates – the idea is blueprinting entire sites from pre-defined templates.

Speaking about the UI: by default SXA comes with the Bootstrap 5 grid system, but that is configurable. Default renderings respect both grid and styles through parameter templates; you must take care of that when creating your own custom renderings – unless, of course, you clone existing ones.

I remember a while ago, when I had only started working with SXA, there was a lack of proper solutions for renderings with a background image set – we had to invent our own approaches. Slightly later SXA got that feature, and today I was glad to find support for a decent number of stretch modes: parallax, stretch, vertical and horizontal tiles, and fixed. And all that works headlessly!

Deploy App

Sitecore XM Cloud comes with an integrated Deploy App that does exactly what it is called for – deploying your solution from existing source code to XM Cloud via a friendly GUI, as an alternative to using the CLI. You can also create a project using a starter template.

choosing deployment method

To demonstrate its capabilities, let's choose “Start from your existing XM Cloud code”, using the Play! Summit demo mentioned earlier. Currently, the only provider that works with the Deploy App is GitHub, so let's go ahead with it.

deployment with github

After providing access to your GitHub account, you choose the branch and the specific environment it deploys to, and whether it is production or not. There is also an option to trigger an automatic deployment each time someone pushes to the specified repository branch.

deployment with github parameters

That's it! The deployment starts and gets everything done automatically: provisioning tenants and environments, configuring Edge, pulling and building the codebase, deploying the artifacts, and eventually running post-build actions, if any.

deployment log

The whole process takes around 15 minutes, which is impressive – the entire CI/CD pipeline comes out of the box, preconfigured. However, there's one more thing left to do.

Deploying to Vercel

Since our head application is decoupled from Sitecore, we also need to deploy it and point it at Experience Edge. There are various options, but the best one is Vercel hosting. Vercel is the company that created and maintains Next.js, which is why their powerful hosting platform ideally fits solutions built with it.

Vercel is an all-in-one development platform that combines the best developer experience with an obsessive focus on end-user performance. As a developer-centric stack, Vercel is accessible to developers of all skill sets and removes a historical lock-in to .NET. Vercel provides a scalable solution to the largest of organizations with the newest best practices in content delivery while helping to reduce infrastructure/deployment overhead previously required to deploy Sitecore applications.

Deployment is pretty simple. Upon creating an account and project, you link your GitHub account and use the same repository used for the XM Cloud deployment previously. Vercel is smart enough to autosuggest the folder with your Next.js application, highlighting it with an icon. After providing three familiar environment variables (app name, Edge endpoint, and the Edge API token), the deployment takes place quickly, and then you're all set!

Tip: do not modify the .env file directly; instead, create an "override" file called .env.local. In the starter kits that file is excluded from source control via .gitignore by default, so besides not interfering with other developers, you can easily maintain your environment-specific settings in one place.
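For illustration, a typical .env.local could look like the following – the variable names below are only indicative of what JSS Next.js starter kits commonly use, so verify them against your own .env before copying:

# .env.local – local overrides, excluded from source control by .gitignore
# (variable names are illustrative; check your starter kit's .env for the exact keys)
JSS_APP_NAME=play-summit
SITECORE_API_HOST=https://edge.sitecorecloud.io
SITECORE_API_KEY=<your Edge API token>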

XM Cloud also allows you to set up Vercel hosting directly from its interface and add the integration.

vercel setup hosting

If you do not have an account, you'll be able to create one quickly and easily; currently, only Personal account types are supported. Also, make sure you choose All Projects access.

vercel link to vercel

You will still need to log in to Vercel and manually add the GitHub integration pointing to the same repository and branch, provide a few more parameters, and set at least the three environment variables mentioned above. Once done, you can immediately trigger the deployment.

I hope this overview helps you get familiar with the XM Cloud development options and encourages you to try it sooner rather than later. Should you have any questions, please feel free to reach out to me!

An evolutionary approach to Next.js and its modes

As you might have heard, Sitecore has chosen Next.js to be used along with its JSS SDK. But what makes Next.js such a great tool for those of us switching to a new paradigm of Sitecore development? In this blog post, I'll go through the evolution of Sitecore development over time.

Old school development

A decade ago, we used classical ASP.NET WebForms to render a page on the server and pass it to the client. The whole idea of WebForms was faulty, as it tried to mimic the event-based model of desktop development to make web development feel familiar to desktop developers. That came at the cost of ignoring the stateless nature of HTTP and created weird, ugly abstractions (i.e. ViewState and EventValidation).

It was later made obsolete by the MVC approach, which turned ASP.NET web development into what it should have been all along: no state and event abstractions, no server controls, no Master pages, and a proper separation of code and markup (which became even more readable with the introduction of Razor views). It all benefited from the MVC architecture, proven by other web technologies such as Ruby on Rails. Moreover, the implementation allowed extensibility at almost every stage of the web request lifecycle, while ASP.NET MVC going open-source let you write code aligned with the exact implementation of the framework.

MVC was a great step forward and remained the default way of building sites with Sitecore for as long as 5-8 years. Being so close to the raw request was a great strength, but it came at the cost of a lot of repetitive work.

The introduction of SXA fixed most of these issues by relying heavily on Sitecore PowerShell to automate the things that should be (and in fact were) automated. The overall developer and editor experience improved with SXA thanks to the introduction of Page and Partial Designs, powerful components adjustable with rendering variants, support for the most popular grid systems, and flexible search and SEO tools.

SXA was great in most aspects except one, the most important: it was still built on top of MVC. That means web pages were generated on the server by rendering content into HTML views. In other words, it was not headless...

Headless

Meanwhile, the world of front-end development experienced massive growth, and after half a decade of JS frameworks madly appearing one after another, a triad of winners stood out: React, Angular, and Vue. The good old days of jQuery came to an end, giving way to industry-proven frameworks with bigger feature sets and revised architectures that suit modern web development.

With time, it became harder and harder to split work between back-end and front-end teams (as for full-stack developers, most tend to gravitate to one side or the other). Even bigger efforts were spent on the unwanted work of keeping the FE and BE teams in sync, which could not last long, as both sides suffered from that situation.

The headless approach was the right answer to all those issues, with JSS being Sitecore's response.

With the release of JSS, it became possible to separate BE and FE in a way that each side becomes responsible only for its own duties. The front-end became free of the previous limitations and could use React / Vue / Angular as much as it wanted. It no longer needed a heavily loaded web server with Sitecore to generate HTML pages – a new component called the Rendering Host did that job exclusively. The only interaction left with the back-end was receiving just the necessary data asynchronously, thanks to the Layout Service and GraphQL.
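As a minimal sketch of that interaction, here is how one could call the Layout Service REST endpoint directly – the host, site name, and API key below are placeholders:

// Fetch the layout (route, placeholders, component fields) for a page item.
const LAYOUT_SERVICE = 'https://my-instance/sitecore/api/layout/render/jss';

async function fetchLayout(itemPath: string): Promise<unknown> {
  const url =
    `${LAYOUT_SERVICE}?item=${encodeURIComponent(itemPath)}` +
    '&sc_apikey=MY_API_KEY&sc_site=my-site&sc_lang=en';
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Layout Service returned ${response.status}`);
  return response.json();
}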

NOTE: Actually, headless means anything can consume data from the back-end services, not just FE frameworks. That is demonstrated well by the ASP.NET Core renderings, an alternative option for headless Sitecore implementations.

Client-Side Rendering

With a typical non-Sitecore single-page application, the web server first sends the browser an HTML page in some initial state. Once that page is loaded, the browser executes its JavaScript code, which issues asynchronous request(s) to an API endpoint to get the actual data. As the user progresses through the app, the browser sends more requests, partially updating the content on the page without reloading the whole page from the web server. This approach is known as Client-Side Rendering (CSR), and it brings lots of advantages, such as faster-responding apps and reduced traffic between client and server.
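To make CSR concrete, here is a bare-bones sketch: the server ships an HTML shell, and the script below fills it in after load. The /api/products endpoint and the #product-list element are made up for the example:

// Runs in the browser after the initial HTML shell is loaded
async function loadProducts(): Promise<void> {
  const res = await fetch('/api/products'); // asynchronous data request
  const products: { name: string }[] = await res.json();
  const list = document.querySelector('#product-list');
  if (list) {
    // patch the page in place – no full reload from the web server
    list.innerHTML = products.map((p) => `<li>${p.name}</li>`).join('');
  }
}

document.addEventListener('DOMContentLoaded', () => void loadProducts());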

What's wrong with single-page applications?

Since single-page apps load only the initial HTML page, that is also all search engine bots get. They struggle to obtain the follow-up data from APIs and cannot index the page. Also, without page reloads the URL remains the same, varying only by a #-anchor appended to it. Often such URLs cannot be processed correctly when requested directly.

Next.js

To address the above we have Next.js – a framework for statically generated and server-rendered React applications that opens up a lot of possibilities for developers: ready-to-use, zero-configuration applications, code splitting, static HTML export, better UX, faster performance, and more.

Next.js ensures SEO-friendliness without any extra actions from developers beyond creating the application. To be clear, that comes not from Next.js specifically, but from server-side rendering.

One can run SEO reports with Lighthouse even at the earliest stages, as you begin building your application.

But that still wasn't everything...

SSG challenge

The idea behind Jamstack is truly attractive: instead of serving webpages in real time (even from a cache), the webpages are pre-rendered and deployed to a CDN, becoming globally available immediately upon publishing. In a simple scenario, one does not even have to keep a server running, as the traffic never reaches it and goes to the CDN instead. Static content is fast, resilient to downtime, and gets indexed by crawlers immediately.

This approach however has some issues.

Let's think about a huge site with millions of pages. Deploying such a site may take hours rather than minutes due to static page generation and the sheer number of files to process. An increasing amount of content means increasing generation time. It seems reasonable to regenerate only the pages that have been updated, but that is only a small part of the solution: deployment becomes complicated, and even a one-character change in a common part like the header will still force you to process all the pages.

ISR

That is where Incremental Static Regeneration (ISR) comes into play. ISR is the next evolutionary step for Jamstack. Next.js allows you to create or update static pages after you've built the site. Incremental Static Regeneration enables developers and content editors to use static generation on a per-page basis, without needing to rebuild the entire site. With ISR, you get the best of both worlds while scaling to millions of pages.

The principal difference is that static pages can now be generated on demand, at runtime. The developer's job is now to decide which portion of pages to pre-generate, following the well-known 80/20 Pareto principle: 80% of traffic is served by only 20% of pages, while the other 80% of pages receive the remaining 20% of traffic.

So it makes good sense to pre-generate that heavily used 20% of pages. How do you know which pages or sections to target? You've got an arsenal of tools such as analytics, A/B testing, and alternative metrics – in any case, you have the flexibility to make your own tradeoff on build times:

Given that choice, developers can weigh the two options: with option A the build time gets faster, while option B pre-generates more pages.

This becomes crucial when working on large eCommerce implementations or headless CMSs such as Sitecore.

How that works

ISR relies on the same API used for static site generation – getStaticProps. The difference is that setting its revalidate parameter (say, to 60 seconds) makes Next.js use ISR for that page; a minimal code sketch follows the list below. Here's how a request flows with ISR:

  1. With Next.js one can define a revalidation time per page (e.g. 60 seconds)
  2. The initial request to a product page returns the cached page with the original price
  3. At this stage, someone changes the product data, and the change lands in the database
  4. All requests after the initial one but within the 60-second window are served immediately from the cache
  5. After the 60-second window, the next request still shows the cached (old) page, but Next.js triggers a background regeneration of that page. Once completed, the cache for that single page is updated; if the background regeneration fails, the old cached page is kept
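Here is a minimal sketch of such a product page – the API URL and the ProductProps shape are made up for illustration:

// pages/products/[id].tsx – a hypothetical product page using ISR
import type { GetStaticPaths, GetStaticProps } from 'next';

type ProductProps = { product: { id: string; name: string; price: number } };

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [],            // pre-generate nothing up front...
  fallback: 'blocking', // ...render missing pages on their first request
});

export const getStaticProps: GetStaticProps<ProductProps> = async ({ params }) => {
  // placeholder API – substitute your real data source
  const res = await fetch(`https://api.example.com/products/${params?.id}`);
  return {
    props: { product: await res.json() },
    revalidate: 60, // regenerate in the background at most once per minute
  };
};

export default function ProductPage({ product }: ProductProps) {
  return <h1>{product.name} – ${product.price}</h1>;
}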

Finding a compromise

Since sites vary by volume, audience, purpose, and internal architecture, there's no silver bullet that covers them all with a universal solution. That is why Next.js is end-user-centric, letting developers shift between rendering strategies without leaving the bounds of the framework. It's up to you to choose the right tool for a project.

Edge caching

In certain cases ISR is not the best option – for example, apps where displaying live data is crucial. Those are better handled with server rendering, optionally with your own Cache-Control headers and surrogate keys to invalidate content. Server-rendered pages can then be cached at edge servers. With a hybrid framework, you can make your own tradeoff and still stay within the framework.

SSR with edge server caching may look similar to ISR (especially with stale-while-revalidate headers for cache control).
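As a sketch, this is how such a header could be set from getServerSideProps – the 10 and 59 second values are only illustrative:

// pages/live-prices.tsx – SSR page cached at the edge via Cache-Control
import type { GetServerSideProps } from 'next';

export const getServerSideProps: GetServerSideProps = async ({ res }) => {
  // serve a cached copy for up to 10 s; keep serving a stale copy for
  // another 59 s while the page re-renders in the background at the edge
  res.setHeader('Cache-Control', 'public, s-maxage=10, stale-while-revalidate=59');
  return { props: { renderedAt: new Date().toISOString() } };
};

export default function LivePrices({ renderedAt }: { renderedAt: string }) {
  return <p>Rendered at {renderedAt}</p>;
}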

The major difference comes from how the first request is handled. ISR returns a statically rendered page, which ensures the user sees a page even in case of an API connectivity loss or database failure. SSR allows shaping the page depending on the specifics of the request.

One thing to watch out for in that case is that using SSR without caching may hurt performance, as every millisecond of waiting matters. In addition, uncached SSR badly impacts the TTFB (Time to First Byte) metric used by Lighthouse.

In addition to that, ISR is not beneficial for small websites. If the build time for the whole site is many times lower than the revalidation parameter, just use classic SSG instead.

ISR fallback options

fallback is an important parameter with two useful options. When working with data that is fast to retrieve, it makes sense to use fallback: 'blocking'. In that case, you do not need to display a temporary "in progress" page during data retrieval, and users are guaranteed to see the right page regardless of whether it was cached or not.

For unpredictable or slow-loading data, the above approach hurts UX; setting fallback: true instead immediately displays a "please wait" page while the data is processed, as sketched below.
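A minimal sketch of the fallback: true flow – the article API and the loading markup are illustrative:

// pages/articles/[slug].tsx – hypothetical page using fallback: true
import { useRouter } from 'next/router';
import type { GetStaticPaths, GetStaticProps } from 'next';

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [],      // nothing pre-generated
  fallback: true, // serve a "please wait" shell first, then the data
});

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const res = await fetch(`https://api.example.com/articles/${params?.slug}`);
  return { props: { article: await res.json() }, revalidate: 60 };
};

export default function ArticlePage({ article }: { article: { title: string } }) {
  const router = useRouter();
  // isFallback is true while Next.js generates the page in the background
  if (router.isFallback) return <p>Please wait…</p>;
  return <h1>{article.title}</h1>;
}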

SEO is the reason

SEO (search engine optimization) is a set of techniques (and even non-obvious tricks) for tuning your site in order to attract more traffic from search engines. To improve the site's search ranking, one needs to keep many factors in mind.

Visitors won't wait an eternity for your page to load. Performance is a crucial factor for SEO and therefore should be a main concern when building an app. In addition to TTFB (mentioned previously), there is another important metric abbreviated as FCP (First Contentful Paint). Google uses FCP as a key performance metric – FCP directly affects the SEO rating. You can read more about improving FCP.

With Next.js you can analyze FCP and LCP (Largest Contentful Paint – the time until the major content is shown) by exporting a reportWebVitals function from your App module:

// pages/_app.js
// Next.js calls this once per calculated metric (FCP, LCP, TTFB, etc.)
export function reportWebVitals(metric) {
  console.log(metric)
}

Once these parameters get calculated, the reportWebVitals function is called with each metric for you to log and analyze. Follow this link for more details about measuring performance with Next.js.
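As a next step, one might forward only the Web Vitals entries to a collector instead of logging them – /api/vitals below is a made-up endpoint:

// pages/_app.js
export function reportWebVitals(metric) {
  // Next.js labels Web Vitals (FCP, LCP, CLS, FID, TTFB) as 'web-vital'
  if (metric.label === 'web-vital') {
    navigator.sendBeacon('/api/vitals', JSON.stringify(metric));
  }
}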

I hope this post gives a good overview of the rendering evolution from the old days up to ISR, and of the nuances of choosing between these modes with Next.js.