
XM Cloud Forms Builder released

The composable Forms Builder is now available with Sitecore XM Cloud. Let’s take a look at one of the most anticipated modules for Sitecore’s flagship hybrid headless cloud platform.

Historically, we had an external module called Web Forms for Marketers that one could install on top of a Sitecore instance to gain the desired functionality of collecting user input. This module was later reconsidered and reworked, finding its reincarnation as Sitecore Forms, an integral part of the Sitecore platform since version 9. Customers enjoyed this built-in solution provided along with their primary DXP; however, with the headless architecture of XM Cloud there are no CD servers any longer, and therefore no suitable place for storing the collected user input. There was clearly a need for a SaaS forms solution, and this gap is finally filled!

An interesting fact: until the release of Forms with XM Cloud, the relevant composable solution for interacting with visitors was Sitecore Send, so Sitecore logically decided to derive the XM Cloud Forms module from the Sitecore Send codebase (which already had plenty of the desired features) rather than from legacy Sitecore Forms.

Sitecore XM Cloud Forms

So, what have we got?

The main goal was to release the new Forms product as a SaaS solution that integrates with any custom web front-end. The actual challenge was to combine the ultimate simplicity of creating and publishing forms for the majority of marketing professionals with an offering optimized for typical headless projects. In my opinion, despite the complexities, this was well achieved!

Let’s first take a look at its desired/expected capabilities:

  • Template Library
  • Work with Components Builder
  • Use external data sources for pre-populating forms
  • Reporting and analytics
  • Ability to create multi-step and multi-page forms
  • Conditional logic (not available yet)

One would ask: if there’s no CD server, or any managed backend at all, where do submissions go? You might expect some SaaS-provided storage along with the required interface for managing the collected input. Incorrect! There’s none. It was actually a smart move by the Sitecore developers, who decided to kill two birds with one stone. First, they saved the effort of building a universal submissions UI/UX that would hardly satisfy the variable needs of such a diverse range of customers and industries. The second reason is even more legitimate: avoid storing any Personally Identifiable Information, so that it is never processed within XM Cloud, leaving the particular implementation decisions to the customers’ discretion.

Instead, this is done in a very composable SaaS manner: you configure a webhook that passes a payload of the collected data to the system of your choice.

Webhooks

Upon form submission, the webhook is triggered to pass the gathered data to the configured system – a database, a CRM, a CDP, or whatever backend serves a given form. Even more, webhooks can be shared, so that multiple forms use the same configured webhook. Similarly to the legacy forms that submitted their data into xDB, the most logical choice would be the powerful Sitecore CDP for processing this data. With webhooks, however, the use case for XM Cloud Forms becomes truly universal, and if you combine it with Sitecore Connect, it spans to whichever integration Connect provides.
Webhooks come with multiple authentication options, covering any potential backend requirement.
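
Since the payload simply mirrors the fields you define, the receiving side can stay generic. Below is a minimal, hypothetical sketch of a Next.js API route acting as a webhook endpoint – the header name, the secret, and the forwarding logic are my assumptions, not part of the product:

// pages/api/forms-webhook.js – a hypothetical webhook receiver, not part of XM Cloud
export default async function handler(req, res) {
    if (req.method !== 'POST') {
        return res.status(405).json({ message: 'Method not allowed' })
    }
    // example of a shared-secret check; the actual scheme depends on the
    // authentication option you configure for the webhook in XM Cloud
    if (req.headers['x-webhook-secret'] !== process.env.FORMS_WEBHOOK_SECRET) {
        return res.status(401).json({ message: 'Unauthorized' })
    }
    // req.body holds the submitted field values; forward them to a CRM,
    // CDP, or database of your choice here
    console.log('Form submission received:', req.body)
    return res.status(200).json({ received: true })
}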

Let’s see it in action!

The editor looks and feels like XM Cloud Pages – similar styling, layout, and UI elements.

First, let’s pick the layout by simply dragging and dropping it onto the canvas. For simplicity, I selected Full Width Layout. Once there, you can start dropping fields into the chosen layout.

Available Fields:
  • Action Button
  • Email
  • Phone
  • Short Text
  • Long Text
  • Select (single dropdown)
  • Multi Select (where you can define the number of selectable options, say 3 of 9)
  • Date
  • Number
  • Radio
  • Checkbox (single)
  • Checkbox Group
  • Terms & Conditions
  • reCAPTCHA
Besides input-capturing elements, you can also define “passive” UI elements from under the Basic expander:
  • Text
  • Spacer
  • Social Media – a set of clickable buttons to socials you define
  • Image, which has a pretty strong set of source options:
Media Upload options
Look at the variety of configuration options on the right panel. You can define:
  • Background Color within that field – transparent is the default one. You can even put a background image instead!
  • Settings for the field box shadows which also define Horizontal and Vertical Lengths, Blur and Spread Radiuses, and of course – the shadow color
  • Help text that is shown below the field and provides any non-obvious guidance you’d like a user to follow
  • For text boxes, you can set placeholder and prefill values
  • The field can be made mandatory and/or hidden via the corresponding checkboxes
  • Validation is controlled by a regex pattern and a character length limit
  • Additionally, you can style pretty much everything: the field itself, label, placeholder, and help text, as well as set the overall padding

Please note that at the current stage the edited form is in a Draft state. Clicking the Save button suggests running your form in Preview before saving, and that was very helpful – in my case, I left the Full Name field hidden by mistake, and preview mode immediately showed me that. After fixing the visibility, I was good to go with saving.

The Forms home screen shows all the available forms. To activate a form, I first need to create a webhook and then assign it to the form. In addition, you define the action to take upon webhook submission – redirect to a URL, display a success message, or do nothing – as well as configure the failure message.

This time the Activate button works and my form is listed as Active. From now on, you cannot edit the fields anymore and you cannot change the status back from Active. Therefore, always verify your form in Preview before publishing.

Weirdly, you cannot even delete a form in Active status. What you can do, however, is duplicate an active form into a draft and continue editing fields from there.

Forms listing

Testing Form

The most obvious desire at this stage is to actually test your form before using it. Luckily, the developers took care of that as well.

Testing webhook
I also created a webhook catcher with Pipedream RequestBin. On my first go, I faced a deadlock, being unable to submit the form for the test. The reason was that I had mistakenly checked both the Hidden and Required checkboxes on a field and could not progress from there – the validation message does not even show on a hidden field. Another mistake was that I overlooked this in Preview and published the form into the Active state. Hopefully, the developers will find a solution to this sooner rather than later.

I gave it another try to test how validation works:

Validation in action

Once validation passes, the Test Form Submission dialog shows you the JSON payload as it goes out, along with the HTTP headers supplied with the webhook request. Let’s hit the Submit button and see the confirmation – I chose to display a confirmation message, and it shows up.

The webhook catcher shows all my submitted data along with the HTTP headers – everything was sent and received successfully!

webhook catcher

Multipage Forms

Multipage forms are supported, and that is impressive. Pay attention to the Add Page link button in between page canvases. Once activated, it also enforces a Next button on non-final page canvases that triggers switching to the next form page.

What’s Next? Pages Editor!

Let’s use this newly created form from XM Cloud Pages. Please note the new section called Forms under the Components tab – that is where all of your active forms reside. You can simply drag and drop a form onto a desired placeholder, as you normally do in the Pages editor.

Consume Forms from Pages Editor

Please note: you must have your site deployed to an editing host running Headless (JSS) SDK version 21.6 or newer to make it work – that is when XM Cloud Forms support was added. Otherwise, you will face this error:

BYOC error before SDK 21.6

Experience Editor and Components Builder

Surprisingly, created forms are also available from the Components Builder:
Forms in Components Builder
However, the Experience Editor does not have a direct way of consuming XM Cloud Forms. I tried the following chain of steps to make it work:
  1. Create and Activate a new form from Forms editor
  2. Consume it from the Components builder into a new component using BYOC, then publish this component
  3. Open Pages app, find the component with an embedded form you’ve built at step (2) and drop it to a page, then publish
  4. Open that same page in Experience Editor

Live Demo in Action

As you know, a video is often worth a thousand words, so here it is below. I’ve recorded the whole walkthrough, from explanation to showcasing it all in action, up to the most extreme example – where I create and publish a form, then consume it from the XM Cloud Components builder, making it part of a composite component, which in turn is used in the Pages editor to place on a page, which then also opens up successfully in the Experience Editor. Unbelievable – and it all functions well. Just take a look yourself:

Developers Experience

As developers, how would we integrate forms into our “head” applications? That works through a Forms BYOC component for your Next.js app, coming out of the box with the SDK. I spotted some traces of XM Cloud Forms as part of Headless JSS SDK 21.6.0 a while ago, when it was still a “Canary” build. Now it has been released, and among the features one can see an import of the SitecoreForm component into the sample Next.js app, as part of a pull request merged into this release.

The documentation is available here, but… everything is so absolutely intuitive that you hardly need it, do you?

Template Library

It is worth mentioning that XM Cloud Forms comes with a Template Library – a handful of pre-configured forms you can use straight away or slightly modify to your needs. The expectation is that it will grow over time, covering any potential scenario one could ever have.
Template Library

Licensing

Since Forms is bundled into XM Cloud, it is included with every XM Cloud subscription.

What is missing?

  • File upload is not supported – webhooks alone are not sufficient to handle it
  • The ability to customize and extend – hopefully this comes later, as there is already an empty section for custom fields

Hopefully, the product developers will implement these and more features in upcoming releases. But even with what is released today, I really enjoyed the XM Cloud Forms builder!

A crash course of Next.js: Caching, Authentication and Going Live tasks (part 4)

This series is my Next.js study summary, and although it focuses on vanilla Next.js, all the features are applicable with the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – the powerful layouts, styles, and fonts features, the Image and Script components, and of course – TypeScript.
  • In part 3 we went through the nuances of Next.js routing and explained middleware.

In this post we are going to talk about pre-going live optimizations such as caching and reducing bundle size as well as authentication.

Going live considerations

  • use caching wherever possible (see below)
  • make sure that the server and database are located (deployed) in the same region
  • minimize the amount of JavaScript code
  • delay loading heavy JS until you actually use it
  • make sure logging is configured correctly
  • make sure error handling is correct
  • configure 500 (server error) and 404 (page not found) pages
  • make sure the application meets the best performance criteria
  • run Lighthouse to test performance, best practices, accessibility, and SEO. Use an incognito mode to ensure the results aren’t distorted
  • make sure that the features used in your application are supported by modern browsers
  • improve performance by using the following:
    • next/image and automatic image optimization
    • automatic font optimization
    • script optimization

Caching

Caching reduces response times and the number of requests to external services. Next.js automatically adds the following caching headers to static assets served from _next/static (JS, CSS, images, and other media):

Cache-Control: public, max-age=31536000, immutable

To revalidate the cache of a page that was previously rendered into static markup, use the revalidate setting in the getStaticProps function.
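
A minimal sketch, reusing the placeholder API from the later examples – the 60-second interval is arbitrary:

// pages/posts.js – the static page is regenerated at most once per minute
export async function getStaticProps() {
    const posts = await (await fetch('https://site.com/posts')).json()
    return {
        props: { posts },
        revalidate: 60 // in seconds
    }
}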

Please note: running the application in development mode using next dev disables caching:

Cache-Control: no-cache, no-store, max-age=0, must-revalidate

Caching headers can also be used in getServerSideProps and the routing interface for dynamic responses. An example of using stale-while-revalidate:

// The value is considered fresh for 10 seconds (s-maxage=10).
// If the request is repeated within those 10 seconds, the previously cached value
// is still considered fresh. If the request is repeated within the next 59 seconds,
// the cached value is considered stale, but is still used for rendering
// (stale-while-revalidate=59).
// The request is then executed in the background and the cache is refilled with fresh data.
// After the update, the page will display the new value.
export async function getServerSideProps({ req, res }){
    res.setHeader(
        'Cache-Control',
        'public, s-maxage=10, stale-while-revalidate=59'
    )
    return {
        props: {}
    }
}

Reducing the JavaScript bundle size

To identify what’s included in each JS bundle, you can use the following tools:

  • Import Cost – extension for VSCode showing the size of the imported package
  • Package Phobia – a service for determining the “cost” of adding a new dev dependency to a project
  • Bundle Phobia – a service for determining how much adding a dependency will increase the size of the build
  • Webpack Bundle Analyzer – Webpack plugin for visualizing the bundle in the form of an interactive, scalable tree structure

Each file in the pages directory becomes a separate bundle during the next build command. You can use dynamic import to lazily load components and libraries, as shown below.
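
A quick sketch of lazily loading a heavy component with next/dynamic – the component path is hypothetical:

import dynamic from 'next/dynamic'

// the chart library is split into its own chunk and loaded only when rendered
const HeavyChart = dynamic(() => import('../components/HeavyChart'), {
    loading: () => <p>Loading chart...</p>,
    ssr: false // skip server-side rendering for browser-only libraries
})

export default function Dashboard() {
    return <HeavyChart />
}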

Authentication

Authentication is the process of identifying who a user is, while authorization is the process of determining their permissions (or “authority”, in other words), i.e. what the user has access to. Next.js supports several authentication patterns.

Authentication Patterns

Each authentication pattern determines the strategy for obtaining data. Next, you need to select an authentication provider that supports the selected strategy. There are two main authentication patterns:

  • using static generation to load state on the server and retrieve user data on the client side
  • receiving user data from the server to avoid “flashing” unauthenticated content (i.e. a visible switch between application states)

Authentication when using static generation

Next.js automatically detects that a page is static if the page does not have blocking methods to retrieve data, such as getServerSideProps. In this case, the page renders the initial state received from the server and then requests the user’s data on the client side.

One of the advantages of using this pattern is the ability to deliver pages from a global CDN and preload them using next/link. This results in a reduced Time to Interactive (TTI).

Let’s look at an example of a user profile page. On this page, the template (skeleton) is first rendered, and after executing a request to obtain user data, this data is displayed:

// pages/profile.js
import useUser from '../lib/useUser'
import Layout from './components/Layout'

export default function Profile(){
    // get user data on the client side
    const { user } = useUser({ redirectTo: '/login' })
    // loading status received from the server
    if (!user || user.isLoggedIn === false) {
        return <Layout>Loading...</Layout>
    }
    // after the request completes, user data is displayed
    return (
        <Layout>
            <h1>Your profile</h1>
            <pre>{JSON.stringify(user, null, 2)}</pre>
        </Layout>
    )
}

Server-side rendering authentication

If a page has an asynchronous getServerSideProps function, Next.js will render that page on every request using the data from that function.

export async function getServerSideProps(context){
    return {
        props: {}// will get passed down to a component as props
    }
}

Let’s rewrite the above example. If there is a session, the Profile component will receive the user prop. Note the absence of a template:

// pages/profile.js
import withSession from '../lib/session'
import Layout from '../components/Layout'

export const getServerSideProps = withSession(async ({ req, res }) => {
    const user = req.session.get('user')
    if (!user) {
        return {
            redirect: {
                destination: '/login',
                permanent: false
            }
        }
    }
    return {
        props: {
            user
        }
    }
})
export default function Profile({ user }){
    // display user data, no loading state required
    return (
        <Layout>
            <h1>Your profile</h1>
            <pre>{JSON.stringify(user, null, 2)}</pre>
        </Layout>
    )
}

The advantage of this approach is preventing the display of unauthenticated content before the redirect is performed. Note that requesting user data in getServerSideProps blocks rendering until the request is resolved. Therefore, to avoid creating bottlenecks and increasing Time to First Byte (TTFB), you should ensure that the authentication service performs well.

Authentication Providers

For integrating with a user database, consider using one of the following solutions:

  • next-iron-session – a low-level, encrypted, stateless session
  • next-auth – a full-fledged authentication system with built-in providers (Google, Facebook, GitHub, and similar), JWT, JWE, email/password, magic links, etc.
  • with-passport-and-next-connect – good old Node Passport also works for this case

A crash course of Next.js: Routing and Middleware (part 3)

This series is my Next.js study summary, and although it focuses on vanilla Next.js, all the features are applicable with the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

  • In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.
  • In part 2 we spoke about UI-related things coming OOB with Next.js – the powerful layouts, styles, and fonts features, the Image and Script components, and of course – TypeScript.

In this post we are going to talk about routing with Next.js – pages, API Routes, layouts, and Middleware.

Routing

Next.js routing is based on the concept of pages. A file located within the pages directory automatically becomes a route. index.js files map to the root of their directory:

  • pages/index.js -> /
  • pages/blog/index.js -> /blog

The router supports nested files:

  • pages/blog/first-post.js -> /blog/first-post
  • pages/dashboard/settings/username.js -> /dashboard/settings/username

You can also define dynamic route segments using square brackets:

  • pages/blog/[slug].js -> /blog/:slug (for example: blog/first-post)
  • pages/[username]/settings.js -> /:username/settings (for example: /johnsmith/settings)
  • pages/post/[...all].js -> /post/* (for example: /post/2021/id/title)

Navigation between pages

You should use the Link component for client-side routing:

import Link from 'next/link'

export default function Home(){
    return (
        <ul>
            <li>
                <Link href="/">
                    Home
                </Link>
            </li>
            <li>
                <Link href="/about">
                    About
                </Link>
            </li>
            <li>
                <Link href="/blog/first-post">
                    First post
                </Link>
            </li>
        </ul>
    )
}

So we have:

  • / -> pages/index.js
  • /about -> pages/about.js
  • /blog/first-post -> pages/blog/[slug].js

For dynamic segments feel free to use interpolation:

import Link from 'next/link'

export default function Post({ posts }){
    return (
        <ul>
            {posts.map((post) => (
                <li key={post.id}>
                    <Link href={`/blog/${encodeURIComponent(post.slug)}`}>
                        {post.title}
                    </Link>
                </li>
            ))}
        </ul>
    )
}

Or leverage URL object:

import Link from 'next/link'

export default function Post({ posts }){
    return (
        <ul>
            {posts.map((post) => (
                <li key={post.id}>
                    <Link
                        href={{
                            pathname: '/blog/[slug]',
                            query: { slug: post.slug },
                        }}
                    >
                        <a>{post.title}</a>
                    </Link>
                </li>
            ))}
        </ul>
    )
}

Here we pass:

  • pathname is the page name under the pages directory (/blog/[slug] in this case)
  • query is an object having a dynamic segment (slug in this case)

To access the router object within a component, you can use the useRouter hook or the withRouter utility, and it is recommended practice to use useRouter.

Dynamic routes

If you want to create a dynamic route, you need to add [param] to the page path.

Let’s consider a page pages/post/[id].js having the following code:

import { useRouter } from 'next/router'

export default function Post(){
    const router = useRouter()
    const { id } = router.query
    return <p>Post: {id}</p>
}

In this scenario, routes /post/1, /post/abc, etc. will match pages/post/[id].js. The matched parameter is passed to a page as a query string parameter, along with other parameters.

For example, for the route /post/abc the query object will look as: { "id": "abc" }

And for the route /post/abc?foo=bar like this: { "id": "abc", "foo": "bar" }

Route parameters overwrite query string parameters, so the query object for the /post/abc?id=123 route will look like this: { "id": "abc" }

For routes with several dynamic segments, the query is formed in exactly the same way. For example, the page pages/post/[id]/[cid].js will match the route /post/123/456, and the query will look like this: { "id": "123", "cid": "456" }

Navigation between dynamic routes on the client side is handled using next/link:

import Link from 'next/link'

export default function Home(){
    return (
        <ul>
            <li>
                <Link href="/post/abc">
                    Leads to `pages/post/[id].js`
                </Link>
            </li>
            <li>
                <Link href="/post/abc?foo=bar">
                    Also leads to `pages/post/[id].js`
                </Link>
            </li>
            <li>
                <Link href="/post/123/456">
                    <a>Leads to `pages/post/[id]/[cid].js`</a>
                </Link>
            </li>
        </ul>
    )
}

Catch All routes

Dynamic routes can be extended to catch all paths by adding an ellipsis (...) in square brackets. For example, pages/post/[...slug].js will match /post/a, /post/a/b, /post/a/b/c, etc.

Please note: slug is not hard-defined, so you can use any name of choice, for example, [...param].

The matched parameters are passed to the page as query string parameters (slug in this case) with an array value. For example, a query for /post/a will have the following form: {"slug": ["a"]} and for /post/a/b this one: {"slug": ["a", "b"]}

Routes for intercepting all the paths can be optional – for this, the parameter must be wrapped in one more square bracket ([[...slug]]). For example, pages/post/[[...slug]].js will match /post, /post/a, /post/a/b, etc.

Catch-all routes are what Sitecore uses by default; they can be found at src\[your_nextjs_app_name]\src\pages\[[...path]].tsx.

The main difference between the regular and optional “catchers” is that the optional ones match a route without parameters (/post in our case).

Examples of query object:

{}// GET `/post` (empty object)
{"slug": ["a"]}// `GET /post/a` (single element array)
{"slug": ["a", "b"]}// `GET /post/a/b` (array with multiple elements)

Please note the following features:

  • static routes take precedence over dynamic ones, and dynamic routes take precedence over catch-all routes, for example:
    • pages/post/create.js – will match /post/create
    • pages/post/[id].js – will match /post/1, /post/abc, etc., but not /post/create
    • pages/post/[...slug].js – will match /post/1/2, /post/a/b/c, etc., but not /post/create and /post/abc
  • pages processed using automatic static optimization will be hydrated without route parameters, i.e. query will be an empty object ({}). After hydration, Next.js updates the application and fills in the query object – see the sketch below.
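
A minimal sketch of waiting for the query object to be populated on such a statically optimized page:

import { useEffect } from 'react'
import { useRouter } from 'next/router'

export default function Post() {
    const router = useRouter()
    useEffect(() => {
        // on the first render query is {}; isReady turns true once
        // the route parameters are available on the client side
        if (!router.isReady) return
        console.log(router.query.id)
    }, [router.isReady, router.query.id])
    return <p>Post: {router.query.id ?? '...'}</p>
}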

Imperative approach to client-side navigation

As I mentioned above, in most cases the Link component from next/link is sufficient for client-side navigation. However, you can also leverage the router from next/router for imperative navigation:

import { useRouter } from 'next/router'

export default function ReadMore(){
    const router = useRouter()
    return (
        <button onClick={() => router.push('/about')}>
            Read about
        </button>
    )
}

Shallow Routing

Shallow routing allows you to change the URL without re-running the data fetching methods, including getServerSideProps and getStaticProps. We receive the updated pathname and query through the router object (obtained via useRouter() or withRouter()) without losing the component's state.

To enable shallow routing, set { shallow: true }:

import { useEffect } from 'react'
import { useRouter } from 'next/router'

// current `URL` is `/`
export default function Page(){
    const router = useRouter()
    useEffect(() => {
        // perform navigation after first rendering
        router.push('?counter=1', undefined, { shallow: true })
    }, [])
    useEffect(() => {
        // value of `counter` has changed!
    }, [router.query.counter])
}

When updating the URL, only the state of the route will change.

Please note: shallow routing only works within a single page. Let’s say we have a pages/about.js page and we do the following:

router.push('?counter=1', '/about?counter=1', { shallow: true })

In this case, the current page is unloaded, a new one is loaded, and the data fetching methods are rerun (regardless of the presence of { shallow: true }).

API Routes

Any file located under the pages/api folder maps to /api/* and is considered an API endpoint, not a page. Because of its non-UI nature, this routing code remains server-side and does not increase the client bundle size. The example below, pages/api/user.js, returns a 200 status code and data in JSON format:

export default function handler(req, res){
    res.status(200).json({ name: 'Martin Miles' })
}

The handler function receives two parameters:

  • req – an instance of http.IncomingMessage + several built-in middlewares (explained below)
  • res – an instance of http.ServerResponse + some helper functions (explained below)

You can use req.method for handling various methods:

export default function handler(req, res){
    if (req.method === 'POST') {
        // handle POST request
    } else {
        // handle other request types
    }
}

Use Cases

The entire API can be built using a routing interface so that the existing API remains untouched. Other cases could be:

  • hiding the URL of an external service
  • using environment variables (stored on the server) for accessing external services safely and securely

Nuances

  • The routing interface does not process CORS headers by default. This is done with the help of middleware (see below)
  • routing interface cannot be used with next export

As for dynamic routing segments, they are subject to the same rules as the dynamic parts of page routes I explained above.
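
For instance, a dynamic API route pages/api/post/[id].js receives its segment through req.query:

// pages/api/post/[id].js
export default function handler(req, res) {
    const { id } = req.query // for /api/post/abc, id === 'abc'
    res.status(200).json({ id })
}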

Middlewares

The routing interface includes the following middlewares that transform the incoming request (req):

  • req.cookies – an object containing the cookies included in the request (default value is {})
  • req.query – an object containing the query string (default value is {})
  • req.body – an object containing the request body parsed according to the Content-Type header, or null if no body was sent

Middleware Customizations

Each route can export a config object with Middleware settings:

export const config = {
    api: {
        bodyParser: {
            sizeLimit: '2mb'
        }
    }
}

  • bodyParser: false – disables request body parsing (the body is returned as a raw Stream)
  • bodyParser.sizeLimit – the maximum request body size, in any format supported by the bytes library
  • externalResolver: true – tells the server that this route is being processed by an external resolver, such as express or connect

Adding Middlewares

Let’s consider adding cors middleware. Install the module using npm install cors and add cors to the route:

import Cors from 'cors'

// initialize the middleware
const cors = Cors({
    methods: ['GET', 'HEAD']
})

// helper function that waits for a middleware to resolve successfully
// before executing the rest of the code,
// or throws an exception if the middleware fails
const runMiddleware = (req, res, fn) =>
    new Promise((resolve, reject) => {
        fn(req, res, (result) =>
            result instanceof Error ? reject(result) : resolve(result)
        )
    })
export default async function handler(req, res){
    // this actually runs middleware
    await runMiddleware(req, res, cors)
    // the rest of the API logic
    res.json({ message: 'Hello world!' })
}

Helper functions

The response object (res) includes a set of methods to improve the development experience and speed up the creation of new endpoints.

This includes the following:

  • res.status(code) – function for setting the status code of the response
  • res.json(body) – to send a response in JSON format, body should be any serializable object
  • res.send(body) – to send a response, body could be a string, object or Buffer
  • res.redirect([status,] path) – to redirect to the specified page, status defaults to 307 (temporary redirect)
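
A small sketch combining these helpers – the cookie name and the /login path are just examples:

// pages/api/protected.js
export default function handler(req, res) {
    if (!req.cookies.session) {
        // 307 temporary redirect to the login page
        return res.redirect(307, '/login')
    }
    res.status(200).send('Welcome back!')
}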

This concludes part 3. In part 4 we’ll talk about caching, authentication, and considerations for going live.

A crash course of Next.js: UI-related and environmental (part 2)

In part 1 we covered some fundamentals of Next.js – rendering strategies along with the nuances of getStaticProps, getStaticPaths, getServerSideProps as well as data fetching.

Now we are going to talk about UI-related things, such as layouts, styles and fonts, serving static files, as well as TypeScript and environment variables.

Built-in CSS support

Importing global styles

To add global styles, the corresponding stylesheet must be imported into the pages/_app.js file (note the underscore):

// pages/_app.js
import './style.css'
// This default export is required
export default function App({ Component, pageProps }){
    return <Component {...pageProps} />
}

These styles will be applied to all pages and components in the application. Please note: to avoid conflicts, global styles can only be imported into pages/_app.js.

When building the application, all styles are combined into one minified CSS file.

Import styles from the node_modules directory

Styles can be imported from node_modules, here’s an example of importing global styles:

// pages/_app.js
import 'bootstrap/dist/css/bootstrap.min.css'
export default function App({ Component, pageProps }){
    return <Component {...pageProps} />
}

Example of importing styles for a third-party component:

// components/Dialog.js
import { useState } from 'react'
import { Dialog } from '@reach/dialog'
import VisuallyHidden from '@reach/visually-hidden'
import '@reach/dialog/styles.css'
export function MyDialog(props){
    const [show, setShow] = useState(false)
    const open = () => setShow(true)
    const close = () => setShow(false)
    return (
        <div>
            <button onClick={open} className='btn-open'>Expand</button>
            <Dialog>
                <button onClick={close} className='btn-close'>
                    <VisuallyHidden>Collapse</VisuallyHidden>
                    <span>X</span>
                </button>
                <p>Hello!</p>
            </Dialog>
        </div>
    )
}

Adding styles at the component level

Next.js supports CSS modules out of the box. CSS modules must be named as [name].module.css. They create a local scope for the corresponding styles, which allows you to use the same class names without the risk of collisions. A CSS module is imported as an object (usually called styles) whose keys are the names of the corresponding classes.

Example of using CSS modules:

/* components/Button/Button.module.css */
.danger {
    background-color:red;
    color: white;
}
// components/Button/Button.js
import styles from './Button.module.css'
export const Button = () => (
    <button className={styles.danger}>
        Remove
    </button>
)

When built, CSS modules are extracted into separate minified CSS files, which allows loading only the necessary styles.

SASS support

Next.js supports .scss and .sass files. SASS can also be used at the component level (.module.scss and .module.sass). To compile SASS to CSS you need to have SASS installed:

npm install sass

The behavior of the SASS compiler can be customized in the next.config.js file, for example:

const path = require('path')
module.exports = {
    sassOptions: {
        includePaths: [path.join(__dirname, 'styles')]
    }
}

CSS-in-JS

You can use any CSS-in-JS solution in Next.js. The simplest example is to use inline styles:

export const Hi = ({ name }) => <p style={{ color: 'green' }}>Hello, {name}!</p>

Next.js also has styled-jsx support built-in:

export const Bye = ({ name }) => (
    <div>
        <p>Bye, {name}. See you soon!</p>
        <style jsx>{`
      div {
        background-color: #3c3c3c;
      }
      p {
        color: #f0f0f0;
      }
      @media (max-width: 768px) {
        div {
          background-color: #f0f0f0;
        }
        p {
          color: #3c3c3c;
        }
      }
    `}</style>
        <style global jsx>{`
      body {
        margin: 0;
        min-height: 100vh;
        display: grid;
        place-items: center;
      }
    `}</style>
    </div>
)

Layouts

Developing a React application involves dividing the page into separate components. Many components are used across multiple pages. Let’s assume that each page uses a navigation bar and a footer:

// components/layout.js
import Navbar from './navbar'
import Footer from './footer'
export default function Layout({ children }){
    return (
        <>
            <Navbar />
            <main>{children}</main>
            <Footer />
        </>
    )
}

Examples

Single Layout

If an application only uses one layout, we can create a custom app and wrap the application in a layout. Since the layout component will be reused when pages change, its state (for example, input values) will be preserved:

// pages/_app.js
import Layout from '../components/layout'
export default function App({ Component, pageProps }){
    return (
        <Layout>
            <Component {...pageProps} />
        </Layout>
    )
}

Page-Level Layouts

The getLayout property of a page allows you to return a component for the layout. This allows you to define layouts at the page level. The returned function allows you to construct nested layouts:

// pages/index.js
import Layout from '../components/layout'
import Nested from '../components/nested'
export default function Page(){
    return (
        <div>{/* page content */}</div>
    )
}
Page.getLayout = (page) => (
    <Layout>
        <Nested>{page}</Nested>
    </Layout>
)
// pages/_app.js
export default function App({ Component, pageProps }){
    // use the layout defined at a page level, if it exists
    const getLayout = Component.getLayout || ((page) => page)
    return getLayout(<Component {...pageProps} />)
}

When switching pages, the state of each of them (input values, scroll position, etc.) will be saved.

Use it with TypeScript

When using TypeScript, a new type is first created for the page that includes getLayout. You should then create a new type for AppProps that overrides the Component property to allow the previously created type to be used:

// pages/index.tsx
import type { ReactElement } from 'react'
import Layout from '../components/layout'
import Nested from '../components/nested'

export default function Page(){
    return (
        <div>{/* page content */}</div>
    )
}
Page.getLayout = (page: ReactElement) => (
    <Layout>
        <Nested>{page}</Nested>
    </Layout>
)


// pages/_app.tsx
import type { ReactElement, ReactNode } from 'react'
import type { NextPage } from 'next'
import type { AppProps } from 'next/app'

type NextPageWithLayout = NextPage & {
    getLayout?: (page: ReactElement) => ReactNode
}
type AppPropsWithLayout = AppProps & {
    Component: NextPageWithLayout
}
export default function App({ Component, pageProps }: AppPropsWithLayout){
    const getLayout = Component.getLayout ?? ((page) => page)
    return getLayout(<Component  {...pageProps} />)
}

Fetching data

The data in the layout can be retrieved on the client side using useEffect or utilities like SWR. Because the layout is not a page, it currently cannot use getStaticProps or getServerSideProps:

import useSWR from 'swr'
import Navbar from './navbar'
import Footer from './footer'

export default function Layout({ children }){
    const { data, error } = useSWR('/data', fetcher)
    if (error) return <div>Error</div>
    if (!data) return <div>Loading...</div>
    return (
        <>
            <Navbar />
            <main>{children}</main>
            <Footer />
        </>
    )
}

Image component and image optimization

The Image component, imported from next/image, is an extension of the img HTML tag designed for the modern web. It includes several built-in optimizations to achieve good Core Web Vitals performance. These optimizations include the following:

  • improved performance
  • visual stability
  • faster page loading
  • image flexibility (scalability)

Example of using a local image:

import Image from 'next/image'
import imgSrc from '../public/some-image.png'

export default function Home(){
    return (
        <>
            <h1>Home page</h1>
            <Image
                src={imgSrc}
                alt=""
                role="presentation"
            />
        </>
    )
}

Example of using a remote image (note that you need to set the image width and height):

import Image from 'next/image'

export default function Home(){
    return (
        <>
            <h1>Home page</h1>
            <Image
                src="/some-image.png"
                alt=""
                role="presentation"
                width={500}
                height={500}
            />
        </>
    )
}

Defining image dimensions

Image expects to receive the width and height of the image:

  • in the case of static import (local image), the width and height are calculated automatically
  • width and height can be specified using appropriate props
  • if the image dimensions are unknown, you can use the layout prop with the fill value

There are 3 ways to solve the problem of unknown image sizes:

  • Using fill layout mode: This mode allows you to control the dimensions of the image using the parent element. In this case, the dimensions of the parent element are determined using CSS, and the dimensions of the image are determined using the object-fit and object-position properties
  • image normalization: if the source of the images is under our control, we can add resizing to the image when it is returned in response to the request
  • modification of API calls: the response to a request can include not only the image itself but also its dimensions

Rules for stylizing images:

  • choose the right layout mode
  • use className – it is set to the corresponding img element. Please note: style prop is not passed
  • when using layout="fill" the parent element must have position: relative
  • when using layout="responsive" the parent element must have display: block

See below for more information about the Image component.
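
For instance, a minimal sketch of the fill mode mentioned above – the image path and the container height are placeholders:

import Image from 'next/image'

export default function Hero() {
    return (
        // the parent must be positioned and sized; the image fills it
        <div style={{ position: 'relative', width: '100%', height: '400px' }}>
            <Image src="/hero.png" alt="" layout="fill" objectFit="cover" />
        </div>
    )
}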

Font optimization

Next.js automatically embeds fonts in CSS at build time:

// before
<link href="https://fonts.googleapis.com/css2?family=Inter" rel="stylesheet" />

// after
<style data-href="https://fonts.googleapis.com/css2?family=Inter">
  @font-face {font-family:'Inter';font-style:normal...}
</style>

To add a font to the page, use the Head component, imported from next/head:

// pages/index.js
import Head from 'next/head'

export default function IndexPage(){
    return (
        <div>
            <Head>
                <link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Inter&display=optional" />
            </Head>
            <p>Hello world!</p>
        </div>
    )
}

To add a font to the application, you need to create a custom document:

// pages/_document.js
import Document, { Html, Head, Main, NextScript } from 'next/document'
class MyDoc extends Document {
    render() {
        return (
            <Html>
                <Head>
                    <link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Inter&display=optional" />
                </Head>
                <body>
                    <Main />
                    <NextScript />
                </body>
            </Html>
        )
    }
}

Automatic font optimization can be disabled:

// next.config.js
module.exports = {
    optimizeFonts: false
}

For more information about the Head component, see below.

Script Component

The Script component allows developers to prioritize the loading of third-party scripts, saving time and improving performance.

The script loading priority is determined using the strategy prop, which takes one of the following values:

  • beforeInteractive: This is for important scripts that need to be loaded and executed before the page becomes interactive. Such scripts include, for example, bot detection and permission requests. Such scripts are embedded in the initial HTML and run before the rest of the JS
  • afterInteractive: for scripts that can be loaded and executed after the page has become interactive. Such scripts include, for example, tag managers and analytics. Such scripts are executed on the client side and run after hydration
  • lazyOnload: for scripts that can be loaded during idle periods. Such scripts include, for example, chat support and social network widgets

Note:

  • Script supports inline scripts with the afterInteractive and lazyOnload strategies
  • inline scripts wrapped in Script must have an id attribute to track and optimize them

Examples

Please note: the Script component should not be placed inside a Head component or a custom document.

Loading polyfills:

import Script from 'next/script'

export default function Home(){
    return (
        <>
            <Script src="https://polyfill.io/v3/polyfill.min.js?features=IntersectionObserverEntry%2CIntersectionObserver" strategy="beforeInteractive" />
        </>
    )
}

Lazy load:

import Script from 'next/script'

export default function Home(){
    return (
        <>
            <Script src="https://connect.facebook.net/en_US/sdk.js"strategy="lazyOnload" />
        </>
    )
}

Executing code after the page has fully loaded:

import { useState } from 'react'
import Script from 'next/script'

export default function Home(){
    const [stripe, setStripe] = useState(null)
    return (
        <>
            <Script
                id="stripe-js"
                src="https://js.stripe.com/v3/"
                onLoad={() => {
                    setStripe({ stripe: window.Stripe('pk_test_12345') })
                }}
            />
        </>
    )
}

Inline scripts:

import Script from 'next/script'

<Script id="show-banner" strategy="lazyOnload">
{`document.getElementById('banner').classList.remove('hidden')`}
</Script>

// or
<Script
  id="show-banner"
  dangerouslySetInnerHTML={{
    __html: `document.getElementById('banner').classList.remove('hidden')`
  }}
/>

Passing attributes:

import Script from 'next/script'

export default function Home(){
    return (
        <>
            <Script
                src="https://www.google-analytics.com/analytics.js"
                id="analytics"
                nonce="XUENAJFW"
                data-test="analytics"
            />
        </>
    )
}

Serving static files

Static resources should be placed in the public directory, located in the root of the project. Files located in the public directory are accessible via the base URL /:

import Image from 'next/image'

export default function Avatar(){
    return <Image src="/me.png" alt="me" width="64" height="64" />
}

This directory is also great for storing files such as robots.txt, favicon.png, files necessary for Google site verification and other static files (including .html).

Real-time update

Next.js supports real-time component updates while maintaining local state in most cases (this only applies to functional components and hooks). The state of the component is also preserved when errors (non-rendering related) occur.

To reload a component, just add // @refresh reset anywhere.

TypeScript

Next.js supports TypeScript out of the box. There are special types GetStaticProps, GetStaticPaths and GetServerSideProps for getStaticProps, getStaticPaths and getServerSideProps:

import { GetStaticProps, GetStaticPaths, GetServerSideProps } from 'next'

export const getStaticProps: GetStaticProps = async (context) => {
    // ...
}
export const getStaticPaths: GetStaticPaths = async () => {
    // ...
}
export const getServerSideProps: GetServerSideProps = async (context) => {
    // ...
}

An example of using built-in types for the routing interface (API Routes):

import type { NextApiRequest, NextApiResponse } from 'next'

export default (req: NextApiRequest, res: NextApiResponse) => {
    res.status(200).json({ message: 'Hello!' })
}

There is nothing preventing us from typing the data contained in the response:

import type { NextApiRequest, NextApiResponse } from 'next'
type Data = {
    name: string
}

export default (req: NextApiRequest, res: NextApiResponse<Data>) => {
    res.status(200).json({ name: 'Bye!' })
}

Next.js supports paths and baseUrl settings defined in tsconfig.json.
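
For example, a minimal tsconfig.json sketch – the aliases are hypothetical:

{
    "compilerOptions": {
        "baseUrl": ".",
        "paths": {
            "components/*": ["components/*"],
            "lib/*": ["lib/*"]
        }
    }
}

With this in place, import Layout from 'components/layout' resolves regardless of how deeply the importing file is nested.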

Environment Variables

Next.js has built-in support for environment variables, which allows you to do the following:

  • use .env.local to load variables
  • expose variables to the browser using the NEXT_PUBLIC_ prefix

Assume we have a .env.local file, as below:

DB_HOST=localhost
DB_USER=myuser
DB_PASS=mypassword

This will automatically load process.env.DB_HOST, process.env.DB_USER and process.env.DB_PASS into the Node.js runtime:

// pages/index.js
export async function getStaticProps(){
    const db = await myDB.connect({
        host: process.env.DB_HOST,
        username: process.env.DB_USER,
        password: process.env.DB_PASS
    })
    // ...
}

Next.js allows you to use variables inside .env files:

HOSTNAME=localhost
PORT=8080
HOST=http://$HOSTNAME:$PORT

To pass an environment variable to the browser, you need to add the NEXT_PUBLIC_ prefix to it:

NEXT_PUBLIC_ANALYTICS_ID=value
// pages/index.js
import setupAnalyticsService from '../lib/my-analytics-service'
setupAnalyticsService(process.env.NEXT_PUBLIC_ANALYTICS_ID)

function HomePage() {
    return <h1>Hello world!</h1>
}

export default HomePage

In addition to .env.local, you can create .env (applied in both modes), .env.development (for development mode), and .env.production (for production mode) files. Please note: .env.local always takes precedence over the other files containing environment variables.

This concludes part 2. In part 3 we’ll talk about routing with Next.js – pages, API Routes, layouts, Middleware, and caching.









A crash course of Next.js: rendering strategies and data fetching (part 1)

This series is my Next.js study summary, and although it focuses on vanilla Next.js, all the features are applicable with the Sitecore SDK. It is similar to the guide I recently wrote about GraphQL and aims to reduce the learning curve for those switching to it from other tech stacks.

Next.js is a React-based framework designed for developing web applications with functionality beyond a SPA. As you know, the main disadvantage of SPAs is that search robots struggle to index their pages, which negatively affects SEO. Recently the situation has begun to change for the better – at least the pages of my small SPA/PWA application started being indexed normally. Nevertheless, Next.js significantly simplifies the process of developing multi-page and hybrid applications. It also provides many other exciting features I am going to share with you over this course.

Pages

In Next.js a page is a React component that is exported from a file with a .js, .jsx, .ts, or .tsx extension located in the pages directory. Each page is associated with a route by its name. For example, the page pages/about.js will be accessible at /about. Please note that the page should be exported by default using export default:

export default function About(){
  return <div>About us</div>
}

The route for pages/posts/[id].js will be dynamic, i.e. such a page will be available at posts/1, posts/2, etc.

By default, all pages are pre-rendered. This results in better performance and SEO. Each page is associated with a minimum amount of JS. When the page loads, JS code runs, which makes it interactive (this process is called hydration).

There are 2 forms of pre-rendering: static generation (SSG, which is the recommended approach) and server-side rendering (SSR, which is more familiar to those working with other server-side frameworks such as ASP.NET). The first involves generating HTML at build time and reusing it on every request. The second generates markup on the server for each request. Generating static markup is the recommended approach for performance reasons.

In addition, you can use client-side rendering, where certain parts of the page are rendered by client-side JS.
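
A minimal sketch of such client-side rendering with useEffect – the /api/stats endpoint is hypothetical:

import { useState, useEffect } from 'react'

export default function Stats() {
    const [data, setData] = useState(null)
    // runs in the browser after hydration, so it does not block pre-rendering
    useEffect(() => {
        fetch('/api/stats')
            .then((res) => res.json())
            .then(setData)
    }, [])
    if (!data) return <p>Loading...</p>
    return <p>Visitors today: {data.visitors}</p>
}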

SSG

SSG can generate pages both with and without data.

The case without data is pretty obvious:

export default function About(){
  return <div>About us</div>
}

There are 2 potential scenarios to generate a static page with data:

  1. Page content depends on external data: getStaticProps is used
  2. Page paths depend on external data: getStaticPaths is used (usually in conjunction with getStaticProps)

1. Page content depends on external data

Let’s assume that the blog page receives a list of posts from an external source, like a CMS:

// TODO: requesting `posts`
export default function Blog({ posts }){
    return (
        <ul>
            {posts.map((post) => (
                <li key={post.id}>{post.title}</li>
            ))}
        </ul>
    )
}

To obtain the data needed for pre-rendering, the asynchronous getStaticProps function must be exported from the file. This function is called during build time and allows you to pass the received data to the page in the form of props:

export default function Blog({ posts }){
    // ...
}

export async function getStaticProps(){
    const posts = await (await fetch('https://site.com/posts'))?.json()
    // the below syntax is important
    return {
        props: {
            posts
        }
    }
}

2. Page paths depend on external data

To handle the pre-rendering of a static page whose paths depend on external data, an asynchronous getStaticPaths function must be exported from a dynamic page (for example, pages/posts/[id].js). This function is called at build time and allows you to define prerendering paths:

export default function Post({ post }){
    // ...
}
export async function getStaticPaths(){
    const posts = await (await fetch('https://site.com/posts'))?.json()
    // pay attention to the structure of the returned array
    const paths = posts.map((post) => ({
        params: { id: post.id }
    }))
    // `fallback: false` means any path not returned by getStaticPaths results in a 404 page
    return {
        paths,
        fallback: false
    }
}

pages/posts/[id].js should also export the getStaticProps function to retrieve the post data with the specified id:

export default function Post({ post }){
    // ...
}
export async function getStaticPaths(){
    // ...
}
export async function getStaticProps({ params }){
    const post = await (await fetch(`https://site.com/posts/${params.id}`)).json()
    return {
        props: {
            post
        }
    }
}

SSR

To handle server-side page rendering, the asynchronous getServerSideProps function must be exported from the file. This function will be called on every page request.

function Page({ data }){
    // ...
}
export async function getServerSideProps(){
    const data = await (await fetch('https://site.com/data'))?.json()
    return {
        props: {
            data
        }
    }
}

Data Fetching

There are 3 functions to obtain the data needed for pre-rendering:

  • getStaticProps (SSG): Retrieving data at build time
  • getStaticPaths (SSG): Define dynamic routes to pre-render pages based on data
  • getServerSideProps (SSR): Get data on every request

getStaticProps

A page that exports an asynchronous getStaticProps function is pre-rendered using the props returned by that function:

export async function getStaticProps(context){
    return {
        props: {}
    }
}

context is an object with the following properties:

    • params – route parameters for pages with dynamic routing. For example, if the page file is named [id].js, params will be { id: … }
    • preview – true if the page is in preview mode
    • previewData – data set using setPreviewData
    • locale – current locale (if enabled)
    • locales – supported locales (if enabled)
    • defaultLocale – default locale (if enabled)

getStaticProps returns an object with the following properties:

  • props – optional object with props for the page
  • revalidate – an optional number of seconds after which the page is regenerated. The default value is false – regeneration is performed only with the next build
  • notFound is an optional boolean value that allows you to return a 404 status and the corresponding page, for example:
export async function getStaticProps(context) {
    const res = await fetch('/data')
    const data = await res.json()
    if (!data) {
        return {
            notFound: true
        }
    }
    return {
        props: {
            data
        }
    }
}

Note that notFound is not required in fallback: false mode, since in this mode only the paths returned by getStaticPaths are pre-rendered.

Also, note that notFound: true means a 404 is returned even if the previous page was successfully generated. This is designed to support cases where user-generated content is removed.

  • redirect – an optional object that allows you to perform redirects to internal and external resources; it must have the form { destination: string, permanent: boolean }:
export async function getStaticProps(context){
    const res = await fetch('/data')
    const data = await res.json()
    if (!data) {
        return {
            redirect: {
                destination: '/',
                permanent: false
            }
        }
    }
    return {
        props: {
            data
        }
    }
}

Note 1: Build-time redirects are not currently allowed. Such redirects must be declared in next.config.js.

Note 2: Modules imported at the top level for use within getStaticProps are not included in the client bundle. This means that server code, including reads from the file system or a database, can be written directly in getStaticProps, as in the sketch below.
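
For example, a sketch reading markdown files straight from the file system – the posts directory is hypothetical:

// pages/blog.js
import fs from 'fs'
import path from 'path'

export async function getStaticProps() {
    // runs only on the server at build time, so fs never reaches the client bundle
    const dir = path.join(process.cwd(), 'posts')
    const posts = fs
        .readdirSync(dir)
        .map((file) => fs.readFileSync(path.join(dir, file), 'utf8'))
    return { props: { posts } }
}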

Note 3: fetch() in getStaticProps should only be used when fetching resources from external sources.

Use Cases

  • the data required for rendering is available at build time and does not depend on the user request
  • data comes from a headless CMS
  • data can be publicly cached (it is not user-specific)
  • the page must be pre-rendered (for SEO purposes) and must be very fast – getStaticProps generates HTML and JSON files that can be cached using a CDN

Use it with TypeScript:

import { GetStaticProps } from 'next'
export const getStaticProps: GetStaticProps = async (context) => { }

To get the desired types for props you should use InferGetStaticPropsType<typeof getStaticProps>:

import { InferGetStaticPropsType } from 'next'
type Post = {
    author: string
    content: string
}
export const getStaticProps = async () => {
    const res = await fetch('/posts')
    const posts: Post[] = await res.json()
    return {
        props: {
            posts
        }
    }
}
export default function Blog({ posts }: InferGetStaticPropsType<typeof getStaticProps>){
    // posts will be strongly typed as `Post[]`
}

ISR: Incremental static regeneration

Static pages can be updated after the application is built. Incremental static regeneration allows you to use static generation at the individual page level without having to rebuild the entire project.

Example:

const Blog = ({ posts }) => (
    <ul>
        {posts.map((post) => (
            <li key={post.id}>{post.title}</li>
        ))}
    </ul>
)
// Executes while building on a server.
// It can be called repeatedly as a serverless function when invalidation is enabled and a new request arrives
export async function getStaticProps(){
    const res = await fetch('/posts')
    const posts = await res.json()
    return {
        props: {
            posts
        },
        // `Next.js` will try regenerating a page:
        // - when a new request arrives
        // - at least once every 10 seconds
        revalidate: 10 // in seconds
    }
}
// Executes while building on a server.
// It can be called repeatedly as a serverless function if the path has not been previously generated
export async function getStaticPaths(){
    const res = await fetch('/posts')
    const posts = await res.json()
    // Retrieving paths for posts pre-rendering
    const paths = posts.map((post) => ({
        params: { id: post.id }
    }))
    // Only these paths will be pre-rendered at build time
    // `{ fallback: 'blocking' }` will render pages serverside in the absence of a corresponding path
    return { paths, fallback: 'blocking' }
}
export default Blog

When requesting a page that was pre-rendered at build time, the cached page is displayed.

  • The response to any request to such a page before 10 seconds have elapsed is also instantly returned from the cache
  • After 10 seconds, the next request also receives a cached version of the page in response
  • After this, page regeneration starts in the background
  • After successful regeneration, the cache is invalidated and a new page is displayed. If regeneration fails, the old page remains unchanged

Technical nuances

getStaticProps

  • Since getStaticProps runs at build time, it cannot use data from the request, such as query params or HTTP headers.
  • getStaticProps only runs on the server, so it should not be used to call your own API routes – call the server-side logic directly instead
  • when using getStaticProps, not only HTML is generated, but also a JSON file. This file contains the results of getStaticProps and is used by the client-side routing mechanism to pass props to components
  • getStaticProps can only be used in a page component. This is because all the data needed to render the page must be available
  • in development mode getStaticProps is called on every request
  • in preview mode, getStaticProps is called on every request

getStaticPaths

A page with dynamic routing that exports an asynchronous getStaticPaths function will be pre-generated for all the paths returned by that function.

export async function getStaticPaths(){
    return {
        paths: [
            { params: {} }
        ],
        fallback: true // or false, or 'blocking'
    }
}

Paths

paths defines which paths will be pre-rendered. For example, if we have a page with dynamic routing called pages/posts/[id].js, and the getStaticPaths exported on that page returns paths as below:

return {
    paths: [
        { params: { id: '1' } },
        { params: { id: '2' } },
    ]
}

Then the posts/1 and posts/2 pages will be statically generated based on the pages/posts/[id].js component.

Please note that the name of each params must match the parameters used on the page:

  • if the page is pages/posts/[postId]/[commentId], then params should contain postId and commentId
  • if the page uses a catch-all route, for example, pages/[...slug], params must contain slug as an array. For example, if such an array looks like ['foo', 'bar'], then the page /foo/bar will be generated
  • if the page uses an optional catch-all route, passing null, [], undefined, or false will cause the top-level route to be rendered. For example, applying slug: false to pages/[[...slug]] will generate the page / (see the sketch right after this list)
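To illustrate the last two bullets, here is a minimal sketch for a hypothetical optional catch-all page pages/[[...slug]].js (the paths themselves are made up for illustration):

export async function getStaticPaths(){
    return {
        paths: [
            // generates the page /foo/bar
            { params: { slug: ['foo', 'bar'] } },
            // generates the top-level route /
            { params: { slug: false } }
        ],
        fallback: false
    }
}
export async function getStaticProps({ params }){
    // params.slug is an array of segments, or undefined for the top-level route
    return {
        props: {
            slug: params.slug ?? []
        }
    }
}
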
Fallback
  • if fallback is false, the missing path will be resolved by a 404 page
  • if fallback is true, the behavior of getStaticProps will be:
      • paths from getStaticPaths will be generated at build time using getStaticProps
      • a missing path will not be resolved by a 404 page. Instead, a fallback page will be returned in response to the request
      • The requested HTML and JSON are generated in the background. This includes calling getStaticProps
      • the browser receives JSON for the generated path. This JSON is used to automatically render the page with the required props. From the user’s perspective, this looks like switching between the fallback and the full page
      • the new path is added to the list of pre-rendered pages

Please note: fallback: true is not supported when using next export.

Fallback pages

In the fallback version of the page:

  • the page’s props will be empty
  • you can determine that a fallback page is being rendered using the router: router.isFallback will be true

// pages/posts/[id].js
import { useRouter } from 'next/router'
function Post({ post }){
    const router = useRouter()
    // This will be displayed if the page has not yet been generated, 
    // Until `getStaticProps` finishes its work
    if (router.isFallback) {
        return <div>Loading...</div>
    }
    // post rendering
}
export async function getStaticPaths(){
    return {
        paths: [
            { params: { id: '1' } },
            { params: { id: '2' } }
        ],
        fallback: true
    }
}
export async function getStaticProps({ params }){
    const res = await fetch(`/posts/${params.id}`)
    const post = await res.json()
    return {
        props: {
            post
        },
        revalidate: 1
    }
}
export default Post

In what cases might fallback: true be useful? It can be useful with a truly large number of static pages that depend on data (for example, a very large e-commerce storefront). We want to pre-render all the pages, but we know the build will take forever.

Instead, we generate a small set of static pages and use fallback: true for the rest. When requesting a missing page, the user will see a loading indicator for a while (while getStaticProps does its job) and then see the page itself. After that, the generated page is served in response to every subsequent request.

Please note: fallback: true does not refresh the generated pages. Incremental static regeneration is used for this purpose instead.

If fallback is set to 'blocking', a missing path will also not be resolved by the 404 page, but there will be no transition between the fallback and normal pages. Instead, the requested page will be generated on the server and sent to the browser, so the user, after waiting for some time, immediately sees the finished page.

Use cases for getStaticPaths

getStaticPaths is used to pre-render pages with dynamic routing. Use it with TypeScript:

import { GetStaticPaths } from 'next'

export const getStaticPaths: GetStaticPaths = async () => { }

Technical nuances:

  • getStaticPaths must be used in conjunction with getStaticProps. It cannot be used in conjunction with getServerSideProps
  • getStaticPaths only runs on the server at build time
  • getStaticPaths can only be exported in a page component
  • in development mode getStaticPaths runs on every request

getServerSideProps

The page from which the asynchronous getServerSideProps function is exported will be rendered on every request using the props returned by this function.

export async function getServerSideProps(context){
    return {
        props: {}
    }
}

context is an object with the following properties:

  • params: see getStaticProps
  • req: HTTP IncomingMessage object (incoming message, request)
  • res: HTTP response object
  • query: object representation of the query string
  • preview: see getStaticProps
  • previewData: see getStaticProps
  • resolvedUrl: a normalized version of the requested URL, with the _next/data prefix removed and the original query string values included
  • locale: see getStaticProps
  • locales: see getStaticProps
  • defaultLocale: see getStaticProps

getServerSideProps should return an object with the following fields:

  • props – see getStaticProps
  • notFound – see getStaticProps
    export async function getServerSideProps(context){
        const res = await fetch('/data')
        const data = await res.json()
        if (!data) {
            return {
                notFound: true
            }
        }
        return {
            props: {}
        }
    }
  • redirect — see getStaticProps
    export async function getServerSideProps(context){
        const res = await fetch('/data')
        const data = await res.json()
        if (!data) {
            return {
                redirect: {
                    destination: '/',
                    permanent: false
                }
            }
        }
        return {
            props: {}
        }
    }

getServerSideProps has the same features and limitations as getStaticProps.

Use cases for getServerSideProps

getServerSideProps should only be used when you need to pre-render the page based on request-specific data. Use it with TypeScript:

import { GetServerSideProps } from 'next'
export const getServerSideProps: GetServerSideProps = async () => { }

To get the expected types for props you should use InferGetServerSidePropsType<typeof getServerSideProps>:

import { InferGetServerSidePropsType } from 'next'
type Data = {}
export async function getServerSideProps(){
    const res = await fetch('/data')
    const data = await res.json()
    return {
        props: {
            data
        }
    }
}
function Page({ data }: InferGetServerSidePropsType<typeof getServerSideProps>){
    // ...
}
export default Page

Technical nuances:

  • getServerSideProps runs only serverside
  • getServerSideProps can only be exported in a page component

Client-side data fetching

If a page has frequently updated data but doesn’t need to be pre-rendered (for SEO purposes), it is perfectly possible to fetch its data directly on the client side.

The Next.js team recommends using their useSWR hook for this purpose, which provides features such as data caching, cache revalidation, focus tracking, periodic refetching, and more.

import useSWR from 'swr'
const fetcher = (url) => fetch(url).then((res) => res.json())
function Profile(){
    const { data, error } = useSWR('/api/user', fetcher)
    if (error) return <div>Error while retrieving the data</div>
    if (!data) return <div>Loading...</div>
    return <div>Hello, {data.name}!</div>
}

However, you’re not limited to it: good old client-side fetching with fetch() in React works perfectly well for this purpose too, as sketched below.
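If you prefer plain fetch(), a minimal sketch of the same Profile component without SWR could look like this (the /api/user endpoint is only an illustration):

import { useEffect, useState } from 'react'

function Profile(){
    const [data, setData] = useState(null)
    const [error, setError] = useState(null)
    useEffect(() => {
        fetch('/api/user')
            .then((res) => res.json())
            .then(setData)
            .catch(setError)
    }, [])
    if (error) return <div>Error while retrieving the data</div>
    if (!data) return <div>Loading...</div>
    return <div>Hello, {data.name}!</div>
}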

This concludes part 1. In part 2 we’ll talk about UI-related things coming to OOB with Next.js – layouts, styles, and fonts powerful features, Image and Script components, and of course – TypeScript.

A full guide to creating a multi-language sites with Sitecore XM Cloud and Next.js

Historically, it was quite challenging to add custom languages to Sitecore, as it depended on the cultures registered with the .NET Framework at the OS level. Of course, there were a few workarounds, like registering a custom culture on Windows, but that only added other challenges for scenarios such as having more than one Content Delivery server.

Luckily, both XM Cloud and SXA changed the way we deal with it, not to mention retiring CD servers. I am going to show the entire walkthrough in action – everything you need to do on the CM side of things and in your Next.js head application – using a custom component as an example. So, here we go!

In the beginning, I only had a basic website running in a multi-site SXA-based environment, which already benefits from a responsive Header component with navigation items. That is a good place to locate a language dropdown selector, as that’s where users traditionally expect it to be. But first, let’s add languages into the system, as English is the default and the only one I have so far. Since I live in North America, I am going to add the two most popular languages – French and Spanish, as commonly spoken in Quebec and Mexico respectively.

Adding Languages

In XM Cloud, languages are located under /sitecore/system/Languages folder. If a language is not present there, you won’t be able to use it, which is my case. I like the functional language selector dialog provided by the system:

Adding language into the system

Pro tip: don’t forget to add languages into serialization.

After the language lands in the system, we can add it to a specific website. I enjoy that plenty of the scaffolding in SXA is based on SPE scripts, and here’s one such case. To add the site language, choose Scripts from the context menu, then select Add Site Language:

02

Then specify the language of choice, as well as some parameters, including the language fallback option to the defaults.

03

In XM Cloud you can find a new Custom Culture section on the language item, which has two important fields:

  • Base ISO Culture Code: the base language you want to use, for example: en
  • Fallback Region Display Name: display name that can be used in the content editor or in the Sitecore pages.

Now both the system and website have these new languages. The next step would be introducing a drop-down language selector, at the top right corner of a header.

Unlike the traditional non-headless versions of XP/XM platforms, XM Cloud is fully headless and serves the entire layout of a page item with all the components via Experience Edge, or a local GraphQL endpoint running on a local CM container with the same schema. Here’s what it looks like in GraphQL IDE Playground:

04

There are two parts to it: context, which contains useful Sitecore context information such as path, site, editing mode, and current language; and route, which carries the entire layout of placeholders, components, and field values. Since the language selector is a part of the header and is shown on every single page, it would be really great (and logical) to provide the list of available languages to this component via the context. How can we achieve that?

The good news is that this is pretty doable through platform customization, by extending the getLayoutServiceContext pipeline with a ContextExtension processor:

<?xml version="1.0"?>
<configuration
	xmlns:patch="http://www.sitecore.net/xmlconfig/"
	xmlns:set="http://www.sitecore.net/xmlconfig/set/"
	xmlns:role="http://www.sitecore.net/xmlconfig/role/">
	<sitecore>
		<pipelines>
			<group groupName="layoutService">
				<pipelines>
					<getLayoutServiceContext>
						<processor type="JumpStart.Pipelines.ContextExtension, JumpStart" resolve="true"/>
					</getLayoutServiceContext>
				</pipelines>
			</group>
		</pipelines>
	</sitecore>
</configuration>

and the implementation:

using System.Collections.Generic;
using Sitecore.Data.Items;
using Sitecore.Diagnostics;
using Sitecore.Globalization;
using Sitecore.JavaScriptServices.Configuration;
using Sitecore.LayoutService.ItemRendering.Pipelines.GetLayoutServiceContext;
namespace JumpStart.Pipelines
{
    public class ContextExtension : Sitecore.JavaScriptServices.ViewEngine.LayoutService.Pipelines.
            GetLayoutServiceContext.JssGetLayoutServiceContextProcessor
    {
        public ContextExtension(IConfigurationResolver configurationResolver) : base(configurationResolver)
        {
        }

        protected override void DoProcess(GetLayoutServiceContextArgs args, AppConfiguration application)
        {
            Assert.ArgumentNotNull(args, "args");
            var langVersions = new List<Language>();
            Item tempItem = Sitecore.Context.Item;
            foreach (var itemLanguage in tempItem.Languages)
            {
                var item = tempItem.Database.GetItem(tempItem.ID, itemLanguage);
                if (item.Versions.Count > 0 || item.IsFallback)
                {
                    langVersions.Add(itemLanguage);
                }
            }
            args.ContextData.Add("Languages", langVersions);
        }
    }
}

To make this code work, we need to reference the Sitecore.JavaScriptServices package. There was an issue that occurred after adding the package: the compiler errored out, demanding the exact version number of this package. It should be specified in packages.props at the root of the monorepo, as below:

<PackageReference Update="Sitecore.JavaScriptServices.ViewEngine" Version="21.0.583" />

After deploying, I receive the site languages as a part of the Sitecore context object for every single page:

08
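For reference, the shape the head application can now rely on looks roughly like this – a sketch based on the processor above and the component code later in this post; the exact set of serialized Language fields may differ:

type ExtendedSitecoreContext = {
    // current language of the requested route
    language: string;
    // added by the ContextExtension processor above
    Languages: Array<{ Name: string }>;
};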

If for some reason you do not see a language in the GraphQL output, even though it exists in both the system and your site, make sure it has language fallback specified:

05

You also need to configure language fallback at the system level, as per the official documentation. I ended up with this config patch:

<?xml version="1.0"?>
<configuration
	xmlns:patch="http://www.sitecore.net/xmlconfig/"
	xmlns:set="http://www.sitecore.net/xmlconfig/set/"
	xmlns:role="http://www.sitecore.net/xmlconfig/role/">
	<sitecore>
		<settings>
			<setting name="ExperienceEdge.EnableItemLanguageFallback" value="true"/>
			<setting name="ExperienceEdge.EnableFieldLanguageFallback" value="true"/>
		</settings>
		<sites>
			<site name="shell">
				<patch:attribute name="contentStartItem">/Jumpstart/Jumpstart/Home</patch:attribute>
				<patch:attribute name="enableItemLanguageFallback">true</patch:attribute>
				<patch:attribute name="enableFieldLanguageFallback">true</patch:attribute>
			</site>
			<site name="website">
				<patch:attribute name="enableItemLanguageFallback">true</patch:attribute>
				<patch:attribute name="enableFieldLanguageFallback">true</patch:attribute>
			</site>
		</sites>
	</sitecore>
</configuration>

I also followed the documentation and configured language fallback on a page item level at the template’s standard values:

10

Now, when I try to navigate to my page by typing https://jumpstart.localhost/es-MX/Demo, the browser shows me a 404. Why so?

07

This happens because, despite the fact that we added the languages in the XM Cloud CM and even specified the fallback, the Next.js head application knows nothing about these languages and cannot serve the corresponding routes. The good news is that the framework easily supports this – just add the served languages into next.config.js of the relevant JSS application:

i18n: {
  locales: [
    'en',
    'fr-CA',
    'es-MX',
  ],
  defaultLocale: jssConfig.defaultLanguage,
  localeDetection: false,
}

After the JSS app restarts and upon refreshing a page, there’s no more 404 error. If done right, you might already see a header.

But in my case the page was blank: no error, but no header either. The reason for this is pretty obvious – since I benefit from reusable components thanks to the truly multisite architecture of SXA, my header component belongs to a Partial Layout, which in turn belongs to the Shared website. Guess what? It does not have the languages installed, so we need to repeat the language installation for the Shared site as well. Once done, everything works as expected and you see the header from the shared site’s partial layout:

09

Dropdown Language Selector

Now, I decided to implement a dropdown Language Selector component at the top right corner of a header that picks up all the available languages from the context and allows switching. This will look something like the below:

import { SitecoreContextValue } from '@sitecore-jss/sitecore-jss-nextjs';
import { useRouter } from 'next/router';
import { ParsedUrlQueryInput } from 'querystring';
import { useEffect, useState } from 'react';
import { ComponentProps } from 'lib/component-props';
import styles from './LanguageSelector.module.css';

export type HeaderContentProps = ComponentProps & {
    pathname?: string;
    asPath?: string;
    query?: string | ParsedUrlQueryInput;
    sitecoreContext: SitecoreContextValue;
};
const LanguageSelector = (props: HeaderContentProps): JSX.Element => {
    const router = useRouter();
    const [languageLabels, setLanguageLabels] = useState<(string | undefined)[]>([]);
    const sxaStyles = `${props.params?.styles || ''}`;
    const languageNames = new Intl.DisplayNames(['en'], { type: 'language' });
    const languageList = props.sitecoreContext['Languages'] as NodeJS.Dict<string | string>[];
    useEffect(() => {
        const labels = languageList.map((language) => languageNames.of(language['Name']));
        setLanguageLabels(labels);
    }, []);
    const changeLanguage = (lang: string) => {
        if (props.pathname && props.asPath && props.query) {
            router.push(
                {
                    pathname: props.pathname,
                    query: props.query,
                },
                props.asPath,
                {
                    locale: lang,
                    shallow: false,
                }
            );
        }
    };
    const languageSelector = languageList && languageLabels.length > 0 && (
        <select
            onChange={(e) => changeLanguage(e.currentTarget.value)}
            className="languagePicker"
            value={props.sitecoreContext.language}
        >
            {languageList.map((language, index) => (
                <option
                    key={index}
                    value={language['Name']}
                    label={languageLabels[index]}
                    className="languageItem"
                >
                    {languageNames.of(language['Name'])}
                </option>
            ))}
        </select>
    );
    return (
        <>
            <div className={`${styles.selector} ${sxaStyles}`}>{languageSelector}</div>
        </>
    );
};
export default LanguageSelector;

Since I made the header responsive with a “hamburger” menu seen on mobiles, I am also referencing responsive styles for this component:

.selector {
    float: right;
    position: relative;
    top: 13px;
    right: 40px;
}

@media screen and (max-width: 600px) {
    .selector {
        right: 0px;
    }
}

Now it can be used from the header as:

<LanguageSelector pathname={pathname} asPath={asPath} query={query} sitecoreContext={sitecoreContext} {...props} />
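For completeness, here is a rough sketch of where those values come from in the header itself – assuming the header uses useRouter and useSitecoreContext; the import path of LanguageSelector is just an assumption:

import { useRouter } from 'next/router';
import { useSitecoreContext } from '@sitecore-jss/sitecore-jss-nextjs';
import { ComponentProps } from 'lib/component-props';
import LanguageSelector from './LanguageSelector'; // path is an assumption

const Header = (props: ComponentProps): JSX.Element => {
    // route information lets the selector re-push the same route under a different locale
    const { pathname, asPath, query } = useRouter();
    // Sitecore context carries the Languages list added by the ContextExtension processor
    const { sitecoreContext } = useSitecoreContext();
    return (
        <LanguageSelector
            pathname={pathname}
            asPath={asPath}
            query={query}
            sitecoreContext={sitecoreContext}
            {...props}
        />
    );
};
export default Header;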

and it indeed looks and works well:

11

Switching to Spanish correctly leverages Next.js to switch the language context and change the URL:

12

Now, let’s progress with the multi-language website by adding a demo component and playing with it.

Adding a Component

For the sake of the experiment, I decided to go with something basic – an extended Rich Text component that, in addition to a datasource, also receives a background color from Rendering Parameters. It renders 3 lines:

  • the top line in bold is always static and is just a hardcoded name of the component, should not be translated
  • the middle line is internal to the component rendering, therefore I cannot take it from the datasource, so I use the Dictionary instead
  • the bottom one is the only line editable in Pages/EE and comes from the datasource item, the same as with the original RichText

Here’s what it looks like on a page:

13

And here’s its code (ColorRichText.tsx):

import React from 'react';
import { Field, RichText as JssRichText } from '@sitecore-jss/sitecore-jss-nextjs';
import { useI18n } from 'next-localization';
interface Fields {
    Text: Field<string>;
}
export type RichTextProps = {
    params: { [key: string]: string };
    fields: Fields;
};
export const Default = (props: RichTextProps): JSX.Element => {
    const text = props.fields ? (
        <JssRichText field={props.fields.Text} />
    ) : (
        <span className="is-empty-hint">Rich text</span>
    );
    const id = props.params.RenderingIdentifier;
    const { t } = useI18n();
    return (
        <div
            className={`component rich-text ${props.params.styles?.trimEnd()}`}
            id={id ? id : undefined}
        >
            <div className="component-content">
                <h4>Rich Text with Background Color from Rendering Parameters</h4>
                <span>{t('Rendering Parameters') || 'fallback content also seen in EE'}: </span>
                {text}
                <style jsx>{`
          .component-content {
            background-color: ${props.params.textColor
                        ? props.params.textColor?.trimEnd()
                        : '#FFF'};
          }
        `}</style>
            </div>
        </div>
    );
};

What is also special about this component is that I am using i18n to reach out to Dictionary items – see these lines:

import { useI18n } from 'next-localization';
const { t } = useI18n();
<span>{t('Rendering Parameters') || 'fallback content, it is also seen in EE when defaults not configured'}: </span>

Next, create a version of each language for the datasource item and provide the localized content. You have to create at least one version per language to avoid falling back to the default language – English. The same also applies to the dictionary item:

14

The result looks as below:

16

15

Rendering Parameters

Now, you’ve probably noticed that the English version of the component has a yellow background. That comes from rendering parameters in action configured per component so that editors can choose a desired background color from a dropdown (of course, it is a very oversimplified example for demo purposes).

22

What is interesting in localization is that you can also customize Rendering Parameters per language (or keep them shared by the language fallback).

Rendering Parameters are a bit tricky, as they are stored in the __Renderings and __Final Renderings fields of a page item (for the Shared and Versioned layouts respectively), derived from the Standard template (/sitecore/Templates/System/Templates/Standard template). That means that when you come to a page with a language fallback, you cannot specify Rendering Parameters for that language unless you create a version of the entire page. Both Content Editor and EE will prevent you from doing that while there is a language fallback for the page item:

Content Editor

Experience Editor

Creating a new language version can be an excessive effort, as by default it makes you set up the entire page layout, add components (again), and re-assign all the datasources for that version. It can be simplified by copying the desired layout from the versioned __Final Renderings field to the shared __Renderings field, so that each time you create a new language version for a page item, you “inherit” that shared design instead of creating it from scratch. However, that approach also has some caveats – you may find some discussions around that (here and there).

In any case, we’ve got the desired result:

19

English version

20

French version

21

Spanish version

Hope you find this article helpful!

Reviewing my 2023 Sitecore MVP contributions

The Sitecore MVP program is designed to recognize individuals who have demonstrated advanced knowledge of the Sitecore platform and a commitment to sharing knowledge and technical expertise with community partners, customers, and prospects over the past year. The program is open to anyone who is passionate about Sitecore and has a desire to contribute to the community.

Over the past application year starting from December 1st, 2022, I have been actively involved in the Sitecore community, contributing in a number of ways.

Sitecore Blogs

  1. This year I have written 45(!) blog posts – and that’s on the Perficient site alone – on various topics related to Sitecore, including my top-notch findings about XM Cloud and other composable products, best practices, tips and tricks, and case studies. Listing them all as bullets would make this post too long, therefore I instead leave the link to the entire list of them.
  2. I’ve also been posting on my very own blog platform, which already contains more than 200 posts about Sitecore accumulated over the past years.

Sitecore User Groups

  1. Organized four Los Angeles Sitecore User Groups  (#15, #16, #17 and #18)
  2. Established and organized the most wanted user group of the year – the Sitecore Headless Development User Group. This one is very special, since headless development has become the new normal for delivering sites with Sitecore, while so many professionals feel left behind, unable to catch up with the fast-emerging tech. I made it my personal mission to help the community learn and grow “headlessly”, and that is one of my commitments to it. There have been two events so far (#1 and #2), with #3 scheduled for December 12th.
  3. Facilitated and co-organized the Sitecore Southeast Europe User Group (Balkans / Serbia), which I sponsor from my own pocket (Meetup + Zoom).
  4. Presented my new topic “Mastery of XM Cloud” on 17 Jan  at the Jaipur user group, India

SUGCON Conferences 2023

  1. Everyone knows how I love these conferences, especially SUGCON Europe. This year I submitted my speech papers again and got chosen again with the topic Accelerate Headless SXA Builds with XM Cloud. Last year I hit the same stage with The Ultimate Sitecore Upgrade session.
  2. I was very eager to attend SUGCON India and submitted a joint speech with my genius mentee Tiffany Laster – this proposal got chosen (yay!). Unfortunately, at the very last minute my company changed the priorities and we were not allowed to travel. Since a remote presentation was not an option there, I have the warmest hopes of presenting there the following year. Two of my other mentees (Neha Pasi and Mahima Patel), however, found their way to that stage and presented a meaningful session on Sitecore Discover. Tiffany was also chosen as a speaker for the following SUGCON NA, however with a different topic – the DAM Maturity Model.
  3. I was a proud member of the SUGCON NA 2023 Organizing Committee, which we had been planning for the past 15 months. We were collectively responsible for a wide range of tasks, but my primary personal responsibilities were organizing the session recording, building the event website, choosing the speakers from the speech proposals to build the agenda, and several more. I served as Room Captain for each and every session timeslot on Thursday and the majority on Friday.

GitHub

The Sifon project keeps going: it is not just maintained but also keeps receiving new features. Sifon gets support for each XM/XP release almost the next day. I am also working on a PoC of a composable version, Sifon Cloud; if proven, that would be a big thing for XM Cloud. Every time I am involved in a Sitecore platform upgrade or any other development or PoC working outside of containers, Sifon saves me a lot of time and effort.

I keep Awesome Sitecore up to date. This repository has plenty of stars on GitHub and is an integral part of the big Awesome Lists family; if you haven’t heard of Awesome Lists and their significance, I highly recommend reading these articles – the first and the second.

At the beginning of the year, I made guidance and a demo on how one can pair the .NET Headless SDK with XM Cloud in the same containers, working nicely together, along with releasing the source code for it.

There are also a few less significant repositories among my contributions that are still meaningful and helpful.

Sitecore Mentor Program

  • With lessons learned from the previous year of mentorship, this time I got 5 mentees, all young, ambitious, and brilliant, and I cannot stress enough how proud I am of them all and their achievements!
  • 3 of them found their way to SUGCON conferences as speakers (see above)
  • the others still deliver valuable contributions to the community.

MVP Program

  • I participate in all the webinars and MVP Lunches I can possibly reach (often in both timezones per event).
  • As in every past year, I have participated in a very honorable activity: helping to review the first-time applicants for the MVP Program. This is the first line of the evaluation, and we carefully match every first-time applicant against the high Sitecore MVP standards.
  • I think the MVP Summit is the best perk of the MVP Program, so I never miss it. This year I’ve learned so much and provided feedback to the product teams, as usual.

Sitecore Learning

I collaborated with the Sitecore Learning team for the past 2-3 years but this year my contributions exceeded the previous ones. Here are some:

  • I volunteered to become a beta tester for a new Instructor-led XM Cloud training and provided valuable feedback upon the completion
  • Collaborated with the team on XM Cloud certification exam (sorry cannot be more explicit here due to the NDA)
  • I was proud to be chosen as an expert to open the new Sitecore Tips & Tricks series organized by the Sitecore Learning team. In 60 minutes, I demonstrated the Sitecore Components Builder with an external feed integration from zero to hero, actually running it deployed to Vercel, all with writing zero lines of code (part 1 and part 2). Impressive!

Sitecore Telegram

  • Contributed to it even more than in any of the previous years; I am making Telegram a premium-level channel for delivering Sitecore news and materials. Telegram has a unique set of features that no other software can offer, and I am leveraging these advantages for more convenience for my subscribers.
  • Started in 2017 as a single channel, it has been expanding rapidly and has now reached a milestone of 1,000 subscribers!
  • Growth did not stop there but escalated further: with Sitecore going composable, there is now a dedicated channel for almost every composable product. Here they all are:

Other Contributions

  • Three times over this year I was an invited guest and expert to Sitecore Fireside at Apple Podcasts (one, two, and three)
  • I am very active on my LinkedIn (with 4K followers) and Twitter aka X (with almost 1.2K followers), posting multiple times per week, sometimes a few times a day.
  • With my dedication to Sitecore’s new flagship product – XM Cloud – it was no wonder that I managed to get certified in it among the first. It is a great product and I wish for it to become even better!

The above is what I recalled from a year of contributions so far. I think it could serve as a good example of an annual applicant’s contributions and of what the Sitecore MVP standards stand for. I wish you all to join this elite club in the coming year.



The correct way of creating own components with XM Cloud

I occasionally witness folks creating components in XM Cloud incorrectly, so I decided to create and share this guidance with you.

So, there are five steps involved in creating your own component:

  1. Create an SXA module that will serve as a pluggable container for all the necessary assets for your components, if not done yet.
  2. The easiest way to create a component is to clone a closely matching existing one. If you need to rely on datasource items, clone one that already leverages a datasource. The SPE scaffolding script will do the rest of the job for you. Make sure you assign the newly created component to the module from Step 1 above.
  3. Now, having a module with component(s), you need to make it visible to your website by adding the module to the chosen site. This makes your site show a corresponding section in the toolbox with the newly created component(s), ready to use straight away in both Experience Editor and Pages.
  4. You need to ensure the Component Name field references the name of the corresponding TSX codebase file, located at /src/<jss_app>/src/components/<your component>.tsx or one more level down within a folder named the same as the component (see the sketch after this list). Since the component is fully cloned from an existing one, you can also copy the original TSX file under the new name and it will work straight away.
  5. Don’t forget to add all the new locations to serialization, and check them into source control along with the codebase. Here are the locations to keep in mind:
    • /sitecore/layout/Placeholder Settings/Feature/Tailwind Components
    • /sitecore/templates/Feature/Tailwind Components
    • /sitecore/layout/Layouts/Feature/Tailwind Components
    • /sitecore/layout/Renderings/Feature/Tailwind Components
    • /sitecore/media library/Feature/Tailwind Components
    • /sitecore/templates/Branches/Feature/Tailwind Components
    • /sitecore/system/Settings/Feature/Tailwind Components
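For reference, a minimal sketch of what such a component file might look like (the Promo name and its Title field are made up for illustration; a real clone would carry over the markup of the component you cloned from):

import { Field, Text } from '@sitecore-jss/sitecore-jss-nextjs';

interface Fields {
    Title: Field<string>;
}

type PromoProps = {
    params: { [key: string]: string };
    fields: Fields;
};

// The file name (or its parent folder) must match the Component Name field of the cloned rendering
export const Default = (props: PromoProps): JSX.Element => (
    <div className={`component promo ${props.params.styles || ''}`}>
        <Text field={props.fields.Title} />
    </div>
);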

To make things simple, I recorded this entire walkthrough (and actually “talkthrough”) showing the entire process:

Hope you find this video helpful!

A crash course of GraphQL – from Zero to Hero

Almost anyone attending the XM Cloud sessions at SUGCON North America earlier saw GraphQL queries as a part of such presentations. For mature headless developers, getting through each query requires some amount of time, while for newbies that “user-friendly” syntax stands as an unreachable barrier. In the past, I failed to find any good article about this, without a doubt, great query language – some were way too excessive, while others missed out on a lot of the basics. I decided to fill this gap by creating this article, providing exactly that: the minimum required information for the maximum productive start. I wrote it for you in the exact manner I wish it had been written for me earlier.

Logo

What is GraphQL and why do I need it?

GraphQL is a query language and backend runtime for APIs, introduced by Facebook in 2012 and designed to make managing endpoints easier compared to REST-based APIs. In 2015, GraphQL was made open source, and Airbnb, GitHub, Pinterest, Shopify, and many other companies now use it.

When Facebook developers created a mobile application, they looked for ways to speed things up. There was a difficulty: when simultaneously querying from different types of databases, for example from cloud Redis and MySQL, the application slowed down terribly. To solve the problem, Facebook came up with its own query language that addresses a single endpoint and simplifies the form of the requested data. This was especially valuable for a social network with lots of connections and requests for related elements: say, getting posts from all subscribers of user X.

REST is a good and functional technology, but it has some problems:

  • Firstly, there’s redundancy or lack of data in the response. In REST APIs, clients often receive either too much data that they don’t need, or too little, forcing them to make multiple requests to get the information they need. GraphQL allows clients to request only the data they need and receive it in a single request, making communication more efficient.
  • Also, in a REST API, each endpoint usually corresponds to a specific resource, which can lead to extensibility problems and support for multiple API versions. GraphQL, however, features a single endpoint for all requests, and the API schema is defined server-side. This makes the API more flexible and easier to develop.
  • When working with related data in many REST APIs, the N+1 requests problem arises, when obtaining related data makes you do additional request roundtrips to a server. GraphQL allows you to define relationships between requested data and retrieve everything required in a single query.

Coming back to the above use case – a social network has many users, and for each user we need to get a list of their latest posts. To obtain such data from a typical REST API, one needs to make several requests: one request to the users endpoint to obtain the list of users, followed by another request to the posts endpoint to obtain the posts for all the required users from the previous request (in the worst case, this turns into one request per user). GraphQL solves this problem more efficiently: you can request a list of users and at the same time specify exactly what you want to get within the user details – in our case, the latest posts for each user.

Take a look at the example of GraphQL query implementing exactly that: request users with their 5 most recent posts:

query {
  users {
    id
    name
    posts(last:5){
      id
      text
      timestamp
    }
  }
}

What makes this work? This works thanks to the GraphQL structure.

But why does it have Graph in its name at all? Because it represents a data structure in the form of a graph, where the nodes represent objects and the edges represent connections between these objects. This reflects the way data and queries are organized in GraphQL, where clients can query related data, and only the data they need.

A graph shows the relationships of say a social network:

Graph

How do we access a graph via GraphQL? GraphQL goes to a specific record, called the root node, and instructs it to get all the details of that record. We can take, for example, user 1, and get their subscriber data. Let’s write a GraphQL query snippet to show how to access it:

query {
    user(id:"1"){
        followers {
            tweets {
                content
            }
        }
    }
}

Here we are asking GraphQL to navigate the graph from the root node, which is the user object with the argument id: 1, and access the content of the followers’ tweets.

Graph Query

So far, so good. Let’s discuss the query types in GraphQL in more detail.

GraphQL Request Types

There are three main request types in GraphQL:

  • Query
  • Mutation
  • Subscription

Sitecore uses only the first two and does not support subscriptions, but to keep this guide complete I will still mention how they work.

Queries in GraphQL

We have already become familiar with them from our earlier examples.

Using a query, the client receives the necessary data from the server. This request type is an analog of what GET does in REST. Queries are string values sent in the body of an HTTP POST request. Please note that all GraphQL request types are typically sent via POST, which is de facto the most common option for HTTP data exchange; GraphQL can also work over WebSockets, gRPC, and other transport protocols.
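For example, a plain HTTP request carrying a GraphQL query might look like the sketch below (the endpoint URL is only a placeholder):

const response = await fetch('https://site.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // the query travels as a string inside the JSON body
    body: JSON.stringify({
        query: `
            query {
                users {
                    fname
                    age
                }
            }
        `
    })
})
const { data, errors } = await response.json()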

We have already seen Query examples above, but let’s do it again to get the fname and age of all users:

query {
  users {
    fname
    age
  }
}

The server sends response data in JSON format so that the response structure matches the request structure:

{
    "data": {
        "users": [
            {
                "fname": "Mark",
                "age": 23
            },
            {
                "fname": "David",
                "age": 29
            }
        ]
    }
}

The response contains JSON with the data key and also the errors key (in case there are any errors). Below is an example of a faulty response when an error occurred – due to the fact that Maria’s age was mistakenly passed as a string value:

{
    "errors":[
        {
        "message":"Error: 'age' field has incorrect value 'test'.",
        "locations":[
            {
                "line":5,
                "column":5
            }
        ],
        "path":["users",0,"age"]
        }
    ],
    "data":{
        "users":[
            {
                "fname":"Maria",
                "age":"test"
            },
            {
                "fname":"Megan",
                "age":32
            }
        ]
    }
}

Mutations in GraphQL

Using mutations you can add or modify the data. Mutation is an analogue of POST and PUT in REST. Here’s a mutation request example:

mutation createUser{
  addUser(fname:"Martin", age:42){
    id
  }
}

This createUser mutation adds a user with the fname Martin and age 42. The server sends a JSON response to this request with the id of the resulting record. The response may look like below:

{
    "data": {
        "addUser": {
            "id": "a12e5d"
        }
    }
}

Subscription in GraphQL

With the help of subscriptions, the client receives database changes in real time. Under the hood, subscriptions use WebSockets. Here’s an example:

subscription listenLikes {
  listenLikes {
    fname
    likes
  }
}

The above query can, for example, return a list of users with their names and the count of likes every time it changes. Extremely helpful!

For example, when a user with fname Matt receives a like, the response would look like:

{
    "data": {
        "listenLikes": {
            "fname": "Matt",
            "likes": 245
        }
    }
}

A similar request can be used to update the likes count in real-time, say for the voting form results.

GraphQL Concepts

Now that we know different query types, let’s figure out how to deal with elements that are used in GraphQL.

Concepts I am going to cover below:

  1. Fields
  2. Arguments
  3. Aliases
  4. Fragments
  5. Variables
  6. Directives

1. Fields

Look at a simple GraphQL query:

{
  user {
    name
  }
}

In this request, you see 2 fields. The user field returns an object containing another field of type String. The GraphQL server will return a user object with only the user’s name. Simple enough, so let’s move on.

2. Arguments

In the example below, an argument is passed to indicate which user to refer to:

{
  user(id:"1"){
    name
  }
}

Here in particular we’re passing the user’s id, but we could also pass a name argument, assuming the API has a backend function to return such a response. We can also have a limit argument indicating how many subscribers we want to return in the response. The below query returns the name of the user with id=1 and their first 50 followers:

{
  user(id:"1"){
    name
    followers(limit:50)
  }
}

3. Aliases

GraphQL uses aliases to rename fields within a query response. This is useful when retrieving data from multiple fields that share the same names, so that these fields get distinct names in the response. Here’s an example of a GraphQL query using aliases:

query {
  products {
    name
    description
  }
  users {
    userName: name
    userDescription: description
  }
}

as well as the response to it:

{
    "data":{
        "products":[
            {
            "name":"Product A",
            "description":"Description A"
            },
            {
            "name":"Product B",
            "description":"Description B"
            }
        ],
        "users":[
            {
            "userName":"User 1",
            "userDescription":"User Description 1"
            },
            {
            "userName":"User 2",
            "userDescription":"User Description 2"
            }
        ]
    }
}

This way we can distinguish the name and description of the product from the name and description of the user in the response. It reminds me of the way we did this in SQL when joining two tables, to distinguish between the same names of two joined columns. This problem most often occurs with the id and name columns.

4. Fragments

Fragments are often used to break up complex application data requirements into smaller chunks, especially when you need to combine many UI components with different fragments into one initial data sample.

{
  leftComparison: tweet(id:1){
    ...comparisonFields
  }
  rightComparison: tweet(id:2){
    ...comparisonFields
  }
}

fragment comparisonFields on tweet {
  userName
  userHandle
  date
  body
  repliesCount
  likes
}

What’s going on with this request?

  1. We sent two requests to obtain information about two different tweets: a tweet with id equal 1 and tweet with id equal 2.
  2. For each request, we create aliases: leftComparison and rightComparison.
  3. We use the fragment comparisonFields, which contains a set of fields that we want to get for each tweet. Fragments allow us to avoid duplicating code and reuse the same set of fields in multiple places in the request (DRY principle).

It returns the following response:

{
    "data": {
        "leftComparison": {
            "userName": "foo",
            "userHandle": "@foo",
            "date": "2019-05-01",
            "body": "Life is good",
            "repliesCount": 10,
            "likes": 500
        },
        "rightComparison": {
            "userName": "boo",
            "userHandle": "@boo",
            "date": "2018-05-01",
            "body": "This blog is awesome",
            "repliesCount": 15,
            "likes": 700
        }
    }
}

5. Variables

GraphQL variables are a way to dynamically pass a value into a query. The example below provides a user id statically to the request:

{
  accholder: user(id:"1"){
    fullname: name
  }
}

Let’s now replace the static value by adding a variable. The above can be rewritten as:

query GetAccHolder($id: String){
  accholder: user(id: $id){
    fullname: name
  }
}
{
"id":"1"
}

In this example, GetAccHolder is a named operation, which is useful when you have plenty of requests in your application.

Then we declared the variable $id of type String. After that, it’s exactly the same as the original request: instead of a fixed id, we provide the variable $id. The actual values of the variables are passed in a separate block, as shown in the sketch below.
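Over plain HTTP this means the request body carries the query string and a separate variables object, roughly like this sketch:

const body = JSON.stringify({
    query: `
        query GetAccHolder($id: String) {
            accholder: user(id: $id) {
                fullname: name
            }
        }
    `,
    // variables travel next to the query string, not inside it
    variables: { id: '1' }
})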

We can also specify a default value for a variable:

query GetAccHolder($id: String = "1"){
  accholder: user(id: $id){
    fullname: name
  }
}

Additionally, it is possible to make a variable mandatory by adding ! to its data type:

query GetAccHolder($id: String!){
  accholder: user(id: $id){
    fullname: name
  }
}

6. Directives

We can dynamically generate a query structure by using directives. They help us dynamically change the structure and form of our queries using variables. @include and @skip are two directives available in GraphQL.

Examples of directives:

  • @include(if: Boolean) — include the field if the value of the boolean variable = true
  • @skip(if: Boolean) — skip field if boolean variable value = true
query GetFollowers($id: String, $getFollowers: Boolean!){
  user(id: $id){
    fullname: name,
    followers @include(if: $getFollowers){
      name
      userHandle
      tweets
    }
  }
}

{
  "id": "1",
  "getFollowers": false
}

Since $getFollowers equals false, the followers field will be skipped, i.e. excluded from the response.

GraphQL Schema

In order to work with GraphQL on the server, you need to deploy a GraphQL Schema, which describes the logic of the GraphQL API, types, and data structure. A schema consists of two interrelated objects: typeDefs and resolvers.

In order for the server to work with GraphQL types, they must be defined. The typeDefs object defines the list of available types; its code looks as below:

const typeDefs = gql`
  type User {
    id: Int
    fname: String
    age: Int
    likes: Int
    posts: [Post]
  }
  type Post {
    id: Int
    user: User
    body: String
  }
  type Query {
    users(id: Int!): User!
    posts(id: Int!): Post!
  }
  type Mutation {
    incrementLike(fname: String!): [User!]
  }
  type Subscription {
    listenLikes: [User]
  }
`;

The above code defines a type User, which specifies fname, age, likes, as well as other data. Each field declares a data type, such as String or Int; an exclamation point next to it means that the field is required (non-nullable). GraphQL ships with five built-in scalar types:

  1. String
  2. Int
  3. Float
  4. Boolean
  5. ID

The above example also defines all three operation types – Query, Mutation, and Subscription.

  • The Query type contains a field called users. It takes a required id and returns an object with the user’s data. There is another Query field called posts, which is designed the same way as users.
  • The Mutation field is called incrementLike. It takes a fname parameter and returns a list of users.
  • The Subscription field is called listenLikes. It returns a list of users.

After defining the types, you need to implement their logic so that the server knows how to respond to requests from a client. We use resolvers to address that. A resolver is a function that returns the data for a specific field of a type defined in the schema. Resolvers can be asynchronous, and you can use them to retrieve data from a REST API, a database, or any other source.

So, let’s define resolvers:

const resolvers = {
    Query: {
        users(root, args) { return users.filter(user => user.id === args.id)[0] },
        posts(root, args) { return posts.filter(post => post.id === args.id)[0] }
    },
    User: {
        posts: (user) => {
            return posts.filter(post => post.userId === user.id)
        }
    },
    Post: {
        user: (post) => {
            return users.filter(user => user.id === post.userId)[0]
        }
    },
    Mutation: {
        incrementLike(parent, args) {
            users.map((user) => {
                if (user.fname === args.fname) user.likes++
                return user
            })
            pubsub.publish('LIKES', { listenLikes: users });
            return users
        }
    },
    Subscription: {
        listenLikes: {
            subscribe: () => pubsub.asyncIterator(['LIKES'])
        }
    }
};

The above example features six functions:

  1. The users request returns a user object having the passed id.
  2. The posts request returns a post object having the passed id.
  3. In the User type’s posts field, the resolver accepts the user’s data and returns a list of their posts.
  4. In the Post type’s user field, the function accepts the post data and returns the user who published it.
  5. The incrementLike mutation changes the users object: it increases the number of likes for the user with the corresponding fname. After this, users get published in pubsub with the name LIKES.
  6. listenLikes subscription listens to LIKES and responds when pubsub is updated.

A couple of words about pubsub. This tool is a real-time information transfer mechanism that works over WebSockets. pubsub is convenient to use, since everything related to WebSockets is hidden behind separate abstractions.
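A minimal sketch of how such a pubsub instance is typically wired up, assuming the graphql-subscriptions package (which provides the publish and asyncIterator methods used above):

import { PubSub } from 'graphql-subscriptions'

// a single shared instance used by both the Mutation and Subscription resolvers above
const pubsub = new PubSub()

// example payload shape, mirroring the listenLikes subscription
const users = [{ fname: 'Matt', likes: 245 }]

// publish() pushes a payload to every active 'LIKES' subscription
pubsub.publish('LIKES', { listenLikes: users })

// asyncIterator() is what the Subscription resolver returns to the GraphQL server
const iterator = pubsub.asyncIterator(['LIKES'])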

Why GraphQL is conceptually successful

  • Flexibility. GraphQL does not impose restrictions on query types, making it useful for both typical CRUD operations (create, read, update, delete) and queries with multiple data types.
  • Schema definition. GraphQL automatically creates a schema for the API, and the hierarchical code organization with object relationships reduces the complexity.
  • Query optimization. GraphQL allows clients to request the exact information they need. This reduces server response time and the volume of data to be transferred over the network.
  • Context. GraphQL takes care of the requests and responses implementation so that developers can focus on business logic. Strong Typing helps prevent errors before executing a request.
  • Extensibility. GraphQL allows extending the API schema and adding new data types along with reusing existing code and data sources to avoid code redundancy.

Sources and references

GitHub Action with XM Cloud

The approach I am going to show you would work for any CI/CD pipeline with XM Cloud with some level of customization; however, I will be showing it using GitHub Actions as an example.

Why?

One would ask: if my codebase is located on GitHub, why on earth would I need to leverage GitHub Actions if the XM Cloud Deploy app already provides build & deploy pipelines for GitHub? It is a valid question, so let’s answer it:

  • The XM Cloud Deploy app is a black box where you have no control beyond what its developers allow you to specify within the xmcloud.build.json configuration file.
  • GitHub Actions, in contrast, give you much more precise control over all aspects of the process
  • They rely on ready-to-use and well-tested open-source reusable actions that you simply pick and use
  • Thanks to the above, it is quick and low-code compared to other CI/CD approaches, but at the same time it is highly customizable and may suit any enterprise-level needs (consider GitHub Enterprise in such a case).
  • Actions use any OS-based runners that execute at GitHub, while the Deploy app utilizes shared XM Cloud infrastructure
  • Seamless integration into the GitHub account allows keeping all the eggs in the same basket.

With that in mind, let’s take a look at how easily we can set up multisite multi-environment XM Cloud CI/CD workflows.

Preparing XM Cloud

Let’s start with the creation of an XM Cloud project and two environments – Staging and Production.

Of course, you can do the above manually using the XM Cloud Deploy app; however, I automated it with reusable PowerShell code. In order to manipulate XM Cloud from scripts, I first need to obtain an automation ClientID and ClientSecret pair. This pair is required internally by the Login.ps1 script but will be used all the way through this exercise, so save it carefully.
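
For reference, here is a minimal sketch of what such a Login.ps1-style helper could look like. It is an illustration only and assumes the pair is supplied via XM_CLOUD_CLIENT_ID and XM_CLOUD_CLIENT_SECRET environment variables rather than hard-coded:

# Illustrative sketch only: reads the automation pair from environment variables
$clientId = $env:XM_CLOUD_CLIENT_ID
$clientSecret = $env:XM_CLOUD_CLIENT_SECRET

if (-not $clientId -or -not $clientSecret) {
    throw "Set XM_CLOUD_CLIENT_ID and XM_CLOUD_CLIENT_SECRET before running this script."
}

# Restore the Sitecore CLI local tool and authenticate non-interactively against XM Cloud
dotnet tool restore
dotnet sitecore cloud login --client-credentials --client-id $clientId --client-secret $clientSecret --allow-write

With the login handled, the project and environment creation script follows: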

# Script to create a project and environments with the provided names
$projectName = "JumpStart"
$environmentStaging = "Staging"
$environmentProd = "Production"
& "$PSScriptRoot/../Security/Login.ps1"

function Create-Project {
    param([string]$projectName)
    $projectList = dotnet sitecore cloud project list --json | ConvertFrom-Json
    $project = $projectList | Where-Object{$_.name -eq $projectName}
    
    if(-not $project){
        Write-Warning "Project '$projectName' not found. Creating new project..."
        $output = dotnet sitecore cloud project create --name $projectName --json
        if($output -eq "Organization tier does not allow more projects"){
            return $null;
        }
        else{
            $projectList = dotnet sitecore cloud project list --json | ConvertFrom-Json
            $project = $projectList | Where-Object{$_.name -eq $projectName}
            return $project.id
        }
    }
    else{
        Write-Warning "Project $projectName already exists. Skipping create."
        return $project.id
    }
}

function Create-Environment {
    param(
        [string]$environmentName,
        [string]$projectId,
        [bool]$isProd = $false,
        [array]$environmentList
    )

    # Checking if environment exists.
    $environment = $environmentList | Where-Object{$_.name -eq $environmentName}
    
    if(-not $environment){
        Write-Warning "Environment '$environmentName' not found. Creating new environment..."
        $output = dotnet sitecore cloud environment create --name $environmentName --project-id $projectId --prod $isProd --json | ConvertFrom-Json
        if($output.Status -eq "Operation failed"){
            $output.Message
            return $null
        }
        else{
            return $output.id
        }
    }
    else{
        $environmentId = $environment.id
        "Environment $environmentName already exists"
        return $environmentId
    }
}
$projectId = Create-Project -projectName $projectName
$environmentList = dotnet sitecore cloud environment list --project-id $projectId --json | ConvertFrom-Json
$stagingId = Create-Environment -environmentName $environmentStaging -projectId $projectId -environmentList $environmentList
$prodId = Create-Environment -environmentName $environmentProd -projectId $projectId -isProd $true -environmentList $environmentList

Upon completion, it will return the environment IDs for both created environments; you can also get this information after refreshing the Deploy app page:

XM Cloud Projects And Environments

Additionally, I’d like to enable SPE and the Authoring and Management GraphQL API before the deployment takes place, so that I don’t have to redeploy later:

dotnet sitecore cloud environment variable upsert -n SITECORE_SPE_ELEVATION -val Allow -id $stagingId
dotnet sitecore cloud environment variable upsert -n Sitecore_GraphQL_ExposePlayground -val true -id $stagingId
dotnet sitecore cloud environment variable upsert -n SITECORE_SPE_ELEVATION -val Allow -id $prodId
dotnet sitecore cloud environment variable upsert -n Sitecore_GraphQL_ExposePlayground -val true -id $prodId

So far so good. Let’s deploy now.

XM Cloud Provisioning

Here is the entire code of the GitHub Actions workflow I will be using for provisioning XM Cloud:

name: Build & Deploy - XM Cloud Environments
on:
  workflow_dispatch:
  push:
    branches: [ JumpStart ]
    paths:
    - .github/workflows/CI-CD_XM_Cloud.yml
    - .github/workflows/deploy_xmCloud.yml
    - .github/workflows/build_DotNet.yml
    - 'xmcloud.build.json'
    - 'src/platform/**'
    - 'src/items/**'
  pull_request:
    branches: [ JumpStart ]
    paths:
    - .github/workflows/CI-CD_XM_Cloud.yml
    - .github/workflows/deploy_xmCloud.yml
    - .github/workflows/build_DotNet.yml
    - 'xmcloud.build.json'
    - 'src/platform/**'
    - 'src/items/**'
jobs:
  build-dotnet:
    uses: ./.github/workflows/build_DotNet.yml
    with:
      buildConfiguration: Release

  deploy-staging:
    uses: ./.github/workflows/deploy_xmCloud.yml
    needs: build-dotnet
    with:
      environmentName: Staging
    secrets:
      XM_CLOUD_CLIENT_ID: ${{ secrets.XM_CLOUD_CLIENT_ID }}
      XM_CLOUD_CLIENT_SECRET: ${{ secrets.XM_CLOUD_CLIENT_SECRET }}
      XM_CLOUD_ENVIRONMENT_ID: ${{ secrets.STAGING_XM_CLOUD_ENVIRONMENT_ID }}

  deploy-prod:
    if: github.ref == 'refs/heads/JumpStart'
    needs: build-dotnet
    uses: ./.github/workflows/deploy_xmCloud.yml
    with:
      environmentName: Production
    secrets:
      XM_CLOUD_CLIENT_ID: ${{ secrets.XM_CLOUD_CLIENT_ID }}
      XM_CLOUD_CLIENT_SECRET: ${{ secrets.XM_CLOUD_CLIENT_SECRET }}
      XM_CLOUD_ENVIRONMENT_ID: ${{ secrets.PRODUCTION_XM_CLOUD_ENVIRONMENT_ID }}

Please pay attention to the following parts of it:

  • on: push, pull_request, and workflow_dispatch – define the events that trigger the workflow. The last one means a manual trigger from the GitHub UI; I will use it below.
  • branches specify which branches the push or pull_request triggers apply to.
  • paths list the filesystem locations whose changes will trigger the workflow on a runner.
  • jobs: specify what we’re going to perform, in which sequence, and the dependencies between these jobs.
  • each of these jobs executes a sequence of steps, referenced from another file by the uses parameter.
  • needs specifies the dependency on a previous job that must complete successfully before this one executes.
  • if clauses define conditions for the job to run; if they are not met, the job receives the ‘Skipped’ status along with all its dependent jobs.
  • secrets are taken from the stored GitHub Actions secrets and passed down to the jobs.

Secrets

For each of the jobs I need to provide three parameters from the secrets (you can register them through the repository settings UI or with the GitHub CLI, as sketched below):

  • XM_CLOUD_CLIENT_ID and XM_CLOUD_CLIENT_SECRET – the automation ClientID and ClientSecret pair, the same one we obtained at the beginning of this article.
  • STAGING_XM_CLOUD_ENVIRONMENT_ID or PRODUCTION_XM_CLOUD_ENVIRONMENT_ID – the IDs we obtained upon environment creation. You may always look them up in the Deploy app.
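
Below is a quick sketch of the CLI route – illustrative only, assuming the gh CLI is installed and authenticated for your repository, and that $clientId, $clientSecret, $stagingId, and $prodId still hold the values from the provisioning script above:

# Illustrative only: store the XM Cloud credentials and environment IDs as GitHub Actions secrets
gh secret set XM_CLOUD_CLIENT_ID --body $clientId
gh secret set XM_CLOUD_CLIENT_SECRET --body $clientSecret
gh secret set STAGING_XM_CLOUD_ENVIRONMENT_ID --body $stagingId
gh secret set PRODUCTION_XM_CLOUD_ENVIRONMENT_ID --body $prodId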

So, we have three jobs built from two reusable sequences of steps:

  • Build the DotNet solution
  • Deploy the solution and items to an XM Cloud instance

Build the DotNet solution workflow:

name: Build the DotNet Solution

on:
  workflow_call:
    inputs:
      buildConfiguration:
        required: true
        type: string

jobs:
  build-dotnet:
    name: Build the .NET Solution
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup MSBuild path
        uses: microsoft/setup-msbuild@v1.1
      - name: Setup NuGet
        uses: NuGet/setup-nuget@v1.0.6
      - name: Restore NuGet packages
        run: nuget restore JumpStart.sln
      - name: Build
        run: msbuild JumpStart.sln /p:Configuration=${{ inputs.buildConfiguration }}

The top part of the file, within the on section, receives the parameters from a calling workflow. Within jobs, we specify the steps to take. The important clause here is uses – it executes an action from the repository of published actions.

The codebase of these actions is open, so you may take a look for a better understanding of what they do and how the parameters are used; for example, we’re passing the buildConfiguration parameter down in order to define whether we build Debug or Release.

Now let’s take a look at a more advanced workflow Deploy the solution and items to an XM Cloud instance:

name: Deploy the solution and items to an XM Cloud instance

on:
  workflow_call:
    inputs:
      environmentName:
        required: true
        type: string
    secrets:
      XM_CLOUD_CLIENT_ID:
        required: true
      XM_CLOUD_CLIENT_SECRET:
        required: true
      XM_CLOUD_ENVIRONMENT_ID:
        required: true

jobs:

  deploy:
    name: Deploy the XM Cloud ${{ inputs.environmentName }} Site
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-dotnet@v2
      with:
        dotnet-version: '6.0.x'
    - run: dotnet tool restore
    - run: dotnet sitecore --help
    - name: Authenticate CLI with XM Cloud
      run: dotnet sitecore cloud login --client-credentials --client-id ${{ secrets.XM_CLOUD_CLIENT_ID }} --client-secret ${{ secrets.XM_CLOUD_CLIENT_SECRET }} --allow-write
    - name: Deploy the CM assets to XM Cloud
      run: |
        result=$(dotnet sitecore cloud deployment create --environment-id ${{ secrets.XM_CLOUD_ENVIRONMENT_ID }} --upload --json)
        echo $result
        isTimedOut=$(echo $result | jq ' .IsTimedOut')
        isCompleted=$(echo $result | jq ' .IsCompleted')
        if [ "$isTimedOut" = "true" ]
        then
            echo "Operation Timed Out."
            exit -1
        fi
        if ! [ "$isCompleted" = "true" ]
        then
            echo "Operation Failed."
            exit -1
        fi
        echo "Deployment Completed"

Please pay attention to actions/setup-dotnet@v2 – it relies on this published action’s codebase, and you pass the desired dotnet version as a parameter to it: with: dotnet-version: '6.0.x'.

You can also execute commands within the context of the isolated VM where the steps run by using the run clause, such as run: dotnet tool restore.

What is notable here is that the Sitecore CLI is written in .NET Core, which means it is truly cross-platform and can run on Mac and Linux. Therefore we may employ a lighter-weight runner for it with the runs-on: ubuntu-latest clause, instead of using a Windows-based runner.

We may pass secrets right into the executed command and capture the execution result into a variable for further processing:

result=$(dotnet sitecore cloud deployment create --environment-id ${{ secrets.XM_CLOUD_ENVIRONMENT_ID }} --upload --json)

Note that we actually must do the above in order to act on the outcome of the command rather than on a binary flag showing whether it executed at all. As long as the CLI command runs in principle, it returns exit code 0, so we need to parse its JSON output and return an appropriate exit code based on it.

TIP: Actions and workflows belong to the specific git branch they are committed to. However, they won’t show up in the GitHub Actions UI until you bring them to the main branch. Once they reach main, they become visible from the UI and you can manually trigger the workflows against any desired branch.

I have already taken care of the above, so now I can execute the workflow, this time manually:

Run Workflow

.. and the result:

Provision Xmc Workflow

After the execution completes, we can optionally test whether the environments are up and running. They are, and in my case each environment features three websites. These websites were provisioned from the serialization I had previously done; however, they only exist in these CM environments and have not yet been published.

Sites To Publish

You can do that by clicking the Publish all sites button; however, I prefer using the command line:

# need to connect to the environment first
dotnet sitecore cloud environment connect --environment-id STGqNKHBXMEENSWZIVEbQ
dotnet sitecore publish --pt Edge -n Staging
dotnet sitecore cloud environment connect --environment-id PRDukrgzukQPp0CVOOKFhM
dotnet sitecore publish --pt Edge -n Production

After publishing is complete, we can optionally verify it using the GraphQL IDE and generate an Edge token to be used as the Sitecore API key. Both can be done by running the New-EdgeToken.ps1 script, which will generate and output a token and then launch the GraphQL IDE to test it.
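
If you’d rather smoke-test the token from a script instead of the GraphQL IDE, a quick sketch like the one below can help. It is illustrative only: $edgeToken is assumed to hold the token produced by New-EdgeToken.ps1, and the sample query and sc_apikey header may need adjusting to your setup:

# Illustrative only: query Experience Edge with the generated token
$headers = @{ sc_apikey = $edgeToken }
$body = @{ query = '{ item(path: "/sitecore/content", language: "en") { name } }' } | ConvertTo-Json
Invoke-RestMethod -Uri "https://edge.sitecorecloud.io/api/graphql/v1" -Method Post -ContentType "application/json" -Headers $headers -Body $body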

Configuring Vercel

For the sake of an experiment, I am using my personal “hobby”-tier Vercel account. Of course, you don’t have to use Vercel and can consider other options, such as Netlify, Azure Static Web Apps, or AWS Amplify. I am going to talk about configuring those in later posts, but today will focus on Vercel.

Let’s navigate to Account Settings. There we need to obtain two parameters:

  • Vercel ID from the General tab
  • A token that allows external apps to control the Vercel account, under the Tokens tab

Account Settings Tokens

I am going to create two projects in it, named staging-jumpstart and production-jumpstart, which will deploy under the staging-jumpstart.vercel.app and production-jumpstart.vercel.app hostnames correspondingly. To do so, I first need to provide the relevant source code repository; in my case that is obviously GitHub. Other than that, it requires choosing the implemented framework (Next.js) and providing the path to the source folder of the Next.js app, which it nicely auto-recognizes and highlights with the Next.js icon. Finally, we need to provide at least three environment variables:

  • JSS_APP_NAME – in my case it is jumpstart.
  • GRAPH_QL_ENDPOINT which is a known value https://edge.sitecorecloud.io/api/graphql/v1
  • SITECORE_API_KEY which we obtained at a previous step from running New-EdgeToken.ps1 script.

Vercel Setup Project

Clicking Deploy after submitting the above will deploy the website, and it will immediately be accessible by its hostname, correctly pulling the layout data from Experience Edge because:

  • we already published all the sites for each environment to Experience Edge, so it is available from there
  • we instructed the site on how to pull the data from Edge with a combination of JSS_APP_NAME, GRAPH_QL_ENDPOINT, and SITECORE_API_KEY.

Vercel Projects

At this stage we can celebrate yet another milestone and grab the Project ID parameter from each of these deployed sites – staging-jumpstart and production-jumpstart in my case:

Result Production Deploy Vercel After Creation And Passing Tokens

Build and Deploy Next.js app

Finally, we have got enough to configure another workflow for building and deploying the Next.js application. The syntax is the same as for the XM Cloud workflow.

We need to provide the workflow with the following parameters, and we have them all:

  • VERCEL_ORG_ID to specify which Vercel account it applies to
  • VERCEL_TOKEN so that it becomes able to control a given Vercel account
  • VERCEL_JUMPSTART_STAGING_ID and VERCEL_JUMPSTART_PRODUCTION_ID – a project ID to deploy

Homework: you can go ahead and parametrize this workflow for even better re-usability by passing the site name as a parameter from the caller workflow, as sketched below.
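
As a hint for that homework: a reusable workflow can declare the site name as a workflow_call input, and the caller passes the concrete value down. The snippet below is only a rough sketch – the siteName input is illustrative and not part of the actual workflows in this post:

# In the reusable workflow (e.g. build_NextJs.yml), declare the inputs and reference them as ${{ inputs.siteName }}
on:
  workflow_call:
    inputs:
      siteName:
        required: true
        type: string
      workingDirectory:
        required: true
        type: string

# ...and in the caller workflow, pass the concrete values:
jobs:
  build-jumpstart-site:
    uses: ./.github/workflows/build_NextJs.yml
    with:
      siteName: jumpstart
      workingDirectory: ./src/jumpstart

With that aside, here is the caller workflow I am using: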

name: Build & Deploy - JumpStart Site

on:
  workflow_dispatch:
  push:
    branches: [ JumpStart ]
    paths:
      - .github/workflows/CI-CD_JumpStart.yml
      - .github/workflows/build_NextJs.yml
      - .github/workflows/deploy_vercel.yml
      - 'src/jumpstart/**'
  pull_request:
    branches: [ JumpStart ]
    paths:
      - .github/workflows/CI-CD_JumpStart.yml
      - .github/workflows/build_NextJs.yml
      - .github/workflows/deploy_vercel.yml
      - 'src/jumpstart/**'

jobs:
  build-jumpstart-site:
    # if: github.ref != 'refs/heads/JumpStart'
    uses: ./.github/workflows/build_NextJs.yml
    with:
      workingDirectory: ./src/jumpstart

  deploy-jumpstart-staging:
    uses: ./.github/workflows/deploy_vercel.yml
    needs: build-jumpstart-site
    if: always() &&
      github.repository_owner == 'PCPerficient' && needs.build-jumpstart-site.result != 'failure' && needs.build-jumpstart-site.result != 'cancelled'
    secrets:
      VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
      VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
      VERCEL_PROJECT_ID: ${{ secrets.VERCEL_JUMPSTART_STAGING_ID }}

  deploy-jumpstart-production:
    uses: ./.github/workflows/deploy_vercel.yml
    needs: build-jumpstart-site
    if: always() &&
      github.repository_owner == 'PCPerficient' && needs.build-jumpstart-site.result != 'failure' && needs.build-jumpstart-site.result != 'cancelled'
    secrets:
      VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
      VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
      VERCEL_PROJECT_ID: ${{ secrets.VERCEL_JUMPSTART_PRODUCTION_ID }}

There are three jobs here, with the last two running in parallel:

  • build-jumpstart-site
  • deploy-jumpstart-staging
  • deploy-jumpstart-production

Build job:

name: Build a Next.js Application

on:
  workflow_call:
    inputs:
      workingDirectory:
        required: true
        type: string

jobs:
  build:
    name: Build the NextJs Application
    runs-on: ubuntu-latest
    env:
      FETCH_WITH: GraphQL
      GRAPH_QL_ENDPOINT: https://www.google.com
      DISABLE_SSG_FETCH: true
    defaults:
      run:
        working-directory: ${{ inputs.workingDirectory }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18.12.1
      - run: npm install
      - run: npm run build
      - run: npm run lint

Deploy job:

name: Deploy asset to Vercel

on:
  workflow_call:
    secrets:
      VERCEL_TOKEN:
        required: true
      VERCEL_ORG_ID:
        required: true
      VERCEL_PROJECT_ID:
        required: true

jobs:
  deploy:
    name: Deploy the rendering host to Vercel
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: lts/*
      - uses: amondnet/vercel-action@v20
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          vercel-args: ${{ fromJSON('["--prod", ""]')[github.ref != 'refs/heads/JumpStart'] }}
          vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
          vercel-project-id: ${{ secrets.VERCEL_PROJECT_ID }}
          scope: ${{ secrets.VERCEL_ORG_ID }}
          working-directory: ./

At this stage, you have learned everything you need to implement the above approach for your own XM Cloud solution.

Visual Studio Code Extension

The good news is that GitHub Actions has an extension for VS Code which allows you to manage workflows and runs:

  • Manage your workflows and runs without leaving your editor.
  • Keep track of your CI builds and deployments.
  • Investigate failures and view logs.
  1. Install the extension from the Marketplace.
  2. Sign in with your GitHub account and when prompted allow GitHub Actions access to your GitHub account.
  3. Open a GitHub repository.
  4. You will be able to utilize the syntax features in workflow files, and you can find the GitHub Actions icon in the left navigation to manage your workflows.

Hope you enjoyed the simplicity of GitHub Actions and will consider implementing your solutions with it!