Experience Sitecore !

More than 200 articles about the best DXP by Martin Miles

The correct way of creating own components with XM Cloud

I occasionally see folks creating components in XM Cloud incorrectly, so I decided to create and share this guidance with you.

So, there are five steps involved in creating your own component:

  1. Create an SXA module that will serve as a pluggable container for all the assets your components need, if you haven't done so yet.
  2. The easiest way to create a component is to clone an existing one that mostly matches your needs. If you need to rely on datasource items, clone one that already leverages a datasource; the SPE scaffolding script will do the rest of the job for you. Make sure you assign the newly created component to the module from Step 1 above.
  3. Now that you have a module with component(s), you need to make it visible to your website by adding the module to the chosen site. This makes a corresponding section with the newly created component(s) appear in the toolbox, ready to use in both Experience Editor and Pages.
  4. You need to ensure the Component Name field references the name of the corresponding TSX codebase file, i.e. /src/<jss_app>/src/components/<your component>.tsx, or one level further down within a folder named after the component. Since the component is fully cloned from an existing one, you can also copy the original TSX files under a new name and it will work straight away (see the sketch after this list).
  5. Don’t forget to add all the new locations to the serialization, and check it into source control along with its codebase. Here are the locations to keep in mind:
    • /sitecore/layout/Placeholder Settings/Feature/Tailwind Components
    • /sitecore/templates/Feature/Tailwind Components
    • /sitecore/layout/Layouts/Feature/Tailwind Components
    • /sitecore/layout/Renderings/Feature/Tailwind Components
    • /sitecore/media library/Feature/Tailwind Components
    • /sitecore/templates/Branches/Feature/Tailwind Components
    • /sitecore/system/Settings/Feature/Tailwind Components
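
Regarding Step 4, here is a minimal sketch of what such a component's TSX file might look like. This is illustrative only – the component name, field name, and props shape are hypothetical, not taken from the actual scaffolded files:

// src/<jss_app>/src/components/MyPromo.tsx – hypothetical name; it must match
// the value of the rendering's Component Name field
import { Text, Field, withDatasourceCheck } from '@sitecore-jss/sitecore-jss-nextjs';

type MyPromoProps = {
  fields: {
    Title: Field<string>; // assumes a Title field on the cloned datasource template
  };
};

const MyPromo = (props: MyPromoProps): JSX.Element => (
  <div className="my-promo">
    <Text field={props.fields.Title} />
  </div>
);

// withDatasourceCheck renders a hint instead of crashing when no datasource is assigned
export default withDatasourceCheck()<MyPromoProps>(MyPromo);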

To make things simple, I recorded a walkthrough (and actually "talkthrough") showing the entire process:

Hope you find this video helpful!

A crash course of GraphQL – from Zero to Hero

Almost anyone attending the XM Cloud sessions at SUGCON North America earlier saw GraphQL queries as part of the presentations. For mature headless developers, getting through each query takes some time, while for newbies that "user-friendly" syntax stands as an unreachable barrier. In the past I failed to find any good article about this without-a-doubt great query language – some were way too excessive, while others missed a lot of the basics. I decided to fill this gap by writing an article that provides exactly that: the minimum required information for the maximum productive start. I wrote it exactly the way I wish it had been written for me earlier.

What is GraphQL and why do I need it?

GraphQL is a query language and backend framework for APIs, developed at Facebook in 2012 and designed to make it easier to manage endpoints for REST-based APIs. In 2015, GraphQL was made open source, and Airbnb, GitHub, Pinterest, Shopify, and many other companies now use it.

When Facebook developers created a mobile application, they looked for ways to speed things up. There was a difficulty: when simultaneously querying different types of databases – for example, cloud Redis and MySQL – the application slowed down terribly. To solve the problem, Facebook came up with its own query language that addresses a single endpoint and simplifies the shape of the requested data. This was especially valuable for a social network with lots of connections and requests for related elements: say, getting posts from all subscribers of user X.

REST is a good and functional technology, but it has some problems:

  • Firstly, there’s redundancy or lack of data in the response. In REST APIs, clients often receive either too much data that they don’t need, or too little, forcing them to make multiple requests to get the information they need. GraphQL allows clients to request only the data they need and receive it in a single request, making communication more efficient.
  • Also, in a REST API each endpoint usually corresponds to a specific resource, which can lead to extensibility problems and the need to support multiple API versions. GraphQL, however, features a single endpoint for all requests, and the API schema is defined server-side. This makes the API more flexible and easier to evolve.
  • When working with related data, many REST APIs hit the N+1 requests problem: obtaining related data forces additional request roundtrips to the server. GraphQL allows you to define relationships between the requested data and retrieve everything required in a single query.

Coming back to the above use case – a social network has many users, and for each user we need to get a list of their latest posts. To obtain such data from a typical REST API, one has to make several requests: one to the users endpoint to get the list of users, followed by requests to the posts endpoint to get the posts for the required users derived from the previous response (in the worst case, one request per user). GraphQL solves this problem more efficiently: you can request a list of users and at the same time specify exactly what you want alongside the user details – in our case, the latest posts for each user.

Take a look at an example of a GraphQL query implementing exactly that – requesting users with their 5 most recent posts:

query {
  users {
    id
    name
    posts(last:5){
      id
      text
      timestamp
    }
  }
}

What makes this work? The way GraphQL structures data.

But why does it have Graph in its name at all? Because it represents data in the form of a graph, where the nodes are objects and the edges are connections between those objects. This reflects the way data and queries are organized in GraphQL, where clients can query related data, and only the data they need.

A graph shows the relationships of, say, a social network:

Graph

How do we access a graph via GraphQL? GraphQL starts at a specific record, called the root node, and fetches the requested details of that record. We can take, for example, user 1 and get their subscribers' data. Let's write a GraphQL query snippet to show how to access it:

query {
    user(id:"1"){
        followers {
            tweets {
                content
            }
        }
    }
}

Here we are asking GraphQL to navigate the graph from the root node – the user object with argument id: 1 – and access the content of the followers' tweets.

Graph Query

So far, so good. Let’s discuss the query types in GraphQL in more detail.

GraphQL Request Types

There are three main request types in GraphQL:

  • Query
  • Mutation
  • Subscription

Sitecore uses only the first two and does not support subscriptions, but to keep this guide complete I will still mention how they work.

Queries in GraphQL

We have already become familiar with them from our earlier examples.

Using a query, GraphQL retrieves the necessary data from the server. This request type is the analog of GET in REST. Queries are string values sent in the body of an HTTP POST request. Note that all GraphQL request types are sent via POST, the de facto most common option for HTTP data exchange, although GraphQL can also work over WebSockets, gRPC, and other transport protocols.
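
To make the transport concrete, here is a minimal TypeScript sketch of posting a query over HTTP; the endpoint URL is a placeholder, not a real API:

// A GraphQL operation is just a string posted as JSON to a single endpoint
const query = `
  query {
    users {
      id
      name
    }
  }
`;

const response = await fetch('https://example.com/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
});

// The server answers with { data, errors }, as shown in the examples below
const { data, errors } = await response.json();
console.log(data, errors);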

We have already seen Query examples above, but let’s do it again to get the fname and age of all users:

query {
  users {
    fname
    age
  }
}

The server sends response data in JSON format so that the response structure matches the request structure:

{
    "data": {
        "users": [
            {
                "fname": "Mark",
                "age": 23
            },
            {
                "fname": "David",
                "age": 29
            }
        ]
    }
}

The response contains JSON with a data key, plus an errors key in case there are any errors. Below is an example of a faulty response, where an error occurred because Maria's age was mistakenly passed as a string value:

{
    "errors":[
        {
        "message":"Error: 'age' field has incorrect value 'test'.",
        "locations":[
            {
                "line":5,
                "column":5
            }
        ],
        "path":["users",0,"age"]
        }
    ],
    "data":{
        "users":[
            {
                "fname":"Maria",
                "age":"test"
            },
            {
                "fname":"Megan",
                "age":32
            }
        ]
    }
}

Mutations in GraphQL

Using mutations, you can add or modify data. Mutation is the analogue of POST and PUT in REST. Here's a mutation request example:

mutation createUser{
  addUser(fname:"Martin", age:42){
    id
  }
}

This createUser mutation adds a user with fname Martin and age 42. The server sends a JSON response with the id of the resulting record. The response may look like this:

{
    "data": {
        "addUser": "a12e5d"
    }
}

Subscription in GraphQL

With the help of subscriptions, the client receives database changes in real time. Under the hood, subscriptions use WebSockets. Here’s an example:

subscription listenLikes {
  listenLikes {
    fname
    likes
  }
}

The above query can, for example, return a list of users with their names and the count of likes every time it changes. Extremely helpful!

For example, when a user with fname Matt receives a like, the response would look like:

{
    "data": {
        "listenLikes": {
            "fname": "Matt",
            "likes": 245
        }
    }
}

A similar request can be used to update the likes count in real-time, say for the voting form results.

GraphQL Concepts

Now that we know different query types, let’s figure out how to deal with elements that are used in GraphQL.

Concepts I am going to cover below:

  1. Fields
  2. Arguments
  3. Aliases
  4. Fragments
  5. Variables
  6. Directives

1. Fields

Look at a simple GraphQL query:

{
  user {
    name
  }
}

In this request, you see two fields. The user field returns an object containing another field of type String. The GraphQL server will return the user object with only the user's name. Simple enough, so let's move on.

2. Arguments

In the example below, an argument is passed to indicate which user to refer to:

{
  user(id:"1"){
    name
  }
}

Here in particular we're passing the user's id, but we could also pass a name argument, assuming the API has a backend function to return such a response. We can also use a limit argument indicating how many followers we want returned. The query below returns the name of the user with id = 1 and their first 50 followers:

{
  user(id:"1"){
    name
    followers(limit:50)
  }
}

3. Aliases

GraphQL uses aliases to rename fields within a query response. This is useful when retrieving data from multiple fields that share the same names, ensuring the fields get distinct names in the response. Here's an example of a GraphQL query using aliases:

query {
  products {
    name
    description
  }
  users {
    userName: name
    userDescription: description
  }
}

as well as the response to it:

{
    "data":{
        "products":[
            {
            "name":"Product A",
            "description":"Description A"
            },
            {
            "name":"Product B",
            "description":"Description B"
            }
        ],
        "users":[
            {
            "userName":"User 1",
            "userDescription":"User Description 1"
            },
            {
            "userName":"User 2",
            "userDescription":"User Description 2"
            }
        ]
    }
}

This way we can distinguish the product's name and description from the user's name and description in the response. It reminds me of how we do this in SQL when joining two tables, to distinguish between identically named columns – most often id and name.

4. Fragments

Fragments are often used to break up complex application data requirements into smaller chunks, especially when you need to combine many UI components with different fragments into one initial data fetch.

{
  leftComparison: tweet(id:1){
    ...comparisonFields
  }
  rightComparison: tweet(id:2){
    ...comparisonFields
  }
}

fragment comparisonFields on tweet {
  userName
  userHandle
  date
  body
  repliesCount
  likes
}

What’s going on with this request?

  1. We send two requests to obtain information about two different tweets: one with id equal to 1 and another with id equal to 2.
  2. For each request, we create an alias: leftComparison and rightComparison.
  3. We use the fragment comparisonFields, which contains a set of fields that we want to get for each tweet. Fragments allow us to avoid duplicating code and reuse the same set of fields in multiple places in the request (DRY principle).

It returns the following response:

{
    "data": {
        "leftComparison": {
            "userName": "foo",
            "userHandle": "@foo",
            "date": "2019-05-01",
            "body": "Life is good",
            "repliesCount": 10,
            "likes": 500
        },
        "rightComparison": {
            "userName": "boo",
            "userHandle": "@boo",
            "date": "2018-05-01",
            "body": "This blog is awesome",
            "repliesCount": 15,
            "likes": 700
        }
    }
}

5. Variables

GraphQL variables are a way to pass values into a query dynamically. The example below provides the user id to the request statically:

{
  accholder: user(id:"1"){
    fullname: name
  }
}

Let’s now replace the static value by adding a variable. The above can be rewritten as:

query GetAccHolder($id: String){
  accholder: user(id: $id){
    fullname: name
  }
}
{
  "id": "1"
}

In this example, GetAccHolder is the operation name – naming queries is useful when you have plenty of requests in your application.

Then we declared the variable $id of type String. The rest is exactly the same as the original request, except that instead of a fixed id we pass the variable $id. The actual values of the variables are supplied in a separate block.
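
At the HTTP level, that separate block is literally a second key in the same POST payload. A short sketch of the body being sent (the payload shape is standard GraphQL-over-HTTP; the transport itself is omitted):

// The query and its variables travel as sibling keys of one JSON body
const payload = JSON.stringify({
  query: `
    query GetAccHolder($id: String) {
      accholder: user(id: $id) {
        fullname: name
      }
    }
  `,
  variables: { id: '1' },
});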

We can also specify a default value for a variable:

query GetAccHolder($id: String = "1"){
  accholder: user(id: $id){
    fullname: name
  }
}

Additionally, you can make a variable mandatory by adding ! to its data type:

query GetAccHolder($id: String!){
  accholder: user(id: $id){
    fullname: name
  }
}

6. Directives

We can dynamically shape the structure of a query by using directives, driven by variables. @include and @skip are the two directives available in GraphQL.

Examples of directives:

  • @include(if: Boolean) — include the field if the value of the boolean variable is true
  • @skip(if: Boolean) — skip the field if the value of the boolean variable is true

query GetFollowers($id: String, $getFollowers: Boolean!){
  user(id: $id){
    fullname: name
    followers @include(if: $getFollowers){
      name
      userHandle
      tweets
    }
  }
}

{
  "id": "1",
  "getFollowers": false
}

Since $getFollowers equals false, the followers field will be skipped, i.e. excluded from the response.

GraphQL Schema

In order to work with GraphQL on the server, you need to deploy a GraphQL Schema, which describes the logic of the GraphQL API, types, and data structure. A schema consists of two interrelated objects: typeDefs and resolvers.
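
For orientation, here is a minimal sketch of how the two objects are typically wired together, assuming the apollo-server package (the gql tag and resolver style below match the examples that follow):

// Minimal GraphQL server: schema (typeDefs) plus field logic (resolvers)
import { ApolloServer, gql } from 'apollo-server';

const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) => console.log(`GraphQL server ready at ${url}`));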

In order for the server to work with GraphQL types, they must be defined. The typeDefs object defines the list of available types; its code looks like this:

const typeDefs = gql`
  type User {
    id: Int
    fname: String
    age: Int
    likes: Int
    posts: [Post]
  }
  type Post {
    id: Int
    user: User
    body: String
  }
  type Query {
    users(id: Int!): User!
    posts(id: Int!): Post!
  }
  type Mutation {
    incrementLike(fname: String!): [User!]
  }
  type Subscription {
    listenLikes: [User]
  }
`;

The above code defines a type User, which includes fname, age, likes, and other data. Each field has a data type, such as String or Int; an exclamation point next to it means the field is non-nullable (required). GraphQL ships with five built-in scalar types:

  1. String
  2. Int
  3. Float
  4. Boolean
  5. ID

The above example also defines all three types – Query, Mutation, and Subscription.

  • The Query type contains a field called users: it takes a required id argument and returns an object with the user's data. There is another Query field called posts, designed the same way as users.
  • The Mutation field is called incrementLike. It takes a fname parameter and returns a list of users.
  • The Subscription field is called listenLikes. It returns a list of users.

After defining the types, you need to implement their logic so that the server knows how to respond to client requests. We use resolvers for that. A resolver is a function that returns the data for a specific field of a type defined in the schema. Resolvers can be asynchronous, and you can use them to retrieve data from a REST API, a database, or any other source.

So, let’s define resolvers:

const resolvers = {
    Query: {
        users(root, args) { return users.filter(user => user.id === args.id)[0] },
        posts(root, args) { return posts.filter(post => post.id === args.id)[0] }
    },
    User: {
        posts: (user) => {
            return posts.filter(post => post.userId === user.id)
        }
    },
    Post: {
        user: (post) => {
            return users.filter(user => user.id === post.userId)[0]
        }
    },
    Mutation: {
        incrementLike(parent, args) {
            users.map((user) => {
                if (user.fname === args.fname) user.likes++;
                return user;
            });
            pubsub.publish('LIKES', { listenLikes: users });
            return users;
        }
    },
    Subscription: {
        listenLikes: {
            subscribe: () => pubsub.asyncIterator(['LIKES'])
        }
    }
};

The above example features six functions:

  1. The users query returns the user object matching the passed id.
  2. The posts query returns the post object matching the passed id.
  3. The posts field resolver on the User type accepts a user's data and returns a list of their posts.
  4. The user field resolver on the Post type accepts a post's data and returns the user who published it.
  5. The incrementLike mutation modifies the users object: it increases the number of likes for the user with the matching fname. After this, users gets published to pubsub under the name LIKES.
  6. The listenLikes subscription listens to LIKES and responds when pubsub is updated.

A few words about pubsub. It is a real-time information transfer tool built on WebSockets, and it is convenient to use since everything related to WebSockets is hidden behind separate abstractions.
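
For illustration, here is a hedged sketch of that wiring, assuming the popular graphql-subscriptions package (the 'LIKES' trigger name matches the resolvers above):

import { PubSub } from 'graphql-subscriptions';

const pubsub = new PubSub();

// Somewhere in a mutation resolver: publish an event with a payload...
pubsub.publish('LIKES', { listenLikes: [{ fname: 'Matt', likes: 245 }] });

// ...and in the subscription resolver: return an async iterator over that trigger
const Subscription = {
  listenLikes: {
    subscribe: () => pubsub.asyncIterator(['LIKES']),
  },
};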

Why GraphQL is conceptually successful

  • Flexibility. GraphQL does not impose restrictions on query types, making it useful for both typical CRUD operations (create, read, update, delete) and queries with multiple data types.
  • Schema definition. GraphQL automatically creates a schema for the API, and the hierarchical code organization with object relationships reduces the complexity.
  • Query optimization. GraphQL allows clients to request the exact information they need. This reduces server response time and the volume of data to be transferred over the network.
  • Context. GraphQL takes care of the requests and responses implementation so that developers can focus on business logic. Strong Typing helps prevent errors before executing a request.
  • Extensibility. GraphQL allows extending the API schema and adding new data types along with reusing existing code and data sources to avoid code redundancy.

GitHub Action with XM Cloud

The approach I am going to show would work for any CI/CD pipeline with XM Cloud with some level of customization; however, I will demonstrate it using GitHub Actions.

Why?

One might ask: if my codebase is located at GitHub, why on earth would I need GitHub Actions if the XM Cloud Deploy app already provides build & deploy pipelines for GitHub? It is a valid question, so let's answer it:

  • XM Cloud Deploy app is a black box where you have no control beyond what the xmcloud.build.json configuration file lets you specify.
  • GitHub Actions, by contrast, gives you much more precise control over all aspects of the process.
  • It relies on ready-to-use, well-tested, open-source reusable actions that you simply pick and use.
  • Thanks to the above, it is quick and low-code compared to other CI/CD approaches, yet highly customizable and able to suit enterprise-level needs (consider GitHub Enterprise in that case).
  • Actions execute on GitHub-hosted runners with the OS of your choice, while the Deploy app utilizes shared XM Cloud infrastructure.
  • Seamless integration with your GitHub account keeps all the eggs in one basket.

With that in mind, let’s take a look at how easily we can set up multisite multi-environment XM Cloud CI/CD workflows.

Preparing XM Cloud

Let's start with the creation of an XM Cloud project and two environments – Staging and Production.

Of course, you can do the above manually using the XM Cloud Deploy app; however, I automated it with reusable PowerShell code. In order to manipulate XM Cloud from scripts, I first need to obtain an automation ClientID and ClientSecret pair. This pair is required internally by the Login.ps1 script and will be used throughout this exercise, so save it carefully.

# Script to create a project and environments with the names provided
$projectName = "JumpStart"
$environmentStaging = "Staging"
$environmentProd = "Production"
& "$PSScriptRoot/../Security/Login.ps1"

function Create-Project {
    param([string]$projectName)
    $projectList = dotnet sitecore cloud project list --json | ConvertFrom-Json
    $project = $projectList | Where-Object{$_.name -eq $projectName}
    
    if(-not $project){
        Write-Warning "Project '$projectName' not found. Creating new project..."
        $output = dotnet sitecore cloud project create --name $projectName --json
        if($output -eq "Organization tier does not allow more projects"){
            return $null;
        }
        else{
            $projectList = dotnet sitecore cloud project list --json | ConvertFrom-Json
            $project = $projectList | Where-Object{$_.name -eq $projectName}
            return $project.id
        }
    }
    else{
        Write-Warning "Project $projectName already exists. Skipping create."
        return $project.id
    }
}

function Create-Environment {
    param(
        [string]$environmentName,
        [string]$projectId,
        [bool]$isProd = $false,
        [array]$environmentList
    )

    # Checking if environment exists.
    $environment = $environmentList | Where-Object{$_.name -eq $environmentName}
    
    if(-not $environment){
        Write-Warning "Environment '$environmentName' not found. Creating new environment..."
        $output = dotnet sitecore cloud environment create --name $environmentName --project-id $projectId --prod $isProd --json | ConvertFrom-Json
        if($output.Status -eq "Operation failed"){
            $output.Message
            return $null
        }
        else{
            return $output.id
        }
    }
    else{
        $environmentId = $environment.id
        "Environment $environmentName already exists"
        return $environmentId
    }
}
$projectId = Create-Project -projectName $projectName
$environmentList = dotnet sitecore cloud environment list --project-id $projectId --json | ConvertFrom-Json
$stagingId = Create-Environment -environmentName $environmentStaging -projectId $projectId -environmentList $environmentList
$prodId = Create-Environment -environmentName $environmentProd -projectId $projectId -isProd $true -environmentList $environmentList

Upon completion, it returns the Environment IDs for both created environments; you can also get this information after refreshing the Deploy app page:

XM Cloud Projects And Environments

Additionally, I'd like to enable SPE and the Authoring and Management GraphQL API before the deployment takes place, so that I don't have to redeploy later:

dotnet sitecore cloud environment variable upsert -n SITECORE_SPE_ELEVATION -val Allow -id $stagingId
dotnet sitecore cloud environment variable upsert -n Sitecore_GraphQL_ExposePlayground -val true -id $stagingId
dotnet sitecore cloud environment variable upsert -n SITECORE_SPE_ELEVATION -val Allow -id $prodId
dotnet sitecore cloud environment variable upsert -n Sitecore_GraphQL_ExposePlayground -val true -id $prodId

So far so good. Let’s deploy now.

XM Cloud Provisioning

Here is the entire code of the GitHub Actions workflow I will be using for provisioning XM Cloud:

name: Build & Deploy - XM Cloud Environments
on:
  workflow_dispatch:
  push:
    branches: [ JumpStart ]
    paths:
    - .github/workflows/CI-CD_XM_Cloud.yml
    - .github/workflows/deploy_xmCloud.yml
    - .github/workflows/build_DotNet.yml
    - 'xmcloud.build.json'
    - 'src/platform/**'
    - 'src/items/**'
  pull_request:
    branches: [ JumpStart ]
    paths:
    - .github/workflows/CI-CD_XM_Cloud.yml
    - .github/workflows/deploy_xmCloud.yml
    - .github/workflows/build_DotNet.yml
    - 'xmcloud.build.json'
    - 'src/platform/**'
    - 'src/items/**'
jobs:
  build-dotnet:
    uses: ./.github/workflows/build_DotNet.yml
    with:
      buildConfiguration: Release

  deploy-staging:
    uses: ./.github/workflows/deploy_xmCloud.yml
    needs: build-dotnet
    with:
      environmentName: Staging
    secrets:
      XM_CLOUD_CLIENT_ID: ${{ secrets.XM_CLOUD_CLIENT_ID }}
      XM_CLOUD_CLIENT_SECRET: ${{ secrets.XM_CLOUD_CLIENT_SECRET }}
      XM_CLOUD_ENVIRONMENT_ID: ${{ secrets.STAGING_XM_CLOUD_ENVIRONMENT_ID }}

  deploy-prod:
    if: github.ref == 'refs/heads/JumpStart'
    needs: build-dotnet
    uses: ./.github/workflows/deploy_xmCloud.yml
    with:
      environmentName: Production
    secrets:
      XM_CLOUD_CLIENT_ID: ${{ secrets.XM_CLOUD_CLIENT_ID }}
      XM_CLOUD_CLIENT_SECRET: ${{ secrets.XM_CLOUD_CLIENT_SECRET }}
      XM_CLOUD_ENVIRONMENT_ID: ${{ secrets.PRODUCTION_XM_CLOUD_ENVIRONMENT_ID }}

Please pay attention to the following parts of it:

  • on: push, pull_request, and workflow_dispatch define the events that trigger the workflow. The last one means a manual trigger from the GitHub UI; I will use it below.
  • branches specifies which branches the push or pull request triggers apply to.
  • paths lists the filesystem locations a change must touch for the workflow to run.
  • jobs: specifies what we're going to perform, in which sequence, and the dependencies between these jobs.
  • each of these jobs executes a sequence of steps, referenced from another file by the uses parameter.
  • needs specifies that a previous job must complete successfully before this one executes.
  • if clauses define conditions for a job to run; if they are not met, the job receives a 'Skipped' status along with all its dependent jobs.
  • secrets are taken from stored GitHub Actions secrets and passed down to the jobs.

Secrets

For each of the jobs, I need to provide three parameters from the secrets:

  • XM_CLOUD_CLIENT_ID and XM_CLOUD_CLIENT_SECRET – the automation ClientID and ClientSecret pair, the same one we obtained at the beginning of this article.
  • STAGING_XM_CLOUD_ENVIRONMENT_ID or PRODUCTION_XM_CLOUD_ENVIRONMENT_ID – the IDs we obtained upon environment creation. You can always look them up in the Deploy app.

So, we have three jobs composed from two reusable workflows:

  • Build the DotNet solution
  • Deploy the solution and items to an XM Cloud instance

Build the DotNet solution workflow:

name: Build the DotNet Solution

on:
  workflow_call:
    inputs:
      buildConfiguration:
        required: true
        type: string

jobs:
  build-dotnet:
    name: Build the .NET Solution
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup MSBuild path
        uses: microsoft/setup-msbuild@v1.1
      - name: Setup NuGet
        uses: NuGet/setup-nuget@v1.0.6
      - name: Restore NuGet packages
        run: nuget restore JumpStart.sln
      - name: Build
        run: msbuild JumpStart.sln /p:Configuration=${{ inputs.buildConfiguration }}

The top part of the file, within the on section, receives parameters from the calling workflow. Within jobs, we specify the steps to take. An important clause – uses – executes an action from the repository of published actions.

The codebase is open, so you may take a look for a better understanding of what it does and how the parameters are used; for example, we pass the buildConfiguration parameter down to the action to define whether we need a Debug or a Release build.

Now let’s take a look at a more advanced workflow Deploy the solution and items to an XM Cloud instance:

name: Deploy the solution and items to an XM Cloud instance

on:
  workflow_call:
    inputs:
      environmentName:
        required: true
        type: string
    secrets:
      XM_CLOUD_CLIENT_ID:
        required: true
      XM_CLOUD_CLIENT_SECRET:
        required: true
      XM_CLOUD_ENVIRONMENT_ID:
        required: true

jobs:

  deploy:
    name: Deploy the XM Cloud ${{ inputs.environmentName }} Site
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-dotnet@v2
      with:
        dotnet-version: '6.0.x'
    - run: dotnet tool restore
    - run: dotnet sitecore --help
    - name: Authenticate CLI with XM Cloud
      run: dotnet sitecore cloud login --client-credentials --client-id ${{ secrets.XM_CLOUD_CLIENT_ID }} --client-secret ${{ secrets.XM_CLOUD_CLIENT_SECRET }} --allow-write
    - name: Deploy the CM assets to XM Cloud
      run: |
        result=$(dotnet sitecore cloud deployment create --environment-id ${{ secrets.XM_CLOUD_ENVIRONMENT_ID }} --upload --json)
        echo $result
        isTimedOut=$(echo $result | jq ' .IsTimedOut')
        isCompleted=$(echo $result | jq ' .IsCompleted')
        if [ "$isTimedOut" = true ]
        then
            echo "Operation Timed Out."
            exit -1
        fi
        if ! [ "$isCompleted" = true ]
        then
            echo "Operation Failed."
            exit -1
        fi
        echo "Deployment Completed"

Please pay attention to actions/setup-dotnet@v2 – it relies on this codebase, and you pass the desired dotnet version to it as a parameter: with: dotnet-version: '6.0.x'.

You can also execute commands within the context of the isolated VM where the steps run by using the run clause, e.g. run: dotnet tool restore.

What is notable here is that Sitecore CLI is written in .NET Core, which means it is truly cross-platform and can run on Mac and Linux. Therefore we can employ a lighter-weight runner for it with the runs-on: ubuntu-latest clause instead of a Windows-based one.

We may pass secrets right into the executed command and capture the execution results into a variable to process:

result=$(dotnet sitecore cloud deployment create --environment-id ${{ secrets.XM_CLOUD_ENVIRONMENT_ID }} --upload --json)

Note that we actually must do the above in order to receive the outcome of the command rather than a binary flag showing whether it ran. If a CLI command executes at all, it returns exit code 0, so we need to process its output and set an exit code ourselves based on it.

TIP: Actions and workflows belong to a specific git branch. However, they won't show up in the GitHub Actions UI until you bring them to the main branch. Once they reach main, they become visible in the UI, and you can manually trigger the workflows against any desired branch.

I already took care of the above, so now I can execute the workflow – this time manually:

Run Workflow

.. and the result:

Provision Xmc Workflow

After the execution completes, we can optionally check that the environments are up and running. They are, and in my case each environment features three websites. These websites were provisioned from the serialization I had previously done; however, they only exist in the CM environments and have not yet been published.

Sites To Publish

You can do that by clicking the Publish all sites button; however, I prefer the command line:

# need to connect to the environment first
dotnet sitecore cloud environment connect --environment-id STGqNKHBXMEENSWZIVEbQ
dotnet sitecore publish --pt Edge -n Staging
dotnet sitecore cloud environment connect --environment-id PRDukrgzukQPp0CVOOKFhM
dotnet sitecore publish --pt Edge -n Production

After publishing is complete, we can optionally verify it using the GraphQL IDE and generate an Edge token to be used as the Sitecore API key. Both can be done by running the New-EdgeToken.ps1 script, which will generate and output a token and then launch the GraphQL IDE to test it.

Configuring Vercel

For the sake of an experiment, I am using my personal “hobby”-tier Vercel account. Of course, you don’t have to use Vercel and can consider other options, such as Netlify, Azure Static Web Apps, or AWS Amplify. I am going to talk about configuring those in later posts, but today will focus on Vercel.

Let’s navigate to Account Settings. There we need to obtain two parameters:

  • Vercel ID from the General tab
  • A token that allows external apps to control the Vercel account, from the Tokens tab

Account Settings Tokens

I am going to create two projects, named staging-jumpstart and production-jumpstart, which will deploy under the staging-jumpstart.vercel.app and production-jumpstart.vercel.app hostnames respectively. To do so, I first need to provide the relevant source code repository – in my case, obviously, GitHub. Beyond that, it requires choosing the implemented framework (Next.js) and providing the path to the source folder of the Next.js app, which it nicely auto-recognizes and highlights with the Next.js icon. Finally, we need to provide at least three environment variables:

  • JSS_APP_NAME – in my case, jumpstart.
  • GRAPH_QL_ENDPOINT – the known value https://edge.sitecorecloud.io/api/graphql/v1.
  • SITECORE_API_KEY – obtained at the previous step from running the New-EdgeToken.ps1 script.

Vercel Setup Project

Clicking Deploy after submitting the above will deploy the website, and it will immediately be accessible by its hostname, correctly pulling layout data from Experience Edge, because:

  • we already published all the sites for each environment to Experience Edge, so it is available from there
  • we instructed the site on how to pull the data from Edge with a combination of JSS_APP_NAME, GRAPH_QL_ENDPOINT, and SITECORE_API_KEY.
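
For context, here is a rough sketch of how a rendering host can use those three values against Experience Edge. The layout query is simplified for illustration, so treat the exact field names as an approximation:

// Experience Edge expects the API key in the sc_apikey header
const endpoint = process.env.GRAPH_QL_ENDPOINT!; // https://edge.sitecorecloud.io/api/graphql/v1

const response = await fetch(endpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    sc_apikey: process.env.SITECORE_API_KEY!, // the Edge token generated earlier
  },
  body: JSON.stringify({
    query: `
      query Layout($site: String!, $path: String!) {
        layout(site: $site, routePath: $path, language: "en") {
          item { rendered }
        }
      }
    `,
    variables: { site: process.env.JSS_APP_NAME, path: '/' },
  }),
});

const { data } = await response.json(); // data.layout.item.rendered holds the layout JSON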

Vercel Projects

At this stage we can celebrate another milestone and grab the Project ID parameter from each of the deployed sites – staging-jumpstart and production-jumpstart in my case:

Result Production Deploy Vercel After Creation And Passing Tokens

Build and Deploy Next.js app

Finally, we have everything needed to configure another workflow for building and deploying the Next.js application. The syntax is the same as for the XM Cloud workflow.

We need to provide the workflow with the following parameters, and we have them all:

  • VERCEL_ORG_ID to specify which Vercel account it applies to
  • VERCEL_TOKEN so that it becomes able to control a given Vercel account
  • VERCEL_JUMPSTART_STAGING_ID and VERCEL_JUMPSTART_PRODUCTION_ID – a project ID to deploy

Homework: you can go ahead and parametrize this script for even better re-usability, passing the site name as a parameter from a caller workflow.

name: Build & Deploy - JumpStart Site

on:
  workflow_dispatch:
  push:
    branches: [ JumpStart ]
    paths:
      - .github/workflows/CI-CD_JumpStart.yml
      - .github/workflows/build_NextJs.yml
      - .github/workflows/deploy_vercel.yml
      - 'src/jumpstart/**'
  pull_request:
    branches: [ JumpStart ]
    paths:
      - .github/workflows/CI-CD_JumpStart.yml
      - .github/workflows/build_NextJs.yml
      - .github/workflows/deploy_vercel.yml
      - 'src/jumpstart/**'

jobs:
  build-jumpstart-site:
    # if: github.ref != 'refs/heads/JumpStart'
    uses: ./.github/workflows/build_NextJs.yml
    with:
      workingDirectory: ./src/jumpstart

  deploy-jumpstart-staging:
    uses: ./.github/workflows/deploy_vercel.yml
    needs: build-jumpstart-site
    if: always() &&
      github.repository_owner == 'PCPerficient' && needs.build-jumpstart-site.result != 'failure' && needs.build-jumpstart-site.result != 'cancelled'
    secrets:
      VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
      VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
      VERCEL_PROJECT_ID: ${{ secrets.VERCEL_JUMPSTART_STAGING_ID }}

  deploy-jumpstart-production:
    uses: ./.github/workflows/deploy_vercel.yml
    needs: build-jumpstart-site
    if: always() &&
      github.repository_owner == 'PCPerficient' && needs.build-jumpstart-site.result != 'failure' && needs.build-jumpstart-site.result != 'cancelled'
    secrets:
      VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
      VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
      VERCEL_PROJECT_ID: ${{ secrets.VERCEL_JUMPSTART_PRODUCTION_ID }}

There are three jobs here, with the last two running in parallel:

  • build-jumpstart-site
  • deploy-jumpstart-staging
  • deploy-jumpstart-production

Build job:

name: Build a Next.js Application

on:
  workflow_call:
    inputs:
      workingDirectory:
        required: true
        type: string

jobs:
  build:
    name: Build the NextJs Application
    runs-on: ubuntu-latest
    env:
      FETCH_WITH: GraphQL
      GRAPH_QL_ENDPOINT: https://www.google.com
      DISABLE_SSG_FETCH: true
    defaults:
      run:
        working-directory: ${{ inputs.workingDirectory }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18.12.1
      - run: npm install
      - run: npm run build
      - run: npm run lint

Deploy job:

name: Deploy asset to Vercel

on:
  workflow_call:
    secrets:
      VERCEL_TOKEN:
        required: true
      VERCEL_ORG_ID:
        required: true
      VERCEL_PROJECT_ID:
        required: true

jobs:
  deploy:
    name: Deploy the rendering host to Vercel
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: lts/*
      - uses: amondnet/vercel-action@v20
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          vercel-args: ${{ fromJSON('["--prod", ""]')[github.ref != 'refs/heads/JumpStart']}}
          vercel-org-id: ${{ secrets.VERCEL_ORG_ID}}
          vercel-project-id: ${{ secrets.VERCEL_PROJECT_ID}}
          scope: ${{ secrets.VERCEL_ORG_ID}}
          working-directory: ./

At this stage, you've learned everything needed to implement the above approach for your own XM Cloud solution.

Visual Studio Code Extension

The good news is that GitHub Actions has an extension for VS Code which lets you manage workflows and runs:

  • Manage your workflows and runs without leaving your editor.
  • Keep track of your CI builds and deployments.
  • Investigate failures and view logs.
  1. Install the extension from the Marketplace.
  2. Sign in with your GitHub account and when prompted allow GitHub Actions access to your GitHub account.
  3. Open a GitHub repository.
  4. You will be able to use the syntax features in workflow files, and you can find the GitHub Actions icon in the left navigation to manage your workflows.

Hope you enjoyed the simplicity of GitHub Actions and will consider implementing your solutions with it!

Keeping your own XM Cloud repository in sync with official XM Cloud starter kit template

XM Cloud is a live, evolving platform – the development team releases new base images almost weekly, and new features come to the product regularly, which gets reflected in the underlying dependencies as well as in public starter kit templates such as the XM Cloud Foundation Head Starter Kit.

At the same time, XM Cloud professionals and enthusiasts – and of course the partners – are building their own starter kits based on the publicly available templates provided by Sitecore. I alone have made more than a couple dozen personal improvements over the base starter kit, which I use almost daily. What's more, here at Perficient I am involved in building our brilliant JumpStart solution, which comes as the essence of the company's collective experience, with our best XM Cloud and headless developers bringing and documenting their expertise as part of the solution. The question is how to stay in sync with the ever-coming changes, and what would be the best strategy for it?

One proposed strategy was using a local copy of Foundation Head with semi-automated syncs via software called WinMerge. Despite finding this approach interesting and worth consideration, it does not fit the goals of Perficient's XM Cloud JumpStart and is more suitable for smaller or personal repos. A fork-based solution seems to be the right path for JumpStart, retaining the ability to pull the latest features from the public template and merge them into our own private starter kit with minimal effort – and, of course, the ability to pull request back into the public repository, since we're acting in the open-source community.

The problem that arises here: Foundation Head is a public template repository on GitHub, and GitHub forking only lets you fork public repositories into other public repos. We need a private repo with the ability to centrally control contributors' access with SSO – GitHub Enterprise offers all of that – but the forking question needs to be resolved first.

The Walkthrough

So here's the walkthrough, simplified: create a new private repo, clone the original repo locally, set up an additional remote so that the new private repo becomes origin, and so on. Below are the steps in detail.

First of all, we need to create a private repository – in my case it will be called JumpStart – as we normally do with GitHub:

Create Repo

Next, let's git clone the public repository, but with the --bare flag:

git clone --bare https://github.com/sitecorelabs/xmcloud-foundation-head.git

A bare repository is a special type of repository that does not have a working directory. It contains only the Git data (branches, tags, commits, etc.) without the actual project files. Bare repositories are typically used for server-side purposes, such as serving as a central repository for collaboration or as a backup/mirror of a repository. The command above creates a new bare repository in the current directory, cloning all branches and tags from the source repository.

cd xmcloud-foundation-head.git
git push --mirror https://github.com/PCPerficient/JumpStart.git

This command pushes all branches and tags from your local repository to a remote repository in a way that mirrors the source. In the context of creating and maintaining a mirror or backup, you would typically use it to push changes from your local bare repository (created with git clone --bare) to another remote repository.

Clone And Mirror

So far so good. After mirroring, let's clone the private repo so that we can work with it as normal:

git clone https://github.com/PCPerficient/JumpStart
cd JumpStart
# do some changes as a part of normal workflow
git commit
git push origin master

Syncing Updates From the Public Repository

Now, the interesting part: pulling the latest changes from the public template repo:

cd JumpStart
git remote add public https://github.com/sitecorelabs/xmcloud-foundation-head.git
git pull public master # this line creates a merge commit
git push origin master

Pulling New From Public Repo

Awesome, your private repo now has the latest code from the public repo plus your changes.

Pull Request Back to the Open-Source

Finally, let's create a pull request from our private repository back to the original public repository. Assume you've done some meaningful work in your private repository that you want to contribute back to the open-source community. By that time you likely have a feature branch from which you would make a pull request into the master/main branch of your private repo.

Unfortunately, you won't be able to do that with the GitHub UI beyond the first step: in the GitHub UI of the public repository, create a fork (using the "Fork" button at the top right). Once done, our account (PCPerficient in this example) will have a public fork of the original repository (xmcloud-foundation-head). Then:

git clone https://github.com/PCPerficient/xmcloud-foundation-head.git
cd xmcloud-foundation-head
git remote add JumpStart https://github.com/PCPerficient/JumpStart.git
git checkout -b the_branch_you_want_to_pull_request
git pull JumpStart master # you need to pull first, prior to making any pushes
git push origin the_branch_you_want_to_pull_request

The original public repository will then receive a pull request from the public fork under your GitHub account, and its maintainers will be able to review and accept it.

Building Traefik Images with ltsc2022 for your Sitecore Deployments

Recently, Sitecore started providing ltsc2022 images for your XM/XP solutions, which I previously covered in a separate article. However, looking at your cluster, you may notice that not all the images are ltsc2022-compatible – there is a 1809-based Traefik image, which comes separately, outside of the Sitecore docker registry.

Now it's a good time to get rid of that last remaining 1809-based Traefik image.

traefik - Official Image

The bad news is that there's no ltsc2022 Traefik image for us to use; the good news is that the original Dockerfile is available, so I can rewrite it to consume ltsc2022 images. In addition, I took the latest (at the time) version, 2.9.8, while the officially supported one is 2.2.0, so it makes sense to parametrize the version as well, taking it from an .env setting.

I created a new docker\build\traefik folder and ended up with the following Dockerfile in there:

ARG IMAGE_OS
FROM mcr.microsoft.com/windows/servercore:${IMAGE_OS}
ARG VERSION

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

RUN Invoke-WebRequest \
        -Uri "https://github.com/traefik/traefik/releases/download/$env:VERSION/traefik_${env:VERSION}_windows_amd64.zip" \
        -OutFile "/traefik.zip"; \
    Expand-Archive -Path "/traefik.zip" -DestinationPath "/" -Force; \
    Remove-Item "/traefik.zip" -Force
EXPOSE 80
ENTRYPOINT ["/traefik"]

# Metadata
LABEL org.opencontainers.image.vendor="Traefik Labs" \
      org.opencontainers.image.url="https://traefik.io" \
      org.opencontainers.image.source="https://github.com/traefik/traefik" \
      org.opencontainers.image.title="Traefik" \
      org.opencontainers.image.description="A modern reverse-proxy" \
      org.opencontainers.image.version=${VERSION} \
      org.opencontainers.image.documentation="https://docs.traefik.io"

Because of that, I also had to update the related traefik section of the docker-compose.override.yml file:

traefik:
  isolation: ${ISOLATION}
  image: ${REGISTRY}traefik:${TRAEFIK_VERSION}-servercore-${EXTERNAL_IMAGE_TAG_SUFFIX}
  build:
    context: ../../docker/build/traefik
    args:
      IMAGE_OS: ${EXTERNAL_IMAGE_TAG_SUFFIX}
      VERSION: ${TRAEFIK_VERSION}
  volumes:
    - ../../docker/traefik:C:/etc/traefik
  depends_on:
    - rendering

What I want to draw attention to here: I am now using ${ISOLATION}, as the rest of the containers do, instead of the dedicated TRAEFIK_ISOLATION, which can now be removed from .env.

Another thing is that I am passing a fully parametrized image name:

image: ${REGISTRY}traefik:${TRAEFIK_VERSION}-servercore-${EXTERNAL_IMAGE_TAG_SUFFIX}

I intentionally do not prefix it with ${COMPOSE_PROJECT_NAME}, so that the image is reusable between several solutions on the same machine, which saves some disk space.

The last step is adding the .env parameter TRAEFIK_VERSION=v2.9.8 and removing the TRAEFIK_IMAGE parameter, which is no longer needed. Good to go!

Traefik in action

Verdict

I tested all the important features of the platform, including Experience Editor, and it all works – and, what is especially important, it works impressively fast in Process isolation mode. And since all the containers are built with ltsc2022 and run in Process isolation, one doesn't need Hyper-V at all!

As for me, I ended up with a nice and powerful Windows 11 laptop suitable for modern Sitecore headless operations with minimum overhead, thanks to Process isolation.

Enjoy faster development!

Challenges of international travelling in 2023, or how things can go unexpectedly wrong

I am the kind of person who tries to predict and avoid potential problems long before they can occur. Risk management is present in every cell circulating in my blood – partly some sort of professional deformation, partly natural curiosity and lessons learned from others' mistakes. But sometimes things go very unpredictably, and you're left on your own.

It is a triple miracle that I made it back to the US from the conference – in fact, a set of three independent miracles.

First, getting to and from Spain. It was a lucky coincidence of me buying both onward and return tickets on those rare lucky days right in between a series of air traffic control strikes across European airports.

I made my flight back early on Saturday, and some of those who left over the weekend could not make it because of air traffic control strikes in France and Germany. Even if you're not flying to France or Germany, there is a big chance of making a layover or connection at one of their airports, as there are no direct flights to the USA from the mid-size Spanish airports. Likewise, when flying in, I changed planes in Frankfurt on the way from LAX to Malaga. With Spain itself joining the strikes from that weekend, there would have been even fewer chances to fly out, so I feel exceptionally lucky to have departed early, and through the UK, which joined the airport strikes slightly later, giving me enough time to leave Europe.

Next leg. Early Monday morning I showed up at Heathrow airport as normal and was denied boarding over an "expired" barcode on my COVID certificate. I had used it for flying all the time, including in the UK, and it had never been a problem. It could have remained a minor problem: at least I was vaccinated there in the UK and the records should be available, and I memorize all my passwords, so I could easily log in to the app or the website... Incorrect! Whoever made the app made it with mandatory 2-factor authentication, sending a text to your phone number. But I no longer have my old UK number, having moved back to the States. Now you see how one minor problem turns into a much bigger one.

Trying to escalate it through all levels of management did not help at all – these people simply sit out their paid hours and do not want to go the extra mile. "Computer says no" is an accurate description of dealing with them. So, I was denied boarding for a stupid reason, and the clock was ticking...

In a critical situation, your mind works differently, brainstorming every possible way out under stress. I remembered that I had switched that original number (I did not even remember the actual digits) to a pre-paid plan and put it somewhere in storage along with some old phone. So I had to call someone who could access it there – but it was 3 AM in California. The chances to: 1) wake up the right person, so that... 2) they understand your uncertain instructions and... 3) manage to follow them correctly – multiplied together are so low! But I made all of that happen within the permitted window of 30 minutes – such a miracle! Unbelievable!

After receiving the code, I was able to pass through a line of unwanted, difficult questions and eventually generate my certificates in the mobile app. And guess what? The check-in lady neither scanned the updated barcode nor entered it anywhere. At all! She just said "now ok" and that was it. She could potentially have let me board with the "expired" barcode, since all she "checked" was the date label above it. Losing a flight and being stuck in airport limbo with heavy bags on you (not to mention $1-2K for a replacement flight) is a huge penalty when things go wrong, mainly because of inadequate, non-transparent procedures and the human robots who follow them. This system is definitely broken. The humans behind it are "broken" in a similar way.

That's not all. By the time I passed the above line of traps – showing the robot-people the right label they wanted to see – they had put my ticket into STANDBY status, which meant I was not guaranteed a seat on BOTH legs of my flight, not just the trans-Atlantic segment. They boarded me to Phoenix without issuing the onward ticket, which I needed to get there.

The first segment of my trip was delayed by 2 hours, so I had a little less than 40 minutes to clear customs and immigration, re-check the bags to the final destination (praying they would reach the plane in time), and run a long way to the departure gate. Long story short, I was the fastest person to get off the plane and pass all the procedures – rechecking the bags, running through additional security, etc. – but I reached the "gate closed" door just as the boarding assistants were moving away. I had to run as fast as possible, waving my arm and shouting "do not close" to get their attention, then ask them to let me on the plane. Emotions burst, and the timing was so precise – an extra 20 seconds would have left me overnight in Phoenix, possibly paying for the final segment. But this type of luck followed me the whole day, so both I and my bags magically arrived at Orange County airport in time.

What a crazy day it was!

XM Cloud: a modern way to content management and import with Authoring GraphQL API

When it comes to content management, how would you deal with automating it?

You’ve probably thought of Data Exchange Framework for setting up content import from external sources on a regular basis, or Sitecore PowerShell Extensions as the universal Swiss Army knife that allows doing everything.

Sitecore XM Cloud is a modern SaaS solution and therefore offers one more way of managing and importing content: GraphQL mutations. This is also an option for the latest 10.3 XM/XP platforms, bringing them a step closer to today's composable world.

There is a video walkthrough of the whole exercise at the bottom of this post.

Please welcome: Authoring and Management API!

The documentation presents a broad and awe-inspiring list of things you can do with the Authoring API against your instance: create and delete items, templates, and media. It also empowers you to do some operations around site context and perform content search on your CM instance.
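
To give a taste of it, here is a hedged TypeScript sketch of creating an item through the Authoring API endpoint. The mutation follows the documented createItem shape, but the input fields are reproduced from memory and the host and GUIDs are placeholders – double-check them against the official schema:

// Creating a content item via the Authoring API (sketch)
const mutation = `
  mutation {
    createItem(
      input: {
        name: "MyNewItem"
        templateId: "{TEMPLATE-GUID}"
        parent: "{PARENT-ITEM-GUID}"
        language: "en"
        fields: [{ name: "Title", value: "Hello from the API" }]
      }
    ) {
      item {
        itemId
        path
      }
    }
  }
`;

const accessToken = '<access-token>'; // obtained via /oauth/token as shown below

await fetch('https://<your-cm-host>/sitecore/api/authoring/graphql/v1', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${accessToken}`,
  },
  body: JSON.stringify({ query: mutation }),
});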

The Management API additionally gives you control over operations using queries and mutations for the following GraphQL types:

  • Archiving
  • Database
  • Indexing
  • Job
  • Language
  • Publishing
  • Security
  • Workflow
  • Rules

With that in mind, you can create and structure your content, reindex, and publish it to Experience Edge entirely using this API. So let’s take a look at how it works!

Uploading a picture to Media Library using GraphQL Authoring API

First of all, it is disabled by default, so we need to switch it on by setting the Sitecore_GraphQL_ExposePlayground environment variable to true. Since these variables expand at build time, you also need to re-deploy the environments.

Enable Api

Once deployment is complete, you can start playing with it. Security in a composable world typically works with OAuth, and the Authoring and Management API is no exception. In order to obtain an access token, you first need to authenticate with the client ID and client secret you set up with the XM Cloud Deploy app:

Token

There are different ways of authenticating (for example, using the CLI dotnet sitecore cloud login command), but since we need to fully automate the routine, I will be using the /oauth/token endpoint. It is also worth mentioning that once you have initially authorized with the CLI, your client ID / secret pair is stored in the .sitecore\user.json file, so let's take it from there. Here's the code:

$userJson = "$PSScriptRoot/../../.sitecore/user.json"

if (-not (Test-Path $userJson)) {
    Write-Error "The specified file '$userJson' does not exist."
    return
}

$userJson = Get-Content $userJson | ConvertFrom-Json
$clientId = $userJson.endpoints.xmCloud.clientId
$clientSecret = $userJson.endpoints.xmCloud.clientSecret
$authorityUrl = $userJson.endpoints.xmCloud.authority
$audience = $userJson.endpoints.xmCloud.audience
$grantType = "client_credentials"

$body = @{
    client_id = $clientId
    client_secret = $clientSecret
    audience = $audience
    grant_type = $grantType
}

$response = Invoke-RestMethod -Uri "${authorityUrl}oauth/token" -Method Post -ContentType "application/x-www-form-urlencoded" -Body $body
return $response.access_token
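
Assuming you save the snippet above as Get-AccessToken.ps1 (my name for it, not an official one), obtaining a token elsewhere becomes a one-liner:

$JWT = & "$PSScriptRoot/Get-AccessToken.ps1"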

Now we have the access token, and it should be passed as a header with every single request to the GraphQL API:

"Authorization" = "Bearer <access_token>"

Next, let’s make a mutation query that takes the API endpoint and the target Sitecore path you want to upload your media to, and returns a pre-signed upload URL. Here’s the code:

[CmdletBinding()]
Param(
    [Parameter(Mandatory=$true, HelpMessage="The URL of the endpoint where the file will be uploaded.")]
    [string]$EndpointUrl,
    [Parameter(Mandatory=$true, HelpMessage="The JWT token to use for authentication.")]
    [string]$JWT,
    [Parameter(Mandatory=$true, HelpMessage="The path of the file to be uploaded.")]
    [string]$UploadPath
)

$query = @"
mutation
{
  uploadMedia(input: { itemPath: "$UploadPath" }) {
    presignedUploadUrl
  }
}
"@

$body = @{ query = $query} | ConvertTo-Json
$headers = @{
    "Content-Type" = "application/json"
    "Authorization" = "Bearer $JWT"
}

# Invoke the GraphQL endpoint using Invoke-RestMethod and pass in the query and headers
$response = Invoke-RestMethod -Method POST -Uri $EndpointUrl -Headers $headers -Body $body
$result = $response.data.uploadMedia
return $result.presignedUploadUrl

Now that we have the pre-signed upload URL, we can perform media upload passing the local file to process:

[CmdletBinding()]
Param(
    [Parameter(Mandatory=$true, HelpMessage="The URL to upload the file to.")]
    [string]$UploadUrl,
    [Parameter(Mandatory=$true, HelpMessage="The JWT token to use for authentication.")]
    [string]$JWT,
    [Parameter(Mandatory=$true, HelpMessage="The path to the file to be uploaded.")]
    [string]$FilePath
)

if (-not (Test-Path $FilePath)) {
    Write-Error "The specified file '$FilePath' does not exist."
    return
}

# POST the file to the pre-signed URL as multipart form data via curl
$result = & curl.exe --request POST $UploadUrl --header "Authorization: Bearer $JWT" --form =@"$FilePath" -s
$result = $result | ConvertFrom-Json
return $result

This script will return the details of a newly uploaded media item, such as:

  • item name
  • item full path
  • item ID

I combined all the above cmdlets into a single Demo-UploadPicture.ps1 script that takes care of passing all the parameters and performs the upload operation:

Powershell
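
For reference, here is a minimal sketch of what such a combined script can look like, assuming the three snippets above were saved as Get-AccessToken.ps1, Get-PresignedUploadUrl.ps1, and Invoke-MediaUpload.ps1 (hypothetical file names of my choosing):

[CmdletBinding()]
Param(
    # e.g. the Authoring API endpoint of your CM instance
    [Parameter(Mandatory=$true)] [string]$EndpointUrl,
    # local picture to upload
    [Parameter(Mandatory=$true)] [string]$FilePath,
    # target Sitecore media path
    [Parameter(Mandatory=$true)] [string]$UploadPath
)

# 1. Obtain an OAuth access token (reads the client ID/secret from .sitecore\user.json)
$jwt = & "$PSScriptRoot/Get-AccessToken.ps1"

# 2. Ask the Authoring API for a pre-signed upload URL for the target media path
$uploadUrl = & "$PSScriptRoot/Get-PresignedUploadUrl.ps1" -EndpointUrl $EndpointUrl -JWT $jwt -UploadPath $UploadPath

# 3. POST the binary to the pre-signed URL and print the new media item details
& "$PSScriptRoot/Invoke-MediaUpload.ps1" -UploadUrl $uploadUrl -JWT $jwt -FilePath $FilePath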

The upload immediately shows up in the Media Library at the requested path:

Result

Pros:

  • a modern, platform-agnostic approach
  • works nicely with webhooks
  • allows automating pretty much everything
  • excellent management options making DevOps easier

Cons:

  • cumbersome token operations
  • doesn’t allow batching, therefore it takes one request per operation

Verdict

It is great to have a variety of tools in your belt rather than a single hammer in hand with everything around turning into nails. I hope this new tool brings your automation skills to a new level!

Tunneling out Sitecore 10.3 from local machine containers for full global access

I received an urgent request to prepare a Sitecore instance for testing some external tools that our prospective partners are making demos of for us. In better times I would, of course, spin up a proper PaaS / Kubernetes environment; however, I currently have no control over any cloud subscription and - much more importantly - no time! The deadline for such tasks is usually "yesterday", so I started thinking about potential "poor man's deployment" options.

Like many developers, I also have a "server in a wardrobe"; however, it is not a retired laptop but a proper high-spec machine that currently serves me as a hypervisor server, plugged into a gigabit Google Fiber connection. My cat loves spending time there, and I am generally OK with that as long as she does not block the heat sink vents:

This server runs the ltsc2022 kernel, which provides additional performance benefits, as I wrote in a previous post about running 10.3 in Process Isolation mode. So why not reuse the codebase from that same containerized Next.js starter kit for the sake of a PoC?

Please note: you should not use this approach for hosting real-life projects or for anything bigger than a quick PoC or demo showcase. The stability of the tunneled channel remains at the courtesy of the service provider, and this setup may also violate your particular license terms, so please use it with care.

The next question is how to make it accessible from the global internet, so that the people making demos can log in from wherever they are and work with Sitecore as they normally would. Typically, to make that happen I need to undertake three steps:

  1. Define a hostname and configure Sitecore to install with its subdomains.
  2. Generate a wildcard certificate for a domain name of the above hostname.
  3. Make the required DNS changes: point the A-record and subdomains to the public IP of that machine.

But wait a bit - do I have a public IP? Sadly, I don't, so I started looking at a variety of DynDNS options, which still required more effort than I was initially willing to commit. Eventually, I remembered a specific class of tunneling software that serves exactly this purpose. From a wide range of options, LocalTunnel appeared to be the most promising free-to-use solution, one that some folks use to proxy out their basic sites for demos.

Its feature list looks very attractive:

  • it is totally free of charge
  • does not require any registration/tokens
  • ultrasimple installation with npm
  • because of the above, it potentially can tunnel directly into containers
  • gives you the option of temporarily claiming a subdomain, if one is available
  • allows tunneling hosts with invalid SSL certificates

The typical installation and execution are ultra-simple:

npm install -g localtunnel
lt --port 8080

After the second command, LocalTunnel responds with a URL; navigating to it tunnels your requests to port 8080 of the host machine it was run on.

But how do I apply that knowledge to a complicated Sitecore installation, given that most of the Sitecore services in containers sit behind Traefik, which also serves as the SSL offload point? In addition, the Identity Server requires a publicly accessible URL to return successfully authenticated requests to.

The more advanced call syntax looks like this:

lt --local-host HOST_ON_LOCAL_MACHINE --local-https --allow-invalid-cert --port 443 --subdomain SUBDOMAIN_TO_REQUEST

Basically, for Sitecore to operate from outside, I must set it up so that the external URLs match the URLs served locally on the host where LocalTunnel runs. With the above command, if the subdomain request is satisfied, the site will be served at https://SUBDOMAIN_TO_REQUEST.loca.lt, which leads to HOST_ON_LOCAL_MACHINE on port 443.

So, in a headless Sitecore we have four typical parts running on subdomains of a hostname served by a wildcard certificate:

  • Content Management (aka Sitecore itself)
  • Content Delivery
  • Identity Server
  • Rendering Host

OOB they are served by default as something like cm.YourProject.localhost, cd.YourProject.localhost, id.YourProject.localhost and www.YourProject.localhost correspondingly. In order to match HOST_ON_LOCAL_MACHINE to SUBDOMAIN_TO_REQUEST for the sake of this exercise, I chose the following hostnames for the installation:

  • Content Management - sitecore.loca.lt
  • Content Delivery - delivery.loca.lt
  • Identity Server - identity.loca.lt
  • Rendering Host - rendering.loca.lt

The scripts that create the Next.js Starter Kit template and the Init.ps1 script don't make all the required hostname changes, so in case you do it manually (recommended), here are the locations to change:

1. Init.ps1 - a block that makes and installs certificates (search by & $mkcert -install)

2. Init.ps1 - a block that adds host file entries (search by Add-HostsEntry)

3. Init.ps1 - a block that sets the environment variables (search Set-EnvFileVariable "CM_HOST")

4. Up.ps1 - authentication using Sitecore CLI (search by dotnet sitecore login)

5. Up.ps1 - final execution in the browser (search by Start-Process at the bottom of the file)

6. .env file - replace the CM_HOST, ID_HOST, RENDERING_HOST and CD_HOST variables (see the sample values after this list)

7. Make sure Traefik config (docker\traefik\config\dynamic\certs_config.yaml) references the correct certificate and key files

8. Create-jss-project.ps1 - features --layoutServiceHost and --deployUrl parameters of jss setup command

9. src\rendering\scjssconfig.json

10. src\rendering\src\temp\config.js
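
For item 6, assuming the loca.lt hostnames chosen above, the .env entries end up looking like this:

CM_HOST=sitecore.loca.lt
CD_HOST=delivery.loca.lt
ID_HOST=identity.loca.lt
RENDERING_HOST=rendering.loca.lt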

After the whole installation completes successfully, you will see Sitecore CM and the Rendering Host in the browser under the altered domain URLs.

Now you can start LocalTunnel:

lt --local-host identity.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain identity
lt --local-host sitecore.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain sitecore
lt --local-host rendering.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain rendering
lt --local-host delivery.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain delivery
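
Since each lt process blocks its console, a small sketch like this one (assuming the lt shim from npm is on your PATH) starts all four tunnels from a single PowerShell terminal:

$subdomains = "identity", "sitecore", "rendering", "delivery"
foreach ($name in $subdomains) {
    # each tunnel maps https://<name>.loca.lt back to the matching local host on port 443
    Start-Process lt -ArgumentList "--local-host $name.loca.lt --local-https --allow-invalid-cert --port 443 --subdomain $name"
}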

On the first run from outside, visitors may see a notification screen saying that LocalTunnel serves the given URL, and apart from a relatively high ping, that's it.

I briefly tested it and it works well: no SSL issues, and Experience Editor runs and lets me make changes, then publishes them correctly so that they are reflected when browsing the Rendering Host. Everything seems to work well and as expected!

LTSC2022 images for Sitecore containers released: what does it mean to me?

Exciting news! Sitecore kept its original promise and released new ltsc2022 container images for all the topologies of both the 10.3 and 10.2 versions of the platform.

The biggest benefits of the new images are the improved image sizes - almost 50% smaller than ltsc2019 - and support for running in Process Isolation mode on Windows 11.

Check it yourself:

So, what does that mean for developers and DevOps?

First and foremost, running Sitecore 10.3 on Windows Server 2022 is now officially supported. You may consider upgrading your existing solutions to benefit from the Server 2022 runtime.

Developers working on Windows 11 also got the long-awaited support: containers built from the new images can run in Process isolation mode without a hypervisor. That brings your cluster performance close to bare-metal metrics.


Let's try it in action!

I decided to give it a try and test whether it would work, and how effectively. I recently purchased a new Microsoft Surface Pro 8 laptop which has Windows 11 pre-installed and had therefore been useless for my professional purposes until now, so it seemed to be excellent test equipment.

After the initial preparation and installing all the prerequisites, I was ready to go. For the codebase, I decided to go with the popular Sitecore Containers Template for JSS Next.js apps and the Sitecore 10.3 XM1 topology, as the most proven and well-preconfigured starter kit.

Since I initialized my codebase with the -Topology XM1 parameter, all the required container configurations are located under the /MyProject/run/sitecore-xm1 folder. We are looking for the .env file, which stores all the necessary parameters.

The main change here is setting these two environment variables to benefit from the ltsc2022 images:

SITECORE_VERSION=10.3-ltsc2022
EXTERNAL_IMAGE_TAG_SUFFIX=ltsc2022

The other important change in the .env file is switching to ISOLATION=process. Also, please note that TRAEFIK_ISOLATION=hyperv stays unchanged due to a lack of ltsc2022 support for Traefik, so sadly you still need Hyper-V installed on this machine. The difference is that it serves only Traefik; the rest of the Sitecore resources will work in Process mode.
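
In .env terms, the isolation pair ends up looking like this:

ISOLATION=process
TRAEFIK_ISOLATION=hyperv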

I also made a few optional improvements, upgrading important modules to their recent versions:

MANAGEMENT_SERVICES_IMAGE=scr.sitecore.com/sxp/modules/sitecore-management-services-xm1-assets:5.1.25-1809
HEADLESS_SERVICES_IMAGE=scr.sitecore.com/sxp/modules/sitecore-headless-services-xm1-assets:21.0.583-1809

I also changed Node.js to reflect the recent LTS version:

NODEJS_VERSION=18.14.1

Please note that sitecore-docker-tools-assets did not change since the previous version of Sitecore (10.2), so I left it untouched.

One last thing - to make sure I indeed build and run in Process isolation mode, I double-checked that ISOLATION=process was set, changing this value from the default, as described above. The rest of the .env file was correctly generated for me by the Init.ps1 script.

With all the changes complete, let’s run .\up.ps1 in an administrative PowerShell terminal and wait until it downloads and builds the images:


Advanced Part: building Traefik with ltsc2022

Now, let's get rid of the only remaining 1809-based container, which is Traefik. Luckily, its Dockerfile is available, so I can rewrite it to consume ltsc2022 images. In addition, I took the latest (at the time) version, 2.9.8, while the officially supported one is 2.2.0, so it makes sense to parametrize the version as well, taking its value from the .env settings.

I created a new docker\build\traefik folder and ended up with the following Dockerfile in there:

ARG IMAGE_OS
FROM mcr.microsoft.com/windows/servercore:${IMAGE_OS}

ARG VERSION
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

RUN Invoke-WebRequest \
        -Uri "https://github.com/traefik/traefik/releases/download/$env:VERSION/traefik_${env:VERSION}_windows_amd64.zip" \
        -OutFile "/traefik.zip"; \
    Expand-Archive -Path "/traefik.zip" -DestinationPath "/" -Force; \
    Remove-Item "/traefik.zip" -Force

EXPOSE 80
ENTRYPOINT [ "/traefik" ]

# Metadata
LABEL org.opencontainers.image.vendor="Traefik Labs" \
    org.opencontainers.image.url="https://traefik.io" \
    org.opencontainers.image.source="https://github.com/traefik/traefik" \
    org.opencontainers.image.title="Traefik" \
    org.opencontainers.image.description="A modern reverse-proxy" \
    org.opencontainers.image.version=${VERSION} \
    org.opencontainers.image.documentation="https://docs.traefik.io"

Because of that, I also had to update the related traefik section of the docker-compose.override.yml file:

  traefik:
    isolation: ${ISOLATION}
    image: ${REGISTRY}traefik:${TRAEFIK_VERSION}-servercore-${EXTERNAL_IMAGE_TAG_SUFFIX}
    build:
      context: ../../docker/build/traefik
      args:
        IMAGE_OS: ${EXTERNAL_IMAGE_TAG_SUFFIX}
        VERSION: ${TRAEFIK_VERSION}
    volumes:
      - ../../docker/traefik:C:/etc/traefik
    depends_on:
    - rendering

One thing to pay attention to here: I am now using ${ISOLATION}, the same as the rest of the containers, instead of the dedicated TRAEFIK_ISOLATION, which can now be removed from .env.

Another thing is that I am passing a fully parametrized image name:

image: ${REGISTRY}traefik:${TRAEFIK_VERSION}-servercore-${EXTERNAL_IMAGE_TAG_SUFFIX}

I intentionally do not prefix it with ${COMPOSE_PROJECT_NAME} so that this image becomes reusable across several solutions on the same machine, which saves some disk space.

The last step is adding the .env parameter TRAEFIK_VERSION=v2.9.8 and removing the TRAEFIK_IMAGE parameter, which is no longer needed. Good to go!
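
To verify the change, rebuilding just the proxy should be enough - a sketch, assuming you run it against your combined compose files (otherwise simply re-run .\up.ps1 as before):

docker-compose build traefik
docker-compose up -d traefik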


Outcomes and verdict

I tested all of the important features of the platform, including Experience Editor, and it all works - and, especially important, works impressively fast in Process isolation mode. And since all the containers are now built on ltsc2022 and run in Process isolation, you don't need Hyper-V at all!

As for me, I ended up with a nice and powerful laptop suitable for modern Sitecore headless operations.

Enjoy faster development!

Content Hub One full review: good, bad and ugly

Most of my readers know me as a dedicated Sitecore professional; however, those who are close to me are aware of the variety of my hobbies. Some of them also know me as a Scotch whisky expert and collector. After living for almost 15 years in the UK, I have built up a pretty decent collection of these spirits and learned hundreds of facts from visiting dozens of whisky distilleries in Scotland.

Once I got my hands on a new SaaS offering from Sitecore - Content Hub ONE - I decided to try it on a practical example and test its capabilities as if I were building a real application. What would I use for demo purposes? Something I know a lot about - that's how showcasing my whisky collection was chosen. Let's go all the way, starting with content modeling, going through actual data and media authoring and publishing, and eventually creating a headless app for content delivery.

Content

First look

Once I got access to Content Hub ONE, I was curious about what I could do with it. After logging in through the portal, I got the ascetic main interface:

It maps exactly to the activities expected here: Content Types is for content modeling, Media is for uploading media assets, and Content is for creating content from your types and referencing uploaded media.

Content Hub ONE comes with handy documentation that helps you understand the operations.


Content Modeling

For my purpose, I need to set up two content types - a listing type featuring items from the collection, and the item type itself to be used on the corresponding pages (marketers know these as PLP and PDP).

Let's start with a Whisky type, which represents an actual item from my collection. You can only choose from these basic field types:

  • Text can be either a short single-line value or multi-line long text of up to 50,000 characters.
  • Rich text includes markup and can take even more - 200,000 characters. It does not accept raw HTML.
  • Number, Boolean, and Date/Time are obvious and speak for themselves.
  • Reference gives the ability to link other content records to this item, with an unfortunate limit of max 10 items per field.
  • Media is similar to the above, except that it references uploaded media items.

Unfortunately, some crucial fields are missing, such as those used for storing Links, URLs, and email addresses.

I ended up with the following structure for the Whisky item type, featuring as many of the various field types as possible:

Next, let's create a Collection type to include a collection of items as well as some descriptive content within Rich text type:

Pay attention to the Archive field. From the home page, I want to distribute a zip archive with all 50 images of my collection, so I included this media field. The challenges of this implementation are described below.


Media

Content Hub ONE users can upload media so that it gets published to the Experience Edge CDN. However, it is limited to images only, in GIF, JPG, PNG, and WEBP formats.

That is not sufficient for my demo purposes. I also need to upload videos of creative ads for each of my whisky items, as referenced by the Whisky type, and I also want to upload a ZIP archive with all 50 images featuring my entire collection, referenced by the Collection type. This is nothing extraordinary and is very common for content-powered websites.

So, the question is - can I upload archives and videos? Officially - no, you cannot. However, nothing stops you from renaming your assets to something like video.mp4.jpg or archive.zip.jpg so that they successfully pass upload validation and actually get uploaded and later published to Edge. With the 70 MB limit per media item, it can host reasonably converted videos, archives, or whatever else you may want to put there.
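
In PowerShell, that rename trick is a one-liner per file (advert.mp4 is just a placeholder name):

# disguise the video as a JPG so it passes the upload validation
Rename-Item -Path .\advert.mp4 -NewName advert.mp4.jpg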

Note: please be aware that since anything other than images isn't officially supported, you may lose access to that content at some point. Use it at your own risk!

Further below I will show how to build a head application that can consume such content, including "alternative" non-supported media types.


Development

There is documentation for developers - a good start, at least.

CLI

Content Hub ONE comes with a helpful CLI and useful documentation. It supports a Docker installation, but for local installation I personally enjoy the support for my favorite Chocolatey package manager:

choco install Sitecore.ContentHubOne.Cli --source https://nuget.sitecore.com/resources/v2

With the CLI, you execute commands against tenants, with only one tenant active at a time. Adding a tenant is easy, but to do so you must provide the following four parameters (see the command sketch after this list):

  • organization-id
  • tenant-id
  • client-id
  • client-secret
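
I would expect the registration call to look roughly like the sketch below - note that the flag names are my assumption, mapped one-to-one from the parameter list, so verify them against the CLI help:

ch-one-cli tenant add --organization-id <ORG_ID> --tenant-id <TENANT_ID> --client-id <CLIENT_ID> --client-secret <CLIENT_SECRET>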

Using the CLI, you can serialize content just as with the XP/XM platforms and see the differences, and that is a pretty important feature here. I pulled all my content into a folder using the ch-one-cli serialization pull content-item -c pdp command, where pdp is my type for whisky items:

The serialized item looks as below:

id: kghzWaTk20i2ZZO3USdEaQ
name: Glenkinchie
fields:
  vendor:
    value: 'Glenkinchie '
    type: ShortText
  brand:
    value: 
    type: ShortText
  years:
    value: 12
    type: Integer
  description:
    value: >
      The flagship expression from the Glenkinchie distillery, one of the stalwarts of the Lowlands. A fantastic introduction to the region, Glenkinchie 12 Year Old shows off the characteristic lightness and grassy elements that Lowland whiskies are known for, with nods to cooked fruit and Sauternes wine along the way. A brilliant single malt to enjoy as an aperitif on a warm evening.
    type: LongText
  picture:
    value:
    - >-
      {
        "type": "Link",
        "relatedType": "Media",
        "id": "lMMd0sL2mE6MkWxFPWiJqg",
        "uri": "http://content-api-weu.sitecorecloud.io/api/content/v1/media/lMMd0sL2mE6MkWxFPWiJqg"
      }
    type: Media
  video:
    value:
    - >-
      {
        "type": "Link",
        "relatedType": "Media",
        "id": "Vo5NteSyGUml53YH67qMTA",
        "uri": "http://content-api-weu.sitecorecloud.io/api/content/v1/media/Vo5NteSyGUml53YH67qMTA"
      }
    type: Media

After modifying it locally and saving the changes, it is possible to validate and promote them back to Content Hub ONE CMS. With that in mind, you can automate all of this for your CI/CD pipelines using PowerShell, for example. I would also recommend watching this walkthrough video to familiarize yourself with the Content Hub ONE CLI in action.


SDK

There is a client SDK available with support for two languages: JavaScript and C#. For the sake of simplicity and speed, I decided to use the C# SDK for my ASP.NET head application. At first glance, the SDK looked decent and promising:

And quite easy to deal with:

var content = await _client.ContentItems.GetAsync();

var collection = content.Data
    .FirstOrDefault(i => i.System.ContentType.Id == "collection");

var whiskies = content.Data
    .Where(i => i.System.ContentType.Id == "pdp")
    .ToList();

However, it has one significant drawback: the only way to get media content for use in a head application is via Experience Edge and GraphQL - a conclusion I came to after spending a few hours troubleshooting various approaches. Unfortunately, I did not find anything about this in the documentation. In any case, with GraphQL querying Edge my client code looks cleaner, with fewer queries and fewer dependencies. The one and only dependency I added for this is the GraphQL.Client library. The only additional thing needed for querying Edge is setting the X-GQL-Token header with a value you obtain from the Settings menu.

The advantage of GraphQL is that you can query against the endpoints specifying quite complex structures of what you want to get back as a single response and receive only that without any unwanted overhead. I ended up having two queries:

For the whole collection:

{
  collection(id: "zTa0ARbEZ06uIGNABSCIvw") {
    intro
    rich
    archive {
      results {
        fileUrl
        name
      }
    }
    items {
      results {
        ... on Pdp {
          id
          vendor
          brand
          years
          description
          picture {
            results {
              fileUrl
              name
            }
          }
        }
      }
    }
  }
}

And for a specific whisky record requested from a PDP page:

query ($id: String!) {
  pdp(id: $id) {
    id
    vendor
    brand
    years
    description
    picture {
      results {
        fileUrl
        name
      }
    }
    video {
      results {
        fileUrl
        name
      }
    }
  }
}

The results of the last query are easily retrieved in the code as:

var response = await Client.SendQueryAsync<Data>(request);
var whiskyItem = response.Data.pdp;

Some challenges occurred in the front-end part of the head application.

When dealing with Rich text fields, you have to build your own logic (see my inline, oversimplified example, lines 9-50) for rendering HTML output from the JSON structure you get for that field. The good news is that .NET deserializes it nicely, so you can at least iterate through the markup:

Sitecore provided an extremely helpful GraphQL IDE tool for us to test and craft queries, so below is how the same Rich text field value looks in JSON format:

You may end up wrapping all the clumsy business logic for rendering Rich text fields into a single HTML helper that produces output for the entire field and accepts several customization parameters. I did not do that, as it is labor-heavy, but for the sake of example I produced such a helper for the Long Text field type:

public static class TextHelper
{
    // Renders a multi-line Long Text value as a single <p>, converting newlines to <br> tags
    public static IHtmlContent ToParagraphs(this IHtmlHelper htmlHelper, string text)
    {
        var modifiedText = text.Replace("\n", "<br>");
        var p = new TagBuilder("p");
        p.InnerHtml.AppendHtml(modifiedText);
        return p;
    }
}

which can be called from a view as:

@Html.ToParagraphs(Model.Description)


Supporting ZIP downloads

On the home page, there is a download link sitting within the Rich text content. This link references a controller action that returns the zip archive with the correct MIME type:

public async Task<IActionResult> Download()
{
    // fetching the whole collection just for the archive field is overkill; ideally, query only what's needed
    var collection = await _graphQl.GetCollection();

    if (collection.Archive.Results.Any())
    {
        var url = collection.Archive.Results[0].FileUrl;
        var name = collection.Archive.Results[0].Name;
        name = Path.GetFileNameWithoutExtension(name);

        // gets actual bytes from ZIP binary stored as CH1 media
        var binaryData = await Download(url);
        if (binaryData != null)
        {
            // Set the correct MIME type for a zip file
            Response.Headers.Add("Content-Disposition", $"attachment; filename={name}");
            Response.ContentType = "application/zip";

            // Return the binary data as a FileContentResult
            return File(binaryData, "application/zip");
        }
    }

    return StatusCode(404);
}


Supporting video

For the sake of the demo, I simply embedded a video player on the page and referenced the URL of the published media from the CDN:

<video width="100%" style="margin-top: 20px;" controls>
    <source src="@Model.Video.Results[0].FileUrl" type="video/mp4">
    Your browser does not support the video tag.
</video>


Bringing it all together

I built and deployed the demo at https://whisky.martinmiles.net. You can also find the source code of the resulting .NET 7 head application project at this GitHub link.

Now let's run it in a browser. All the content seen on the page is editable from Content Hub ONE, as modeled and submitted earlier. Here's what it looks like:

Criticism

Content Hub ONE developers did a great job in a short time, and no blame goes to them. However, from my point of view, there is a large number of both minor and major issues that prevent using this platform at its current stage for commercial purposes. Let's take a look at them.

1. The lack of official support for media items other than the four image types is a big blocker, especially given that there is no technical barrier to it in principle. Hopefully, that gets sorted out with time.

2. Many times while working with CH1, I got phantom errors without understanding the cause. For example, I wanted to upload media but got Cannot read properties of undefined (reading 'error') in return. Later I realized it was caused by session expiration, which for some reason is not handled well in these cases. What is more frustrating - I got these session issues even after actively navigating the site, as if navigation did not reset the session expiration timer. But since this is a SaaS product, these are only my guesses, without access to the internals.

3. Another issue experienced today: CH1 went down, with the UI showing me a Failed to fetch error. The same occurred with my cloud-deployed head app, which also failed to fetch content from CH1. Unannounced or planned maintenance?

4. Not being able to reference more than 10 other records seriously limits the platform's usage. In my specific example, I had around 50 whisky items to expose through this app but was able to include a maximum of 10 of them. What is worse - there are no error messages around it, nor any UI informing me about the limitation in any other way.

5. When playing around with an existing type, I cannot change a field's type, and that limitation is understandable. The obvious workaround would be deleting that field and recreating it with the same name but another type (let's assume there's no content to be affected). Wrong - that's not possible and ends with a Failed entity definition saving with name: 'HC.C.collection' error. You can only recreate the field with a new name, not the same one you've just deleted. If you have lots of queries in your client code, you need to locate and update them correspondingly.

6. Not enough field types. For example, a URL could simply be placed into a short text field, but without proper validation, editors may end up with broken links if they put a faulty URL value on a page.


There is also some UI/UX to be improved

1. Content Hub ONE demands more clicks for content modeling and creation compared to, let's say, XP. For example, if you publish a content item, the related media does not get published automatically. You need to manually click through the media, locate it, and publish it explicitly. At a large volume of content, this gets annoying and adds unwanted labor.

2. To help with the above, why not add a "Publish" item to the context menu of an uploaded item in a Draft state? That would eliminate the unwanted step of clicking into the item to publish it.

3. On big monitors, the name of a record sits far away in the top left corner; since it is not presented as a form field, it is not immediately obvious that it is editable. That is especially important for records that cannot be renamed after creation. Bringing the name closer to the other fields would definitely help!

4. Lack of drag & drop. It would be much easier to upload media by simply dragging files onto the media list box, or any other reasonable control.

5. Speaking of media, the UI does not support selecting multiple files for upload. Users have to upload them one after another.

6. A better UI is needed around grouping and managing assets. Currently there are facets, but something more is needed - maybe the ability to group records into folders. I don't have a definite view on what it should be, but I definitely see the need for such a feature, as even my ultra-simple demo case already requires navigational effort.


Conclusion

I don't want to end with criticism only, leaving a negative impression of this product: there are plenty of positives as well. To mention just a few: decent SDKs; attention to detail where a feature is actually implemented (like the order of referenced items following the order you selected them in); and the nice idea of a modern asynchronous UI that notifies you when a resource gets published to Edge (the session expiration issues just need sorting out).

Content Hub ONE is definitely in the early stages of its career. I wish the development team and product managers success in eventually overcoming the product's growing pains and delivering a lightweight but reasonably powerful headless CMS that speeds up the content modeling and content delivery experience. The foot is already in the door, so the team just needs to push!