
Experience Sitecore !

More than 200 articles about the best DXP by Martin Miles

SUGCON 2019 Takeaways

SUGCON 2019 was a blast! 

More and more thrilling changes are coming to our beloved platform - you'll find them below. Also, it was a pleasure to receive the physical trophy of my Sitecore MVP award!

With this post, I am not going to do what everyone else does and generously describe the sessions attended. Instead, I'd like to share a brief bullet list of my own takeaways from this important event. Please forgive my memory - not everything is covered below, but the most important things are. Let me know if I've missed something. So, here we go:


General takeaways

  • Sitecore 9.1.1 has been released - this release is mostly focused on bugfixes and stability
  • Sitecore 9.2 is coming shortly
  • JSS becomes a massive game-changer for Sitecore
  • Docker will soon be in high demand for DevOps (XM and XP 9.1.1 images are already available)
  • Helix principles got updated with guidance for Commerce and xConnect, a simpler VS project structure, new samples, and some UX improvements.


Sitecore XP 9.1.1 features

  • other templates can now run before the ARM templates (useful when installing Solr and others)
  • improved logging of xConnect connection errors
  • plenty of other fixes covering:
    • EXM
    • Experience Editor
    • Experience Optimization
    • Sitecore Forms
    • Security
    • Marketing Foundation
    • Marketing Automation
    • Sitecore Services Client
    • etc.

Sitecore XP 9.2 expected to feature

  • the graphical installation assistant is coming back - a simple GUI over the SIF installer
  • the Rainbow library will be used for serialization OOB in Sitecore 9.2, and TDS is also moving to YAML
  • better robot detection
  • a new Active Personalization Report - a much-requested tool that we should have got earlier
  • Publishing Service with Sitecore Host support
  • reworked Links Database tool:
    • it works in batches and is much quicker
    • lowers impact on xConnect
  • improvements to Universal Tracker:
    • reduced number of requests to xConnect
    • session:end improvements
    • server role assignment
  • search improvements:
    • new Sitecore search role
    • Lucene becomes obsolete
    • better control over field indexing in Azure Search

Sitecore Commerce

  • Commerce 9.1 was released very shortly after SUGCON
  • version numbers and runtime are aligned with the compatible XM / XP products
  • it can be used in conjunction with JSS (calling the Commerce API through a proxy)

SXA

  • creating a Sitecore JSS site with SXA, including full separation of presentation from content (with partial and page designs)
  • improvements with background images
  • plenty of minor improvements coming with version 1.8.1

JSS

  • basic SXA support
  • full support for Sitecore Forms on JSS client sites, including multi-step forms and validation
  • introduced JSS Rendering Host to offload server-side rendering
  • Windows is no longer required for local development

Sitecore Host

  • upgraded to the latest .NET Core version
  • architecture changes for scalability and maintainability
  • in addition to Identity Server already using Sitecore Host, we now expect Publishing Service to move there
  • a NuGet package to help build Sitecore Host plugins is available for download

Is anything missing from the list? Anything forgotten? Of course - it's Horizon.

Since it was not mentioned, I assume it will become part of the platform later, not in 9.2. It was previously announced at Sitecore Symposium, and currently only Sitecore MVPs have access to Horizon previews in order to provide feedback to the development team. It might become available with Sitecore 9.2 as a separate add-on (as Sitecore normally does with new features before actually integrating them into the XP/XM platform), but there is no news on that yet. Let's keep our fingers crossed!

And finally, if you missed any of the presentations, you can get slides and videos from the official SUGCON Video Download page.

Running IIS on local Windows Nano Server in Hyper-V rather than in Docker

I've been enjoying Docker for a while, its flexibility and optimized images. However, there are two things, in general, I'd like to improve with the ops process:

  1. getting more control over the containers
  2. obtaining even more persistence, like the ability to create snapshots, switch between them, and a few more features very common for traditional VMs

I have a host machine running Windows 10 x64 Professional, so I benefit from Hyper-V coming out of the box with this OS. By that I mean creating a chain of checkpoints, switching between them, backing up and restoring images, as well as other traditional ops activities - all of that happens nicely and rapidly! Not to mention that VMs work as effectively as the host does - at least I do not feel any difference.

The downside, however, is the extreme size of the VMs - some take 50+ gigs of drive space! That's due to Windows 10 running inside, so looking at Docker's Windows images I felt quite jealous: Windows Server Core takes about 4 gigs and Nano Server is only half a gig! That's still 100 times more than the smallest Alpine image, but it is the smallest Windows option possible.

So I wondered whether it was possible to have Nano Server hosted in local Hyper-V on Windows 10, so that I could not only cluster as many of them as I need without any host machine performance impact, but also mix them with VMs running other OSes, sharing the same network and resources. And, of course, get the ability to take checkpoints and manage them remotely (via PowerShell, since Nano does not have any UI). Jumping ahead, I can confirm that all of the above was achieved, and this article explains how.


Content

  1. Defining objectives
  2. Preparing a virtual drive
  3. Setting up a VM
  4. Running Nano Server
  5. Remote administration
  6. The result


1. Defining objectives

The objectives for this exercise are listed below:
  
  • Running Nano Server as a guest OS from Hyper-V Manager using the UI
  • Being able to utilize all Hyper-V features, such as checkpoints, backups and restores
  • Running several machines in the same stable and configurable virtual network
  • Mixing various types of guests, such as both GUI and non-GUI Windows OSes along with Linux
  • Getting a minimal Windows-based OS with IIS running
  • Being able to manage that OS remotely (with PowerShell, since the OS has no UI)

2. Preparing a virtual drive


While you can manually create a Nano Server virtual hard drive with a (long) PowerShell command, there is a nice tool that lets you define what you'd like to get as the result - NanoServerImageBuilder.msi (2.6MB). Once run, it checks for prerequisites.

If you miss any of the dependencies, Nano Server Image Builder will identify, download, and install them:


After installation, you'll have two options, and since we are going to build a virtual hard drive for a VM - select the top one:


At this stage, you need to provide a Windows Server 2016 installation ISO.


As the dialogue box kindly advises, the server OS image should contain the NanoServer folder at the root of the ISO image:

When choosing an edition, pick Datacenter as it is the smallest one. From the optional packages, make sure you check IIS, as that's on our objectives list:

Give the machine a name and an admin password:

If you intend to manage your VM from a Hyper-V virtual subnet only (and not from outside networks), leave this unchecked:

As for the IP address, we'll use a DHCP client; a static IP is an optional benefit - feel free to choose one if you need it.

And that's it! As you can see, this tool is just a GUI wrapper over a PowerShell command that extracts the necessary components and generates a Nano Server virtual hard drive out of a traditional Windows Server 2016 ISO image. The actual command is listed below as well:
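
Roughly, it boils down to something like this (a sketch - the exact parameters depend on your choices in the wizard; the drive letter, paths and machine name here are examples):

# the NanoServerImageGenerator module ships in the NanoServer folder of the ISO
Import-Module .\NanoServerImageGenerator
New-NanoServerImage -Edition Datacenter -DeploymentType Guest `
    -MediaPath E:\ -BasePath .\Base -TargetPath .\NanoServer.vhd `
    -ComputerName NanoIIS -Package Microsoft-NanoServer-IIS-Package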

The tool ends up creating a VHD file - a virtual hard drive containing our Nano Server (and yes, it is half a gig in size!). At the next stage we need to create a new VM and attach the created drive to it in order to run the VM:


3. Setting up a VM

As usual, create a new Virtual Machine. Once done, attach the drive and complete the wizard:


4. Running Nano Server

In Hyper-V Manager you are now able to run Nano Server:


That's about the only UI you'll see from this image. Enter the Administrator password you provided upon image creation:


The choice of options available in the UI is not at all impressive:


Networking is probably the only useful configuration screen here. Obviously, that's where you set network parameters such as IP address, network mask, DHCP and the rest - use it if you did not configure the network correctly at the virtual hard drive preparation stage.

Done - so let's finally turn to the most interesting part: remote management!


5. Remote administration

Just before you start, run the command below to find out whether WSMan is enabled. The WSMan provider is a set of tools for PowerShell that lets you add, change, clear, and delete WS-Management configuration data on local or remote computers:

Test-WSMan IP_ADDRESS

The screenshot below tells me it is up and running on the Nano Server:


As the next step, you may need to run these two commands: the first enables PowerShell Remoting and the second treats every host as trusted (it is safer to use the exact IP address instead of the * wildcard).

Enable-PSRemoting                                  # enable PowerShell remoting first
Set-Item WSMan:\localhost\Client\TrustedHosts *    # trust all hosts; safer to specify the exact Nano Server IP instead of *

Now let's try to execute a PowerShell command remotely (within the context of the given Nano Server). This command runs the code within the curly braces on the Nano Server - the example below lists the root level of the remote C drive:

Invoke-Command -ComputerName 192.168.181.136 -ScriptBlock { Get-ChildItem C:\ } -credential Administrator


Alternatively, you may "enter" an interactive mode (the same as you get by using -it with Docker) and run commands one by one, all in the context of the remote Nano Server. Let's, for example, navigate into the IIS directory and check what is there:

Enter-PSSession -ComputerName 192.168.181.136 -Credential Administrator
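
Once inside the session, ordinary commands execute on the Nano Server - for example (the path below is the default IIS webroot):

cd C:\inetpub\wwwroot
Get-ChildItem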



6. The result

You might have seen those default HTM and PNG files that come with IIS in the previous remote output. When a correctly prepared Nano Server starts, IIS is also up and running, showing the default website.


From now on I can back up, restore and duplicate an instance that has IIS running straight away and is remotely manageable via PowerShell. I can use it as a base image for further experiments without impacting the performance of the host machine.
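
All of that is scriptable too - a sketch, assuming the VM is named NanoIIS and the backup path is an example:

Checkpoint-VM -Name "NanoIIS" -SnapshotName "clean-iis"                   # snapshot before experiments
Restore-VMSnapshot -VMName "NanoIIS" -Name "clean-iis" -Confirm:$false    # roll back to it
Export-VM -Name "NanoIIS" -Path "D:\VM-Backups"                           # full backup for later duplication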

Please bear in mind that Nano Server does not support the full .NET Framework, so you can only run .NET Core and static websites from such an IIS instance. If you need the full .NET Framework - use a similar approach with Windows Server Core.

Hope you find this useful!

Starting with Docker and Sitecore

There has been much buzz recently around containers as a technology, and Docker in particular, especially now that more and more community efforts focus on Docker in conjunction with Sitecore. Plenty of articles explain how it works at the very top level and what the benefits are, but very rarely do they give precise guidance. As an ultimate beginner myself once, I know how important it is to get a quick start - a minimal positive experience as a starting point for further development. This post is exactly about that: getting the bare minimum of Sitecore running in Docker.

Content

  1. Terminology
  2. Installing Docker
  3. Docker registry
  4. Preparing images
  5. Building images
  6. Running containers
  7. Stopping and clean-up
  8. Afterthoughts

1. Terminology

  • Docker image - a blueprint for creating containers; it is what you pull from a remote registry
  • Docker container - an instance of a specific image; you run and work with containers, not images
  • Docker repository - a logical unit containing one or many built images (specified by tags)
  • Docker registry - works like a remote git repo; it's a hosted solution you push your built images into

There is plenty more terminology, but these are the essentials for the demo below.

2. Installing Docker

If you have it up and running, you may skip to the next part.

In order to operate the repository for this walk-through, you need Windows 10 x64 with at least build 1809.
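
You can quickly verify your build from PowerShell (the ReleaseId value comes straight from the registry):

(Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion").ReleaseId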

The simplest way is to install it from Chocolatey gallery:

cinst docker-desktop

Missing something from your host OS installation? Docker will manage that itself!

Once done, you'll need to pick a mode. Docker for Windows works in either Windows or Linux mode at any given time - which means you cannot mix types of containers.

One of the biggest issues at the moment is the size of Windows base images - the minimal Nano Server with almost everything cut off is already 0.5GB, and Server Core (which does not have a UI either - just a console) goes up to 4GB. That's too much compared to minimal Linux images starting from as little as 5 megs. That's why it may seem very attractive to run Solr from a Linux image (as both Solr and the Java it requires are cross-platform, and ready-to-use images exist), and the same goes for MS SQL Server, which has also been ported to Linux and has images available.

Until very recently the short answer was no - one could manage only a single type of container at a time (an already-running Linux container will keep running unmanageable, and there are also a few workarounds to make them work in parallel, but that's out of scope for now). As of April 2019, however, it is doable (Linux containers from Windows mode): I managed to combine NGINX on Linux with IIS on Windows.

Switching the mode from the UI is done by right-clicking the Docker icon in the system tray:
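
If you prefer the console, the same switch can be scripted via DockerCli (assuming the default Docker Desktop installation path):

& "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon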


3. Docker registry

What you should do next is choose a Docker registry. Docker Hub is probably the first option for any Docker beginner.

Docker Hub, however, allows only one private repository for free. You need to ensure all your repositories are private: the images you're building will contain your license file, and having them publicly accessible would also be treated as distributing Sitecore binaries on your own, which you're not allowed to do - only Sitecore can distribute them publicly.

Alternatively, you may consider the Canister project - they give up to 20 private repositories for free.

Pluralsight has a course on how to implement your own self-hosted docker registry.

Even more surprisingly, Docker itself provides an image with a Docker Registry for storing and distributing your own images.


4. Preparing images

Now let's clone Sitecore images repository from GitHub - https://github.com/sitecoreops/sitecore-images

If you don't have git installed, use the Chocolatey tool already familiar from the previous steps: cinst git (once complete, you'll need to reopen the console window so that the PATH variable gets updated).
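
With git in place, cloning is straightforward:

git clone https://github.com/sitecoreops/sitecore-images.git
cd sitecore-images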

To keep things minimal, I go to the sitecore-images\images folder and delete everything unwanted - for this demo I keep only the 9.1.1 images and sitecore-openjdk, which is required for Solr:

The images folder contains instructions on how to build your new Sitecore images. As input they require the Sitecore installers and a license file, so put them into this folder:

And last but not least, create a build.ps1 PowerShell script.

Important: do not use an email address as the username - it's not your email (in my case the username is martinmiles). From what I've heard, many people find this confusing and wonder why they are getting errors.

This is my build script; replace the username and password and you're good to go:

"YOUR_DOCKER_REGISTRY_PASSWORD" | docker login --username martinmiles --password-stdin

# Load module
Import-Module (Join-Path $PSScriptRoot "\modules\SitecoreImageBuilder") -Force

# Build and push
SitecoreImageBuilder\Invoke-Build `
    -Path (Join-Path $PSScriptRoot "\images") `
    -InstallSourcePath "c:\Docker\Install\9.1.1" `
    -Registry "martinmiles" `
    -Tags "*" `
    -PushMode "WhenChanged"


5. Building images

Run the build script. If you receive security errors, you may need to change the execution policy before running the build:

set-executionpolicy unrestricted

Finally, you get your base images downloading and the build process working:

As I said, the script pulls all the base images and builds everything as scripted. Please be patient, as it may take a while. Once built, your images are pushed to the registry. Here's what I finally got - 15 images built and pushed to Docker Hub. Again, please pay attention to the Private badge next to each repository:

Docker Hub has a corresponding setting for defining privacy defaults:


6. Running containers

The images are built and pushed to the registry, so we are OK to run them now. Navigate to the tests\9.1.1 rev. 002459\ltsc2019 folder, where you'll see two docker-compose files - one for the XM topology and another for XP. Simply put, Docker Compose is a configuration for running multiple containers together, defining a common virtual infrastructure; it is written in YAML format.

Since we are going the simplest route, we'll stick with the XM topology, but the same principle works well for anything else.

Rename docker-compose.xm.yml to docker-compose.yml and open it in the editor.
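
The rename itself is a console one-liner:

Rename-Item .\docker-compose.xm.yml docker-compose.yml

What you see inside is declarative YAML syntax describing how the containers start and interact with each other: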

version: '2.4'

services:

  sql:
    image: sitecore-xm1-sqldev:9.1.1-windowsservercore-ltsc2019
    volumes:
      - .\data\sql:C:\Data
    mem_limit: 2GB
    isolation: hyperv
    ports:
      - "44010:1433"

  solr:
    image: sitecore-xm1-solr:9.1.1-nanoserver-1809
    volumes:
      - .\data\solr:C:\Data
    mem_limit: 1GB
    isolation: hyperv
    ports:
      - "44011:8983"

  cd:
    image: sitecore-xm1-cd:9.1.1-windowsservercore-ltsc2019
    volumes:
      - .\data\cd:C:\inetpub\sc\App_Data\logs
    isolation: hyperv
    ports:
      - "44002:80"
    links:
      - sql
      - solr

  cm:
    image: sitecore-xm1-cm:9.1.1-windowsservercore-ltsc2019
    volumes:
      - .\data\cm:C:\inetpub\sc\App_Data\logs
    isolation: hyperv
    ports:
      - "44001:80"
    links:
      - sql
      - solr

If your docker-compose file has isolation set to process, please change it to hyperv (this is mandatory for hosts on Windows 10, while on Windows Server Docker can also run containers natively as processes). In hyperv mode each container runs inside a lightweight hypervisor rather than as a naked process next to native Windows processes, which prevents memory allocation errors such as PAGE_FAULT_IN_NONPAGED_AREA and TERMINAL_SERVER_DRIVER_MADE_INCORRECT_MEMORY_REFERENCE.

Notice the data folder? This is how volumes work in Docker. All these folders within data are created on your host OS file system - upon creation, a folder from the container is mapped to a folder on the host system, and once the container terminates, the data remains persistent on the host drive.

For example, SQL Server running in Docker can place and reference its database files (*.mdf and *.ldf) on an external volume in such a manner that the databases actually live on the host OS and are not re-created on each container run.

My data folder already has subfolders mapped from the various roles' containers run on previous executions (yours will be blank before the first run):

Just out of curiosity, below is an example of what you can find within the cm folder - looks familiar, right?


Anyway, we are ready to run docker-compose:

docker-compose up

You'll then see 4 containers being created; the Solr container then starts doing its job, producing plenty of output:

In a minute you'll be able to use these containers. In order to log into Sitecore, one needs to know the IP address of the container running a particular role - in our case the cm container. Every container has a hash value which serves as its identifier, so with the docker ps command you can list all currently running containers, get the hash of cm, and execute the ipconfig command within the context of that cm container (so that ipconfig runs inside it):
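
For example (CM_CONTAINER_ID stands for the hash that docker ps prints for the cm container):

docker ps
docker exec CM_CONTAINER_ID ipconfig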

Now I can browse 172.22.32.254/sitecore in order to log into the CMS:

What else can you do?

With Docker you may also execute commands in interactive mode (sample) with the -it switch, so you can do things such as deploying your code there (it is always good to deploy on top of a clean Sitecore instance). Here's how to enter an interactive session with a remote command prompt:

docker exec -it CONTAINER_HASH cmd
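
Copying files into a container can also be scripted with docker cp - a sketch, assuming your build output sits in .\dist and the webroot is C:\inetpub\sc as in the compose file above (note that with hyperv isolation the container may need to be stopped first):

docker cp .\dist\. CONTAINER_HASH:C:\inetpub\sc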

You may go further with folder mappings using volumes. Running the XP topology offers an even more interesting, yet safe, playground for experiments.

Building other versions of Sitecore enables regression-testing your code against legacy systems - always quick and always on a clean instance! Going further, you may use it for development with only Visual Studio running on the host machine - no IIS and no SQL Server installed - publishing from VS directly into Docker. Plenty of other scenarios are possible - it's up to you to choose.


7. Stopping and clean-up

Stopping containers occurs in a similar way:

docker-compose down

After it finishes, you won't see any of the containers running when executing:

docker ps

You'll still be able to see the existing images on the system and the space they occupy:

docker image ls

Finally, after playing around, you may want to clean up your drive, having noticed there's way less free disk space now. I want to warn you against one more common mistake - don't delete container data manually! If you navigate to the c:\ProgramData\Docker\windowsfilter folder, you can see plenty of them:

These are not container folders - they are symlinks (references) to Windows system resource folders, and deleting data through these symlinks actually deletes resources from your host OS, bringing it to a sorry state. Instead, use the command:

docker system prune -a

This gets rid of all the images and containers from your host system correctly and safely.


8. Afterthoughts

Docker is a very strong and flexible tool, and it is great for DevOps purposes. I personally find using it for production questionable - that may be fine with Linux containers, but as for Windows... I'd rather opt out for now, though I am aware of people doing it.

Proper use of Docker will definitely improve your processes, especially when combining it with other means of virtualisation. Containers may take a while to get into properly, but after getting your hands dirty you'll have a cookbook of Docker recipes for plenty of day-to-day tasks.

As for the Sitecore world, I understand it is all only just starting, but Docker with Sitecore becomes ever more inevitable as Sitecore drills deeper into microservices. Replacing Solr and SQL Server with Linux-powered images is only a matter of time, and what I am anticipating most is XP and XC finally running together in Docker, fronted by Identity Server (ideally also on Linux), just moments after calling docker-compose. Fingers crossed for that.

Hope this material serves you as a great starting point for containers and Docker!

Staying productive on Sitecore development

Always having productivity and effectiveness as the major criteria for measuring my work, I have identified most of the time-wasters and came up with a list of things that slow down my development process. It is important to distinguish productivity from performance - the first applies to your personal bottlenecks, while the second applies to your solution or configuration. Performance tuning has been covered by numerous blog posts so far, so I will mention only the things affecting my personal productivity. I am also not going to cover things like CI / CD and Application Insights / Kudu in this post; while all of the above are proven, great tools, they are not related to pure development productivity and tend more towards DevOps.

I am sharing this list with you, accompanied by some improvements and tips & tricks that can help decrease time to market for your products. Although it is Sitecore-focused, there are more or less generic recommendations at the bottom as well.

Content

  1. Sitecore Productivity
  2. Software
  3. Hardware
  4. Organizational


1. SITECORE PRODUCTIVITY

If you are on Sitecore 9.1 or later, use the XM topology for starting up, prototyping and coming up with an early PoC or MVP. The XM topology is now shipped with all the analytics configs, DLLs and the rest of the unused stuff physically cut out of the provisioned system, resulting in quicker operational times. I assume you are very unlikely to need analytics features at early development stages; however, please be aware of the personalization limitations.


If you are using XP - you may disable EXM unless you develop for it:

<appSettings>
    <add key="exmEnabled:define" value="no"/>
</appSettings>


Use the trick of cutting out unwanted languages from the core database. Do you really need all these languages for the Sitecore interface? Sitecore is built in such a way that it uses Sitecore items for translating itself, which creates unwanted overhead and leads to performance losses. The items to delete are:

/sitecore/system/Languages/da
/sitecore/system/Languages/de-DE
/sitecore/system/Languages/ja-JP
/sitecore/system/Languages/zh-CN

Be careful to avoid deleting English by mistake, as you'd then have to reinstall your Sitecore instance. By default these language items are greyed out, and the message says "You cannot edit this item because it is protected", so you need to unprotect them first. You'll be asked twice for confirmation:

Click OK and that's it! Of course, you can do things quicker and unobtrusively with Sitecore PowerShell:

cd core:/
Get-Item "/sitecore/system/Languages/da" | remove-item
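
Or remove all four languages in one go (a sketch over the same core: provider):

cd core:/
"da","de-DE","ja-JP","zh-CN" | ForEach-Object { Get-Item "/sitecore/system/Languages/$_" | Remove-Item }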


Get rid of the Notification table. This table is known for supporting clones, but you don't need it unless you're actually using that functionality. In that case, you can at least remove it from the web database, as clones only work on CM, helping editors manage the numerous clones of a specific item being modified before it gets published. There was also an alternative solution from the Sitecore Knowledge Base.

Anyway, disabling clones is as simple as the setting:

<setting name="ItemCloning.Enabled" value="false"/>


Disable the Links Database (it's a table, in fact) on the CD server. It is normally used for identifying links between items, but there's no need for it on the web database, where items turn into URLs.


Publishing productivity tips:

  • May sound obvious but publish only what you've changed and rebuild only what you need.
  • Consider using Publishing Service - that's really quick and saves batches directly into the database.
  • Or even better - run it encapsulated in Docker or at least in a VM, so that you can reference it from any of your dev environments.
  • Running Sitecore in Live Mode vs. the default publishing mode may save you decent time on publishing. For the lucky ones developing with SXA it is way simpler to switch to using both the master database and its index: just select master in the Database field of the /sitecore/content/Tenant/Site/Settings/Site Grouping/Site item - no rebuild, restart or "re-whatever" required.


Use virtual machines with snapshots and/or Docker. You may consider the nice triple combo of Hyper-V - Vagrant - Boxstarter. Wisely configured and used, it saves plenty of time on switching between VMs, restoring, and experimenting with code - in other words, on changing state, which in the Sitecore world is an expensive operation. You may also run an entire farm of VMs configured together, partly or entirely remotely. Microsoft even gives away the totally free Hyper-V Server to manage your VMs.

As for Docker, you may use it as a non-production unit of deployment - it can save plenty of time in certain cases, for example when Sitecore-agnostic but very good front-end developers work on a non-JSS website; I want them to have their own temporary copy of a Sitecore instance without all the setup mess - one they "cannot break".


Fight instance cold starts that happen after you change config or DLLs! There are several things you may do in order to improve your development environment:

  • Consider switching on <compilation optimizeCompilations="true">, but before that make sure you understand what dynamic ASP.NET compilation is and how it works. This is the biggest save for cold starts.
  • Tune the prefetch cache for the master database down to the minimum.
  • Disable content testing (Sitecore.ContentTesting.config) unless you actually use it.
  • Not a silver bullet, but when starting a new project, why not consider working with SXA or, even better, with JSS? While the first reduces the number of cold starts several times over, the second eliminates them entirely!
  • Change the ListManagement agent (used mostly by EXM) to run every hour rather than every 10 seconds:
    <scheduling>
        <agent type="Sitecore.ListManagement.Operations.UpdateListOperationsAgent, Sitecore.ListManagement">
            <patch:attribute name="interval">01:00:00</patch:attribute>
        </agent>
    </scheduling>
  • Do the same frequency change for IndexingStateSwitcher - from 10 seconds to, let's say, 1 hour:
    <scheduling>
        <agent type="Sitecore.ContentSearch.SolrProvider.Agents.IndexingStateSwitcher, Sitecore.ContentSearch.SolrProvider">
            <patch:attribute name="interval">01:00:00</patch:attribute>
        </agent>
    </scheduling>
  • Also, turn off rebuilding indexes automatically:
    <scheduling>
        <agent name="Core_Database_Agent">
            <patch:attribute name="interval">00:00:00</patch:attribute>
        </agent>
        <agent name="Master_Database_Agent">
            <patch:attribute name="interval">00:00:00</patch:attribute>
        </agent>
    </scheduling>
  • Processors that aren't used during development can be removed too:
    <pipelines>
        <initialize>
            <processor type="Sitecore.Pipelines.Loader.ShowVersion, Sitecore.Kernel"><patch:delete /></processor>
            <processor type="Sitecore.Pipelines.Loader.ShowHistory, Sitecore.Kernel"><patch:delete /></processor>
            <processor type="Sitecore.Analytics.Pipelines.Initialize.ShowXdbInfo, Sitecore.Analytics"><patch:delete /></processor>
            <processor type="Sitecore.Pipelines.Loader.DumpConfigurationFiles, Sitecore.Kernel"><patch:delete /></processor>
        </initialize>
    </pipelines>
  • Last but not least, since cold starts are inevitable, I spend that time usefully - looking through emails, planning, scoping out, or... just visiting the kitchen for a fresh cup of green tea.

Content Editor

  • The Favorites tab under the Navigate menu allows you to add items for quick access. Once added, an item is stored under /sitecore/content/Documents and settings/<domain>_<username>/Favorites in the core database.
  • Similarly to the previous one, did you know that you may create Sitecore Desktop shortcuts, the same way as you do on the Windows desktop? Use this feature to speed up access to your frequent items.
  • For the LaunchPad, you can add some tools visible to admins only, like Unicorn, ShowConfig, File Manager, etc. (package).
  • Pre-load tabs in Content Editor. Seriously, I noticed that plenty, if not the majority, of folks work in Content Editor in just one window! Navigating the tree structure is, for me, an insane loss of productivity, while switching between opened windows in the Sitecore desktop has zero overhead. For example, working on an SXA website I keep the following opened and pre-loaded:
    1. Home page
    2. Data folder 
    3. Partial designs (if I currently work with page structure)
    4. Rendering variants
    5. Renderings
    6. Media Library
    7. Templates
    8. PowerShell ISE
    Once again, these (except the last one) are site-related items that normally sit deeper in SXA (i.e. /sitecore/templates/Project/Tenant/Site vs. /sitecore/templates). This trick saves me only seconds, but it does so constantly! So normally it looks like this for me:

    You can even automate that - I blogged about the automation approach in this post.
  • The Expand / Collapse buttons are especially helpful when working on large Helix-based solutions, so that you can quickly collapse all sections and open only the desired one.
  • Remove unused Content Editor stuff from Application Options (under the hamburger menu); also, unchecking View -> Standard fields can make Content Editor up to twice as fast.
  • Limit the number of item versions to 10.
  • Setting the Field Section Sort Order will also save time by keeping the most important sections at the top.


2. SOFTWARE

Visual Studio, VS Code and the most useful Visual Studio extensions - I can mention a few of them:

  • ReSharper is the king of all extensions and worth every dollar spent. VS 2019 adopts some of its features but is still far from ReSharper's functionality.
  • The Attach to IIS extension adds Attach to IIS into the VS Debug menu, so that you can also assign hotkeys to debug your Sitecore.
  • Use snippets instead of manually typing the code (one, two, three - plenty of them), or make your own.
  • T4 template code generators (use them in conjunction with Glass Mapper).

It's important to have some sort of master productivity tool. For example, I use Total Commander, which is far more than just a great two-panel file manager - I turned it into a power pack that includes:

  • a diff tool (I configured Beyond Compare to work with Total Commander, though it comes with a fair free built-in tool as well)
  • a built-in FTP client with encrypted password storage
  • hotkeys for almost everything you can do
  • rapid access to the most important folders you define (and yes - hotkeys for that)
  • true and reliable search by content and regex, also inside archives (try comparing that with Windows search)
  • overriding system file associations and assigning your tool of choice, with parameters
  • TeraCopy integration, so I have the best and fastest copying tool as well
  • an integrated PowerShell console, which saves a lot of time by opening it in the right context
  • plenty of plugins and much more useful stuff that I struggle to remember at the moment
That's why I am quite surprised to see the majority of developers still using classic Windows Explorer - it is such a bottleneck (in my humble opinion). This tool alone saves me about an hour daily!

Since I've just mentioned PowerShell - nowadays it helps you automate almost everything. This includes:

  • managing Windows Server, all its dependencies, and all types of activities on the filesystem, registry, MMC, etc.
  • installing, modifying and deploying Sitecore and all its dependencies
  • building images and starting, stopping and deploying containers
  • doing all the same with Hyper-V virtual machines (and I assume that should also be possible with VMs from other vendors)
  • all types of management and configuration of IIS and SQL
It is probably harder to imagine what is not doable with PowerShell. And of course, investing time in mastering PowerShell brings you even more benefits when using Sitecore PowerShell Extensions. Combining both, you can benefit from Sitecore PowerShell Remoting, accessing Sitecore assets and resources from outside of your instance.
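
For example, with the SPE Remoting module you can read items without opening a browser at all - a sketch, assuming remoting is enabled on the instance, and the hostname and credentials are hypothetical:

Import-Module SPE
$session = New-ScriptSession -Username "admin" -Password "b" -ConnectionUri "https://sc911.local"
Invoke-RemoteScript -Session $session -ScriptBlock { Get-Item "master:\content\Home" }
Stop-ScriptSession -Session $session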

Other tools that significantly save my day-to-day life:

  • Chocolatey. I put it in first place intentionally - it is a console package manager for Windows that allows you to install almost any software from a Windows console (not even PowerShell!)
  • LockHunter helps me find out which process has locked folders/files and force-release them. The biggest abusers typically are IIS, console windows left open and, of course, our beloved Visual Studio.
  • Slack became the most used tool for self-organized teams, especially nowadays with the growth of Agile-based approaches. With the ability to create channels on any aspect and great mobile clients, it helps thousands of distributed teams globally. When setting up CI / CD pipelines I usually configure build notifications to be sent into a dedicated channel. Slack is also a proven way to replace boring meetings.
  • dotPeek, a .NET decompiler, should be made mandatory for every Sitecore developer, since it's the most genuine way to learn how it all works internally in Sitecore.
  • Synergy helps me unite a few laptops (two running Windows and one macOS) into one large multi-screen environment with keyboard, mouse and even clipboard shared across the different OSes.
  • Postman and Fiddler are tools for creating RESTful web requests and intercepting others, even those sent over HTTPS.
  • smtp4dev becomes indispensable when you start developing emailing functionality. It intercepts your email sending attempts, grabs the messages and even puts them into your mail app. You don't need an SMTP server anymore!
  • pCloud - an expensive cloud storage, but worth every penny spent, with true Swiss quality. I got a lifetime subscription, including the crypto folder (which is truly crypto!). I am currently trying to entirely replace Dropbox with pCloud.
  • Telegram messenger (I give more explanation about it below)
  • the Jing screenshot tool, which goes far beyond the built-in Alt + PrtScn Windows functionality
  • an instance backup/restore tool
  • and one more triple combo - Evernote + Dropbox + 1Password - which saves me plenty of time on a daily basis.

Source control

I won't be unique in saying that I prefer git along with the Git Flow approach. The master branch is used only to keep primary releases; the develop branch is all developers' cumulative snapshot that is always deployable and used for CI / CD. Further down we have functional (feature and integration) branches. This approach also allows my teams to avoid serialization conflicts when doing large structural refactorings of Sitecore items.

For git I use three tools in parallel:

  • SourceTree is an excellent free tool for history visualization, branch tracking, etc. Unfortunately, it still has a buggy UI, especially after it was rewritten in WPF - sometimes it struggles to reappear from the minimized state, errors out on some repos with a long history and plenty of branches, and it is also quite slow to start up.
  • Tortoise Git I use for commits, historical comparison, the repo browser and a few more things, mostly due to my legacy habits from using SVN (Tortoise SVN, of course) 10-15 years ago. It also comes free of charge.
  • Console - for everything else.
The simplest way to get them is, as usual, from Chocolatey:
cinst sourcetree
cinst tortoisegit
cinst git.install

One of the positive habits I want to share is making a last-minute check of all the items and code immediately before committing. When writing code, you're deeply focused on the specific functionality you're building, while a pre-commit check shows you the overall picture. I also check for anything silly, like potential null references, badly named variables, and other high-level but important stuff.

Another productivity improvement I use when working with git is creating aliases. This allows assigning short, easy-to-remember aliases to long commands with parameters. This is how I assign an alias:
git config --global alias.lga "log --decorate --graph --oneline --all"
Now I can call git lga and it gives me the same result as calling the long version git log --decorate --graph --oneline --all:


The browser today is still our primary target application, so it's also a point for tuning productivity:

  • Organize your bookmarks properly into folders and subfolders. Once done, they will sync across all the machines you've logged into.
  • User scripts with Greasemonkey (Firefox) or Tampermonkey (Chrome) allow improving the functionality of many websites where the authors have intentionally or accidentally neglected the UI / UX. Plenty of ready-to-use scripts are available through GitHub and custom repositories.
  • Invest time in mastering DevTools - it is the first-class tool for any web developer and will start saving you time and effort quite soon, if not immediately. Pluralsight has a few useful courses for that (one, two).
  • Use some other helpful Chrome extensions: Sitecore Developer Tool, Sitecore Extensions, Grammarly, EditThisCookie, AdBlock, OneClick URL Shortener.
  • Browser hotkeys are nowadays mostly universal across browsers. I promise you'll come across at least one big finding after going through this list of keyboard shortcuts, and it is not even the full one; for the full list, please refer to your specific browser's documentation.


3. HARDWARE

Hardware is crucial for productivity. For my setup I use:
  • A Dell XPS 15" with 32 GB RAM and the fast Samsung PM961 SSD. I got 1 TB of storage, but even that volume gets tight due to numerous VMs and snapshots. It is an expensive laptop, but you get what you pay for - a frameless 4K touch screen and top spec: as of 2019 you can get a version with an i9-8950HK processor and a 2 TB SSD.
  • The Craft keyboard and MX Anywhere 2S mouse - both top-spec input devices from Logitech - work perfectly together through the same receiver (and can hot-switch between three of them) and are configured through the same software.
  • I normally use 3 monitors (one of which is the laptop itself). If a monitor has a pivot function (as in the image below), that's an excellent bonus to productivity - such a vertical layout is excellent for code. The left-hand monitor is usually used for the browser, always with Sitecore and/or the live website under development. The laptop's monitor is for everything else - file manager, configuration, notes, Slack, etc.
This is how it works all together:



4. ORGANIZATIONAL

Approaching technical debt

Technical debt is a deliberate decision to implement a not-the-best solution or write not-the-best code to release software faster. Taking on some technical debt is inevitable and can increase speed in software development in the short run. However, in the long run, it contributes to system complexity, which slows the developers down. Non-programmers often underestimate the loss of productivity and are tempted to always move forward, and that becomes an issue. But if refactoring is never part of the priorities, it will not only impact productivity but also product quality. 

Someone wise came up with an approach to this managerial issue - the image below shows how to correctly explain technical debt to managers:


General productivity thoughts

In general, productivity is a combination of three parameters: time, energy spent on achieving a goal, and level of concentration. My advice follows below:

  • Try staying in The Flow - for developers it is the state when they feel the most focused and productive, and most of their work is done in this state. For most developers productivity follows Pareto's law, with 20% of the time delivering 80% of the result and the remaining 80% of the time bringing the remaining 20%.
  • Minimize distractions from the open space - headphones on! By the way, I can recommend Rainy Mood, my recent finding. Every distraction switches you out of context, and switching contexts is an expensive activity in terms of time and effort.
  • Avoid meetings where it makes sense to. Only 30-40 percent of meetings are important; the rest invite you to participate "just in case" (they may need to ask something of you, and sometimes they do). But at what cost? A single meeting can blow a whole afternoon by breaking it into two pieces, each too small to do anything hard in - again due to context switching.
  • In addition, it is highly demotivating when management spends your time so loosely, especially when the timeframes are tight and you have to work overtime to meet the deadlines. Just to highlight my point - some meetings are useful and very important, especially at the planning stage, but unfortunately people overuse meetings.
  • Because of the above, a working week of 4 days x 10 hours is way more productive than 5 x 8, despite the same hours worked. The 5 x 8 week carries the hidden costs of context switching, plus it adds one more commute round trip (3 hours on average for me).
  • A general approach would be identifying what your actual biggest bottlenecks are; the Theory of Constraints is something that may come to your help. Also, anything outside of your job description (along with learning new stuff) should by definition be treated as a non-productive waste of your time.
  • Organize your own notes / knowledge base / to-do lists / planners with the quickest access for both reading and writing. These can be any tools of your choice, as long as they give you immediate (and offline) access to your important information. Surprisingly, Telegram became such a tool for me, despite being primarily a messenger, thanks to its built-in cloud, offline access, cross-platform sync, and immediacy.
  • Everything you come across that is worth checking further (but not at this very moment) should be recorded in your "hot" operational notes, in order to avoid switching context and to make sure your brain's capacity is not consumed by remembering stuff rather than focusing on what's most important.
  • Identify all your most frequent actions across the system, IDE and most-used software and find keyboard shortcut combinations for them, or assign your own.
  • Finally, I'd recommend reading this list of tactics and hacks - you'll likely pick something out of there.


That's what has come into my head for now. What productivity tips do you have?

Yet another SXA rendering variant - Script Reference Tag coming to improve your SEO

Note! The code used in this post can be cloned from the GitHub repository: SXA.Foundation.Variants

I previously wrote a post about a rendering variant holding inline JavaScript, which one might need when adding some basic JS functionality to components.

This is useful at early stages of page development, when you have no possibility or capacity to recompile the entire frontend and update the Creative Exchange package in your solution just because of adding or changing a few lines; however, this approach is not SEO-friendly, as search engines penalize sites for excessive inline scripts and styles. So treat it as technical debt that should be addressed before going to production.

The very minimal change one can make is to replace the inline script with a reference to that same script stored in the Media Library - the same thing SXA does itself with themes. The rest of this post reveals the approach.

Firstly, create a template:

Then reference the given template IDs in a Constants.cs file:

using Sitecore.Data;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public static partial class Constants
    {
        public static partial class RenderingVariants
        {
            public static partial class Templates
            {
                public static ID ScriptReferenceTag { get; } = new ID("{0EC036D7-384D-4CF6-AD1F-FE949E96126A}");
            }

            public static partial class Fields
            {
                public static class ScriptReferenceTag
                {
                    public static ID ScriptMedia { get; } = new ID("{F1497AF9-7DD3-4B38-BE22-5F092007F929}");
                }
            }
        }
    }
}
Next, the model class, which has just one property storing the GUID of the referenced script from the Media Library:
using Sitecore.Data.Items;
using Sitecore.XA.Foundation.RenderingVariants.Fields;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class VariantScriptReferenceTag : RenderingVariantFieldBase
    {
        public string ScriptMedia { get; set; }

        public VariantScriptReferenceTag(Item variantItem) : base(variantItem)
        {
        }
    }
}
Parser:
using Sitecore.Data;
using Sitecore.XA.Foundation.Variants.Abstractions.Pipelines.ParseVariantFields;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class ParseScriptReferenceTag : ParseVariantFieldProcessor
    {
        public override ID SupportedTemplateId => Constants.RenderingVariants.Templates.ScriptReferenceTag;

        public override void TranslateField(ParseVariantFieldArgs args)
        {
            // translate the variant definition item into a field object that the renderer understands
            var variantHtmlTag = new VariantScriptReferenceTag(args.VariantItem) { Tag = "script" };
            variantHtmlTag.ScriptMedia = args.VariantItem[Constants.RenderingVariants.Fields.ScriptReferenceTag.ScriptMedia];
            args.TranslatedField = variantHtmlTag;
        }
    }
}
Renderer:
using System;
using System.Web.UI.HtmlControls;
using Sitecore;
using Sitecore.Data;
using Sitecore.Resources.Media;
using Sitecore.XA.Foundation.RenderingVariants.Pipelines.RenderVariantField;
using Sitecore.XA.Foundation.Variants.Abstractions.Pipelines.RenderVariantField;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class RenderScriptReferenceTag : RenderVariantField
    {
        public override Type SupportedType => typeof(VariantScriptReferenceTag);

        public override void RenderField(RenderVariantFieldArgs args)
        {
            var variantField = args.VariantField as VariantScriptReferenceTag;
            if (variantField == null)
            {
                return;
            }

            var id = variantField.ScriptMedia;
            if (string.IsNullOrWhiteSpace(id))
            {
                return;
            }

            // resolve the referenced script item from the Media Library
            var scriptItem = Context.Database.GetItem(new ID(id));
            if (scriptItem == null)
            {
                return;
            }

            var url = MediaManager.GetMediaUrl(scriptItem);

            // produces: <script type="text/javascript" defer src="..."></script>
            var tag = new HtmlGenericControl(variantField.Tag);
            tag.Attributes.Add("type", "text/javascript");
            tag.Attributes.Add("defer", String.Empty);
            tag.Attributes.Add("src", url);

            args.ResultControl = tag;
            args.Result = RenderControl(args.ResultControl);
        }
    }
}

Example of usage:

This rendering variant field generates the following output:

<script src="/-/media/Project/Platform/Other/Scripts/Header-script.js" type="text/javascript" defer="" ></script>

This approach works perfectly well. But once again for a second, have you ever considered moving such scripts into a Theme along with related component (if any) instead of leaving it like that? Hope this helps!