Experience Sitecore! | Martin Miles on Sitecore


Starting with Docker and Sitecore

There has been a lot of buzz recently around containers as a technology, and Docker in particular, especially now that more and more community efforts focus on Docker in conjunction with Sitecore. Plenty of articles explain how it works at a very high level and what the benefits are, but very few give precise guidance. Having been an absolute beginner myself, I know how important a quick start is - a minimal positive experience as a starting point for further development. This blog post is exactly about how to get a bare-minimum Sitecore running in Docker.

Content

  1. Terminology
  2. Installing Docker
  3. Docker registry
  4. Preparing images
  5. Building images
  6. Running containers
  7. Stopping and clean-up
  8. Afterthoughts

1. Terminology

  • Docker image - a blueprint for creating containers; it is what you pull from a remote registry
  • Docker container - a running instance of a specific image; you run and work with containers, not images
  • Docker repository - a logical unit containing one or many built images (distinguished by tags)
  • Docker registry - works like a remote git repo; a hosted service you push your built images to

There is plenty more terminology, but these are the essentials for the demo below.
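For orientation, the day-to-day lifecycle built on these four terms looks roughly like this (an illustrative sketch - it assumes a running Docker daemon and a registry account; myuser/demo is a made-up repository name):

```
docker pull hello-world                   # image: pull a blueprint from the remote registry
docker run hello-world                    # container: a running instance of that image
docker tag hello-world myuser/demo:v1     # repository "myuser/demo" with tag "v1"
docker push myuser/demo:v1                # push the built image back to the registry
```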

2. Installing Docker

If you have it up and running, you may skip to the next part.

In order to follow this walk-through, you need Windows 10 x64, build 1809 or later.

The simplest way is to install it from Chocolatey gallery:

cinst docker-desktop

Missing something from your host OS installation? Docker will manage that itself!

Once done, you'll need to pick a mode. Docker for Windows works in either Windows or Linux mode at any given time - which means you cannot mix container types.

One of the biggest issues at the moment is the size of Windows base images - the minimal Nano Server, with almost everything cut out, is already 0.5 GB, and Server Core (which also has no UI - just a console) goes up to 4 GB. That's a lot compared to minimal Linux images, which start from as little as 5 MB. That's why it may seem very attractive to run Solr from a Linux image (both Solr and the Java it requires are cross-platform, and ready-to-use images exist), and the same goes for MS SQL Server, which has also been ported to Linux and has images available.

Until very recently the short answer was no - one could manage only a single type of container at a time (an already running Linux container would keep running, just unmanageable; there are also a few workarounds for running them in parallel, but that's out of scope for now). As of April 2019, however, it is doable (in Linux mode on Windows) - I managed to combine NGINX on Linux with IIS on Windows.

Switching modes from the UI is done via the Docker icon's context menu in the system tray:


3. Docker registry

The next thing you need is a docker registry. Docker Hub is probably the first option for any Docker beginner.

Docker Hub, however, allows only one private repository for free. You need to make sure all your repositories are private: the images you're building will contain your license file, and having them publicly accessible would also count as distributing Sitecore binaries on your own, which you're not allowed to do - only Sitecore can distribute them publicly.

Alternatively, you may consider the Canister project, which gives up to 20 private repositories for free.

Pluralsight has a course on how to implement your own self-hosted docker registry.

Even more conveniently, Docker itself provides a docker image with a Docker Registry for storing and distributing Docker images.
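Spinning that up is essentially a one-liner (an illustrative sketch - it assumes a running Docker daemon in Linux mode; my-image is a placeholder name):

```
docker run -d -p 5000:5000 --name registry registry:2   # start a local registry on port 5000
docker tag my-image localhost:5000/my-image             # address the image at the local registry
docker push localhost:5000/my-image                     # push there instead of Docker Hub
```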


4. Preparing images

Now let's clone the Sitecore images repository from GitHub - https://github.com/sitecoreops/sitecore-images

If you don't have git installed, use Chocolatey, already familiar from the previous steps: cinst git (once complete, you'll need to reopen the console window so that the PATH variable gets updated).

To keep things minimal, I go to the sitecore-images\images folder and delete everything unwanted - for this demo I keep only the 9.1.1 images and the sitecore-openjdk image required for Solr, removing the rest:

The images folder contains instructions on how to build your new Sitecore images. As input they require the Sitecore installers and a license file, so put them into this folder:

Last but not least, create a build.ps1 PowerShell script.

Important: do not use an email address as the username - your Docker username is not your email (in my case the username is martinmiles). From what I've heard, many people find this confusing and wonder why they are getting errors.

This is my build script; replace the username and password and you're good to go:

"YOUR_DOCKER_REGISTRY_PASSWORD" | docker login --username martinmiles --password-stdin

# Load module
Import-Module (Join-Path $PSScriptRoot "\modules\SitecoreImageBuilder") -Force

# Build and push
SitecoreImageBuilder\Invoke-Build `
    -Path (Join-Path $PSScriptRoot "\images") `
    -InstallSourcePath "c:\Docker\Install\9.1.1" `
    -Registry "martinmiles" `
    -Tags "*" `
    -PushMode "WhenChanged"


5. Building images

Run the build script. If you receive security errors, you may need to change the execution policy prior to running the build:

set-executionpolicy unrestricted

Finally, you'll see the base images downloading and the build process working:

As I said, the script pulls all the base images and builds them as scripted. Please be patient, as it may take a while. Once built, your images will be pushed to the registry. Here's what I finally got - 15 images built and pushed to Docker Hub. Again, please pay attention to the Private badge next to each repository:

Docker Hub has a corresponding setting for defining privacy defaults:


6. Running containers

Images are built and pushed to the registry, so we can run them now. Navigate to the tests\9.1.1 rev. 002459\ltsc2019 folder, where you'll see two docker-compose files - one for the XM topology and another for XP. Simply put, docker-compose is a configuration for running multiple containers together, defining common virtual infrastructure; it is written in YAML format.

Since we are going the simplest route, we'll stick with the XM topology, but the same principle works for anything else.

Rename docker-compose.xm.yml to docker-compose.yml and open it in an editor. What you see is declarative YAML syntax describing how the containers will start and interact with each other.

version: '2.4'

services:

  sql:
    image: sitecore-xm1-sqldev:9.1.1-windowsservercore-ltsc2019
    volumes:
      - .\data\sql:C:\Data
    mem_limit: 2GB
    isolation: hyperv
    ports:
      - "44010:1433"

  solr:
    image: sitecore-xm1-solr:9.1.1-nanoserver-1809
    volumes:
      - .\data\solr:C:\Data
    mem_limit: 1GB
    isolation: hyperv
    ports:
      - "44011:8983"

  cd:
    image: sitecore-xm1-cd:9.1.1-windowsservercore-ltsc2019
    volumes:
      - .\data\cd:C:\inetpub\sc\App_Data\logs
    isolation: hyperv
    ports:
      - "44002:80"
    links:
      - sql
      - solr

  cm:
    image: sitecore-xm1-cm:9.1.1-windowsservercore-ltsc2019
    volumes:
      - .\data\cm:C:\inetpub\sc\App_Data\logs
    isolation: hyperv
    ports:
      - "44001:80"
    links:
      - sql
      - solr

If your docker-compose has isolation set to process, please change it to hyperv (this is mandatory on Windows 10 hosts, while on Windows Server docker can also run processes natively). With Hyper-V isolation, container processes run inside a lightweight hypervisor rather than as naked processes next to native Windows processes, which prevents memory allocation errors such as PAGE_FAULT_IN_NONPAGED_AREA and TERMINAL_SERVER_DRIVER_MADE_INCORRECT_MEMORY_REFERENCE.

Notice the data folder? This is how volumes work in Docker. All the folders within data are created on your host OS file system - upon creation, a folder from the container is mapped to a folder on the host system, and once the container terminates, the data remains persistent on the host drive.

For example, SQL Server running in Docker can place and reference SQL database files (*.mdf and *.ldf) on an external volume, so that the databases actually exist on the host OS and are not re-created on each container run.

My data folder already has subfolders mapped from the various roles' containers during previous runs (yours will be blank before the first run):

Just out of curiosity, below is an example of what you can find within the cm folder - looks familiar, right?


Anyway, we are ready to run docker-compose:

docker-compose up

You'll then see 4 containers being created; the Solr container then starts doing its job, producing plenty of output:

In a minute you'll be able to use these containers. In order to log into Sitecore, you need to know the IP address of the container running a particular role - so we need to look at the cm container. Every container has a hash value which serves as its identifier; with the docker ps command you can list all currently running containers, get the hash of cm, and execute the ipconfig command within the context of that cm container (so that ipconfig runs inside it):

Now I can call 172.22.32.254/sitecore in order to log into the CMS:

What else can you do?

With Docker you may also execute commands in interactive mode with the -it switch, so you can do things such as deploying your code there (it is always good to deploy on top of a clean Sitecore instance). Here's how to enter an interactive session with a remote command prompt:

docker exec -it CONTAINER_HASH cmd

You may add more folder mappings using volumes. Running the XP topology offers an even more interesting but safe playground for experiments.

Building other versions of Sitecore allows regression-testing your code against legacy systems - always quick, and always on a clean instance! Going further, you may use it for development with only Visual Studio running on the host machine - no IIS and no SQL Server installed - publishing from VS directly into Docker. Plenty of other scenarios are possible - it's up to you to choose.


7. Stopping and clean-up

Stopping containers works in a similar way:

docker-compose down

After it finishes, you won't see any of the containers running when executing:

docker ps

You'll still be able to see the existing images on the system and the disk space they occupy:

docker image ls

Finally, after playing around, you may want to clean up your drive, having noticed there's way less free disk space now. I want to warn you against one more common mistake - don't delete container data manually! If you navigate to the c:\ProgramData\Docker\windowsfilter folder, you can see plenty of these folders:

These are not container folders - they are symlinks (references) to Windows system resource folders, and deleting data through these symlinks actually deletes resources from your host OS, leaving you in a sorry state. Instead, use the command:

docker system prune -a

This gets rid of all the unused images and stopped containers on your host system correctly and safely.


8. Afterthoughts

Docker is a very powerful and flexible tool, and it is great for DevOps purposes. I personally find using it for production questionable. That may be fine with Linux containers, but as for Windows... I'd rather opt out for now, though I am aware of people doing that.

Proper use of Docker will definitely improve your processes, especially when combined with other means of virtualisation. Containers may take you a while to get into properly, but after getting your hands dirty you'll have a cookbook of Docker recipes for plenty of day-to-day tasks.

As for the Sitecore world, I do understand it is all only starting, but Docker with Sitecore becomes ever more inevitable as Sitecore drills deeper into microservices. Replacing Solr and SQL Server with Linux-powered images is only a matter of time, and what I am anticipating most is XP and XC finally running together in Docker, fronted by the Identity Server (ideally also on Linux), just moments after calling docker-compose. Fingers crossed for that.

Hope this material serves you as a great starting point for containers and Docker!

Staying productive on Sitecore development

Always having productivity and effectiveness as the major criteria for measuring my work, I have identified most of the time-wasters and come up with a list of things that slow down my development process. It is important to distinguish productivity from performance - the first applies to your personal bottlenecks, while the second applies to your solution or configuration. Performance tuning has been covered by numerous blog posts, so I will only mention things affecting my personal productivity. I am also not going to cover things like CI/CD and Application Insights / Kudu in this post - while all of these are proven, great tools, they are not related to pure development productivity and tend more towards DevOps.

I am sharing this list with you, accompanied by some improvements and tips & tricks that can help decrease time to market for your products. Although it is Sitecore-focused, there are also more or less generic recommendations at the bottom.

Content

  1. Sitecore Productivity
  2. Software
  3. Hardware
  4. Organizational


1. SITECORE PRODUCTIVITY

If you are on Sitecore 9.1 or later - use the XM topology for starting up, prototyping and coming up with an early PoC or MVP. The XM topology is now shipped with all the analytics configs, DLLs and other unused stuff physically cut out of the provisioned system, resulting in quicker operational times. I assume you are very unlikely to need analytics features at the early development stages; however, please be aware of the personalization limitations.


If you are using XP, you may disable EXM unless you develop for it:

<appSettings>
    <add key="exmEnabled:define" value="no"/>
</appSettings>


Use the trick of cutting unwanted languages out of the core database. Do you really need all these languages for the Sitecore interface? Sitecore is built so that it uses Sitecore items for translating itself. That creates unwanted loops and leads to unwanted performance losses. The items to delete are:

/sitecore/system/Languages/da
/sitecore/system/Languages/de-DE
/sitecore/system/Languages/ja-JP
/sitecore/system/Languages/zh-CN

Be careful to avoid deleting English by mistake, as you'd then have to reinstall your Sitecore instance. By default these language items are greyed out with the message "You cannot edit this item because it is protected", so you need to unprotect them first. You'll be asked twice for confirmation:

Click OK and that's it! Of course, you can do things quicker and unobtrusively from Sitecore PowerShell:

cd core:/
Get-Item "/sitecore/system/Languages/da" | remove-item
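If you'd rather remove all four items in one go, the same idea can be sketched as a loop (this runs only in the Sitecore PowerShell Extensions console, not a regular PowerShell session; unprotect the items first, and never touch en):

```
cd core:/
Get-ChildItem "/sitecore/system/Languages" |
    Where-Object { $_.Name -in @("da", "de-DE", "ja-JP", "zh-CN") } |
    Remove-Item
```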


Get rid of the Notifications table. This table is known for supporting clones, but you don't need it unless you're actually using that functionality. Even then, you can at least remove it from the web database, as clones only work on CM, helping editors manage the numerous clones of a specific item being modified before it gets published. There was also an alternative solution from the Sitecore Knowledge Base.

In any case, disabling clones is as simple as a setting:

<setting name="ItemCloning.Enabled" value="false"/>


Disable the Links Database (it's a table, in fact) for the CD role. It is normally used for identifying links between items, but there's no need for it on the web database, where items turn into URLs.
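There is no single out-of-the-box switch for this, so treat the following as a hypothetical CD-only patch: it assumes you have written your own no-op class (MyProject.NullLinkDatabase is a made-up name) deriving from LinkDatabase with empty overrides:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <LinkDatabase>
      <!-- MyProject.NullLinkDatabase is a placeholder for your own no-op implementation -->
      <patch:attribute name="type">MyProject.NullLinkDatabase, MyProject</patch:attribute>
    </LinkDatabase>
  </sitecore>
</configuration>
```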


Publishing productivity tips:

  • It may sound obvious, but publish only what you've changed and rebuild only what you need.
  • Consider using the Publishing Service - it's really quick and saves batches directly into the database.
  • Or even better - run it encapsulated in Docker, or at least in a VM, so that you can reference it from any of your dev environments.
  • Running Sitecore in Live Mode instead of the default publishing mode may save you some time on publishing. For those lucky enough to be developing with SXA, switching to using both the master database and index is far simpler: just select master in the Database field of the /sitecore/content/Tenant/Site/Settings/Site Grouping/Site item - no rebuild, restart or "re-whatever" required.
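For a plain (non-SXA) site, switching to live mode boils down to pointing the site at master. A sketch of the patch, assuming the default site name website:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <sites>
      <site name="website">
        <patch:attribute name="database">master</patch:attribute>
      </site>
    </sites>
  </sitecore>
</configuration>
```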


Use virtual machines with snapshots and/or Docker. You may consider the nice triple combo of Hyper-V + Vagrant + Boxstarter. Configured and used wisely, it saves plenty of time on switching between VMs, restoring, and experimenting with the code - in other words, on changing state, which in the Sitecore world is an expensive operation. You may also run an entire farm of VMs configured together, partly or entirely remotely. Microsoft even gives away the totally free Hyper-V Server to manage your VMs.

As for Docker, you may use it as a non-production unit of deployment - it can save plenty of time in some cases, for example when Sitecore-agnostic but very good front-end developers work on a non-JSS website. I want them to have their own temporary copy of a Sitecore instance, without all the setup mess, which they "cannot break".


Fight instance cold starts, which happen after you change config or DLLs! There are several things you can do to improve your development environment:

  • Consider switching <compilation optimizeCompilations="true">, but first make sure you understand what dynamic ASP.NET compilation is and how it works. This is the biggest saving for cold starts.
  • Tune the prefetch cache for the master database down to the minimum.
  • Disable content testing via Sitecore.ContentTesting.config.
  • Not a silver bullet, but when starting a new project, why not consider working with SXA or, even better, with JSS? While the first reduces the number of cold starts several times over, the second eliminates them entirely!
  • Reduce the frequency of the ListManagement agent (used mostly by EXM) to run every hour rather than every 10 seconds:
    <scheduling>
        <agent type="Sitecore.ListManagement.Operations.UpdateListOperationsAgent, Sitecore.ListManagement">
            <patch:attribute name="interval">01:00:00</patch:attribute>
        </agent>
    </scheduling>
  • Do the same frequency change for IndexingStateSwitcher - from 10 seconds to, let's say, 1 hour:
    <scheduling>
        <agent type="Sitecore.ContentSearch.SolrProvider.Agents.IndexingStateSwitcher, Sitecore.ContentSearch.SolrProvider">
            <patch:attribute name="interval">01:00:00</patch:attribute>
        </agent>
    </scheduling>
  • Also, turn off rebuilding indexes automatically:
    <scheduling>
        <agent name="Core_Database_Agent">
            <patch:attribute name="interval">00:00:00</patch:attribute>
        </agent>
        <agent name="Master_Database_Agent">
            <patch:attribute name="interval">00:00:00</patch:attribute>
        </agent>
    </scheduling>
  • Processors that aren't used during development can be removed as well:
    <pipelines>
        <initialize>
            <processor type="Sitecore.Pipelines.Loader.ShowVersion, Sitecore.Kernel"><patch:delete /></processor>
            <processor type="Sitecore.Pipelines.Loader.ShowHistory, Sitecore.Kernel"><patch:delete /></processor>
            <processor type="Sitecore.Analytics.Pipelines.Initialize.ShowXdbInfo, Sitecore.Analytics"><patch:delete /></processor>
            <processor type="Sitecore.Pipelines.Loader.DumpConfigurationFiles, Sitecore.Kernel"><patch:delete /></processor>
        </initialize>
    </pipelines>
  • Last but not least, since cold starts are inevitable, I spend that time usefully - looking through emails, planning, scoping out, or... just visiting the kitchen for a fresh cup of green tea.
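The optimizeCompilations switch from the first bullet above lives in web.config; stripped down to the relevant element it looks like this (do read up on the side effects of dynamic compilation before enabling it):

```xml
<configuration>
  <system.web>
    <!-- reuses previously compiled assemblies instead of recompiling on every change -->
    <compilation optimizeCompilations="true" />
  </system.web>
</configuration>
```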

Content Editor

  • The Favourites tab under the Navigate menu allows you to add items for quick access. Once added, they are stored under /sitecore/content/Documents and settings/<domain>_<username>/Favorites in the core database.
  • Similarly to the previous tip, did you know that you can create Sitecore Desktop shortcuts - the same way as on the Windows desktop? Use this feature to speed up access to your frequent items.
  • On the Launchpad, you can place some tools visible to admins only, like Unicorn, ShowConfig, File Manager etc. (package).
  • Pre-load tabs in Content Editor. Seriously, I've noticed that plenty, if not the majority, of folks work in Content Editor in just one window! Navigating the tree structure is, for me, an insane loss of productivity, while switching between open windows in the Sitecore desktop has zero overhead. For example, working on an SXA website I keep the following opened and pre-loaded:
    1. Home page
    2. Data folder 
    3. Partial designs (if I currently work with page structure)
    4. Rendering variants
    5. Renderings
    6. Media Library
    7. Templates
    8. PowerShell ISE
    Once again, these (except the last one) are site-related items that normally sit deeper in SXA (i.e. /sitecore/templates/Project/Tenant/Site vs. /sitecore/templates). This trick saves me only seconds, but it does so constantly! So normally it looks like this for me:

    You can even automate that - I blogged about the automation approach in this post.
  • The Expand/Collapse buttons are especially helpful when working on large Helix-based solutions: you can quickly collapse all sections and open only the desired one.
  • Remove unused Content Editor stuff from Application Options (under the hamburger menu); also, unchecking View -> Standard fields can make the Content Editor up to twice as fast.
  • Limit the number of item versions to 10.
  • Setting the Field Section Sort order will also save time by keeping the most important sections at the top.


2. SOFTWARE

Visual Studio, VS Code and the most useful Visual Studio extensions - I can mention a few of them:

  • ReSharper is the king of all extensions and worth every dollar spent. VS 2019 takes on some of its features but is still far from ReSharper's functionality.
  • The Attach to IIS extension adds Attach to IIS to the VS Debug menu, so you can also assign hotkeys for debugging your Sitecore instance.
  • Use snippets instead of typing code manually (one, two, three - plenty of them) or make your own.
  • T4 template code generators (use them in conjunction with Glass Mapper).

It's important to have some sort of master productivity tool. For example, I am using Total Commander, which is far more than just a great two-panel file manager - I've turned it into a power pack that includes:

  • A diff tool (I configured Total Commander to use Beyond Compare), though it also comes with a fair free built-in tool
  • A built-in FTP client with encrypted password storage
  • Hotkeys for almost everything you can do
  • Rapid access to the most important folders you define (and yes - hotkeys for that)
  • True and reliable search by content, regex... and also inside archives (compare that with Windows)
  • Overriding system file associations and assigning your tool of choice, with parameters
  • I integrated TeraCopy into Commander, so I have the best and fastest copying tool as well
  • I also integrated a PowerShell console into TC, which saves a lot of time opening it in the right context
  • Plenty of plugins and much more useful stuff that I struggle to remember at the moment

That's why I am quite surprised to see the majority of developers still using the classic Windows Explorer - it is such a bottleneck (in my humble opinion). This tool alone saves me about an hour daily!

Since I've just mentioned PowerShell - nowadays it helps you automate almost everything. This includes:

  • managing Windows Server, all its dependencies, and all types of activities on the filesystem, registry, MMC, etc.
  • installing, modifying and deploying Sitecore and all its dependencies
  • building images and starting, stopping and deploying containers
  • doing the same with Hyper-V virtual machines (and I assume this is also possible with VMs from other vendors)
  • all types of management and configuration for IIS and SQL

It is probably harder to imagine what is not doable with PowerShell. And of course, investing time in mastering PowerShell brings you even more benefits when using Sitecore PowerShell Extensions. Combining both, you can benefit from Sitecore PowerShell Remoting, accessing Sitecore assets and resources from outside your instance.
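As a sketch of what that remoting looks like (this assumes the SPE Remoting module is installed and remoting is enabled on the target instance; the URL and credentials are placeholders):

```
# Runs outside Sitecore, against a remote instance
Import-Module SPE
$session = New-ScriptSession -Username "admin" -Password "b" -ConnectionUri "https://sc911.local"
Invoke-RemoteScript -Session $session -ScriptBlock {
    Get-Item "master:/content/home" | Select-Object Name, ID
}
Stop-ScriptSession -Session $session
```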

Other tools that significantly save my day-to-day life:

  • Chocolatey. I put it in first place intentionally - it is a console package manager for Windows that allows you to install almost any software from a Windows console (not even PowerShell!).
  • LockHunter helps me find out which process has locked folders/files and force-release them. The biggest offenders are typically IIS, console windows left open and, of course, our beloved Visual Studio.
  • Slack became the most used tool for self-organized teams, especially nowadays with the growth of Agile-based approaches. With the ability to create channels on any topic and great mobile clients, it helps thousands of distributed teams globally. When setting up CI/CD pipelines I usually configure build notifications to be sent to a dedicated channel. Slack is also a proven way to replace boring meetings.
  • The dotPeek .NET decompiler should be mandatory for every Sitecore developer, since it's the most genuine way to see how it all works internally in Sitecore.
  • Synergy helps me unite a few laptops (2 running Windows and 1 macOS) into one large multi-screen environment, with keyboard, mouse and even clipboard shared across the different OSes.
  • Postman and Fiddler are tools for creating RESTful web requests and intercepting others, even those going over HTTPS.
  • smtp4dev becomes indispensable when you start developing emailing functionality. It intercepts your email sending attempts, grabs the messages and even puts them into your mail app. You don't need an SMTP server anymore!
  • pCloud - an expensive cloud storage service, but worth every penny, with true Swiss quality. I got a lifetime subscription with them, including the Crypto folder (which is truly encrypted!). I'm currently trying to replace Dropbox entirely with pCloud.
  • Telegram messenger (I give more explanation about it below)
  • Jing - a screenshot tool that goes far beyond the built-in Alt + PrtScn Windows functionality
  • Instance backup/restore tool
  • And one more triple combo - Evernote + Dropbox + 1Password - saves me plenty of time on a daily basis.

Source control

I won't be unique in saying that I prefer git along with the Git Flow approach. The master branch is used only to keep primary releases; the develop branch is all developers' cumulative snapshot, always deployable and used for CI/CD. Further down we have functional (feature and integration) branches. This approach also allows my teams to avoid serialization conflicts when doing large structural refactorings on Sitecore items.

For git I use three tools in parallel:

  • SourceTree is an excellent free tool for history visualization, branch tracking, etc. Unfortunately, it still has a buggy UI, especially after it was rewritten in WPF - sometimes it struggles to reappear from the minimized state, errors out on some repos with a long history and plenty of branches, and is also quite slow to start up.
  • TortoiseGit - I use it for commits, historical comparison, the repo browser and a few more things, mostly out of old habits from using SVN (Tortoise SVN, of course) 10-15 years ago. It also comes free of charge.
  • Console - for everything else.

The simplest way to get them is, as usual, from Chocolatey:

cinst sourcetree
cinst tortoisegit
cinst git.install

One of the positive habits I want to share is making a last-minute check of all the items and code immediately before committing. When writing code, you're deeply focused on the specific functionality you're building, while a pre-commit check shows you the overall picture. I also check for anything silly, like potential null references, badly named variables, and other high-level but important stuff.

Another productivity improvement I use when working with git is creating aliases. This allows assigning short, easy-to-remember aliases to long commands with parameters. This is how I assign an alias:

git config --global alias.lga "log --decorate --graph --oneline --all"

Now I can call git lga and it gives me the same result as the long version, git log --decorate --graph --oneline --all:
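To see the whole thing end to end, here is a self-contained demonstration in a throwaway repository (it assumes git is on your PATH; the repo and commit are made up for the demo):

```shell
# Create a throwaway repo to try the alias safely
repo=$(mktemp -d)
cd "$repo"
git init -q
# -c supplies an identity just for this command, so no global config is needed
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "initial commit"
# Local alias for this repo only; add --global to keep it everywhere
git config alias.lga "log --decorate --graph --oneline --all"
git lga
```

The last line prints the usual one-line graph, with the commit subject at the end.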


The browser today is still our primary target application, so it's also a point of productivity tuning:

  • Organize your bookmarks properly into folders and subfolders. Once done, they will sync across all the machines you've logged into.
  • User scripts with Greasemonkey (Firefox) or Tampermonkey (Chrome) allow improving the functionality of many websites whose authors have intentionally or accidentally neglected the UI/UX. Plenty of ready-to-use scripts are available through GitHub and custom repositories.
  • Invest time in mastering DevTools - this is the first-class tool for any web developer and will start saving you time and effort quite soon, if not immediately. Pluralsight has a few useful courses for that (one, two).
  • Use other helpful Chrome extensions: Sitecore Developer Tool, Sitecore Extensions, Grammarly, EditThisCookie, AdBlock, OneClick URL Shortener.
  • Browser hotkeys are nowadays mostly universal across browsers. I promise you'll come across at least one big discovery after going through the list of keyboard shortcuts, and it is not even a complete one. For the full list, please refer to your specific browser's documentation.


3. HARDWARE

Hardware is crucial for productivity. For my setup I use:
  • A Dell XPS 15" with 32 GB RAM and a fast Samsung PM961 SSD. I got 1 TB of storage, but even that is barely enough due to numerous VMs and snapshots. It is an expensive laptop, but you get what you pay for - a frameless 4K touch screen and top specs: as of 2019, you can get a version with an i9-8950HK processor and a 2 TB SSD.
  • The Craft keyboard and MX Anywhere 2S mouse - both top-spec input devices from Logitech. They work perfectly together through the same receiver (and can hot-switch between three of them) and are configured through the same software.
  • I normally use 3 monitors (one of which is the laptop itself). If a monitor has a pivot function (as in the image below), that's an excellent bonus to productivity - a vertical layout is excellent for code. The left-hand monitor is usually used for the browser, with Sitecore and/or the live website under development always open. The laptop's screen is for everything else - file manager, configuration, notes, Slack, etc.
This is how it all works together:



4. ORGANIZATIONAL

Approaching technical debt

Technical debt is a deliberate decision to implement a not-the-best solution or write not-the-best code to release software faster. Taking on some technical debt is inevitable and can increase speed in software development in the short run. However, in the long run, it contributes to system complexity, which slows the developers down. Non-programmers often underestimate the loss of productivity and are tempted to always move forward, and that becomes an issue. But if refactoring is never part of the priorities, it will not only impact productivity but also product quality. 

Someone wise came up with an approach to the managerial side of handling technical debt - the image below shows how to correctly explain technical debt to managers:


General productivity thoughts

In general, productivity is a combination of three parameters: time, energy spent on achieving a goal, and level of concentration. The advice below follows from that:

  • Try staying in The Flow - for developers it is the state when they feel most focused and productive; most of their work is done in this state. For most developers, productivity follows Pareto's law: 20% of the time delivers 80% of the result, and the remaining 80% of the time brings the remaining 20%.
  • Minimize distractions from the open space - headphones on! BTW, I can recommend Rainy Mood, which is a recent finding of mine. Every distraction switches you out of context, and switching contexts is an expensive activity in terms of time and effort.
  • Avoid meetings where it makes sense to. Only 30-40 percent of meetings are important; the rest invite you to participate "just in case" (they may need to ask something of you, and sometimes they do). But at what cost? A single meeting can blow a whole afternoon by breaking it into two pieces, each too small to do anything hard in - again, due to switching contexts.
  • In addition, it is highly demotivating when management spends your time so loosely, especially when timeframes are tight and you have to work overtime in order to meet the deadlines. To be clear, some meetings are useful and very important, especially at the planning stage; unfortunately, people overuse meetings.
  • Because of the above, a working week of 4 days x 10 hours is way more productive than 5 x 8, despite the same hours worked. The latter carries the hidden cost of extra context switching, and the former also saves me one round trip of commute (3 hours on average).
  • The general approach is to identify what your actual biggest bottlenecks are. The Theory of Constraints is something that may come to your help here. Also, anything outside of the job description (along with learning new stuff) should by definition be treated as a non-productive waste of your time.
  • Organize your own notes / knowledge base / to-do lists / planners with the quickest possible access for both reading and writing. These can be any tools of your choice, as long as they give you immediate (and offline) access to your important information. Surprisingly, Telegram became such a tool for me despite being primarily a messenger, thanks to its built-in cloud, offline access, cross-platform sync, and immediate access.
  • Everything you come across that is worth further checking (but not at the moment) should be recorded in your "hot" operational notes, to avoid switching context and to make sure your brain's capacity is not consumed by "remembering stuff" instead of focusing on what matters most.
  • Identify all your most frequent actions across the system, IDE and your most used software, and find keyboard shortcut combinations for them, or assign your own.
  • Finally, I'd recommend reading this list of tactics and hacks - you'll likely pick something out of there.


That's all that comes to mind for now. What productivity tips do you have?

Yet another SXA rendering variant - Script Reference Tag coming to improve your SEO

Note! The code used in this post can be cloned from the GitHub repository: SXA.Foundation.Variants

I previously wrote a post about a rendering variant holding an inline JavaScript snippet one might need when adding some basic JS functionality to your components. 

This is useful at the early stages of developing your pages, when you have no possibility or capacity to recompile the entire frontend and update the Creative Exchange package in your solution just because of adding or changing a few lines. However, this approach is not SEO-friendly, as search engines penalize sites for excessive inline scripts and styles. So treat it as technical debt that should be addressed before going to production.

The very minimal change one can make is to replace the inline script with a reference to that same script stored in the Media Library - the same thing SXA itself does with themes. This blog post reveals that approach:

Firstly, create a template:

Then reference the given template IDs within the <code>Constants.cs</code> file:

using Sitecore.Data;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public static partial class Constants
    {
        public static partial class RenderingVariants
        {
            public static partial class Templates
            {
                public static ID ScriptReferenceTag { get; } = new ID("{0EC036D7-384D-4CF6-AD1F-FE949E96126A}");
            }

            public static partial class Fields
            {
                public static class ScriptReferenceTag
                {
                    public static ID ScriptMedia { get; } = new ID("{F1497AF9-7DD3-4B38-BE22-5F092007F929}");
                }
            }
        }
    }
}
The model class has just one property, which stores the GUID of the referenced script from the Media Library:
using Sitecore.Data.Items;
using Sitecore.XA.Foundation.RenderingVariants.Fields;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class VariantScriptReferenceTag : RenderingVariantFieldBase
    {
        public string ScriptMedia { get; set; }

        public VariantScriptReferenceTag(Item variantItem) : base(variantItem)
        {
        }
    }
}
Parser:
using Sitecore.Data;
using Sitecore.XA.Foundation.Variants.Abstractions.Pipelines.ParseVariantFields;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class ParseScriptReferenceTag : ParseVariantFieldProcessor
    {
        public override ID SupportedTemplateId =>  Constants.RenderingVariants.Templates.ScriptReferenceTag;
        
        public override void TranslateField(ParseVariantFieldArgs args)
        {
            ParseVariantFieldArgs variantFieldArgs = args;

            var variantHtmlTag = new VariantScriptReferenceTag(args.VariantItem) { Tag = "script" };
            variantHtmlTag.ScriptMedia = args.VariantItem[Constants.RenderingVariants.Fields.ScriptReferenceTag.ScriptMedia];
            variantFieldArgs.TranslatedField = variantHtmlTag;
        }
    }
}
Renderer:
using System;
using Sitecore.Data;
using System.Web.UI.HtmlControls;
using Sitecore.XA.Foundation.RenderingVariants.Pipelines.RenderVariantField;
using Sitecore.XA.Foundation.Variants.Abstractions.Pipelines.RenderVariantField;
using Sitecore.Resources.Media;

namespace Platform.Foundation.Variants.Pipelines.VariantFields.ScriptReferenceTag
{
    public class RenderScriptReferenceTag : RenderVariantField
    {
        public override Type SupportedType => typeof(VariantScriptReferenceTag);

        public override void RenderField(RenderVariantFieldArgs args)
        {
            var variantField = args.VariantField as VariantScriptReferenceTag;
            if (variantField != null)
            {
                var id = variantField.ScriptMedia;
                if (string.IsNullOrWhiteSpace(id))
                {
                    return;
                }

                var scriptItem = Context.Database.GetItem(new ID(id));
                if(scriptItem == null)
                {
                    return;
                }

                var url = MediaManager.GetMediaUrl(scriptItem);

                var tag = new HtmlGenericControl(variantField.Tag);
                tag.Attributes.Add("type", "text/javascript");
                tag.Attributes.Add("defer", String.Empty);
                tag.Attributes.Add("src", url);

                args.ResultControl = tag;
                args.Result = RenderControl(args.ResultControl);
            }
        }
    }
}

Example of usage:

This rendering variant field generates the following output:

<script src="/-/media/Project/Platform/Other/Scripts/Header-script.js" type="text/javascript" defer="" ></script>


This approach works perfectly well. But once again, have you ever considered moving such scripts into a Theme along with the related component (if any), instead of leaving them like that? Hope this helps!

Adding Show Config icon to Sitecore Launchpad

Previously I wrote about adding Unicorn to the Sitecore Launchpad for admin users. This time I am adding one more tool icon - Show Config - which leads admin users to the page showing the patched and merged configuration used by the given Sitecore instance.

No more need to remember and manually type the https://instance.hostname/sitecore/admin/showConfig.aspx URL in order to access it.

You may download the installation package at the bottom of this blog post.


The actual button item is located in core database under the following path: /sitecore/client/Applications/Launchpad/PageSettings/Buttons/Tools/Show Config.

Download the ready-to-use package (19.7 KB; keep in mind that the Show Config icon will be shown to admin users only).


Productivity improvement: implementing Expand all and Collapse all buttons to Content Editor

One day I was working on a page that had way too many Content Editor sections open, and the many sections coming from Standard Fields added to the frustration. I thought it would be great to have a Collapse All button that closes all the sections to ease navigation. 

I went to check how Content Editor handles this, and then wrote a JavaScript snippet that implements and wires up the desired functionality. I also added an Expand All button for the reverse behavior. Here's the code:

scContentEditor.prototype.onDomReady = function (evt) {
    this.addCollapser(window.jQuery || window.$sc);
};
scContentEditor.prototype.addCollapser = function ($) {
    $ = $ || window.jQuery || window.$sc;
    if (!$) { return; }

    $('#EditorTabs').append("<style>.toggler { border: 1px solid #bdbdbd; box-shadow: 0 1px #ffffff inset; cursor: pointer; height: 35px; margin: 16px 1px 0; }</style>");
    $('#EditorTabs').append("<button id='expander' class='toggler'>Expand all</button><button id='collapser' class='toggler'>Collapse all</button>");
    $('#EditorTabs').on("click", "#collapser", function () {

        $('.scEditorSectionCaptionExpanded').each(function () {
            var script = $(this).attr("onclick");
            eval(script);
        });
        return false;
    });
    $('#EditorTabs').on("click", "#expander", function () {

        $('.scEditorSectionCaptionCollapsed').each(function () {
            var script = $(this).attr("onclick");
            eval(script);
        });
        return false;
    });
};

All you need to do is append this code to the bottom of <your_web_root>\sitecore\shell\Applications\Content Manager\Content Editor.js and that's it!

For those like me who like automation, I am attaching this JavaScript file; right-click this link and save it, then use PowerShell:

$file2 = Get-Content "ExpanderCollapser.js"
Add-Content "C:\inetpub\wwwroot\<YOUR_WEB_ROOT>\sitecore\shell\Applications\Content Manager\Content Editor.js" $file2

Once done, you'll see the result in Content Editor:



There is also package available for download (compatible with Sitecore 9.0.* - 9.2)


Implementing Sitecore security domain role multi-selector field

I was working on implementing a subscription model system, where authenticated users visit the website with a specific role coming from Identity Server (or, if unauthenticated, as anonymous, of course), so that I can apply content personalization, as we normally do.

The difference was, however, that subscription levels were logical units, more complicated than and not matching the Identity Server roles. They also had to be adjustable from Sitecore by business users. That made personalization by these user types quite complicated, due to complex rule creation, especially rules with inverted ("except when") logic. But even with that in mind, I could not simply use personalization to prevent unauthorized users (for example, those registered and logged in, but still having insufficient permissions) from accessing specific types of content. The business requirement demands that all pages be accessible by anyone, but when users don't have the required access level, most of the content (apart from a few teasing paragraphs at the beginning) needs to be greyed out by a component encouraging them to increase their subscription level in order to get full access.

So, in order to address these requirements, I decided to implement a simple role-mapping Subscription Model, which could be described by this template:

But wait! There is no out-of-the-box way to reference Sitecore security roles from an item!

So I decided to implement one. After some quick googling I came across Mike Reynolds' experiments with fields and templates and took a similar route, implementing a Role Multilist Selector field. 

I have published the ready-to-use code, along with the required core database serialization, to the GitHub repository: Sitecore.Foundation.Fields

Once done, the core database needs a new field type registered - Roles - which is implemented like a traditional multi-select field:


So now I can use it as an ordinary Sitecore item field. Please note that the Source column in the first screenshot above contains Domain=ids - that is a set of parameters passed in the format of a URL query string (UrlString is the Sitecore class that accepts these parameters in code). I've implemented that as a Sitecore domain filtering parameter, where ids is the domain name.
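To illustrate the idea, here is a minimal, self-contained sketch of that Domain=ids filtering in plain C#. It is only an approximation: the real field uses Sitecore's Sitecore.Text.UrlString and the role provider APIs, and the helper names below (ParseSource, FilterByDomain) are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class RoleDomainFilter
{
    // Parse a field Source string such as "Domain=ids" into key/value pairs,
    // similar to what Sitecore's UrlString class does for field parameters.
    public static Dictionary<string, string> ParseSource(string source) =>
        source.Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries)
              .Select(pair => pair.Split(new[] { '=' }, 2))
              .ToDictionary(p => p[0].Trim(),
                            p => p.Length > 1 ? p[1].Trim() : string.Empty,
                            StringComparer.OrdinalIgnoreCase);

    // Keep only roles belonging to the requested domain ("domain\rolename" format);
    // when no Domain parameter is set, all roles pass through.
    public static IEnumerable<string> FilterByDomain(IEnumerable<string> roles, string source)
    {
        var parameters = ParseSource(source);
        if (!parameters.TryGetValue("Domain", out var domain) || string.IsNullOrEmpty(domain))
            return roles;
        return roles.Where(r => r.StartsWith(domain + @"\", StringComparison.OrdinalIgnoreCase));
    }
}
```

With Source set to Domain=ids, a role list like ids\Subscriber, sitecore\Author would be narrowed down to just ids\Subscriber before the selector renders it.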

Now we can select roles; they will be stored in pipe-separated format in the given field:


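For reference, the pipe-separated raw value can be handled with a trivial split/join round-trip. This is a plain C# sketch with no Sitecore dependencies, and the helper names are hypothetical (note that standard Sitecore multilists store pipe-separated item IDs, whereas this field stores role names such as ids\Subscriber):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class RolesFieldValue
{
    // Deserialize the raw pipe-separated field value into a list of role names.
    public static List<string> Parse(string rawValue) =>
        (rawValue ?? string.Empty)
            .Split(new[] { '|' }, StringSplitOptions.RemoveEmptyEntries)
            .ToList();

    // Serialize selected roles back into the stored pipe-separated format.
    public static string Serialize(IEnumerable<string> roles) =>
        string.Join("|", roles);
}
```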

Finally, after implementing the logical layer of the Subscription Model, I also had to create custom rule conditions to apply personalization operating on these logical subscriptions, but that made business users' lives way easier.

Hope this helps!

Image tag wrapped with an anchor, both having their own classes, without any unwanted component wrappings - easy? Not OOB in SXA, but here's the fix!

Note! The code used in this post can be cloned from the GitHub repository: SXA.Foundation.Variants

Image Link rendering variant field

This is a quite powerful and at the same time very simple rendering variant field: it nicely renders an <img> tag surrounded with an <a> anchor tag, without any of the other unwanted wrappings that normally come when nesting components in SXA, as below:

<a href="http://link.to/internal-or-external-item" class="individual-class-for-anchor">
    <img src="/-/Media-item-from-sitecore" class="individual-class-for-image"/>
</a>
What is special is that you can attach individual classes to each element's node!

A good advantage is that the image can be statically referenced from the media library or taken out of a context item's field of Image type. Please note that the static reference always takes precedence over the context field, if both are set.
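That precedence rule can be sketched as a tiny helper - a hedged illustration only (the names are hypothetical, and the real field resolves the chosen ID to a URL via Sitecore's MediaManager rather than returning it directly):

```csharp
using System;

public static class ImageSourceResolver
{
    // Pick the statically referenced media ID when set; otherwise fall back
    // to the Image-type field of the context item. Returns null when neither is set.
    public static string Resolve(string staticMediaId, string contextFieldMediaId)
    {
        if (!string.IsNullOrWhiteSpace(staticMediaId))
            return staticMediaId;        // static reference always wins
        if (!string.IsNullOrWhiteSpace(contextFieldMediaId))
            return contextFieldMediaId;  // fall back to the context item's field
        return null;                     // nothing to render
    }
}
```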

Another feature not easily achievable OOB is the ability to specify individual CSS classes for both the <a> and <img> tags. If the link is not set, it will simply render the image with its class.



The image above shows a usage example where I use this variant field to show a company logo in the header, so that it is a link to the home page and both elements have their front-end CSS styles set.
In the given example a static reference to a media item (the image with the company logo) is used, since this is a header implementation, which means the component sits on a partial design and the partial design becomes the context item. The image is referenced only once, in the header, so there is no need to create new instances exposing a datasource with the sole purpose of referencing an image - with this approach we can reference media items directly!

As usual, the entire code and the Sitecore package with the field templates are located in the GitHub repository SXA.Foundation.Variants; you can also find usage documentation there. 

Walkthrough: creating a footer for an SXA website implementing precisely demanded front-end markup

Note! This is the second walkthrough explaining an implementation of a real-life scenario with Sitecore SXA.
It reveals best practices and several powerful techniques, such as:

  • structuring data for complex components to be both easy to maintain and editors-friendly
  • referencing other renderings using Component field, setting datasource and rendering variant
  • reusing existing built components and their templates by Clone rendering PowerShell script
  • nesting rendering variants and looping through them
  • using Query Variant field for accessing child items
  • restricting rendering variants by certain page templates

When starting to work with SXA I faced a lack of good guidance and walkthroughs (with the exception of the excellent series by Adam Najmanowicz). Today I am going to help close this gap by adding my own walkthrough of implementing a footer - what could be simpler, I thought. After reading some official tutorials I expected it to be an exercise of dropping structure components (i.e. splitters with rows and columns) and assigning link lists into them, so that it could later be styled by the front-end team. Wrong! Things appeared to be not as easy.

To start with, my strict front-end team came to me with quite a precise requirement for a footer; below is what they demanded from me.

Requirements

They sent me an image with all the blocks assigned:


It was accompanied by the HTML output I was expected to achieve:


Minimal and effective - nice job, front-enders! Now the ball is in my court to get all that implemented.

Implementation

1. As per Sitecore SXA recommendations, I create a Footer partial design and open it in Experience Editor for editing. The Footer component will be created on that partial design, which itself will be used to construct the resulting page.

2. In order not to mess with any OOB components, I create the Footer rendering by cloning one of the existing renderings with a datasource, which ensures a copy of the datasource template and folder gets created. Also make sure this rendering stays outside of the Experience Accelerator folder, on the Feature layer, where serialization is enabled.

3. Assign the new rendering to Available Renderings (/sitecore/content/Tenant/Site/Presentation/Available Renderings/Module) so that it appears in my custom components section and also in the Toolbox.

4. Now it is a good time to adjust the template for Footer. It was created automatically by cloning, but this is how I defined it:


What is important to explain here: the Elements field points to a Footer Elements Folder. At first glance, that folder contains just a collection of link lists. But that's not right! There are at least 3 types of footer blocks: the first four blocks are indeed Link List blocks; however, they are followed by Rich Text blocks, and finally there is a Social Presence block - and all three types can be added into the Footer Elements Folder. These three types are defined along with the Footer template; you may see them on the screenshots below.

5. Element templates. This is how I defined them. Link list footer element:

Rich text footer element:

Social presence footer element:


6. Insert Options need to be configured for the Footer folder to accept both Footer items and the Footer Elements Folder. Configure the Footer Elements Folder to accept these 3 types. Now one can insert the data. Once done, this is how the data folder looks:


The first 4 items (About Us, Hot topics, Other sites, Help and Support) reference the corresponding Link Lists, defined as below:


The next two items point to reusable Rich Text items under the /Data folder. And the last one is a reference to the Social Presence component I implemented previously.

After items of all three types are created, we can assign them to the footer datasource item itself:



7. Rendering variants. In the previous steps, we defined the rendering, templates, folders and actual data. Now it is time to make it all work together to produce the output, by creating rendering variants. This is the trickiest part of the current walkthrough.


Default is the main and only rendering variant to be called for the footer. The other three variants are "service" variants, designed to be called from Default internally in a loop, each being assigned to a Component rendering field with personalisation applied (see image above).

Here's how these three other variants look:


Link list footer element switches to a referenced Link List item and uses Query variant field to iterate its children.

Rich Text footer element simply references a Rich Text (reusable) item under the /Data folder in the same manner.

As for Social presence footer element rendering variant, it defines a component that references Social buttons rendering with Social presence rendering variant, that I have described in one of my previous posts.

Lastly, footer-info__copyright field renders copyright lines at the very bottom.

8. In order to avoid confusion for your editors, it makes sense to restrict the rendering variants by using the Allowed in templates field, leaving only the Default variant available, since the other three variants are called internally from Default using the Component variant field. 

9. Apply the new rendering to the partial design in Experience Editor. Do not forget to configure the footer placeholder to accept only the Footer component by creating a placeholder setting.

Result

Save the page and enjoy the result:


Not a single line of back-end code!

How to add id and data-attributes to a Rendering Variant in SXA?

When dealing with a rendering variant field, it is not a big deal to set a few data-attributes on it - those inputs are located at the very bottom of the Variant Details section. You can do it like this:


But what if you need data attributes on the top level of the component, which is the Rendering Variant item itself? There isn't such an option!

Requirements are

  1. An id attribute (i.e. section-1, section-2, ... section-N)
  2. One or many data-attributes (i.e. "User-friendly title", "Another user-friendly title", etc.)
  3. A CSS class section-with-anchor on those instances which have both previous requirements implemented
All of the above should be set on the top node of a rendering - outside of the control of the Rendering Variant. Thinking logically: if we could ever add the above to the Rendering Variant item itself, then it would be present on every single instance of that rendering variant. We do have a CSS-class field on the Rendering Variant item, but as I said, we need this class to be present only occasionally, for some individual instances, as per the requirement - so we cannot use that field.

Solution

That is where Rendering Parameters come into play, as they apply per individual rendering usage. Let's take a look!

1. ID of a component. That was the easiest, as luckily the default rendering parameters support a field for that:


2. Data-attributes do not exist in the Rendering Parameters control, unlike the id attribute. But since it is just a collection of key-value pairs, why not convert them into a set of data-attributes on the component node? Not all of them, of course, but only those that start with data-, as in the image below:


In order to pick them up and assign them to a rendering view, I wrote a simple extension method:
public static MvcHtmlString RenderAllDataAttributes(this HtmlHelper helper)
{
    var rendering = Sitecore.Mvc.Presentation.RenderingContext.Current.Rendering;

    string additionalAttributes = String.Empty;
    if (rendering?.Parameters != null)
    { 
        foreach (KeyValuePair<string, string> parameter in rendering.Parameters)
        {
            if (parameter.Key.StartsWith("data-", StringComparison.OrdinalIgnoreCase))
            {
                // Encode the value (System.Web.HttpUtility) so that
                // editor-entered parameters cannot break the markup
                additionalAttributes += $"{parameter.Key}='{HttpUtility.HtmlAttributeEncode(parameter.Value)}' ";
            }
        }
    }

    return new MvcHtmlString(additionalAttributes.Trim());
}
It can be called like that:
<div @Html.RenderAllDataAttributes() @Html.Sxa().Component(Model.Rendering.RenderingCssClass ?? "default-class", Model.Attributes)>
    <div class="component-content">
        ...
    </div>
</div>


3. Setting a style class. As mentioned before, do not misuse the CSS Class field of the rendering variant definition for styling a specific instance of a rendering variant. Styles that are applied individually to each instance of a rendering (regardless of the variant selected) can be found in that same Rendering Parameters window, under the Styling section.


Of course, you may need to create this style beforehand, if not yet done. To do so, create a new Style item underneath the Styles grouping item and restrict it to the renderings where the given style can be shown. That is a part of your style theme and is located under the /sitecore/content/Tenant/Site/Presentation/Styles node:

Result

Finally, I got it all rendered as expected:


This blog post shows how useful Rendering Parameters are - hope you find it helpful!

Script rendering variant field in SXA - why would one need it?

Note! The code used in this post can be cloned from the GitHub repository: SXA.Foundation.Variants

Yet another rendering variant field came to my to-do list for implementation - a Script rendering variant field. Why would I need one at all? 

I recently came across two use cases where implementing this field type got the job done. Some developers have (reasonable) biases against having JavaScript code inline instead of referencing a JS file at the bottom of a page, but keep in mind that the given field comes as part of a component dynamically added to a page, so there isn't much choice in how to extend a running website with additional client-side functionality. I am presenting both cases below; let's take a look at them. If you know a better way of achieving these goals, please let me know via Slack or Twitter. Also, the code of the Script variant field is at the very bottom of the page.


Use case 1: implementing in-page navigation panel

As usual, I got a precise requirement from my strict front-end team to implement such a piece of code as a component. It has a UL tag that will hold a link list to other components of this same page (prefixed with #), created dynamically on the client side, and a script that does the actual job. When the in-page navigation component is rendered by the backend, we are not aware of the other components and their attributes, so the workaround was handling the page-loaded event and identifying all the components that have IDs set and the class section-with-anchor.

<div class="component content col-12 content-section">
    <div class="component-content">
        <div class="anchor-panel">
            <ul class="anchor-panel__list"></ul>
            <script defer>
                document.addEventListener('DOMContentLoaded', function(){
                    if (!$('.section-with-anchor').length) return;
                    $('.section-with-anchor').each(function(index, el) {
                        var anchor = '#' + $(el).attr('id');
                        var text = $(el).attr('data-text');
                        var $anchorList = $('.anchor-panel__list');
                        var $anchorItem = $('<li class="anchor-panel__item"></li>');
                        var $anchorLink = $('<a href=""></a>')
                        $anchorItem.append($anchorLink.text(text).attr('href', anchor));
                        $anchorList.append($anchorItem);
                    });
                });
            </script>
        </div>
    </div>
</div>

Quite obviously, my rendering variant will contain 2 fields: a UL-tag section field for in-page navigation that takes the links dynamically, and a script variant field containing the client-side logic. You can test this script in action via this link. This is how it looks in Content Editor:


Since I have 5 other sections qualifying for the script's requirements, they all have been identified by the script and added to the in-page navigation panel. Here's how the result looks on a styled page for me:


Once again, I decided to implement a new script variant field because the component is subject to seldom, minor JavaScript changes, but should stay configurable. Also, in the given use case it can be dropped only once onto a page, so there's no problem with multiple instances of the same script for me; if you use it, you might need to check whether that applies to your scenarios. Even with that in mind, I'd probably not have bothered creating yet another rendering variant field, if not for a few more potential usages I have in the backlog.


Use case 2: accessing client side URL hash-key parameter and presenting it on a page

Recently I implemented a URL query string parameter variant field, where one can set any parameter and the field presents its URL-decoded value on a page in any given tag and style. A colleague of mine who develops a search results page with SXA asked if it is doable to extract a hash-key URL parameter and show it on a page along with other content; however, that is a fully client-side parameter that never gets posted to the server.

I decided to give it a try and managed to confirm it with a small proof of concept. I quickly wrote this script (so please be forgiving of it being just a quick PoC, and of me not being a proper FED).

window.addEventListener("hashchange", function () {
    var h1 = document.getElementsByClassName("updatedHashValue");
    if (h1.length > 0) {
        h1[0].innerHTML = getHashValue("param");
    }
    function getHashValue(parameter) {
        var hashValues = window.location.hash.substr(1);
        var result = hashValues.split('&').reduce(function (result, item) {
                var parts = item.split('=');
                result[parts[0]] = parts[1];
                return result;
        }, {});
        return result[parameter];
    };
});

That's how it looks implemented as part of a rendering variant. It creates an H1 element to store the value, extracted out of the hash parameters in the URL bar and updated without any postback to the server, and of course a script variant field:


Testing. After adding the component to a page, saving it and selecting the above rendering variant, I got the page reloaded as anticipated with no visual changes. Then I opened a console from the browser dev tools and entered:

window.location.hash = "param=Successful!"

From dev tools it was easy to confirm that there was no postback to the server; however, the browser navigation bar predictably changed, appending the new hash parameter pair: 

And guess what? H1 element immediately got that value displayed. Love that magic!


The code is very simple, similar to other variant fields I've blogged about previously. Create the model and implement a property for the field:

public class VariantScript : VariantField
{
    public string Script { get; set; }
}

Reference ID of that field from a template and ID of a template itself:

public static class Constants
{
    public static class RenderingVariants
    {
        public static class Templates
        {
            public static ID Script = new ID("...");
        }
        public static class Fields
        {
            public static class Script
            {
                public static ID ScriptField { get; } = new ID("...");
            }
        }
    }
}

Here's a parser

public class ParseScript : ParseField
{
    public override ID SupportedTemplateId => Constants.RenderingVariants.Templates.Script;

    public override void TranslateField(ParseVariantFieldArgs args)
    {
        ParseVariantFieldArgs variantFieldArgs = args;

        variantFieldArgs.TranslatedField = new VariantScript
        {
            Script = args.VariantItem[Constants.RenderingVariants.Fields.Script.ScriptField]
        };
    }
}

and a renderer:

public class RenderScript : RenderVariantField
{
    public override Type SupportedType => typeof(VariantScript);

    public override RendererMode RendererMode => RendererMode.Html;

    public override void RenderField(RenderVariantFieldArgs args)
    {
        var variantField = args.VariantField as VariantScript;
        if (variantField != null)
        {
            args.ResultControl = RenderScriptField(variantField, args);
            args.Result = RenderControl(args.ResultControl);
        }
    }

    protected virtual Control RenderScriptField(VariantScript variantScript, RenderVariantFieldArgs args)
    {
        if (!string.IsNullOrWhiteSpace(variantScript.Script))
        {
            var tag = new HtmlGenericControl("script") { InnerHtml = variantScript.Script };
            tag.Attributes.Add("defer", String.Empty);
            return tag;
        }

        return new LiteralControl();
    }
}

That's the new Script rendering variant field, which enhances your tooling for implementing more modern-looking websites. And as usual - please use it responsibly!
