
Experience Sitecore !

More than 200 articles about the best DXP by Martin Miles

A PROPER way of validating models passed into a view

I am working on a project that uses code-generated interfaces passed as models into views.

I came up with a nice way of validating the models being passed, built on the C# feature called pattern matching. Validation now works by simply dropping the snippet below onto my view:

@if (Html.Validate<ICompositeTemplate>(Model) is var error && error != null)
{
    @error
    return;
}

What the above code does is validate that:

  • the model was passed and is not null
  • the model is of the given template
  • when the datasource template uses composition from numerous interface templates, the passed interface also implements the code-generated interface for each template used in the composition. I check that all of them are actually composed and that the model is mapped and passed using the right template
  • when validation fails, meaningful error messages are shown, but only to editors using the Experience Editor

For example, the above ICompositeTemplate could be defined as below:

public interface ICompositeTemplate : ITitleWithRichText, ICtaWithSvg
{
}

where ICtaWithSvg is itself a composite:

public interface ICtaWithSvg : ICta, ISvg
{
}

Both implemented interfaces inherit from IGlassBase and also carry the SitecoreType attribute:

[SitecoreType(TemplateId = ITitleWithRichTextConstants.TemplateIdString)]
[GeneratedCode("Leprechaun", "2.0.0.0")]
public partial interface ITitleWithRichText : IGlassBase
{
   // code-generated implementation
}
Looking at Sitecore, the actual template is implemented at the Project layer from a composition of the interface templates:


That works like a charm, exactly as I want it to!

Show me the code! Here is the HTML helper that makes it all work:

using System.Web.Mvc;
using Sitecore;
using System;
using Sitecore.Data;
using Sitecore.Data.Items;
using Foundation.Data.Models;
using System.Linq;
using Glass.Mapper.Sc.Configuration.Attributes;
using System.Reflection;
using System.Collections.Generic;

namespace Feature.Components.Extensions
{
    public static class ModelValidationExtensions
    {
        public static MvcHtmlString Validate<T>(this HtmlHelper html, T model) where T : IGlassBase
        {
            if (model == null) 
                return new MvcHtmlString("Datasource is missing for this component");

            var featureInterfaceTemplateIds = GetTemplateId<T>();

            var modelItem = Context.Database.GetItem(new ID(model.Id));

            var failedTemplates = new List<Guid>();
            foreach (var templateId in featureInterfaceTemplateIds)
            {
                var section = modelItem?.GetAncestorOrSelfOfTemplate(new ID(templateId));
                if (section == null)
                {
                    failedTemplates.Add(templateId);
                }
            }

            if (failedTemplates.Any())
            {
                var ids = failedTemplates.Select(ft => ft.ToString());

                var error = String.Empty;

                if (Context.PageMode.IsExperienceEditor)
                {
                    error = $@"<div class='EE_error'>
                                <div>Provided datasource has invalid type(s).</div>
                                <div>Please correct it to inherit: {string.Join(", ", ids.Select(id => $"{{{id}}}"))}.</div>
                              </div>";
                }

                return new MvcHtmlString(error);
            }

            return null;
        }

        private static Item GetAncestorOrSelfOfTemplate(this Item item, ID templateID)
        {
            if (item == null)
            {
                throw new ArgumentNullException(nameof(item));
            }

            return item.DescendsFrom(templateID) 
                ? item 
                : item.Axes.GetAncestors().LastOrDefault(i => i.DescendsFrom(templateID));
        }

        private static bool SitecoreTypeAttributeFilter(Type referencedType, Object criteriaObj)
        {
            var attribs = referencedType.CustomAttributes;
            var customAttributes = attribs.FirstOrDefault(a => a.AttributeType == typeof(SitecoreTypeAttribute));
            if (customAttributes != null)
            {
                var tmplId = customAttributes.NamedArguments.FirstOrDefault(a => a.MemberName == "TemplateId");
                if (tmplId != null)
                {
                    return true;
                }
            }

            return false;
        }

        private static IEnumerable<Guid> GetTemplateId<T>() where T : IGlassBase
        {
            var guids = new List<Guid>();
            var referencedType = typeof(T);

            var filter = new TypeFilter(SitecoreTypeAttributeFilter);
            var _ifs = referencedType.FindInterfaces(filter, "IGlassBase").ToList();
            _ifs.Add(referencedType);

            foreach (var type in _ifs)
            {
                var attribs = type.CustomAttributes;

                var customAttributes = attribs
                    .FirstOrDefault(a => a.AttributeType == typeof(SitecoreTypeAttribute));

                if (customAttributes != null)
                {
                    var tmplId = customAttributes.NamedArguments
                        .FirstOrDefault(a => a.MemberName == "TemplateId");

                    if (tmplId != null)
                    {
                        guids.Add(Guid.Parse(tmplId.TypedValue.ToString().Trim('\"')));
                    }
                }
            }

            return guids;
        }
    }
}
This saves me lots of time without losing quality, while keeping the models validated. If something goes wrong, both I and the editors know exactly what needs fixing.

Hope you benefit from validating your MVC models too!

Advanced editing: managing dynamic popups from custom RTE dialog

One day a request came from the business: they wanted a nice "information" icon appearing next to a piece of text; clicking it opens a modal popup dialog showing more information related to that line of content. From a UX point of view this is a decent solution, preventing the page from bloating with overly specific info.

From a page visitor's perspective it looks as below:


There was FE code provided that does exactly what is described. If I had been responsible for the front end, I'd have chosen to use an emoji, as they're an official part of Unicode and therefore supported by all browsers. But the front end was handed to me as a given, and I assume there was a decent reason for doing it the provided way.

In any case, our mission is to implement the back end for it, with certain challenges:

  1. There may be multiple popups on a single page, at least more than one.
  2. Users need to be able to dynamically add popups to a page (and sometimes remove them).
  3. Each popup needs to be editable, while in its normal state it is hidden.
  4. Because of the above, there must be a user-friendly way of distinguishing popups and giving them individual names.
  5. The information icon should be editable from within Rich Text, mixed in with the classical RTE content.
  6. Each "i"-icon should reference a specific popup, so that clicking different icons triggers different popups.
  7. Because of that, a clear and nice way of linking an icon to a popup should be provided.
  8. The BE solution is a classical MVC implementation, with no SXA or JSS (unfortunately).
Now, with these requirements in hand, let's implement the whole feature. Below are my..

THOUGHTS


1. Popup code
On the front end these were given to me implemented as <section>-tag blocks, hidden with styles. Each of these blocks has a data-name attribute that is used to reference it from the corresponding "i"-icon:


2. Popups aren't visible, so editors need extra support to manage them. On the layout I create a separate placeholder exclusively for adding these popups. Normally, once you add the first popup, the placeholder's "add here" invitation disappears, as it now contains an item - but that item is invisible.
To make it visible I add an additional section that shows up only in editing mode (Sitecore MVC Razor view):
@if (Sitecore.Context.PageMode.IsExperienceEditor)
{
    <div>Modal popup: @name</div>
}

That allows selecting each popup individually, which makes it possible to remove an existing popup or add a new one after it:


3. Editable Datasource
I use a standard Sitecore Controller Rendering with a generic datasource of the Title with Rich Text template. A simple code-generated Glass model coming from Leprechaun (but it could be anything, e.g. T4 templates) gets passed into an MVC view.


4. Giving the component a unique name
As you might know, a bunch of additional options (like styling) can also be provided through Rendering Parameters for each individual popup rendering. In our case we need to give each individual rendering on a page a unique name, and Rendering Parameters seem to be the ideal way of doing that.

Why Rendering Parameters?
Obviously, as the name suggests, Rendering Parameters are stored with the page and are set per component applied. That's opposed to datasource items, which carry replaceable sources of data and can be shared across several components of the same or compatible types.

I have recently written an article on how easily one can use rendering parameters with Glass Mapper by using strongly typed HTML helpers. I highly recommend reading it, as the code in this article uses the described method.


5. Editing Information icon
Now, the biggest challenge is that the "i"-icon is mixed in with the rest of the RTE content in a single field. Let's look at how the FE team implemented it:
<button data-name="NAME_OF_POPUP" type="button" aria-label="View Info" aria-haspopup="dialog"  class="button--more-info a-icon-info"></button>
It comes as a styled <button> HTML tag with a set of attributes, the most important of which is data-name. This attribute sets the relationship with the corresponding modal popup <section> tag to be shown/hidden.


6. Wiring-up icon and modal popup together
An initial thought was to tokenize this <button> tag and add it to the editor's snippets collection - if it weren't for the data-name parameter, which is unique for each icon and points to the same attribute on the corresponding modal popup <section> tag.


7. That means a more elegant solution should be chosen. Creating a custom editor dialog would solve this, for example by asking for the name of the modal popup to be shown on a click of the icon. The most straightforward way would be asking the user to type in the name created at stage 4, so that upon submission it gets injected into the <button> tag attribute and returned back to the editor.

That would work, but it is subject to potential mistakes/typos/misunderstandings by the user, so instead I decided to present the user with a drop-down already populated with the names of the modal popups previously added to the page. The user needs to type the name only once, and even if he/she typed a total mess as the name of the popup, that crazy value will be available for selection/usage without retyping.

Now let's turn to the actual..

IMPLEMENTATION


I will start with the Modal Popup rendering itself. First of all, we need to create a Controller rendering:

With the view code:
@using Glass.Mapper.Sc.Web.Mvc
@using Feature.Components.Templates
@using Feature.Components.Extensions
@model ITitleWithRichText

@{ var name = Html.GetRenderingParametersString<IPopupModalDialog>(m => m.DialogName); } 


@if (Sitecore.Context.PageMode.IsExperienceEditor)
{
    <div>Modal popup: @name</div>
}

The controller action method itself is extremely simple: grab the strongly typed interface from Glass Mapper and pass it to the view:
public ActionResult ModalPopup()
{
    var model = _componentsRepository.GetModel<ITitleWithRichText>();
    return View("~/Views/Feature/Components/ModalPopup.cshtml", model);
}

Next, we need to add a Sitecore placeholder for holding the modal dialogs. It is also good practice not to skip creating placeholder settings, for the convenience of the users choosing renderings:


Rich Text Editor

We need to create a button in the Rich Text editor; these buttons are configured inside the active Rich Text Profile in the core database.

Please note: whenever you need to modify anything in an OOB profile, do not change it directly. Duplicate the whole desired profile node instead, put it under serialization, and then you're welcome to modify it.

Therefore I duplicate the Rich Text Full profile into:
/sitecore/system/Settings/Html Editor Profiles/Custom
As with the rest of the editing-related stuff, I keep it serialized under the Foundation.Editing Helix module.

As an option, you may also want this new profile to become the default, so that there is no need to explicitly state the profile name in the templates' Source column. To achieve that, you can create a config patch file (right here, in Foundation.Editing) that sets the HtmlEditor.DefaultProfile setting to the profile we've just created:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <setting name="HtmlEditor.DefaultProfile">
        <patch:attribute name="value">/sitecore/system/Settings/Html Editor Profiles/Custom</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>



Once we've sorted out the Rich Text profile, let's add a new icon for the custom dialog. To do so, I create a new Modal Popup item under /sitecore/system/Settings/Html Editor Profiles/Custom/Toolbar 2.

The most important property here is the Click field: it stores the name of the modal dialog that serves this button:

As for the icon itself, it can be chosen from one of the sprite images stored at sitecore/shell/Themes/Standard/Images/Editor/WebResource.png by specifying an offset in a style:
html .ModalPopup { background-position: -6px center }
Here is how the result looks:


After we have created the new button and associated it with a Click command referencing the new dialog name, we also need code that handles the button click and triggers the new dialog.

Danger zone: to add this code one needs to modify an existing OOB JavaScript file, so with each version update there is a risk of the new vanilla version being overwritten with your custom-modified old version of the script. This file rarely changes, but still keep an eye on it - if a change occurs you'll need to handle the diff.

The customization in my case is simply appending some code to the very end of the sitecore\shell\Controls\Rich Text Editor\RichText Commands.js file:
Telerik.Web.UI.Editor.CommandList["ModalPopup"] = function(commandName, editor, args) {
  var html = editor.getSelectionHtml();
  var id;
  
  if (!id) {
    id = GetMediaID(html);
  }

  scEditor = editor;

  editor.showExternalDialog(
    "/sitecore/shell/default.aspx?xmlcontrol=RichText.ModalPopup&la=" + scLanguage + (id ? "&fo=" + id : "") + (scDatabase ? "&databasename=" + scDatabase : "") ,
    null, 400, 260, scModalPopup, null, "Insert Modal Popup Dialog", true, Telerik.Web.UI.WindowBehaviors.Close,false, false 
  );
};

function scModalPopup(sender, returnValue) {
  if (!returnValue) {
      return;
  }

  scEditor.pasteHtml(unescape(returnValue.Text), "DocumentManager");
}
What the above code does is add a Telerik.Web.UI.Editor.CommandList["ModalPopup"] handler (note that "ModalPopup" matches the value of the Click command we entered into the profile in the core database); it also adds the scModalPopup handler that actually pastes the resulting markup into the Rich Text editor.


Creating the Dialog

That is done with a fairly legacy markup method called Sheer UI. A traditional Sheer UI component consists of 3 pieces:
  1. XML markup that dictates how the control layout should be laid out
  2. Code-behind, similar to the old well-known ASP.NET WebForms (not to be confused with another deprecated toolset - WFFM)
  3. Related JavaScript code
Here they are, one after another:

1. XML markup for sitecore\shell\Controls\Rich Text Editor\ModalPopup\ModalPopup.xml - in its essential form, a RichText.ModalPopup control with a CodeBeside reference and the Target combobox:

<?xml version="1.0" encoding="utf-8" ?>
<control xmlns:def="Definition" xmlns="http://schemas.sitecore.net/Visual-Studio-Intellisense">
  <RichText.ModalPopup>
    <FormDialog Header="Insert Modal Popup Dialog" Text="Select the modal popup to be opened by this icon." OKButton="OK">
      <CodeBeside Type="Foundation.Editing.Dialogs.ModalPopup, Foundation.Editing"/>
      <Literal Text="Modal popup name:"/>
      <Combobox ID="Target" Width="100%"/>
    </FormDialog>
  </RichText.ModalPopup>
</control>

This markup has a <RichText.ModalPopup> section that hard-ties it to the command from the previous steps. It also has a <CodeBeside> section that references the C# code doing the rest of the logic behind the markup; here it is below:

2. ModalPopup.cs
using System;
using Sitecore.Web.UI.Pages;
using Sitecore.Diagnostics;
using Sitecore;
using Sitecore.Web;
using Sitecore.Web.UI.Sheer;
 
namespace Foundation.Editing.Dialogs
{
    public class ModalPopup : DialogForm
    {
        protected Sitecore.Web.UI.HtmlControls.Combobox Target;

        string Wrapping = @"<button data-name=""{0}"" type=""button"" aria-label=""View Info"" aria-haspopup=""dialog"" class=""button--more-info a-icon-info""></button>";

        protected override void OnLoad(EventArgs e)
        {
            Assert.ArgumentNotNull(e, "e");
            base.OnLoad(e);

            if (!Context.ClientPage.IsEvent)
            {
                Mode = WebUtil.GetQueryString("mo");
               
                Context.ClientPage.ClientScript.RegisterStartupScript(GetType(), "script", "scOnLoad();", true);
            }
        }

        protected override void OnOK(object sender, EventArgs args)
        {
            Assert.ArgumentNotNull(sender, "sender");
            Assert.ArgumentNotNull(args, "args");

            string code = string.Format(Wrapping, Target.Value);      

            if (Mode == "webedit")
            {
                SheerResponse.SetDialogValue(StringUtil.EscapeJavascriptString(code));
                base.OnOK(sender, args);
            }
            else
            {
                SheerResponse.Eval($"scClose({StringUtil.EscapeJavascriptString(code)})");
            }
        }

        protected override void OnCancel(object sender, EventArgs args)
        {
            Assert.ArgumentNotNull(sender, "sender");
            Assert.ArgumentNotNull(args, "args");

            if (Mode == "webedit")
            {
                base.OnCancel(sender, args);
            }
            else
            {
                SheerResponse.Eval("scCancel()");
            }
        }

        protected string Mode
        {
            get
            {
                string str = StringUtil.GetString(base.ServerProperties["Mode"]);
                if (!string.IsNullOrEmpty(str))
                {
                    return str;
                }
                return "shell";
            }
            set
            {
                Assert.ArgumentNotNull(value, "value");
                base.ServerProperties["Mode"] = value;
            }
        }
    }
}

Simply put, we take the user's input (in this case, the drop-down item selected from the list) and wrap it with the <button> tag stored in the Wrapping string variable:

3. Finally, sitecore\shell\Controls\Rich Text Editor\ModalPopup\ModalPopup.js that handles client-side part for this control:
function scClose(text) {
    var returnValue = {
        Text: text
    };
 
    getRadWindow().close(returnValue);
}
 
function GetDialogArguments() {
    return getRadWindow().ClientParameters;
}
 
function getRadWindow() {

    if (window.radWindow) {
        return window.radWindow;
    }
 
    if (window.frameElement && window.frameElement.radWindow) {
        return window.frameElement.radWindow;
    }
 
    return null;
}
 
var isRadWindow = true;
 
var radWindow = getRadWindow();
 
if (radWindow) {
    if (window.dialogArguments) {
        radWindow.Window = window;
    }
}

function scOnTargetLoad() {    
}

function scOnLoad() {    
    
    let select = document.getElementById("Target");
    let list = parent.parent.parent.document.querySelectorAll('section[data-name]')
    let values = Array.from(list).map(x => x.getAttribute('data-name'));

    for (var i = 0; i < values.length; i++) {
        var opt = document.createElement('option');
        opt.innerHTML = values[i];
        opt.value = values[i];
        select.appendChild(opt);
    }
}

function scCancel() {
 
    getRadWindow().close();
}
 
function scCloseWebEdit(embedTag) {
    window.returnValue = embedTag;
    window.close();
}
 
if (window.focus && Prototype.Browser.Gecko) {
    window.focus();
}
The main trick here is inside the scOnLoad() method. I created and referenced this handler to catch exactly the moment the drop-down is actually created on the control - that is not trivial, as it is instantiated asynchronously, much later than the hosting control itself. Once it is created, I crawl the triple-parent iframe for the presence of modal popup windows and grab their names into the drop-down, if found.

That is probably the main bit of the whole blog post. The user can now select the name of any of the modal popup components he has actually dropped into the placeholder on the page, and those names were entered only once, in the Rendering Parameters dialog forced after the control was added.

As mentioned above, I place both the new custom dialog and the modified RichText Commands.js file into the Foundation.Editing project, as per Helix guidance. That guarantees all the related code, scripts and items are kept together.

DEMO TIME

Thanks for reading and watching!

Sum-up of my PowerShell experience. Best practices

Last year I spent an enormous amount of time developing Sifon, which gave me a deep dive into the wonderful world of PowerShell. Of course, by then I already had a decent 6-7 years of experience with this scripting language, starting from its earliest version. However, the Sifon experience gave me a chance to revise all of my past experience and aggregate it into best practices.

Obviously, I expect my reader to have some experience with PowerShell and a partial understanding of its principles and object nature. My objective is to advise certain best practices on top of that, to increase the readability and maintainability of the PowerShell code used in your company and, as a result, the productivity of the administrators working with it.



Styleguides

Developing scripts according to style guides is universally good practice, and there can hardly be two opinions on it. Due to the lack of any officially approved or detailed description from Microsoft, the community filled that gap (around the time of PowerShell v3) and maintains it on GitHub: PowerShellPracticeAndStyle. This is a must-read repository for anyone who has ever used the "Save" button in the PowerShell ISE.

Briefly, the style guides boil down to the following points:

  • PowerShell uses PascalCase to name variables, cmdlets, module names, and pretty much everything except operators;
  • Language statements such as if, switch, break, process, -match are written exclusively in lowercase;
  • There is only one correct way to place curly braces, also known as the Kernighan and Ritchie style, which traces its history back to the book The C Programming Language;
  • Avoid using aliases outside an interactive console session; do not write things like ps | ? processname -eq firefox | %{$ws=0}{$ws+=$_.workingset}{$ws/1MB}; in actual scripts (see the sketch after this list for a readable equivalent).
  • Specify parameter names explicitly: cmdlet behavior and/or signatures may change over time, making such a call invalid. Furthermore, doing so provides context to whoever is unfamiliar with a specific cmdlet;
  • Expose script parameters through a param() block; do not write a function inside the script and call it on the last line, forcing callers to change global variables instead of specifying parameters;
  • Specify [CmdletBinding()] - this enriches your cmdlet with the -Verbose and -Debug flags and certain other useful features. I personally am not a fan of specifying this attribute in simple inline functions and filters that are only a few lines long;
  • Write comment-based help: just a sentence, or a link to a ticket, and an example of a call;
  • Specify the required version of PowerShell in the #requires statement;
  • Use Set-StrictMode -Version Latest - it will help you avoid a whole class of problems;
  • Process errors and exceptions;
  • Don't rush to rewrite everything in PowerShell. First and foremost, PowerShell is a shell, and invoking binaries is its primary task. There is nothing wrong with using robocopy in a script rather than trying to mimic all of its logic with PS.
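For illustration, the alias-heavy one-liner from the list above could be written out roughly like this (using Measure-Object instead of the manual accumulator):

# Full cmdlet and parameter names, no aliases - readable in a script
Get-Process |
    Where-Object -Property ProcessName -EQ -Value 'firefox' |
    Measure-Object -Property WorkingSet -Sum |
    ForEach-Object -Process { $_.Sum / 1MB }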

Comment Based Help

Please find below an example of a script help implementation. The actual script crops an image to a square and resizes it; it may seem familiar to those who once made avatars for users. There is a call example in the .EXAMPLE section - try it. And since PowerShell is executed by the CLR (the same one other .NET languages use), it can employ the full power of the .NET libraries:

<#
    .SYNOPSIS
    Resize-Image resizes an image file

    .DESCRIPTION
    This function uses the native .NET API to crop a square and resize an image file

    .PARAMETER InputFile
    Specify the path to the image

    .PARAMETER OutputFile
    Specify the path to the resized image

    .PARAMETER SquareHeight
    Define the size of the side of the square of the cropped image.

    .PARAMETER Quality
    Jpeg compression ratio

    .EXAMPLE
    Resize the image to a specific size:
    .\Resize-Image.ps1 -InputFile "C:\userpic.jpg" -OutputFile "C:\userpic-400.jpg" -SquareHeight 400
#>

#requires -version 3

[CmdletBinding()]
Param(
    [Parameter(Mandatory)]
    [string]$InputFile,
    [Parameter(Mandatory)]
    [string]$OutputFile,
    [Parameter(Mandatory)]
    [int32]$SquareHeight,
    [ValidateRange(1, 100)]
    [int]$Quality = 85
)

# Add System.Drawing assembly
Add-Type -AssemblyName System.Drawing

# Open image file
$Image = [System.Drawing.Image]::FromFile($InputFile)

# Calculate the offset for centering the image
$SquareSide = if ($Image.Height -lt $Image.Width) {
    $Image.Height
    $Offset = 0
} else {
    $Image.Width
    $Offset = ($Image.Height - $Image.Width) / 2
}
# Create empty square canvas for the new image
$SquareImage = New-Object System.Drawing.Bitmap($SquareSide, $SquareSide)
$SquareImage.SetResolution($Image.HorizontalResolution, $Image.VerticalResolution)

# Draw new image on the empty canvas
$Canvas = [System.Drawing.Graphics]::FromImage($SquareImage)
$Canvas.DrawImage($Image, 0, -$Offset)

# Resize image
$ResultImage = New-Object System.Drawing.Bitmap($SquareHeight, $SquareHeight)
$Canvas = [System.Drawing.Graphics]::FromImage($ResultImage)
$Canvas.DrawImage($SquareImage, 0, 0, $SquareHeight, $SquareHeight)

$ImageCodecInfo = [System.Drawing.Imaging.ImageCodecInfo]::GetImageEncoders() |
    Where-Object MimeType -eq 'image/jpeg'

# https://msdn.microsoft.com/ru-ru/library/hwkztaft(v=vs.110).aspx
$EncoderQuality     = [System.Drawing.Imaging.Encoder]::Quality
$EncoderParameters  = New-Object System.Drawing.Imaging.EncoderParameters(1)
$EncoderParameters.Param[0] = New-Object System.Drawing.Imaging.EncoderParameter($EncoderQuality, $Quality)

# Save the image
$ResultImage.Save($OutputFile, $ImageCodecInfo, $EncoderParameters)

The above script starts with a multi-line comment <# ... #>. When such a comment comes first and contains certain keywords, PowerShell is smart enough to build help for the script from it. That's why this type of help is called Comment Based Help:
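It can then be viewed with the standard Get-Help cmdlet:

# Display the generated help, including parameters and the .EXAMPLE section
Get-Help .\Resize-Image.ps1 -Full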

Moreover, after typing the script name, IntelliSense will suggest the relevant parameters, regardless of whether you are in the PowerShell console or a code editor:

Once again, I want to encourage you not to neglect it. If you don't have a clue what to write there, take a break and think about your function and its purpose - that helps to write meaningful help. It always works for me.

There's no need to fill in all the keywords; PowerShell is designed to be self-documenting, and if you've given meaningful and fully qualified names to the parameters, a short sentence in the .SYNOPSIS section with one example will be sufficient.


Strict mode

Similar to many other scripting languages, PowerShell is dynamically typed. This approach brings many benefits: writing simple yet powerful high-level logic becomes a matter of minutes, but when your solution grows to thousands of lines, you will face the fragility of this approach.

For example, while testing a script, you always received a set of elements as an array. In real life, it may receive just one element, and in the next condition, instead of checking the number of elements, you will be checking the number of characters or some other property, depending on the element type. The script logic will definitely break, but the runtime will pretend that everything is fine.

Enforcing strict mode avoids this kind of problem, at the cost of a little more code from you, like variable initialization and explicit type casting.

This mode is enabled by the Set-StrictMode -Version Latest cmdlet; there are other options for "strictness", but my choice is to use Latest.

In the example below, strict mode catches a call to a nonexistent property. Since there is only one element inside the folder, the type of the $Items variable as a result of execution will be FileInfo, and not the expected array of the corresponding elements:
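A minimal sketch of that situation (assuming Windows PowerShell, where a lone FileInfo object has no Count property):

Set-StrictMode -Version Latest

# Only one file in the folder, so $Items is a single FileInfo, not an array
$Items = Get-ChildItem C:\Nextcloud

# Without strict mode this silently returns an unexpected value;
# with strict mode it throws: "The property 'Count' cannot be found on this object"
$Items.Count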


To avoid such a problem, the result of running the cmdlet should be explicitly converted to an array:

$Items = @(Get-ChildItem C:\Nextcloud)

Make it a rule to always enable strict mode to avoid unexpected results from your scripts.


Error processing

ErrorActionPreference

Looking at other people's scripts, I often see either the error handling mechanism ignored completely or silent continuation explicitly forced in case of an error. Error handling is certainly not the easiest topic in programming in general, and in scripts in particular, but it definitely does not deserve to be ignored. By default, if an error occurs, PowerShell displays it and continues working (I have simplified that a little). This is convenient, for example, if you urgently need to send requests to numerous hosts: it's unproductive to interrupt and restart the whole process if one of the machines is turned off or returns a faulty response.

On the other hand, if you are doing a complex backup of a system consisting of more than one data file across more than one part of the system, you'd better be sure that your backup is consistent and that all the necessary data sets were copied without errors.

To change the behavior of cmdlets in the event of an error, there is a global variable $ErrorActionPreference, with the following list of possible values: Stop, Inquire, Continue, Suspend, SilentlyContinue.

Get-ChildItem 'C:\System Volume Information\' -ErrorAction 'Stop'

Actually, this option defines two potential strategies: either leave errors at the default behavior and set -ErrorAction only in critical places, where they must be handled; or enable it at the whole-script level by setting the global variable and specifying -ErrorAction 'Continue' for non-critical operations. I always prefer the second option, but I do not impose it on you. I only recommend understanding this topic and using this useful tool wisely, based on your needs.
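A sketch of the second strategy (paths are illustrative):

# Fail fast by default
$ErrorActionPreference = 'Stop'

# Critical step: if the source cannot be read, the script must terminate
$Files = Get-ChildItem 'D:\Data' -Recurse

# Non-critical step: a failure to clean the temp folder should not abort the run
Remove-Item 'D:\Temp\*' -Recurse -ErrorAction 'Continue'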


try/catch

In the error handler, you can control the execution flow based on the exception type. Interestingly, one can build the whole execution flow on try/catch/throw/trap, which in fact has a terrible "smell" as a severe antipattern. Not only does it uglify the code, turning it into the worst sort of "spaghetti", but exception handling is also very costly in .NET, and abusing it dramatically reduces script performance.

#requires -version 3
$ErrorActionPreference = 'Stop'

# create logger with a path to write
$Logger = Get-Logger "$PSScriptRoot\Log.txt"

# global errors trap
trap {
    $Logger.AddErrorRecord($_)
    exit 1
}

# connection attempt count
$count = 1;
while ($true) {
    try {
        # connection attempt
        $StorageServers = @(Get-ADGroupMember -Identity StorageServers | Select-Object -Expand Name)
    } catch [System.Management.Automation.CommandNotFoundException] {
        # there is no sense going ahead without the module, thus throw an exception
        throw "Get-ADGroupMember is not available, please add the Active Directory module for PowerShell feature; $($_.Exception.Message)"
    } catch [System.TimeoutException] {
        # sleep a bit and do one more attempt, if attempt count is not exceeded
        if ($count -le 3) { $count++; Start-Sleep -S 10; continue }
        # terminate and throw exception outside
        throw "Server failed to connect due to a timeout, $count attempts done; $($_.Exception.Message)"
    }
    # since no exceptions occurred, just leave the loop
    break
}

It is worth mentioning the trap operator if you haven't come across it before. It works as a global error trap: it catches everything not processed at lower levels, or thrown out of an exception handler because it could not be fixed there.

Apart from object-oriented exception handling, PowerShell also provides more familiar concepts compatible with other "classic" shells, such as error streams, return codes, and variables accumulating errors. All this is definitely convenient, sometimes with no alternative, but it is out of the scope of this overview. Luckily, there is a good book on GitHub covering this topic.

When I am not sure the target system has PowerShell v5, I use this logger compatible with version 3:

# poor man's logger, compatible with PowerShell v3
function Get-Logger {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true)]
        [string] $LogPath,
        [string] $TimeFormat = 'yyyy-MM-dd HH:mm:ss'
    )

    $LogsDir = [System.IO.Path]::GetDirectoryName($LogPath)
    New-Item $LogsDir -ItemType Directory -Force | Out-Null
    New-Item $LogPath -ItemType File -Force      | Out-Null

    $Logger = [PSCustomObject]@{
        LogPath    = $LogPath
        TimeFormat = $TimeFormat
    }

    Add-Member -InputObject $Logger -MemberType ScriptMethod AddErrorRecord -Value {
        param(
            [Parameter(Mandatory = $true)]
            [string]$String
        )
        "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') [Error] $String" | Out-File $this.LogPath -Append
    }
    return $Logger
}

But once again - do not ignore errors! 

It will save your time and nerves in the long run. Don't confuse a faulty script completing without an error with a good script. A good script will terminate when needed instead of silently continuing with an unpredictable result.


Tools

The new Windows Terminal is a decent configurable terminal with tabs and plenty of hidden features. It is not limited to PowerShell - old-school cmd.exe is also there, while I mostly use it for the Linux console of WSL2.

The next step is to install the modules oh-my-posh and posh-git - they make the prompt more functional by adding information about the current session, the status of the last executed command, and the status of the git repository in the current location.
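Both are regular modules from the PowerShell Gallery (the exact prompt configuration depends on the module version, so check their documentation):

# Install the prompt modules for the current user
Install-Module -Name posh-git   -Scope CurrentUser
Install-Module -Name oh-my-posh -Scope CurrentUser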

Visual Studio Code

Just the best editor! All the worst things I could say about PowerShell relate exclusively to the PowerShell ISE; those who remember its first three-panel version won't forget that experience! The different terminal encoding, the lack of basic editor features such as autocomplete, auto-closing brackets and code formatting, and a whole set of antipatterns generated by it - that is all about the ISE. Don't use it; use Visual Studio Code with the PowerShell extension instead - everything you want is there.

In addition to syntax highlighting, method hints, and the ability to debug scripts, the extension installs a linter that will also help you follow the practices established in the community, for example, expanding aliases in one click (via the bulb icon). In fact, this is a regular module that can be installed independently and, for example, added to your script-signing pipeline: PSScriptAnalyzer.

It is worth remembering that any action in VS Code can be performed from the Command Palette, invoked with the Ctrl + Shift + P combination. Format a piece of code pasted from a chat, sort lines alphabetically, change the indentation from spaces to tabs, and so on - all of this is in the Command Palette. Or, for example, enable full screen and the centered editor layout:

Git

People sometimes have a phobia of resolving conflicts in version control systems, usually those who work alone and rarely encounter problems of this kind. With VS Code, conflict resolution is literally a matter of mouse clicks on the parts of the code that need to be kept or replaced. How to work with version control is described briefly and well, with pictures, in the VS Code documentation - it is worth reading to the end.

Snippets

Snippets are a kind of macro/template that speeds up writing code. Definitely a must-use. Let's take a look at a few of them.

Quick object creation:


Template for comment-based help:


When a cmdlet needs a large number of parameters, it makes sense to use splatting. Here's a snippet for it:
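Roughly what such a snippet expands to (the cmdlet and values are placeholders):

# Parameters collected in a hashtable and passed with the @ splatting operator
$MailParams = @{
    To         = 'ops@example.com'
    From       = 'robot@example.com'
    Subject    = 'Backup report'
    Body       = $Report
    SmtpServer = 'smtp.example.com'
}
Send-MailMessage @MailParams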


All available snippets can be viewed with Ctrl + Alt + J:


You could improve your working environment even further - there are LOTS of goodies accumulated by the community, for example in the Awesome Lists.


Performance

The topic of performance is not as simple as it might seem at first glance. On the one hand, premature optimizations can greatly reduce the readability and maintainability of the code; saving 300ms in a script that usually runs for ten minutes is certainly destructive if it costs readability. On the other hand, there are several fairly simple techniques that increase both the readability of the code and the speed of its operation, which are quite appropriate to use on an ongoing basis. Below I will talk about some of them. If performance is everything for you, and readability fades into the background due to strict limits on service downtime during maintenance, I recommend referring to the specialized literature.

Pipeline and foreach

The easiest and always-working way to increase performance is to avoid using pipes. For type safety and convenience, when passing elements through the pipe, PowerShell wraps each of them in an object. In .NET languages this is known as boxing. Boxing is good as it guarantees safety, but it is a heavy operation which sometimes makes sense to avoid.

To improve both performance and readability, the first step would be to remove all usages of the Foreach-Object cmdlet, replacing it with the foreach statement. You may be surprised to find out that these are actually two different entities, with foreach being an alias for Foreach-Object when used in a pipeline. In practice, the main difference is that the foreach statement does not take values from the pipeline, and it works up to three times faster!
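A quick way to feel the difference yourself (wrap each block in Measure-Command; exact timings will vary):

# Pipeline cmdlet: every element is wrapped and streamed through the pipe
$null = 1..1000000 | ForEach-Object { $_ * 2 }

# foreach statement: the collection is enumerated directly, typically noticeably faster
$null = foreach ($i in 1..1000000) { $i * 2 }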

Let's consider a task where we need to process a large log to get some result, for example selecting its records and converting them to a different format:

Get-Content D:\temp\SomeHeavy.log | Select-String '328117'

The above example looks nice and is easy to read at first glance. At the same time, it contains a performance bottleneck - the pipe. To be precise, it is not the pipeline itself that is to blame, but the behavior of the Get-Content cmdlet. To feed the pipeline, it reads the file line by line, wrapping each line of the log from a plain string into an object. That increases its size greatly, decreasing the number of data objects in the cache and doing unwanted work.

Avoiding this is simple - you just need to indicate that the data must be read in full at one time by setting ReadCount to 0:

Get-Content D:\temp\SomeHeavy.log -ReadCount 0 | Select-String '328117'
For my 1GB log file the advantage of the second approach is almost threefold:


I advise you not to take my word for it but to check yourself whether this holds with a file of similar size. In general, Select-String gives a good result, and if the final time suits you, it's time to stop optimizing this part of the script. If the total execution time still strongly depends on the data-retrieval stage, you can reduce it a bit further by replacing the Select-String cmdlet. It is a very powerful and convenient tool, but to be so, Select-String adds a certain amount of metadata to its output - again, work that is not free. We can refuse the unnecessary metadata and the related work by replacing the cmdlet with the language operator:

foreach ($line in (Get-Content D:\temp\SomeHeavy.log -ReadCount 0)) {
    if ($line -match '328117') {
        $line
    }
}

As per my tests, the execution time decreased to 30 seconds, a gain of about 30%. The overall readability of the code decreased only a little. But if you operate with tens of gigabytes of logs, then this is your way, no doubt.

Another thing I would like to mention is the -match operator, a search by a regular expression pattern. In this particular case, the search amounts to simple substring matching, but things are not always that simple. A regular expression can be extremely complex and progressively increase the execution time, so be careful with them.

The next task is to process a heavy log file. Let's write the straightforward solution, adding the current date to each selected line and writing to the file via the pipe:

foreach ($line in (Get-Content D:\temp\SomeHeavy.log -ReadCount 0)) {
    if ($line -match '328117') {
        "$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line" | Out-File D:\temp\Result.log -Append
    }
}

And the measurement results done by the Measure-Command cmdlet:

Hours             : 2
Minutes           : 20
Seconds           : 9
Milliseconds      : 101

Now, let's improve the result.

It is obvious that writing to a file on a per-line basis isn't optimal; it is much better to fill a buffer that is periodically flushed to disk - ideally, flushed once. It is also worth remembering that strings in .NET (and therefore in PowerShell as well) are immutable, and any string manipulation allocates memory for a new string, while the old one remains waiting for the garbage collector. This is expensive both in terms of speed and memory.

.NET has a specialized class to solve this problem that allows you to change strings while encapsulating the logic for more careful memory allocation: StringBuilder. When the object is created, a buffer is allocated in RAM to which new lines are appended without re-allocating memory; if the buffer is not big enough for a new line, a new one twice as large is created and the work continues with it. In addition to greatly reducing the number of memory allocations, this strategy can be tweaked further: if you know the approximate amount of memory the lines will occupy, you can set it in the constructor when creating the object.

$StringBuilder = New-Object System.Text.StringBuilder
foreach ($line in (Get-Content D:\temp\SomeHeavy.log -ReadCount 0)) {
    if ($line -match '328117') {
        $null = $StringBuilder.AppendLine("$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line")
    }
}
Out-File -InputObject $StringBuilder.ToString() -FilePath D:\temp\Result.log -Append -Encoding UTF8

The execution time of this code is only 5 minutes, instead of the previous two hours and twenty minutes:

Hours             : 0
Minutes           : 5
Seconds           : 37
Milliseconds      : 150

It is worth noting the Out-File -InputObject construct; its essence is to get rid of the pipeline once again. This method is faster than a pipe and works with all cmdlets: any value that a cmdlet's signature accepts from the pipe can also be passed as a parameter. The easiest way to find out which parameter takes the values coming through the pipe is to run Get-Help on the cmdlet with the -Full switch; in the list of parameters, one of them should contain Accept pipeline input? true (ByValue):

-InputObject <psobject>

    Required?                    false
    Position?                    Named
    Accept pipeline input?       true (ByValue)
    Parameter set name           (All)
    Aliases                      None
    Dynamic?                     false

In both cases, PowerShell limited itself to three gigabytes of memory:

The previous approach is fairly good, except perhaps for the 3GB of memory consumption.

Let's try to reduce memory consumption using another .NET class written to solve such problems - StreamReader:

$StringBuilder = New-Object System.Text.StringBuilder
$StreamReader  = New-Object System.IO.StreamReader 'D:\temp\SomeHeavy.log'
while ($null -ne ($line = $StreamReader.ReadLine())) {
    if ($line -match '328117') {
        $null = $StringBuilder.AppendLine("$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line")
    }
}
$StreamReader.Dispose()
Out-File -InputObject $StringBuilder.ToString() -FilePath C:\temp\Result.log -Append -Encoding UTF8
Hours             : 0
Minutes           : 5
Seconds           : 33
Milliseconds      : 657

The execution time has remained almost the same, but the memory consumption and its pattern have changed. In the previous example, reading the file into memory occupied space for the entire file at once (mine is over a gigabyte, and the script's memory usage was about three gigabytes). With the stream reader, the memory occupied by the process grew slowly until it reached 2GB. I did not manage to screenshot the final amount of memory occupied, but here is a screenshot of what happened towards the end of the run:

The behavior of the program in terms of memory consumption is quite natural: its input is, roughly speaking, a "pipe", and the output is our StringBuilder - a "pool" that keeps filling until the end of the program.

Let's set a buffer size of 100MB to remove unnecessary allocations, and start dumping the contents to the file when approaching the end of the buffer. I implemented it straightforwardly, by checking whether the buffer has passed the 90% mark of its total size (it would be even better to move this calculation out of the loop):

$BufferSize     = 104857600
$StringBuilder  = New-Object System.Text.StringBuilder $BufferSize
$StreamReader   = New-Object System.IO.StreamReader 'C:\temp\SomeHeavy.log'
while ($null -ne ($line = $StreamReader.ReadLine())) {
    if ($line -match '1443') {
        # check if we are approaching the end of the buffer
        if ($StringBuilder.Length -gt ($BufferSize - ($BufferSize * 0.1))) {
            Out-File -InputObject $StringBuilder.ToString() -FilePath C:\temp\Result.log -Append -Encoding UTF8
            $null = $StringBuilder.Clear()
        }
        }
        $null = $StringBuilder.AppendLine("$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line")
    }
}
Out-File -InputObject $StringBuilder.ToString() -FilePath C:\temp\Result.log -Append -Encoding UTF8
$StreamReader.Dispose()
Hours             : 0
Minutes           : 5
Seconds           : 53
Milliseconds      : 417

The maximum memory consumption was 1 GB with almost the same execution speed:

Of course, the absolute numbers of consumed memory will differ from one machine to another; it all depends on how much memory is available and how hard the runtime has to work to free it. If memory is critical for you, but a few percent of performance is not, then you can reduce consumption even further by using a StreamWriter.
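A sketch of that variant, keeping the rest of the loop from the examples above (buffering is left to the writer's defaults):

$StreamReader = New-Object System.IO.StreamReader 'D:\temp\SomeHeavy.log'
$StreamWriter = New-Object System.IO.StreamWriter('D:\temp\Result.log', $true)   # $true = append

while ($null -ne ($line = $StreamReader.ReadLine())) {
    if ($line -match '328117') {
        $StreamWriter.WriteLine("$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line")
    }
}

$StreamWriter.Dispose()
$StreamReader.Dispose()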

In my opinion, the idea is clear - the main thing is to localize the bottleneck. There's no need to reinvent the wheel: people have already solved most of these problems, so it is always good to look for a solution in the standard library first. But if Select-String and Out-File suit your timings and the machine does not crash with an OutOfMemoryException, then use them - simplicity and readability are more important.


Native binaries

Often, amazed by all the convenience of PowerShell, developers begin to opt for built-in cmdlets rather than system binaries. On the one hand, that is understandable - convenience is king; on the other, PowerShell is, first of all, a shell, and launching binaries is its primary purpose. PowerShell does that pretty well, so why not use it?

A good example is getting the relative paths of all files in a directory and its subdirectories (with lots of files).

Using the native command, execution takes about five times less time:

$CurrentPath = (Get-Location).Path + '\'
$StringBuilder = New-Object System.Text.StringBuilder
foreach ($Line in (&cmd /c dir /b /s /a-d)) {
    $null = $StringBuilder.AppendLine($Line.Replace($CurrentPath, '.'))
}
$StringBuilder.ToString()
Hours             : 0
Minutes           : 0
Seconds           : 3
Milliseconds      : 9

$StringBuilder = New-Object System.Text.StringBuilder
foreach ($Line in (Get-ChildItem -File -Recurse | Resolve-Path -Relative)) {
    $null = $StringBuilder.AppendLine($Line)
}
$StringBuilder.ToString()
Hours             : 0
Minutes           : 0
Seconds           : 16
Milliseconds      : 337

Assigning the output to $null is the cheapest and easiest way of suppressing output. The most expensive, as you might have guessed already, is using a pipe to send the output to Out-Null.

Moreover, such suppression (assigning the result to $null) also reduces the execution time, albeit not by much:

# works faster:
$null = $StringBuilder.AppendLine($Line)

# works slower:
$StringBuilder.AppendLine($Line) | Out-Null

Once I had the task of synchronizing directories with a large number of files. This was only part of the work of a rather large script, a kind of preparation stage. Directory synchronization using Compare-Object looked decent and compact, but required more running time than the whole window allocated to me. The way out of this situation was robocopy.exe, for which I wrote a wrapper (as a class in PowerShell 5) as a compromise. Here's the code:

class Robocopy {
    [String]$RobocopyPath

    Robocopy () {
        $this.RobocopyPath = Join-Path $env:SystemRoot 'System32\Robocopy.exe'
        if (-not (Test-Path $this.RobocopyPath -PathType Leaf)) {
            throw 'Robocopy not found'
        }

    }
    [void]CopyFile ([String]$SourceFile, [String]$DestinationFolder) {
        $this.CopyFile($SourceFile, $DestinationFolder, $false)
    }
    [void]CopyFile ([String]$SourceFile, [String]$DestinationFolder, [bool]$Archive) {
        $FileName   = [IO.Path]::GetFileName($SourceFile)
        $FolderName = [IO.Path]::GetDirectoryName($SourceFile)

        $Arguments = @('/R:0', '/NP', '/NC', '/NS', '/NJH', '/NJS', '/NDL')
        if ($Archive) {
            $Arguments += $('/A+:a')
        }
        $ErrorFlag = $false
        &$this.RobocopyPath $FolderName $DestinationFolder $FileName $Arguments | Foreach-Object {
            if ($ErrorFlag) {
                $ErrorFlag = $false
                throw "$_ $ErrorString"
            } else {
                if ($_ -match '(?<=\(0x[\da-f]{8}\))(?<text>(.+$))') {
                    $ErrorFlag   = $true
                    $ErrorString = $matches.text
                } else {
                    $Logger.AddRecord($_.Trim())
                }
            }
        }
        if ($LASTEXITCODE -eq 8) {
            throw 'Some files or directories could not be copied'
        }
        if ($LASTEXITCODE -eq 16) {
            throw 'Robocopy did not copy any files. Check the command line parameters and verify that Robocopy has enough rights to write to the destination folder.'
        }
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, '*.*', '', $false)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [Bool]$Archive) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, '*.*', '', $Archive)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [String]$Include) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, $Include, '', $false)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [String]$Include, [Bool]$Archive) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, $Include, '', $Archive)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [String]$Include, [String]$Exclude) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, $Include, $Exclude, $false)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [String]$Include, [String]$Exclude, [Bool]$Archive) {
        $Arguments = @('/MIR', '/R:0', '/NP', '/NC', '/NS', '/NJH', '/NJS', '/NDL')
        if ($Exclude) {
            $Arguments += $('/XF')
            $Arguments += $Exclude.Split(' ')
        }
        if ($Archive) {
            $Arguments += $('/A+:a')
        }
        $ErrorFlag = $false
        &$this.RobocopyPath $SourceFolder $DestinationFolder $Include $Arguments | Foreach-Object {
            if ($ErrorFlag) {
                $ErrorFlag = $false
                throw "$_ $ErrorString"
            } else {
                if ($_ -match '(?<=\(0x[\da-f]{8}\))(?<text>(.+$))') {
                    $ErrorFlag = $true
                    $ErrorString = $matches.text
                } else {
                    $Logger.AddRecord($_.Trim())
                }
            }
        }
        if ($LASTEXITCODE -eq 8) {
            throw 'Some files or directories could not be copied'
        }
        if ($LASTEXITCODE -eq 16) {
            throw 'Robocopy did not copy any files. Check the command line parameters and verify that Robocopy has enough rights to write to the destination folder.'
        }
    }
}

Attentive readers will ask: how come Foreach-Object is used in a class that fights for performance? This is a valid question, and the given example is one of the exceptions where this cmdlet is justified. Unlike foreach, the Foreach-Object cmdlet does not wait for the command feeding the pipe to finish - processing happens as a stream, which in this situation means, for example, throwing exceptions immediately rather than waiting for the end of the synchronization process. Parsing the output of a utility is a suitable place for this cmdlet.

Using the wrapper described above is trivially simple; all you need to add is exception handling:

$Robocopy = New-Object Robocopy

# copy a single file
$Robocopy.CopyFile($Source, $Dest)

# syncing folders
$Robocopy.SyncFolders($SourceDir, $DestDir)

# syncing .xml only with setting an archive bit
$Robocopy.SyncFolders($SourceDir, $DestDir, '*.xml', $true)

# syncing all the files except *.zip *.tmp *.log with setting an archive bit
$Robocopy.SyncFolders($SourceDir, $DestDir, '*.*', '*.zip *.tmp *.log', $true)

I wrote this code several years ago, so the implementation may not be the best. The above example uses PowerShell classes, which carry the overhead of crossing from PowerShell to the CLR and back, actively consuming the stack. In addition, classes have some drawbacks:

  • they require PowerShell version 5, though as of 2021 that is not too critical
  • they do not support generating help from comments
  • they do not work nicely in pipelines (ValueFromPipeline/ValueFromPipelineByPropertyName), although you can still access them via % { SomeClass.Method($_.xxx) }
  • the whole class must be within a single file
  • when a class is defined within a module, it is not easily consumed from outside
  • classes are incomplete: there are no namespaces, private fields, or getters/setters

But thinking more broadly, do we really need to run several instances of Robocopy to justify implementing it as a class? Instead, you could just wrap Robocopy into a module:

Import-Module robocopy.psm1

# copy a single file
Robocopy-CopyFile $Source $Dest

# syncing folders
Robocopy-SyncFolders $SourceDir $DestDir

# syncing .xml only with setting an archive bit
Robocopy-SyncFolders $SourceDir $DestDir -Include '*.xml' -Archive $true

# syncing all the files except *.zip *.tmp *.log with setting an archive bit
Robocopy-SyncFolders $SourceDir $DestDir -Include '*.*' -Exclude '*.zip *.tmp *.log' -Archive $true
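For completeness, a rough sketch of what one function inside such a robocopy.psm1 could look like (simplified; the error-parsing logic from the class version is omitted):

# robocopy.psm1 - illustrative sketch, not the full wrapper
function Robocopy-CopyFile {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true)] [string]$SourceFile,
        [Parameter(Mandatory = $true)] [string]$DestinationFolder,
        [switch]$Archive
    )

    $FileName   = [IO.Path]::GetFileName($SourceFile)
    $FolderName = [IO.Path]::GetDirectoryName($SourceFile)
    $Arguments  = @('/R:0', '/NP', '/NC', '/NS', '/NJH', '/NJS', '/NDL')
    if ($Archive) { $Arguments += '/A+:a' }

    & "$env:SystemRoot\System32\Robocopy.exe" $FolderName $DestinationFolder $FileName $Arguments | Out-Null
    if ($LASTEXITCODE -ge 8) {
        throw "Robocopy failed with exit code $LASTEXITCODE"
    }
}

Export-ModuleMember -Function Robocopy-CopyFile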

A few more tips to improve your code:

  • Take a look at Plaster for generating boilerplate for modules.
  • Did you know you can unit-test PowerShell? Pester is a test framework for PowerShell. It provides a language that allows you to define test cases and the Invoke-Pester cmdlet to execute these tests and report the results (a minimal sketch below).
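A minimal sketch of a Pester test (Pester 4+ syntax; the function under test is hypothetical):

# Get-Square.Tests.ps1
function Get-Square { param([int]$Value) $Value * $Value }

Describe 'Get-Square' {
    It 'squares a positive number' {
        Get-Square -Value 3 | Should -Be 9
    }
    It 'returns zero for zero' {
        Get-Square -Value 0 | Should -Be 0
    }
}

# Run with: Invoke-Pester -Path .\Get-Square.Tests.ps1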

Coming back to script performance - it is a tricky topic. Micro-optimizations may take more time than the benefit they bring, at the cost of the code's maintainability and readability; the cost of supporting such code can be higher than any profit from the optimization.

At the same time, there are a number of simple recommendations that make your code simpler, clearer, and faster, if you just start using them:

  • use the foreach statement instead of the Foreach-Object cmdlet in scripts
  • minimize the number of pipelines
  • read/write files at once, instead of line-by-line (see the sketch below)
  • use StringBuilder and other specialized classes
  • profile your code to understand the bottlenecks before optimizing it
  • do not avoid executing native binaries
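
A quick sketch of the file-reading and StringBuilder recommendations - the file name is made up and the exact timings will of course vary per machine:

# reading a file line-by-line through the pipeline vs reading it at once
Measure-Command { Get-Content .\large.log | ForEach-Object { $_ } }
Measure-Command { Get-Content .\large.log -Raw }

# building a large string: += allocates a new string on every iteration,
# while StringBuilder appends in place
$sb = [System.Text.StringBuilder]::new()
foreach ($i in 1..10000) { $null = $sb.Append("line $i`n") }
$result = $sb.ToString()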

And once again: do not rush to optimize something without a real need as premature optimization can break it all.


Jobs

Once you have optimized everything and reached some compromise between readability and speed, but an operation is still lengthy or the data is huge, you still need to reduce the running time. In this case, parallel execution of some parts of the code becomes the answer (obviously, make sure the machine has all the necessary resources).

Since PowerShell version 2, cmdlets for working with jobs have been available (Get-Command *-Job); you can read more here. There is nothing conceptually complicated about jobs: we define a script block, start the task, and collect the results at the right time:

$Job = Start-Job -ScriptBlock {
    Write-Output 'Good night'
    Start-Sleep -S 10
    Write-Output 'Good morning'
}

$Job | Wait-Job | Receive-Job
Remove-Job $Job

The code above is intended solely for demonstration purposes and carries no real value. As a great example of using jobs, I recommend playing with and debugging this multithreaded script for network ping.

One problem that seems like a problem, but mostly isn't, should still be mentioned: every job wants a little memory to run faster and is launched as a full-fledged operating system process, with all the pros and cons of that approach.

In fact, every job runs in an individual runspace that reserves 30-50 megabytes, as any other .NET process would do. But unlike other .NET apps, this is not really consumed memory, just a cache that speeds up the work. Of course, if you spin up 100 parallel jobs for asynchronous pinging, it will take a few GBs of RAM, but that is doable on most modern machines.

Jobs will help you run any of your parallel tasks and make it convenient. Be sure to study this mechanism: jobs are the best choice for solving a problem simply, at a fairly high level of abstraction, while keeping the number of lines to a minimum. Just keep in mind - your scripts must remain readable after you; scripts are written for people.
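
Just to show the plumbing, here is a rough sketch of the idea behind such a parallel ping - one job per host, collected all at once (the host addresses are made up):

$hosts = '192.168.0.1', '192.168.0.2', '192.168.0.3'

# start one background job per host
$jobs = foreach ($h in $hosts) {
    Start-Job -ArgumentList $h -ScriptBlock {
        param($HostName)
        [pscustomobject]@{
            Host   = $HostName
            Online = Test-Connection -ComputerName $HostName -Count 1 -Quiet
        }
    }
}

# wait for all of them and gather the results
$jobs | Wait-Job | Receive-Job | Select-Object Host, Online
$jobs | Remove-Job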

But it so happens that this abstraction is sometimes no longer enough due to the architectural limitations of such a solution - for example, in this paradigm it is difficult to make an interactive GUI with form values bound to some variables.


Runspaces

A whole series of articles on the Microsoft blog is devoted to the concept of runspaces, and I highly recommend referring to the primary source - Beginning Use of PowerShell Runspaces: Part 1. In short, a runspace is a separate PowerShell thread that runs within the same operating system process, and therefore does not carry the overhead of spawning a new process.

If you like the concept of lightweight threads and want to launch dozens of them (there is no concept of channels in PowerShell), then I have good news for you: for convenience, all the low-level logic in this module repository on GitHub (there are gifs) is already wrapped into the more familiar concept of jobs. In the meantime, I'll demonstrate how to work with runspaces natively.

As an example of using runspaces, let's build a simple WPF form, which is rendered in the same OS process as the main PowerShell process, but in a separate runtime thread. Interaction occurs through a thread-safe hashtable, so you do not need to add any complexity like mutexes - it just works! The advantage of this approach is that you can implement logic of any complexity and duration in the main script without blocking the main execution flow and "freezing" the form (take a look at the last line of the script below: despite sleeping the thread for half a minute, there is no blocking).

In this specific example, only one runspace is launched, although nothing stops you from creating a few more (and a pool for them, for convenience).

# Thread-synchronized hashtable
$GUISyncHash = [hashtable]::Synchronized(@{})

<#
    WPF form
#>
$GUISyncHash.FormXAML = [xml](@"
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Sample WPF Form" Height="510" Width="410" ResizeMode="NoResize">
    <Grid>
        <Label Content="Sample form" HorizontalAlignment="Left" Margin="10,10,0,0" VerticalAlignment="Top" Height="37" Width="374" FontSize="18"/>
        <Label Content="From:" HorizontalAlignment="Left" Margin="16,64,0,0" VerticalAlignment="Top" Height="26" Width="48"/>
        <TextBox x:Name="BackupPath" HorizontalAlignment="Left" Height="23" Margin="69,68,0,0" TextWrapping="Wrap" Text="" VerticalAlignment="Top" Width="300"/>
        <Label Content="To:" HorizontalAlignment="Left" Margin="16,103,0,0" VerticalAlignment="Top" Height="26" Width="35"/>
        <TextBox x:Name="RestorePath" HorizontalAlignment="Left" Height="23" Margin="69,107,0,0" TextWrapping="Wrap" Text="" VerticalAlignment="Top" Width="300"/>
        <Button x:Name="FirstButton" Content="√" HorizontalAlignment="Left" Margin="357,68,0,0" VerticalAlignment="Top" Width="23" Height="23"/>
        <Button x:Name="SecondButton" Content="√" HorizontalAlignment="Left" Margin="357,107,0,0" VerticalAlignment="Top" Width="23" Height="23"/>
        <CheckBox x:Name="Check" Content="Check me" HorizontalAlignment="Left" Margin="16,146,0,0" VerticalAlignment="Top" RenderTransformOrigin="-0.113,-0.267" Width="172"/>
        <Button x:Name="Go" Content="Run" HorizontalAlignment="Left" Margin="298,173,0,0" VerticalAlignment="Top" Width="82" Height="26"/>
        <ComboBox x:Name="Droplist" HorizontalAlignment="Left" Margin="16,173,0,0" VerticalAlignment="Top" Width="172" Height="26"/>
        <ListBox x:Name="ListBox" HorizontalAlignment="Left" Height="250" Margin="16,210,0,0" VerticalAlignment="Top" Width="364"/>
    </Grid>
</Window>
"@)

<#
    Form thread
#>
$GUISyncHash.GUIThread = {
    $GUISyncHash.Window       = [Windows.Markup.XamlReader]::Load((New-Object System.Xml.XmlNodeReader $GUISyncHash.FormXAML))
    $GUISyncHash.Check        = $GUISyncHash.Window.FindName("Check")
    $GUISyncHash.GO           = $GUISyncHash.Window.FindName("Go")
    $GUISyncHash.ListBox      = $GUISyncHash.Window.FindName("ListBox")
    $GUISyncHash.BackupPath   = $GUISyncHash.Window.FindName("BackupPath")
    $GUISyncHash.RestorePath  = $GUISyncHash.Window.FindName("RestorePath")
    $GUISyncHash.FirstButton  = $GUISyncHash.Window.FindName("FirstButton")
    $GUISyncHash.SecondButton = $GUISyncHash.Window.FindName("SecondButton")
    $GUISyncHash.Droplist     = $GUISyncHash.Window.FindName("Droplist")

    $GUISyncHash.Window.Add_SourceInitialized({
        $GUISyncHash.GO.IsEnabled = $true
    })

    $GUISyncHash.FirstButton.Add_Click({
        $GUISyncHash.ListBox.Items.Add('Click FirstButton')
    })

    $GUISyncHash.SecondButton.Add_Click({
        $GUISyncHash.ListBox.Items.Add('Click SecondButton')
    })

    $GUISyncHash.GO.Add_Click({
        $GUISyncHash.ListBox.Items.Add('Click GO')
    })

    $GUISyncHash.Window.Add_Closed({
        Stop-Process -Id $PID -Force
    })

    $null = $GUISyncHash.Window.ShowDialog()
}

$Runspace = @{}
$Runspace.Runspace = [RunspaceFactory]::CreateRunspace()
$Runspace.Runspace.ApartmentState = "STA"
$Runspace.Runspace.ThreadOptions = "ReuseThread"
$Runspace.Runspace.Open()
$Runspace.psCmd = { Add-Type -AssemblyName PresentationCore, PresentationFramework, WindowsBase }.GetPowerShell()
$Runspace.Runspace.SessionStateProxy.SetVariable('GUISyncHash', $GUISyncHash)
$Runspace.psCmd.Runspace = $Runspace.Runspace
$Runspace.Handle = $Runspace.psCmd.AddScript($GUISyncHash.GUIThread).BeginInvoke()

Start-Sleep -S 1

$GUISyncHash.ListBox.Dispatcher.Invoke("Normal", [action] {
    $GUISyncHash.ListBox.Items.Add('Hi there!')
})

$GUISyncHash.ListBox.Dispatcher.Invoke("Normal", [action] {
    $GUISyncHash.ListBox.Items.Add('Populating a dropdown')
})

foreach ($item in 1..5) {
    $GUISyncHash.Droplist.Dispatcher.Invoke("Normal", [action] {
        $GUISyncHash.Droplist.Items.Add($item)
        $GUISyncHash.Droplist.SelectedIndex = 0
    })
}

$GUISyncHash.ListBox.Dispatcher.Invoke("Normal", [action] {
    $GUISyncHash.ListBox.Items.Add('While ($true) { Start-Sleep -S 10 }')
})

while ($true) { Start-Sleep -S 30 }


Note: you do not need full Visual Studio just for the sake of drawing WPF forms - use this simple tool to quickly generate the desired markup.


WinRM

It is mostly known as PowerShell Remoting and is one of my most-used PowerShell features. Simply put, it allows you to apply the full power of PowerShell in the context of any remote machine, so that all the code you execute runs on that machine. Of course, it is also possible to pass parameters into the remotely-running script and get the results back.
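
As a quick illustration of passing parameters in and getting results back, here is a sketch - the machine name, credentials, and paths are placeholders:

$Cred = Get-Credential

# parameters go in via -ArgumentList and are received by the param() block;
# whatever the script block outputs comes back to the caller
$result = Invoke-Command -ComputerName RemoteMachine -Credential $Cred -ScriptBlock {
    param($Path, $Filter)
    Get-ChildItem -Path $Path -Filter $Filter | Select-Object Name, Length
} -ArgumentList 'C:\inetpub\wwwroot', '*.config'

$result | Format-Table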

To make the whole magic happen, WinRM must be enabled on both the client and the server - which is understandable, given the ultimate power it brings.

Client

As for me, this part of PowerShell was written by the paranoid: everything, whether allowed or not, is forbidden - so connecting to servers is forbidden too. I am exaggerating a little, of course. If everything lives in the same domain, it will all be transparent. But in my situation, the machine is outside the domain and not even on the same network. In the end, I decided I would be careful and simply allow connections to any outside host.

Set-Item WSMan:\localhost\Client\TrustedHosts -Value '*' -Force
Restart-Service WinRM

Server

It looks like a simple PowerShell one-liner, but there is a catch: enabling PowerShell Remoting cannot be done remotely. You may need either physical access or RDP. Eventually, you will run:

Enable-PSRemoting -SkipNetworkProfileCheck -Force

Despite being so short, this command does loads of actions:

  • starts the WinRM service and sets its startup type to automatic
  • creates a listener
  • adds firewall exceptions
  • enables all registered PowerShell session configurations to receive instructions from remote machines
  • registers the "Microsoft.PowerShell" session configuration if it is not already registered, and the same for the x64 variant
  • removes the "Deny Everyone" restriction from the security descriptor of all session configurations
  • restarts the WinRM service

Once both the server and the client are configured, it becomes possible to enter an interactive remote session, so that everything you do happens in the remote context:

Enter-PSSession -ComputerName RemoteMachine -Credential "RemoteMachine\Administrator"

Alternatively, you can execute remote code from within a ScriptBlock on the host machine in a similar manner:

$pass = ConvertTo-SecureString -AsPlainText '123' -Force
$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList 'Martin',$pass
Invoke-Command -ComputerName 192.168.173.14 -ScriptBlock { Get-ChildItem C:\ } -credential $Cred

You can do whatever you want, for example copy files both ways:

$pass = ConvertTo-SecureString -AsPlainText '123' -Force                                       # 123 - password
$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList 'Martin',$pass      # Martin - username
$Session = New-PSSession -ComputerName RemoteMachine -Credential $Cred                        
$remotePath = Invoke-Command -Session $Session -ScriptBlock { New-Item -ItemType Directory -Force -Path FolderName }
Copy-Item -Path "c:\Some\Path\To\Local\Script.ps1" -Destination $remotePath.FullName -ToSession $Session
Remove-PSSession $Session

Remoting however has certain restrictions:

  • you cannot make a second hop - only one session deep; you cannot connect further from inside the session
  • you cannot use commands that have a graphical interface - if you do, the shell will hang until Ctrl+C is pressed
  • you cannot run commands that have their own shell, for example nslookup or netsh
  • you can only run scripts if the execution policy on the remote machine allows them to run
  • you cannot attach to an interactive session; you enter as a network logon, as if you had attached to a network drive. Therefore, logon scripts will not run, and you may not get the home folder on the remote machine (an extra reason not to map home folders with logon scripts)
  • you will not be able to interact with the users on a remote machine even if they are logged in: you cannot show them a window or print anything for them

Once done with the interactive session, you can Exit-PSSession.

The whole concept of Sifon - an application I built for installing and managing Sitecore on both local and remote machines - is based on WinRM along with runspaces, threads, and plenty of other tricks described in this blog post.

Conclusion

PowerShell is a powerful and easy-to-use environment for working with Windows infrastructure. It is good conceptually, convenient with its syntax and self-documenting cmdlet names, and it can make the most of itself as an environment and of you as a specialist - you just need to understand its concepts and start having fun. And of course, this technology fully justifies its title, as it is truly a Power Shell.


A nice way of using HTML Helper for accessing Rendering Parameters along with Glass Mapper

I love Glass Mapper, as (once configured) it takes much of the manual effort of wiring up your models away from you.

Working with Rendering Parameters quite often, I decided to simplify their usage with a strongly-typed HTML Helper powered by Glass Mapper. Here is what the usage looks like:

<div class="@(Html.GetRenderingParametersClassFor<ISingleClass>(m => m.Class))">
   your content here
</div>

It benefits from simplicity of usage and also from IntelliSense support. Here's an example of what ISingleClass looks like:

[SitecoreType(TemplateId = ISingleClassConstants.TemplateIdString)]
public partial interface ISingleClass : IGlassBase
{   
    [SitecoreField(ISingleClassConstants.ClassFieldName)]
    Guid Class { get; }
}

In Sitecore, items of the given type may look similar to the below:


OK, won't make it any longer: here's the code that does all the magic:

public static class HtmlHelperExtensions
{
    public static string GetRenderingParametersClassFor<T>(this HtmlHelper html, Func<T, object> getField, string fieldName = "Class") where T : class
    {
        var selectedItem = GetSelectedRenderingParameter(html, getField);
        return selectedItem?[fieldName] ?? "";
    }


    private static Item GetSelectedRenderingParameter<T>(this HtmlHelper html, Func<T, object> getField) where T : class
    {
        T renderingParameters = GetRenderingParameters<T>(html);

        var selectedItemId = getField(renderingParameters)?.ToString();
        return Context.Database.GetItem(selectedItemId);
    }

    public static T GetRenderingParameters<T>(this HtmlHelper html) where T : class
    {
        //TODO: wire up ISitecoreService to be resolved via the DI of your choice
        var sitecoreService = new SitecoreService(PageContext.Current.Database);

        var parameters = RenderingContext.CurrentOrNull.Rendering["Parameters"];
        var nameValueCollection = WebUtil.ParseUrlParameters(parameters);
        var config = sitecoreService.GlassContext[typeof(T)] as SitecoreTypeConfiguration;

        var renderingParametersModelFactory = new RenderingParametersModelFactory(sitecoreService);
        return renderingParametersModelFactory.CreateModel<T>(nameValueCollection, config.TemplateId);
    }
}


Bonus: in some cases you may also want to select an HTML tag from the rendering parameters - for example, your editors could choose which heading tag they want for your rendering (like H1, H2, H3, H4, or maybe just a plain paragraph P-tag):

In that case you can add one more method into the above HTML helper class:

public static MvcTag TagFrom<T>(this HtmlHelper html, Func<T, object> getField, string className = null) where T : class
{
    var selectedItem = GetSelectedRenderingParameter(html, getField);
    string tag = selectedItem?["Tag"] ?? "";

    html.ViewContext.ViewBag.Tag = tag;

    if (!string.IsNullOrWhiteSpace(tag)) 
    {
        var tagBuilder = new TagBuilder(tag);
    
        if (!string.IsNullOrWhiteSpace(className))
        {
            tagBuilder.Attributes.Add("class", className);
        }
        
        html.ViewContext.Writer.Write(tagBuilder.ToString(TagRenderMode.StartTag));
    }

    return new MvcTag(html.ViewContext);
}

Once compiled, you can render your tags as below:

@using (Html.TagFrom<ITagRenderingParameter>(m => m.Tag, "tag_class"))
{
    @Html.Glass().Editable(m => m.Title)
}

And choose the desired HTML tag right from a Rendering Parameters dropdown:

Hope this helps!

XBlog on Sitecore 10? That's possible!

I was using XBlog as a nice and simple solution for maintaining blogs on Sitecore (at least for editors), but unfortunately the original repo has not kept up with the progress Sitecore XP has made - the last update dates back 3 years, to some Sitecore 9 related configs, while the rest of it is 6 years old.

I tried using the 'Sitecore 9' branch of the original repo, but unfortunately it did not go well. When installed from the packages, it had issues even with the declared version 9.X, which in fact spans a very wide range of changes between 9.0.1 and 9.3 - not to mention version 10.*. Therefore:

I decided to refactor it instead!

You may find the successful result of this exercise published in my corresponding GitHub repository. It was worked out for 10.1 and tested well there, but since all 10.X versions share the same runtime, it should work universally on all of them.

Few notes on what has been done:

  • much unwanted legacy stuff, such as support for WebForms, has been entirely removed
  • the IDs of items have been retained, which helps with upgrading existing solutions containing thousands of posts
  • changes were made to reflect the reworked Content Search of the XP platform
  • found and fixed a bug in the bucket item creation logic, related to event suppression
  • serialization was changed to the CLI-based one, officially released as part of Sitecore 10
  • a few more minor changes and improvements in references, runtime, configuration, etc.

Please also note: XBlog uses fast query, which was declared deprecated in Sitecore 10.1, but that particular code still works perfectly well, and I cross-tested it in the debugger to confirm that the fast queries return the expected results. Just for future reference, fast queries are used in the code\Areas\XBlog\Buckets\BucketFolderConfigurationManager.cs and code\Areas\XBlog\Import\ImportManager.cs files.

The resulted code can be found here: XBlog for Sitecore 10.1

Upgrading Sitecore like a Pro

Sooner or later you'll face it - the need for a Sitecore version upgrade. However, each Sitecore instance is unique; there is no universal method for an upgrade, therefore each upgrade path is bespoke.

In the past 10 years of working with Sitecore I have done numerous upgrades, which resulted in an Upgrade Planning Strategy and a set of Upgrade Tips and Tricks. These tips will help you save 2-3 times (!) the effort spent, compared to head-on approaches.
Last but not least, versions 10 and 10.1 brought new challenges and options for migrating existing solutions into containers. I will explain your options and propose the best approach on this as well.

I have submitted a session proposal for SUGCON 2021, so fingers crossed that I get chosen. Once that happens, I will share the whole presentation and slides by updating this blog post.

There is still no "silver bullet" for Sitecore upgrades, but following the tips from my proposed session will eliminate risks, reduce effort, and bring you confidence while upgrading your instances.

If chosen, I will tell you about:

  1. How to approach the whole instance upgrade
  2. What are the upgrade time-wasters
  3. Common and potential traps to avoid
  4. Dealing with configuration upgrade, and why that's not as complicated as it initially seems
  5. Upgrading your solution codebase, including ORM (Glass etc.) and DI
  6. Employing automations with PowerShell
  7. Upgrading databases and how to approach it
  8. Even more automation to add
  9. How to treat deprecated Sitecore APIs and obsolete code
  10. Migrating forms from WFFM
  11. Upgrading to version 10 containers and how 10.1 changes the upgrades
  12. Testing your upgrade strategies
  13. Going live
  14. Summary of tips and findings from this session
Stay tuned!

HTTPS, SSL and TLS FAQs

There are occasional cases of misunderstanding SSL or misinterpreting certain aspects of it. Therefore I have put together these FAQs, starting from the very basics, which may clear up any confusion.

1. What is SSL?

SSL stands for Secure Sockets Layer, a protocol used to encrypt and authenticate data transmitted between applications, such as a browser and a web server.

2. Where did SSL come from?

SSL version 1.0 was developed by Netscape in the early 1990s, but due to security flaws it was never published. The first public release was SSL 2.0, which came out in February 1995. It was an improvement, but still flawed, and after a complete redesign version 3.0 was approved.

3. What is its purpose?

The SSL protocol prevents attackers from reading and modifying transactions and messages between the browser and the server, for example, the transfer of credit card data, logins, etc. This ensures that all data remains confidential and protected.

4. What is a TLS certificate?

TLS stands for Transport Layer Security - the successor and improved version of SSL. It has more reliable encryption algorithms, but despite some similarities, it is considered to be a different standard.

5. What protocols and versions are used nowadays?

SSL itself, surprisingly, is not used anymore. As described above, there were initially 3 versions (1.0, 2.0, and 3.0). Then, based on SSL 3.0, the TLS 1.0 protocol was created. TLS was later upgraded to 1.1 and 1.2, and then 1.3 came out. Currently, only the last two are used.

6. How does an SSL certificate operate?

When the browser connects to the site, an "SSL handshake" occurs ("handshake" keys are created), then cryptographic data is exchanged and, in the end, session keys are formed, on the basis of which the traffic is encrypted.

7. What types of certificates do exist and what's the difference between those?

  • SSC (Self-Signed Certificate) - you can create one yourself (see the example after this list), but there is no trust
  • SGC (Server Gated Cryptography) - for very old browsers
  • SAN/UCC (Unified Communications Certificates) - multi-domain for MS Exchange
  • Code Signing - for software signing
  • Wildcard SSL Certificate - for domain and subdomains
  • DV (Domain Validation) - confirms the domain name
  • OV (Organisation Validation) - checking organization, address, and location
  • EV (Extended validation) - gives the most protection
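
For instance, on Windows, a self-signed certificate from the first bullet can be created with a single PowerShell cmdlet - the domain name and store location below are just examples:

# creates a self-signed certificate in the local machine store;
# it encrypts traffic just fine, but browsers will not trust it by default
New-SelfSignedCertificate -DnsName 'dev.example.local' -CertStoreLocation 'Cert:\LocalMachine\My'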

8. Are there any other types available?

Yes. For example, a Multi-Domain SSL certificate covers several domains and does not require purchasing a certificate for each individual domain, which simplifies administration. Code Signing certificates are designed to protect your code, content, and other files while they are being transferred online.

9. What data is stored inside of SSL certificate?

An SSL certificate contains the following information:

  • Version
  • Serial number
  • Signature algorithm ID
  • Publisher name
  • Validity period (not before / not after)
  • The subject of the certificate
  • Subject's public key information
  • Unique identifier of the publisher
  • Subject unique identifier
  • Extensions
  • Signature

10. Are SSL and HTTPS actually the same thing?

No, they aren't. 

SSL is a security protocol used to establish a secure connection between a web browser and a web server. 

HTTPS is essentially HTTP running on top of SSL/TLS: a regular HTTP message is encrypted before sending and decrypted upon delivery.

11. How do I verify whether a site uses SSL/HTTPS correctly?

Most browsers display a padlock and/or message at the address bar to mark secure connections protected by SSL / TLS certificates. The computer OS maintains a list of root certificates that make up a chain of trust.

12. What is a Chain of Trust?

When opening a site, the browser checks the entire chain of certificates, up to the root. If even one of the certificates is invalid, then the entire chain is also considered invalid.

13. Why are SSL certificates that expensive?

SSL certificates can be viewed as a kind of insurance; they are expensive due to the guarantees that come with them. If something goes wrong, the SSL certificate provider will pay compensation to the affected end user.

14. Are there free certificates and how secure are they?

Free certificates do exist and perform the same function as paid ones, but they usually offer only the lowest validation level (DV) and carry no warranty - i.e. they are safe only to some extent. If something goes wrong, your site visitors will not be able to claim compensation. The best-known free certificate provider is Let's Encrypt.

15. How do I go about buying a certificate?

If your hosting provider does not offer any free SSL certificates, or you need a different type of certificate, you can purchase your own certificate from a Certification Authority or a discounted reseller, which will be cheaper.

16. What is a Certificate Authority then?

A Certificate Authority (CA) is an international organization that issues SSL certificates and is responsible for them. A CA verifies domains and authenticates organizations. Certificate authorities are trusted by 99% of browsers.

17. What is the process for obtaining a certificate upon payment completion?

First, you generate a CSR and a private key. Then you send the CSR to the Certificate Authority (CA) and get back an archive with certificate files (text or binary) for different types of web servers. Then you install it on your hosting or your server.

18. How do I use an SSL / TLS certificate?

  1. Deploy your site on the hosting provider's server.
  2. Create a CSR (Certificate Signing Request) and buy a certificate from a trusted CA or reseller.
  3. Install SSL / TLS certificate in the hosting control panel or on the server.
  4. Create a redirect to the protected version of the site - HTTPS.
19. How can I check if the certificate is configured correctly?

One popular way is to visit the Qualys (SSL Labs) website, enter the site name, and click the submit button. The service will run its checks and give a graded assessment. If the site receives an A or A+, the certificate is working correctly.

20. Are the SSL certificate and SEO somehow related?

If your site is not secured with an SSL certificate, Google marks it as "not secure". In addition, if you want your site to rank high in the search results, you will want it marked as "secure". But the main goal, of course, remains the transport-layer security of the connection.

21. What is the validity period of a certificate?

A commercial certificate is valid for 1 year. Certificate validity has been steadily decreasing: from 10 years in 2011, to 3 years in 2015, to two years in 2018, and to one year now. Free certificates are valid for 90 days.

22. Why don't SSL certificates last forever?

Security standards are changing, therefore old certificates must be kept updated. In addition, the defined certificate life cycle reduces the risks associated with the possible loss of the private key.

23. Does SSL protect site data?

Any information sent over the Internet passes through network equipment, servers, computers, and other connections, and if the data is not encrypted, this makes it vulnerable to attacks.

An SSL / TLS certificate helps protect the data transmitted between the visitor and the site, and vice versa. But this does not mean that the information stored on the server itself is fully protected - the administrator must decide how to protect it; for example, they should probably encrypt the database.

24. What are symmetric keys and the public / private keys used with certificates?

Almost all encryption methods in use today employ public/private key pairs. These are considered much more secure than the old symmetric keys. With public and private keys, two keys are used that are mathematically related (they form a key pair), but are different.

This means a message encrypted with a public key cannot be decrypted with the same public key. To decrypt the message you require the private key.

25. How do asymmetric keys and SSL certificates work together?

Public keys can be made available to anyone, which is why they are, obviously, called public. That raises a concern of trust, specifically: "how do you know that a particular public key belongs to the person/entity it claims to represent?" For example, you receive a key claiming to belong to your bank. How do you know that it really does belong to your bank?

The answer is to use a digital certificate. A certificate serves the same purpose a passport does in everyday life: a passport establishes a link between a photo and a person, and that link is verified by a trusted authority (the passport office).

A digital certificate provides a link between a public key and an entity (business, domain name, etc) that has been verified (signed) by a trusted third party (a certificate authority). A digital certificate provides a convenient way of distributing trusted public encryption keys.

26. What are the various SSL certificate formats?

An SSL certificate is essentially an X.509 certificate. X.509 is a standard that defines the structure of the certificate, i.e. the data fields that should be included in it. The list below covers the X.509 certificate's encoding formats and file extensions:

  • PEM. Most Certificate Authorities provide certificates in PEM format, in Base64 ASCII encoded files. The certificate file types can be .pem, .crt, .cer, or .key. A .pem file can include the server certificate, the intermediate certificate, and the private key in a single file; the server certificate and intermediate certificate can also live in separate .crt or .cer files, and the private key in a .key file. The actual data is located between the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- statements, with a similar approach for keys and certificate requests.
  • PKCS#7 also uses Base64 ASCII encoding, with the file extension .p7b or .p7c. Only certificates can be stored in this format, not private keys. The P7B certificates are contained between the -----BEGIN PKCS7----- and -----END PKCS7----- statements.
  • DER certificates are provided in binary form, contained in .der or .cer files. These certificates are mainly used in Java-based web servers.
  • PKCS#12 also uses a binary form, contained in .pfx or .p12 files. However, it can store the server certificate, the intermediate certificate, and the private key in a single .pfx file with password protection. These certificates are mainly used on the Windows platform.

27. What is a trusted store?

It is a list of CA certificates that you trust. All web browsers come with a list of trusted CAs.

28. Can I add my own CA to my browser's trusted store?

On Windows, when you right-click on the certificate you should see an Install option. I am not sure about other operating systems.

29. What is a certificate fingerprint?

A fingerprint is a hash of an actual certificate and can be used to verify the certificate without the need to have the CA certificate installed. This is very useful in small devices that don’t have a lot of memory to store CA files. It is also used for manually verifying a certificate.
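
On Windows, you can view certificate fingerprints (thumbprints) with PowerShell; the store path below is just an example:

# list subjects and SHA-1 thumbprints of certificates in the local machine store
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, Thumbprint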

Hope this blog post helps you eliminate some gaps or misunderstandings about SSL and TLS!

Everything you wanted to ask about "Items-as-Resources" coming with new Sitecore 10.1

Sitecore 10.1 brings the new Items-as-Resources option, which raises plenty of questions:
  • What is it used for?
  • Why did we get it at all?
  • Are there any concerns with using it?
Please find the answers below:


1. Before 10.1, you were given the initial set of OOB items in the databases upon installation. That includes default templates, layouts, workflows, and the rest of the scaffolding items.

2. Now, with 10.1, all of these are supplied as resource files outside of the database. That's correct: these items no longer reside in the database. Yet you still see them in your content tree as normal.

3. Do the databases come empty? Not quite - there are just a couple of entries for the default site (page) you normally see at the root URL after a successful Sitecore instance installation. It was decided not to put these into resources, as most customers delete that default home page anyway.

4. Are these resources read-only? Yes, Sitecore cannot write back into those resource files. Treat them as if they were written on a CD, but with immediate access.

5. So does that mean I cannot modify default OOB items in Sitecore anymore? No, you actually can edit them as normal after choosing "Unprotect item" from the Content Editor ribbon. In that case Sitecore takes the delta between the initial value stored in the resource file and your changes, and stores only that delta - just the changes you've made - in the database. When the item is "consumed", its current state is calculated from the resource file plus that delta.

6. But you cannot delete these items: Sitecore prompts that the item originates from a resource file and therefore cannot be deleted. That is still good, as it leaves less room for silly errors anyway.

7. So where are these resource files located? They live (quite predictably) within the App_Data folder - App_Data\items\<DATABASE_NAME>\items.<DATABASE_NAME>.dat (by default).

8. What format are these resource files? Protobuf (Protocol Buffers) from Google. That is a surprisingly old format, proven for at least a decade.

9. How can I create my own resources?
Officially - you cannot. Well, it is technically possible, but it requires a very deep dive into Protocol Buffers, raw database storage, and the new data provider in Sitecore. However, Sitecore will likely start providing authors of popular modules with the toolset to create such resources with ease. So, let's say for SXA, you would no longer need to install the SPE + SXA packages yourself; instead you would simply drop the resource files provided by the SXA team underneath the items folder, without even needing to publish anything afterwards.

10. Why is there no need to publish? Because you copy the resource file for the web database as well - the items are already in the web database. Of course, all the items created by you will still need to be published.

11. But why did Sitecore introduce this at all?
The main reason is to simplify the platform version upgrade process. The way an update is done has changed.
You may have noticed that the "Upgrade options" section on the Sitecore download page has changed: instead of Sitecore Update Packages, you now have the Sitecore UpdateApp Tool, which operates against each specific version you want to upgrade from. This tool removes the default items for each particular legacy version and replaces them with the resource files; of course, it also updates the schema with the changes, as was already available before.

12. The bigger reason for this change was the "think containers - think ahead" approach. With such a change it becomes easier to upgrade a Sitecore version when running in containers: everything in the database is now entirely the user's custom data and can be copied as a whole to a newer database, while version-specific, system-related items get updated by simply substituting a resource file.

13. Also, you may have heard that Fast Query has been deprecated. This is the exact reason why: if something isn't in the database, Sitecore cannot efficiently build the relationship graph for a fast query.


14. What is that data provider mentioned above?
It is a new one called CompositeDataProvider, which DefaultDataProvider inherits from. The name "composite" implies that it takes care of merging items for the Sitecore tree from both the database and the resources. In the configuration you specify it for an individual database under the <database> section; you can also change the location of such resources by patching the <filePath> node of <protobufItems> and overriding the location.

15. For the end consumer of the DataProvider (high up the stack) nothing changes, as their code still uses DefaultDataProvider. The changes occur at an intermediate level, and those happen to be internal to Sitecore.

16. That actually opens up a much wider potential for creating intermediate-level providers for things other than Protobuf and SQL databases: CRMs, DAMs, maybe some other headless CMSs. In any case, this is a very important and greatly welcomed step ahead for the platform!

Update: there is another great blog post from my fellow MVP Jeremy Davis on this same topic, where he also tried drilling into these resource files with the protobuf-net library.

I have won a Sitecore Technology MVP 2021 award!

This year I am celebrating my fifth year in a row as a Sitecore MVP! I am very excited to announce that I have been named a Most Valuable Professional (MVP) by Sitecore for 2021 - the most prestigious award in the whole Sitecore ecosystem!

As a Technology MVP, I am one of only 170 Sitecore professionals worldwide who have been awarded an MVP title in this category. It really means a lot to me to be part of such a great community and to be able to contribute to sharing knowledge within it.