
Experience Sitecore !

More than 200 articles about the best DXP by Martin Miles

Sitecore 10 .NET Fundamental Developer Certification Exam is now available

Many of you have been asking about the version 10 developer exam availability, so it is finally there!

You may book it on the same page as all previous exams, and it is the same proctored exam. The price, however, is slightly higher: $350 as opposed to $300 for the version 9 exams. For some European countries there is also something called "estimated tax" - it is not explained which tax exactly it is or how it is calculated. For example, buying an exam from the UK sets this tax to $70, totaling $420.

Metrics: you'll be given 100 minutes and the same number of questions to answer as before - 50. The pass rate is 80%.

So, what to expect?

Firstly, I must admit the quality - the test has been reworked much for the better! One thing I really disliked about the previous versions was the large code snippets, with the challenge of selecting the correct one. Ironically, these snippets were all related to the item access API - something very few people use directly nowadays due to ORMs like Glass.

Secondly, you should expect questions on certain new topics, like containers and the CLI. In addition, test takers must demonstrate knowledge of the existing familiar competencies, like security and roles/users management, layouts, placeholders, components, controls, renderings, item management, and similar. Obviously, no SXA- or JSS-related questions are expected, as this is a Fundamentals exam.

Thirdly, I want to point out the incorrect answer options - they became more realistic and therefore more confusing. With all my experience with the platform, I managed to miss a few simply by not being 100% attentive. That means: read the questions and answers very carefully and pay attention to details. Having two minutes per question is a decent amount of time for doing that.

General feedback: the exam changed for the better, but it is very unlikely you will pass it without actual hands-on experience of all the expected competencies. With an 80% pass mark, one can answer at most 10 questions incorrectly. Let's imagine a person who is proficient in version 9 but has never worked with 10. In that case, he/she is likely to fail more than 10 questions on containers and the CLI, and even if the rest of the answers are correct, the result is still a fail. On the other hand, the required level of experience is not that high, so everyone working with the latest features should pass without any doubt.

Wish you good luck with Sitecore 10 .NET Fundamental Developer Certification Exam!

Things beginners get incorrect about Kubernetes

When starting to play with Kubernetes, one may fall for one of the biggest delusions: assuming K8S will work in production the same way it does in the development or testing environment.

But it won't!

When it comes to containers in general and Kubernetes specifically, there is a big difference between occasional runs under lab-like conditions and a full production lifecycle. It is similar to the difference between just starting an app and running it long term with full security and reliability enabled.

This is not a Kubernetes-exclusive problem; it is true for the entire variety of containers and microservices. Spinning up a container is a relatively simple task, while scaling containerized microservices in production turns out to be far more complicated.

Although Kubernetes has alternatives, it has quickly become the de facto standard for orchestration. Still, there is a difference between launching K8S in a sandbox and running it in a full production environment.



Delusion #1. Running containers with Kubernetes in the development or testing environment ensures that your operational needs will be satisfied.

The truth: running Kubernetes in a development or testing environment lets you cut corners, simplify things and not bother with the operational load you face when going live to production. Ops and safety considerations become the major areas of difference between K8S running in production and in development/testing environments. Failing a cluster under lab conditions does not bring any losses.

To me it looks like a trade-off between agility and reliability: developers use containers to achieve flexibility while developing and testing their code, and that serves its purpose. Ops, in turn, need to provide reliability, scaling, performance and safety on top of a sustainable, industry-proven platform. They are looking for deployment automation for the clusters to ensure repeatability and consistency; it also helps when restoring the system.

Versioning is also critical for operations. As far as possible, you need to enable versioning everywhere, including service deployment configuration, policies and infrastructure (applying the infrastructure-as-code approach). That makes environments repeatable. As a good practice, avoid "latest" image tags in order to avoid the configuration drift effect.


Delusion #2. Kubernetes provides both reliability and security

In reality: when using Kubernetes in non-production environments only, reliability and security are most unlikely to be in place, at least initially. Do not get discouraged, you will get there: it's a matter of designing the architecture before switching to production.

Obviously, performance, scaling, availability and safety requirements are much higher in production environments. It is important to plan these requirements into the architecture of your K8S deployment, as well as to build scaling and security plans into Helm charts, etc.

But how could running a cluster in dev/testing environments lead to false confidence?

It is common for non-production environments to have all network connections open. There it is acceptable for any service to talk to any other service: open connections are the Kubernetes default. However, such an approach is an evil practice for production environments and can lead to downtime. It also exposes a larger attack surface and increases the threats to the business.

When it comes to containers and microservices, one needs to spend a bigger effort on creating a highly available and reliable system. Orchestration itself helps a lot but isn't a "silver bullet", and the same applies to security. You will have to work hard to protect Kubernetes and reduce the attack surface. It is very important to use RBAC with minimal privileges and to enforce network policies, leaving open only those channels the services actually use.

Vulnerabilities in container images can also rapidly turn operations into a critical state, while in development/testing environments this danger may be absent altogether. Pay attention to the base images used to build your containers: as far as possible, use trusted official images or build your own. The last thing you want your Kubernetes cluster doing is helping someone mine crypto coins.

It is recommended to treat container security as a ten-layer system covering the container stack (host and registries), as well as concerns related to the container life cycle (for example, API management).


Delusion #3. Orchestration makes scaling a formality

Although Kubernetes is considered an essential tool for scaling containers, it is a delusion to think that orchestration immediately sorts out the scaling needs of a production environment. The volume of data in live environments is many times higher; please also keep in mind that monitoring may need scaling too. With increasing volumes, everything changes.

It is impossible to ensure that all K8S components implement their interfaces correctly until you spin up production: only there can you determine what "working normally" means for Kubernetes, and whether the API server and the other control components scale according to your needs.

As I said, development and testing environments are much more forgiving. In local environments it is easy to skip basics like defining the right resource requests and limits. Skipping that can collapse your production cluster later on.

Scaling a cluster in both directions is a good example of a task that is easy locally but clearly complicated in production: scaling production clusters is more difficult than scaling clusters for development/testing.

While Kubernetes makes horizontal scaling relatively simple, DevOps still need to keep some nuances in mind, especially when it comes to keeping services live while scaling the infrastructure. It is crucial to ensure that the main services, as well as system monitoring and security alerting, are distributed across the cluster nodes and work with stateful volumes, so that data is not lost on scaling down.

Again, it all comes down to proper planning and available resources. You need not just to understand your scaling needs when planning but, most importantly, to test them. Your production environment must be capable of handling much higher loads.


Delusion #4. Kubernetes works the same everywhere

In reality: many believe that if K8S works locally, it will work in any operational environment. In fact, the differences can be as serious as those between running Kubernetes on a developer's laptop and on a production server, and they also vary depending on the vendor.

Local environments commonly lack important components required in production: monitoring, logging, certificate management and credentials management. You need to keep that in mind, as it is another problem arising from the difference between production and development/testing environments.

Again, that isn't exclusive to Kubernetes; it applies to containers/microservices in general, especially in multi-cloud and hybrid-cloud setups. Such Kubernetes implementations are more complicated than they initially seem, as many of the mandatory services are proprietary, like load balancing and firewalls. A container that works well locally may run unprotected (or may not start at all) in a cloud with a different set of tools. That is why service mesh technologies like Istio attract so much attention: they provide availability wherever your container runs, so you do not need to think about the infrastructure - which is the main reason for using containers in the first place.

I hope that keeping the above in mind helps you reach safer and more reliable production environments with Kubernetes!

Converting Sitecore back-end developer skills for a rapid kickstart with JSS & Next.js

Developers are crucial for the Sitecore ecosystem!

Out of several tens of thousands of Sitecore developers globally, fewer than 15% feel confident with modern front-end tooling. Resolving this bottleneck is very important, as it slows down the adoption of the new generation of headless approaches: JSS and Next.js. This session shows the quickest yet still effective path for converting typical existing BE skills into the new development paradigm.

I prepared a session for Symposium 2021, and below is my paper submission proposal. The Symposium speech is backed by several blog posts and "how-to" videos that highlight the whole path from a typical Sitecore developer with minimal knowledge of JS to a state of competency with Next.js.

Update: sadly, this proposal was not chosen, leaving me frustrated, as the described topic is very sensitive for most of us - developers and solution architects. Therefore, I am leaving my submission below for historical purposes.

The proposal

With much already said about the advantages of Next.js and the fast delivery Jamstack achieves through pre-rendering, we won't focus on that, as it's well documented.

Instead, the session will mostly cover converting a typical developer's experience from purely back-end skills to a level of confidence sufficient to start building their own Next.js solutions. Outside of this session there's almost nothing that describes the actual learning curve, which raises a high level of frustration: the gap is too big to fill without knowing a shortcut to success.

The session is based on my own experience: being such a typical back-end person, I carefully documented all of my way down into the wonderful world of headless Jamstack, so the necessary steps and the traps that may not be obvious to the target audience get explained.

The mission is to simplify switching to the new generation of development for as many typical XP developers as possible by:
  • explaining the bare minimum of skills to obtain in order to be able to use Next.js with Sitecore
  • explaining how to set up the necessary toolset, solution and dependencies, and the fastest approach to getting the knowledge
  • briefly focusing on container environments as part of the overall experience
  • doing all of the above with the minimum effort possible
  • assuming the audience gets on a self-learning path after overcoming the initial "studying gravity" with materials from this session

Speech Agenda

1. JavaScript
  • the most important changes since the years when jQuery ruled the front-end world
  • starting with React: the important basics to build upon
  • all you need to know about TypeScript to use it with Next.js Sitecore solutions
  • non-obvious traps of the front-end world to avoid

2. JSS
  • mapping the terminology of old dev experience to newer counterparts
  • explaining and troubleshooting GraphQL and layout service
  • JSS Styleguide, DOs and DON'Ts

3. Containers
  • a brief introduction for those who have never worked with the containers approach
  • Next.JS starter template
  • development considerations

4. Next.JS
  • understanding pre-rendering options: static generation vs server-side rendering vs incremental static regeneration
  • managing dynamic content with ISR
  • routing / dynamic routes
  • component rehydration
  • client-side personalization via callback to origin

5. Development Experience
  • understanding solution structure
  • organizing CSS at the component level
  • debugging and troubleshooting

6. Deployment and Going Live
  • brief architectural overview
  • is self-hosting the best option for your solution?
  • hosting at Vercel
  • Sitecore Experience Edge

7. Demo time covering some of Next.js features:
  • image optimization
  • error handling
  • unusual API routes

8. Conclusion
  • FAQs
  • take-away materials
  • further learning plan


Takeaway materials

By the time of the event, I am going to produce the following materials covering my presentation:

  • A series of blog posts covering the topic much more widely
  • A GitHub repo with guidance and the codebase from the demo
  • A series of short YouTube videos for each use case

Hopefully, one day my submission gets selected for either SUGCON or the next Symposium.

Evolutional approach to Next.js and its modes

As you might have heard, Sitecore has chosen Next.js to be used along with its JSS SDK. But what makes Next.js such a great tool for most of us switching to a new paradigm of development for Sitecore? In this blog post, I'll go through the evolution of Sitecore development over time.

Old school development

A decade ago, we used classical ASP.NET WebForms to render a page on the server and pass it to the client. The whole idea of WebForms was faulty, as it tried to mimic the event-based model of desktop development in order to make web development feel familiar to desktop developers. That came at the cost of ignoring the stateless nature of the web and creating weird, ugly abstractions (i.e. ViewState and EventValidation).

It was later made obsolete by the MVC approach, which turned ASP.NET web development into what it should have been all along: no state and event abstractions, no server controls, no Master Pages, and a proper separation of code and markup (which itself became cleaner and more readable with the introduction of Razor views). It all benefited from the MVC architecture, proven by other web technologies such as Ruby on Rails. Moreover, the implementation allowed extensibility at almost every stage of the web request lifecycle, while ASP.NET MVC going open source allowed writing your code aligned with the exact implementation of the framework.

MVC was a great step ahead and stayed the default way of building sites with Sitecore for as long as 5-8 years. Being so close to the raw request was a great strength, but it came at the cost of a lot of repetitive activities.

The introduction of SXA fixed most of these issues by relying heavily on Sitecore PowerShell to automate the things that should be (and in fact were) automated. The overall developer and editor experience improved with SXA thanks to the introduction of Page and Partial Designs, powerful components adjustable with rendering variants, support for the most popular grid systems, and flexible search and SEO tools.

SXA was great in most aspects except one, the most important: it was still built on top of MVC. That means web pages were generated on the server by rendering content into HTML views. In other words, it was not headless...

Headless

Meanwhile, the world of front-end development experienced massive growth, and after half a decade of craziness with JS frameworks appearing one after another, a triad of winners stood out: React, Angular, and Vue. The good old days of using jQuery came to an end, giving way to industry-proven frameworks with bigger feature sets and a revised architecture that suits modern web development.

With time, it became harder and harder to split work between back-end and front-end teams (as for full-stack developers, most of them tend to gravitate to one side or the other). Even bigger effort was spent on the unwanted work of keeping FE and BE teams in sync, which could not last long, as both sides were struggling with that situation.

The headless approach was the right answer to all those issues, with JSS being Sitecore's response to it.

With the release of JSS, it became possible to separate BE and FE so that each side becomes responsible only for its own duties. The front-end is freed from its previous limitations and can use React / Vue / Angular as much as it wants. It no longer needs a heavily loaded web server with Sitecore to generate HTML pages - a new component called the Rendering Host does that job exclusively. The only interaction left with the back-end is receiving just the necessary data asynchronously, thanks to the Layout Service and GraphQL.

NOTE: actually, headless means anything can consume the data from back-end services, not just FE frameworks. This is done well in ASP.NET Core renderings, an alternative option for a headless implementation with Sitecore.

Client-Side Rendering

With a typical non-Sitecore single-page application, the web server first sends the browser an HTML page in some initial state. Once that page loads, the browser executes its JavaScript code, which issues asynchronous request(s) to an API endpoint in order to get the actual data. As the user progresses through the app, more requests are sent by the browser, which partially updates the content on the page without reloading the whole page from the web server. This approach is known as Client-Side Rendering (CSR), and it brings lots of advantages, such as faster-responding apps and reduced traffic between client and server.
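
A rough sketch of CSR in plain JavaScript is shown below; the /api/products endpoint and the #product-list element are assumptions made purely for illustration:

// runs in the browser after the initial HTML page has loaded
async function loadProducts() {
  // asynchronous request to an API endpoint for the actual data
  const response = await fetch('/api/products');
  const products = await response.json();

  // partially update the page without reloading it from the server
  document.getElementById('product-list').innerHTML = products
    .map((p) => '<li>' + p.name + '</li>')
    .join('');
}

document.addEventListener('DOMContentLoaded', loadProducts);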

What's wrong with single-page applications?

Since single-page apps load only an initial HTML page, that is all search engine bots get as well. They struggle to obtain the follow-up data from the APIs and cannot index the page properly. Also, without page reloads the URL remains the same and can only vary by appending a #-anchor to the page URL; often such URLs cannot be processed correctly when called directly.

Next.js

To address the above we have Next.js - a framework for statically generated and server-rendered React applications that opens up a lot of possibilities for developers: ready-to-use, zero-configuration applications, code splitting, static HTML export, better UX, faster performance, and more.

Next.js ensures SEO friendliness without any extra actions from developers beyond creating the application. Just to be clear, that comes not from Next.js specifically but from server-side rendering.

One can run SEO reports with Lighthouse even at the early stages, as you begin building your application.

But that still wasn't it...

SSG challenge

The idea behind Jamstack is truly attractive: instead of serving web pages in real time (even when taking them from a cache), the pages are pre-rendered and deployed to a CDN, becoming globally accessible immediately upon publishing. In a simple scenario, one does not even have to keep a server running, as the traffic never reaches it - it all goes to the CDN. Static content is fast, resilient to downtime, and gets indexed immediately by crawlers.

This approach however has some issues.

Let's think about a huge site with millions of pages. Deploying such a site may take hours rather than minutes due to static page generation and the sheer number of files to process. An increasing amount of content means increasing generation time. It seems reasonable to regenerate only the pages that have been updated, but that is only a small part of the solution: deployment becomes complicated, and even a one-character change in a common part like a header will still make you process all the pages.

ISR

That is where Incremental Static Regeneration (ISR) comes into play. ISR is a new evolutionary step for Jamstack: Next.js allows you to create or update static pages after you've built the site. Incremental Static Regeneration enables developers and content editors to use static generation on a per-page basis, without needing to rebuild the entire site. With ISR, you get the best of both worlds while scaling to millions of pages.

The principal difference is that static pages can now be generated on demand at runtime. The developer's job is now to decide which portion of the pages to pre-generate - think of the well-known 80/20 Pareto principle, where 80% of traffic is served by only 20% of the pages, while the other 80% of the pages get the remaining 20% of traffic.

So it makes good sense to pre-generate that heavily used 20% of pages. How do you know which pages or sections to include? You've got an arsenal of tools such as analytics, A/B testing and alternative metrics - in any case, you have the flexibility to make your own trade-off on build times.

Given that choice, developers can weigh the options and pick between them: pre-generating fewer pages makes the build faster, while pre-generating more pages costs build time but serves more traffic statically.

This becomes crucial when working on large eCommerce implementations or headless CMSs such as Sitecore.

How that works

ISR relies on the same API used for static site generation: getStaticProps. The difference is that by setting the revalidate parameter to 60, we make Next.js use ISR for that page. Here's how a request flows with ISR:

  1. With Next.js one can define a revalidation time per page (e.g. 60 seconds)
  2. The initial request to a product page returns the cached page with the original price
  3. At this stage, someone changes the product data, and the change lands in the database
  4. All requests to the page after the initial one and within the 60-second window are returned immediately from the cache
  5. After the 60-second window, the next request will still show the cached (old) page, but Next.js triggers a background regeneration of that page. Once it completes, the cache for that single page is updated; if the background regeneration fails, the old cached page is kept
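
Here is a hedged sketch of such a page; the route and the example.com API below are assumptions for illustration, not code from any Sitecore starter:

// pages/products/[id].js
export async function getStaticPaths() {
  // pre-generate nothing at build time; generate pages on their first request instead
  return { paths: [], fallback: 'blocking' };
}

export async function getStaticProps({ params }) {
  // hypothetical API returning the product data
  const product = await fetch(`https://example.com/api/products/${params.id}`)
    .then((r) => r.json());

  return {
    props: { product },
    revalidate: 60, // allow background regeneration at most once every 60 seconds
  };
}

export default function ProductPage({ product }) {
  return <h1>{product.name} - {product.price}</h1>;
}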

Finding a compromise

Since all sites vary by volume, audience, purpose, and internal architecture, there's no silver bullet covering them all with one universal solution. That is why Next.js, while staying end-user-centric, lets developers shift between rendering approaches without leaving the bounds of the framework. It's up to you to choose the right tool for each project.

Edge caching

In certain cases ISR is not the best option - for example, apps where displaying live data is crucial. Those are better handled with server-side rendering, optionally with your own Cache-Control headers and surrogate keys to invalidate content. Server-rendered pages can then be cached on edge servers. With a hybrid framework, one can make one's own trade-off and still stay within the framework.

SSR with edge server caching may look similar to ISR (especially with stale-while-revalidate headers for cache control).
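
For reference, a sketch of that pattern in Next.js is below - the page name and the example.com API are assumptions; the Cache-Control header is set right inside getServerSideProps:

// pages/live-prices.js
export async function getServerSideProps({ res }) {
  // cache at the edge for 10 seconds, then serve stale content for up to 59 more
  // seconds while the page gets re-rendered in the background
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=10, stale-while-revalidate=59'
  );

  // hypothetical API with frequently changing data
  const prices = await fetch('https://example.com/api/prices').then((r) => r.json());
  return { props: { prices } };
}

export default function LivePrices({ prices }) {
  return <pre>{JSON.stringify(prices, null, 2)}</pre>;
}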

The major difference is in how the first request is handled. With ISR, it returns a statically rendered page, which ensures the user sees a page even in the case of API connectivity loss or a database failure. SSR, in turn, allows shaping the page depending on the specifics of the incoming request.

One thing to care about in that case: using SSR without caching may hurt performance, as every millisecond of waiting matters. In addition, SSR with no cache badly impacts the TTFB (Time to First Byte) metric used by Lighthouse.

In addition to that, ISR brings little benefit to small websites: if the build time for the whole site is many times lower than the revalidation parameter, just use classic static generation with full rebuilds instead.

ISR fallback options

This is an important parameter with two potential options. When working with data that is fast to retrieve, it makes sense to use fallback: blocking. In that case there is no need to display a temporary "in progress" page while the data is being retrieved, and users are guaranteed to see the right page regardless of whether it is cached or not.

For uncertain or slow-loading data, the above approach hurts the UX, so setting fallback: true immediately displays a "please wait" page while the data is being processed.
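
A hedged sketch of the difference is below; the route and the example.com API are assumptions. With fallback: true the page component must handle the temporary fallback state itself, while fallback: 'blocking' simply waits for the data:

// pages/articles/[slug].js
import { useRouter } from 'next/router';

export async function getStaticPaths() {
  return {
    paths: [],        // nothing is pre-generated at build time
    fallback: true,   // use 'blocking' instead for fast data sources
  };
}

export async function getStaticProps({ params }) {
  // hypothetical API with the article content
  const article = await fetch(`https://example.com/api/articles/${params.slug}`)
    .then((r) => r.json());

  return { props: { article }, revalidate: 60 };
}

export default function ArticlePage({ article }) {
  const router = useRouter();

  // shown only with fallback: true, while the page is being generated on demand
  if (router.isFallback) {
    return <p>Please wait...</p>;
  }

  return <h1>{article.title}</h1>;
}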

SEO is the reason

SEO (search engine optimization) is a set of techniques (and even non-obvious tricks) for adjusting your site in order to attract more traffic from search engines. To improve the site's search ranking, one needs to keep many factors in mind, such as the following.

Visitors won't wait an eternity for your page to load. Performance is actually a crucial factor for SEO and therefore should be a main concern when building an app. In addition to TTFB (mentioned previously), there is another important parameter abbreviated as FCP (First Contentful Paint). Google uses FCP as a key performance metric, so FCP directly affects your SEO rating. You can read more about improving FCP.

With Next.js you can analyze FCP and LCP (Largest Contentful Paint - the time it takes for the main content to show) by adding a reportWebVitals function to your custom App component:

// pages/_app.js
export function reportWebVitals(metric)
{
  console.log(metric)
}

Once these parameters are calculated, the reportWebVitals function is called with each metric for you to log and analyze. Follow this link for more details about measuring performance with Next.js.
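
If you want to do more than just log them, the same function can forward the metrics to a collection endpoint. A sketch is below, where /api/analytics is an assumed endpoint rather than anything Next.js provides out of the box:

// pages/_app.js
export function reportWebVitals(metric) {
  // FCP, LCP, TTFB and other metrics arrive here one by one as they are measured
  const body = JSON.stringify(metric);
  const url = '/api/analytics'; // hypothetical collection endpoint

  // sendBeacon survives page unloads; fall back to fetch when it is unavailable
  if (navigator.sendBeacon) {
    navigator.sendBeacon(url, body);
  } else {
    fetch(url, { body, method: 'POST', keepalive: true });
  }
}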

I hope this post gives an overall highlight of the rendering evolution, from the old days up to ISR, and of the nuances of choosing between these modes with Next.js.

Sitecore gets presented at Awesome List

After three months of pull-request-rejection football, I managed to squeeze Sitecore into the Awesome List.

Awesome List logo

What is an awesome list?

An awesome list is a list of "awesome things" curated by the community. There are awesome lists about everything from CLI applications to fantasy books. The main repository serves as a curated list of awesome lists, each of which represents a whole world presented in the most friendly way. If you have never heard about it, I highly recommend starting to navigate it from the home page, and I can guarantee you'll find many great things there.

Until 2020 the list was missing Sitecore, so I fixed that. Now the repository contains a comprehensive and well-classified list of all known Sitecore GitHub repositories. I personally find it useful for ad-hoc lookups of code for a specific domain on demand - that saves much time! Apart from that, it's nice to have the whole list of open-source implementations just to review the variety of things people have done with Sitecore.


Existing categories

As of today, the Sitecore repositories are grouped into the categories below. I am leaving direct links to each of them for simplicity:

Everyone is welcome to contribute to the repo as soon as you have any awesome stuff to add (via PR), but please be aware of the strict contribution guidelines.

Hope you find this list helpful!

A PROPER way of validating models passed into a view

I am working on a project that uses code-generated interfaces passed as models into views.

I came up with a nice way of validating the models being passed, using the C# feature called pattern matching. Now validation works for me by just dropping the snippet below into my view:

@if (Html.Validate<ICompositeTemplate>(Model) is var e && e != null)
{
    @e
    return;
}

What the above code does is validate that:

  • model was passed and is not null
  • the model is of a given template
  • when the datasource template is composed from numerous interface templates, the passed interface also implements the code-generated interfaces for each of the templates used in that composition. I check that all of them are actually composed and that the model is mapped and passed using the right template
  • when validation fails, users get a meaningful error message, but only editors working in the Experience Editor

For example, the above ICompositeTemplate could be defined as below:

public interface ICompositeTemplate : ITitleWithRichText, ICtaWithSvg
{
}

where ICtaWithSvg is a composite on its own:

public interface ICtaWithSvg : ICta, ISvg
{
}

Both implemented interfaces inherit from IGlassBase and also have SitecoreType attribute:

[SitecoreType(TemplateId = ITitleWithRichTextConstants.TemplateIdString)]
[GeneratedCode("Leprechaun", "2.0.0.0")]
public partial interface ITitleWithRichText : IGlassBase
{
   // code-generated implementation
}
Looking at Sitecore, the actual template is implemented at the Project layer from a composition of the interface templates:


That actually works like a charm, exactly as I want it to perform!

Show me the code! Here is the HTML helper that makes all of that function:

using System.Web.Mvc;
using Sitecore;
using System;
using Sitecore.Data;
using Sitecore.Data.Items;
using Foundation.Data.Models;
using System.Linq;
using Glass.Mapper.Sc.Configuration.Attributes;
using System.Reflection;
using System.Collections.Generic;

namespace Feature.Components.Extensions
{
    public static class ModelValidationExtensions
    {
        public static MvcHtmlString Validate<T>(this HtmlHelper html, T model) where T : IGlassBase
        {
            if (model == null) 
                return new MvcHtmlString("Datasource is missing for this component");

            var featureInterfaceTemplateIds = GetTemplateId<T>();

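            // resolve the Sitecore item behind the Glass model to check its template ancestry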
            var modelItem = Context.Database.GetItem(new ID(model.Id));

            var failedTemplates = new List<Guid>();
            foreach (var templateId in featureInterfaceTemplateIds)
            {
                var section = modelItem?.GetAncestorOrSelfOfTemplate(new ID(templateId));
                if (section == null)
                {
                    failedTemplates.Add(templateId);
                }
            }

            if (failedTemplates.Any())
            {
                var ids = failedTemplates.Select(ft => ft.ToString());

                var error = String.Empty;

                if (Context.PageMode.IsExperienceEditor)
                {
                    error = $@"<div class='EE_error'>
                                <div>Provided datasource has invalid type(s).</div>
                                <div>Please correct it to inherit: {string.Join(", ", ids.Select(i => $"{{{i}}}"))}.</div>
                              </div>";
                }

                return new MvcHtmlString(error);
            }

            return null;
        }

        private static Item GetAncestorOrSelfOfTemplate(this Item item, ID templateID)
        {
            if (item == null)
            {
                throw new ArgumentNullException(nameof(item));
            }

            return item.DescendsFrom(templateID) 
                ? item 
                : item.Axes.GetAncestors().LastOrDefault(i => i.DescendsFrom(templateID));
        }

        private static bool SitecoreTypeAttributeFilter(Type referencedType, Object criteriaObj)
        {
            var attribs = referencedType.CustomAttributes;
            var customAttributes = attribs.FirstOrDefault(a => a.AttributeType == typeof(SitecoreTypeAttribute));
            if (customAttributes != null)
            {
                var tmplId = customAttributes.NamedArguments.FirstOrDefault(a => a.MemberName == "TemplateId");
                if (tmplId != null)
                {
                    return true;
                }
            }

            return false;
        }

        private static IEnumerable<Guid> GetTemplateId<T>() where T : IGlassBase
        {
            var guids = new List<Guid>();
            var referencedType = typeof(T);

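            // collect T itself plus all of its interfaces decorated with [SitecoreType]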
            var filter = new TypeFilter(SitecoreTypeAttributeFilter);
            var _ifs = referencedType.FindInterfaces(filter, "IGlassBase").ToList();
            _ifs.Add(referencedType);

            foreach (var type in _ifs)
            {
                var attribs = type.CustomAttributes;

                var customAttributes = attribs
                    .FirstOrDefault(a => a.AttributeType == typeof(SitecoreTypeAttribute));

                if (customAttributes != null)
                {
                    var tmplId = customAttributes.NamedArguments
                        .FirstOrDefault(a => a.MemberName == "TemplateId");

                    if (tmplId != null)
                    {
                        guids.Add(Guid.Parse(tmplId.TypedValue.ToString().Trim('\"')));
                    }
                }
            }

            return guids;
        }
    }
}
This approach saves me lots of time without losing quality, and keeps the models validated. If something goes wrong, both I and the editors know exactly what needs to be fixed.

Hope you benefit from validating your MVC models too!

Advanced editing: managing dynamic popups from custom RTE dialog

One day a request came from the business: they wanted a nice "information" icon to appear next to the text; clicking it opens a modal popup dialog showing more information related to that line of content. From a UX point of view that is a decent solution, preventing the page from bloating with overly specific info.

Through a page visitor's eyes it looks as below:


There was FE code provided that does exactly what is described. If I had been responsible for the front-end, I'd have chosen to use an emoji, as emoji are an official part of Unicode and are therefore supported by all browsers. But the front-end was handed to me as a given, and I assume there was a decent reason for implementing it the provided way.

In any case, our mission is to implement the back-end part, with certain challenges:

  1. There may be multiple popups on a single page - at least, more than one.
  2. Users need to be able to dynamically add popups to a page (and sometimes remove them).
  3. Each popup needs to be editable, even though in their normal state they are hidden.
  4. Because of the above, there must be a user-friendly way of distinguishing popups and giving them individual names.
  5. The information icon should be editable from within Rich Text, mixed in with the classical RTE content.
  6. Each "i"-icon should reference a specific popup, so that clicking different icons triggers different popups.
  7. Because of that, a clear and nice way of linking an icon to a popup should be provided.
  8. The BE solution is a classical MVC implementation, with no SXA or JSS (unfortunately).
Now, with these requirements in hand, let's implement the whole feature. Below are my...

THOUGHTS


1. Popup code
On the front-end these were given to me implemented as <section>-tag blocks, hidden with styles. Each of these blocks has a data-name attribute that is used to reference it from the corresponding "i"-icon:


2. Popups aren't visible, so extra care is needed to support editors managing them. On the layout I create a separate placeholder exclusively for adding these popups. Normally, once you add the first popup, the placeholder's "add here" invitation disappears, as the placeholder now already has an item - but that item is invisible.
To make it visible I add an additional section that shows up only in editing mode (Sitecore MVC Razor view):
@if (Sitecore.Context.PageMode.IsExperienceEditor)
{
    <div>Modal popup: @name</div>
}

That allows selecting each popup individually, making it possible to remove an existing popup or add a new one after it:


3. Editable Datasource
I use a standard Sitecore Controller Rendering with a generic datasource of the Title with Rich Text template. A simple code-generated Glass model coming from Leprechaun (but it could be anything, e.g. T4 templates) gets passed into an MVC view.


4. Giving Component a unique Name
As you might know, a bunch of additional options (like styling) can also be provided via Rendering Parameters for each individual popup rendering. In our case we need to give each individual rendering on a page a unique name, and Rendering Parameters seem to be the ideal way of doing that.

Why Rendering Parameters?
Obviously, as the name implies, Rendering Parameters are stored with the page and are set individually for each component applied to it. That's opposed to datasource items, which carry replaceable sources of data and can be shared across several components of the same or compatible types.

I have recently written an article on how easily one can use rendering parameters with Glass Mapper through strongly typed HTML helpers, which I highly recommend reading, as the code in this article uses the described method.


5. Editing Information icon
Now, the biggest challenge is that the "i"-icon is mixed in with the rest of the RTE content in a single field. Let's look at how the FE team implemented it:
<button data-name="NAME_OF_POPUP" type="button" aria-label="View Info" aria-haspopup="dialog"  class="button--more-info a-icon-info"></button>
It comes as a styled <button> HTML tag with a set of attributes, the most important of which is data-name. This attribute establishes the relationship with the corresponding modal popup <section> tag to be shown/hidden.


6. Wiring-up icon and modal popup together
An initial thought was to tokenize this <button> tag and add it to the editor's snippets collection - if it weren't for the data-name parameter, which is unique per icon and points to that same parameter on the corresponding modal popup's <section> tag attribute.


7. That means a more elegant solution should be chosen. Creating a custom editor dialog would solve this - for example, one asking for the name of the modal popup to be shown on a click of this icon. The most straightforward way would be asking the user to type in the name created at stage 4, so that upon submission it gets injected into the <button> tag attribute and returned back to the editor.

That would work, but it is subject to potential mistakes/typos/misunderstandings by a user, so instead I decided to present the user with a drop-down already populated with the names of the modal popups previously added to the page. The user needs to type the name only once, and even if he/she typed a total mess as the popup name, that crazy value is still available for selection without retyping.

Now let's turn to the actual..

IMPLEMENTATION


I will start with the Modal Popup rendering itself. First of all, we need to create a controller rendering:

With the view code:
@using Glass.Mapper.Sc.Web.Mvc
@using Feature.Components.Templates
@using Feature.Components.Extensions
@model ITitleWithRichText

@{ var name = Html.GetRenderingParametersString<IPopupModalDialog>(m => m.DialogName); } 


@if (Sitecore.Context.PageMode.IsExperienceEditor)
{
    <div>Modal popup: @name</div>
}

The controller action method itself is extremely simple: grab the strongly typed interface from Glass Mapper and pass it to the view:
public ActionResult ModalPopup()
{
    var model = _componentsRepository.GetModel<ITitleWithRichText>();
    return View("~/Views/Feature/Components/ModalPopup.cshtml", model);
}

Next, we need to add a Sitecore placeholder for holding the modal dialogs. A good practice here is also not to skip creating placeholder settings, for the convenience of editors choosing components:


Rich Text Editor

We need to create a button in the Rich Text toolbar; these buttons are configured inside the active Rich Text profile within the core database.

Please note: if you need to modify anything in an OOB profile, do not change it in place. Duplicate the whole desired profile node instead, put it under serialization, and then you're welcome to modify it.

Therefore I duplicate the Rich Text Full profile into: 
/sitecore/system/Settings/Html Editor Profiles/Custom
As with the rest of the editing-related stuff, I keep it serialized under the Foundation.Editing Helix module.

Optionally, you may also want this new profile to become the default, so that there is no need to explicitly state the profile name in the templates' Source column. To achieve that, you can create a config patch file (right here, in Foundation.Editing) that sets the HtmlEditor.DefaultProfile setting to the profile we've just created:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- approximate shape of the patch: the original markup was stripped from this post -->
      <setting name="HtmlEditor.DefaultProfile">
        <patch:attribute name="value">/sitecore/system/Settings/Html Editor Profiles/Custom</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>


Once we've sorted out the Rich Text profile, let's include a new icon for the custom dialog. To do so, I create a new Modal Popup item under /sitecore/system/Settings/Html Editor Profiles/Custom/Toolbar 2.

The most important property here is the Click field: it stores the command name that will serve this dialog:

As for the icon itself, it can be chosen from one of the sprite images stored at sitecore/shell/Themes/Standard/Images/Editor/WebResource.png by specifying an offset in a style:
html .ModalPopup { background-position: -6px center }
Here is how the result looks:


After creating the new button and associating it with a Click command referencing the new dialog name, there is also code to be added that handles the button click and triggers the new dialog.

Danger zone: to add this code one needs to modify an existing OOB JavaScript file, so with each version update there is a risk of the new vanilla version being overwritten by your custom-modified old version of the script. This file rarely changes, but still, please keep an eye on it - if a change occurs, you'll need to handle the diff.

The customization in my case is simply appending some code to the very end of sitecore\shell\Controls\Rich Text Editor\RichText Commands.js file:
Telerik.Web.UI.Editor.CommandList["ModalPopup"] = function(commandName, editor, args) {
  var html = editor.getSelectionHtml();
  var id;
  
  if (!id) {
    id = GetMediaID(html);
  }

  scEditor = editor;

  editor.showExternalDialog(
    "/sitecore/shell/default.aspx?xmlcontrol=RichText.ModalPopup&la=" + scLanguage + (id ? "&fo=" + id : "") + (scDatabase ? "&databasename=" + scDatabase : "") ,
    null, 400, 260, scModalPopup, null, "Insert Modal Popup Dialog", true, Telerik.Web.UI.WindowBehaviors.Close,false, false 
  );
};

function scModalPopup(sender, returnValue) {
  if (!returnValue) {
      return;
  }

  scEditor.pasteHtml(unescape(returnValue.Text), "DocumentManager");
}
What the above code does is add the Telerik.Web.UI.Editor.CommandList["ModalPopup"] handling code (note that "ModalPopup" matches the value of the Click command we entered into the profile in the core database); it also adds the scModalPopup handler that actually pastes the resulting markup into the Rich Text Editor.


Creating the Dialog

That is done with a very legacy markup method called Sheer UI. A traditional Sheer UI component consists of 3 pieces:
  1. XML markup that dictates how the control layout is laid out
  2. Code-behind, similar to the old, well-known ASP.NET WebForms (not to be confused with another deprecated toolset - WFFM)
  3. Related JavaScript code
Here they are, one after another:

1. XML markup for sitecore\shell\Controls\Rich Text Editor\ModalPopup\ModalPopup.xml

<?xml version="1.0" encoding="utf-8"?>
<!-- approximate shape: the exact markup was stripped from the original post -->
<control xmlns:def="Definition" xmlns="http://schemas.sitecore.net/Visual-Studio-Intellisense">
  <RichText.ModalPopup>
    <FormDialog Header="Insert Modal Popup" Text="Select the modal popup to open on the icon click." OKButton="Insert">
      <CodeBeside Type="Foundation.Editing.Dialogs.ModalPopup, Foundation.Editing"/>
      <Script Src="Controls/Rich Text Editor/ModalPopup/ModalPopup.js" Language="javascript"/>
      <GridPanel Width="100%">
        <Literal Text="Modal popup:"/>
        <Combobox ID="Target" Width="100%"/>
      </GridPanel>
    </FormDialog>
  </RichText.ModalPopup>
</control>

This markup has a <RichText.ModalPopup> section whose name ties it to the xmlcontrol requested by the command from the previous steps. It also has a <CodeBeside> section that references the C# code doing the rest of the logic behind the markup; here it is below:

2. ModalPopup.cs
using System;
using Sitecore.Web.UI.Pages;
using Sitecore.Diagnostics;
using Sitecore;
using Sitecore.Web;
using Sitecore.Web.UI.Sheer;
 
namespace Foundation.Editing.Dialogs
{
    public class ModalPopup : DialogForm
    {
        protected Sitecore.Web.UI.HtmlControls.Combobox Target;

        // the <button> markup coming from the front-end, with a placeholder for the data-name value
        string Wrapping = @"<button data-name=""{0}"" type=""button"" aria-label=""View Info"" aria-haspopup=""dialog"" class=""button--more-info a-icon-info""></button>";

        protected override void OnLoad(EventArgs e)
        {
            Assert.ArgumentNotNull(e, "e");
            base.OnLoad(e);

            if (!Context.ClientPage.IsEvent)
            {
                Mode = WebUtil.GetQueryString("mo");
               
                Context.ClientPage.ClientScript.RegisterStartupScript(GetType(), "script", "scOnLoad();", true);
            }
        }

        protected override void OnOK(object sender, EventArgs args)
        {
            Assert.ArgumentNotNull(sender, "sender");
            Assert.ArgumentNotNull(args, "args");

            string code = string.Format(Wrapping, Target.Value);      

            if (Mode == "webedit")
            {
                SheerResponse.SetDialogValue(StringUtil.EscapeJavascriptString(code));
                base.OnOK(sender, args);
            }
            else
            {
                SheerResponse.Eval($"scClose({StringUtil.EscapeJavascriptString(code)})");
            }
        }

        protected override void OnCancel(object sender, EventArgs args)
        {
            Assert.ArgumentNotNull(sender, "sender");
            Assert.ArgumentNotNull(args, "args");

            if (Mode == "webedit")
            {
                base.OnCancel(sender, args);
            }
            else
            {
                SheerResponse.Eval("scCancel()");
            }
        }

        protected string Mode
        {
            get
            {
                string str = StringUtil.GetString(base.ServerProperties["Mode"]);
                if (!string.IsNullOrEmpty(str))
                {
                    return str;
                }
                return "shell";
            }
            set
            {
                Assert.ArgumentNotNull(value, "value");
                base.ServerProperties["Mode"] = value;
            }
        }
    }
}

Simply put, we take the user's input (in this case a drop-down selection from the list) and wrap it with the <button> tag stored in the Wrapping string variable.

3. Finally, sitecore\shell\Controls\Rich Text Editor\ModalPopup\ModalPopup.js handles the client-side part of this control:
function scClose(text) {
    var returnValue = {
        Text: text
    };
 
    getRadWindow().close(returnValue);
}
 
function GetDialogArguments() {
    return getRadWindow().ClientParameters;
}
 
function getRadWindow() {

    if (window.radWindow) {
        return window.radWindow;
    }
 
    if (window.frameElement && window.frameElement.radWindow) {
        return window.frameElement.radWindow;
    }
 
    return null;
}
 
var isRadWindow = true;
 
var radWindow = getRadWindow();
 
if (radWindow) {
    if (window.dialogArguments) {
        radWindow.Window = window;
    }
}

function scOnTargetLoad() {    
}

function scOnLoad() {    
    
    let select = document.getElementById("Target");
    let list = parent.parent.parent.document.querySelectorAll('section[data-name]')
    let values = Array.from(list).map(x => x.getAttribute('data-name'));

    for (var i = 0; i < values.length; i++) {
        var opt = document.createElement('option');
        opt.innerHTML = values[i];
        opt.value = values[i];
        select.appendChild(opt);
    }
}

function scCancel() {
 
    getRadWindow().close();
}
 
function scCloseWebEdit(embedTag) {
    window.returnValue = embedTag;
    window.close();
}
 
if (window.focus && Prototype.Browser.Gecko) {
    window.focus();
}
The main trick here can be seen inside the scOnLoad() method. I created and referenced this handler to catch the exact moment the drop-down is actually created on the control - that is not trivial, as it is instantiated asynchronously, far later than the hosting control itself. Once it is created, I crawl the triple-parent iframe for the presence of modal popup windows and grab their names into the drop-down, if any are found.

That is probably the main bit of the whole blog post. The user can now select any of the names of the existing modal popup components actually dropped into the placeholder on the page - and those are the names entered only once, in the Rendering Parameters dialog forced after the control was added.

As mentioned above, I place both the new custom dialog and the modified RichText Commands.js file into the Foundation.Editing project, as per Helix guidance. That guarantees all the related code, scripts and items are kept together.

DEMO TIME

Thanks for reading and watching!

Sum-up of my PowerShell experience. Best practices

Last year I spent an enormous amount of time developing Sifon, which gave me a deep dive into the wonderful world of PowerShell. Of course, by then I already had a decent 6-7 years of experience with this scripting language, starting from its earliest version. However, the Sifon experience gave me a chance to revise all of my past experience and aggregate it into best practices.

Obviously, I expect my readers to have at least some experience with PowerShell and a partial understanding of its principles and object nature. My objective is to advise certain best practices on top of that, for increasing the readability and maintainability of the PowerShell code used in your company and, as a result, the productivity of the administrators working with it.





Styleguides

Developing scripts according to style guides is universally good practice, and there can hardly be two opinions about it. Due to the lack of any officially approved or detailed guidance from Microsoft, the community filled that gap (back in the days of PowerShell v3) and maintains it on GitHub: PowerShellPracticeAndStyle. This is a must-read repository for anyone who has ever used the "Save" button in the PowerShell ISE.

Briefly, the style guides boil down to the following statements:

  • PowerShell uses PascalCase to name variables, cmdlets, module names, and pretty much everything except operators;
  • Language statements such as if, switch, break, process, -match are exclusively written in lowercase;
  • There is only one correct way to place curly braces, also known as the Kernighan and Ritchie style, tracing its history to the book The C Programming Language;
  • Avoid using aliases anywhere other than an interactive console session; do not write things like ps | ? processname -eq firefox | %{$ws=0}{$ws+=$_.workingset}{$ws/1MB} in actual scripts;
  • Specify parameter names explicitly: a cmdlet's behavior and/or signature may change with time, making a positional call invalid. Furthermore, doing so provides context to whoever is unfamiliar with a specific cmdlet;
  • Define proper parameters for calling your scripts; do not write a function inside the script and call it on the last line, forcing users to change the values of global variables instead of specifying parameters;
  • Specify [CmdletBinding()] - this enriches your cmdlet with the -Verbose and -Debug flags and certain other useful features. I personally am not a fan of specifying this attribute in simple inline functions and filters that are literally a few lines long;
  • Write comment-based help: even just a sentence, a link to a ticket, and an example of a call;
  • Specify the required version of PowerShell in the #requires section;
  • Use Set-StrictMode -Version Latest - it will help you avoid a whole class of problems;
  • Process errors and exceptions;
  • Don't rush to rewrite everything in PowerShell. First and foremost, PowerShell is a shell, and invoking binaries is its primary task. There is nothing wrong with using robocopy in a script rather than trying to mimic all of its logic in PS.

Comment Based Help

Please find below an example of a script help implementation. The actual script crops an image to a square and resizes it; it may seem familiar to those who once made avatars for users. There is a call example in the .EXAMPLE section - try it. And since PowerShell is executed by the CLR (the same one other .NET languages use), it can employ the full power of the .NET libraries:

<#
    .SYNOPSIS
    Resize-Image resizes an image file

    .DESCRIPTION
    This function uses the native .NET API to crop a square and resize an image file

    .PARAMETER InputFile
    Specify the path to the image

    .PARAMETER OutputFile
    Specify the path to the resized image

    .PARAMETER SquareHeight
    Define the size of the side of the square of the cropped image.

    .PARAMETER Quality
    Jpeg compression ratio

    .EXAMPLE
    Resize the image to a specific size:
    .\Resize-Image.ps1 -InputFile "C:\userpic.jpg" -OutputFile "C:\userpic-400.jpg" -SquareHeight 400
#>

#requires -version 3

[CmdletBinding()]
Param(
    [Parameter(Mandatory)]
    [string]$InputFile,
    [Parameter(Mandatory)]
    [string]$OutputFile,
    [Parameter(Mandatory)]
    [int32]$SquareHeight,
    [ValidateRange(1, 100)]
    [int]$Quality = 85
)

# Add System.Drawing assembly
Add-Type -AssemblyName System.Drawing

# Open image file
$Image = [System.Drawing.Image]::FromFile($InputFile)

# Calculate the offset for centering the image
$SquareSide = if ($Image.Height -lt $Image.Width) {
    $Image.Height
    $Offset = 0
} else {
    $Image.Width
    $Offset = ($Image.Height - $Image.Width) / 2
}
# Create empty square canvas for the new image
$SquareImage = New-Object System.Drawing.Bitmap($SquareSide, $SquareSide)
$SquareImage.SetResolution($Image.HorizontalResolution, $Image.VerticalResolution)

# Draw new image on the empty canvas
$Canvas = [System.Drawing.Graphics]::FromImage($SquareImage)
$Canvas.DrawImage($Image, 0, -$Offset)

# Resize image
$ResultImage = New-Object System.Drawing.Bitmap($SquareHeight, $SquareHeight)
$Canvas = [System.Drawing.Graphics]::FromImage($ResultImage)
$Canvas.DrawImage($SquareImage, 0, 0, $SquareHeight, $SquareHeight)

$ImageCodecInfo = [System.Drawing.Imaging.ImageCodecInfo]::GetImageEncoders() |
    Where-Object MimeType -eq 'image/jpeg'

# https://msdn.microsoft.com/ru-ru/library/hwkztaft(v=vs.110).aspx
$EncoderQuality     = [System.Drawing.Imaging.Encoder]::Quality
$EncoderParameters  = New-Object System.Drawing.Imaging.EncoderParameters(1)
$EncoderParameters.Param[0] = New-Object System.Drawing.Imaging.EncoderParameter($EncoderQuality, $Quality)

# Save the image
$ResultImage.Save($OutputFile, $ImageCodecInfo, $EncoderParameters)

The above script starts with a multi-line comment <# ... #>. When such a comment comes first and contains certain keywords, PowerShell is smart enough to build help for the script from it. That's why this type of help is called exactly that - comment-based help:

Moreover, after typing the script name, IntelliSense hints will suggest the relevant parameters, regardless of whether you are in the PowerShell console or a code editor:

Once again, I want to encourage you not to neglect it. If you don't have a clue what to write there, take a break and think about your function and its purpose - that helps with writing meaningful help. It always works for me.

There's no need to fill in all the keywords; PowerShell is designed to be self-documenting, and if you've given meaningful and fully qualified names to the parameters, a short sentence in the .SYNOPSIS section plus one example will be sufficient.


Strict mode

Similar to many other scripting languages PowerShell is dynamically typed. This approach brings many benefits: writing simple yet powerful high-level logic appears to be a matter of minutes, but when your solution grows to thousands of lines, you'll face the fragility of this approach.

For example, while testing a script, you always received a set of elements as an array. In real life, it could receive just one element and at the next condition, instead of checking the number of elements, you will receive the number of characters or another attribute, depending on the element type. The script logic will definitely break, but the runtime will pretend that everything is fine.

Enforcing strict mode avoids this kind of problem, at a cost of a little more code from you, like variable initialization and explicit types casting.

This mode is enabled with the Set-StrictMode -Version Latest cmdlet; there are other options for "strictness", but my choice is Latest.

In the example below, strict mode catches a call to a nonexistent property. Since there is only one element inside the folder, the $Items variable ends up holding a single FileInfo object rather than the expected array of elements:
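
Since the original screenshot is not reproduced here, a minimal sketch of that failure (assuming C:\Nextcloud contains a single item and a Windows PowerShell 5.1 session) could look like this:

Set-StrictMode -Version Latest

# with only one item in the folder, $Items holds a single FileInfo object, not an array
$Items = Get-ChildItem C:\Nextcloud

# under strict mode this throws "The property 'Count' cannot be found on this object"
# instead of quietly returning something unexpected
$Items.Count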


To avoid such a problem, the result of running the cmdlet should be explicitly converted to an array:

$Items = @(Get-ChildItem C:\Nextcloud)

Make it a rule to always enable strict mode to avoid unexpected results from your scripts.


Error processing

ErrorActionPreference

Looking at other people's scripts, I often see either complete disregard for the error handling mechanism or an explicit forcing of the silent-continuation mode in case of an error. Error handling is certainly not the easiest topic in programming in general, and in scripts in particular, but it definitely does not deserve to be ignored. By default, if an error occurs, PowerShell displays it and continues working (I am simplifying a little). This is convenient when, for example, you urgently need to send requests to numerous hosts: it would be unproductive to interrupt and restart the whole process just because one of the machines is turned off or returns a faulty response.

On the other hand, if you are doing a complex backup of a system consisting of more than one data file across more than one part of the system, you'd better be sure that your backup is consistent and that all the necessary data sets were copied without errors.

To change the behavior of cmdlets in the event of an error, there is the global variable $ErrorActionPreference, with the following list of possible values: Stop, Inquire, Continue, Suspend, SilentlyContinue. The same behavior can also be set for an individual call with the -ErrorAction parameter:

Get-ChildItem 'C:\System Volume Information\' -ErrorAction 'Stop'

Essentially, this gives you two potential strategies: either keep the default behavior everywhere and set -ErrorAction only for the critical places where errors must be handled; or enable strict handling at the whole-script level via the global variable and set -ErrorAction 'Continue' for non-critical operations. I always prefer the second option, but I do not impose it on you - I only recommend understanding this topic and using this useful tool wisely, based on your needs.
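
A sketch of that second strategy - strict by default, relaxed only where a failure is acceptable (the host and file names below are made up):

# fail fast everywhere by default
$ErrorActionPreference = 'Stop'

# a single unreachable host should not abort the whole run, so relax the policy for this call only
Test-Connection -ComputerName 'host-042' -Count 1 -ErrorAction 'Continue'

# everything below inherits 'Stop' again and will terminate the script on error
Copy-Item 'D:\Backup\db.bak' '\\storage\backups\'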


try/catch

In the error handler, you can control the execution flow based on the exception type. Interestingly, one could build the whole execution flow out of try/catch/throw/trap - which, in fact, has a terrible "smell" and is a severe antipattern. Not only does it uglify the code into the worst sort of "spaghetti", but exception handling is also costly in .NET, and abusing it dramatically reduces script performance.

#requires -version 3
$ErrorActionPreference = 'Stop'

# create logger with a path to write
$Logger = Get-Logger "$PSScriptRoot\Log.txt"

# global errors trap
trap {
    $Logger.AddErrorRecord($_)
    exit 1
}

# connection attempt count
$count = 1;
while ($true) {
    try {
        # connection attempt
        $StorageServers = @(Get-ADGroupMember -Identity StorageServers | Select-Object -Expand Name)
    } catch [System.Management.Automation.CommandNotFoundException] {
        # there is no sense going ahead without the module, so throw an exception
        throw "Get-ADGroupMember is not available, please add the Active Directory module for PowerShell feature; $($_.Exception.Message)"
    } catch [System.TimeoutException] {
        # sleep a bit and do one more attempt, if attempt count is not exceeded
        if ($count -le 3) { $count++; Start-Sleep -S 10; continue }
        # terminate and throw exception outside
        throw "Server failed to connect due to a timeout, $count attempts done; $($_.Exception.Message)"
    }
    # since no exceptions occurred, just leave the loop
    break
}

It is worth mentioning the trap operator in case you haven't come across it before. It works as a global error trap: it catches everything that was not processed at lower levels, or that was re-thrown from an exception handler because it could not be fixed there.

Apart from object-oriented exception handling, PowerShell also provides more familiar concepts that are compatible with other "classic" shells, such as error streams, return codes, and variables accumulating errors. All this is definitely convenient, sometimes with no alternative, but is out of the scope of this overview topic. Luckily, there is a good book on GitHub highlighting this topic.

When I am not sure the target system has PowerShell v5, I use this logger compatible with version 3:

# poor man's logger, compatible with PowerShell v3
function Get-Logger {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true)]
        [string] $LogPath,
        [string] $TimeFormat = 'yyyy-MM-dd HH:mm:ss'
    )

    $LogsDir = [System.IO.Path]::GetDirectoryName($LogPath)
    New-Item $LogsDir -ItemType Directory -Force | Out-Null
    New-Item $LogPath -ItemType File -Force      | Out-Null

    $Logger = [PSCustomObject]@{
        LogPath    = $LogPath
        TimeFormat = $TimeFormat
    }

    Add-Member -InputObject $Logger -MemberType ScriptMethod AddErrorRecord -Value {
        param(
            [Parameter(Mandatory = $true)]
            [string]$String
        )
        "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') [Error] $String" | Out-File $this.LogPath -Append
    }
    return $Logger
}

But once again - do not ignore errors! 

It will save your time and nerves in the long run. Don't confuse a faulty script completing without an error with a good script. A good script will terminate when needed instead of silently continuing with an unpredicted result.


Tools

The new Windows Terminal is a decent, configurable terminal with tabs and plenty of hidden features. It is not limited to PowerShell - the old-school cmd.exe is also there, and I mostly use it for the Linux console of WSL2.

The next step is to install the modules oh-my-posh and posh-git - they make the prompt more functional by adding information about the current session, the status of the last executed command, and the state of the git repository in the current location.
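
Both modules can be installed from the PowerShell Gallery and loaded in your profile; note that oh-my-posh has since moved towards a standalone executable, so the module route may already be superseded by the time you read this:

# install the prompt helpers for the current user
Install-Module posh-git -Scope CurrentUser
Install-Module oh-my-posh -Scope CurrentUser

# import them in $PROFILE so that every new session gets the enhanced prompt
Import-Module posh-git
Import-Module oh-my-posh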

Visual Studio Code

Just the best editor! All the worst I could say about PowerShell relates exclusively to PowerShell ISE - those who remember its first three-panel version won't forget that experience. The odd terminal encoding, the lack of basic editor features such as autocomplete, auto-closing brackets and code formatting, and the whole set of antipatterns it encouraged - that is all about ISE. Don't use it; use Visual Studio Code with the PowerShell extension instead - everything you want is there.

In addition to syntax highlighting, method hints, and the ability to debug scripts, the extension installs a linter that helps you follow the practices established in the community - for example, expanding abbreviated aliases in one click (on the bulb icon). In fact, this is a regular module that can also be installed independently and, for example, added to your script-signing pipeline: PSScriptAnalyzer.
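
If you want the same checks outside the editor, the module can be run directly (the script path below is just an example):

# install the linter once
Install-Module PSScriptAnalyzer -Scope CurrentUser

# analyze a script and list rule violations
Invoke-ScriptAnalyzer -Path .\Deploy-Site.ps1

# or fail a build step when anything is reported
if (Invoke-ScriptAnalyzer -Path .\Deploy-Site.ps1) { throw 'PSScriptAnalyzer found issues' }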

It is worth remembering that any action in VS Code can be performed from the command palette, invoked with Ctrl + Shift + P: format a piece of code pasted from a chat, sort lines alphabetically, change indentation from spaces to tabs, and so on. Or, for example, enable full screen and a centered editor layout:

Git

People who mostly work alone sometimes develop a phobia of resolving conflicts in version control systems, simply because they rarely run into problems of this kind. With VS Code, conflict resolution is literally a matter of mouse clicks on the parts of the code that need to be kept or replaced. How to work with version control is described briefly and well, with pictures, in the VS Code documentation - do skim it to the end.

Snippets

Snippets are a kind of macros/templates that speed up the writing of code. Definitely, a must-use. Let's take a look at a few of them.

Quick object creation:
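
Since the snippet screenshot is not reproduced here, the expansion is roughly this (property names are made up):

# quick object creation via a typed literal
$Server = [PSCustomObject]@{
    Name   = 'web01'
    IP     = '10.0.0.15'
    Online = $true
}
$Server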


Template for comment-based help:


When a cmdlet needs a large number of parameters, it makes sense to use splatting. Here's a snippet for it:
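
If you haven't used splatting before, here is roughly what the expanded snippet gives you - parameters collected into a hashtable and passed with @ instead of one long command line (the values are illustrative):

$Report = 'All backups finished successfully'
$MailParams = @{
    To         = 'ops@contoso.com'
    From       = 'monitor@contoso.com'
    Subject    = 'Nightly backup report'
    SmtpServer = 'smtp.contoso.com'
    BodyAsHtml = $true
}

# splat the hashtable onto the cmdlet with @ instead of $
Send-MailMessage @MailParams -Body $Report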


All available snippets can be viewed by Ctrl + Alt + J:


You could improve your working environment even further - there are LOTS of goodies accumulated by the community, for example in the Awesome Lists.


Performance

The topic of performance is not as simple as it might seem at first glance. On the one hand, premature optimizations can greatly reduce the readability and maintainability of the code; saving 300ms in a script whose usual running time is ten minutes is certainly destructive in that case. On the other hand, there are several fairly simple techniques that increase both the readability of the code and its speed, and they are quite appropriate to use on an ongoing basis. Below I will cover some of them. If performance is everything for you and readability fades into the background because of strict limits on service downtime during maintenance, I recommend referring to the specialized literature.

Pipeline and foreach

The easiest and always-working way to increase performance is to avoid using pipes. For type safety and convenience, when passing elements through the pipe, PowerShell wraps each of them in an object; in .NET languages this is known as boxing. Boxing is good because it guarantees safety, but it is a heavy operation that sometimes makes sense to avoid.

To improve both performance and readability, the first step is to remove all usages of the Foreach-Object cmdlet and replace them with the foreach statement. You may be surprised to find out that these are actually two different things: foreach in a pipeline is merely an alias for Foreach-Object, while the foreach statement is a separate language construct. In practice, the main difference is that the foreach statement does not take values from the pipeline, and it works up to three times faster!
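
A quick way to see the difference on your own machine (the numbers will vary; the collection here is purely synthetic):

$Numbers = 1..1000000

# pipeline + ForEach-Object: every element travels through the pipe
(Measure-Command { $Numbers | ForEach-Object { $_ * 2 } }).TotalMilliseconds

# foreach statement: no pipeline involved, usually several times faster
(Measure-Command { foreach ($n in $Numbers) { $n * 2 } }).TotalMilliseconds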

Let's think of a task where we need to process a large log to get some result, for example, selecting and converting its records to a different format:

Get-Content D:\temp\SomeHeavy.log | Select-String '328117'

The above example looks nice and is easy to read at first glance. At the same time, it contains a performance bottleneck - the pipe. To be fair, it is not the pipeline itself to blame, but the behavior of the Get-Content cmdlet: to feed the pipeline, it reads the file line by line, wrapping each line of the log from a plain string into an object. That increases its size greatly, reduces the number of data objects that fit in the cache, and does unwanted work.

Avoiding this is simple - you just need to indicate that data must be read in full at one time by setting ReadCount to 0:

Get-Content D:\temp\SomeHeavy.log -ReadCount 0 | Select-String '328117'
For my 1GB log file, the second approach is almost three times faster:


I advise you not to take my word for it but to check for yourself whether this holds with a file of similar size. In general, Select-String gives a good result, and if the final time suits you, it's time to stop optimizing this part of the script. If the overall execution time still strongly depends on the data-retrieval stage, you can reduce it a bit further by replacing the Select-String cmdlet. It is a very powerful and convenient tool, but to be so, Select-String adds a certain amount of metadata to its output - again, work that is not free. We can refuse the unnecessary metadata and the related work by replacing the cmdlet with the language operator:

foreach ($line in (Get-Content D:\temp\SomeHeavy.log -ReadCount 0)) {
    if ($line -match '328117') {
        $line
    }
}

As per my tests, the execution time decreased to 30 seconds, meaning I gained about 30%. The overall readability of the code decreased only a little. But if you operate on tens of gigabytes of logs, this is your way, no doubt.

Another thing I would like to mention is the -match operator - a search by regular expression pattern. In this particular case the search boils down to simple substring matching, but it is not always that simple. A regular expression can be arbitrarily complex, and its execution time can grow dramatically - complex patterns are prone to catastrophic backtracking, so be careful with them.

The next task is not only to filter the heavy log but also to write the result to a file. Let's start with the straightforward solution: prepend the current date to each selected line and send it to the file through a pipe:

foreach ($line in (Get-Content D:\temp\SomeHeavy.log -ReadCount 0)) {
    if ($line -match '328117') {
        "$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line" | Out-File D:\temp\Result.log -Append
    }
}

And the measurement results done by the Measure-Command cmdlet:

Hours             : 2
Minutes           : 20
Seconds           : 9
Milliseconds      : 101

Now, let's improve the result.

It is obvious that writing the file line by line isn't optimal; it is much better to accumulate a buffer that is periodically flushed to disk - ideally flushed only once. It is also worth remembering that strings in .NET (and therefore in PowerShell as well) are immutable: any string manipulation allocates memory for a new string, while the old one waits for the garbage collector. This is expensive both in terms of speed and memory.

.NET has a specialized class that solves this problem by allowing strings to be modified while encapsulating the logic for more careful memory allocation - StringBuilder. When an instance is created, a buffer is allocated in RAM, and new lines are appended to it without re-allocating memory; if the buffer is not big enough to hold a new line, a new one twice as large is created and the work continues with it. Besides greatly reducing the number of memory allocations, this strategy can be tuned further: if you know the approximate amount of memory the lines will occupy, you can pass it to the constructor when creating the object.

$StringBuilder = New-Object System.Text.StringBuilder
foreach ($line in (Get-Content D:\temp\SomeHeavy.log -ReadCount 0)) {
    if ($line -match '328117') {
        $null = $StringBuilder.AppendLine("$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line")
    }
}
Out-File -InputObject $StringBuilder.ToString() -FilePath D:\temp\Result.log -Append -Encoding UTF8

The execution time of this code is only about 5 minutes, instead of the previous two hours and twenty minutes:

Hours             : 0
Minutes           : 5
Seconds           : 37
Milliseconds      : 150

It is worth noting the Out-File -InputObject construction - its purpose is, once again, to get rid of the pipeline. This approach is faster than a pipe and works with all cmdlets: any value that a cmdlet would normally receive from the pipe can instead be passed via a parameter. The easiest way to find out which parameter accepts pipeline input is to run Get-Help on the cmdlet with the -Full switch; the right parameter will contain "Accept pipeline input? true (ByValue)".
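
For Out-File that check is simply:

Get-Help Out-File -Full

and the relevant fragment of its output looks like this: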

-InputObject <psobject>

    Required?                    false
    Position?                    Named
    Accept pipeline input?       true (ByValue)
    Parameter set name           (All)
    Aliases                      None
    Dynamic?                     false

In both cases, PowerShell limited itself to three gigabytes of memory.

The previous approach is fairly good, except perhaps for that 3GB of memory consumption.

Let's try to reduce memory consumption and use another .NET class written to solve such problems - StreamReader:

$StringBuilder = New-Object System.Text.StringBuilder
$StreamReader  = New-Object System.IO.StreamReader 'D:\temp\SomeHeavy.log'
while ($line = $StreamReader.ReadLine()) {
    if ($line -match '328117') {
        $null = $StringBuilder.AppendLine("$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line")
    }
}
$StreamReader.Dispose()
Out-File -InputObject $StringBuilder.ToString() -FilePath C:\temp\Result.log -Append -Encoding UTF8
Hours             : 0
Minutes           : 5
Seconds           : 33
Milliseconds      : 657

The execution time has remained almost the same, but the memory consumption pattern has changed. In the previous example, when reading the file into memory, space for the entire file (more than a gigabyte in my case) was taken at once, and the script's memory usage was about three gigabytes. With the stream reader, the memory occupied by the process grew slowly until it reached about 2GB. I did not manage to capture the final amount of memory occupied, but here is a screenshot of what it looked like towards the end of the run:

The program's memory behavior is quite intuitive: its input is, roughly speaking, a "pipe", and the output is our StringBuilder - a "pool" that keeps filling until the end of the program.

Let's set a buffer size of 100MB to avoid unnecessary allocations and start dumping the contents to the file when approaching the end of the buffer. I implemented it in a straightforward way, by checking whether the buffer has passed the 90% mark of its total size (moving this check out of the loop would make it even better):

$BufferSize     = 104857600
$StringBuilder  = New-Object System.Text.StringBuilder $BufferSize
$StreamReader   = New-Object System.IO.StreamReader 'C:\temp\SomeHeavy.log'
while ($line = $StreamReader.ReadLine()) {
    if ($line -match '1443') {
        # check whether we are approaching the end of the buffer
        if ($StringBuilder.Length -gt ($BufferSize - ($BufferSize * 0.1))) {
            Out-File -InputObject $StringBuilder.ToString() -FilePath C:\temp\Result.log -Append -Encoding UTF8
            $null = $StringBuilder.Clear()
        }
        }
        $null = $StringBuilder.AppendLine("$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line")
    }
}
Out-File -InputObject $StringBuilder.ToString() -FilePath C:\temp\Result.log -Append -Encoding UTF8
$StreamReader.Dispose()
Hours             : 0
Minutes           : 5
Seconds           : 53
Milliseconds      : 417

The maximum memory consumption was 1 GB with almost the same execution speed:

Of course, the absolute memory figures will differ from one machine to another; it all depends on how much memory is available and how hard the runtime has to work to free it. If memory is critical for you, but a few percent of performance is not, you can reduce the consumption even further by using StreamWriter.
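
A sketch of that variant - the same filter, but each matching line goes straight to a StreamWriter instead of accumulating in a large in-memory buffer (the paths reuse the ones above):

$StreamReader = New-Object System.IO.StreamReader 'D:\temp\SomeHeavy.log'
$StreamWriter = New-Object System.IO.StreamWriter('D:\temp\Result.log', $true)   # $true - append
try {
    while ($null -ne ($line = $StreamReader.ReadLine())) {
        if ($line -match '328117') {
            # the decorated line is written straight to disk
            $StreamWriter.WriteLine("$(Get-Date -UFormat '%d.%m.%Y %H:%M:%S') $line")
        }
    }
} finally {
    $StreamWriter.Dispose()
    $StreamReader.Dispose()
}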

In my opinion, the idea is clear - the main thing is to localize the problem. There's no need to re-invent the wheel: most of these problems have already been solved, so it is always worth looking for a solution in the standard library first. But if Select-String and Out-File suit your timings and the machine does not crash with an OutOfMemoryException, then use them - simplicity and readability are more important.


Native binaries

Often, amazed by all the convenience of PowerShell, developers begin to favor built-in cmdlets over system binaries. On the one hand this is understandable - convenience is king - but on the other, PowerShell is first of all a shell, and launching binaries is its primary purpose. It does that pretty well, so why not use them?

A good example is getting the relative paths of all files in a directory and its subdirectories (with lots of files).

Using the native command, the execution took about five times less time:

$CurrentPath = (Get-Location).Path + '\'
$StringBuilder = New-Object System.Text.StringBuilder
foreach ($Line in (&cmd /c dir /b /s /a-d)) {
    $null = $StringBuilder.AppendLine($Line.Replace($CurrentPath, '.'))
}
$StringBuilder.ToString()
Hours             : 0
Minutes           : 0
Seconds           : 3
Milliseconds      : 9

$StringBuilder = New-Object System.Text.StringBuilder
foreach ($Line in (Get-ChildItem -File -Recurse | Resolve-Path -Relative)) {
    $null = $StringBuilder.AppendLine($Line)
}
$StringBuilder.ToString()
Hours             : 0
Minutes           : 0
Seconds           : 16
Milliseconds      : 337

Assigning the output to $null is the cheapest and easiest way of suppressing output. The most expensive, as you might have guessed already, is piping the output to Out-Null.

Moreover, such a suppression (assigning the result to $null) also reduces the execution time, albeit not that much:

# works faster:
$null = $StringBuilder.AppendLine($Line)

# works slower:
$StringBuilder.AppendLine($Line) | Out-Null

Once I had the task of synchronizing directories containing a large number of files. It was only a part of a rather large script - a kind of preparation stage. The directory synchronization using Compare-Object looked decent and compact, but it required more time than the entire budget I had allocated. The way out was robocopy.exe, wrapped (as a PowerShell 5 class) into a compromise solution. Here's the code:

class Robocopy {
    [String]$RobocopyPath

    Robocopy () {
        $this.RobocopyPath = Join-Path $env:SystemRoot 'System32\Robocopy.exe'
        if (-not (Test-Path $this.RobocopyPath -PathType Leaf)) {
            throw 'Robocopy not found'
        }

    }
    [void]CopyFile ([String]$SourceFile, [String]$DestinationFolder) {
        $this.CopyFile($SourceFile, $DestinationFolder, $false)
    }
    [void]CopyFile ([String]$SourceFile, [String]$DestinationFolder, [bool]$Archive) {
        $FileName   = [IO.Path]::GetFileName($SourceFile)
        $FolderName = [IO.Path]::GetDirectoryName($SourceFile)

        $Arguments = @('/R:0', '/NP', '/NC', '/NS', '/NJH', '/NJS', '/NDL')
        if ($Archive) {
            $Arguments += $('/A+:a')
        }
        $ErrorFlag = $false
        &$this.RobocopyPath $FolderName $DestinationFolder $FileName $Arguments | Foreach-Object {
            if ($ErrorFlag) {
                $ErrorFlag = $false
                throw "$_ $ErrorString"
            } else {
                if ($_ -match '(?<=\(0x[\da-f]{8}\))(?<text>(.+$))') {
                    $ErrorFlag   = $true
                    $ErrorString = $matches.text
                } else {
                    $Logger.AddRecord($_.Trim())
                }
            }
        }
        if ($LASTEXITCODE -eq 8) {
            throw 'Some files or directories could not be copied'
        }
        if ($LASTEXITCODE -eq 16) {
            throw 'Robocopy did not copy any files. Check the command line parameters and verify that Robocopy has enough rights to write to the destination folder.'
        }
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, '*.*', '', $false)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [Bool]$Archive) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, '*.*', '', $Archive)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [String]$Include) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, $Include, '', $false)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [String]$Include, [Bool]$Archive) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, $Include, '', $Archive)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [String]$Include, [String]$Exclude) {
        $this.SyncFolders($SourceFolder, $DestinationFolder, $Include, $Exclude, $false)
    }
    [void]SyncFolders ([String]$SourceFolder, [String]$DestinationFolder, [String]$Include, [String]$Exclude, [Bool]$Archive) {
        $Arguments = @('/MIR', '/R:0', '/NP', '/NC', '/NS', '/NJH', '/NJS', '/NDL')
        if ($Exclude) {
            $Arguments += $('/XF')
            $Arguments += $Exclude.Split(' ')
        }
        if ($Archive) {
            $Arguments += $('/A+:a')
        }
        $ErrorFlag = $false
        &$this.RobocopyPath $SourceFolder $DestinationFolder $Include $Arguments | Foreach-Object {
            if ($ErrorFlag) {
                $ErrorFlag = $false
                throw "$_ $ErrorString"
            } else {
                if ($_ -match '(?<=\(0x[\da-f]{8}\))(?<text>(.+$))') {
                    $ErrorFlag = $true
                    $ErrorString = $matches.text
                } else {
                    $Logger.AddRecord($_.Trim())
                }
            }
        }
        if ($LASTEXITCODE -eq 8) {
            throw 'Some files or directories could not be copied'
        }
        if ($LASTEXITCODE -eq 16) {
            throw 'Robocopy did not copy any files. Check the command line parameters and verify that Robocopy has enough rights to write to the destination folder.'
        }
    }
}

Attentive readers will ask: how come Foreach-Object is used in a class that fights for performance? That is a valid question, and this example is one of the exceptions where the cmdlet is the right choice. Unlike the foreach statement, the Foreach-Object cmdlet does not wait for the command feeding the pipe to finish - processing happens as a stream, which in this particular situation means, for example, that exceptions are thrown immediately rather than after the whole synchronization completes. Parsing the output of an external utility is exactly the right place for this cmdlet.

Using the wrapper described above is trivially simple - all you need is to add exception handling:

$Robocopy = New-Object Robocopy

# copy a single file
$Robocopy.CopyFile($Source, $Dest)

# syncing folders
$Robocopy.SyncFolders($SourceDir, $DestDir)

# syncing .xml only with setting an archive bit
$Robocopy.SyncFolders($SourceDir, $DestDir, '*.xml', $true)

# syncing all the files except *.zip *.tmp *.log with setting an archive bit
$Robocopy.SyncFolders($SourceDir, $DestDir, '*.*', '*.zip *.tmp *.log', $true)

I wrote this code several years ago, so the implementation may not be the best. The example above uses PowerShell classes, which carry the overhead of crossing from PowerShell to the CLR and back, actively consuming the stack. In addition, classes have some drawbacks:

  • they require PowerShell version 5, though as of 2021 that is not too critical
  • they do not support generating help from comments
  • they do not work nicely in pipelines (ValueFromPipeline/ValueFromPipelineByPropertyName), although you can still work around that via % { SomeClass.Method($_.xxx) }
  • the whole class must live within a single file
  • when a class is defined within a module, it cannot easily be consumed from outside
  • classes are incomplete: there are no namespaces, private fields, or getters/setters

But thinking more broadly: do we really need to run several instances of Robocopy, which is what would justify a class? Instead, you can simply package Robocopy as a module:

Import-Module robocopy.psm1

# copy a single file
Robocopy-CopyFile $Source $Dest

# syncing folders
Robocopy-SyncFolders $SourceDir $DestDir

# syncing .xml only with setting an archive bit
Robocopy-SyncFolders $SourceDir $DestDir -Include '*.xml' -Archive $true

# syncing all the files except *.zip *.tmp *.log with setting an archive bit
Robocopy-SyncFolders $SourceDir $DestDir -Include '*.*' -Exclude '*.zip *.tmp *.log' -Archive $true

A few more tips to improve your code:

  • Take a look at Plaster for generating boilerplate for modules.
  • Did you know you can unit-test PowerShell? Pester is a test framework for PowerShell. It provides a language that allows you to define test cases and the Invoke-Pester cmdlet to execute these tests and report the results.

Coming back to script performance - it is a tricky topic. Micro-optimizations may take more time to implement than the benefit they bring, while costing you the code's maintainability and readability; the cost of supporting such a solution can exceed any profit from it.

At the same time, there are a number of simple recommendations that make your code simpler, clearer, and faster if you just start applying them consistently:

  • use the foreach statement instead of the Foreach-Object cmdlet in scripts
  • minimize the number of pipelines
  • read/write files at once, instead of line-by-line
  • use StringBuilder and other specialized classes
  • profile your code to understand bottlenecks before optimizing it
  • do not avoid executing native binaries

And once again: do not rush to optimize something without a real need as premature optimization can break it all.


Jobs

Suppose you have optimized everything and reached a compromise between readability and speed, yet an operation is still lengthy or the data is huge, and you still need to reduce the running time. In this case, parallel execution of parts of the code becomes the answer (provided, obviously, that the machine has the necessary resources).

Since the second version of PowerShell, cmdlets for working with jobs have been available (Get-Command *-Job); you can read more here. There is nothing conceptually complicated about jobs: you define a script block, run the task, and collect the results at the right time:

$Job = Start-Job -ScriptBlock {
    Write-Output 'Good night'
    Start-Sleep -S 10
    Write-Output 'Good morning'
}

$Job | Wait-Job | Receive-Job
Remove-Job $Job

The code above is intended solely for demonstration purposes and carries no practical value. As a great example of using jobs, I recommend playing with and debugging this multithreaded script for network ping.

One caveat that looks like a non-problem but deserves to be mentioned: each job wants a bit of memory to be fast and is launched as a full-fledged operating system process, with all the pros and cons of that approach.

In fact, every job runs in its own runspace inside a separate process, which reserves 30-50 megabytes, as any other .NET process would. But unlike other .NET apps, this is not all truly consumed memory - much of it is just a cache that speeds up the work. Of course, if you launch 100 parallel jobs for asynchronous pinging, it will take a few GBs of RAM, but that is doable on most modern machines.
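
As an illustration, a naive parallel ping with jobs might look like this (the host list is made up):

$Hosts = 'web01', 'web02', 'sql01', 'sql02'

# start one job per host; each runs in its own powershell.exe process
$Jobs = foreach ($h in $Hosts) {
    Start-Job -ScriptBlock {
        param($Name)
        [PSCustomObject]@{
            Host   = $Name
            Online = Test-Connection -ComputerName $Name -Count 1 -Quiet
        }
    } -ArgumentList $h
}

# wait for all of them and collect the results
$Results = $Jobs | Wait-Job | Receive-Job
$Jobs | Remove-Job
$Results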

Jobs let you run any of your parallel tasks conveniently. Be sure to study this mechanism: jobs are the best choice for solving a problem simply, at a fairly high level of abstraction, while keeping the number of lines to a minimum. Just keep in mind that your scripts must remain readable after you - scripts are written for people.

But it sometimes happens that this abstraction is no longer enough because of its architectural limitations - for example, it is difficult in this paradigm to build an interactive GUI with values on the form bound to variables.


Runspaces

A whole series of articles on the Microsoft blog is devoted to the concept of runspaces, and I highly recommend referring to the primary source - Beginning Use of PowerShell Runspaces: Part 1. In short, a runspace is a separate PowerShell thread that runs in the same operating system process, and therefore does not carry the overhead of spawning a new process.

If you like the concept of lightweight threads and want to launch dozens of them (there is no concept of channels in PowerShell), then I have good news: for convenience, all the low-level logic is already wrapped into the more familiar concept of jobs in this module repository on GitHub (there are gifs). In the meantime, I'll demonstrate how to work with runspaces natively.

As an example of using runspaces, let's build a simple WPF form that lives in the same OS process as the main PowerShell script but runs in a separate runspace thread. Interaction happens through a thread-safe hashtable, so you do not need to add any complexity like mutexes - it just works. The advantage of this approach is that the main script can run logic of any complexity and duration without blocking the UI and "freezing" the form (take a look at the last line of the script below - despite it sleeping for half a minute at a time, nothing blocks).

In this specific example, only one runspace is launched, although nothing stops you from creating a few more (and a pool for them, for convenience).

# Thread-synchronized hashtable
$GUISyncHash = [hashtable]::Synchronized(@{})

<#
    WPF form
#>
$GUISyncHash.FormXAML = [xml](@"
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Sample WPF Form" Height="510" Width="410" ResizeMode="NoResize">
    <Grid>
        <Label Content="Sample form" HorizontalAlignment="Left" Margin="10,10,0,0" VerticalAlignment="Top" Height="37" Width="374" FontSize="18"/>
        <Label Content="From:" HorizontalAlignment="Left" Margin="16,64,0,0" VerticalAlignment="Top" Height="26" Width="48"/>
        <TextBox x:Name="BackupPath" HorizontalAlignment="Left" Height="23" Margin="69,68,0,0" TextWrapping="Wrap" Text="" VerticalAlignment="Top" Width="300"/>
        <Label Content="To:" HorizontalAlignment="Left" Margin="16,103,0,0" VerticalAlignment="Top" Height="26" Width="35"/>
        <TextBox x:Name="RestorePath" HorizontalAlignment="Left" Height="23" Margin="69,107,0,0" TextWrapping="Wrap" Text="" VerticalAlignment="Top" Width="300"/>
        <Button x:Name="FirstButton" Content="√" HorizontalAlignment="Left" Margin="357,68,0,0" VerticalAlignment="Top" Width="23" Height="23"/>
        <Button x:Name="SecondButton" Content="√" HorizontalAlignment="Left" Margin="357,107,0,0" VerticalAlignment="Top" Width="23" Height="23"/>
        <CheckBox x:Name="Check" Content="Check me" HorizontalAlignment="Left" Margin="16,146,0,0" VerticalAlignment="Top" RenderTransformOrigin="-0.113,-0.267" Width="172"/>
        <Button x:Name="Go" Content="Run" HorizontalAlignment="Left" Margin="298,173,0,0" VerticalAlignment="Top" Width="82" Height="26"/>
        <ComboBox x:Name="Droplist" HorizontalAlignment="Left" Margin="16,173,0,0" VerticalAlignment="Top" Width="172" Height="26"/>
        <ListBox x:Name="ListBox" HorizontalAlignment="Left" Height="250" Margin="16,210,0,0" VerticalAlignment="Top" Width="364"/>
    </Grid>
</Window>
"@)

<#
    Form thread
#>
$GUISyncHash.GUIThread = {
    $GUISyncHash.Window       = [Windows.Markup.XamlReader]::Load((New-Object System.Xml.XmlNodeReader $GUISyncHash.FormXAML))
    $GUISyncHash.Check        = $GUISyncHash.Window.FindName("Check")
    $GUISyncHash.GO           = $GUISyncHash.Window.FindName("Go")
    $GUISyncHash.ListBox      = $GUISyncHash.Window.FindName("ListBox")
    $GUISyncHash.BackupPath   = $GUISyncHash.Window.FindName("BackupPath")
    $GUISyncHash.RestorePath  = $GUISyncHash.Window.FindName("RestorePath")
    $GUISyncHash.FirstButton  = $GUISyncHash.Window.FindName("FirstButton")
    $GUISyncHash.SecondButton = $GUISyncHash.Window.FindName("SecondButton")
    $GUISyncHash.Droplist     = $GUISyncHash.Window.FindName("Droplist")

    $GUISyncHash.Window.Add_SourceInitialized({
        $GUISyncHash.GO.IsEnabled = $true
    })

    $GUISyncHash.FirstButton.Add_Click({
        $GUISyncHash.ListBox.Items.Add('Click FirstButton')
    })

    $GUISyncHash.SecondButton.Add_Click({
        $GUISyncHash.ListBox.Items.Add('Click SecondButton')
    })

    $GUISyncHash.GO.Add_Click({
        $GUISyncHash.ListBox.Items.Add('Click GO')
    })

    $GUISyncHash.Window.Add_Closed({
        Stop-Process -Id $PID -Force
    })

    $null = $GUISyncHash.Window.ShowDialog()
}

$Runspace = @{}
$Runspace.Runspace = [RunspaceFactory]::CreateRunspace()
$Runspace.Runspace.ApartmentState = "STA"
$Runspace.Runspace.ThreadOptions = "ReuseThread"
$Runspace.Runspace.Open()
$Runspace.psCmd = { Add-Type -AssemblyName PresentationCore, PresentationFramework, WindowsBase }.GetPowerShell()
$Runspace.Runspace.SessionStateProxy.SetVariable('GUISyncHash', $GUISyncHash)
$Runspace.psCmd.Runspace = $Runspace.Runspace
$Runspace.Handle = $Runspace.psCmd.AddScript($GUISyncHash.GUIThread).BeginInvoke()

Start-Sleep -S 1

$GUISyncHash.ListBox.Dispatcher.Invoke("Normal", [action] {
    $GUISyncHash.ListBox.Items.Add('Hi there!')
})

$GUISyncHash.ListBox.Dispatcher.Invoke("Normal", [action] {
    $GUISyncHash.ListBox.Items.Add('Populating a dropdown')
})

foreach ($item in 1..5) {
    $GUISyncHash.Droplist.Dispatcher.Invoke("Normal", [action] {
        $GUISyncHash.Droplist.Items.Add($item)
        $GUISyncHash.Droplist.SelectedIndex = 0
    })
}

$GUISyncHash.ListBox.Dispatcher.Invoke("Normal", [action] {
    $GUISyncHash.ListBox.Items.Add('While ($true) { Start-Sleep -S 10 }')
})

while ($true) { Start-Sleep -S 30 }


Note: you do not need full Visual Studio just for drawing WPF forms - use this simple tool to quickly generate the desired markup.


WinRM

It is mostly known as PowerShell Remoting and is one of my most used PowerShell features. Simply put, it allows you to use the full power of PowerShell in the context of any remote machine, so that all the code you submit executes on that machine. Of course, it is possible to pass parameters into a remotely-running script and to get the results back.

To make the magic happen, WinRM must be enabled and configured on both the client and the server - understandable, given the ultimate power it grants.

Client

As for me, PowerShell here feels like it was written by the paranoid: everything that is not explicitly allowed is forbidden, so connecting to servers is forbidden too. I am exaggerating a little, of course - if everything lives in the same domain, it all works transparently. But in my situation the machine is outside the domain and not even on the same network, so I decided not to be too picky and simply trust all hosts:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value * -Force
Restart-Service WinRM

Server

It looks like a simple PowerShell one-liner, but there is a catch: enabling PowerShell Remoting cannot itself be done remotely. You will need either physical access or RDP. Eventually, you will run:

Enable-PSRemoting -SkipNetworkProfileCheck -Force

Despite being so short, this command performs a whole series of actions:

  • starts the WinRM service and sets its startup type to Automatic
  • creates a listener
  • adds firewall exceptions
  • enables all registered PowerShell session configurations to receive instructions from remote machines
  • registers the "Microsoft.PowerShell" session configuration if it is not registered yet, and does the same for the x64 configuration
  • removes the "Deny Everyone" entry from the security descriptor of all session configurations
  • restarts the WinRM service

Once both server and client get configured, it becomes possible to enter an interactive remote session, so that everything you do happens in the remote context:

Enter-PSSession -ComputerName RemoteMachine -Credential "RemoteMachine\Administrator"

Optionally, you can execute remote code from a ScriptBlock on the host machine in a similar manner:

$pass = ConvertTo-SecureString -AsPlainText '123' -Force
$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList 'Martin',$pass
Invoke-Command -ComputerName 192.168.173.14 -ScriptBlock { Get-ChildItem C:\ } -credential $Cred

You can do whatever you want - for example, copy files both ways:

$pass = ConvertTo-SecureString -AsPlainText '123' -Force                                       # 123 - password
$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList 'Martin',$pass      # Martin - username
$Session = New-PSSession -ComputerName RemoteMachine -Credential $Cred                        
$remotePath = Invoke-Command -Session $Session -ScriptBlock { New-Item -ItemType Directory -Force -Path FolderName }
Copy-Item -Path "c:\Some\Path\To\Local\Script.ps1" -Destination $remotePath.FullName -ToSession $Session
Remove-PSSession $Session

Remoting however has certain restrictions:

  • you cannot make a second hop - you get only one session deep and cannot connect further from inside it
  • you cannot use commands that have a graphical interface - if you do, the shell hangs until Ctrl + C is pressed
  • you cannot run commands that have their own shell, for example nslookup or netsh
  • you can run scripts only if the execution policy on the remote machine allows them to run
  • you cannot attach to an interactive session; you enter with a network logon, as if you had attached to a network drive. Therefore, logon scripts will not run, and you may not get the home folder on the remote machine (an extra reason not to map home folders with logon scripts)
  • you will not be able to interact with users on the remote machine even if they are logged in: you cannot show them a window or print anything to them

Once done with the interactive session, you can leave it with Exit-PSSession.

The whole concept of Sifon - an application I've built for installing and managing Sitecore on both local and remote machines - relies on WinRM along with runspaces, threads, and plenty of other tricks described in this blog post.

Conclusion

PowerShell is a powerful and easy-to-use environment for working with Windows infrastructure. It is good conceptually, convenient thanks to its syntax and self-documenting cmdlet names, and it can show off both itself as an environment and you as a specialist - you just need to understand its concepts and start having fun. And of course, this technology fully justifies its title, as it is truly a Power Shell.


A nice way of using HTML Helper for accessing Rendering Parameters along with Glass Mapper

I really love Glass Mapper because (once configured) it takes away much of the manual effort of wiring up your models.

Working with Rendering Parameters often, I decided to simplify their usage with a strongly-typed HTML Helper powered by Glass Mapper. Here is what the usage looks like:

<div class="@(Html.GetRenderingParametersClassFor<ISingleClass>(m => m.Class))">
   your content here
</div>

It benefits from simple usage and from IntelliSense support. Here's an example of what ISingleClass looks like:

[SitecoreType(TemplateId = ISingleClassConstants.TemplateIdString)]
public partial interface ISingleClass : IGlassBase
{   
    [SitecoreField(ISingleClassConstants.ClassFieldName)]
    Guid Class { get; }
}

In Sitecore, items of the given type may look something like below:


OK, won't make it any longer: here's the code that does all the magic:

public static class HtmlHelperExtensions
{
    public static string GetRenderingParametersClassFor<T>(this HtmlHelper html, Func<T, object> getField, string fieldName = "Class") where T : class
    {
        var selectedItem = GetSelectedRenderingParameter(html, getField);
        return selectedItem?[fieldName] ?? "";
    }


    private static Item GetSelectedRenderingParameter<T>(this HtmlHelper html, Func<T, object> getField) where T : class
    {
        T renderingParameters = GetRenderingParameters<T>(html);

        var selectedItemId = getField(renderingParameters)?.ToString();
        return Context.Database.GetItem(selectedItemId);
    }

    public static T GetRenderingParameters<T>(this HtmlHelper html) where T : class
    {
        //TODO: wire-up ISitecoreService to get resolved via DI of your choice
        var sitecoreService = new SitecoreService(PageContext.Current.Database);

        var parameters = RenderingContext.CurrentOrNull.Rendering["Parameters"];
        var nameValueCollection = WebUtil.ParseUrlParameters(parameters);
        var config = sitecoreService.GlassContext[typeof(T)] as SitecoreTypeConfiguration;

        var renderingParametersModelFactory = new RenderingParametersModelFactory(sitecoreService);
        return renderingParametersModelFactory.CreateModel<T>(nameValueCollection, config.TemplateId);
    }
}


Bonus: in some cases you may also want to select an HTML tag from rendering parameters - for example, your editors could choose which heading tag to use for your rendering (H1, H2, H3, H4, or maybe just a plain paragraph P tag):

In that case you can add one more method into the above HTML helper class:

public static MvcTag TagFrom<T>(this HtmlHelper html, Func<T, object> getField, string className = null) where T : class
{
    var selectedItem = GetSelectedRenderingParameter(html, getField);
    string tag = selectedItem?["Tag"] ?? "";

    html.ViewContext.ViewBag.Tag = tag;

    if (!string.IsNullOrWhiteSpace(tag)) 
    {
        var tagBuilder = new TagBuilder(tag);
    
        if (!string.IsNullOrWhiteSpace(className))
        {
            tagBuilder.Attributes.Add("class", className);
        }
        
        html.ViewContext.Writer.Write(tagBuilder.ToString(TagRenderMode.StartTag));
    }

    return new MvcTag(html.ViewContext);
}

Once compiled, you can render your tags as below:

@using (Html.TagFrom<ITagRenderingParameter>(m => m.Tag, "tag_class"))
{
    @Html.Glass().Editable(m => m.Title)
}

And choose which HTML tag just from a Rendering Parameters dropdown:

Hope this helps!

XBlog on Sitecore 10? That's possible!

I was using XBlog as a nice and simple solution for maintaining blogs on Sitecore (at least for editors), but unfortunately the original repo has not kept up with the progress of Sitecore XP - the last update was about 3 years ago for some Sitecore 9 related configs, while the rest of it is 6 years old.

I tried using the 'Sitecore 9' branch of the original repo, but unfortunately it did not go well. Installing it from the packages gave issues even on the declared 9.X versions, which in fact cover a very wide range of changes between 9.0.1 and 9.3 - to say nothing of version 10.*. Therefore:

I decided to refactor it instead!

You may find the successful result of this exercise published in my corresponding GitHub repository. It was worked out for 10.1 and tested well there, but since all 10.X versions share the same runtime, it should work universally on all of them.

A few notes on what has been done:

  • much unwanted legacy stuff, such as support for WebForms, has been entirely removed
  • the IDs of items have been retained, which helps when upgrading an existing solution with thousands of posts
  • changes were made to reflect the reworked Content Search of the XP platform
  • a bug was found and fixed in the Bucket items creation logic, related to event suppression
  • serialization was switched to the one using CLI, officially released as part of Sitecore 10
  • a few more minor changes and improvements in references, runtime, configuration, etc.

Please also note: XBlog uses fast query, which is declared deprecated as of Sitecore 10.1, but that particular code still works perfectly well - I cross-tested it in the debugger to confirm that the fast queries return the expected results. Just for future reference, fast queries are used in the code\Areas\XBlog\Buckets\BucketFolderConfigurationManager.cs and code\Areas\XBlog\Import\ImportManager.cs files.

The resulting code can be found here: XBlog for Sitecore 10.1