
Experience Sitecore!

Martin Miles on Sitecore


Upgrading Sitecore like a Pro

Sooner or later you'll face the need for a Sitecore version upgrade. However, each Sitecore instance is unique; there is no universal upgrade method, so every upgrade path is bespoke.

In the past 10 years of working with Sitecore I have done numerous upgrades, which resulted in an Upgrade Planning Strategy and a set of Upgrade Tips and Tricks. These tips can save you two to three times (!) the effort compared to a head-on approach.
Last but not least: versions 10 and 10.1 brought new challenges, along with options for migrating existing solutions into containers. I will explain your options and propose the best approach for this as well.

I have submitted a session proposal for SUGCON 2021 and keep my fingers crossed to be chosen, so that I can share the whole presentation and slides by updating this blog post.

There is still no "silver bullet" for Sitecore upgrades, but following the tips from my session will eliminate risks, reduce effort, and give you confidence while upgrading your instances.

I am going to tell you about:

  1. How to approach the whole instance upgrade
  2. What are the upgrade time-wasters
  3. Common and potential traps to avoid
  4. Dealing with configuration upgrades, and why they're not as complicated as they initially seem
  5. Upgrading your solution codebase, including ORM (Glass etc.) and DI
  6. Employing automations with PowerShell
  7. Upgrading databases and how to approach them
  8. Even more automation to add
  9. How to treat deprecated Sitecore APIs and obsolete code
  10. Migrating forms from WFFM
  11. Upgrading to version 10 containers and how 10.1 changes the upgrades
  12. Testing your upgrade strategies
  13. Going live
  14. Summary of tips and findings from this session
Stay tuned!

Everything you wanted to ask about "Items-as-Resources" coming with the new Sitecore 10.1

Sitecore 10.1 brings a new Items-as-Resources option, which raises plenty of questions.
  • What is it used for?
  • Why did we get it at all?
  • Any concerns about using it?
Please find the answers below:


1. Before 10.1, upon installation you were given the initial set of OOB items in the databases. That includes default templates, layouts, workflows and the rest of the scaffolding items.

2. Now, with 10.1, all of these are supplied as resource files outside of the database. That's correct: these items no longer reside in the database. Yet you still see them in your content tree as normal.

3. Do the databases come empty, then? Not quite - there are just two entries for the default site (page) you normally see first after a successful Sitecore instance installation, at the root URL. It was decided not to put these into resources, as most customers delete that default home page anyway.

4. Are these resources read-only? Yes, Sitecore cannot write back into those resource files. Treat them as if they were burnt onto a CD, but with immediate access.

5. So does that mean I cannot modify default OOB items in Sitecore anymore? No, you actually can edit them as normal after choosing "Unprotect item" from the Content Editor ribbon. What happens in that case is that Sitecore takes the delta between the initial value stored in the resource file and your changes, and stores only that delta - just the changes you've made - in the database. When the item is "consumed", its current state is calculated from the resource file plus that delta.

6. You cannot delete these items, however. Sitecore prompts that the item originates from a resource file and therefore cannot be deleted. Still good, as it leaves less room for silly errors anyway.

7. So where are these resource files located? They live (quite predictably) within the App_Data folder: App_Data\items\<DATABASE_NAME>\items.<DATABASE_NAME>.dat (by default).
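If you are curious to see them on disk, a quick way is to list that folder (the instance root below is a placeholder):

# List the item resource files of an instance (adjust the instance root path)
Get-ChildItem "C:\inetpub\wwwroot\myinstance.local\App_Data\items" -Recurse -Filter *.dat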

8. What format are these resource files? Protobuf (Protocol Buffers) from Google. It is a mature format that has been proven in production for at least a decade.

9. How can I create my own resources?
Officially - you cannot. It is technically possible, but it requires a very deep dive into Protocol Buffers, raw database storage and the new data provider in Sitecore. However, Sitecore will likely start providing authors of popular modules with tooling to create such resources with ease. So, say, for SXA you would no longer need to install the SPE + SXA packages yourself; instead, you would simply drop the resource files provided by the SXA team underneath the items folder, without even needing to publish afterwards.

10. Why is there no need to publish? Because you copy the resource file for the web database as well - the items are already in the web database. Of course, all the items created by you will still need to be published.

11. But why did Sitecore introduce this at all?
The main reason is to simplify the platform version upgrade process. The way an upgrade is done has changed.
You may have noticed that the "Upgrade options" section on the Sitecore download page has changed: instead of Sitecore Update Packages you now get the Sitecore UpdateApp Tool, which operates against each specific version you want to upgrade from. This tool removes the default items of that particular legacy version and replaces them with the resource files. Of course, it also updates the schema with the changes, as was already available before.

12. The bigger reason for this change was the "think containers - think ahead" approach. With this change it becomes easier to upgrade a version of Sitecore running in containers: everything in the database is now entirely the user's custom data and can be copied wholesale to a newer database, while the version-specific, system-related items are updated by simply substituting a resource file.

13. Also, you may have heard that Fast Query has been deprecated. This is the exact reason why: if something isn't in the database, Sitecore cannot efficiently build the relationship graph that fast query relies on.


14. What is that data provider mentioned above?
That is a new one called CompositeDataProvider, which the familiar DefaultDataProvider now inherits from. The name "composite" implies that it takes care of merging items for the Sitecore tree from both the database and the resources. In the configuration you specify it for an individual database under the <database> section; you can also change the location of the resources by patching the <filePath> node of <protobufItems> and overriding the location.
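For illustration, a patch that relocates the resources for the master database could look roughly like the sketch below. Only the <database>, <protobufItems> and <filePath> node names come from the description above; the surrounding structure and the custom path are assumptions to verify against your instance's Sitecore.config:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <databases>
      <database id="master">
        <dataProviders>
          <dataProvider>
            <param desc="readOnlyDataProviders" hint="list">
              <protobufItems>
                <filePaths hint="list">
                  <!-- assumption: a custom folder holding the .dat resource files -->
                  <filePath>/App_Data/customResources/master</filePath>
                </filePaths>
              </protobufItems>
            </param>
          </dataProvider>
        </dataProviders>
      </database>
    </databases>
  </sitecore>
</configuration>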

15. For the end consumer of the DataProvider (high up the stack) nothing changes, as their code still uses DefaultDataProvider. The changes occur at an intermediate level, and those happen to be internal to Sitecore.

16. This actually opens up much wider potential for creating intermediate-level providers for things other than Protobuf and SQL databases: CRMs, DAMs, maybe some other headless CMSs. In any case, this is a very important and greatly welcomed step ahead for the platform!

Update: there is another great blog post on the same topic from my MVP colleague Jeremy Davis, where he also tries drilling into these resource files with the protobuf-net library.

I have won a Sitecore Technology MVP 2021 award!

This year I am celebrating my fifth year in a row as a Sitecore MVP! I am very excited to announce that I have been named a Most Valuable Professional (MVP) by Sitecore for 2021 - the most prestigious award in the whole Sitecore ecosystem!

As a Technology MVP, I am one of only 170 Sitecore professionals worldwide awarded an MVP title in this category. It really means a lot to me to be part of such a great community and to be able to contribute to sharing knowledge within it.


Sharing Sitecore Identity Server between two independent instances of Sitecore

Imagine a case where you need to run two Sitecore instances in parallel, next to each other. That may be caused by several legitimate reasons.

WHY?

For example, in my case I am moving (by reworking, not just migrating) some functional areas from a legacy instance to a new one that features SXA. The legacy instance has been passed from hand to hand, accumulating numerous configuration artifacts and offering limited maintenance options, so it has become next to impossible to combine it with brand-new SXA functionality under the same roof. I know that is doable in principle (and I have done it myself before), but the amount of maintenance and the lack of knowledge / documentation on the existing codebase make that route inappropriately risky and unacceptable. Therefore, it is reasonable to keep both in isolation, uniting them only at the URL level (by rewriting a (sub)domain of the new instance into a folder of the primary domain).

Things to consider in that case: you will almost double your infrastructure and the related expenses, and you should check whether your Sitecore licence permits such a setup. I am currently running quite an unusual setup where both of the above concerns give me a green light to go ahead, so I am OK to run both instances in parallel (as an on-prem solution).

Once that is agreed, the next thought goes to the Identity Server: keeping two instances of it for the same activity does not make much sense. Maintaining both is exhausting, and the good news is that you can re-use an existing ID Server for any number of instances (namely, CM boxes). It comes down to a couple of extra steps, and below I will show you how.

HOW?

Let's assume we have two instances, called old and new. The old one has all the bits configured and running, so we only want to re-use the old instance's ID Server with the new instance.

1. Get rid of the ID Server of the new instance (you can stop its web app and app pool for now). That makes sure it is not used.

2. Find the Sitecore.Owin.Authentication.IdentityServer.config file on the new instance (App_Config\Sitecore\Owin.Authentication.IdentityServer.config) and change the identityServerAuthority variable to point to the existing ID Server:

    <sc.variable name="identityServerAuthority" value="https://old.identityserver" />


3. Now the CM of the new instance knows which ID Server to talk to, but will the ID Server accept those calls? The answer is no, unless you explicitly permit it to do so. Navigate to the Config\production folder of the old instance's ID Server and add an additional allowed CORS origins group to the Sitecore.IdentityServer.Host.xml file. You will end up with something like this:

<AllowedCorsOrigins>
  <AllowedCorsOriginsGroup1>https://old</AllowedCorsOriginsGroup1>
  <AllowedCorsOriginsGroup2>https://new</AllowedCorsOriginsGroup2>
</AllowedCorsOrigins>


4. There is also an Identity Server secret stored further down in the same XML file, with a matching counterpart in App_Config\ConnectionStrings.config, so you also need to update the new instance's config with the value from the shared Identity Server:

<add name="sitecoreidentity.secret" connectionString="SECRET_from_ID_Server" />


5. Finally, recycle the Identity Server application pool, and then you're ready to test it. To make things more visual, I've recorded all the steps and the testing, and I am sharing the resulting video below:
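If you prefer to script that step, the standard WebAdministration module can do the recycle (the app pool name below is a placeholder for your shared ID Server's pool):

Import-Module WebAdministration
# Recycle the shared Identity Server application pool so it picks up the new CORS origins
Restart-WebAppPool -Name "old.identityserver"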


Things to consider: as you're re-using the old instance's existing Identity Server, it will itself re-use all of its assets. When it comes to Active Directory this brings the desired result, but as for internal users (those you normally have in the Sitecore domain) - they will all get re-used as well, including admin. This happens because the ID Server holds a reference to the core database (or the security database extracted from the core), and by default that one belongs to the old instance too.

Hope you find this helpful!

What is a Reverse Proxy and what do you need one for?

There are a variety of Reverse Proxy solutions on the market; you may have already heard of some, such as nginx, HAProxy or Varnish. Major cloud providers also have their own proprietary solutions, like Azure Application Gateway or the AWS Application Load Balancer.


But what is a Reverse Proxy? And why "Reverse"?
As Wikipedia puts it, it is a common type of proxy server that is accessible from the public network. Large websites and content delivery networks use reverse proxies - together with other techniques - to balance the load between internal servers. Reverse proxies can keep a cache of static content, which further reduces the load on these internal servers and the internal network. It is also common for reverse proxies to add features such as compression or TLS encryption to the communication channel between the client and the reverse proxy.

Reverse proxies are typically owned or managed by the web service, and they are accessed by clients from the public internet. In contrast, a forward proxy is typically managed by a client (or their company) who is normally restricted to a private, internal network. The client can, however, access the forward proxy, which then retrieves resources from the public internet on behalf of the client. Here's a reverse proxy in action, viewed from a very high level:


What are typical scenarios for using a Reverse Proxy?

1. SSL Offload. Let's assume we've got a website that works over HTTP only, and for some reason (legacy code, developers who have left, being unable to bring changes into a running solution that may be huge, or anything else) it is not possible to change the website itself - the "If it works, don't touch it" paradigm in action. For compliance, we must add HTTPS support to that website.
With a Reverse Proxy this becomes a really quick and easy fix - we don't need developers at all. All we need is to ask our Ops professional to stand up a proxy server with SSL termination (obviously, we'll also need SSL certificates for the domain hostname(s) of the given website). Job done!
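To make this scenario tangible: with IIS (plus the ARR and URL Rewrite modules) acting as the proxy - the same tooling as in the IIS walkthrough linked at the end of this post - an offloading rule could be sketched as below. The proxy site is bound to HTTPS with the certificate, the backend hostname is a placeholder, and the whole thing is an illustration rather than a production-ready config:

<!-- web.config on the proxy site; assumes ARR proxying is enabled in IIS -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- TLS terminates at the proxy; the backend is reached over plain HTTP -->
        <rule name="SslOffload" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="http://backend.internal/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>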


2. Load Balancer. Next, we want to scale that website horizontally, so we deploy two equal client-facing copies of it. How do we "split" the traffic so that it is distributed equally to both sites? For this we introduce a proxy server functioning as a Load Balancer.
But what if one of the websites dies or crashes halfway down the road? The Load Balancer needs to know somehow that each of the "boxes" is functioning well, and react to outages by re-distributing traffic to the machines that remain healthy. This is traditionally implemented by "pinging" a so-called "HealthCheck" URL on each particular box. As soon as one of the healthchecks keeps failing, an alert is raised and traffic is no longer routed to the faulty box (be careful with sticky sessions!).
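Staying with IIS/ARR for the sketch, a web farm with such a healthcheck is configured roughly as below (server addresses, URL and response text are placeholders; verify the attribute names against your ARR version):

<!-- applicationHost.config sketch: two backends behind an ARR farm with a healthcheck -->
<webFarms>
  <webFarm name="myFarm" enabled="true">
    <server address="10.0.0.10" enabled="true" />
    <server address="10.0.0.11" enabled="true" />
    <applicationRequestRouting>
      <!-- a box stops receiving traffic once its healthcheck no longer returns "Healthy" -->
      <healthCheck url="http://myFarm/healthcheck" interval="00:00:10" responseMatch="Healthy" />
    </applicationRequestRouting>
  </webFarm>
</webFarms>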



3. Cybersecurity enforcement. By sending specially formed packets, hackers can mount a Denial-of-Service attack, exploiting the fact that sending a request is many times cheaper than serving it back. At some point your servers won't cope with this parasitic workload and will fail.
To prevent that, dangerous traffic should be filtered out at the proxy before it ever reaches your servers. Namely, a firewall with an adequate rule set filters out all patterns with anomalies, raises alerts and lets the legitimate requests through.


4. Caching and compression. Even with purely legitimate traffic beyond the proxy, one may still face a heavy request load. How come? There may be different reasons, such as usage patterns where all users navigate to the same resource-heavy area of a website; alternatively, the website itself may have been written by junior developers who did not take enough care over how optimally it functions once deployed. Regardless of the reason, we can still soften things up by identifying the popular endpoints that consume most of the server's resources and caching them right at the proxy level. Assuming a given set of parameters always returns the same result, there is no need to keep spending expensive server resources on producing results we already computed in the past; such requests are served from the proxy cache and never reach the servers at all. If this traffic must reach the end servers, we can at least compress / encode the "last mile" beyond the proxy.
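Again in IIS terms, output caching for an identified hot endpoint plus compression for whatever still passes through can be sketched as follows (the extension, duration and other values are illustrative):

<!-- web.config sketch: cache popular output at the proxy, compress the rest -->
<configuration>
  <system.webServer>
    <caching enabled="true" enableKernelCache="true">
      <profiles>
        <add extension=".aspx" policy="CacheForTimePeriod" duration="00:05:00" varyByQueryString="*" />
      </profiles>
    </caching>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>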


5. Smooth automated deployments. Why not? Have you ever heard of Blue-Green Deployments? With those in action, end users won't even realize that you're upgrading the solution while they're browsing your site.


6. A/B Testing. Following on from the previous point, you may have updated some but not all of the end servers. Rather than updating them all at once, you'd like to perform A/B testing on both sets and, based on the results, decide whether to complete the update or roll back to the most recent version. This is a perfectly valid scenario that a reverse proxy can handle for you.


7. URL and Links Rewriting. What if you have a legacy website that functions perfectly well, but, similar to scenario (1), there is no way (and no need) to maintain it? The development team has gone, and in any case there is no reason to invest heavily into something that will be dismissed at some stage. At the same time, you have other website(s) that could either be successors to the legacy one, or additional areas written in isolation with more modern tools and thus either incompatible or expensive to merge with the existing solution. However, the business wants everything to function under the same main domain name, just in different "folders" beneath it, so that end users (and search robots!) see no difference between the constituent parts and naturally experience them as a single, solid website.
Achieving that is also possible with a Reverse Proxy, by rewriting URLs. Please note that not only must an external request coming to site.com/company1 be rewritten to www.company1.com, but all the internal URLs within the returned pages need to be rewritten as well. Also note that this only becomes possible in conjunction with SSL offload; otherwise the traffic stays encrypted end to end and the proxy cannot act as the "man in the middle" that does the rewriting.
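Sketched with IIS URL Rewrite (the tooling used in the walkthrough mentioned below), the inbound rule maps the folder onto the second site, while an outbound rule fixes absolute links in the returned HTML; all hostnames are placeholders:

<!-- web.config sketch: proxy site.com/company1/* to www.company1.com and rewrite links back -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="ProxyCompany1" stopProcessing="true">
          <match url="^company1/(.*)" />
          <action type="Rewrite" url="http://www.company1.com/{R:1}" />
        </rule>
      </rules>
      <outboundRules>
        <!-- rewrite absolute links in HTML responses back under the /company1 folder -->
        <rule name="FixLinks" preCondition="IsHtml">
          <match filterByTags="A, Form, Img" pattern="^http://www\.company1\.com/(.*)" />
          <action type="Rewrite" value="/company1/{R:1}" />
        </rule>
        <preConditions>
          <preCondition name="IsHtml">
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
          </preCondition>
        </preConditions>
      </outboundRules>
    </rewrite>
  </system.webServer>
</configuration>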

And not just that - about 6 years ago I wrote a walkthrough on how one can achieve that same result purely and entirely by means of IIS on Windows.


Conclusion.
This article gives a high-level explanation of Reverse Proxies and their primary features. It intentionally does not focus on any specific implementation, avoiding deeper technical details.

In real-world solutions you will, of course, meet reverse proxy products that combine several of these features. This may be a typical workflow for processing inbound traffic with a Reverse Proxy:



In any case, I hope you found this helpful!

Merry Christmas and happy New Year!

Every year I create a special Christmas postcard to greet my readers ahead of the oncoming new year, full of changes and opportunities. I wish you all the best in 2021!

My artwork from the past years:
2020


2019


2018


2017


2016


The easiest way of installing Solr cores for SXA search

A few years ago I made a walkthrough on setting up search for an SXA-based website. However, it does not cover one important thing: out of the box, SXA uses two indexes of its own. Following the good practice of "one index - one core", it is assumed that you create an additional core for each of those indexes.

It is also assumed that you follow the same naming principle used for the existing Sitecore Solr cores, naming them <INSTANCEPREFIX>_sxa_master_index and <INSTANCEPREFIX>_sxa_web_index. But how do you create those cores?
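For context, creating such cores by hand comes down to calling Solr's CoreAdmin API against an already prepared, Sitecore-compatible configset - roughly like the sketch below, where the Solr URL, prefix and configset name are placeholders. The rest of this post shows a much easier way:

# Sketch: create both SXA cores via Solr's CoreAdmin API (adjust URL, prefix, configset)
$solr = "https://localhost:8983/solr"
$prefix = "myinstance"
foreach ($index in "sxa_master_index", "sxa_web_index") {
    Invoke-RestMethod "$solr/admin/cores?action=CREATE&name=${prefix}_$index&configSet=sitecore_configs"
}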

I would like to introduce the easiest way of achieving that: using Sifon and its existing plugin for creating SXA Solr cores. All you need to do is update the plugins from the public repository by executing the "Plugins" - "Get Sifon plugins" command.

Once updated, you will see the corresponding plugin in the menu:


And that's it! After a bit of waiting, Sifon will complete the installation of both new cores into the Solr instance referenced by the selected profile. You'll see the confirmation:


As it prompts, the cores have been created at the named location and the managed schema has also been published, but you still need to rebuild these new indexes. Sifon has a plugin for that as well (the fastest way, but SPE Remoting must be enabled for that instance):


Alternatively, you can use the Control Panel as you normally would:


Confirmation:


Hope this helps you enjoy search with SXA, which really takes it to a new level!

Cannot use a Hyper-V VHDX disk image that was shared with you? Here is how to manage it properly

Cannot use a Hyper-V VHDX disk image that was shared with you? It could be a colleague's shared VM or a disk exported from an Azure VM. Below I describe the steps required to manage it properly.

The first thing one is likely to try is either importing this disk, or creating a new VM without a disk and attaching the image later. Despite seeming absolutely valid, that will end up with:

If you see the above error, it most likely means you've created a Generation 2 Hyper-V VM and attached your drive to it. That won't work - but why?

To answer this question, let's define the difference between the two generations. The main fact about Generation 1 is that it emulates hardware: all required hardware components must be emulated to make the virtual machine work! Hyper-V includes special software that imitates the behaviour of real hardware, so the VM can operate with virtual devices. The emulated hardware (which behaves identically to real hardware) has drivers in most operating systems, providing high compatibility.

In Generation 1, the emulation performs far worse than native equipment and comes with numerous limitations. Among those:

  • a legacy network adapter
  • IDE controllers, with only 2 devices attachable to each
  • MBR, with a maximum disk size of 2TB and no more than 4 partitions.

Generation 2 uses:

  • UEFI BIOS
  • GPT support (without the legacy size and partition limits) and Secure Boot
  • because of the above - VMs boot (and operate) much faster
  • fewer legacy devices and new faster synthetic hardware used instead
  • more efficient CPU and RAM consumption

UEFI is not just a replacement for BIOS; UEFI extends support for devices and features, including GPT (GUID Partition Table).

Secure Boot is a feature that protects against tampering with boot loaders and main system files, by verifying digital signatures that must be trusted by the OEM.


The solution is quite simple.

What you need to do is create a Generation 1 VM (without creating a new disk drive) and reference the existing VHDX file. That will work, and the OS will load (slowly, but it will). The laziest person would probably stop at this stage, but let's progress ahead and convert it into a Generation 2 VM. But how?
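If you script your Hyper-V work, the PowerShell equivalent is short (the VM name, memory size and path are placeholders):

# Create a Generation 1 VM around the shared VHDX and boot it
New-VM -Name "SharedDisk-Gen1" -Generation 1 -MemoryStartupBytes 4GB -VHDPath "C:\Hyper-V\shared-disk.vhdx"
Start-VM -Name "SharedDisk-Gen1"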

All you need to do is convert the legacy BIOS layout to UEFI by running the following command inside the Generation 1 VM:

mbr2gpt.exe /convert /allowFullOS

Pay attention to the warning: from now on, you need to boot in UEFI mode.

Now you can turn off the Windows VM, delete the existing Generation 1 VM from Hyper-V Manager (but not the disk drive!), and then create a new Generation 2 VM referencing the converted VHDX disk image. Also make sure the attached hard drive comes first in the boot sequence order (Firmware tab); you may also want to untick the Enable Secure Boot switch on the Security tab.
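This is scriptable too, with the same placeholder names; note that Set-VMFirmware covers both the boot order and Secure Boot:

# Recreate the VM as Generation 2 against the now-GPT disk
New-VM -Name "SharedDisk-Gen2" -Generation 2 -MemoryStartupBytes 4GB -VHDPath "C:\Hyper-V\shared-disk.vhdx"
# Boot from the attached virtual disk first; disable Secure Boot if the guest OS requires it
Set-VMFirmware -VMName "SharedDisk-Gen2" -FirstBootDevice (Get-VMHardDiskDrive -VMName "SharedDisk-Gen2") -EnableSecureBoot Off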

Now you can successfully boot the hard drive you received inside a fast and reliable Generation 2 VM that manages your host machine's resources in a much more reliable and efficient way!
