Experience Sitecore!

More than 200 articles about the best DXP by Martin Miles

Upgrading Sitecore like a Pro

Sooner or later you'll face it: the need for a Sitecore version upgrade. However, each Sitecore instance is unique; there is no universal method for an upgrade, so each upgrade path is bespoke.

Over the past 10 years of working with Sitecore I have done numerous upgrades, which resulted in an Upgrade Planning Strategy and a set of Upgrade Tips and Tricks. These tips can cut the effort spent by a factor of 2-3 (!) compared to a head-on approach.
Last but not least, versions 10 and 10.1 brought new challenges and options for migrating existing solutions into containers. I will explain your options and propose the best approach for this as well.

I have submitted a session proposal for SUGCON 2021, so fingers crossed that it gets chosen. Once that happens, I will share the whole presentation and slides by updating this blog post.

There is still no "silver bullet" for Sitecore upgrades, but following the tips from my proposed session will eliminate risks, reduce effort, and give you confidence while upgrading your instances.

If chosen, I will tell you about:

  1. How to approach the whole instance upgrade
  2. What are the upgrade time-wasters
  3. Common and potential traps to avoid
  4. Dealing with configuration upgrade, and why it's not as complicated as it initially seems
  5. Upgrading your solution codebase, including ORM (Glass etc.) and DI
  6. Employing automation with PowerShell
  7. Upgrading databases and how to approach them
  8. Even more automation to add
  9. How to treat deprecated Sitecore APIs and obsolete code
  10. Migrating forms from WFFM
  11. Upgrading to version 10 containers and how 10.1 changes the upgrades
  12. Testing your upgrade strategies
  13. Going live
  14. Summary of tips and findings from this session

Stay tuned!

HTTPS, SSL and TLS FAQs

SSL is sometimes misunderstood, and certain aspects of it are often misinterpreted. So I have put together these FAQs, starting from the very basics, to clear up any confusion.

1. What is SSL?

SSL stands for Secure Sockets Layer, a protocol used to encrypt and authenticate data transmitted between applications, such as a browser and a web server.

2. Where did SSL come from?

SSL version 1.0 was developed by Netscape in the early 1990s, but due to security flaws it was never released to the public. The first public release was SSL 2.0, which came out in February 1995. It was an improved version, but it still had weaknesses, so after a complete redesign version 3.0 was approved.

3. What is its purpose?

The SSL protocol prevents attackers from reading and modifying transactions and messages between the browser and the server, for example, the transfer of credit card data, logins, etc. This ensures that all data remains confidential and protected.

4. What is a TLS certificate?

TLS stands for Transport Layer Security - the successor to and improved version of SSL. It has more reliable encryption algorithms, but despite some similarities it is considered a different standard.

5. What protocols and versions are used nowadays?

SSL, surprisingly, is no longer used at all. As described above, there were three versions initially (1.0, 2.0, and 3.0). The TLS 1.0 protocol was then created on the basis of SSL 3.0, and TLS was subsequently upgraded to 1.1 and 1.2, after which 1.3 came out. Currently, only the last two are in use.

6. How does an SSL certificate operate?

When the browser connects to the site, an "SSL handshake" occurs ("handshake" keys are created), then cryptographic data is exchanged and, in the end, session keys are formed, on the basis of which the traffic is encrypted.
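
You can observe the outcome of a handshake yourself. Here is a minimal PowerShell sketch using .NET's SslStream (the hostname is just a placeholder): it connects, performs the TLS handshake, and prints what was negotiated:

$hostname = "www.example.com"
$client = New-Object System.Net.Sockets.TcpClient($hostname, 443)
$ssl = New-Object System.Net.Security.SslStream($client.GetStream())
$ssl.AuthenticateAsClient($hostname)    # this call performs the handshake

"Protocol: $($ssl.SslProtocol)"         # e.g. Tls12 or Tls13
"Cipher:   $($ssl.CipherAlgorithm) ($($ssl.CipherStrength) bit)"
"Issuer:   $($ssl.RemoteCertificate.Issuer)"

$ssl.Dispose(); $client.Dispose()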

7. What types of certificates exist, and what's the difference between them?

  • SSC (Self-Signed Certificate) - you can create one yourself (see the sketch after this list), but it carries no trust
  • SGC (Server Gated Cryptography) - for very old browsers
  • SAN/UCC (Unified Communications Certificates) - multi-domain, for MS Exchange
  • Code Signing - for signing software
  • Wildcard SSL Certificate - for a domain and its subdomains
  • DV (Domain Validation) - confirms the domain name
  • OV (Organisation Validation) - checks the organization, address, and location
  • EV (Extended Validation) - gives the most protection
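
On Windows, creating a self-signed certificate for testing is a one-liner in PowerShell - a minimal sketch (the DNS name is a placeholder):

# creates a self-signed certificate and puts it into the current user's Personal store
New-SelfSignedCertificate -DnsName "test.local" -CertStoreLocation "Cert:\CurrentUser\My"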

8. Are there any other types available?

Yes. For example, a Multi-Domain SSL certificate covers several domains, so it does not require purchasing a certificate for each individual domain, which simplifies administration. Code Signing certificates are designed to protect your code, content, and other files while they are being transferred online.

9. What data is stored inside an SSL certificate?

An SSL certificate contains the following information:

  • Version
  • Serial number
  • Signature algorithm ID
  • Issuer name
  • Validity period (not before / not after)
  • The subject of the certificate
  • Subject's public key information
  • Issuer unique identifier
  • Subject unique identifier
  • Extensions
  • Signature
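
You can inspect these fields yourself - a minimal PowerShell sketch (the file name is a placeholder):

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(".\site.cer")
$cert | Format-List Version, SerialNumber, Issuer, NotBefore, NotAfter, Subject
$cert.SignatureAlgorithm.FriendlyName    # e.g. sha256RSA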

10. Are SSL and HTTPS actually the same thing?

No, they aren't. 

SSL is a security protocol used to establish a secure connection between a web browser and a web server. 

HTTPS is the HTTP protocol running on top of SSL/TLS: the regular HTTP message is encrypted before sending and decrypted upon delivery.

11. How do I verify that a site uses SSL/HTTPS correctly?

Most browsers display a padlock and/or a message in the address bar to mark secure connections protected by SSL/TLS certificates. The computer's OS maintains a list of root certificates that make up a chain of trust.

12. What is a Chain of Trust?

When opening a site, the browser checks the entire chain of certificates, up to the root. If even one of the certificates is invalid, the entire chain is considered invalid as well.

13. Why are SSL certificates so expensive?

SSL certificates can be viewed as a kind of insurance; they are expensive due to the guarantees that come with them. If something goes wrong, the SSL certificate provider pays compensation to the affected end user.

14. Are there free certificates, and how secure are they?

Free certificates do exist and perform the same function as paid ones, but they usually provide only the lowest validation level (DV) and typically carry no warranty, i.e. they are safe only to an extent. If something goes wrong, your site visitors will not be able to claim compensation. The best-known free certificate provider is Let's Encrypt.

15. How does one buy a certificate?

If your hosting provider does not offer any free SSL certificates, or you need a different type of certificate, you can purchase your own from a Certificate Authority or from a discount reseller, which will be cheaper.

16. What is a Certificate Authority then?

A Certificate Authority (CA) is an organization that issues SSL certificates and is responsible for them. The CA verifies the domains and the sources that authenticate organizations. Certificate authorities are trusted by 99% of browsers.

17. What is the process for obtaining a certificate once payment is complete?

First, you generate a CSR and a private key. Then you send the CSR to the Certificate Authority (CA) and get back an archive with text/binary files for different types of web servers. Finally, you install it on your hosting or your server.
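
As an illustration, generating the key and the CSR with OpenSSL (assuming openssl is on your PATH; file names and the domain are placeholders):

# generate a 2048-bit private key and a certificate signing request in one step
openssl req -new -newkey rsa:2048 -nodes -keyout example.key -out example.csr -subj "/CN=example.com"

# inspect the CSR before sending it to the CA
openssl req -in example.csr -noout -text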

18. How do I use an SSL/TLS certificate?

  1. Deploy your site on the hosting provider's server.
  2. Create a CSR (certificate signing request) and buy a certificate from a trusted CA or reseller.
  3. Install the SSL/TLS certificate in the hosting control panel or on the server.
  4. Create a redirect to the protected version of the site - HTTPS.

19. How can I check if the certificate is configured correctly?

One popular way is to visit the Qualys SSL Labs website, enter the site name, and click the submit button. The service will check the configuration and give it a grade. If the site receives an A or A+, the certificate is working correctly.

20. Are SSL certificates and SEO somehow related?

If your site is not secured with an SSL certificate, Google marks it as "not secure". In addition, if you want your site to rank high in the search results, you will need it marked as "safe". But the main goal, of course, remains the transport-layer security of the connection.

21. What is the validity period of a certificate?

A commercial certificate is valid for 1 year. Certificate validity has been steadily decreasing: from 10 years in 2011, to 3 years in 2015, to two years in 2018, and to one year now. Free certificates are valid for 90 days.

22. Why don't SSL certificates last forever?

Security standards keep changing, so old certificates must be kept up to date. In addition, a defined certificate life cycle reduces the risks associated with a possible loss of the private key.

23. Does SSL protect site data?

Any information sent over the Internet passes through network equipment, servers, computers, and other connections, and if the data is not encrypted, it is vulnerable to attacks.

An SSL/TLS certificate helps protect the data transmitted between the visitor and the site, and vice versa. But this does not mean that the information stored on the server itself is completely protected - the administrator must decide how to protect it, for example by encrypting the database.

24. What are symmetric keys and public/private keys used with certificates?

With symmetric encryption, the same key both encrypts and decrypts, so it must somehow be shared secretly between the parties. That is why almost all encryption schemes in use today employ public/private key pairs for establishing trust and exchanging keys - considered much more secure than sharing a symmetric key directly. With public and private keys, two keys are used that are mathematically related (they belong together as a key pair) but are different.

This means a message encrypted with a public key cannot be decrypted with that same public key. To decrypt the message you need the private key.
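
A minimal PowerShell sketch of that asymmetry, using .NET's built-in RSA class (OAEP-SHA1 padding chosen here for broad compatibility; the message is a placeholder):

$rsa = [System.Security.Cryptography.RSA]::Create()   # generates a fresh key pair
$padding = [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA1

$message = [System.Text.Encoding]::UTF8.GetBytes("card number 1234")

# anyone holding the public key can encrypt...
$cipher = $rsa.Encrypt($message, $padding)

# ...but only the private key holder can decrypt
$plain = $rsa.Decrypt($cipher, $padding)
[System.Text.Encoding]::UTF8.GetString($plain)        # -> "card number 1234"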

25. How do asymmetric keys and SSL certificates work together?

Public keys can be made available to anyone, which is why they are called public. That raises a concern of trust, specifically: how do you know that a particular public key belongs to the person or entity it claims to belong to? For example, you receive a key claiming to belong to your bank. How do you know that it really does?

The answer is to use a digital certificate. A certificate serves the same purpose as a passport does in everyday life: a passport establishes a link between a photo and a person, and that link is verified by a trusted authority (the passport office).

Likewise, a digital certificate provides a link between a public key and an entity (a business, a domain name, etc.) that has been verified (signed) by a trusted third party (a certificate authority). A digital certificate thus provides a convenient way of distributing trusted public encryption keys.

26. What are the various SSL certificate formats?

An SSL certificate is essentially an X.509 certificate. X.509 is a standard that defines the structure of the certificate, i.e. the data fields that an SSL certificate should include. The main encoding formats and file extensions are:

  • PEM. Most Certificate Authorities provide certificates in PEM format, in Base64 ASCII-encoded files. The file extensions can be .pem, .crt, .cer, or .key. A single .pem file can include the server certificate, the intermediate certificate, and the private key; the server certificate and intermediate certificate can also live in separate .crt or .cer files, with the private key in a .key file. The actual data sits between the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- statements, with a similar convention for keys and certificate requests.
  • PKCS#7 also uses Base64 ASCII encoding, with the file extension .p7b or .p7c. Only certificates can be stored in this format, not private keys. P7B certificates are contained between the -----BEGIN PKCS7----- and -----END PKCS7----- statements.
  • DER certificates come in binary form, in .der or .cer files. These certificates are mainly used by Java-based web servers.
  • PKCS#12 also uses binary form, in .pfx or .p12 files. It can store the server certificate, the intermediate certificate, and the private key in a single password-protected .pfx file. These certificates are mainly used on the Windows platform.
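
Converting between these formats is a frequent chore; here are a few common OpenSSL one-liners (assuming openssl is on your PATH; all file names are placeholders):

# PEM -> DER
openssl x509 -in cert.pem -outform der -out cert.der

# DER -> PEM
openssl x509 -in cert.der -inform der -out cert.pem

# PEM certificate + private key -> password-protected PKCS#12 (.pfx)
openssl pkcs12 -export -in cert.pem -inkey cert.key -out cert.pfx

# PKCS#12 -> PEM (certificates and unencrypted key in one file)
openssl pkcs12 -in cert.pfx -out cert.pem -nodes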

27. What is a trusted store?

It is a list of CA certificates that you trust. All web browsers come with a list of trusted CAs.

28. Can I add my own CA to my browser's trusted store?

On Windows, when you right-click a certificate file you should see an Install option. I am not sure about other operating systems.
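
On Windows this can also be scripted with PowerShell - a minimal sketch (run elevated; the file name is a placeholder):

# adds a CA certificate to the machine-wide Trusted Root store
Import-Certificate -FilePath .\myRootCA.cer -CertStoreLocation Cert:\LocalMachine\Root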

29. What is a certificate fingerprint?

A fingerprint is a hash of the actual certificate and can be used to verify the certificate without needing the CA certificate installed. This is very useful on small devices that don't have a lot of memory to store CA files. It is also used for manually verifying a certificate.
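
A minimal PowerShell sketch for computing one (the file name is a placeholder):

# .Thumbprint is the certificate's SHA-1 fingerprint as a hex string
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(".\cert.cer")
$cert.Thumbprint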

Hope this blog post helps you eliminate some gaps or misunderstandings about SSL and TLS!

Everything you wanted to ask about "Items-as-Resources", coming with the new Sitecore 10.1

Sitecore 10.1 brings the new Items-as-Resources option, which raises plenty of questions:
  • What is it used for?
  • Why did we get it at all?
  • Are there any concerns with using it?
Please find the answers below:


1. Before 10.1, you were given the initial set of OOB items in the databases upon installation. That includes the default templates, layouts, workflows, and the rest of the scaffolding items.

2. Now, with 10.1, all of these are supplied as resource files outside of the database. That's right: these items no longer reside in the database. And yes, you still see them in your content tree as normal.

3. Do the databases come empty? Not quite - there are just two entries for the default site (page) you normally see first at the URL root after a successful Sitecore instance installation. It was decided not to put these into resources, as most customers delete that default home page anyway.

4. Are these resources read-only? Yes, Sitecore cannot write back into those resource files. Treat them as if they were written onto a CD, but with immediate access.

5. So does that mean I can no longer modify the default OOB items in Sitecore? No - you can actually edit them as normal after clicking "Unprotect item" in the Content Editor ribbon. What happens in that case is that Sitecore takes the delta between the initial value stored in the resource file and your changes, and stores only that delta in the database. On item "consumption", the current state of the item is calculated from the resource file plus that delta.

6. But you cannot delete these items: Sitecore prompts that the item originates from a resource file and therefore cannot be deleted. Still good, as it leaves less potential for silly errors anyway.

7. So where are these resource files located? They sit (quite predictably) within the App_Data folder: App_Data\items\<DATABASE_NAME>\items.<DATABASE_NAME>.dat (by default).

8. What format are these resource files in? Protobuf (Protocol Buffers) from Google. It is a surprisingly old format, proven over at least a decade.

9. How can I create my own resources?
Officially, you cannot. Well, it is technically possible, but it requires a very deep dive into Protocol Buffers, raw database storage, and the new data provider in Sitecore. However, Sitecore will likely start providing authors of popular modules with the toolset to create such resources with ease. So, say, for SXA you would no longer need to install the SPE + SXA packages yourself; instead you would simply drop the resource files provided by the SXA team underneath the items folder, without even needing to publish afterwards.

10. Why is there no need to publish? Because you copy the resource file for the web database as well - the items are already in the web database. Of course, all the items created by you will still need to be published.

11. But why did Sitecore introduce this at all?
The main reason is to simplify the platform version upgrade process; the way an update is done has changed.
You may have noticed that the "Upgrade options" section on the Sitecore download page has changed: instead of Sitecore Update Packages, you now get the Sitecore UpdateApp Tool, which operates against each specific version you want to upgrade from. This tool removes the default items of each particular legacy version and replaces them with the resource files. Of course, it also updates the schema with the changes, as was available before.

12. The bigger reason for this change was the "think containers - think ahead" approach. With this change it becomes easier to upgrade a version of Sitecore running in containers: everything in the database is now entirely the user's custom data and can be copied wholesale to a newer database, while the version-specific, system-related items get updated by simply substituting a resource file.

13. Also, you may have heard that Fast Query has been deprecated. This is the exact reason why: if something isn't in the database, Sitecore cannot efficiently build the graph of relationships for a fast query.


14. What is that data provider mentioned above?
It is a new one called CompositeDataProvider, which is inherited by DefaultDataProvider. The name "composite" implies that it takes care of merging items for the Sitecore tree from both the database and the resources. In the configuration you specify it for an individual database under the <database> section; you can also change the location of such resources by patching the <filePath> node of <protobufItems> and overriding the location.
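
For illustration only, a hypothetical patch sketch built purely from the node names mentioned above - I have not lifted this from a shipped config, so verify the exact structure against the showconfig output of your version before use:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <databases>
      <database id="master">
        <dataProviders>
          <dataProvider>
            <protobufItems>
              <filePath>c:\custom\resources\master</filePath>
            </protobufItems>
          </dataProvider>
        </dataProviders>
      </database>
    </databases>
  </sitecore>
</configuration>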

15. For the end consumer of the DataProvider (the high level of the stack) nothing changes, as their code still uses DefaultDataProvider. The changes occur at the intermediate level, and those happen to be internal to Sitecore.

16. That actually opens up much wider potential for creating intermediate-level providers for things other than Protobuf and SQL databases: CRMs, DAMs, maybe some other headless CMSs. In any case, this is a very important and greatly welcomed step ahead for the platform!

Update: there is another great blog post from my MVP colleague Jeremy Davis on the same topic, where he also drills into these resource files with the protobuf-net library.

I have won a Sitecore Technology MVP 2021 award!

This year I am celebrating my fifth year in a row as a Sitecore MVP! I am very excited to announce that I have been named a Most Valuable Professional (MVP) by Sitecore for 2021 - the most prestigious award in the whole Sitecore ecosystem!

As a Technology MVP, I am one of only 170 Sitecore professionals worldwide awarded the MVP title in this category. It really means a lot to me to be part of such a great community and to be able to contribute to sharing knowledge within it.


Sharing Sitecore Identity Server between two independent instances of Sitecore

Imagine a case where you need to have two Sitecore instances running in parallel, next to each other. That may be caused by several legitimate reasons.

WHY?

For example, in my case I am moving (by reworking, not just migrating) some functional areas from a legacy instance to a new one that features SXA. The legacy instance has been passed from hand to hand, accumulating numerous configuration artifacts, with limited maintenance options, so it has become next to impossible to combine it with brand-new SXA functionality under the same roof. I know that is doable in principle (and I have done it myself before), but the amount of maintenance and the lack of knowledge and documentation on the existing codebase make that route inappropriately risky and unacceptable. Therefore, it is reasonable to keep both in isolation, uniting them only at the URL level (by rewriting a (sub)domain of the new instance into a folder of the primary domain).

Things you have to consider in that case: you will almost double your infrastructure and the related expense, and you should check whether your Sitecore licence permits this. I am currently doing quite an unusual setup where both of the above concerns give me a green light to go ahead, so I am OK to run both instances in parallel (as an on-prem solution).

Once that is agreed, the next thought goes to the Identity Server: keeping two instances of it for the same activity does not make much sense. Maintaining both is exhausting, but the good news is that one can reuse an existing ID Server for any number of instances (namely, CM boxes). It comes down to a few extra steps, and below I will show you how:

HOW?

Let's assume we have two instances, called old and new. The old one has all the bits configured and running, so we only want to reuse the old instance's ID Server with the new instance.

1. Get rid of the ID Server of the new instance (you can stop its web app and app pool for now). That makes sure it is not used.

2. Find the Sitecore.Owin.Authentication.IdentityServer.config file on the new instance (App_Config\Sitecore\Owin.Authentication.IdentityServer.config) and change the identityServerAuthority variable to point to the existing ID Server:

    <sc.variable name="identityServerAuthority" value="https://old.identityserver" />


3. Now the CM of the new instance knows which ID Server to talk to, but will the ID Server accept those calls? The answer is no, unless you explicitly permit it to do so. Navigate to the Config\production folder of the old instance's ID Server and add an additional allowed CORS origins group into the Sitecore.IdentityServer.Host.xml file. You will end up with something like this:

<AllowedCorsOrigins>
  <AllowedCorsOriginsGroup1>https://old</AllowedCorsOriginsGroup1>
  <AllowedCorsOriginsGroup2>https://new</AllowedCorsOriginsGroup2>
</AllowedCorsOrigins>


4. There is also the Identity Server secret stored further down in the same XML file, with a matching counterpart in App_Config\ConnectionStrings.config, so you also need to update the new instance's config with the value from the shared Identity Server:

<add name="sitecoreidentity.secret" connectionString="SECRET_from_ID_Server" />


5. Finally, recycle the Identity Server application pool (a quick way to do this is sketched below), and then you're OK to test it. To make things more visual, I've also recorded all the steps and the testing as a video.
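
The recycle in step 5 can be scripted too - a minimal sketch, assuming the IIS app pool is named after the site (adjust the name to yours):

# requires the IIS WebAdministration module, available on the server
Import-Module WebAdministration
Restart-WebAppPool -Name "old.identityserver"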


Things to consider: as you're reusing the old instance's existing Identity Server, it will itself reuse all its assets. When it comes to Active Directory, that brings the desired result; but as for internal users (those you normally have in the sitecore domain), they will all get reused as well, including admin. This happens because the ID Server holds a reference to the core database (or a security database extracted from the core), and that one also belongs to the old instance by default.

Hope you find this helpful!

What is a Reverse Proxy and what do you need one for?

There is a variety of Reverse Proxy solutions on the market - you may have already heard of some - and the major cloud providers also offer their own proprietary solutions.


But what is a Reverse Proxy? And why "reverse"?
As Wikipedia puts it, it is a common type of proxy server that is accessible from the public network. Large websites and content delivery networks use reverse proxies - together with other techniques - to balance the load between internal servers. Reverse proxies can keep a cache of static content, which further reduces the load on those internal servers and the internal network. It is also common for reverse proxies to add features such as compression or TLS encryption to the communication channel between the client and the reverse proxy.

Reverse proxies are typically owned or managed by the web service, and they are accessed by clients from the public internet. In contrast, a forward proxy is typically managed by a client (or their company) who is normally restricted to a private, internal network. The client can, however, access the forward proxy, which then retrieves resources from the public internet on behalf of the client. Here's a reverse proxy in action, from a very high level:
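
To make the idea concrete, here is a deliberately naive toy reverse proxy in PowerShell - a sketch only, nowhere near production-grade (the listener prefix requires running elevated; the backend address and ports are placeholders):

$backend  = "http://localhost:8080"            # internal server hidden behind the proxy
$listener = New-Object System.Net.HttpListener
$listener.Prefixes.Add("http://+:9090/")       # the public-facing address
$listener.Start()

while ($true) {
    $ctx = $listener.GetContext()              # wait for an incoming client request
    try {
        # fetch the resource from the backend on the client's behalf
        $resp  = Invoke-WebRequest -Uri ($backend + $ctx.Request.Url.PathAndQuery) -UseBasicParsing
        $bytes = $resp.RawContentStream.ToArray()
        $ctx.Response.StatusCode = $resp.StatusCode
        $ctx.Response.OutputStream.Write($bytes, 0, $bytes.Length)
    } catch {
        $ctx.Response.StatusCode = 502         # backend unreachable: reply Bad Gateway
    }
    $ctx.Response.Close()
}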


What are typical scenarios for using a Reverse Proxy?

1. SSL Offload. Let's assume we've got a website that works over HTTP only, and for some reason (legacy code, long-gone developers, being unable to bring changes into a running solution that may be huge, or anything else) it is not possible to change the website itself - the "if it works, don't touch it" paradigm in action. For compliance, we must add HTTPS support to that website.
With a Reverse Proxy this becomes a really quick and easy job - we don't need developers at all. All we need is to ask our Ops professional to stand up a proxy server with SSL termination (obviously, we'll also need SSL certificates for the domain hostname(s) of the given website). Job done!


2. Load Balancer. Next, we want to scale that website horizontally, so we deploy two identical client-facing copies of it. How do we "split" the traffic so that it is distributed equally to both sites? In this case we introduce a proxy server functioning as a Load Balancer.
But what if one of the websites dies or crashes halfway down the road? The Load Balancer somehow needs to know that each of the "boxes" is functioning well, and must react to outages by re-distributing traffic to the machines that remain healthy. This is traditionally implemented by "pinging" a so-called "health check" URL on each particular box. As soon as one of the health checks keeps failing, an alert is raised and traffic is no longer routed to the faulty box (be careful with sticky sessions!).



3. Cybersecurity enforcement. By sending specially formed packets, hackers can mount a Denial-of-Service attack, where sending a request costs many times less than serving it. At some point your servers won't cope with this parasitic workload and will fail.
To prevent that, dangerous traffic should never reach your servers; it should be filtered out at the proxy - namely, by a firewall with an adequate rule set that filters out all anomalous patterns, raises alerts, and lets legitimate requests through.


4. Caching and compressing. Even with purely legitimate traffic beyond the proxy, one may still face a heavy request load. How come? Well, there may be different reasons: usage patterns where all users navigate to the same heavily loaded area of a website, or perhaps the website was written by junior developers who did not care enough about how optimally it functions once deployed. Regardless of the reason, we can still soften things up by identifying some of the popular endpoints that consume much of the server's resources and caching them right at the proxy level. Assuming that a given set of parameters always returns the same result, there is no longer any need to spend expensive server resources producing results we have already obtained in the past - those requests are effectively served from the proxy's cache and never reach the servers at all. If this traffic must reach the end servers, we can at least compress and encode the "last mile" beyond the proxy.


5. Smooth automated deployments. Why not? Have you ever heard of Blue-Green Deployments? With that in action, end users won't even realize that you're upgrading the solution while they're browsing your site.


6. A/B Testing. Following on from the previous point, it may be the case that you've updated some but not all of the end servers. You do not want to update them all; instead, you'd like to perform A/B testing on both sets and, based on the result, decide whether to complete the update or roll back to the most recent version. This is a perfectly valid scenario that a reverse proxy can handle for you.


7. URL and Links Rewriting. What if you have a legacy website that functions perfectly well, but, similar to scenario (1), there is no way (and no need) to maintain it? The development team has gone, and in any case there is no single reason to invest a lot into something to be dismissed at some stage. At the same time, you've got other website(s) that could either be successor(s) to the legacy one, or some additional areas written in isolation with more modern tools, and thus either incompatible or expensive to merge with the existing solution. However, the business wants everything to function under the same main domain name, just in different "folders" under it, so that end users (and search robots!) see no difference between the constituent parts and naturally experience them as one single, solid website.
Achieving that is also possible with a Reverse Proxy, by rewriting URLs. Note that it is not just external requests coming to site.com/company1 that get rewritten to www.company1.com: all the internal URLs within all the requested pages need to be rewritten as well. Also note that this only becomes possible in conjunction with SSL offload; otherwise the traffic stays encrypted and the proxy would have to act as "a man in the middle".

Not just that - some 6 years ago I wrote a walkthrough on how one can achieve that same result purely and entirely by means of IIS on Windows.


Conclusion.
This article gives a high-level explanation of Reverse Proxies and their primary features. It intentionally does not focus on any specific implementation, avoiding deeper technical details.

In real-world solutions you will, of course, meet Reverse Proxy products that implement several of these features combined. This may be a typical workflow for processing inbound traffic with a Reverse Proxy:



In any case, I hope you found this helpful!

Merry Christmas and happy New Year!

Every year I create a special Christmas postcard to congratulate my readers on the oncoming new year, full of changes and opportunities. I wish you all the best in 2021!

My artwork for the past years:
2020


2019


2018


2017


2016


The easiest way of installing Solr cores for SXA search

A few years ago I wrote a walkthrough on setting up search for an SXA-based website. However, it does not cover one important thing: out of the box, SXA uses two indexes of its own. Following the good practice of "one index - one core", it is assumed you create an additional core for each of those indexes.

It is also assumed that you follow the naming principle used for the existing Sitecore Solr cores, naming them <INSTANCEPREFIX>_sxa_master_index and <INSTANCEPREFIX>_sxa_web_index. But how do you create those cores?
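
For the curious, this is roughly what creating the cores boils down to if you script it by hand - a sketch under assumptions (local Solr at http://localhost:8983, an instance prefix of "sc", and cloning the config of an existing Sitecore core as a starting point; all values are placeholders):

$solr = "http://localhost:8983/solr"
$root = "C:\solr\server\solr"                  # Solr home containing the existing cores

foreach ($core in "sc_sxa_master_index", "sc_sxa_web_index") {
    # create the core folder and clone an existing Sitecore core's config into it
    New-Item -ItemType Directory -Path "$root\$core" | Out-Null
    Copy-Item "$root\sc_master_index\conf" -Destination "$root\$core" -Recurse

    # register the new core via Solr's CoreAdmin API
    Invoke-WebRequest "$solr/admin/cores?action=CREATE&name=$core&instanceDir=$core" -UseBasicParsing
}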

An even easier way is to use Sifon and its existing plugin for creating SXA Solr cores. All you need to do is update the plugins from the public repository by executing the "Plugins" - "Get Sifon plugins" command.

Once updated, you will see the corresponding plugin in the menu:


And that's it! After a bit of waiting, Sifon will complete the installation of both new cores into the Solr instance referenced by the selected profile. You'll see the confirmation:


As it prompts, the cores have been created at the named location and the managed schema has also been published, but you still need to rebuild these new indexes. Sifon also has a plugin for that (the fastest way, but SPE Remoting must be enabled for that instance):


Alternatively, you can use the Control Panel as you normally do:


Confirmation:


Hope this helps you enjoy search with SXA, which takes it to a whole new level!

Cannot use a Hyper-V VHDX disk image that was shared with you? Here's how to manage it properly

Cannot use a Hyper-V VHDX disk image that was shared with you? It could be a colleague's shared VM or a disk exported from an Azure VM. Below I describe the steps required to manage it properly.

The first thing one is likely to try is either importing this disk, or creating a new VM without a disk and attaching it later. Although that seems absolutely valid, it will end up with an error:

If you see the above error, it most likely means you've created a Generation 2 Hyper-V VM and attached your drive to it. That won't work - but why?

To answer this question, let's define the difference between the two generations. The main fact about Generation 1 is that it emulates hardware: all required hardware components are emulated to make the virtual machine work! Special software that imitates the behaviour of real hardware is included in Hyper-V, so the VM can operate with virtual devices. The emulated hardware (which behaves identically to real hardware) includes drivers for most operating systems, in order to provide high compatibility.

With Generation 1, the emulation performs far worse than native equipment and has numerous limitations. Among those:

  • a legacy network adapter
  • IDE controllers, with only 2 devices attachable to each
  • MBR, with a max disk size of 2 TB and no more than 4 partitions.

Generation 2 uses:

  • UEFI instead of a legacy BIOS
  • GPT support (without the legacy size and partition limits) and Secure Boot
  • because of the above, VMs boot (and operate) much faster
  • fewer legacy devices, with new, faster synthetic hardware used instead
  • better CPU and RAM consumption

UEFI is not just a replacement for the BIOS; it extends support for devices and features, including GPT (the GUID Partition Table).

Secure Boot is a feature that protects against the modification of boot loaders and main system files; it works by comparing digital signatures against those trusted by the OEM.


The solution is quite simple.

What you need to do is create a Generation 1 VM (without creating a new disk drive) and reference the existing VHDX file. That will work, and the OS will load (slowly, but it will). The laziest would probably stop at this stage, but let's progress ahead and convert it to a Generation 2 VM. But how?

All you need to do is convert the legacy BIOS/MBR layout to UEFI/GPT by running the following command inside the Generation 1 VM:

mbr2gpt.exe /convert /allowFullOS

Please pay attention to the warning: from now on, you need to boot in UEFI mode.

Now you can turn off the Windows VM, delete the existing Generation 1 VM from Hyper-V Manager (but not the disk drive!), and then create a new Generation 2 VM referencing the converted VHDX disk image. Also make sure the attached hard drive comes first in the boot sequence (the Firmware tab); you may also want to uncheck the Enable Secure Boot switch on the Security tab.
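
If you prefer scripting these steps, here is a sketch using the Hyper-V PowerShell module (VM names, memory size, and paths are placeholders):

# Step 1: a Generation 1 VM wrapped around the shared VHDX, just to boot it
New-VM -Name "SharedDisk-Gen1" -Generation 1 -MemoryStartupBytes 4GB -VHDPath "C:\VMs\shared-disk.vhdx"

# ...run mbr2gpt inside the guest, shut it down, then recreate as Generation 2
Remove-VM -Name "SharedDisk-Gen1" -Force       # removes the VM, keeps the .vhdx file
New-VM -Name "SharedDisk-Gen2" -Generation 2 -MemoryStartupBytes 4GB -VHDPath "C:\VMs\shared-disk.vhdx"
Set-VMFirmware -VMName "SharedDisk-Gen2" -EnableSecureBoot Off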

Now you can successfully start your received hard drive with a fast and reliable Generation 2 VM that manages your host machine's resources in a much more reliable and savvy way!