
Experience Sitecore !

More than 200 articles about the best DXP by Martin Miles

Sitecore 10 .NET Fundamental Developer Certification Exam is now available

Many of you have been asking about the availability of the version 10 developer exam, and it is finally here!

You may book it on the same page as all the previous exams, and it is the same proctored format. The price, however, is slightly higher: $350 as opposed to $300 for the version 9 exams. For some European countries, there is also something called "estimated tax" - it is not explained which tax exactly it is or how it is calculated. For example, buying the exam from the UK sets this tax to $70, totaling $420.

Metrics: you'll be given 100 minutes and the same number of questions to answer as before - 50. The pass mark is 80%.

So, what to expect?

Firstly, I must acknowledge the quality - the test has been reworked much for the better! One thing I really disliked about the previous versions was the large code snippets, with the challenge of picking the correct one. Ironically, these snippets were all related to the item access API - something very few people use directly nowadays because of ORMs like Glass.

Secondly, you should expect questions on the new stuff, such as containers and the CLI. In addition, test takers must demonstrate knowledge of the familiar existing competencies: security and roles/users management, layouts, placeholders, components, controls, renderings, item management, and similar. Obviously, no SXA- or JSS-related questions are expected, as this is a Fundamentals exam.

Thirdly, I want to point out the incorrect answer options - they have become more realistic and therefore more confusing. Even with all my experience with the platform, I managed to miss a few by not being 100% attentive. So do read the questions and answers very carefully and pay attention to the details. Two minutes per question is a decent amount of time for doing that.

General feedback: the exam has changed for the better, but it is very unlikely you will pass it without actual hands-on experience with all the expected competencies. With an 80% pass mark on 50 questions, you can answer at most 10 questions incorrectly. Imagine a person who is proficient in version 9 but has never worked with version 10: they are likely to miss more than 10 questions on containers and the CLI alone, so even if all the remaining answers are correct, the result is still a fail. On the other hand, the required level of experience is not that high, so anyone working with the latest features should pass without any doubt.

I wish you good luck with the Sitecore 10 .NET Fundamental Developer Certification Exam!

Things beginners get wrong about Kubernetes

When starting to play with Kubernetes, one may fall for one of the biggest misconceptions: assuming that K8s will work the same way in production as it does in a development or testing environment.

But it won't!

When it comes to containers in general and Kubernetes specifically, there is a big difference between occasional runs under lab-like conditions and a full production lifecycle. It is similar to the difference between merely starting an app and running it long-term with full security and reliability enabled.

This is not a Kubernetes-exclusive problem; it holds for the entire variety of containers and microservices. Spinning up a container is a relatively simple task, while scaling containerized microservices in production turns out to be much more complicated.

Although Kubernetes has alternatives, it has quickly become the de facto standard for orchestration. However, there is a big difference between launching K8s in a sandbox and running it in a full production environment.



Misconception #1. Running containers with Kubernetes in a development or testing environment ensures that your operational needs will be satisfied.

The truth: running Kubernetes in a development or testing environment lets you cut corners, simplify things, and not bother with the operational load you face when going live to production. Operations and security considerations are the major areas of difference between K8s running in production and in development/testing environments. A cluster failing under lab conditions does not cause any losses.

To me it looks like a trade-off between agility and reliability: developers use containers to gain flexibility while developing and testing their code, whereas operations need the reliability, scaling, performance and security of a sustainable, industry-proven platform. Ops teams look for deployment automation for the clusters to ensure repeatability and consistency; it also helps when restoring the system.

Versioning is also critical for operations. As far as possible, enable versioning everywhere, including service deployment configuration, policies and infrastructure (applying the infrastructure-as-code approach). That makes environments repeatable. As a good practice, avoid the "latest" image tag in order to avoid configuration drift - see the sketch below.
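
To illustrate, here is a minimal deployment manifest sketch with a pinned image version; the service name, registry and tag are hypothetical:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-api                               # hypothetical service
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: orders-api
      template:
        metadata:
          labels:
            app: orders-api
        spec:
          containers:
            - name: orders-api
              # pin an explicit, versioned tag instead of "latest"
              image: registry.example.com/orders-api:1.4.2

Because the tag is explicit, re-deploying the same manifest always produces the same environment, and the manifest itself can live in version control alongside the rest of the infrastructure code.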


Misconception #2. Kubernetes provides both reliability and security out of the box

In reality: if you have only used Kubernetes in non-production environments, it is very unlikely that reliability and security are covered, at least initially. Do not get discouraged, you will get there: it is a matter of designing the architecture before switching to production.

Obviously, the performance, scaling, availability and security requirements are much higher in production environments. It is important to plan for these requirements in the architecture of your K8s deployment, and to build scaling and security plans into Helm charts and similar artifacts, as in the sketch below.
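
As a minimal sketch of what such a chart could expose - assuming a hypothetical chart that follows the common conventions of scaffolded Helm charts - scaling and security settings can be surfaced as values:

    # values.yaml (hypothetical chart, conventional key names)
    replicaCount: 3

    autoscaling:
      enabled: true                        # the chart renders a HorizontalPodAutoscaler
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70

    securityContext:                       # container-level security settings
      runAsNonRoot: true
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false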

But how can running a cluster in dev/testing environments lead to false confidence?

It is common for non-production environments to have all network connections open. There it is acceptable that any service can call any other service: open connections are the Kubernetes default. However, such an approach is bad practice for production environments and can lead to downtime. It also exposes a larger attack surface and increases the risk to the business.
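
Closing this down usually starts with a default-deny policy; here is a minimal sketch (the namespace name is hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: shop                      # hypothetical namespace
    spec:
      podSelector: {}                      # applies to every pod in the namespace
      policyTypes:
        - Ingress                          # no ingress rules listed, so all inbound traffic is denied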

When it comes to containers and microservices, building a highly available and reliable system takes a bigger effort. Orchestration itself helps a lot, but it is not a "silver bullet", and the same applies to security. You will have to work hard to protect Kubernetes and reduce the attack surface. It is very important to use RBAC with minimal privileges and to enforce network policies, leaving open only those channels the services actually use - see the sketch below.
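
Here is a hedged sketch of both ideas, with hypothetical names throughout: a namespaced Role granting a service account read access only to the object it needs, and a network policy whitelisting a single caller:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: orders-api-reader
      namespace: shop
    rules:
      - apiGroups: [""]
        resources: ["configmaps"]
        resourceNames: ["orders-api-config"]   # only the object this service actually needs
        verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: orders-api-reader-binding
      namespace: shop
    subjects:
      - kind: ServiceAccount
        name: orders-api
        namespace: shop
    roleRef:
      kind: Role
      name: orders-api-reader
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-orders
      namespace: shop
    spec:
      podSelector:
        matchLabels:
          app: orders-api
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend              # the only channel left open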

Vulnerabilities in container images can also rapidly turn operations into a critical state, while in development/testing environments this danger may be absent altogether. Pay attention to the base images used for building your containers: as far as possible, use trusted official images or build your own. The last thing you want your Kubernetes cluster doing is helping someone mine crypto coins.

It is recommended to treat container security as a ten-layer system covering the container stack (the host, registries and so on), as well as concerns related to the container life cycle (for example, API management).


Misconception #3. Orchestration makes scaling a formality

Although Kubernetes is considered an essential tool for scaling containers, it is a misconception to think that orchestration immediately sorts out the scaling needs of a production environment. The volume of data in live environments is many times larger, and keep in mind that monitoring may need scaling too. With increasing volumes, everything changes.

It is impossible to be sure that all K8s components implement their interfaces correctly until you spin up production: that Kubernetes is "working normally", and that the API server and the other managed components scale according to your needs.

As I said, development and testing environments are much more forgiving. In local environments it is easy to skip basics like defining the right resource requests and limits. Skipping them can bring your production down later - see the sketch after this paragraph.
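
A minimal sketch of such requests and limits on a single pod (the name, image and numbers are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.4.2
          resources:
            requests:                      # what the scheduler reserves for the pod
              cpu: "250m"
              memory: "256Mi"
            limits:                        # hard caps; exceeding the memory limit gets the container OOM-killed
              cpu: "500m"
              memory: "512Mi"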

Scaling the cluster in both directions is a good example of a task that is easy locally but clearly complicated in production: scaling production clusters is far more difficult than scaling development/testing ones.

While Kubernetes makes horizontal scaling relatively simple, DevOps still needs to keep some nuances in mind, especially when it comes to keeping services alive while scaling the infrastructure. It is crucial to ensure that the main services, as well as system monitoring and security alerting, are distributed across the cluster nodes and work with stateful volumes, so that data is not lost when scaling down. A sketch of the distribution part follows.
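
One way to express it, sketched with hypothetical names: spread the replicas across nodes and keep a minimum number of them available during node drains and scale-downs:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-api
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: orders-api
      template:
        metadata:
          labels:
            app: orders-api
        spec:
          # spread replicas across nodes so losing one node does not take the service down
          topologySpreadConstraints:
            - maxSkew: 1
              topologyKey: kubernetes.io/hostname
              whenUnsatisfiable: ScheduleAnyway
              labelSelector:
                matchLabels:
                  app: orders-api
          containers:
            - name: orders-api
              image: registry.example.com/orders-api:1.4.2
    ---
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: orders-api-pdb
    spec:
      minAvailable: 2                      # keep at least two replicas up during drains / scale-down
      selector:
        matchLabels:
          app: orders-api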

Again, it all comes down to proper planning and the resources available. You need not only to understand your scaling needs when planning but, most importantly, to test them. Your production environment must be capable of handling much higher loads.


Misconception #4. Kubernetes works the same everywhere

In reality: many believe that if K8s works locally, it will work in any operational environment. The truth is that there may be serious differences depending on the vendor, comparable to the differences between running Kubernetes on a developer's laptop and on a production server.

Local environments commonly lack important components required by production: monitoring, logging, certificate management and credential management. Keep that in mind, as it is another problem arising from the difference between production and development/testing environments.

However, this is not exclusive to Kubernetes; it applies to containers/microservices in general, especially in multicloud and hybrid cloud setups. Such Kubernetes implementations are more complicated than they initially seem, as many of the mandatory services, such as load balancing and firewalls, are proprietary. A container that works well locally may run unprotected (or may not start at all) in a cloud with a different set of tools. That is why service mesh technologies like Istio attract so much attention: they take care of availability and security wherever your container runs, so you do not need to think about the infrastructure - which is the main reason for using containers in the first place.
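
As one hedged example of what a mesh gives you regardless of the environment, Istio can enforce mutual TLS between all workloads in a namespace with a single resource (the namespace name is hypothetical):

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: shop                      # hypothetical namespace
    spec:
      mtls:
        mode: STRICT                       # only mTLS traffic between sidecar-injected workloads is accepted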

Keeping the above in mind, I hope you can build safer and more reliable production environments with Kubernetes!