Table of contents
- Common issues in software development, deployment and hosting
- Containers to the rescue!
- Containers vs. virtual machines
- Image and layers
- Further reading
Watch the video accompanying this article below, which includes demos of these concepts. Or watch it on Master Sitecore on YouTube.
Common issues in software development, deployment and hosting
- Have you ever had the issue getting a new project set up on your machine with all the required dependencies?
- How about then managing those dependencies throughout all your non-production and production environments?
- What happens when you need to upgrade one or more of those dependencies?
- Have you ever mistakenly developed against one version of a dependency only to discover a different version is live in production?
- How about having a team with developers who want to run different operating systems (OS)?
The problem here is that you’re deploying just the application on its own and relying on the host environment to provide the dependencies. In reality, the host should be responsible for the OS and not much else.
What all of these problems have in common are issues of consistency, isolation, and reproducibility. These are the problems that Docker, and containers in general, were created to solve.
Containers to the rescue!
When you deploy a container, you’re actually deploying everything that is required to run that application, inside of a single artifact.
As you can see in the image, you’re not just packaging up your application but also its dependencies and libraries, its configuration assets, and anything else that it needs to run. All wrapped up nicely in a single asset that you can move between your different environments.
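To make this concrete, the packaging step is typically described in a Dockerfile. The sketch below is illustrative only — the base image, paths, and application name are assumptions, not taken from this article:

```dockerfile
# Hedged sketch of packaging an app, its dependencies, and its
# configuration into a single image. Names and paths are illustrative.

# Base layer: the OS and runtime come from an existing image
FROM mcr.microsoft.com/dotnet/aspnet:8.0

WORKDIR /app

# Your compiled application and its libraries
COPY ./publish/ .

# Its configuration assets travel with it
COPY appsettings.json .

# How the container starts the application
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Building this with `docker build` produces the single artifact described above, which you can then move unchanged between environments.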
Containers vs. virtual machines
One common question is how a container differs from a virtual machine. They have some overlap in the functionality they provide, but how they achieve it is fundamentally different.
If you look at the virtual machine diagram on the left, each of the green boxes represents an entire OS. This means there are four complete operating systems present on that host infrastructure and this can place a heavy load on the system resources and disk space required.
Compare that with the container-based approach on the right. Here, the host operating system’s kernel and resources are shared directly by the containers through the Docker daemon. This means that containers are far more lightweight than virtual machines, so you can run far more of them on the same amount of hardware.
Image and layers
Now, the containers seen in the previous example will have been created from an image. An image is a read-only template with instructions for creating a Docker container. Images themselves are built on the concept of layers, each layer building upon the layer below it. This makes it very simple to customize existing images to build out your own functionality.
If you look at the image above, you can see that the Sitecore® Experience Platform™ (XP) image is built on top of the 4.8-windowsservercore-ltsc2019 image. We take the Windows Server Core image released by Microsoft and we add a custom layer on top of it to include the Sitecore code. This could be an image to build a Content Management (CM) role, or Content Delivery (CD) role, but the concept remains the same.
We can also then take that Sitecore XP image and build on top of it to provide further functionality. The middle image shown above takes the XP image and then layers Sitecore PowerShell Extensions (SPE) on top. The diagram on the right takes this a step further, using the SPE image we just created and adding a layer for Sitecore Experience Accelerator (SXA) on top of that.
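In Dockerfile terms, each of these steps is just a new `FROM` line pointing at the previous image. The sketch below assumes hypothetical image names and asset paths — the official Sitecore images use their own tags and install scripts:

```dockerfile
# Hedged sketch of layering one image on top of another.
# Image names and paths here are illustrative, not official tags.

# Start from an existing Sitecore XP image (hypothetical tag)
FROM my-registry.example.com/sitecore-xp-cm:latest

# Add a new layer containing the SPE module files
COPY ./spe/ C:/inetpub/wwwroot/
```

An SXA image would repeat the same pattern, with its `FROM` line pointing at the SPE image just built. Each `COPY` adds one layer; everything beneath it is reused, not duplicated.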
The powerful concept with the layered approach is that these layers are shared. So, that Windows Server Core image we started out with only needs to exist on disk once, and all three of the images above build on top of it.
You can read a good overview of Docker, its use cases and the application architecture on Docker Docs.
You may be wondering where all of these images come from originally. Well, they are stored in registries, which can be either public or private. The most popular public registry is Docker Hub, where you can find images for almost any modern software you can think of! To bring an image down to your local machine, you would use the docker pull command, e.g.
docker pull mcr.microsoft.com/dotnet/core/samples:dotnetapp
This command will pull down the mcr.microsoft.com/dotnet/core/samples image that has the dotnetapp tag. Once it has been pulled down, you can then use it locally to create containers based on it. To do that, you would use a simple Docker run command, e.g.
docker run mcr.microsoft.com/dotnet/core/samples:dotnetapp
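Once you have containers running, a few more commands cover the basic lifecycle. This is a minimal sketch assuming Docker is installed locally; the container name is one chosen here for illustration:

```shell
# List the containers currently running on this machine
docker ps

# Run the sample detached (in the background) with a friendly name
docker run --detach --name samples-demo mcr.microsoft.com/dotnet/core/samples:dotnetapp

# Stop the container, then remove it when you're done with it
docker stop samples-demo
docker rm samples-demo
```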
You can read more about registries on Docker Docs.
Volumes are another key concept when working with containers. There are many occasions when it is useful to share files between the host and the container. A couple of good examples of this are:
- When working with static assets, you don't want to rebuild the image and recreate your container to see any changes.
- When working with secret files that you might not want to be included in an image, e.g. license files.
Volumes are created to solve this problem — they allow you to designate a folder on the host machine to be shared and available inside the container. This is a powerful feature, as any changes you make to the contents of the volume are instantly reflected inside the container.
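A bind mount is declared with the `-v` flag on `docker run`. The host path, container path, and image below are illustrative — adjust them to your own project (on Windows containers the container path would be a `C:\...` path instead):

```shell
# Share the host's ./assets folder into the container at /app/assets;
# edits on the host appear inside the running container immediately
docker run -v "$(pwd)/assets:/app/assets" mcr.microsoft.com/dotnet/core/samples:dotnetapp
```

The same pattern works for secrets such as license files: mount the folder at run time instead of baking the file into the image.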
You can read more about volumes on Docker Docs.
The final key concept to cover here is networking. One of the reasons Docker containers and services are so powerful is that you can connect them together in an internal network created by the Docker daemon. A good example of this would be a system made up of a .NET Core application with a SQL Server back end. In this scenario, you can stand up two containers, one containing the .NET Core application and the other containing SQL Server, and have Docker create a network between the two of them to allow them to communicate with each other.
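The .NET Core + SQL Server scenario above might look something like the following sketch. The network and container names, the password, and the application image are all illustrative assumptions:

```shell
# Create a user-defined network for the two containers to share
docker network create app-net

# Start SQL Server on that network (example password only; use a real secret)
docker run --detach --name db --network app-net \
  -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Your_password123" \
  mcr.microsoft.com/mssql/server:2022-latest

# Start the application on the same network (hypothetical image name);
# it can reach SQL Server using the hostname "db"
docker run --detach --name web --network app-net my-dotnet-app
```

On a user-defined network, Docker's built-in DNS resolves each container's name, so the application's connection string can simply point at `db` rather than an IP address.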
You can read more about networking on Docker Docs.
The Docker website has a great set of tutorials to get you up and running using Docker for your application development.
Thanks for reading and make sure to follow #LearnSitecore for future content!
Rob Earlam, Technical Evangelist, Sitecore