Let’s say you have a webshop. Since it’s a “web”shop, it has to run on a machine that’s connected to the Internet. I myself had a computer in my closet to host my website at some point, but let’s face it: that’s not really what you would call future-proof. When the business becomes big enough, we want to run our money-making software on something that’s operationally secure.
In the “old days” (five years ago) we used to run these things very differently. Most companies would buy or rent an expensive piece of hardware called a “server” and put it in a data center. The system administrator would then install whatever was needed to run the webshop software — for example, the PHP or Java runtime — and a database too.
The developer would then go ahead and deploy the webshop software on the server. This approach, while workable, often led to conflicts between the ops and development teams: developers would deploy at 5 PM on a Friday, go home for the weekend, and leave ops to deal with the aftermath at 2 AM.
Not to mention that servers are incredibly expensive, so building a separate test environment was a costly undertaking. On the other hand, deploying something broken and leaving the webshop offline for half a day was an equally unattractive proposition.
Long story short, the most successful approach to dealing with this issue is “containers”. Containers are radically different from the approach we had until now. When building a container, we take the runtime (e.g. PHP or Java) and pack it together with the application code (i.e. the webshop software). This container image can then be used in a development environment but can also be shipped without modifications to the production system. This has numerous advantages, ranging from the ease of managing the different runtime versions to the fact that you can now actually test the application in isolation and you don’t need an expensive hardware setup to do so.
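To make this concrete, here is a minimal sketch of how such a container image might be described — the image name, paths, and PHP version are illustrative assumptions, not from a real project:

```dockerfile
# Start from an official PHP runtime image (runtime packed with the app).
FROM php:8.2-apache

# Copy the hypothetical webshop code into the web server's document root.
COPY ./webshop/ /var/www/html/

# The web server inside the container listens on port 80.
EXPOSE 80
```

The same image built from this file can then run unchanged on a developer's laptop and in production, which is exactly the advantage described above.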
The downside, of course, is that not every application plays nice with containers. Older applications are difficult to operate in this manner as they often require manual steps to install and cannot be automated in this fashion.
When discussing the production environment, we have nonchalantly glossed over how exactly it is run. A container isn’t some magic technology with pixie dust; it has the same issues as a traditional VM: if the host goes down, so does the container.
Orchestration systems like Kubernetes solve this problem by managing where the container runs. If you have multiple servers and one of them goes down, Kubernetes can restart the container on a different machine.
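As an illustration, this is roughly what such a declaration looks like — a minimal, assumed Kubernetes Deployment for our hypothetical webshop (names and image are made up for the example):

```yaml
# Tell Kubernetes to always keep two copies of the webshop running.
# If a host dies, the missing copy is restarted on another machine.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webshop
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webshop
  template:
    metadata:
      labels:
        app: webshop
    spec:
      containers:
        - name: webshop
          image: example.com/webshop:1.0   # hypothetical image
          ports:
            - containerPort: 80
```

The key point is that you declare the desired state (two running copies) and Kubernetes continuously works to maintain it, rather than you manually restarting anything.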
This is, however, not like the failover setup you see with virtual machine systems such as VMware. The same container is not moved; a new copy of it is started. This behavior is mainly the result of engineering trade-offs, but it can be very efficient if used correctly.
Always recreating the container presents a problem: what about the data? Some services, such as databases, need to store data on disk. The way Kubernetes handles containers means that when a container is moved to a different host, the data is not moved with it. Unless… unless you use something called persistent volumes.
Persistent volumes make it possible to reattach a storage volume to a container when it is moved to a different host. To do that, however, they need a network-based storage system. This can be either a network block device or a shared filesystem like GlusterFS, which is offered as the standard storage solution with Red Hat OpenShift, an enterprise-grade Kubernetes distribution that includes many features a plain, “naked” Kubernetes doesn’t.
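A sketch of what this looks like in practice, using assumed names and sizes — a claim for storage, and a database container that mounts it:

```yaml
# Request 10 GiB of network-backed storage from the cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webshop-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# A database pod that mounts the claimed volume at its data directory.
# If the pod is recreated on another host, the volume is reattached there.
apiVersion: v1
kind: Pod
metadata:
  name: webshop-db
spec:
  containers:
    - name: db
      image: mysql:8
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: webshop-db-data
```

Because the volume lives on network storage rather than the host's local disk, the database's data survives the container being recreated elsewhere.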
Kubernetes is hard
Suffice it to say, Kubernetes isn’t a next-next-next-finish kind of deal. Deploying Kubernetes is hard, requires experienced engineers, and even then it presents a steep learning curve for many companies.
Cloud providers that offer ready-to-go, managed Kubernetes help with the deployment, but I’ve spoken to many developers using Kubernetes who were struggling. Instead of the old way, where they could simply log in to a server and take a look at what was wrong, they now didn’t even know how to start dealing with issues.
From these conversations I gathered that, simply put, Kubernetes has steamrolled most of the IT world, and while we like our shiny new toys, it is very much at the peak of the Gartner hype cycle. We as the IT community are lacking a significant amount of know-how and tribal knowledge in running these systems smoothly. Over the coming months and years, some projects will have to learn the hard way what Kubernetes is useful for and where it is overkill.
Should you use it?
As any good consultant would say, “it depends”.
If you have a cloud provider that takes care of the deployment, it may be a good idea, but your developers have to be ready for it. They have to practice, they need to know how to build stable and secure containers, set up monitoring, etc.
If you want to deploy Kubernetes yourself, be prepared to hire somewhere between two and five extra people to maintain that deployment.
You may be expecting me at this point to sell you on some fancy service or product that makes ops so much easier, but I won’t. Ops is hard, and as an IT manager, I think you should keep the training and know-how of your team in mind when you make these decisions.
If you, together with your team, decide that you are in the market for a European infrastructure cloud, I’ll happily sell you a solution for that. But you’ll have to make that decision yourself.