Kubernetes is a well-known project, especially for deploying and managing containerized apps. Markus Eisele, Red Hat's EMEA Developer Adoption Lead, has some important details for anyone interested in learning about it.
Business development has always been one of the great challenges of software engineering, especially for companies like Red Hat. That is why, over the last decade, we have moved from the classic three-tier architecture to an architecture of highly distributed microservices, which can take advantage of the nearly unlimited infrastructure resources of public cloud providers. In addition, these microservices can specialize in very specific, simple tasks, in contrast to the heavyweight app servers of the past.
These microservices also make more efficient use of the resources they consume, which is another great advantage. In addition, one of the best ways to deploy such apps is through containers, which behave like small virtual machines. The main difference between a VM and a container, however, is that the container does not bundle its own operating system; instead it runs in user space on the host operating system's kernel, much like an ordinary app. This also means greater security.
Not everything is an advantage, though: this architecture requires many containers (one or more per service), so managing and coordinating them can be complex and demand greater effort from the system administrator. This is where Kubernetes enters the scene and makes everything much easier.
Setting up a native environment in Kubernetes
Kubernetes makes life easier for administrators, enabling more automated management of apps and services. To use an analogy, it is like the port authority on a jetty, which allows many ships to move simultaneously within the same space. At first glance, the capabilities of Kubernetes could be compared with those of Java EE, since both run apps on distributed physical hardware. Containers, however, care little about the internal requirements of the app itself.
With Kubernetes you configure a cluster by writing configuration files in text format (mainly YAML, although JSON is also supported). These files contain the parameters or specifications of each object to be managed.
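As a minimal sketch of such a configuration file, the YAML below declares a Deployment object with two replicas. The name `hello-app` and the `nginx` image are illustrative choices, not taken from the article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app          # illustrative name for this example
spec:
  replicas: 2              # desired number of identical pods
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25  # any container image would do here
        ports:
        - containerPort: 80
```

Applying a file like this with `kubectl apply -f deployment.yaml` asks the cluster to converge on the declared state.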
Hardware for local Kubernetes configuration
In order to take advantage of the high scalability and reliability provided by a Kubernetes cluster, developers and administrators must take care to provide the containers with enough resources to run.
If we assume a cluster with two master nodes (2 GB of RAM and 4 cores each) and two worker nodes (1 GB of RAM and 2 cores each), the Kubernetes cluster will need at least 6 GB of RAM and 12 cores. Not all desktop computers can provide such resources, although it is true that this project is not intended for the desktop.
However, there are currently a number of smaller learning environments that let developers work with Kubernetes locally, such as Minikube, MicroK8s, and OpenShift CodeReady Containers. All of them are single-node clusters that fit on a desktop PC and can be installed in a few minutes.
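As an illustration, and assuming Minikube is already installed on the machine, bringing up one of these single-node clusters takes only a couple of commands:

```shell
# Start a local single-node Kubernetes cluster
minikube start

# Verify that the node is up and ready
kubectl get nodes
```

Once the node reports a Ready status, the same `kubectl` workflow used against a production cluster works locally.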
To test a more complex environment or service, you usually have to go to a true Kubernetes cluster. But a tool such as CodeReady Containers can make a developer's life much easier, since it includes the entire toolkit and a single-node installation of a Kubernetes cluster.
Kubernetes-native adoption is a different world
Kubernetes has changed the entire developer experience: the way these services are managed is now completely different and integrated. As a result, Kubernetes adoption has become the next logical step towards simplification for the developer.
Likewise, Kubernetes enables greater flexibility, with help and tools for productive Kubernetes-native development, and exciting new challenges ahead.