  • corebapp editorial team

Five Reasons Why COREBAPP Chose Kubernetes for Its Infrastructure Journey

Updated: Nov 30, 2023



 

Kubernetes (k8s/kube) is an open-source system for managing containers in a distributed environment. It handles the essential tasks of running applications at scale: the application life cycle, service health, and its well-known auto-scaling.


The creator of this sublime piece of software engineering was, of course, Google. Google had been running its services (Mail, Search, Maps, etc.) on containers since around 2006, and along the way built two different in-house container-management systems, Borg and Omega, before Kubernetes emerged.


Here are five reasons that helped the corebapp infrastructure team decide between K8s and Docker Swarm for its underlying systems:


1. Scalability is a corebapp foundation pillar, starting with the infrastructure


Kubernetes is designed to be highly scalable, making it ideal for organizations that need to respond quickly to changes in resource requirements for their infrastructure. With Kubernetes, you can easily scale resources up or down, adding or removing nodes as needed and adjusting the number of replicas running on each node. This level of scalability is critical for DevOps teams, who need to be able to respond quickly and effectively to changes in demand. According to the 2021 Cloud Native Computing Foundation (CNCF) Survey, 91% of Kubernetes users report that it has met or exceeded their expectations for scalability. This is a testament to the power and flexibility of the Kubernetes platform, and highlights why it's the preferred choice for organizations that require robust and scalable container orchestration.


Here are the metrics that we track:


  • Cluster Size: One of the primary scalability metrics for K8s is cluster size, which refers to the number of nodes in a cluster. K8s clusters can be scaled up by adding more nodes, which increases the overall computing capacity of the cluster. For reference, we have used one of the latest reports from Dynatrace.

  • Pod Scaling: Another scalability metric for K8s is pod scaling, which refers to the ability to add or remove pods in response to changing workloads. This allows organizations to scale their applications up and down as needed to meet changing demand. We strongly recommend looking at horizontal pod autoscaling that increases or decreases the number of replicas of a pod in response to changing demand. This is particularly useful for stateless applications, as it allows the application to be scaled out horizontally to accommodate increased demand.

  • Resource Utilization: K8s also provides metrics for resource utilization, such as CPU, memory, and network bandwidth usage. This information can be used to identify performance bottlenecks and to optimize resource utilization.

  • Latency: Latency is another important scalability metric for K8s, as it directly impacts the performance and user experience of applications. K8s provides several mechanisms for reducing latency, such as local storage, low-latency networking, and distributed load balancing.

  • Availability: Availability is a critical scalability metric, as it determines the reliability and resiliency of applications. K8s provides features such as automatic failover, self-healing, and automatic rolling updates, which help to ensure high availability for applications.
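The horizontal pod autoscaling mentioned above follows a documented formula: the HPA controller targets desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal Python sketch of that calculation (the metric values below are made-up illustration numbers, not corebapp's):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Replica count the HPA controller aims for, per the documented
    formula: ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# Example: 4 replicas averaging 80% CPU against a 50% target
# scale out to ceil(4 * 80 / 50) = 7 replicas.
print(desired_replicas(4, 80, 50))  # -> 7

# Under light load the same formula scales back in:
print(desired_replicas(4, 20, 50))  # -> 2
```

In practice the controller also applies tolerances and stabilization windows, so real clusters scale less abruptly than this bare formula suggests.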


2. Automated Rollouts and Rollbacks


Another advantage of Kubernetes is its ability to automatically handle rollouts and rollbacks. This is an essential feature for DevOps teams, who need to be able to deploy new features and updates with confidence. If a new feature doesn't work as expected or causes problems, Kubernetes can automatically roll back to a previous version, minimizing downtime and reducing the risk of data loss. According to the 2021 CNCF Survey, 83% of Kubernetes users report that they use the platform's automated rollout and rollback capabilities.


Automated rollouts and rollbacks are used when:


  1. Updating applications: Automated rollouts and rollbacks are used to update applications in a controlled and predictable manner, reducing the risk of downtime or other disruptive impacts.

  2. Deploying new features: Automated rollouts and rollbacks are used to deploy new features and functionalities in a controlled and predictable manner, reducing the risk of bugs or other issues that could impact application performance.

  3. Managing rollouts: Automated rollouts and rollbacks are used to manage the deployment of new releases, ensuring that applications are always running at optimal performance.
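The scenarios above are typically configured through a Deployment's rolling-update strategy. As an illustrative sketch (the app name and image are hypothetical, not corebapp's actual workloads):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical app name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.2.0  # hypothetical image
```

If the new image misbehaves, `kubectl rollout undo deployment/web` reverts to the previous revision, which is the mechanism behind the low-risk updates described above.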

A successful case study for the use of automated rollouts and rollbacks in K8s is the deployment of the Google Cloud Console. The Google Cloud Console is a web-based interface for managing Google Cloud services, and is critical to the operation of Google's cloud infrastructure. To ensure optimal performance, the Google Cloud Console is deployed using K8s, with automated rollouts and rollbacks used to manage and control updates. This has allowed Google to roll out new features and updates quickly and easily, without impacting the performance or availability of the Google Cloud Console.


3. Resource Management



Kubernetes offers a highly sophisticated and flexible resource management system, making it easy to control the allocation of resources between different services and applications. This level of control is critical for DevOps teams, who need to be able to manage resources effectively and optimize performance. According to the 2021 CNCF Survey, 87% of Kubernetes users report that they use the platform's resource management capabilities. This highlights the importance of resource management in container orchestration, and why Kubernetes is a better choice than Docker Swarm, which offers a more basic resource management system. Here is what you should know:


  1. Resource Quotas: Resource quotas are a way to limit the amount of compute, storage, and network resources that a namespace can consume. This helps to ensure that resources are used in a controlled and predictable manner, and prevents one namespace from consuming all of the available resources.

  2. Limit Ranges: Limit ranges are similar to resource quotas, but are applied at the pod level. Limit ranges allow you to specify the minimum and maximum amount of resources that a pod can consume, helping to ensure that pods are running with the correct amount of resources.

  3. Resource Requests and Limits: Resource requests and limits are used to specify the amount of resources that a pod requires and the maximum amount of resources that it can consume. This helps to ensure that pods are scheduled to nodes that have the required resources available, and helps to prevent overloading of nodes.

  4. Node Selectors: Node selectors are used to control the scheduling of pods on specific nodes. Node selectors allow you to specify the nodes that a pod can run on, helping to ensure that pods are scheduled on nodes that have the necessary resources and configurations.

  5. Taints and Tolerations: Taints and tolerations are used to control the scheduling of pods on specific nodes. Taints are used to mark nodes as unschedulable, while tolerations are used to allow pods to be scheduled on nodes with specific taints.
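The quota and request/limit mechanisms above can be sketched in two short manifests; the namespace, names, and values here are illustrative assumptions, not production settings:

```yaml
# Cap what the "team-a" namespace may consume in total.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Per-pod requests and limits guide scheduling and prevent node overload.
apiVersion: v1
kind: Pod
metadata:
  name: api
  namespace: team-a
spec:
  containers:
    - name: api
      image: registry.example.com/api:1.0  # hypothetical image
      resources:
        requests:
          cpu: 250m        # scheduler places the pod on a node with this much free
          memory: 256Mi
        limits:
          cpu: "1"         # kubelet enforces these as hard ceilings
          memory: 512Mi
```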


4. Networking and Security


Kubernetes is designed with networking and security in mind, offering a range of features to help secure and manage containers and networks. For example, Kubernetes supports network segmentation, which allows you to create isolated network segments for different services and applications. It also offers advanced security features, such as Role-Based Access Control (RBAC), to help manage and control access to resources. According to the 2021 CNCF Survey, 80% of Kubernetes users report that they use the platform's networking and security capabilities. On the other hand, Dynatrace's report "Kubernetes in the wild, 2023" states that security is a key area for growth with a top priority.
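The network segmentation described above is typically expressed with NetworkPolicy objects. As a minimal sketch, this default-deny policy isolates a namespace so only explicitly allowed traffic gets in (the namespace name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments      # hypothetical namespace
spec:
  podSelector: {}          # empty selector = every pod in the namespace
  policyTypes:
    - Ingress              # all inbound traffic is denied unless another
                           # policy explicitly allows it
```

Additional policies can then whitelist specific pod-to-pod flows, giving each service its own isolated segment.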


Here are a few important hardening resources:

  • The official Kubernetes security checklist

  • The CIS Kubernetes Benchmark

  • The NSA/CISA Kubernetes Hardening Guidance
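The RBAC controls mentioned above are expressed as Roles bound to subjects. A read-only sketch (namespace and user names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments          # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: payments
subjects:
  - kind: User
    name: jane                 # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```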


5. Ecosystem and Community


Finally, it's worth mentioning the ecosystem and community that surround Kubernetes. With thousands of contributors and a vast network of users, Kubernetes is one of the most widely used and well-supported container orchestration platforms available. This means that you have access to a wealth of resources and support, including tutorials, guides, and plugins, to help you get the most out of the platform. Docker Swarm, on the other hand, has a smaller and less active community, which makes finding support, tooling, and expertise considerably harder.



 


corebapp.com, established in Bucharest in 2019, is a pioneering cloud-native no-code platform enabling the creation of complex business applications without coding. Launched in 2023, it offers a full no-code experience with secure integration, simplifying development for medium to large businesses in industries like construction, healthcare, and finance. corebapp.com stands out for its flexibility, scalability, and ease of use, empowering non-technical users to build enterprise-grade applications efficiently. Headquartered in Bucharest, Romania, corebapp.com continues to innovate in no-code solutions, focusing on flexibility, scalability, and security. Learn more at corebapp.com.
