Multi-Tenancy In Kubernetes: Implementation And Optimization

  • aster.cloud
  • November 13, 2022
  • 6 minute read

Kubernetes users usually share clusters to meet the demands of multiple teams and multiple customers, a practice commonly described as multi-tenancy. Multi-tenancy saves costs and simplifies administration. While Kubernetes has no first-class concept of end users or tenants, it provides several features that help manage different tenancy requirements, and on top of these features the Kubernetes community has built a number of multi-tenancy projects.

In this article, we will discuss issues around multi-tenancy in Kubernetes. In particular, we will thoroughly examine the two implementations of multi-tenancy, cluster sharing and multi-cluster, and offer suggestions for enterprises weighing the two.


Control Plane Isolation

Kubernetes offers three mechanisms for control plane isolation: namespaces, role-based access control (RBAC), and resource quotas.

Namespaces, a concept familiar to almost every Kubernetes user, provide separate name scopes: an object's name only needs to be unique within its namespace, so the same name can be reused across namespaces. Furthermore, RBAC rules and quotas are also scoped to namespaces.

Role-based access control (RBAC) is commonly used to enforce authorization in the Kubernetes control plane, for both users and workloads (service accounts). Isolated access to API resources can be achieved by setting appropriate RBAC rules.
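As an illustrative sketch (the tenant-a namespace, group, and role names here are hypothetical), a namespaced Role and RoleBinding along these lines could confine a tenant's users to their own namespace:

```yaml
# Hypothetical example: grant the tenant-a-users group read/write access
# to common workload resources, but only inside the tenant-a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: tenant-a
  name: tenant-a-editor
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "configmaps", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: tenant-a
  name: tenant-a-editor-binding
subjects:
  - kind: Group
    name: tenant-a-users   # hypothetical group managed by your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-editor
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced rather than a ClusterRole, members of the group gain no access to API resources in any other tenant's namespace.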

ResourceQuota can be used to impose a ceiling on the resource usage of a namespace. This ensures that a tenant cannot monopolize a cluster's resources or overwhelm its control plane, minimizing the "noisy neighbor" issue. One restriction is that once a quota is applied to a namespace, Kubernetes requires resource requests and limits to be specified for each container in that namespace.
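A sketch of what this could look like for a hypothetical tenant-a namespace (the quota values are arbitrary, and the LimitRange addresses the requests/limits requirement mentioned above by supplying defaults for containers that omit them):

```yaml
# Hypothetical example: cap the aggregate CPU, memory, and pod count
# of the tenant-a namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  namespace: tenant-a
  name: tenant-a-quota
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
---
# A LimitRange can inject default requests/limits, so containers that
# declare none are still admitted under the quota.
apiVersion: v1
kind: LimitRange
metadata:
  namespace: tenant-a
  name: tenant-a-defaults
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 250m
        memory: 256Mi
```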

While these three mechanisms achieve control plane isolation to some extent, they do not solve the problem once and for all: cluster-wide resources, such as CRDs, cannot be well isolated with them.

Data Plane Isolation

Data plane isolation generally concerns three aspects: container runtime, storage, and network.

As the kernel is shared between containers and the host, unpatched vulnerabilities in the application and system layers can be exploited by attackers for container breakouts and remote code execution that allow access to host resources or other containers. This can be mitigated by running containers in isolated environments, such as VMs (e.g., Kata Containers) and user-mode kernels (e.g., gVisor).


Storage isolation should ensure that volumes are not accessed across tenants. Since StorageClass is a cluster-wide resource, its reclaimPolicy should be set to Delete so that a released PV cannot be re-bound and read by another tenant. In addition, the use of volumes such as hostPath should be prohibited to avoid abuse of the nodes' local storage.
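A minimal sketch of such a StorageClass (the class name is hypothetical, and the provisioner is a placeholder for whatever CSI driver the cluster actually uses):

```yaml
# Hypothetical example: PVs provisioned through this class are deleted
# when their claim is released, so a volume freed by one tenant can
# never be re-bound by another.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tenant-safe-storage          # hypothetical name
provisioner: csi.example.com         # placeholder: substitute your CSI driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```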

Network isolation is usually achieved by setting NetworkPolicy. By default, all pods can communicate with each other. With NetworkPolicy, the network flow between pods can be restricted, reducing unexpected network access. More advanced network isolation can be achieved through a service mesh.
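A common pattern, sketched here for a hypothetical tenant-a namespace, is to deny all ingress by default and then re-allow traffic only from pods in the same namespace:

```yaml
# Hypothetical example: default-deny all ingress to pods in tenant-a...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: tenant-a
  name: default-deny-ingress
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes: ["Ingress"]
---
# ...then allow ingress only from pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: tenant-a
  name: allow-same-namespace
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector: {}  # any pod in tenant-a; cross-namespace traffic stays blocked
```

Note that NetworkPolicy is enforced by the CNI plugin, so the cluster's network plugin must support it for these rules to take effect.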

Implementation of Multi-Tenancy

The control plane and data plane isolation mechanisms described above are individual features provided by Kubernetes; there is still a long way to go before they add up to an integrated multi-tenancy solution. Fortunately, the Kubernetes community has a number of open-source projects that specifically address multi-tenancy. They fall into two categories: one divides tenants by namespace, while the other provides each tenant with a virtual control plane.

Namespace Per Tenant

In Kubernetes' control plane isolation, both RBAC and ResourceQuota are scoped to namespaces. While dividing tenants by namespace seems easy, the prescription of one namespace per tenant has significant limitations. For example, subdividing a tenant at the finer granularity of teams and applications can hardly be accomplished, which increases management difficulty. In view of this, Kubernetes officially provides a controller that supports hierarchical namespaces. Third-party open-source projects such as Capsule and kiosk also provide more sophisticated multi-tenancy support.

Figure: kiosk

Virtual Control Plane

Another option is to provide each tenant with a separate virtual control plane so as to completely isolate the tenant's resources. A virtual control plane is typically implemented by running a separate set of apiservers for each tenant, with a controller synchronizing resources from the tenant apiservers to the original Kubernetes cluster. Each tenant can only access its corresponding apiserver, while the apiservers of the original Kubernetes cluster are typically not externally accessible.


Virtual control planes usually require higher resource consumption in exchange for radical isolation of the control planes. But in conjunction with data plane isolation techniques, this is a more thorough and secure multi-tenancy solution. One example is the vcluster project.

Figure: vcluster

Comparison of Two Options

Whether to use "namespace per tenant" or a virtual control plane to achieve multi-tenancy depends largely on the application scenario. Generally speaking, dividing tenants by namespace offers less isolation and flexibility but is appropriate for lightweight scenarios such as sharing a cluster between multiple teams, whereas in multi-customer scenarios a virtual control plane usually provides more secure isolation.

Multi-Cluster Solutions

As the discussion above shows, sharing Kubernetes clusters is not an easy task: multi-tenancy is not a built-in feature of Kubernetes and can only be implemented with the support of other projects for tenant isolation on both the control and data planes, which adds considerable learning and adaptation costs to the overall solution. As a result, we are seeing an increasing number of users adopt multi-cluster solutions instead.

Compared with cluster-sharing solutions, multi-cluster solutions have both pros and cons. The advantages are a high level of isolation and clear boundaries; the disadvantages are high overhead and O&M costs. Since each cluster requires its own control plane and worker nodes, Kubernetes clusters are usually built on VMs to increase the utilization of the physical machines. However, traditional virtualization products tend to be large and heavy because they are designed for a wide range of scenarios, which makes them too expensive to be the best choice for supporting virtualized Kubernetes clusters.

Based on that, we believe that an ideal virtualization platform for virtualized Kubernetes clusters should have the following characteristics:

  • Lightweight. Instead of addressing all scenarios (e.g., VDI), it should focus on server virtualization while eliminating all unnecessary features and overheads.
  • Efficient. It should make wide use of paravirtualized I/O technologies such as virtio to increase efficiency.
  • Secure. Minimize the host’s attack surface.
  • Kubernetes-native. The virtualization platform should itself be a Kubernetes cluster, which reduces learning and O&M costs.

These characteristics are exactly what the Virtink virtualization engine offers. Virtink is an open-source, lightweight virtualization add-on for Kubernetes released by SmartX. Built on the Cloud Hypervisor project, Virtink is capable of orchestrating lightweight, modern VMs on Kubernetes.

Virtink uses virtio as much as possible to improve I/O efficiency. In terms of security, memory management is safer because Cloud Hypervisor is written in Rust. Legacy and unnecessary hardware is also left unsupported, minimizing the attack surface exposed to VMs and improving host security.

Overall, Virtink is more lightweight, efficient, and secure, providing more ideal virtualization support for Kubernetes in Kubernetes while lowering the overhead of the virtualization layer.

Figure: Kubernetes + Virtink

Moreover, to meet the need to create, operate, and maintain virtualized Kubernetes clusters on Virtink, we developed the knest command line tool, which lets users create and operate clusters with a single command. To create a virtualized Kubernetes cluster on a Virtink cluster, users simply run knest create <cluster-name>. One-command scale-out is also supported.

Conclusion

Although Kubernetes has no built-in multi-tenancy, it provides some fine-grained supporting features. Multi-tenant cluster sharing can be achieved through these features and some third-party tools, but this entails additional learning and O&M costs. From this perspective, the multi-cluster solution is often more suitable and has been adopted by many users. However, users may then encounter other problems, such as the low efficiency and high cost of traditional virtualization platforms.

Built on the efficient and secure Cloud Hypervisor project, Virtink, an open-source virtualization engine, is optimized for orchestrating lightweight VMs on Kubernetes. We also provide the knest command line tool, which allows users to create and manage clusters with a single command, reducing the overall O&M cost of multiple clusters.
To learn more about the Virtink project, features and demo, read the previous blog post here.

 

Guest post by Feng Ye, Software Engineering Manager & Virtink Project Maintainer at SmartX
Source CNCF

