In November 2022, the open source Kubernetes project announced that its new image registry, registry.k8s.io, was officially GA. The new registry replaces the legacy k8s.gcr.io registry, which will receive no further updates after April 3, 2023. To assist in this transition and ensure that users of earlier Kubernetes releases and tooling can update to supported versions in time, the Kubernetes project, in partnership with Google, started gradually redirecting image requests from k8s.gcr.io to registry.k8s.io on March 20, 2023.
Today’s post covers what’s happening, why, and, more importantly, what actions you can take to switch to registry.k8s.io to mitigate future issues.
Why is Kubernetes switching to the new registry?
Google open sourced the Kubernetes project and has supported the Cloud Native Computing Foundation (CNCF) since its inception. Today, there are millions of users and a massive, global ecosystem of vendors and projects that support the project and the CNCF. registry.k8s.io, the new vendor-agnostic registry built by Kubernetes community members from Google, Amazon, VMware, and elsewhere, creates a global CDN for the project's container images, spreading the load across multiple cloud providers. This new registry is more sustainable for the project and provides a better experience for all Kubernetes users.
For additional information on the registry, see the Kubernetes Community Blog post about its launch.
Why redirect requests?
The redirect is a temporary measure to smooth the transition to registry.k8s.io. Clusters should not continue to rely on k8s.gcr.io in the long term; the Kubernetes community plans to sunset it in the future.
The good news is that registry.k8s.io is a mirror of k8s.gcr.io that can be dropped in as a direct substitute for most users. However, if you use Google Kubernetes Engine (GKE) or Anthos in a restricted environment that applies strict domain name or IP address access policies, such as with VPC Service Controls, and you rely on Kubernetes community images from k8s.gcr.io, you may be impacted and need to make some adjustments to be future-compatible.
What cluster configurations are impacted by the redirect?
Workloads that may be impacted are those running on GKE and Anthos clusters in a restricted environment that applies strict domain name or IP address access policies, such as with VPC Service Controls or other network access tooling.
Check for registry.k8s.io connectivity
To test connectivity and image access to registry.k8s.io, run the following command:
kubectl run hello-world -ti --rm --image=registry.k8s.io/busybox:latest --restart=Never -- date
If the registry change doesn’t affect you, the output should look like the following:
Fri Mar 17 10:08:07 UTC 2023
pod "hello-world" deleted
What kind of errors will I see if I’m impacted?
You might notice an increase in ErrImagePull or ImagePullBackOff errors. Container creation might fail with the warning FailedCreatePodSandBox for container images that reference k8s.gcr.io.
The redirect doesn’t affect running workloads. You’ll notice errors when scaling up the number of workloads, or when creating new workloads that reference k8s.gcr.io.
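If you want to confirm that a failure is image-related, one hedged way to surface these symptoms is to look at recent cluster events and at the events of a failing Pod; POD_NAME and NAMESPACE below are placeholders for your own values:

# List recent warning events, which include image pull failures, across all namespaces.
kubectl get events --all-namespaces --field-selector type=Warning

# Inspect a specific failing Pod; the Events section shows which image reference is being rejected.
kubectl describe pod POD_NAME -n NAMESPACE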
How can I detect what images in my cluster are impacted?
The Kubernetes community has devised the following methods:
Scan manifests and charts
If you manage multiple clusters, or don’t have direct access to your clusters, you can search your manifests and charts for “k8s.gcr.io”. This is the recommended method for larger or more complicated environments.
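As a rough sketch of that search (the ./deploy path and the file extensions are assumptions about your repository layout; adjust them to match your own):

# Recursively search local manifests and Helm chart templates for the legacy registry hostname.
grep -r --include="*.yaml" --include="*.yml" --include="*.tpl" "k8s.gcr.io" ./deploy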
Use kubectl
Find Pods that contain image references to k8s.gcr.io:
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c |\
grep "k8s.gcr.io"
NOTE: Direct updates to manifests might not change the references of some Pods returned by this command. Pods that are controlled by system-level services or by other controllers might require updating at the source before the effects can be seen.
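To trace such a Pod back to its controller before editing anything, one hedged approach is to read its ownerReferences; POD_NAME and NAMESPACE are placeholders:

# Print the kind and name of the object that owns the Pod (for example a DaemonSet or ReplicaSet),
# so the image reference can be fixed on that source object rather than on the Pod itself.
kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'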
Use a krew plugin
Use community-images, a kubectl krew plugin that scans your cluster and generates a report of any Pods that run containers that reference k8s.gcr.io.
If you have krew installed, install the community-images plugin:
kubectl krew install community-images
Then, generate a report:
kubectl community-images
Other methods of installing the plugin are available in the kubernetes-sigs/community-images GitHub repository.
NOTE: Similarly to the kubectl method, direct updates to manifests might not change the references of some Pods returned by this command. Pods that are controlled by system-level services or by other controllers might require updating at the source before the effects can be seen.
For additional options for detecting and blocking containers that contain references to k8s.gcr.io using third-party tools, see the Kubernetes Blog.
One of my workloads is impacted. What should I do?
If you’re using VPC Service Controls or if your environment is similarly restrictive, add a rule to allow access to registry.k8s.io. If you can’t add a rule, the recommended forward-compatible option is to mirror the affected images to a private instance of Artifact Registry by using gcrane, and update your manifests to reference the image at its new location.
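As a rough sketch of that mirroring flow with gcrane (the region, PROJECT_ID, REPO, and the pause:3.9 image below are placeholders; copy whichever images your scan actually flagged):

# Configure Docker credentials for Artifact Registry in the placeholder region.
gcloud auth configure-docker us-central1-docker.pkg.dev

# Copy an affected image from the community registry into your private Artifact Registry repository.
gcrane cp registry.k8s.io/pause:3.9 us-central1-docker.pkg.dev/PROJECT_ID/REPO/pause:3.9

After copying, update your manifests so the image field points at the mirrored location, for example us-central1-docker.pkg.dev/PROJECT_ID/REPO/pause:3.9.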
If you need other options or general assistance, please reach out to Google Cloud support.
I’m not affected by the redirect. Should I still update my image references?
Yes. The redirect is temporary, and the Kubernetes community plans to phase out support for k8s.gcr.io in the future.
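If your manifests live in a repository, a minimal sketch of that swap is below; the ./deploy path is a placeholder, and you should review the resulting diff, since a blanket substitution assumes every k8s.gcr.io image you use has a registry.k8s.io equivalent:

# Rewrite the legacy hostname in place across local manifests (GNU sed in-place syntax).
grep -rl "k8s.gcr.io" ./deploy | xargs sed -i 's|k8s.gcr.io|registry.k8s.io|g'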
Getting help
If you have questions or run into issues, please check out the community resources available on this topic or contact us through standard support channels:
- Kubernetes community debugging guide for migrating to registry.k8s.io
- File a ticket with Google Cloud support
By: Bob Killen (Program Manager, Google Open Source Programs Office)
Originally published at Google Cloud Blog