Kubernetes · DevOps · ~5 min read

Metrics Server installation in Kubernetes - Commands & Configuration

Introduction
Sometimes you want to see how much CPU and memory your Kubernetes pods are using. Metrics Server collects this data so you can monitor your cluster's health and make decisions like scaling your apps automatically.
When you want to check resource usage of pods and nodes in your Kubernetes cluster.
When you need to enable Horizontal Pod Autoscaler to scale apps based on CPU or memory.
When you want to troubleshoot performance issues by seeing live metrics.
When you want a simple way to gather cluster metrics without installing heavy monitoring tools.
When you want to use kubectl top commands to see resource usage.
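As an illustration of the autoscaling use case above, here is a minimal HorizontalPodAutoscaler manifest that relies on the CPU metrics Metrics Server provides. The Deployment name my-app, the replica bounds, and the 70% threshold are hypothetical values for the sketch:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app            # hypothetical name for this example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # the Deployment to scale
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale up when average CPU exceeds 70%
```

Apply it with kubectl apply -f; without a running Metrics Server, the HPA will report unknown metrics and never scale.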
Commands
This command downloads and installs the Metrics Server components into your Kubernetes cluster. It sets up the necessary pods and permissions.
Terminal
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Expected Output
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
This command checks if the Metrics Server pod is running in the kube-system namespace by filtering pods with the label k8s-app=metrics-server.
Terminal
kubectl get pods -n kube-system -l k8s-app=metrics-server
Expected Output
NAME                        READY   STATUS    RESTARTS   AGE
metrics-server-abcdef1234   1/1     Running   0          30s
-n kube-system - Specifies the namespace where Metrics Server is installed.
-l k8s-app=metrics-server - Filters pods by label to show only Metrics Server pods.
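If you want to script the readiness check rather than eyeball it, you can pull the STATUS column out of the listing. The snippet below runs against a captured sample line so it is self-contained; in practice you would substitute the live command shown in the comment:

```shell
# Extract the STATUS column from the pod listing. In a real cluster,
# replace the sample with:
#   kubectl get pods -n kube-system -l k8s-app=metrics-server --no-headers
sample="metrics-server-abcdef1234 1/1 Running 0 30s"

# STATUS is the third whitespace-separated column.
status=$(echo "$sample" | awk '{ print $3 }')

if [ "$status" = "Running" ]; then
  echo "metrics-server is up"
fi
# prints: metrics-server is up
```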
This command shows CPU and memory usage for each node in the cluster, proving that Metrics Server is collecting metrics.
Terminal
kubectl top nodes
Expected Output
NAME           CPU(cores)   MEMORY(bytes)
worker-node1   120m         500Mi
worker-node2   100m         450Mi
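The millicore values in this output are plain text, so they are easy to post-process. As a sketch, the filter below flags nodes above an arbitrary 110-millicore threshold; it runs on the captured sample output here, but in a live cluster you would pipe kubectl top nodes into the same awk program:

```shell
# Flag nodes whose CPU usage exceeds a threshold (110m is an arbitrary
# example value). With a live cluster, replace the sample with:
#   kubectl top nodes --no-headers
sample="worker-node1 120m 500Mi
worker-node2 100m 450Mi"

# Strip the trailing 'm' from the CPU column and compare numerically.
echo "$sample" | awk '{ cpu = $2; sub(/m$/, "", cpu); if (cpu + 0 > 110) print $1 }'
# prints: worker-node1
```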
This command shows CPU and memory usage for all pods across all namespaces, confirming Metrics Server is working cluster-wide.
Terminal
kubectl top pods --all-namespaces
Expected Output
NAMESPACE     NAME                        CPU(cores)   MEMORY(bytes)
default       my-app-1234567890-abcde     50m          100Mi
kube-system   metrics-server-abcdef1234   10m          30Mi
--all-namespaces - Shows metrics for pods in all namespaces.
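To find the heaviest consumers, kubectl top pods accepts --sort-by=cpu or --sort-by=memory, so sorting can happen server-side. The same ordering can also be done in the shell; the sketch below sorts the captured sample output by the CPU column, highest first:

```shell
# Sort pods by CPU usage, highest first. Server-side equivalent:
#   kubectl top pods --all-namespaces --sort-by=cpu
# Here we sort captured sample output locally instead.
sample="default my-app-1234567890-abcde 50m 100Mi
kube-system metrics-server-abcdef1234 10m 30Mi"

# -k3 sorts on the CPU column; -n parses the leading digits of "50m";
# -r puts the largest value first.
echo "$sample" | sort -k3 -n -r
```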
Key Concept

If you remember nothing else from this pattern, remember: Metrics Server must be installed and running to provide live CPU and memory metrics for your Kubernetes cluster.

Common Mistakes
Not waiting for the Metrics Server pod to be in Running status before using kubectl top commands.
kubectl top commands will fail or show no data if Metrics Server is not ready.
Check pod status with kubectl get pods -n kube-system -l k8s-app=metrics-server and wait until it shows STATUS Running.
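One way to avoid this race in scripts is a short polling loop with a timeout. The sketch below uses a stub function in place of the real kubectl call so it is runnable anywhere; the comment shows the live command you would substitute:

```shell
# Poll until the Metrics Server pod reports Running, up to ~60 seconds.
# check_status is a stub for demonstration; in a real cluster replace it
# with:
#   kubectl get pods -n kube-system -l k8s-app=metrics-server \
#     -o jsonpath='{.items[0].status.phase}'
check_status() { echo "Running"; }   # stub standing in for the kubectl call

for attempt in 1 2 3 4 5 6; do
  if [ "$(check_status)" = "Running" ]; then
    echo "ready"
    break
  fi
  sleep 10   # wait before the next check
done
# prints: ready
```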
Installing Metrics Server without the required RBAC permissions or in the wrong namespace.
Metrics Server will not collect metrics if it lacks permissions or is not running in the kube-system namespace.
Use the official components.yaml from the Metrics Server GitHub release, which sets up the correct permissions and namespace.
Trying to use kubectl top commands before Metrics Server is installed.
kubectl top commands depend on Metrics Server; without it, they will error out.
Always install Metrics Server first before running kubectl top commands.
Summary
Apply the official Metrics Server manifest to install it in your cluster.
Verify the Metrics Server pod is running in the kube-system namespace.
Use kubectl top nodes and kubectl top pods to see live resource usage metrics.