Arm64 Graviton on EKS+Karpenter
Sep 13, 2024
Converting my default NodePool to arm64
This is a modification of what I do to build clusters here.
Why do this?
- Graviton instances are cheaper than their x86 equivalents
- Graviton instances are more performant for many workloads
- There is plenty of arm64 spot capacity available
- It's fairly easy to run multi-arch Docker builds with buildx or ko (see the sketch below)
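With buildx, the whole multi-arch build-and-push is a couple of commands. A minimal sketch — the builder name, image tag, and registry are placeholders, not from my setup:

# One-time: create and select a builder that can target multiple platforms
# (cross-building on an x86 machine may also need QEMU binfmt handlers,
# e.g. docker run --privileged --rm tonistiigi/binfmt --install all)
docker buildx create --name multiarch --use

# Build for both architectures and push a single multi-arch manifest;
# swap the tag for your own registry/repo
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .

ko users get the same result with ko build --platform=linux/amd64,linux/arm64.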
NodePool Manifest
This is what I run now. I dropped it in over the top of my old default NodePool, and Karpenter gracefully rotated out my amd64 nodes with no downtime, replacing them all with t4g and m7g spot instances.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      # In karpenter.sh/v1, expireAfter lives under template.spec
      # (it moved out of the disruption block in the v1 API)
      expireAfter: 168h # expire nodes after 7 days = 7 * 24h
      requirements:
        - key: "kubernetes.io/arch"
          operator: In
          values: ["arm64"]
        - key: "kubernetes.io/os"
          operator: In
          values: ["linux"]
        - key: "karpenter.k8s.aws/instance-cpu"
          operator: In
          values: ["4", "8", "16", "32"]
        - key: "karpenter.k8s.aws/instance-family"
          operator: In
          values: ["t4g", "m7g"]
        - key: "karpenter.k8s.aws/instance-hypervisor"
          operator: In
          values: ["nitro"]
        - key: "karpenter.k8s.aws/instance-generation"
          operator: Gt
          values: ["2"]
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 30s
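If you want to watch the rotation happen, kubectl's label columns make it easy to confirm the new nodes are arm64 spot capacity. This assumes nothing beyond the standard labels Karpenter and the kubelet apply to nodes:

# Show architecture, capacity type, and instance type for every node
kubectl get nodes -L kubernetes.io/arch,karpenter.sh/capacity-type,node.kubernetes.io/instance-type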
Unfortunately, EKS on Fargate doesn't support arm64 yet. But I only run four Fargate pods per cluster, and both CoreDNS and Karpenter publish multi-arch images.
Are you running arm64 clusters? What does your NodePool look like?