- a modern, scalable way to track which Pods back a Service and how to reach them
- split Service endpoints into small, scalable chunks (slices) that Kubernetes networking components can efficiently consume
- replace (and improve on) the older Endpoints object
The problem they solve
- Large Services (hundreds or thousands of Pods) produced a single huge Endpoints object
- Every Pod change rewrote that whole object, putting pressure on the API server and etcd
- The flat format was hard to extend with per-endpoint metadata (zone, topology, readiness, etc.)
What EndpointSlices are
- A Kubernetes API object (discovery.k8s.io/v1)
- Owned by a Service
- EndpointSlice objects are:
  - Automatically managed by the Kubernetes control plane
  - Created and updated by the EndpointSlice controller
  - Generated based on:
    - Service selectors
    - Matching Pod IPs and readiness state
- They are derived state, not source-of-truth configuration.
  - Your source-of-truth objects are:
    - Deployment
    - StatefulSet
    - DaemonSet
    - Service
  - EndpointSlices are dynamically rebuilt from those.
- Each slice contains a subset of a Service's endpoints (Pod IPs or external IPs)
- Holds up to 100 endpoints per slice by default (configurable via the kube-controller-manager's --max-endpoints-per-slice flag)
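As a sketch of that splitting (Service name, slice names, and counts are hypothetical), a Service with ~250 endpoints ends up backed by three slices, each tied back to the Service by the kubernetes.io/service-name label:

```yaml
# Hypothetical slices for a Service named "web".
# The controller generates the hashed name suffixes; each slice holds
# at most 100 endpoints, so ~250 endpoints need three slices.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-abc12                     # generated name (hypothetical)
  labels:
    kubernetes.io/service-name: web   # ties the slice back to its Service
addressType: IPv4
# ... up to 100 endpoints ...
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-def34
  labels:
    kubernetes.io/service-name: web
addressType: IPv4
# ... the next 100 endpoints ...
```

When one Pod changes, only the slice containing it is rewritten, not the Service's entire endpoint list.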
What’s inside an EndpointSlice
- Endpoints
  - IP addresses (IPv4 / IPv6)
  - Ready / serving / terminating status
  - Zone & node info
- Ports
  - Name, port number, protocol
- AddressType
  - IPv4, IPv6, or FQDN
- Labels
  - Including kubernetes.io/service-name
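Putting those fields together, a minimal EndpointSlice might look like this (names, IPs, and the zone are hypothetical):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-abc12                      # hypothetical generated name
  namespace: default
  labels:
    kubernetes.io/service-name: web    # the owning Service
addressType: IPv4                      # IPv4, IPv6, or FQDN
ports:
  - name: http
    protocol: TCP
    port: 8080
endpoints:
  - addresses:
      - "10.1.2.3"                     # Pod IP
    conditions:
      ready: true                      # passing readiness checks
      serving: true
      terminating: false
    nodeName: node-a                   # node hosting the Pod
    zone: us-east-1a                   # topology info
    targetRef:                         # back-reference to the Pod
      kind: Pod
      name: web-5d4f7c9b8-xk2lp
      namespace: default
```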
How they’re used
- kube-proxy reads EndpointSlices to program iptables/IPVS rules
- CoreDNS uses them for Service DNS resolution
- Controllers watch them instead of Endpoints
- The old Endpoints object still exists for backward compatibility
- The data path: Ingress → Service → EndpointSlice → Pod IPs
- You can see endpoint details surfaced in:
  - kubectl describe ingress
  - kubectl describe service
  - Some kubectl get ... -o wide outputs
Why EndpointSlices are better
- Smaller objects: a Pod change rewrites only the affected slice, not one giant endpoints list
- Less API server and etcd churn in large clusters
- Richer per-endpoint metadata (zone, node, serving/terminating conditions)
- Multiple address types (IPv4, IPv6, FQDN), which supports dual-stack clusters
Relationship to Endpoints (important)
- Modern clusters use EndpointSlices by default
- Kubernetes still creates Endpoints objects unless disabled
- You should:
  - Read EndpointSlices
  - Avoid writing Endpoints directly in new tooling
When you’ll notice them (DevOps angle)
- Debugging Services with many Pods
- Investigating kube-proxy or networking issues
- Watching API load in large clusters
- Writing controllers or operators
- Tuning Service performance at scale
What Happens During a Cluster Upgrade?
- Common upgrade strategies:
  - Upgrade the control plane in-place
  - Blue/green node groups
  - Rotate nodes
- In all cases, EndpointSlices:
  - Continue to be reconciled automatically
  - Are regenerated if needed
  - Update when Pods move to new nodes
  - Update when Pod IPs change
What Happens When Pods Move to New Nodes
- Pod gets rescheduled
- Pod gets new IP
- EndpointSlice controller updates the slice
- kube-proxy / CNI updates routing
- Traffic continues normally
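A hedged sketch of a slice mid-move (IPs and node names hypothetical): the controller marks the endpoint on the old node as terminating and adds an entry for the rescheduled Pod's new IP:

```yaml
endpoints:
  - addresses:
      - "10.1.2.3"           # old Pod IP on the drained node
    conditions:
      ready: false
      serving: true          # may still drain in-flight connections
      terminating: true
    nodeName: old-node
  - addresses:
      - "10.1.9.7"           # new Pod IP after rescheduling
    conditions:
      ready: true
      serving: true
      terminating: false
    nodeName: new-node
```

kube-proxy reacts to this update by shifting traffic to the new endpoint.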
When Would EndpointSlices Matter During an Upgrade?
- You manually created custom EndpointSlices (very rare)
- You disabled the EndpointSlice controller (also rare)
- You are debugging networking issues post-upgrade


