What happened:
As the subject says, when maxUnavailable is set to 2 or more, the rolling update still refreshes replicas one by one.

What you expected to happen:
The rolling update should honor the maxUnavailable parameter and work as documented.

How to reproduce it (as minimally and precisely as possible):
Preparation:
1. Change replicas from 2 to 5.
2. Add a rolling update strategy with the maxUnavailable parameter set to 2, as in the sketch below.
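A minimal sketch of such a strategy, assuming the rolloutStrategy / rollingUpdateConfiguration field names from the LeaderWorkerSet API (only the relevant part of the spec is shown):

```yaml
# Fragment of a LeaderWorkerSet spec (leaderworkerset.x-k8s.io/v1):
# allow up to 2 pod groups to be unavailable during a rolling update.
spec:
  replicas: 5
  rolloutStrategy:
    type: RollingUpdate
    rollingUpdateConfiguration:
      maxUnavailable: 2
```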
Steps to reproduce:
1. When the app is up and running, make a change to the lws manifest (e.g. add a dummy env var to both the leader and worker templates), then apply the new manifest.
2. Monitor the pods' status changes (see the watch command after this list).

Expected result: the pod groups should be updated two at a time, i.e. indices 4 and 3 first, then indices 2 and 1, and finally index 0.
Actual result: the pod groups are updated one by one, from index 4 down to index 0.
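One way to watch the update progress, assuming the leaderworkerset.sigs.k8s.io/name label that lws puts on its pods ("my-lws" is a placeholder for the actual LeaderWorkerSet name):

```sh
# Watch the pod groups roll as the update proceeds.
kubectl get pods -l leaderworkerset.sigs.k8s.io/name=my-lws -w
```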
Anything else we need to know?:
The problem is that the rolling update option maxUnavailable depends on a StatefulSet feature that is still in alpha (FEATURE STATE: Kubernetes v1.24 [alpha]; see the sketch after the provider quotes below), and alpha features are not enabled in the production offerings of cloud providers. For example:
EKS: Copied from "Which Kubernetes features are supported by Amazon EKS?":
Amazon EKS supports all generally available (GA) features of the Kubernetes API. Starting with Kubernetes version 1.24, new beta APIs aren’t enabled in clusters by default. However, previously existing beta APIs and new versions of existing beta APIs continue to be enabled by default. Alpha features aren’t supported.
AKS: Copied from "Unsupported alpha and beta Kubernetes features":
AKS only supports stable and beta features within the upstream Kubernetes project. Unless otherwise documented, AKS doesn't support any alpha feature that is available in the upstream Kubernetes project.
GKE: Copied from "Alpha Kubernetes features in GKE". I am not a GKE expert, but using an Alpha cluster as a production environment does not seem like good practice:
Alpha Kubernetes features are disabled by default in all GKE clusters. GKE might enable a specific alpha feature in a specific control plane version.
To enable all alpha Kubernetes features, create an alpha Standard cluster.
Warning: Alpha clusters are intended for experimental purposes and not for production workloads.
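For reference, the underlying StatefulSet field that lws relies on looks like the fragment below; it only takes effect when the MaxUnavailableStatefulSet feature gate is enabled on the control plane, which is exactly what the managed offerings above do not allow:

```yaml
# Fragment of a StatefulSet spec: maxUnavailable for rolling updates
# (alpha since Kubernetes v1.24, gated by MaxUnavailableStatefulSet).
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2   # ignored unless the feature gate is enabled
```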
The maxUnavailable support is critical for real-world use cases such as large model deployment. For example, in a cluster with 80 model replicas where updating a single replica takes 20 minutes, a one-by-one rolling update takes 80 × 20 = 1600 minutes, i.e. more than a day. If maxUnavailable worked and were set to 20% (16 replicas per batch), the update would run in 5 batches of 20 minutes each, about 100 minutes, i.e. less than two hours.
If the MaxUnavailableStatefulSet feature is not going GA soon, one idea to mitigate the problem is to add a polyfill: port the related StatefulSet controller logic into the LWS controller, so that LWS enforces maxUnavailable itself instead of relying on the alpha feature gate. Does this idea sound reasonable to you?
Environment:
Kubernetes version (use kubectl version): v1.31.3
LWS version (use git describe --tags --dirty --always): v0.3.0-8-ga4c468e
Cloud provider or hardware configuration: AWS EKS (server version v1.31.3-eks-56e63d8), node instance type g4dn.2xlarge
OS (e.g: cat /etc/os-release): Amazon Linux 2
Kernel (e.g. uname -a): 5.10.218