
Commit fca0dc0

Fix k8s_drain runs into timeout with pods from stateful sets. (ansible-collections#793)
SUMMARY

Fixes ansible-collections#792.

The function wait_for_pod_deletion in k8s_drain never checks on which node a pod is actually running:

    try:
        response = self._api_instance.read_namespaced_pod(
            namespace=pod[0], name=pod[1]
        )
        if not response:
            pod = None
        time.sleep(wait_sleep)

This means that if a pod is successfully evicted and restarted with the same name on a new node, k8s_drain does not notice and thinks that the original pod is still running. This is the case for pods which are part of a stateful set.

ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

k8s_drain

Reviewed-by: Mike Graves <[email protected]>
1 parent cd68631 commit fca0dc0
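
The essence of the fix described above is a stricter liveness test: a pod only blocks the drain if it can still be read AND is still scheduled on the node being drained. Below is a minimal, self-contained sketch of that predicate; the helper name pod_still_on_node, the drained_node parameter, and the fake pod objects are invented for illustration and do not appear in kubernetes.core.

    # Illustrative sketch only -- names are invented for this example.
    from types import SimpleNamespace


    def pod_still_on_node(response, drained_node):
        # Pre-fix behaviour checked only `if not response:`, so a StatefulSet
        # pod recreated with the same name on another node kept the drain
        # waiting until it hit the timeout.
        return bool(response) and response.spec.node_name == drained_node


    # A pod evicted from node-a and recreated (same name) on node-b:
    rescheduled = SimpleNamespace(spec=SimpleNamespace(node_name="node-b"))
    assert not pod_still_on_node(rescheduled, "node-a")  # gone from node-a: drain proceeds
    assert pod_still_on_node(rescheduled, "node-b")      # still counted on its new node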

File tree

3 files changed: +5 −2 lines changed
Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+bugfixes:
+  - k8s_drain - Fix k8s_drain runs into a timeout when evicting a pod which is part of a stateful set (https://github.com/ansible-collections/kubernetes.core/issues/792).

plugins/module_utils/helm.py

Lines changed: 0 additions & 1 deletion
@@ -77,7 +77,6 @@ def write_temp_kubeconfig(server, validate_certs=True, ca_cert=None, kubeconfig=
 
 
 class AnsibleHelmModule(object):
-
     """
     An Ansible module class for Kubernetes.core helm modules
     """

plugins/modules/k8s_drain.py

Lines changed: 3 additions & 1 deletion
@@ -299,7 +299,9 @@ def _elapsed_time():
                 response = self._api_instance.read_namespaced_pod(
                     namespace=pod[0], name=pod[1]
                 )
-                if not response:
+                if not response or response.spec.node_name != self._module.params.get(
+                    "name"
+                ):
                     pod = None
                     del pods[-1]
                 time.sleep(wait_sleep)
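
For orientation, here is a condensed, hypothetical reconstruction of the patched wait loop showing where the new comparison sits. The read_pod and drained_node parameters are stand-ins, not the module's actual interface; in the real module the node name comes from self._module.params.get("name") and the pod is read via the Kubernetes API client.

    # Condensed sketch of the patched loop; not the module's verbatim code.
    import time
    from datetime import datetime


    def wait_for_pod_deletion(pods, read_pod, drained_node, wait_timeout, wait_sleep):
        # pods is a list of (namespace, name) tuples still expected to disappear.
        start = datetime.now()
        while pods and (wait_timeout == 0 or (datetime.now() - start).seconds < wait_timeout):
            namespace, name = pods[-1]
            response = read_pod(namespace=namespace, name=name)
            # Patched condition: a pod that is gone OR rescheduled onto a
            # different node no longer counts as "still running here".
            if not response or response.spec.node_name != drained_node:
                del pods[-1]
            time.sleep(wait_sleep)
        return pods  # non-empty means the wait timed out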
