Adding Scale Out functionality #613

Conversation
radez commented Feb 20, 2025
- Add nodes to the worker inventory section and update vars in scaleout.yml to add the nodes to the existing cluster (a sketch of the intended flow follows below).
- https://docs.openshift.com/container-platform/4.17/nodes/nodes/nodes-nodes-adding-node-iso.html
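To make the intended workflow concrete, here is a minimal sketch. The exact inventory layout is an assumption for illustration; only the worker section and scaleout.yml are named in this PR (the hostnames are the VMs from the test run later in this thread). The linked 4.17 docs describe generating a node ISO (via `oc adm node-image create`) and booting the new machines from it.

```ini
# Hypothetical Ansible inventory fragment - the section layout is an
# assumption, not copied from the PR. Existing workers stay in place;
# the new machines are appended before re-running the playbook.
[worker]
vm00001
vm00002
vm00003
# appended for scale out:
vm00004
vm00005
vm00006
```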
/test ? |
@josecastillolema: The following commands are available to trigger required jobs:
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
/test deploy-sno |
/test deploy-5nodes |
The test failed because of:
Let me take care of this tomorrow morning; we need to update the secrets. |
OK, I was going to look into the route issue more this afternoon as well. |
Should be fixed when openshift/release#62015 merges |
/test deploy-5nodes |
I'm wondering if we could include some sort of "limit" on the initial deployment of a cluster. Say someone is given a 200-node allocation and runs create-inventory: they would end up with 196 workers in the worker section. We could make mno-deploy only deploy, say, 120 of those worker nodes, so you would have to use the mno-scale-out.yml playbook to raise the worker count above that initial threshold. Do you think it is worth implementing that in mno-deploy? A sketch of what such a cap might look like follows below.
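A minimal sketch of the suggested cap, assuming a single new variable; the name and default below are hypothetical, not part of this PR:

```yaml
# Hypothetical variable for the suggested cap - name and default are
# illustrative only. mno-deploy would bring up at most this many
# workers; anything beyond it would go through mno-scale-out.yml.
initial_worker_count: 120
```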
/test deploy-5nodes |
@radez: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
This needs to be rebased to pick up the fix in #619 for CI to work |
Turns out we do need all.yml; there's a config directory var that I use to hold the generated ISO. |
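Presumably something along these lines in all.yml; the variable name and path below are guesses for illustration, since the actual var isn't quoted in this thread:

```yaml
# Illustrative only - the real all.yml variable name may differ.
# Directory var used to hold the generated node ISO:
scale_out_iso_dir: /opt/jetlag/scale-out-iso
```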
Was able to run an initial deployment with 3 nodes and then scale up to 6 nodes:

```
# oc get no
NAME               STATUS   ROLES                  AGE     VERSION
e38-h02-000-r650   Ready    control-plane,master   35m     v1.31.6
e38-h03-000-r650   Ready    control-plane,master   52m     v1.31.6
e38-h06-000-r650   Ready    control-plane,master   52m     v1.31.6
vm00001            Ready    worker                 38m     v1.31.6
vm00002            Ready    worker                 38m     v1.31.6
vm00003            Ready    worker                 38m     v1.31.6
vm00004            Ready    worker                 4m37s   v1.31.6
vm00005            Ready    worker                 4m39s   v1.31.6
vm00006            Ready    worker                 4m40s   v1.31.6
```

Basic process was:
Playbook ran for 10m 28s for the 3-node scale-up. |
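For reference, a minimal sketch of the scale-out invocation implied above; the inventory filename is a placeholder, and only mno-scale-out.yml is named in this thread:

```
# Placeholder inventory path - substitute your generated inventory.
ansible-playbook -i ansible/inventory/cluster.local ansible/mno-scale-out.yml
```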
Was able to scale the cluster to 156 nodes, works well!!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: akrzos

The full list of commands accepted by this bot can be found here. The pull request process is described here. |
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
Merged commit 938cef9 into redhat-performance:main
I think it would be nice to have a Prow test for this feature in the Jetlag CI; a rough sketch of what such a job could look like follows below. |
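Purely as a sketch of that idea: a hand-written Prow presubmit along these lines could gate the playbook. In practice openshift/release generates jobs from ci-operator configs, and every name, image, and command below is an assumption, not something from this PR:

```yaml
# Illustrative only - job name, image, and test command are assumptions;
# real Jetlag jobs would be generated in openshift/release.
presubmits:
  redhat-performance/jetlag:
  - name: pull-ci-redhat-performance-jetlag-main-scale-out
    always_run: false            # trigger manually with /test scale-out
    rerun_command: /test scale-out
    trigger: (?m)^/test( | .* )scale-out,?($|\s.*)
    decorate: true
    spec:
      containers:
      - image: quay.io/centos/centos:stream9   # assumed base image
        command: ["/bin/bash", "-c"]
        args:
        - ansible-playbook -i ansible/inventory/cluster.local ansible/mno-scale-out.yml
```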