[[install-config-configuring-nsx-t-sdn]]
[%hardbreaks]
= Configuring NSX-T SDN
{product-author}
{product-version}
:data-uri:
:icons:
:experimental:
:toc: macro
:toc-title:

toc::[]

[[nsx-t-sdn-and-openshift]]
== NSX-T SDN and {product-title}

VMware NSX-T Data Center (TM) provides advanced software-defined networking (SDN), security, and visibility for
container environments, simplifying IT operations and extending native {product-title} networking capabilities.

NSX-T Data Center supports virtual machine, bare-metal, and container workloads across multiple clusters, giving
organizations complete visibility through a single SDN across the entire environment.

For more information on how NSX-T integrates with {product-title}, see xref:../architecture/networking/network_plugins.adoc#nsx-sdn[NSX-T SDN in Available SDN plug-ins].

[[nsx-t-sdn-operations-workflow]]
== Example Topology

One typical use case is to have a Tier-0 (T0) router that connects the physical system with the virtual environment and a Tier-1 (T1) router that acts as a default gateway for the {product-title} VMs.

Each VM has two vNICs: one vNIC connects to the Management Logical Switch for accessing the VMs, and the other vNIC connects to a Dump Logical Switch and is used by `nsx-node-agent` to uplink the Pod networking. For further details, refer to link:https://docs.VMware.com/en/VMware-NSX-T-Data-Center/2.4/nsxt_24_ncp_openshift.pdf[NSX Container Plug-in for OpenShift].

The LoadBalancer used for configuring {product-title} Routes, as well as all project T1 routers and Logical Switches, is created automatically during the {product-title} installation.

In this topology, the default {product-title} HAProxy Router is used for all infrastructure components such as Grafana, Prometheus, Console, Service Catalog, and others.
Ensure that the DNS records for the infrastructure components point to the infrastructure node IP addresses, because the HAProxy Router uses the host network namespace.
This works for infrastructure routes, but to avoid exposing the management IP addresses of the infrastructure nodes to the outside world, deploy application-specific routes to the NSX-T LoadBalancer.
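
For example, after the installation you might publish an application through the NSX-T LoadBalancer by creating a Route whose host name resolves to the NSX-T load balancer virtual server rather than to the infrastructure nodes. This is a minimal sketch; the service name `myapp` and the host name are placeholders, and the corresponding DNS record is assumed to exist:

----
$ oc expose service myapp --hostname=myapp.apps.example.com
----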

This example topology assumes you are using three {product-title} master virtual machines and four {product-title} worker virtual machines (two for infrastructure and two for compute).

[[nsx-t-sdn-installation]]
== Installing VMware NSX-T

Prerequisites:

* ESXi host requirements:
** ESXi servers that host {product-title} node VMs must be NSX-T Transport Nodes.
+
.NSX UI displaying the Transport Nodes for a typical high availability environment
+
image::nsxt-transportnodes.png[NSX Transport Nodes]

* DNS requirements:
** You must add a new entry to your DNS server with a wildcard record that points to the infrastructure nodes. This allows load balancing by NSX-T or another third-party load balancer. In the `hosts` file below, the entry is defined by the `openshift_master_default_subdomain` variable.
** You must update your DNS server with the `openshift_master_cluster_hostname` and `openshift_master_cluster_public_hostname` variables.
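+
For example, with the host names used in the example inventory file below, you can confirm from a node VM that the records resolve (`test` is an arbitrary name used only to exercise the wildcard record):
+
----
$ dig +short master01.example.com
$ dig +short test.demo.example.com   # any name under *.demo.example.com must resolve to the infrastructure node IPs
----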

* Virtual Machine requirements:
** The {product-title} node VMs must have two vNICs:
** A Management vNIC must be connected to the Logical Switch that is uplinked to the management T1 router.
** The second vNIC on all VMs must be tagged in NSX-T so that the NSX Container Plug-in (NCP) knows which port to use as a parent VIF for all Pods running on a particular {product-title} node. The tags must be the following:
+
----
{'ncp/node_name': 'node_name'}
{'ncp/cluster': 'cluster_name'}
----
+
The following image shows the tags in the NSX UI for all nodes. For a large-scale cluster, you can automate the tagging by using the API or Ansible.
+
.NSX UI displaying node tags
+
image::nsxt-tags.png[NSX VM tags]
+
The order of the tags shown in the NSX UI is the opposite of the order in the API.
The node name must match exactly what the kubelet expects, and the cluster name must be the same as the `nsx_openshift_cluster_name` value in the Ansible hosts file shown below. Ensure that the proper tags are applied to the second vNIC on every node; the example that follows shows one way to apply them through the API.
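+
The following is a minimal sketch of tagging one node's second vNIC by using the NSX-T 2.4 Manager REST API. The Manager host name, credentials, node name, and cluster name are the example values used elsewhere in this topic, and the logical port UUID of the second vNIC is a placeholder that you must look up first, for example in the NSX UI or with `GET /api/v1/logical-ports`:
+
----
$ PORT_ID=<uuid-of-the-second-vNIC-logical-port>
$ curl -sk -u 'nsx_admin:nsx_api_password_example' \
    https://nsxmgr.example.com/api/v1/logical-ports/${PORT_ID} \
  | jq '.tags = [{"scope": "ncp/node_name", "tag": "node01.example.com"},
                 {"scope": "ncp/cluster",   "tag": "cluster01"}]' \
  | curl -sk -u 'nsx_admin:nsx_api_password_example' \
      -X PUT -H 'Content-Type: application/json' -d @- \
      https://nsxmgr.example.com/api/v1/logical-ports/${PORT_ID}
----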
+
* NSX-T requirements:
+
The following prerequisites must be met in NSX-T:
+
** A Tier-0 Router.
** An Overlay Transport Zone.
** An IP Block for Pod networking.
** Optionally, an IP Block for routed (NoNAT) Pod networking.
** An IP Pool for SNAT. By default, the subnet assigned to each Project from the Pod networking IP Block is routable only inside NSX-T. NCP uses this IP Pool to provide connectivity to the outside.
** Optionally, the Top and Bottom firewall sections in a dFW (Distributed Firewall). NCP places the Kubernetes Network Policy rules between those two sections.
** The Open vSwitch and CNI plug-in RPMs must be hosted on an HTTP server reachable from the {product-title} node VMs (`http://websrv.example.com` in this example). Those files are included in the NCP TAR file, which you can download from VMware at link:https://my.VMware.com/web/vmware/details?downloadGroup=NSX-T-PKS-240&productId=673[Download NSX Container Plug-in 2.4.0].
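+
As a quick reachability check from a node VM, you can request the file headers, using the example URLs that appear in the inventory file below (replace `buildversion` with the actual build you downloaded):
+
----
$ curl -I http://websrv.example.com/nsx-cni-buildversion.x86_64.rpm
$ curl -I http://websrv.example.com/openvswitch-buildversion.rhel75-1.x86_64.rpm
----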

* {product-title} requirements:
+
** Run the following command to install required software packages, if any, for {product-title}:
+
----
$ ansible-playbook -i hosts openshift-ansible/playbooks/prerequisites.yml
----
+
** Ensure that the NCP container image is downloaded locally on all nodes.
+
** After the `prerequisites.yml` playbook has successfully executed, run the following command on all nodes, replacing `xxx` with the NCP build version:
+
----
$ docker load -i nsx-ncp-rhel-xxx.tar
----
+
For example:
+
----
$ docker load -i nsx-ncp-rhel-2.4.0.12511604.tar
----
+
** Get the image name and retag it:
+
----
$ docker images
$ docker image tag registry.local/xxxxx/nsx-ncp-rhel nsx-ncp <1>
----
<1> Replace `xxxxx` with the NCP build version. For example:
+
----
$ docker image tag registry.local/2.4.0.12511604/nsx-ncp-rhel nsx-ncp
----
+
** In the {product-title} Ansible hosts file, specify the following parameters to set up NSX-T as the network plug-in:
+
----
[OSEv3:children]
masters
nodes
etcd
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={"admin" : "$apr1$H0QeP6oX$HHdscz5gqMdtTcT5eoCJ20"}
openshift_master_default_subdomain=demo.example.com
openshift_use_nsx=true
os_sdn_network_plugin_name=cni
openshift_use_openshift_sdn=false
openshift_node_sdn_mtu=1500
openshift_master_cluster_method=native
openshift_master_cluster_hostname=master01.example.com
openshift_master_cluster_public_hostname=master01.example.com
openshift_hosted_manage_registry=true
openshift_hosted_manage_router=true
openshift_enable_service_catalog=true
openshift_cluster_monitoring_operator_install=true
openshift_web_console_install=true
openshift_console_install=true
# NSX-T specific configuration
#nsx_use_loadbalancer=false
nsx_openshift_cluster_name='cluster01'
nsx_api_managers='nsxmgr.example.com'
nsx_api_user='nsx_admin'
nsx_api_password='nsx_api_password_example'
nsx_tier0_router='LR-Tier-0'
nsx_overlay_transport_zone='TZ-Overlay'
nsx_container_ip_block='pod-networking'
nsx_no_snat_ip_block='pod-nonat'
nsx_external_ip_pool='pod-external'
nsx_top_fw_section='containers-top'
nsx_bottom_fw_section='containers-bottom'
nsx_ovs_uplink_port='ens224'
nsx_cni_url='http://websrv.example.com/nsx-cni-buildversion.x86_64.rpm'
nsx_ovs_url='http://websrv.example.com/openvswitch-buildversion.rhel75-1.x86_64.rpm'
nsx_kmod_ovs_url='http://websrv.example.com/kmod-openvswitch-buildversion.rhel75-1.el7.x86_64.rpm'
nsx_insecure_ssl=true
# vSphere Cloud Provider
#openshift_cloudprovider_kind=vsphere
#openshift_cloudprovider_vsphere_username='[email protected]'
#openshift_cloudprovider_vsphere_password='viadmin_password'
#openshift_cloudprovider_vsphere_host='vcsa.example.com'
#openshift_cloudprovider_vsphere_datacenter='Example-Datacenter'
#openshift_cloudprovider_vsphere_cluster='example-Cluster'
#openshift_cloudprovider_vsphere_resource_pool='ocp'
#openshift_cloudprovider_vsphere_datastore='example-Datastore-name'
#openshift_cloudprovider_vsphere_folder='ocp'
[masters]
master01.example.com
master02.example.com
master03.example.com
[etcd]
master01.example.com
master02.example.com
master03.example.com
[nodes]
master01.example.com ansible_ssh_host=192.168.220.2 openshift_node_group_name='node-config-master' openshift_ip=192.168.220.2
master02.example.com ansible_ssh_host=192.168.220.3 openshift_node_group_name='node-config-master' openshift_ip=192.168.220.3
master03.example.com ansible_ssh_host=192.168.220.4 openshift_node_group_name='node-config-master' openshift_ip=192.168.220.4
node01.example.com ansible_ssh_host=192.168.220.5 openshift_node_group_name='node-config-infra' openshift_ip=192.168.220.5
#node02.example.com ansible_ssh_host=192.168.220.6 openshift_node_group_name='node-config-infra' openshift_ip=192.168.220.6
node03.example.com ansible_ssh_host=192.168.220.7 openshift_node_group_name='node-config-compute' openshift_ip=192.168.220.7
node04.example.com ansible_ssh_host=192.168.220.8 openshift_node_group_name='node-config-compute' openshift_ip=192.168.220.8
----
+
For information on the {product-title} installation parameters, see xref:../install/configuring_inventory_file.adoc#install-config-configuring-inventory-file[Configuring Your Inventory File].

.Procedure

After meeting all of the prerequisites, you can deploy NSX-T Data Center and {product-title}.

. Deploy the {product-title} cluster:
+
----
$ ansible-playbook -i hosts openshift-ansible/playbooks/deploy_cluster.yml
----
+
For more information on the {product-title} installation, see xref:../install/running_install.adoc#install-running-installation-playbooks[Installing OpenShift Container Platform].

. After the installation is complete, validate that the NCP and nsx-node-agent Pods are running:
+
----
$ oc get pods -o wide -n nsx-system
NAME                   READY     STATUS    RESTARTS   AGE       IP              NODE                   NOMINATED NODE
nsx-ncp-5sggt          1/1       Running   0          1h        192.168.220.8   node04.example.com     <none>
nsx-node-agent-b8nkm   2/2       Running   0          1h        192.168.220.5   node01.example.com     <none>
nsx-node-agent-cldks   2/2       Running   0          2h        192.168.220.8   node04.example.com     <none>
nsx-node-agent-m2p5l   2/2       Running   28         3h        192.168.220.4   master03.example.com   <none>
nsx-node-agent-pcfd5   2/2       Running   0          1h        192.168.220.7   node03.example.com     <none>
nsx-node-agent-ptwnq   2/2       Running   26         3h        192.168.220.2   master01.example.com   <none>
nsx-node-agent-xgh5q   2/2       Running   26         3h        192.168.220.3   master02.example.com   <none>
----
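+
If a Pod does not reach the `Running` state, the NCP logs are a reasonable place to start troubleshooting. For example, using the NCP Pod name from the output above:
+
----
$ oc logs nsx-ncp-5sggt -n nsx-system
----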

== Checking NSX-T after {product-title} deployment

After installing {product-title} and verifying that the NCP and `nsx-node-agent-*` Pods are running:

* Check the routing. Ensure that the Tier-1 routers were created during the installation and are linked to the Tier-0 router:
+
.NSX UI displaying the T1 routers
image::nsxt-routing.png[NSX routing]
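+
A quick way to list the Tier-1 routers from the command line (a sketch that assumes the NSX-T 2.4 Manager REST API and the example Manager host name and credentials from the inventory file):
+
----
$ curl -sk -u 'nsx_admin:nsx_api_password_example' \
    https://nsxmgr.example.com/api/v1/logical-routers \
  | jq -r '.results[] | select(.router_type == "TIER1") | .display_name'
----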

* Observe the network traceflow and visibility. For example, check the connection between `console` and `grafana`.
+
Traceflow helps you secure and optimize communications between Pods, Projects, virtual machines, and external services, as shown in the following example:
+
.NSX UI displaying a network traceflow
image::nsxt-visibility.png[NSX visibility]

* Check the load balancing. NSX-T Data Center offers Load Balancer and Ingress Controller capabilities, as shown in the following example:
+
.NSX UI displaying the load balancers
image::nsxt-loadbalancing.png[NSX loadbalancing]
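+
You can also confirm from the command line that NCP created the expected load balancer virtual servers (a sketch that again assumes the NSX-T 2.4 Manager REST API and the example credentials from the inventory file):
+
----
$ curl -sk -u 'nsx_admin:nsx_api_password_example' \
    https://nsxmgr.example.com/api/v1/loadbalancer/virtual-servers \
  | jq -r '.results[].display_name'
----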

For additional configuration and options, refer to the link:https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/rn/NSX-Container-Plugin-Release-Notes.html[VMware NSX-T v2.4 OpenShift Plug-In] documentation.