
Commit ad09df7

Merge pull request #14709 from mburke5678/vmware-nsxt
VMware NSX-T SDN Install and Config updates
2 parents 33f6f7a + 61f63f7

File tree: 9 files changed, +263 additions, 0 deletions

_topic_map.yml

Lines changed: 3 additions & 0 deletions
@@ -390,6 +390,9 @@ Topics:
 - Name: Configuring Nuage SDN
   File: configuring_nuagesdn
   Distros: openshift-origin,openshift-enterprise
+- Name: Configuring NSX-T SDN
+  File: configuring_nsxtsdn
+  Distros: openshift-origin,openshift-enterprise
 - Name: Configuring Kuryr SDN
   File: configuring_kuryrsdn
   Distros: openshift-origin,openshift-enterprise

architecture/networking/network_plugins.adoc

Lines changed: 6 additions & 0 deletions
@@ -40,6 +40,12 @@ ifdef::openshift-origin[]
 include::architecture/topics/contiv.adoc[]
 endif::[]
 
+ifdef::openshift-enterprise,openshift-origin[]
+[[nsx-sdn]]
+=== NSX-T SDN
+include::architecture/topics/nsxt.adoc[]
+endif::[]
+
 ifdef::openshift-enterprise,openshift-origin[]
 [[nuage-sdn]]
 === Nuage SDN

architecture/topics/nsxt.adoc

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
VMware NSX-T (TM) Data Center provides a policy-based overlay network that reproduces the complete set of Layer 2
through Layer 7 networking services (such as switching, routing, access control, firewalling, and QoS) in software to extend
native {product-title} networking capabilities.

The NSX-T components can be installed and configured as part of the Ansible installation procedure, which integrates an {product-title} SDN
into a data-center-wide NSX-T virtualized network connecting bare metal, virtual machines, and {product-title} pods.
See the xref:../../install_config/configuring_nsxtsdn.adoc#install-config-configuring-nsx-t-sdn[Installation] section for information on how to install and deploy {product-title} with VMware NSX-T.

The NSX-T Container Plug-in (NCP) integrates {product-title} into an NSX-T Manager, which is typically configured for the entire data center.

For information on the NSX-T Data Center architecture and administration, see the link:https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/installation/GUID-10B1A61D-4DF2-481E-A93E-C694726393F9.html[VMware NSX-T Data Center v2.4 documentation] and the link:https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/com.vmware.nsxt.ncp_openshift.doc/GUID-1D75FE92-051C-4E30-8903-AF832E854AA7.html[NSX-T NCP configuration] guides.
install_config/configuring_nsxtsdn.adoc

Lines changed: 243 additions & 0 deletions
@@ -0,0 +1,243 @@
[[install-config-configuring-nsx-t-sdn]]
[%hardbreaks]
= Configuring NSX-T SDN
{product-author}
{product-version}
:data-uri:
:icons:
:experimental:
:toc: macro
:toc-title:

toc::[]

[[nsx-t-sdn-and-openshift]]
== NSX-T SDN and {product-title}
VMware NSX-T Data Center (TM) provides advanced software-defined networking (SDN), security, and visibility
for container environments, which simplifies IT operations and extends native {product-title} networking capabilities.

NSX-T Data Center supports virtual machine, bare metal, and container workloads across multiple clusters. This allows
organizations to have complete visibility using a single SDN across the entire environment.

For more information on how NSX-T integrates with {product-title}, see xref:../architecture/networking/network_plugins.adoc#nsx-sdn[NSX-T SDN in Available SDN plug-ins].
[[nsx-t-sdn-operations-workflow]]
== Example Topology

One typical use case is to have a Tier-0 (T0) router that connects the physical system with the virtual environment and a Tier-1 (T1) router that acts as a default gateway for the {product-title} VMs.

Each VM has two vNICs: One vNIC connects to the Management Logical Switch for accessing the VMs. The other vNIC connects to a Dump Logical Switch and is used by `nsx-node-agent` to uplink the Pod networking. For further details, refer to link:https://docs.VMware.com/en/VMware-NSX-T-Data-Center/2.4/nsxt_24_ncp_openshift.pdf[NSX Container Plug-in for OpenShift].

The load balancer used for configuring {product-title} Routes, all of the project T1 routers, and the Logical Switches are created automatically during the {product-title} installation.

In this topology, the default {product-title} HAProxy Router is used for all infrastructure components, such as Grafana, Prometheus, Console, Service Catalog, and others.
Ensure that the DNS records for the infrastructure components point to the infrastructure node IP addresses, because the HAProxy Router uses the host network namespace.
This works for infrastructure routes, but to avoid exposing the infrastructure nodes' management IPs to the outside world, deploy application-specific routes to the NSX-T load balancer.

This example topology assumes you are using three {product-title} master virtual machines and four {product-title} worker virtual machines (two for infrastructure and two for compute).

[[nsx-t-sdn-installation]]
== Installing VMware NSX-T

Prerequisites:

* ESXi host requirements:
** ESXi servers that host {product-title} node VMs must be NSX-T Transport Nodes.
+
.NSX UI displaying the Transport Nodes for a typical high availability environment
+
image::nsxt-transportnodes.png[NSX Transport Nodes]

* DNS requirements:
** You must add a new wildcard entry to your DNS server that points to the infrastructure nodes. This allows load balancing by NSX-T or another third-party load balancer. In the `hosts` file below, the entry is defined by the `openshift_master_default_subdomain` variable.
** You must update your DNS server with the `openshift_master_cluster_hostname` and `openshift_master_cluster_public_hostname` variables.
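+
For example, with the `demo.example.com` subdomain and the host names and IP addresses used in the example Ansible hosts file later in this topic, the records could look like the following. This is an illustrative sketch only; substitute your own subdomain, host names, and addresses:
+
----
*.demo.example.com.     IN  A  192.168.220.5
master01.example.com.   IN  A  192.168.220.2
----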

* Virtual Machine requirements:
** The {product-title} node VMs must have two vNICs:
** A Management vNIC must be connected to the Logical Switch that is uplinked to the management T1 router.
** The second vNIC on all VMs must be tagged in NSX-T so that the NSX Container Plug-in (NCP) knows which port to use as a parent VIF for all Pods running on a particular {product-title} node. The tags must be the following:
+
----
{'ncp/node_name': 'node_name'}
{'ncp/cluster': 'cluster_name'}
----
+
The following image shows the tags in the NSX UI for all nodes. For a large-scale cluster, you can automate the tagging by using the API or Ansible.
+
.NSX UI displaying node tags
+
image::nsxt-tags.png[NSX VM tags]
+
The order of the tags in the NSX UI is the opposite of the order used in the API.
The node name must be exactly what the kubelet expects, and the cluster name must be the same as the `nsx_openshift_cluster_name` value in the Ansible hosts file, as shown below. Ensure that the proper tags are applied to the second vNIC on every node.
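+
For example, with the cluster name and node host names from the example Ansible hosts file later in this topic, and assuming the kubelet registers each node under its FQDN, the tags on the second vNIC of `node01.example.com` would be:
+
----
{'ncp/node_name': 'node01.example.com'}
{'ncp/cluster': 'cluster01'}
----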
* NSX-T requirements:
+
The following prerequisites must be met in NSX-T:
+
** A Tier-0 Router.
** An Overlay Transport Zone.
** An IP Block for Pod networking.
** Optionally, an IP Block for routed (NoNAT) Pod networking.
** An IP Pool for SNAT. By default, the subnet assigned to each Project from the Pod networking IP Block is routable only inside NSX-T. NCP uses this IP Pool to provide connectivity to the outside.
** Optionally, the Top and Bottom firewall sections in a dFW (Distributed Firewall). NCP places the Kubernetes Network Policy rules between those two sections.
** The Open vSwitch and CNI plug-in RPMs must be hosted on an HTTP server reachable from the {product-title} node VMs (`http://websrv.example.com` in this example). Those files are included in the NCP Tar file, which you can download from VMware at link:https://my.VMware.com/web/vmware/details?downloadGroup=NSX-T-PKS-240&productId=673[Download NSX Container Plug-in 2.4.0].

* {product-title} requirements:
+
** Run the following command to install the required software packages, if any, for {product-title}:
+
----
$ ansible-playbook -i hosts openshift-ansible/playbooks/prerequisites.yml
----
+
** Ensure that the NCP container image is downloaded locally on all nodes.
+
** After the `prerequisites.yml` playbook has successfully executed, run the following command on all nodes, replacing `xxx` with the NCP build version:
+
----
$ docker load -i nsx-ncp-rhel-xxx.tar
----
+
For example:
+
----
$ docker load -i nsx-ncp-rhel-2.4.0.12511604.tar
----
+
** Get the image name and retag it:
+
----
$ docker images
$ docker image tag registry.local/xxxxx/nsx-ncp-rhel nsx-ncp <1>
----
<1> Replace `xxxxx` with the NCP build version. For example:
+
----
docker image tag registry.local/2.4.0.12511604/nsx-ncp-rhel nsx-ncp
----
+
** In the {product-title} Ansible hosts file, specify the following parameters to set up NSX-T as the network plug-in:
+
----
[OSEv3:children]
masters
nodes
etcd
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={"admin" : "$apr1$H0QeP6oX$HHdscz5gqMdtTcT5eoCJ20"}
openshift_master_default_subdomain=demo.example.com
openshift_use_nsx=true
os_sdn_network_plugin_name=cni
openshift_use_openshift_sdn=false
openshift_node_sdn_mtu=1500
openshift_master_cluster_method=native
openshift_master_cluster_hostname=master01.example.com
openshift_master_cluster_public_hostname=master01.example.com
openshift_hosted_manage_registry=true
openshift_hosted_manage_router=true
openshift_enable_service_catalog=true
openshift_cluster_monitoring_operator_install=true
openshift_web_console_install=true
openshift_console_install=true
# NSX-T specific configuration
#nsx_use_loadbalancer=false
nsx_openshift_cluster_name='cluster01'
nsx_api_managers='nsxmgr.example.com'
nsx_api_user='nsx_admin'
nsx_api_password='nsx_api_password_example'
nsx_tier0_router='LR-Tier-0'
nsx_overlay_transport_zone='TZ-Overlay'
nsx_container_ip_block='pod-networking'
nsx_no_snat_ip_block='pod-nonat'
nsx_external_ip_pool='pod-external'
nsx_top_fw_section='containers-top'
nsx_bottom_fw_section='containers-bottom'
nsx_ovs_uplink_port='ens224'
nsx_cni_url='http://websrv.example.com/nsx-cni-buildversion.x86_64.rpm'
nsx_ovs_url='http://websrv.example.com/openvswitch-buildversion.rhel75-1.x86_64.rpm'
nsx_kmod_ovs_url='http://websrv.example.com/kmod-openvswitch-buildversion.rhel75-1.el7.x86_64.rpm'
nsx_insecure_ssl=true
# vSphere Cloud Provider
#openshift_cloudprovider_kind=vsphere
#openshift_cloudprovider_vsphere_username='[email protected]'
#openshift_cloudprovider_vsphere_password='viadmin_password'
#openshift_cloudprovider_vsphere_host='vcsa.example.com'
#openshift_cloudprovider_vsphere_datacenter='Example-Datacenter'
#openshift_cloudprovider_vsphere_cluster='example-Cluster'
#openshift_cloudprovider_vsphere_resource_pool='ocp'
#openshift_cloudprovider_vsphere_datastore='example-Datastore-name'
#openshift_cloudprovider_vsphere_folder='ocp'
[masters]
master01.example.com
master02.example.com
master03.example.com
[etcd]
master01.example.com
master02.example.com
master03.example.com
[nodes]
master01.example.com ansible_ssh_host=192.168.220.2 openshift_node_group_name='node-config-master' openshift_ip=192.168.220.2
master02.example.com ansible_ssh_host=192.168.220.3 openshift_node_group_name='node-config-master' openshift_ip=192.168.220.3
master03.example.com ansible_ssh_host=192.168.220.4 openshift_node_group_name='node-config-master' openshift_ip=192.168.220.4
node01.example.com ansible_ssh_host=192.168.220.5 openshift_node_group_name='node-config-infra' openshift_ip=192.168.220.5
#node02.example.com ansible_ssh_host=192.168.220.6 openshift_node_group_name='node-config-infra' openshift_ip=192.168.220.6
node03.example.com ansible_ssh_host=192.168.220.7 openshift_node_group_name='node-config-compute' openshift_ip=192.168.220.7
node04.example.com ansible_ssh_host=192.168.220.8 openshift_node_group_name='node-config-compute' openshift_ip=192.168.220.8
----
+
For information on the {product-title} installation parameters, see xref:../install/configuring_inventory_file.adoc#install-config-configuring-inventory-file[Configuring Your Inventory File].
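
Optionally, before you run the installation playbooks, you can confirm that the inventory file parses cleanly. This is a generic Ansible check, not an NSX-T-specific step:

----
$ ansible-inventory -i hosts --list
----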

.Procedure

After meeting all of the prerequisites, you can deploy NSX-T Data Center and {product-title}.

. Deploy the {product-title} cluster:
+
----
$ ansible-playbook -i hosts openshift-ansible/playbooks/deploy_cluster.yml
----
+
For more information on the {product-title} installation, see xref:../install/running_install.adoc#install-running-installation-playbooks[Installing OpenShift Container Platform].

. After the installation is complete, validate that the NCP and `nsx-node-agent` Pods are running:
+
----
$ oc get pods -o wide -n nsx-system
NAME                   READY   STATUS    RESTARTS   AGE   IP              NODE                   NOMINATED NODE
nsx-ncp-5sggt          1/1     Running   0          1h    192.168.220.8   node04.example.com     <none>
nsx-node-agent-b8nkm   2/2     Running   0          1h    192.168.220.5   node01.example.com     <none>
nsx-node-agent-cldks   2/2     Running   0          2h    192.168.220.8   node04.example.com     <none>
nsx-node-agent-m2p5l   2/2     Running   28         3h    192.168.220.4   master03.example.com   <none>
nsx-node-agent-pcfd5   2/2     Running   0          1h    192.168.220.7   node03.example.com     <none>
nsx-node-agent-ptwnq   2/2     Running   26         3h    192.168.220.2   master01.example.com   <none>
nsx-node-agent-xgh5q   2/2     Running   26         3h    192.168.220.3   master02.example.com   <none>
----

== Check NSX-T after {product-title} deployment

After installing {product-title} and verifying that the NCP and `nsx-node-agent-*` Pods are running, check the following:

* Check the routing. Ensure that the Tier-1 routers were created during the installation and are linked to the Tier-0 router:
+
.NSX UI displaying the T1 routers
image::nsxt-routing.png[NSX routing]

* Observe the network traceflow and visibility. For example, check the connection between `console` and `grafana`. A sketch of how to look up the Pod IP addresses to use as the traceflow source and destination follows this list.
+
For more information on securing and optimizing communications between Pods, Projects, virtual machines, and external services, see the following example:
+
.NSX UI displaying network traceflow
image::nsxt-visibility.png[NSX visibility]

* Check the load balancing. NSX-T Data Center offers Load Balancer and Ingress Controller capabilities, as shown in the following example:
+
.NSX UI displaying the load balancers
image::nsxt-loadbalancing.png[NSX loadbalancing]
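
A minimal sketch of looking up the Pod IP addresses to use as the traceflow source and destination, assuming the console and Grafana Pods run in their default `openshift-console` and `openshift-monitoring` namespaces:

----
$ oc get pods -o wide -n openshift-console
$ oc get pods -o wide -n openshift-monitoring
----

Use the reported IP addresses and node names when you select the traceflow source and destination in the NSX UI.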
For additional configuration and options, refer to the link:https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/rn/NSX-Container-Plugin-Release-Notes.html[VMware NSX-T v2.4 OpenShift Plug-In] documentation.
5 image files changed (binary, not shown): 127 KB, 164 KB, 133 KB, 97.9 KB, 108 KB
