
Creating and configuring custom TCP/IP stacks
As described previously, starting with vSphere 6.0 there are multiple TCP/IP stacks.
As shown in Figure 2.23, the predefined VMkernel TCP/IP stacks are the following:
- Default TCP/IP stack: Provides networking support for management traffic between vCenter Server and the ESXi hosts, and usually for other system traffic such as vMotion, IP storage, fault tolerance, and so on.
- vMotion TCP/IP stack: Can be used for vMotion traffic; it is useful when you want better isolation for the vMotion traffic or when the vMotion adapters need a different default gateway. If you use this TCP/IP stack, the VMkernel adapters on the default TCP/IP stack are disabled for the vMotion service.
- Provisioning TCP/IP stack: Supports the traffic for VM cold migration, cloning, and snapshot migration, as well as long-distance vMotion. The provisioning TCP/IP stack can be used to isolate the traffic of cloning operations on a separate gateway. If you use this TCP/IP stack, the VMkernel adapters on the default TCP/IP stack are disabled for the provisioning traffic.
The provisioning traffic uses the Network File Copy (NFC) service, a file-type-aware FTP service used by ESXi for copying and moving data. Long-distance vMotion uses this service to copy data between data centers.
You can add custom TCP/IP stacks at the VMkernel level to handle the networking traffic of custom applications, but this operation is possible only from the CLI, with the following esxcli command on each ESXi host:
esxcli network ip netstack add -N=StackName
StackName is the name of the new TCP/IP stack.
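To confirm that the stack was created, you can list the TCP/IP stacks configured on the host. A short sketch (the stack name AppStack is an illustrative example):

```shell
# Create a custom TCP/IP stack on the host ("AppStack" is an example name)
esxcli network ip netstack add -N="AppStack"

# List all TCP/IP stacks on the host; the new stack should appear
# alongside the predefined defaultTcpipStack, vmotion, and vSphereProvisioning stacks
esxcli network ip netstack list

# Show the details of a single stack
esxcli network ip netstack get -N "AppStack"
```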
Once the custom TCP/IP stack is created on the host, you can assign VMkernel adapters to it.
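As a sketch of assigning a VMkernel adapter to a custom stack from the CLI (the interface name vmk2, stack name AppStack, port group name AppPortGroup, and IP addressing below are illustrative assumptions, not values from the text):

```shell
# Create a VMkernel adapter on the custom stack, attached to an existing
# standard switch port group (names are assumed for the example)
esxcli network ip interface add -i vmk2 -N "AppStack" -p "AppPortGroup"

# Assign a static IPv4 address to the new adapter (example addressing)
esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.50.10 -N 255.255.255.0

# Optionally set a default gateway for the custom stack, separate from
# the default TCP/IP stack's gateway (example gateway address)
esxcli network ip route ipv4 add -n default -g 192.168.50.1 -N "AppStack"
```

Note that a VMkernel adapter cannot be moved between TCP/IP stacks after creation; the stack must be chosen when the adapter is created.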
For more information, see the vSphere 6.5 Networking guide (https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.networking.doc/GUID-660423B1-3D35-4F85-ADE5-FE1D6BF015CF.html).