The Brocade VCS HW VTEP solution uses two different IP interfaces for this connectivity: a VCS Virtual IP in the Management VRF for connecting to the NSX Controllers, and a Loopback- or VRRP-E-based VTEP in the Default VRF for talking to the ESXi VTEPs.
It is possible to set up special VRFs for both of these, if needed.
Loopbacks are simpler from the configuration standpoint and do not require a dynamic protocol, such as VRRP-E, to function.
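A loopback-based VTEP might look something like the following sketch. Note that this is illustrative rather than taken from a validated deployment: the interface number and address are assumptions, and the working premise here is that the same Loopback with the same IP is configured under both rbridges so that either node can source and terminate VXLAN traffic:

rbridge-id 101
 interface Loopback 1
  ip address 192.168.151.1/32
  no shutdown
!
rbridge-id 102
 interface Loopback 1
  ip address 192.168.151.1/32
  no shutdown
!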
interface Port-channel 20
 description Uplink with VLANs for Management (128) and HW VTEP (150)
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 128,150
 spanning-tree shutdown
 no shutdown
!
rbridge-id 101
 interface Ve 150
  ip proxy-arp
  ip address 192.168.150.101/24
  vrrp-extended-group 150
   virtual-mac 02e0.5200.00xx
   virtual-ip 192.168.150.1
   advertisement-interval 1
   enable
   no preempt-mode
   short-path-forwarding
!
rbridge-id 102
 interface Ve 150
  ip proxy-arp
  ip address 192.168.150.102/24
  vrrp-extended-group 150
   virtual-mac 02e0.5200.00xx
   virtual-ip 192.168.150.1
   advertisement-interval 1
   enable
   no preempt-mode
   short-path-forwarding
!
rbridge-id 101
 ip route 192.168.50.0/24 192.168.150.2
!
Once we’ve decided on the physical topology for our HW VTEP deployment, it’s time to ensure all the right network connectivity is in place.
HW VTEP requires connectivity to two separate, independent IP domains: Management and VTEP.
Both approaches, according to my contacts in the know, deliver the same functionality, so there are no inherent benefits or drawbacks to help us decide.
Let’s turn to how things work, then, and figure this one out.

To support management access, the nodes (rbridges) of the fabric maintain a VCS Virtual IP. It is typical for this IP to belong to the Management VRF and to be on the same subnet as the Management interfaces of your fabric nodes. The Hardware Switch Controller (HSC) component of the overlay-gateway subsystem on VCS uses the Virtual IP for Management and Control plane communications, namely for talking to the NSX-v Controllers.

With the above in mind, let’s check if our VCS fabric’s Virtual IP (1.100) can reach the Controllers (1.150–152). SSH into the VCS Virtual IP (which will connect us to the rbridge that is “Associated” with it), and run the following ping command:

VDX6740-101# ping 1.150 vrf mgmt-vrf src-addr 1.100
Type Control-c to abort
PING 1.150 (1.150) from 1.100: 56 data bytes
64 bytes from 1.150: icmp_seq=0 ttl=63 time=0.946 ms
64 bytes from 1.150: icmp_seq=1 ttl=63 time=0.523 ms
64 bytes from 1.150: icmp_seq=2 ttl=63 time=0.457 ms
64 bytes from 1.150: icmp_seq=3 ttl=63 time=0.564 ms
64 bytes from 1.150: icmp_seq=4 ttl=63 time=2.745 ms
--- 1.150 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.457/1.047/2.745/0.866 ms

Ok, we can reach one of the Controllers, which means that our IP connectivity is working.

If, on the other hand, we’re setting up a new VCS fabric just to do the HW VTEP, we will need to configure its non-Management IP connectivity (for example, in the default-vrf) in any case, and can then piggy-back the VTEP configuration on it.
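For reference, the VCS Virtual IP itself is set at the global configuration level. A minimal sketch follows; the address shown is an assumption, picked only to sit on the Management VLAN 128 subnet alongside the uplink example above:

VDX6740-101(config)# vcs virtual ip address 192.168.128.100/24

Once configured, one of the rbridges becomes “Associated” with this address, and that is the node you land on when you SSH to the Virtual IP.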