
The following figure shows the network components of the Red Hat OpenShift Container Platform and their logical architecture.

Virtual Link Trunking (VLT) is a layer-2 link aggregation protocol between end devices connected to two switches. VLT offers a redundant, load-balancing connection to the core network in a loop-free environment and eliminates the need for a spanning tree protocol.
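From a server's point of view, the VLT pair behaves like a single LACP partner, so the two uplink ports can be bundled into one 802.3ad bond. Below is a minimal sketch, assuming a Linux host with iproute2 available and hypothetical port names eno1 and eno2 (one wired to each VLT peer):

```python
import subprocess

# Hypothetical port names: one wired to each VLT peer switch.
SLAVES = ["eno1", "eno2"]

def run(cmd: str) -> None:
    """Run an iproute2 command, raising if it fails."""
    subprocess.run(cmd.split(), check=True)

# Create an 802.3ad (LACP) bond; the VLT pair presents itself as one partner.
run("ip link add bond0 type bond mode 802.3ad")
for slave in SLAVES:
    run(f"ip link set {slave} down")          # ports must be down to enslave
    run(f"ip link set {slave} master bond0")  # add the port to the bond
run("ip link set bond0 up")
```

Because the VLT peers share LACP state, either switch can be failed or serviced without taking the bond down.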

This reference architecture is designed to deliver maximum availability and enough network bandwidth that storage and compute performance are not limited by the available network bandwidth. A 25 GbE network is the preferred primary fabric for internode communication. Two S5248F-ON or S5232F-ON switches provide data-layer communication, while one S3048-ON switch is used for out-of-band (OOB) management. For the Dell EMC reference architecture, we provisioned each server with 4 x 25 GbE NIC ports that are cross-wired to the network switches: two ports are provided on the Network Daughter Card (NDC) and two more on a dual-port NIC. One port from each dual-port NIC goes to ToR switch A, and the other port from each NIC is wired to ToR switch B. The following figure shows the network architecture.
Figure: Dell FX2 network cabling with stacked ToR switches.
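To verify from the host side that the cross-wiring actually delivers two live paths, the Linux bonding status file can be checked. A short sketch, reusing the hypothetical bond0 name from above:

```python
# Report the link state of each slave in a Linux bond, so a miscabled or
# failed port to either ToR switch is visible from the host.
BOND_STATUS = "/proc/net/bonding/bond0"  # hypothetical bond name

slave, states = None, {}
with open(BOND_STATUS) as f:
    for line in f:
        line = line.strip()
        if line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and slave:
            states[slave] = line.split(":", 1)[1].strip()
            slave = None

for iface, status in states.items():
    print(f"{iface}: {status}")
if any(s != "up" for s in states.values()):
    raise SystemExit("at least one bonded port is down -- check cabling")
```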
I've done a similar thing to migrate access stacks from a collapsed core (dual N7K with vPCs) to a new Catalyst 9500-48Y4C distribution switch with a few seconds of interruption. The access switches (3750X) had 2x 10G uplinks to the N7K in a LACP portchannel. To do our migration, we implemented a separate L2 and L3 connection from the N7K to the C9500, did the physical moves first (retaining L2 connectivity), and then moved the SVIs from the N7K to the C9500 to finish the job. I don't see that this would be any different with Nexus at both ends.

First, unplug one uplink for the access switch at the N7K end and move it to the C9500. At that point, the physical link will go up/up, but the interface will be suspended on the access switch as the portchannel will be inconsistent. The access switch still has connectivity with half the bandwidth at this point. Then, unplug the second interface on the other N7K. At that point, the access switch's portchannel will become consistent and start forwarding. Then plug the second uplink in, and your second interface should become bundled and you'll be back on full bandwidth again. We were pinging an in-band SVI on the access switch and typically dropped 2-3 pings.
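The dropped-ping test is easy to reproduce. A minimal sketch, assuming a Linux host and a hypothetical SVI address of 192.0.2.1:

```python
import subprocess
import time

SVI = "192.0.2.1"  # hypothetical in-band SVI address on the access switch

sent = dropped = 0
try:
    while True:
        sent += 1
        # One ping with a 1-second timeout; a non-zero exit code means a drop.
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", SVI],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode != 0:
            dropped += 1
            print(f"drop {dropped} at {time.strftime('%H:%M:%S')}")
        time.sleep(1)
except KeyboardInterrupt:
    print(f"\n{dropped}/{sent} pings dropped during the migration window")
```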
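The portchannel state transitions described above (suspended, then consistent and bundled) can also be watched programmatically instead of from a console session. A sketch using the netmiko library against a hypothetical Cisco IOS access switch:

```python
from netmiko import ConnectHandler

# Hypothetical access-switch details; substitute real values.
ACCESS_SWITCH = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "admin",
    "password": "secret",
}

with ConnectHandler(**ACCESS_SWITCH) as conn:
    output = conn.send_command("show etherchannel summary")

print(output)
# In IOS etherchannel output, "(s)" marks a suspended member port and
# "(P)" a member that is bundled in the port-channel.
if "(s)" in output:
    print("a member link is suspended -- portchannel partners are inconsistent")
elif "(P)" in output:
    print("members bundled -- portchannel is consistent and forwarding")
```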
