I've been poring over various swaths of documentation and forum posts all day, to little avail. Perhaps it's buried in the "how Linux bridging works" doc somewhere in the kernel tree, but I'm gonna ask here for kicks. I've noticed a strange trend, and ordinarily I wouldn't care, except I lose connectivity to my virtual machines for short periods. I have several hosts that all network by bridging the virtual ethernet adapters onto a bond of the physical adapters. For the sake of clarity: phy -> bond -> bridge, with mode-6 (balance-alb). So, I ping away at a guest, and some of the pings come back "TTL Exceeded". Not fun, especially since I also have trouble ssh'ing into the VM. I can get to its console with VNC, poke around, and even ping from the VM to, say, the firewall without any issues. After some reading, the notion of MAC conflicts between bridging and bonding came up.
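For reference, the layout I'm describing can be sketched with iproute2 roughly like this. This is a simplified sketch, not my actual config; the interface names (eth0, eth1, bond0, br0, tap0) are placeholders.

```shell
# phy -> bond -> bridge, mode-6 (balance-alb)
ip link add bond0 type bond mode balance-alb miimon 100

# Slaves must be down before they can be enslaved to the bond.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# The bond itself is then a port on the bridge.
ip link add br0 type bridge
ip link set bond0 master br0

ip link set eth0 up
ip link set eth1 up
ip link set bond0 up
ip link set br0 up

# The hypervisor attaches each VM's tap device to the same bridge:
# ip link set tap0 master br0
```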
I pulled up Wireshark and started examining the ARP responses for the host in question. I noticed that the MAC returned for the VM's IP varied among the bonded adapters - I half expected this, since mode-6 is supposed to load-balance automatically. However, when I started pinging FROM my VM TO my Wireshark machine, the MAC in my ARP cache suddenly changed to the VM's actual MAC. Once the pinging stopped, the entry eventually reverted to one of the physical adapters' MACs.
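If you want to watch the same flip-flop without firing up Wireshark, something like this works from the workstation side. Diagnostic sketch only; 192.168.1.50 stands in for the VM's IP and eth0 for the workstation's NIC.

```shell
# Watch the neighbor-cache entry for the VM's IP change MACs over time.
watch -n1 'ip neigh show 192.168.1.50'

# Force a fresh ARP resolution and see which MAC answers right now.
arping -c 3 -I eth0 192.168.1.50

# Capture just the ARP traffic, with link-layer (MAC) headers shown,
# roughly what I was looking at in Wireshark:
tcpdump -eni eth0 arp
```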
To sum up: pinging WS -> VM = bond MAC, whereas pinging VM -> WS = VM MAC. It's like the VM's pings become unintentional gratuitous ARPs.
So, what's the deal here? Is this "working right," and is my intermittent-connectivity problem somewhere else? Is it normal for either the bridge or the bond to reply to ARP requests with its own selectively chosen MAC address? And why bother doing that when you could just as easily let the VM publish its own response? Obviously the VM's MAC isn't getting nuked when the VM transmits, since it shows up in my ARP cache during that ping experiment. I get the feeling there's a setting that needs to be modified.
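My current hunch, offered tentatively: mode-6's receive load balancing works precisely by rewriting ARP replies so different peers resolve the same IP to different slave MACs, which matches the "bond answers with its own selective MAC" behavior above, and that mechanism reportedly interacts badly with a bridge sitting on top. A sketch of how I'd confirm the mode and try a bridge-friendlier one (bond0 is a placeholder name, and on some kernels the mode can only be changed with the bond down and de-slaved):

```shell
# Confirm what the bond thinks it's doing; mode-6 shows up as
# "adaptive load balancing (balance-alb)".
grep -i "bonding mode" /proc/net/bonding/bond0

# balance-alb's receive balancing is the part that hands out per-slave
# MACs in ARP replies. Trying a mode that doesn't do that, e.g. active-backup:
ip link set bond0 down
ip link set bond0 type bond mode active-backup
ip link set bond0 up
```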