
20141225

Behind The Firewall: Connecting Service Despite Restrictions

Here's the problem:

You have spent a lot of money on some internet service, and being the good administrator you are, you'd like to share this service among your very small collection of machines.  You also want to put a firewall in the way of unnecessary and/or unwanted network traffic, because after all, the more hoops people have to jump through to hit your systems, the better.

Now you've got your firewall connected, and you can ping the world and even browse to places from the firewall.  But when you connect a system behind the firewall, every request out seems to die.  You've checked that the firewall is properly configured, and you even tested it back at the office.  You can still reach the world from the firewall, and yet from your client machine you can't go anywhere.

After a Wireshark capture of the traffic, you see your client machine's traffic going out, along with some interesting ICMP messages that say "Time to live exceeded in transit" or "TTL expired in transit."  What's more, you notice that your firewall is sending these messages back out to the servers you're trying to talk to!  It is as though no matter what comes in, its TTL has already expired.
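If you don't have Wireshark handy, a plain tcpdump filter on the firewall will show the same symptom.  This is just a diagnostic sketch - eth1 as the LAN-side interface is an assumption; adjust to your setup:

```shell
# Watch for ICMP "time exceeded" (type 11) messages leaving the LAN
# interface.  -n skips name resolution so the trace stays readable.
tcpdump -ni eth1 'icmp[icmptype] == icmp-timxceed'
```

If these fire for every outbound connection attempt, something upstream is handing you packets that are already out of hops.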

And that is exactly what is happening.  Let's create this situation on purpose.  If we have a NATing firewall that is to be the only NAT device in the network, we don't want any other NAT devices to be able to deliver traffic to their clients.  It's endpoints or nothing at all.  In iptables, the following rule should accomplish this:

iptables -t mangle -A PREROUTING -i eth0 -j TTL --ttl-set 1

(Refer to this page for more information: http://www.linuxtopia.org/Linux_Firewall_iptables/x4799.html)

The above command mangles the TTL so that each incoming packet has exactly one hop of life left: the firewall's direct clients will receive it, but a second NAT router behind the firewall will decrement the TTL to zero and drop it before its own clients ever see it.  I haven't tested that - if it doesn't work, try 2 instead of 1.  That rule is really just the inverse of the fix for the problem it causes: if you're experiencing an issue with the TTL expiring too soon, just rewrite it!

iptables -t mangle -A PREROUTING -i eth0 -j TTL --ttl-set 255

Now we can do another 255 hops before we run out of TTL.  As this is done in the PREROUTING chain, the packet is fixed before any check is made as to whether it should be dropped for TTL expiration.  A similar fix can be done with pfSense (search Google for the appropriate terms, and tread carefully when mucking around with the filter code - note that there IS NO GUI OPTION FOR CHANGING THE TTL IN PFSENSE!!).  An alternative to the above is to increment the TTL by 1 or more.
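For that increment variant, the same iptables TTL target takes a --ttl-inc option.  An untested sketch, again assuming eth0 is the interface facing the TTL-clamping upstream device:

```shell
# Bump the TTL of every packet arriving on eth0 by 1, compensating for
# exactly one extra NAT hop instead of clamping the TTL to a fixed value.
iptables -t mangle -A PREROUTING -i eth0 -j TTL --ttl-inc 1
```

Incrementing rather than setting keeps the rest of the TTL information intact, so traceroute through the box stays mostly honest.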

If you have a pfSense VM running on a Linux hypervisor, you can make the fix right in Linux (on the bridge adapter, of course).  iptables really saves the day here.  The fix in pfSense is not terrible, but it's not convenient either, and it is not officially supported by their community.

Now that I think about it, this would be a great way to stop people from using unauthorized wireless access points around the office....

20120517

Tidbits - Live Migration, Bonding, Bridging, DNS


Live Migration - NEVER TURN THEM OFF!!

Today I succeeded in performing a live migration of a VM from one host to another.  From the source host:
virsh migrate --live --verbose --domain myguest --desturi qemu+ssh://newhost/system
It took all of about 8 seconds to complete the migration.  No downtime.  It is said this is done in such a way that connections are not even interrupted.  I would like to test that for myself. :-)
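If you want to watch what's happening during the copy, virsh can report job progress from the source host while the migration runs.  A rough sketch, using the same myguest/newhost names as above:

```shell
# Kick off the live migration in the background...
virsh migrate --live --verbose --domain myguest \
  --desturi qemu+ssh://newhost/system &

# ...then poll the migration job statistics (data processed, data
# remaining, etc.) once a second until the job completes.
watch -n1 virsh domjobinfo myguest
```

Pinging the guest from a third machine during the migration is a crude but satisfying way to check the "no interruption" claim.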

Bonding, Bridging, and DNS

Ubuntu 12.04 now requires that DNS resolver directives reside in the /etc/network/interfaces file, or else be placed somewhere I haven't found out about yet.  Here are the key directives, which should go with the adapter definition (as in, under iface eth0 inet static):
dns-search my.local.domain my.other.local.domain
dns-nameservers 192.168.4.4 192.168.4.5
From 10.04 up, bonding and bridging seem to be a lot easier.  Unfortunately, the documentation for these features SUCKS (meaning it's almost non-existent, and forget about configuration examples, too).  Here I configure a two-NIC bond using mode 6 (balance-alb - no switch support required).  I also stuff the bond into a bridge (br0), since this host is also serving VMs:
auto eth0 eth1 br0 bond0 
iface eth0 inet manual
  bond-master bond0 
iface eth1 inet manual
  bond-master bond0 
iface bond0 inet manual
  bond-miimon 100
  bond-slaves none
  bond-mode   6 
iface br0 inet static
  bridge-ports bond0
  address 10.17.0.124
  netmask 255.240.0.0
  gateway 10.16.0.1
  bridge-stp on
  bridge-fd 0
  bridge-maxwait 0
Some things to note about the above:
  • bond-slaves none was the recommendation of the forums and docs - the slaves are defined by specifying their master for each interface.  This seems to work rather well.
  • The per-interface bond-mode option suggests it's now even easier to set up different kinds of bonds on the same machine.  Previously you had to do this by aliasing the bonding driver multiple times with different options.  Not terrible, just a little more clunky.
  • The bridge-* options used to be bridge_*.  Note the dash is used instead of the underscore.  Moreover, the forward-delay (fd) and maxwait are set to zero here.  Trying to set them manually or via a method other than this seems to fail terribly - they'll always go back to their defaults!  This is the first time I've been able to get my bridge to not have the 15 second forwarding delay on new interfaces, though I am anxious to actually validate it beyond any doubt.  The configuration tells me that's the way it is, so hopefully it tells the truth.
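Once the interfaces are up, the kernel will tell you whether the bond and bridge actually came together as configured.  A few diagnostic sanity checks, using the interface names from the config above:

```shell
# Bonding driver status: mode, MII link state, and which NICs got enslaved.
cat /proc/net/bonding/bond0

# Bridge membership: bond0 should be listed as a port of br0.
brctl show br0

# The STP parameters actually in effect, including the forward delay
# that bridge-fd 0 is supposed to zero out.
brctl showstp br0
```

If the forward delay shown here is 0, the 15-second wait on new interfaces really is gone.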
I tied the VM to the bridge in the NIC configuration of the VM definition (libvirt-style), with an interface element along these lines:
   <interface type='bridge'>
     <source bridge='br0'/>
     <model type='virtio'/>
   </interface>

Note: Remember that when creating new VMs, just omit the <uuid> and <mac> tags, and they will be auto-filled with appropriate or auto-generated values.