Xen networking is powerful enough to allow for extreme customization. Although the default networking configuration is usually more than enough for simple scenarios, it can fall short when trying to support multiple guests standing on different VLANs.
In this short article, I describe the steps needed to configure Xen to attach itself to multiple VLANs using a one-bridge-per-VLAN network interface mapping, and then to attach each Xen domainU to as many VLANs as needed.
In the sample scenario, we will use a Cisco Catalyst 3560G-24TS switch carrying traffic from five different VLANs:
- VLAN2 is the administrative VLAN used to administer all the networking gear and boxes.
- VLAN10 carries Internet traffic coming from the first ISP.
- VLAN20 carries Internet traffic coming from the second ISP.
- VLAN100 carries the access network traffic.
- VLAN200 carries the core network traffic.
The final Xen configuration will provide five bridging network interfaces, one per VLAN. Each Xen domainU can freely attach to any of these bridging network interfaces in order to gain access to the traffic being carried by each VLAN.
Each bridging interface is named after the following convention:
- xenbr2 is the bridging interface standing on VLAN2.
- xenbr10 is the bridging interface standing on VLAN10.
- xenbr20 is the bridging interface standing on VLAN20.
- xenbr100 is the bridging interface standing on VLAN100.
- xenbr200 is the bridging interface standing on VLAN200.
Also, Xen creates and manages several virtual network interfaces, named in the form vifX.Y, where X is the Xen domain numeric ID and Y is a sequential interface index. Thus, starting up a Xen domainU with the following virtual network interface definition:
vif = [ 'mac=00:16:3e:00:00:44, bridge=xenbr10',
'mac=00:16:e3:00:00:45, bridge=xenbr20' ]
will cause the Xen domain to get assigned, let's say, a domain ID of 2, along with two virtual network interfaces: vif2.0, attached to xenbr10, and vif2.1, attached to xenbr20.
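The naming rule can be illustrated with a tiny sketch using the domain ID and the two bridges from the example above (vif-map.txt is just a scratch file chosen here for the output):

```shell
#!/bin/sh
# Sketch of Xen's vif<domid>.<index> naming: a domainU that gets domain ID 2
# and defines two vifs (bridged to xenbr10 and xenbr20) ends up with
# vif2.0 and vif2.1 visible in dom0.
domid=2
index=0
: > vif-map.txt
for bridge in xenbr10 xenbr20; do
  echo "vif$domid.$index -> $bridge" >> vif-map.txt
  index=$((index + 1))
done
cat vif-map.txt
```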
Setting up the bridging interfaces:
This can be done manually, by invoking brctl addbr <brname> to create a new bridging interface.
For example, the following commands will create five bridging interfaces, one for each supported VLAN:
brctl addbr xenbr2
brctl addbr xenbr10
brctl addbr xenbr20
brctl addbr xenbr100
brctl addbr xenbr200
or it can be automated to run during system startup, by creating a file named /etc/sysconfig/network-scripts/ifcfg-<brname>, where <brname> is the name assigned to the bridging interface. For example, /etc/sysconfig/network-scripts/ifcfg-xenbr2 is the configuration file for the bridging interface standing on VLAN2.
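As a sketch, assuming RHEL/CentOS-style initscripts with bridge support, ifcfg-xenbr2 might look like:

```
DEVICE=xenbr2
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
```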
Setting up the VLAN interfaces and adding them to the existing bridging interfaces:
This can be done manually, by invoking vconfig add <ifname> <vlan> to configure VLAN number <vlan> using 802.1q tagging on interface <ifname>. This will create a virtual interface named <ifname>.<vlan>:
- Any traffic sent to this interface will get tagged for VLAN <vlan> before leaving through <ifname>.
- Any traffic received on interface <ifname> carrying an 802.1q VLAN tag matching <vlan> will be untagged and received by this interface.
For example:
vconfig add eth0 2
vconfig add eth0 10
vconfig add eth0 20
vconfig add eth0 100
vconfig add eth0 200
This will add five new VLAN interfaces, one for every supported VLAN.
Once the VLAN interfaces are ready, we add them to their corresponding bridging interfaces by using brctl addif <brname> <ifname>.<vlan>:
brctl addif xenbr2 eth0.2
brctl addif xenbr10 eth0.10
brctl addif xenbr20 eth0.20
brctl addif xenbr100 eth0.100
brctl addif xenbr200 eth0.200
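The manual commands above (plus bringing each VLAN interface up) can be generated by a small loop. Here is a sketch that writes them to a scratch file (vlan-setup.sh, a name chosen here) so they can be reviewed before being run as root on dom0:

```shell
#!/bin/sh
# Generate the vconfig/brctl commands for every supported VLAN instead of
# typing them one by one. The resulting vlan-setup.sh is meant to be
# reviewed and then executed as root on the Xen dom0.
: > vlan-setup.sh
for vlan in 2 10 20 100 200; do
  echo "vconfig add eth0 $vlan" >> vlan-setup.sh
  echo "ifconfig eth0.$vlan up" >> vlan-setup.sh
  echo "brctl addif xenbr$vlan eth0.$vlan" >> vlan-setup.sh
done
```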
The process of adding a new VLAN interface and then attaching it to an existing bridging interface can also be automated using a single configuration file named /etc/sysconfig/network-scripts/ifcfg-<ifname>.<vlan>, like ifcfg-eth0.2.
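As a sketch, assuming RHEL/CentOS-style initscripts, /etc/sysconfig/network-scripts/ifcfg-eth0.2 might look like this; VLAN=yes tells the initscripts to create the 802.1q interface, and BRIDGE= enslaves it to the bridge:

```
DEVICE=eth0.2
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
BRIDGE=xenbr2
```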
Keeping Xen from reconfiguring the network:
Since we have already configured the network manually, we don't want Xen to mess with the configuration. To keep Xen from reconfiguring the network, simply make sure the network-script line appears commented out in /etc/xen/xend-config.sxp.
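A sketch of the relevant lines in /etc/xen/xend-config.sxp under this setup, assuming standard xend defaults: the default bridge script is disabled, while vif-bridge is kept so each domain vif still gets attached to the bridge named in its vif= line:

```
# (network-script network-bridge)
(vif-script vif-bridge)
```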
I have been experiencing very strange behavior on Xen domainU guests while using this network configuration: UDP traffic gets stuck in the network stack and does not flow through unless I load the ip_conntrack.ko kernel module. Without ip_conntrack.ko loaded, even with an unconfigured, empty firewall, ICMP and TCP traffic flows to and from the guest network stack, but UDP traffic, like DNS queries, gets stuck and never even touches the physical network interface.
This is really strange, isn’t it?