With 802.1q and 802.1p (see the introduction for background), we have a means to color traffic on Ethernet. Hopefully, there's a network element (such as a switch) that can use this information to prioritize our traffic appropriately. So what can go wrong? In an ideal world, you'd think the worst case is that intermediate network elements simply don't do prioritization and ignore the tag. Sadly, it's more complex than that. Gabe wrote about this before, but I want to delve deeper into the topic.

Your NIC is responsible for adding the 802.1q tag to outgoing packets. When Windows gives a packet to the driver, it tells the driver which 802.1p value to use for that packet. A few options:

  1. Your NIC doesn't know about 802.1p. If so, your packet goes out without an 802.1q tag.
  2. Your NIC knows about 802.1p but it's not configured to add the tags to outgoing packets. Before Vista, this was the default for most NICs that support 802.1q. To enable 802.1q tagging, you'll need to check the adapter properties. For example, to do so on XP:
    1. Open the control panel then open Network Connections.
    2. Right click on your adapter then select Properties.
    3. Click on Configure. This is one of many ways to reach the NIC device's properties.
    4. Look under the Advanced tab. You're looking for something that says QoS, tagging, or priority. The exact name of this property varies by NIC vendor. For example, for my Intel NIC, it's called 802.1p QoS Packet Tagging.
    5. Some NICs will reset themselves if you change this property, so you may lose whatever network connections you have going. Enable it at your own risk.
  3. Your NIC knows about 802.1q and is enabled by default to tag (if appropriate). You'll see more and more NICs and drivers with this behavior in Vista.
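
The mechanics of tagging are simple enough to sketch. As a hypothetical illustration (real tagging happens in the NIC hardware or the miniport driver, not in application code like this), the 4-byte 802.1q tag -- a 0x8100 TPID followed by a TCI whose top 3 bits carry the 802.1p priority -- is inserted right after the two MAC addresses:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Insert an 802.1q tag into an Ethernet frame buffer.
 * 'frame' holds dst MAC (6 bytes) + src MAC (6 bytes) + EtherType + payload;
 * the tag goes in at offset 12. 'priority' is the 802.1p value (0-7) and
 * 'vlan' is the VLAN ID (0-4095). The caller must have 4 spare bytes in the
 * buffer. Hypothetical helper for illustration only. */
static size_t insert_8021q_tag(uint8_t *frame, size_t len,
                               uint8_t priority, uint16_t vlan)
{
    /* TCI: 3 bits of 802.1p priority, 1 CFI bit (0), 12 bits of VLAN ID. */
    uint16_t tci = (uint16_t)(((priority & 0x7u) << 13) | (vlan & 0x0FFFu));

    /* Make room for the 4 tag bytes after the two MAC addresses. */
    memmove(frame + 16, frame + 12, len - 12);
    frame[12] = 0x81; frame[13] = 0x00;      /* TPID 0x8100 */
    frame[14] = (uint8_t)(tci >> 8);
    frame[15] = (uint8_t)(tci & 0xFF);
    return len + 4;                          /* frame grew by 4 bytes */
}
```

Note that the tag grows the frame by 4 bytes without touching the payload -- a detail that matters later.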

But when is it appropriate to tag? The NDIS documentation for device writers is very clear: if the user priority is 0, a tag should not be added. 0 is the default priority and signifies best effort; traffic tagged with this value is treated the same way as untagged traffic. Unfortunately, in the past, some drivers misbehaved and would always tag.
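
The documented rule boils down to a one-line predicate. This is just a sketch of that behavior, not an actual NDIS API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the NDIS rule described above: a driver should insert an
 * 802.1q tag only when tagging is enabled on the adapter AND the
 * per-packet 802.1p user priority is nonzero. Priority 0 means best
 * effort and must go out untagged. Hypothetical helper, not a real
 * NDIS function. */
static bool should_tag(uint8_t user_priority, bool tagging_enabled)
{
    if (!tagging_enabled)       /* adapter property turned off */
        return false;
    return user_priority != 0;  /* 0 = best effort: send untagged */
}
```

The misbehaving drivers mentioned above effectively dropped the `user_priority != 0` check and tagged everything.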

What's wrong with always tagging? It assumes that 802.1q is supported by every destination and network element on the source's subnet. That's an invalid assumption. Many devices already on the market -- for example, NICs and Internet Gateway Devices (IGDs) -- don't know about 802.1q. Typically, this manifests itself in one of a few ways:

  1. The device crashes.
  2. The device discards any packets with an 802.1q tag. It doesn't ignore the tag or remove it, it drops the packet! Imagine if that was the switch at your first hop. Suddenly, the host has lost all connectivity.
  3. The device does basic packet validation and discards 802.1q tagged packets based on rules for packets without 802.1q tags. For example:
    1. Remember, 802.1q-tagged frames are 4 bytes longer than untagged frames. Those 4 bytes belong to the tag, not the IP payload.
    2. Therefore, the maximum frame length is 4 bytes longer.
    3. As a result, the path MTU has changed. This surprises many Audio/Video (AV) applications that assume that they can send IPv4 packets with 1472 bytes of data.
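
The arithmetic in the list above works out as follows, assuming standard Ethernet with no jumbo frames:

```c
/* Frame-size arithmetic behind the list above. The 1500-byte Ethernet
 * MTU minus the 20-byte (minimum) IPv4 header and 8-byte UDP header
 * leaves 1472 bytes of application data. The 4-byte 802.1q tag grows
 * the *frame*, not the IP payload: a full-size tagged frame is 1522
 * bytes on the wire, which a device validating against the untagged
 * maximum of 1518 bytes will reject. */
enum {
    ETH_MTU      = 1500, /* maximum Ethernet payload */
    IPV4_HDR     = 20,   /* minimum IPv4 header */
    UDP_HDR      = 8,    /* UDP header */
    ETH_OVERHEAD = 18,   /* 14-byte Ethernet header + 4-byte FCS */
    VLAN_TAG     = 4     /* 802.1q tag (TPID + TCI) */
};

static int max_udp_payload(void)    { return ETH_MTU - IPV4_HDR - UDP_HDR; }
static int max_untagged_frame(void) { return ETH_MTU + ETH_OVERHEAD; }
static int max_tagged_frame(void)   { return max_untagged_frame() + VLAN_TAG; }
```

So an AV application sending 1472-byte UDP payloads produces 1518-byte frames; once tagged, those become 1522-byte frames, which is exactly where the too-strict validation bites.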

And this weird behavior varies per destination!!!

At home, I use a widely available brand-name IGD. If I turn on the adapter property on my main computer's NIC (which shall remain nameless), I suddenly can't talk to one of my other computers, and I can't reach the internet anymore. While the 4-port switch in my IGD doesn't mind 802.1q tags, the routing/NAT function on the WAN port does. Luckily, I can still talk to my Xbox 360. What a mess, though.

Since there are good reasons to use 802.1p, as Gabe mentioned in The Necessity for End-to-End QoS Experiments, it's a mess we are trying to clean up in Vista.

Mathias