With 802.1q and 802.1p (see the introduction for background), we have a means to color traffic on Ethernet. Hopefully, there's a network element (such as a switch) along the path that can use this information to prioritize our traffic appropriately. So what can go wrong? In an ideal world, you'd think the worst case is that intermediate network elements don't do prioritization and simply ignore the tag. Sadly, it's more complex than that. Gabe wrote about this before, but I want to delve deeper into the topic.
Your NIC is responsible for adding the 802.1q tag to the outgoing packet. When Windows gives a packet to the driver, it tells the driver which 802.1p value to use for that packet; the driver then has a few options for how it applies it.
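From the driver's side, that hand-off arrives as per-packet out-of-band info. Here's a minimal sketch of reading it in an NDIS 5.x miniport; NDIS_PER_PACKET_INFO_FROM_PACKET and Ieee8021QInfo are the real NDIS names, while the helper MpReadDot1qInfo is hypothetical:

```c
#include <ndis.h>

/* Hypothetical snippet from a miniport send path: read the 802.1q/802.1p
 * values that Windows attached to an outgoing packet as per-packet info. */
VOID MpReadDot1qInfo(PNDIS_PACKET Packet, PULONG UserPriority, PULONG VlanId)
{
    NDIS_PACKET_8021Q_INFO QInfo;

    /* The OS passes the requested tag values as out-of-band packet info. */
    QInfo.Value = NDIS_PER_PACKET_INFO_FROM_PACKET(Packet, Ieee8021QInfo);

    *UserPriority = QInfo.TagHeader.UserPriority;  /* 802.1p value, 0..7 */
    *VlanId       = QInfo.TagHeader.VlanId;        /* 0 means no VLAN    */
}
```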
But when is it appropriate to tag? The NDIS documentation for driver writers is very clear: if the user priority is 0, a tag should not be added. 0 is the default priority and signifies best effort; traffic tagged with this value is treated the same way as untagged traffic. Unfortunately, in the past, some drivers misbehaved and would always tag.
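In code, that rule plus the on-wire tag come out to something like the sketch below. The TPID value (0x8100) and the tag bit layout are from the 802.1q spec; the helper itself is a hypothetical illustration:

```c
#define TPID_8021Q 0x8100  /* EtherType value that marks an 802.1q tag */

/* Hypothetical helper: fill Tag[4] and return TRUE only when the frame
 * should actually carry a tag on the wire. */
BOOLEAN MpBuildDot1qTag(ULONG UserPriority, ULONG VlanId, UCHAR Tag[4])
{
    USHORT Tci;

    /* Priority 0 (best effort) with no VLAN: send the frame untagged. */
    if (UserPriority == 0 && VlanId == 0)
        return FALSE;

    /* Tag Control Info: priority (3 bits), CFI (1 bit, left 0 here),
     * VLAN ID (12 bits). */
    Tci = (USHORT)((UserPriority << 13) | VlanId);

    Tag[0] = (UCHAR)(TPID_8021Q >> 8);   /* 0x81 */
    Tag[1] = (UCHAR)(TPID_8021Q & 0xFF); /* 0x00 */
    Tag[2] = (UCHAR)(Tci >> 8);
    Tag[3] = (UCHAR)(Tci & 0xFF);
    return TRUE;
}
```

The misbehaving drivers effectively skipped that first check and tagged every frame.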
What's wrong with always tagging? It assumes that 802.1q is supported by every destination and network element on the source's subnet, and that's an invalid assumption. Many devices already on the market -- for example, NICs and Internet Gateway Devices (IGDs) -- don't know about 802.1q. Typically, this manifests itself in one of a few ways:
And this weird behavior varies per destination!!!
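To see why an 802.1q-unaware device gets confused, look at what the tag does to the frame layout: the tag's TPID (0x8100) lands exactly where the EtherType normally sits. This little parser (illustrative only, not from any real device) shows the difference:

```c
#include <stddef.h>
#include <stdint.h>

/* Untagged frame: dst MAC (6) | src MAC (6) | EtherType (2) | payload...
 * Tagged frame:   dst MAC (6) | src MAC (6) | 0x8100 (2) | TCI (2) |
 *                 EtherType (2) | payload...                           */

static uint16_t read_be16(const uint8_t *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

/* An 802.1q-aware parser checks for the tag and skips it. */
size_t payload_offset(const uint8_t *frame)
{
    if (read_be16(frame + 12) == 0x8100)  /* TPID sits where the     */
        return 18;                        /* EtherType normally is   */
    return 14;
}

/* An unaware device skips that check: it reads bytes 12-13 as the
 * EtherType, finds 0x8100 (which it doesn't recognize), and typically
 * drops or misparses the frame. */
```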
At home, I use a widely available brand-name IGD. If I turn on the tagging property on my main computer's NIC (which shall remain nameless), I suddenly can't talk to one of my other computers, and I can't reach the internet anymore. While the 4-port switch in my IGD doesn't mind the 802.1q tag, the NAT routing function behind the WAN port does. Luckily, I can still talk to my Xbox 360. What a mess, though.
Since there are good reasons to use 802.1p, as Gabe mentioned in The Necessity for End-to-End QoS Experiments, this is a mess we are trying to clean up in Vista.