Kirk Evans Blog

.NET From a Markup Perspective

Ding, Dong, the DOM is Dead

  • Comments 3

Mark Fussell writes his Ode to the XML DOM, heralding its demise. 

I have had conversations with other XML enthusiasts about why the DOM sucks.  Mark cites the API laying itself bare with no chance of optimization; I say it is just a flat-out pig. 

For those who really don't want to think of XML as classes and the Infoset mappings of markup to object graphs, DOM stands for "Document Object Model."  The model is specified by the W3C and implemented by various vendors in different forms.  This DOM class model was the core of the "old" way of doing XML using COM: you might have worked with MSXML.DOMDocument at some point to send information using XMLHTTP in client-side JavaScript.  The .NET FCL also implements the DOM via System.Xml.XmlDocument.
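To make the model concrete, here is a minimal sketch using Python's standard-library `xml.dom.minidom` (the same W3C Document Object Model, just a different vendor's implementation than MSXML or System.Xml.XmlDocument; the XML snippet is made up for illustration):

```python
# The W3C DOM in action via Python's stdlib implementation, xml.dom.minidom.
from xml.dom import minidom

doc = minidom.parseString("<order id='42'><item>widget</item></order>")

root = doc.documentElement              # the single top-level element
print(root.tagName)                     # -> order
print(root.getAttribute("id"))          # -> 42
print(root.firstChild.firstChild.data)  # text node inside <item> -> widget
```

The same three-line pattern (parse, grab the document element, walk children) is what MSXML and XmlDocument consumers write in their respective languages.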

For those who don't really want to think of XML as objects but are sometimes forced to, the XmlDocument seems to require the smallest learning-curve investment. 

Even if you don't use the System.Xml.XmlDocument type directly, you likely use the DOM quite a bit and probably don't know it: if you create a custom configuration section using IConfigurationSectionHandler, then you are using the DOM model.  Which brings up a question I debated with someone at PDC:

But doesn't the configuration system pass only one XmlNode in the Create method for a custom configuration section?

The single XmlNode reference is just the camel's nose in the tent.  You get the whole DOM tree.
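A sketch of the camel's nose, again using Python's stdlib minidom as a stand-in (the element names are hypothetical): hand a function one node, and the whole document comes along with it, exactly as with the XmlNode passed to IConfigurationSectionHandler.Create.

```python
# One node is never just one node: every DOM node drags its tree behind it.
from xml.dom import minidom

doc = minidom.parseString(
    "<configuration><mySection key='value'/><other/></configuration>")
section = doc.getElementsByTagName("mySection")[0]

# From the "single node" you can reach the entire document:
print(section.ownerDocument is doc)         # -> True
print(section.parentNode.tagName)           # -> configuration
print(len(section.parentNode.childNodes))   # both siblings reachable -> 2
```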

The DOM is, internally, a linked list.  Before you run hiding from nightmares of your CS201 classes, I wrote a nostalgia piece on linked lists to help refresh your memory.  The concept of the DOM as a linked list might surprise some: it hides its linked-list roots through a sleight of hand known as XPath.  It also covers its linked-list heritage with the properties and methods required for the Node DOM type specified by the W3C.  Some examples of the properties required for a single node:

  readonly attribute Node            parentNode;
  readonly attribute NodeList        childNodes;
  readonly attribute Node            firstChild;
  readonly attribute Node            lastChild;
  readonly attribute Node            previousSibling;
  readonly attribute Node            nextSibling;

Everything in an XML document is a node... Elements, attributes, processing instructions, whitespace, element content... all are nodes.   Each node maintains a reference to at least one significant piece of information: the parentNode.   This is exactly the requirement for a linked list.  If I start at the innermost element in an XML document, I can get all the way back up to the very first element in the document just by traversing the parentNode of each node.
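That traversal can be sketched in a few lines (Python's stdlib minidom again; the nested element names are arbitrary). Walking the parentNode "links" from the innermost element back up to the document node is exactly the linked-list view of the DOM:

```python
# Follow parentNode from the deepest element all the way to the document node.
from xml.dom import minidom

doc = minidom.parseString("<a><b><c><d/></c></b></a>")
node = doc.getElementsByTagName("d")[0]

path = []
while node is not None:            # the document node's parentNode is None
    path.append(node.nodeName)
    node = node.parentNode
print(path)                        # -> ['d', 'c', 'b', 'a', '#document']
```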

Some nodes have no correlation to other nodes.  An element may contain a set of attributes (this is the same in HTML... a "tag" may have "attributes"), but attributes do not relate to each other positionally.  In fact, the XML 1.0 recommendation specifies that attribute order should always be considered insignificant.  Elements, however, may very well depend on a certain ordering.  To support this concept, an XmlLinkedNode type exists in the System.Xml namespace.  And if you click on the link for XmlLinkedNode (go ahead, it really does support a point here), you will see that XmlElement derives from XmlLinkedNode.  XmlLinkedNode's main purpose in life is to point to the node before it and the node after it: a doubly linked list.
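The sibling links, and the deliberate absence of them on attributes, can be seen directly (a sketch in Python's stdlib minidom; element names are made up):

```python
# Elements are doubly linked through previousSibling/nextSibling;
# attributes are nodes too, but carry no sibling links at all.
from xml.dom import minidom

doc = minidom.parseString("<list><i>1</i><i>2</i><i>3</i></list>")
first, second, third = doc.documentElement.childNodes

print(second.previousSibling is first)   # -> True
print(second.nextSibling is third)       # -> True

attr = doc.createAttribute("x")          # an attribute node...
print(attr.nextSibling)                  # ...has no position among peers -> None
```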

Every well-formed XML document has at least one element.  That element, called the document element, has a special parent called "the document node."  It is not represented in the XML text; it is really only there to support the concept of XML as a tree.  Back to our linked list: this would be the "head" of the list.  The head points to just one node following it, the documentElement.  The documentElement, in turn, points back to the document node as its parent.  Many other nodes can then point to the documentElement as their parent.  But all of this is pretty trivial, as you almost certainly have lots of elements in a document (a document with one element and 2,000 attributes would be pretty uninteresting).

Why does this make the DOM a pig?

As I mentioned in my nostalgia article, linked lists require all nodes to be loaded into memory to support traversal.  There are optimizations that you can do under the hood to ease some of this memory footprint, such as lazy instantiation, but at the cost of iteration performance.  Loading the entire structure into memory is one of the easiest ways to yield maximum performance when iterating over a large list, but at the cost of creating a very large initial working set.
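The trade-off is easy to demonstrate. This sketch contrasts the two costs in Python's stdlib: minidom builds every node up front, while a pull-style parser (ElementTree's iterparse here, standing in for .NET's XmlReader) touches one element at a time and can discard it. The document contents are invented for the demo.

```python
# DOM load-everything vs. streaming read-and-discard, side by side.
import xml.etree.ElementTree as ET
from io import StringIO
from xml.dom import minidom

xml_text = "<log>" + "<entry>x</entry>" * 10_000 + "</log>"

# DOM: all 10,000 child elements (plus their text nodes) live in memory at once.
doc = minidom.parseString(xml_text)
print(len(doc.documentElement.childNodes))   # -> 10000

# Streaming: count entries, releasing each node as soon as it has been seen.
count = 0
for _, elem in ET.iterparse(StringIO(xml_text), events=("end",)):
    if elem.tag == "entry":
        count += 1
        elem.clear()                         # discard the node's contents
print(count)                                 # -> 10000
```

Both arrive at the same answer; only the streaming version's working set stays flat as the document grows.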

Relatedly, one of the most frequent comments I get when speaking is "XmlReaders are just too hard; I have to know so much about XML that I shouldn't.  Plus, XmlDocument is pretty simple to use, comparatively."  I could go off on another long rant, but maybe that should be another thread altogether.

Ding Dong! The DOM is dead. Which old DOM? The Big Fat DOM!
Ding Dong! The Big Fat DOM is dead.

Wake up - sleepy head, rub your eyes, get out of bed.
Wake up, the Big Fat DOM is dead. She's gone where the dragons go,
Below - below - below. Yo-ho, let's open up and sing and ring the bells out.
Ding Dong! The merry-oh, sing it high, sing it low.
Let them know
The Big Fat DOM is dead!

I join Mark in singing the praise of the XPathDocument's return to glory, and the end of an era for the XmlDocument.

  • Good post! Go off on another rant!
  • I don't get it. The DOM is the most logical way to deal with an XML document. Yes, it has to load the whole document -- memory is cheap. I like XPath too, but the DOM is objects, and objects rule.
  • No, the DOM is not the most logical way to work with XML, and objects don't rule... the data they work with does.

    The DOM is just a tree with additional behaviors and interfaces tacked on for the concept of a node, creating specialized node types such as XmlAttribute, XmlElement, XmlWhitespace, etc. That tree is just an implementation of a linked list.

    Those with a CS programming background would quickly see some of the limitations of working with a linked list implementation: optimization becomes extremely problematic unless you maintain an external index for the representation, and that is not possible using the W3C's mandated API because the XmlNode is directly exposed and may not even be attached to a parent tree at all.

    Linked lists also suffer from the problem of mandating the entire list be present in memory unless you want to incur an overall larger performance hit by lazy loading the tree as it is navigated.

    A better abstraction is to use the XPathNavigator model because it can optimize the navigation over the underlying store by maintaining an external index to the underlying implementation.

    Another point to consider is that the most logical way to work with XML is when you don't notice it at all. This is when the underlying plumbing is at its best. Consider XML Web Services in .NET: unless developers ask for the WSDL explicitly, they are unlikely to recognize the SOAP underpinnings that make web services as simple as they are. So what we are really after is a simple means to abstract the data, with the fastest API we can get, specialized for a hierarchical structure. And the XmlDocument (actually, the W3C DOM) provides neither a clean abstraction nor the required performance.