On the Complexity of XML APIs

David Megginson (the creator of SAX) has a post entitled The complexity of XML parsing APIs where he writes

Dare Obasanjo recently posted a message to the xml-dev mailing list as part of the ancient and venerable binary XML permathread (just a bit down the list from attributes vs. elements, DOM vs. SAX, and why use CDATA?). His message included the following:

I don’t understand this obsession with SAX and DOM. As APIs go they both suck [0,1]. Why anyone would come up with a simplified binary format and then decide to cruft it up by layering a crufty XML API on it is beyond me.

[0] https://www.megginson.com/blogs/quoderat/archives/2005/01/31/sax-the-bad-the-good-and-the-controversial/

[1] https://www.artima.com/intv/dom.html

I suppose that I should rush to SAX's defense. I can at least point to my related posting about SAX's good points, but to be fair, I have to admit that Dare is absolutely right – building complex applications that use SAX and DOM is very difficult and usually results in messy, hard-to-maintain code.

I think this is a pivotal part of the binary XML debate. The primary argument for binary serializations of XML is that certain parties want to get the benefit of the wide array of technologies for processing XML yet retain the benefits of a binary format such as reduced size on the wire and processing time. Basically having one's cake and eating it too.

For me, the problem is that XML is already being pulled in too many directions as it is. In retrospect I realize it was foolish for me to think that the XML team could come up with a single API that would satisfy people processing business documents written in wordprocessingML, people building distributed computing applications using SOAP, and developers reading and writing application configuration files. All of these scenarios use intersecting subsets of the full functionality of the XML specification. The SOAP specs go as far as banning some features of XML, while others are frowned upon simply because the average SOAP toolkit doesn't know what to do with them. One man's meat (e.g. mixed content) is another man's poison.
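To make the mixed-content point concrete, here is a minimal sketch using Python's standard-library ElementTree as a stand-in for the tree-style APIs under discussion (it is not one of the .NET classes mentioned in this post, and the sample document is invented for illustration):

```python
# Illustrative only: Python's standard-library ElementTree stands in for the
# tree-style APIs discussed above; the document itself is made up.
import xml.etree.ElementTree as ET

doc = "<p>One man's <em>meat</em> is another man's <em>poison</em>.</p>"
root = ET.fromstring(doc)

# A naive "element-centric" view sees only the child elements and their text...
print([child.text for child in root])    # ['meat', 'poison']

# ...while the surrounding prose lives in .text and .tail, so handling mixed
# content faithfully means stitching the interleaved pieces back together.
pieces = [root.text]
for child in root:
    pieces.append(child.text)
    pieces.append(child.tail)
print("".join(pieces))                   # One man's meat is another man's poison.
```

A data-binding or record-oriented tool that only looks at child elements silently drops the interleaved text, which is exactly why mixed content is poison to one camp and indispensable to another.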

What has ended up happening is that we have all these XML APIs that expose a lot of XML's cruft that most developers don't need or, even worse, make the common scenarios difficult because they want to support all the functionality of XML. This is the major failing of APIs such as the .NET Framework's pull-model parser class, System.Xml.XmlReader, as well as DOM and SAX. The DOM also has issues with the fact that it tries to support conflicting data models (DOM vs. XPath) and serialization formats (XML 1.0 and XML 1.0 plus XML namespaces). At the other extreme we have APIs that try to simplify XML by only supporting specific subsets of its expressivity, such as the System.Data.DataSet and System.Xml.XmlSerializer classes in the .NET Framework. The problem with these APIs is that the developer is dropped off a cliff once they reach the limits of the API's XML support and have to either switch to a different API or resort to gross hacks to get what they need done.
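As a rough illustration of how the full-fidelity APIs make even the common case laborious, here is a sketch using Python's standard-library xml.sax in place of the APIs named above; the document and element names (config, timeout) are invented for the example:

```python
# Hedged sketch: Python's standard-library xml.sax stands in for the SAX-style
# APIs criticised above; the document and element names are invented.
import xml.sax

class TimeoutHandler(xml.sax.ContentHandler):
    """Pull a single <timeout> value out of a document -- note the hand-rolled
    state tracking the application must carry even for this trivial task."""

    def __init__(self):
        super().__init__()
        self.in_timeout = False
        self.timeout = ""

    def startElement(self, name, attrs):
        if name == "timeout":
            self.in_timeout = True

    def characters(self, content):
        # Character data may arrive in several chunks, so it has to be accumulated.
        if self.in_timeout:
            self.timeout += content

    def endElement(self, name):
        if name == "timeout":
            self.in_timeout = False

document = b"<config><timeout>30</timeout><retries>5</retries></config>"
handler = TimeoutHandler()
xml.sax.parseString(document, handler)
print(handler.timeout)   # 30
```

Multiply that boilerplate across every field an application cares about and the "messy, hard-to-maintain code" Megginson concedes above is the predictable result; the simplified APIs avoid it only until the document stops fitting their model.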

Unfortunately, one of the problems we had to deal with when I was on the XML team was that we already had too many XML APIs as it was. Introducing more would create developer confusion, but trying to change the existing ones would break backwards compatibility. Personally, I'd rather see effort put into creating better toolkits and APIs for the various factions that use XML, so it is easier for them to get work done, than constant churning of the underlying format, which only fragments it.