Support Knowledge Base, Article 674
.NET XML Best Practices: Reading XML Documents

.NET XML Best Practices

Part II: Reading XML Documents
by Aaron Skonnard

In the first part of this series, I covered the numerous XML-related classes in .NET for reading and writing XML documents and discussed the characteristics of each. As always in software development, there is a fundamental trade-off between performance/efficiency and productivity when comparing different implementations. The biggest and maybe only downside to the .NET XML framework is its sheer size, or in other words, the overwhelming number of ways to do the same thing. At the end of the last piece, Article 673: Choosing an XML API, I provided some decision trees to help guide you through choosing the right XML class for a given task.

As you learned in that piece, you'll typically write your reading code against one of four classes: XmlTextReader, XmlValidatingReader, XmlDocument (and the rest of the DOM API), or XPathNavigator. The following is a summary of the key points to consider when deciding which approach to use:

Use XmlTextReader if:

  • Performance is your highest priority and…
  • You don't need XSD/DTD validation and…
  • You don't need XSD type information at runtime and…
  • You don't need XPath/XSLT services

Use XmlValidatingReader if:

  • You need XSD/DTD validation or…
  • You need XSD type information at runtime

Use the DOM if:

  • Productivity is your highest priority or…
  • You need XPath services or…
  • You need to update the document (read/write)

Use XPathNavigator if:

  • You need to execute an XSLT transformation or…
  • You want to leverage an implementation (like XPathDocument)

These general guidelines are the result of comparing performance/efficiency with productivity for different groups of developers. The problem with these guidelines, however, is that it's hard to accurately measure productivity levels for entire groups of developers. For example, certain developers might be more productive using the DOM while at the same time others are more productive using XPathNavigator. Productivity generally has a lot to do with the aesthetics of the syntax and the underlying programming model.

Therefore, one remaining factor to consider is which API appeals to you the most in terms of the code you write. In the first part of this series it wasn't possible to provide much sample code to help illustrate the differences. So in this piece, we'll pick up from there by examining several code examples.

XmlTextReader

XmlTextReader is the "XML parser" in .NET. Hence, writing your code directly in terms of XmlTextReader is as close to the parser as you can get. You can instantiate an XmlTextReader over a Stream or a TextReader object containing the XML 1.0 byte stream. XmlTextReader parses the supplied byte stream and makes it available as a logical stream of nodes as defined by the abstract XmlReader base class. For more information, see my January and September 2001 articles for MSDN Magazine ([1] and [2] in the References section).

How XmlTextReader Works

Once you've instantiated an XmlTextReader, you can begin moving through the logical stream of nodes by calling Read. Since this approach requires flattening the XML tree structure into a linear stream of nodes, end element markers must be inserted into the stream to enable proper interpretation of the structure. At any point in time, the current node's name, type, and value can be inspected through XmlTextReader properties.
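To make the node-stream model concrete, here is a minimal sketch of the Read loop just described; the XML is supplied as an inline string (via StringReader) purely for illustration:

```csharp
using System;
using System.IO;
using System.Xml;

class ReadLoop
{
    static void Main()
    {
        // inline document for illustration; normally you'd read a file or stream
        string xml = "<x:name xmlns:x='http://example.org/name'>" +
                     "<first>Aaron</first><last>Skonnard</last></x:name>";
        XmlTextReader r = new XmlTextReader(new StringReader(xml));
        while (r.Read()) // advance through the flattened stream of nodes
        {
            // inspect the current node's type, name, and value
            Console.WriteLine("{0}: {1} = {2}", r.NodeType, r.Name, r.Value);
        }
    }
}
```

Note how the EndElement nodes show up in the output; they are the markers that let you reconstruct the tree structure from the flat stream.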

Dealing with attributes is fairly straightforward. Since attributes are not considered part of the tree structure, they don't show up in the stream of nodes traversed by Read. If you wish to process attributes while positioned on an element, you can either call MoveToAttribute to move to a specific attribute by name or index, or you can call MoveToFirstAttribute/MoveToNextAttribute to iterate through the entire attribute collection. Then you can return the cursor to the attribute's owner element by calling MoveToElement. While positioned on an attribute, you can retrieve its value through the Value property. Attributes by definition don't contain child text nodes so you don't have to worry about traversing them.
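As a sketch of the attribute-handling pattern just described (the person element and its attributes are hypothetical):

```csharp
using System;
using System.IO;
using System.Xml;

class AttributeWalk
{
    static void Main()
    {
        string xml = "<person id='7' dept='dev'/>"; // hypothetical element
        XmlTextReader r = new XmlTextReader(new StringReader(xml));
        r.Read(); // position the cursor on the person element
        // step off the element and visit each attribute in turn
        for (bool more = r.MoveToFirstAttribute(); more;
             more = r.MoveToNextAttribute())
        {
            Console.WriteLine("{0} = {1}", r.Name, r.Value);
        }
        r.MoveToElement(); // return the cursor to the owner element
    }
}
```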

If you're positioned on an element's child text node, you can also retrieve the text value through the Value property. If you wish to convert the text value to a CLR type, you should use the XmlConvert class, which was specifically designed for converting between the XML Schema and CLR type systems in either direction. For example, assuming the reader is positioned on a text node, the following code converts the value to a CLR double:

double age = XmlConvert.ToDouble(reader.Value);

Hence, the model for working with XmlTextReader consists of iterating through the forward-only stream of nodes, moving off the tree to inspect attributes when necessary, and retrieving text values and potentially coercing them to CLR values when desired.

Basic Custom Validation

Let's start by looking at how one would process an extremely simple XML document:

<x:name xmlns:x="http://example.org/name"><first>Aaron</first><last>Skonnard</last></x:name>

Since XmlTextReader doesn't provide any validation support, one must manually provide any desired error handling while processing the document stream. The following code fragment illustrates how one might process this document, validating it along the way:

// open XmlTextReader over file stream
XmlTextReader r = new XmlTextReader("name.xml");
r.Read(); // move to name element
if (r.LocalName.Equals("name") &&
    r.NamespaceURI.Equals("http://example.org/name"))
{
    r.Read(); // move to first element
    if (r.LocalName.Equals("first") &&
        r.NamespaceURI.Equals(""))
    {
        r.Read(); // move to first text
        Console.WriteLine("first: {0}", r.Value);
        r.Read(); // move to first end element
    }
    else throw new Exception("expected first");
    r.Read(); // move to last element
    if (r.LocalName.Equals("last") &&
        r.NamespaceURI.Equals(""))
    {
        r.Read(); // move to last text
        Console.WriteLine("last: {0}", r.Value);
        r.Read(); // move to last end element
    }
    else throw new Exception("expected last");
    r.Read(); // move to name end element
    // read through end--make sure wellformed
    while (r.Read());
}
else throw new Exception("expected name");

If you compile and run this code, you'll get the following output (assuming the XML document shown just above):

first: Aaron
last: Skonnard

If the document weren't valid according to the implied schema, it would throw an exception to notify the caller. One problem with this code, however, is that it makes the unrealistic assumption that the document won't contain white space, comments, or processing instructions. For instance, if the document were formatted with some additional white space (like indentations) for readability's sake, the code would throw an exception. To illustrate this, try running the above code against the following version of the document:

<x:name xmlns:x="http://example.org/name">
<first>Aaron</first>
<last>Skonnard</last>
</x:name>

The program generates an exception containing the following message: expected first. Similar problems would present themselves if there were comments or processing instructions in the document.

Dealing with Whitespace, Comments, PIs

One way around this problem is to write a function that skips over white space, comments, and processing instructions until it finds the next real content node (e.g., element, text, etc.). XmlTextReader provides MoveToContent for this purpose.

MoveToContent checks if the current node is a white space, comment, or processing instruction node. If it is, it skips over the current node and any others until it reaches the next content node. If it isn't, it stays put and doesn't advance the cursor. So to make this program deal with these additional node types gracefully, you simply call MoveToContent before inspecting each element name:

// open XmlTextReader over file stream
XmlTextReader r = new XmlTextReader("name.xml");
r.MoveToContent();
if (r.LocalName.Equals("name") &&
    r.NamespaceURI.Equals("http://example.org/name"))
{
    r.Read();
    r.MoveToContent(); // move to first element
    if (r.LocalName.Equals("first") &&
        r.NamespaceURI.Equals(""))
    {
        r.Read(); // move to first text
        Console.WriteLine("first: {0}", r.Value);
        r.Read(); // move to first end element
    }
    else throw new Exception("expected first");
    ... // remaining lines omitted for brevity
}
else throw new Exception("expected name");

Now the program processes the document properly even with the white space. It would also work with a document cluttered with white space, comments, and processing instructions like the one shown here:

<?xml-stylesheet type="text/xsl" href="name.xsl"?>
<!-- Aaron Skonnard's name structure -->
<x:name xmlns:x="http://example.org/name">
<first>Aaron</first>
<!-- middle initial optional -->
<last>Skonnard</last>
</x:name>
<!-- end of name -->

Simplifying Custom Validation Further

As you can see, this code isn't overly complex but it's already becoming quite tedious even with this extremely simple document. To help simplify things, the designers tried to identify the most common XmlTextReader practices and encapsulate them into higher-level methods.

For example, one thing that's common throughout the code above is the following pattern:

r.MoveToContent();
if (r.LocalName.Equals("name") &&
    r.NamespaceURI.Equals("http://example.org/name"))
{
    r.Read();
    ... // continue here
}
else throw new Exception("expected name");

This pattern can be summarized into the following three steps that must be performed for each element expected in the content model:

  1. Call MoveToContent to skip over any irrelevant nodes
  2. Compare the LocalName and NamespaceURI properties of the current node to what you expect at that location in the content model
  3. If they match, call Read to advance, otherwise throw an exception

XmlTextReader's ReadStartElement method encapsulates this pattern completely. Behind the scenes ReadStartElement calls MoveToContent and then checks the name of the current node against the supplied name information. If it matches, it then calls Read, otherwise it throws an appropriate exception. XmlTextReader also provides a ReadEndElement method that checks to make sure the current node is an EndElement node before advancing the cursor. If not, it throws an exception just like ReadStartElement.

Now the previous code fragment can be rewritten as follows:

XmlTextReader r = new XmlTextReader("name.xml");
r.ReadStartElement("name", "http://example.org/name");
r.ReadStartElement("first");
Console.WriteLine("first: {0}", r.Value);
r.Read(); // moves past text node
r.ReadEndElement(); // first
r.ReadStartElement("last");
Console.WriteLine("last: {0}", r.Value);
r.Read(); // moves past text node
r.ReadEndElement(); // last
r.ReadEndElement(); // name

Processing Text-Only Elements

As you can see, the previous code fragment is much simpler than before, but it's still tedious to deal with text-only elements. It's still necessary to call Read to move past the text node and then ReadEndElement to consume the element's end tag marker. The pattern for dealing with text-only elements is shown here:

r.ReadStartElement("first");
Console.WriteLine("first: {0}", r.Value);
r.Read(); // moves past text node
r.ReadEndElement(); // first

This pattern can be summarized into the following steps:

  1. Call ReadStartElement to consume the start tag
  2. Retrieve the element's text content through the Value property
  3. Call Read to move off the text node
  4. Call ReadEndElement to consume the end tag

To simplify using this pattern, the designers introduced another helper method called ReadElementString that encapsulates this behavior. Using ReadElementString makes it possible to simplify the code even further:

XmlTextReader r = new XmlTextReader("name.xml");
r.ReadStartElement("name", "http://example.org/name");
Console.WriteLine("first: {0}", r.ReadElementString("first"));
Console.WriteLine("last: {0}", r.ReadElementString("last"));
r.ReadEndElement(); // name

You'd be hard-pressed to simplify the code more than this.

Complex Content Models

All of these helper methods do a great job of simplifying the once tedious task of processing a forward-only stream of nodes. However, this approach works well only for fairly simple content models that contain nothing but sequences of elements. In fact, all of the following content model characteristics require special attention:

  • Choice groups
  • Optional elements/groups
  • Repeating elements/groups
  • All groups

Choices and Optional Elements

Assume that the content model for the name element is defined to be the first element, followed by a choice of either middle or mi elements, followed by the last element. With this type of content model, it's no longer possible to use ReadStartElement without first checking to see whether middle or mi was actually used as shown here:

XmlTextReader r = new XmlTextReader("name.xml");
r.ReadStartElement("name", "http://example.org/name");
Console.WriteLine("first: {0}", r.ReadElementString("first"));
r.MoveToContent(); // skip irrelevant nodes
switch (r.LocalName) // test for middle or mi element
{
    case "middle":
        Console.WriteLine("middle: {0}",
            r.ReadElementString("middle"));
        break;
    case "mi":
        Console.WriteLine("mi: {0}",
            r.ReadElementString("mi"));
        break;
    default:
        // comment out next line to make middle|mi optional
        throw new Exception("unexpected element");
}
Console.WriteLine("last: {0}", r.ReadElementString("last"));
r.ReadEndElement(); // name

In this case, you have to call MoveToContent explicitly before checking the local name in the switch statement. This code expects either a middle or mi element after the first element; one or the other is required. If you wanted to make the choice optional, you could simply stop throwing the exception in the default case of the switch statement.

Repeating Elements

Dealing with repeating elements also presents a problem since you have to check the name of the next element before committing to the ReadStartElement or ReadElementString call. The following code illustrates how to process the name element assuming it may contain zero or more first elements followed by a mandatory last element:

XmlTextReader r = new XmlTextReader(@"name.xml");
r.ReadStartElement("name", "http://example.org/name");
bool more = true;
while (more)
{
    r.MoveToContent();
    if (r.LocalName.Equals("first"))
        Console.WriteLine("first: {0}",
            r.ReadElementString("first"));
    else
        more = false;
}
Console.WriteLine("last: {0}", r.ReadElementString("last"));
r.ReadEndElement(); // name

All Groups

Dealing with all groups, as defined by XML Schema, is even trickier because combinatorial mathematics really starts to work against you. For example, let's look at how you could process the name element if its content model were defined as an all group of the following elements: first, middle, and last. This means that the name element must contain exactly one first, middle, and last element, but in any order.

If you try to attack this in the same way as the choice example, you'll end up with switch statements nested within switch statements. The easiest way to process something like this would be to compare the current node against a collection of names allowed at that location. The following code snippet illustrates how to set things up for this:

XmlTextReader r = new XmlTextReader("name.xml");
r.ReadStartElement("name", "http://example.org/name");
string[] allElements = {"first", "middle", "last"};
ProcessAllElements(r, allElements);
r.ReadEndElement(); // name

The implementation of ProcessAllElements calls ProcessElement once for each name in the original list:

static void ProcessAllElements(XmlReader r,
    string[] remaining)
{
    for (int i = 0; i < remaining.Length; i++)
    {
        r.MoveToContent();
        if (!ProcessElement(r, remaining))
            throw new Exception("unexpected element");
    }
}

And the implementation of ProcessElement compares the name of the current element to each name remaining in the list. If it's found, the name is allowed at that location so it's processed and then removed from the list. Otherwise if it's not found, an exception is thrown to indicate an unexpected element.

static bool ProcessElement(XmlReader r, string[] validNames)
{
    for (int j = 0; j < validNames.Length; j++)
    {
        if (validNames[j].Equals(r.LocalName))
        {
            Console.WriteLine("{0}: {1}", r.LocalName,
                r.ReadElementString());
            validNames[j] = "";
            return true;
        }
    }
    return false;
}

These examples illustrate that for more complicated content models, working with XmlTextReader directly requires a productivity sacrifice but in return you get better performance. The only alternative at this level is to leverage XmlValidatingReader to ensure that the document is structurally valid according to a DTD/schema without having to inspect each node yourself.

XmlValidatingReader

XmlValidatingReader can be used in conjunction with XmlTextReader to provide DTD/schema driven validation as well as runtime type information. Both of these are extremely useful and powerful techniques.

DTD/Schema-Driven Validation

The previous section demonstrated several common approaches to writing code with custom validation. You observed that as the content models increased in complexity so did the custom validation code. One way to simplify the processing code is to use DTD/schema driven validation in conjunction with XmlTextReader. This allows you to safely ignore portions of the document that you don't care about.

For example, the following document (name.xsd) provides an XML Schema definition for the all content model we just wrote the code for manually:

<xsd:schema targetNamespace="http://example.org/name"
            xmlns:tns="http://example.org/name"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:complexType name="name">
    <xsd:all>
      <xsd:element name="first" type="xsd:string"/>
      <xsd:element name="middle" type="xsd:string"/>
      <xsd:element name="last" type="xsd:string"/>
    </xsd:all>
  </xsd:complexType>
  <xsd:element name="name" type="tns:name"/>
</xsd:schema>

The following code fragment shows how to use XmlValidatingReader to perform XML Schema validation against name.xsd:

XmlTextReader tr = new XmlTextReader("name.xml");
XmlValidatingReader vr = new XmlValidatingReader(tr);
vr.Schemas.Add("http://example.org/name", "name.xsd");
vr.ValidationType = ValidationType.Schema;
vr.ValidationEventHandler +=
    new ValidationEventHandler(MyValidationHandler);
while (vr.Read())
{
    if (vr.LocalName.Equals("middle"))
    {
        // process middle element here ...
    }
}

In this case, the content model restrictions are defined in name.xsd and it's the job of XmlValidatingReader to enforce them while processing calls to Read. If there's a validity error, XmlValidatingReader will call through the assigned delegate to inform my code of the error. This makes it possible to deal with extremely complex content models defined in the schema without having to manually write the validation code. My processing code above ignores everything in the document until it locates the middle element, which it processes manually.

Although this might seem attractive, it's not that common for developers to write code like this because they usually can't ignore large sections of the document. Developers typically need to track where they are in the document and process things along the way. If you find yourself managing context and providing your own validation code (as shown in the previous section), you'd be better off just using XmlTextReader because it's much faster without the extra overhead of generic validation.

There are some cases where using XmlValidatingReader proves useful but it's more common to use it in conjunction with the DOM API as you'll see shortly. That said, it's still a good practice to use XmlValidatingReader during development and testing because it might help you find bugs in your validation code that you wouldn't have found otherwise. Then, when you're ready to go live, you can take it out and use XmlTextReader directly.
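One way to follow this advice without maintaining two versions of the code is to hide the reader choice behind a small factory and switch on the build configuration. This is only a sketch of the idea; the ReaderFactory class, file name, and namespace URI are illustrative, not part of the framework:

```csharp
using System.Xml;

class ReaderFactory
{
    public static XmlReader Create(string file)
    {
#if DEBUG
        // development/testing: schema-driven validation helps find bugs
        XmlValidatingReader vr =
            new XmlValidatingReader(new XmlTextReader(file));
        vr.Schemas.Add("http://example.org/name", "name.xsd");
        vr.ValidationType = ValidationType.Schema;
        return vr;
#else
        // release: skip the validation overhead
        return new XmlTextReader(file);
#endif
    }
}
```

Since XmlValidatingReader and XmlTextReader both derive from XmlReader, the processing code that consumes the returned reader doesn't change between builds.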

Reflection: Runtime Type Information

The one place where you must use XmlValidatingReader directly is for inspecting type information at runtime. XML Schema makes it possible to annotate XML documents with application-specific type information. In other words, XML Schema adds type to XML's generic, text-based data model; the resulting typed view is known by XML purists as the Post-Schema-Validation Infoset, or PSVI.

XmlValidatingReader makes it possible to inspect the XML Schema definition while processing the document. The XML Schema Object Model (SOM) represents the XML Schema definition in-memory as a tree of objects, which is very much like the DOM for XML documents. While processing the stream of nodes, you can access XmlValidatingReader's SchemaType property to inspect the type definition for the current node:

public static void DisplayTypeInfo(XmlValidatingReader vr)
{
    if (vr.SchemaType != null)
    {
        if (vr.SchemaType is XmlSchemaDatatype ||
            vr.SchemaType is XmlSchemaSimpleType)
        {
            object value = vr.ReadTypedValue();
            Console.WriteLine("{0}({1},{2}):{3}", vr.NodeType,
                vr.Name, value.GetType().Name, value);
        }
        else if (vr.SchemaType is XmlSchemaComplexType)
        {
            XmlSchemaComplexType sct =
                (XmlSchemaComplexType)vr.SchemaType;
            Console.WriteLine("{0}({1},{2})", vr.NodeType,
                vr.Name, sct.Name);
        }
    }
}

As you can see in the code, it's also possible to have XmlValidatingReader return the appropriate CLR object based on the XML Schema simple type of the current node, which is a nice alternative to using XmlConvert. Again, if you want to write this kind of reflection-driven code, you have no choice but to use XmlValidatingReader today.

XmlDocument

The DOM is currently and will probably continue to be the API used by the masses. This is mostly due to the fact that it's the easiest to use and it's already quite familiar to most developers. The DOM has been around the longest; it came out shortly after the original XML 1.0 specification and has been evolving ever since. Now most DOM implementations come with built-in XPath and XSLT support, which greatly simplifies complex processing problems of the past.

In addition to XPath, most DOM developers also use DTD/schema-driven validation. This makes sense with the DOM because once the document is finished loading without errors, you know that you're dealing with a valid document instance, which greatly simplifies the amount of error handling you have to build into your processing code.

If you don't use validation with the DOM and you have to write some manual validation code yourself, you'll find that it's even more tedious than with XmlTextReader. Skipping validation here rarely makes sense: its only downside is the extra overhead, and that overhead is already significantly outweighed by the overhead of the DOM itself. So unless you simply don't need validation at all, you'll always want to load XmlDocument through an XmlValidatingReader.

The following code fragment illustrates how to process name.xml with an all content model in conjunction with XSD-driven validation and XPath expressions:

XmlTextReader tr = new XmlTextReader("name-all.xml");
XmlValidatingReader vr = new XmlValidatingReader(tr);
vr.Schemas.Add("http://example.org/name", "name.xsd");
vr.ValidationType = ValidationType.Schema;
vr.ValidationEventHandler +=
new ValidationEventHandler(MyValidationHandler);
// load document with validating reader
XmlDocument doc = new XmlDocument();
// if we get past this, we know document is valid
doc.Load(vr);
// pull things out of document using XPath
XmlNode first = doc.SelectSingleNode("//first");
Console.WriteLine("first: {0}", first.InnerText);
XmlNode middle = doc.SelectSingleNode("//middle");
Console.WriteLine("middle: {0}", middle.InnerText);
XmlNode last = doc.SelectSingleNode("//last");
Console.WriteLine("last: {0}", last.InnerText);

As shown here, most developers will use XPath expressions to identify a portion of the tree and then use the standard DOM APIs (XmlNode, XmlElement, etc.) to move around from there and extract information.

If you need to update the XML document, the DOM is the only viable approach. XmlReader, XmlTextReader, XmlValidatingReader, and XPathNavigator all provide read-only interfaces to the document stream. The DOM, however, was designed as a read/write interface for exactly this purpose. The following code illustrates a read/write example:

XmlDocument doc = new XmlDocument();
doc.Load("aaron.xml");
XmlNode first = doc.SelectSingleNode("//first");
first.InnerText = "Tim";
XmlNode last = doc.SelectSingleNode("//last");
last.InnerText = "Ewald";
doc.Save("tim.xml");

See the next part of this series for more information on the process of writing XML documents.

As you know, using the DOM in conjunction with validation and XPath is the most expensive solution but the productivity benefits typically outweigh this obvious downside.

XPathNavigator

XPathNavigator is like XmlReader in that it provides a read-only, cursor-based programming model. However, unlike XmlReader, XPathNavigator exposes the XML document as a logical tree structure with parent, child, and sibling relationships. In other words, XPathNavigator makes it possible to traverse the tree without restrictions. For more details on the mechanics of XPathNavigator, see my September 2001 MSDN Magazine article ([2] in the References section at the bottom of this page).
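To illustrate the unrestricted traversal, here is a minimal sketch of a depth-first walk using the parent/child/sibling moves; the inline document is for illustration only:

```csharp
using System;
using System.IO;
using System.Xml.XPath;

class TreeWalk
{
    static void Main()
    {
        XPathDocument doc = new XPathDocument(new StringReader(
            "<name><first>Aaron</first><last>Skonnard</last></name>"));
        Walk(doc.CreateNavigator(), 0);
    }

    // recursive depth-first traversal over the logical tree
    static void Walk(XPathNavigator nav, int depth)
    {
        Console.WriteLine("{0}{1}: {2}",
            new string(' ', depth * 2), nav.NodeType, nav.Name);
        if (nav.MoveToFirstChild())
        {
            do { Walk(nav, depth + 1); } while (nav.MoveToNext());
            nav.MoveToParent(); // restore the cursor before returning
        }
    }
}
```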

Since XPathNavigator models a logical tree, its functionality is similar to the DOM's in many ways. The key benefit of XPathNavigator, however, is that it's much simpler to implement than the DOM API, which encourages custom XPathNavigator implementations on top of other non-XML data sources. In the September 2001 article mentioned above [2], I provided several custom XPathNavigator implementations that sit on top of the file system, the registry, .NET assemblies, and even zip files. If you're interested, you can download the source code and take a look.

You should use XPathNavigator whenever there is a custom implementation available that you want to use; otherwise you'd probably opt for the more familiar DOM API as shown in the previous section. The following code fragment illustrates how you could use my custom FileSystemNavigator class to execute an XPath expression against the file system:

FileSystemNavigator fsn = new FileSystemNavigator();
XPathNodeIterator ni = fsn.Select("/mycomputer/c/temp/*");
while (ni.MoveNext())
{
// process current file or directory here
}

Another implementation that's available in the .NET XML framework is called XPathDocument. XPathDocument is an optimized in-memory tree structure that can be navigated using the XPathNavigator interface. XPathDocument loads faster than XmlDocument and it's more efficient while processing XPath expressions and XSLT transformations. The following code fragment illustrates how you could use XPathDocument to re-implement the previous DOM example:

XmlTextReader tr = new XmlTextReader("name-all.xml");
XmlValidatingReader vr = new XmlValidatingReader(tr);
vr.Schemas.Add("http://example.org/name", "name.xsd");
vr.ValidationType = ValidationType.Schema;
vr.ValidationEventHandler +=
new ValidationEventHandler(MyValidationHandler);
// load document with validating reader
// if we get past this, we know document is valid
XPathDocument doc = new XPathDocument(vr);
// retrieve XPathNavigator reference
XPathNavigator nav = doc.CreateNavigator();
// pull things out of document using XPath
XPathNodeIterator it = nav.Select("//first");
it.MoveNext();
Console.WriteLine("first: {0}", it.Current.Value);
it = nav.Select("//middle");
it.MoveNext();
Console.WriteLine("middle: {0}", it.Current.Value);
it = nav.Select("//last");
it.MoveNext();
Console.WriteLine("last: {0}", it.Current.Value);

I should reiterate at this point that .NET's implementation of XSLT (in XslTransform) is defined completely in terms of XPathNavigator. So if XSLT is on your list, you'll be using XPathNavigator whether you like it or not. This design is quite powerful since it enables you to perform XSLT transformations against data from any data store for which an XPathNavigator implementation exists. The following example shows how to execute an XSLT transformation against an XPathDocument:

XPathDocument doc = new XPathDocument("name.xml");
XPathNavigator nav = doc.CreateNavigator();
XslTransform tx = new XslTransform();
tx.Load("name.xsl");
tx.Transform(nav, null, Console.Out);

There is also an XPathNavigator that sits on top of XmlDocument (DOM) trees that you can use to execute XSLT transformations as shown here:

XmlDocument doc = new XmlDocument();
doc.Load("name.xml");
XPathNavigator nav = doc.CreateNavigator();
XslTransform tx = new XslTransform();
tx.Load("name.xsl");
tx.Transform(nav, null, Console.Out);

It's harder to draw clear lines between when you should use XPathNavigator and the more universally understood DOM API. The former is generally faster and more efficient, but its API and programming model aren't as familiar or intuitive as the DOM, at least not yet.

Conclusion

There are many APIs available in .NET for reading XML documents. Choosing the right one without understanding the key differences is about as easy as picking the winning lottery number. But once you're familiar with the entire .NET XML framework and have gained some experience using each of these different approaches, choosing the right API for a given situation becomes quite clear. See the guidelines at the beginning of this article to review the determining factors.

References

[1] Aaron Skonnard, "XML in .NET: .NET Framework XML Classes and C# Offer Simple, Scalable Data Manipulation," MSDN Magazine, January 2001, http://msdn.microsoft.com/en-us/magazine/cc302158.aspx

[2] Aaron Skonnard, "Writing XML Providers for Microsoft .NET," MSDN Magazine, September 2001, http://msdn.microsoft.com/en-us/magazine/cc302171.aspx

Sample Code

Download the sample code, readingxml.zip, at the bottom of this page.

About The Author

Aaron Skonnard is a consultant, instructor, and author specializing in Windows technologies and Web applications. Aaron teaches courses for DevelopMentor and is a columnist for Microsoft Internet Developer. He is the author of Essential WinInet, and co-author of Essential XML: Beyond MarkUp (Addison Wesley Longman). [SoftArtisans note: As of this update in 2012, Aaron is CEO of Pluralsight] Contact him at http://www.pluralsight-training.net/microsoft/Authors/Details?handle=aaron-skonnard.

More .NET XML Best Practices

Choosing an XML API

Writing XML Documents

Attachments
Attachments/KB674_ReadingXml.zip
Created : 2/3/2009 5:53:01 PM (last modified : 4/23/2012 4:31:41 PM)

Copyright 2010 © SoftArtisans, Inc. All Rights Reserved.
