This report was run against the DITA User’s Guide.
The report generates a per-check-type pass or fail based on the number of errors found. Syntax errors, for example, generate a fail if just one is found. Terminology errors, however, do not generate a fail until 10 are found (see line 202 of qascript.xsl). There is also an overall pass/fail calculated from the individual results (qascript.xsl line 111). As you can see, the DITA User's Guide does not fare well against the default plugin – but we shouldn't expect it to. The plugin needs to be customized for a particular group's needs.
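To make the thresholding concrete, here is a minimal Python sketch of the per-check and overall pass/fail logic described above. The check names and the default threshold are assumptions for illustration; the actual logic lives in the plugin's qascript.xsl.

```python
# Hypothetical sketch of the QA report's pass/fail logic.
# Thresholds: a check fails once its error count reaches the threshold.
CHECK_THRESHOLDS = {
    "syntax": 1,        # fail on the very first syntax error
    "terminology": 10,  # terminology errors only fail at 10 or more
}

def check_passes(check_type: str, error_count: int) -> bool:
    """Return True if an individual check type passes for the given error count."""
    # Unknown check types are assumed strict (fail on first error).
    return error_count < CHECK_THRESHOLDS.get(check_type, 1)

def overall_passes(error_counts: dict) -> bool:
    """The overall result passes only if every individual check passes."""
    return all(check_passes(check, count) for check, count in error_counts.items())
```

So a document with nine terminology errors and no syntax errors still gets an overall pass, while a single syntax error fails the whole report.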
The document summary provides a variety of useful information about the DITA document, including a total word count (new!), a pie chart that shows how often each element is used (new!), which conditions have been applied, and more. This information, especially the conditions, is useful for troubleshooting successful builds. That may sound counter-intuitive, but a build can come out successful and still have issues with conditions, language codes, or a host of other things that are valid DITA but not correct.
Finally, the report lists errors, conditions, and word counts (new!) per topic. This is where the work gets done: authors can review this section and make corrections in the corresponding DITA files. The issues, especially terminology and language warnings, should still be reviewed by a person, of course. The QA plugin is pretty good, but it is not a natural language processor, so it may generate false positives.
We have more plans around word counts too. We’re thinking we can add a target word count that represents the ideal topic size. Then we can flag topics that are over or under the ideal size, perhaps by 10% or more.
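A sketch of how that word-count flagging could work, assuming a configurable target and a 10% tolerance. The target value and the function name are made up for illustration; this is not part of the plugin today.

```python
# Hypothetical sketch of the planned word-count check: flag topics whose
# word count deviates from an ideal target by 10% or more.
TARGET_WORDS = 300   # assumed ideal topic size, would be configurable
TOLERANCE = 0.10     # flag at 10% or more deviation

def flag_topic(word_count: int, target: int = TARGET_WORDS,
               tolerance: float = TOLERANCE):
    """Return 'over' or 'under' if the topic misses the target, else None."""
    deviation = (word_count - target) / target
    if deviation >= tolerance:
        return "over"
    if deviation <= -tolerance:
        return "under"
    return None
```

With a 300-word target, a 330-word topic would be flagged as over, a 270-word topic as under, and anything in between would pass silently.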
We’d love to hear more ideas for what to include in the report.
[Update: I just added @rev and @status values to the per-topic section.]