DITA > HTML > JSON

At Information Development World 2015 some attendees expressed interest in the JSON documentation format that feeds my documentation portal.

Starting from DITA source, there is a series of two transformations:

  1. HTML2 from DITA4Publishers, which flattens the directory structure.
  2. A custom XSLT that reads the resulting index and creates nested structures representing the document.

Each topic in the map becomes a “document” element in the JSON, made up of the following pieces:
Field        Source
Title        Topic title
ID           Topic filename
Unique key   Top-level document filename + topic filename
Ancestors    List of ancestor topics at all levels
Summary*     Topic shortdesc
Body         Topic body
HREF         Topic path + topic filename
Documents*   List of sub-documents

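Here’s a hypothetical sketch of a single document element. The field names and values are illustrative only; the real shape is in the hierarchy.json sample on GitHub (see below):

{
  "title": "Installing the widget",
  "id": "installing-widget.html",
  "key": "admin-guide__installing-widget",
  "ancestors": ["Administration Guide", "Setup"],
  "summary": "How to install the widget.",
  "body": "<p>Topic body as HTML…</p>",
  "href": "admin-guide/installing-widget.html",
  "documents": []
}
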
The JSON created in stage 2 is loaded into MongoDB for rendering on the documentation portal. As the loader and the rest of the portal infrastructure were developed by the support tools team, I can’t give any insight there except to say that cross-references and image links presented a bit of a challenge.
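
For local experimentation, the generic way to load the stage-2 output is mongoimport. This is just a sketch, not the support tools team’s actual loader, and the database and collection names here are made up (add --jsonArray if the file holds an array of documents):

mongoimport --db portal --collection documents --file hierarchy.json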

The XSLT (ditahtml2json.xsl) and a sample JSON (hierarchy.json) generated from the DITA-OT hierarchy.ditamap are available from GitHub. More background is available in the slides from the IDW presentation.

DITA to Word for SME review

At the TC Camp unconference earlier this year, I participated in the discussion on review processes and tools. I’ve been interested in this topic for a while (here’s an article from 6 years ago). The session was pretty sobering. There have been a lot of great ideas, and even some good tools, for conducting reviews over the past 10 or 12 years. But in the final analysis, it seems that no one has been successful in implementing and sustaining any of these approaches. SMEs, in general, seem unwilling to incur any overhead whatsoever in the quest for good reviews. Overhead certainly means adopting new tools, especially those that can’t be used offline, but it also means having to give even a moment’s thought to the process.

So how can information developers move past the ad hoc approach to reviewing? Clearly, we need to provide a familiar experience so SMEs don’t have to think about it. The unpleasant conclusion is that it should be based on a common authoring and review tool such as Google Docs or (gasp!) Word+SharePoint. Despite the contempt we feel for Word, my team is constantly asked to provide content in these forms for review. So, if you can’t beat ’em, join ’em.

Since my company is in the final stages of moving from Google to Office 365, we decided to look at Word as an option, even though in my opinion the Google Docs review experience is superior. The RTF transformation in the DITA Open Toolkit has been broken for as long as I can remember. In the past I’ve used HTML as an intermediate step for going from DITA to Word, but it has a lot of styling issues.

Then I found that Word does a pretty handy conversion from PDF. Just open the PDF in Word, and there it is. The formatting maybe isn’t good enough for production use, but it’s plenty good enough for review. One gap is that cross-references aren’t preserved, but that’s not a show-stopper for review.

It’s hard to overstate the revulsion I feel in using PDF as an interchange format. How ridiculous is it to strip out all the hard-won semantics from the source content, then heuristically bring back a poor imitation of that information? But I’m no Randian hero, just a guy trying to get stuff done.

Automation

Ok, so it’s sort of interesting that you can convert PDF to Word just by opening it up and saving it, but that’s a manual process. What about automating the conversion?

Naturally PowerShell is the answer. The full script is a bit longer, but here are the important parts.

# Start Word invisibly via COM automation
$wordApp = New-Object -ComObject Word.Application
$wordApp.Visible = $False
# WdSaveFormat enum from the Word interop assembly
$SaveFormat = "Microsoft.Office.Interop.Word.WdSaveFormat" -as [type]

# COM calls need an absolute path
$pdfFile = (Resolve-Path $pdfFile).Path
# Open(FileName, ConfirmConversions, ReadOnly): skip the conversion prompt, open read-only
$wordDoc = $wordApp.Documents.Open($pdfFile, $false, $true)
Write-Host "Opened PDF file $pdfFile"

# swap the extension rather than replacing every "pdf" in the path
$wordFile = [System.IO.Path]::ChangeExtension($pdfFile, "docx")
# wdFormatXMLDocument is the .docx format; wdFormatDocument would write a legacy .doc
$wordDoc.SaveAs([ref]$wordFile, [ref]$SaveFormat::wdFormatXMLDocument)
Write-Host "Saved Word file as $wordFile"

You’ll want to wrap opening the PDF file and saving the DOCX file in a try/catch block, but that’s about it. My version also adds a “Draft” watermark to the Word file.
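
A minimal sketch of that wrapper, reusing the variables above; the finally block also releases Word, which helps with the stuck-process issue described under Troubleshooting:

try {
    $wordDoc = $wordApp.Documents.Open($pdfFile, $false, $true)
    $wordDoc.SaveAs([ref]$wordFile, [ref]$SaveFormat::wdFormatXMLDocument)
}
catch {
    Write-Host "Conversion failed: $_"
}
finally {
    # close the document and quit Word so WINWORD.EXE actually exits
    if ($wordDoc) { $wordDoc.Close() }
    $wordApp.Quit()
}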

Troubleshooting

One issue I found was that sometimes the file wouldn’t close properly, and the next time I tried to convert the same file, a file-in-use warning would pop up. The solution was to go to the Task Manager and kill WINWORD.EXE.
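
If that happens a lot, one line of PowerShell does the same thing as Task Manager; note that it kills every running Word instance:

Get-Process WINWORD -ErrorAction SilentlyContinue | Stop-Process -Force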

Next steps

Using this utility is a two-step process. The first thing I’d like to do is create an Ant target that generates a PDF from DITA source and then immediately converts it to Word.
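
Here’s a rough sketch of that target, assuming a DITA-OT 1.x Ant build (where dita2pdf2 is the stock PDF target) and that the script above is saved as convert-pdf2word.ps1, taking the PDF path as its parameter; the target name and output path are placeholders:

<target name="dita2word" depends="dita2pdf2">
    <!-- hand the freshly built PDF to the PowerShell converter -->
    <exec executable="powershell.exe">
        <arg line="-ExecutionPolicy Bypass -File convert-pdf2word.ps1 ${output.dir}/mybook.pdf"/>
    </exec>
</target>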

Once that’s done, the desired end state would be to do regular automated builds and upload to SharePoint. The trick will be to merge comments, but that seems to be possible with PowerShell. Although I haven’t fully investigated the approach described here, it looks promising.

Information Development World 2015

Last week I attended the Information Development World conference in San Jose. My presentation, “Dynamic chunking of component-authored information,” covered what tech docs we present on my company’s support portal, how we do it, and why we’re doing it that way. Reception was favorable, with interest from several companies and researchers. A representative from the DITA Technical Committee asked about adding a JSON transform to the Open Toolkit.

There were some interesting presentations, so I thought I’d summarize the key takeaways. The full agenda is here: https://www.eiseverywhere.com/ehome/113382/schedule/

 

Unforgettable: The neuroscience of memorable content (Dr. Carmen Simon)

The user experience of interacting with information is shaped largely by expectations, and the information developer can take steps to shape those expectations. Expectations = Beliefs + Tools

 

A radical new way to control the English Language (Dr. George Gopen)

Expectations again: readers have fixed expectations about where to look for what in a text. The key insight is that the most important words in a sentence are expected at the end. (BC: The concept of information structure from linguistics leads to much the same conclusion.)

His 1990 American Scientist article is recognized as one of 36 classic articles from the publication’s history. His columns for the journal Litigation are short and digestible: http://georgegopen.com/articles/litigation/

 

Open authoring: Content collaboration across disciplines (Ralph Squillace, MS Azure)

MS Azure documentation practices enable broad collaboration using GitHub plus Markdown. Docs undergo a freshness review and are updated or discarded every 3 months (BC: we should be able to get reports about potentially stale topics from Git; see the sketch below). Key metrics: freshness, performance, satisfaction. They conduct periodic “hackadocs” with SMEs to create/update documentation.
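
The stale-topic report in the BC note is easy to prototype. A hedged PowerShell sketch that prefixes every tracked topic with its last commit date, oldest first:

git ls-files *.dita | ForEach-Object {
    # last commit date for each topic; sorting floats the stalest to the top
    "$(git log -1 --format=%cd --date=short -- $_) $_"
} | Sort-Object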

 

DocOps (Wade Clements, CA)

Inspired by the DevOps approach. Move from trying to get content perfect before publishing to being able to make corrections/adjustments quickly, and work from data not anecdotes. Capture referrals from context-sensitive help in the UI to the docs. Metrics use case: predict if a user who comes to the docs ultimately opens a support case.

 

Work smarter not harder (Skip Besthoff, InboundWriter)

By analogy to botany: some pieces of content are perennials (ongoing use, long-term interest), and some are annuals (one-time use). Focus on creating better content (perennials), not more content. The average cost of a piece of content in the enterprise is $900 (BC: according to marketing consultant Jay Baer; I think technical content costs more over its lifecycle; or maybe it’s $900/page-ish). Move from using simple keywords to topic clusters.

 

Going mapless (Don Day, founding chair of OASIS DITA technical committee)

Some use cases for DITA may not require maps for top-level navigation. To go mapless, though, robust search, tags/keywords, and topic-to-topic cross-references are required. Mapless DITA was implemented in wikis for “The Language of” series from XML Press: http://tlowiki.com/ See also expeDITA: http://expedita.info/ (BC: I’m not ready to jettison maps…yet.)

 

Single-sourcing publishing across multiple formats (George Bina, oXygen)

Specifically, publishing from multiple input formats (such as Excel, CSV, Markdown, and SVG) via dynamic transformations to DITA. It’s actually real: https://github.com/oxygenxml/dita-glass

 

Past, present, and future of DITA (Kristen Eberlein, OASIS DITA technical committee)

The DITA 1.3 spec is complete and will be officially released in mid-December. It includes several interesting new features: a troubleshooting topic type, a classification domain/map, an SVG domain, and a doc release notes capability.

Reviews were done using DITAweb: http://about.ditaweb.com/

DITA 2.0 won’t be out for about 5 years. The plan is to include Lightweight DITA.

QA Plugin: Solving for Attribute Chunk

The Issue

The @chunk="to-content" requirement for the QA plugin has always been a bit sticky. Honestly, I hadn’t thought much about it, since we run the QA plugin through a self-service web server and that attribute is handled by a Python controller. Thinking in terms of local builds, however, it became evident that setting @chunk by hand would quickly become a tiresome routine.

Besides attribute handling, the web server also masks another consideration: the QA plugin may not be running in isolation from other plugins.

The First Iteration

The first iteration of moving this functionality into the plugin itself resulted in a new build target extending the chunk preprocess step.

In plugin.xml:

<feature extension="depend.preprocess.chunk.pre" value="setchunk"/>

The target in build_qadata.xml:

<target name="setchunk" description="Set @chunk to-content on the temp input bookmap">
    <!-- strip any existing chunk attribute; the regex dot matches either quote style -->
    <replaceregexp file="${dita.temp.dir}/${user.input.file}"
                   match="chunk=.to-content." replace="" />
    <!-- add chunk='to-content' to the root bookmap or map element -->
    <replace file="${dita.temp.dir}/${user.input.file}"
             token="&lt;bookmap " value="&lt;bookmap chunk='to-content' " />
    <replace file="${dita.temp.dir}/${user.input.file}"
             token="&lt;map " value="&lt;map chunk='to-content' " />
</target>

The new target used a regex replace to add the chunk attribute just before processing began in the temporary build directory. This solved the problem of manually setting the attribute, but it also extended the chunk pre-processing to sibling plugins.

The Solution

It’s possible to add an if-condition to a target that looks for the presence of a command-line parameter, but I needed to look for a parameter with a specific value. A second iteration added a double-hop if-condition to the Ant call.

<condition property="if.chunk">
    <equals arg1="${setchunk}" arg2="true" casesensitive="false" />
</condition>

<target name="setchunk" description="Set @chunk to-content on the temp input bookmap" if="if.chunk">
    <replaceregexp file="${dita.temp.dir}/${user.input.file}"
                   match="chunk=.to-content." replace="" />
    <replace file="${dita.temp.dir}/${user.input.file}"
             token="&lt;bookmap " value="&lt;bookmap chunk='to-content' " />
    <replace file="${dita.temp.dir}/${user.input.file}"
             token="&lt;map " value="&lt;map chunk='to-content' " />
</target>

This approach checks that the setchunk switch is present with a value of true before applying the target. The build is then invoked with:

dita -f qa -i samples/taskbook.ditamap -Dsetchunk=true

So if you run the QA plugin alongside any others, you can leave off the switch to avoid unwanted chunk attributes.

Word Conversion Tool

I initially drafted this post about a Word conversion tool two years ago and, for some forgotten reason, never published it. The project was eventually scrapped, but we did use the tool for quite a while. I’ve gone back and made the article past tense, and added some reflection. I think there’s still some good stuff in here…

Word. It ain’t going away anytime soon. As a DITA evangelist (and, more generally, a structured authoring evangelist), I’ve long loathed the Wild West approach to documentation that Word engenders. Okay, sure, it is easy to use. It doesn’t really require any training. I suppose, yes, one could argue that writing in Word lets authors focus on the content rather than on “tagging.” (I think that argument is faulty, as I’m sure you do, but we won’t go into that here.) The point is, if Word is going to be here awhile, what might we do about it?

Word2DITA Plugin (DITA4Publishers)

One of the groups my team supports is solutions engineering. This group figures out the best way to run 3rd-party applications on our platform. Although they are not writers, one of their primary deliverables is documentation, and a lot of it. My team provides editorial services throughout the entire lifecycle.

Now that we’ve made great progress on content quality, which is the most important thing, here’s the problem: how to improve the user experience of the content. It will surely come as no surprise that the solutions documentation is authored in Word. You can get a minimally viable PDF from Word, but that’s about it.

I dream of a time when there is a DITA authoring interface so nice that these authors could create DITA topics directly, but that time isn’t now. They are more than casual authors, but writing and content management isn’t close to 100% of their job either. As such, authoring and reviewing in Word is a requirement.

At the same time, we already have a refined process to publish to our support portal in a searchable fashion. This process is based on DITA to HTML.

You can see where I’m going. How to get from Word to DITA so we can use our existing publishing pipeline?

The first solution that comes to mind is the DITA4Publishers Word2DITA plugin.

Installation

The instructions here seem to be out of date. The good news is that the reality is easier:

  1. Install the DITA4Publishers plugins.
  2. Copy the sample file from GitHub to the samples folder under DITA-OT.
  3. Run the transform:
    ant -f build.xml -Dtranstype=word2dita -Dargs.input=word2dita_single_doc_to_map_and_topics_01.docx

In the out folder you’ll see a map and topics. This is a sample document with the default style-to-tag mapping.

Images

Unfortunately, images are not extracted. The solution (found here) is to open the Word file in oXygen and extract the media folder to the topics folder created by word2dita. This is a bitter disappointment: I had envisioned a single build target that would convert the Word file to DITA and then to PDF, HTML, and ePUB. There would have to be a manual step in there to extract the images.

But then I put 2 and 2 together: the DOCX file is a ZIP, and Ant has an unzip task. So I added these lines to the target:

<!-- a DOCX file is a ZIP archive: unpack it to the temp directory... -->
<unzip src="${args.input}" dest="${temp}" />
<!-- ...then copy the embedded images next to the generated topics -->
<copy todir="${out}/topics/media" failonerror="false">
  <fileset dir="${temp}/word/media" />
</copy>

Now the DITA output is complete.

QA Check Compiler

We’ve been working on some enhancements for the QA plugin that are now available. You can download the plugin from GitHub.

The first enhancement I want to talk about is the QA check compiler.

Writing a QA script in PowerShell was a pretty keen idea, even if I do say so myself. Moving to an Open Toolkit plugin was an even better idea with better execution. One of the drawbacks of the OT mechanism, however, is how complicated the expression of a simple check is.

For example, let’s say you want to flag occurrences of utilize and suggest use instead. This is the expression you have to write:

<xsl:if test="descendant::*[not($excludes)]/text()[matches(.,'utilize', 'i')]">
  <data type="msg" outputclass="term mmstp" importance="recommended">Found "utilize". Use "use".</data>
</xsl:if>

The contents of the matches call and the value and attributes of the data element are all significant and also very repetitive. As we all know, repetition leads to errors.

Authoring Checks for use with the Compiler

With the QA check compiler, you author the checks in an abbreviated form: rows in a properties table inside a DITA reference topic. To express the example rule above, you just add a row to a properties table specifying the severity, expression, and message (there’s a sketch below).

The QA compiler, executed by the compilechecks target, takes care of converting the rows in the properties tables into checks that the plugin can execute.

  • The propdesc becomes the message for the check.
  • The propvalue becomes the argument to the matches function in the XPath expression.
  • The proptype becomes the @importance.
  • The @id of the parent properties table becomes the @outputclass of the check.

You can have as many properties tables as you want. If the @id is term_mmstp, the resulting category will be term mmstp. (Spaces aren’t allowed in @id, so an underscore is necessary; it is then replaced with a space in the output.) These categories are unconstrained: you can make them whatever you want.

The proptype element is limited to the legal values of @importance: default, deprecated, high, low, normal, obsolete, optional, recommended, required, and urgent.
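
For example, the utilize check from earlier might be authored as a row like this. The markup is a hypothetical sketch; see tools/qacompiler/qa_checks_r.dita for the canonical form:

<properties id="term_mmstp">
    <property>
        <proptype>recommended</proptype>
        <propvalue>utilize</propvalue>
        <propdesc>Found "utilize". Use "use".</propdesc>
    </property>
</properties>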

Enabling the QA Compiler

The result of the QA compiler isn’t enabled by default. To enable it, uncomment the xsl:include call in xsl/qa_checks/_qa_checks.xsl and also remove the term template from that stylesheet. (The QA compiler produces a template called term to make it easy to integrate, and you can’t have two templates with the same name.) Once the result is included, you can start adding and modifying checks in tools/qacompiler/qa_checks_r.dita, which is a DITA reference topic. Don’t forget to run ant compilechecks after editing the DITA topic.

XML-aware diff with Git

One of the less-than-perfect aspects of using Git for XML is comparing versions of a file. Standard diff tools are not optimized for files that contain markup. Not only is the markup exposed, but irrelevant details (like indentation or line length) can appear far more significant than they really are. Although you can reduce the impact by telling the diff tool to ignore whitespace, such tools will never be semantically aware.

The Windows client TortoiseGit includes a graphical diff tool. If you select a revision of a file in the Git repository, you can diff it against previous or later versions. This is a convenient feature, but it’s disappointing that the diff is not XML aware.

I just found out that oXygen includes a graphical diff tool called diffFiles.exe. (That it’s graphical is significant: it can’t write output to the console.) So I wondered if there was a way to have TortoiseGit use diffFiles rather than TortoiseGitMerge.

It turns out that there is. Go to TortoiseGit > Settings > Diff Viewer and click Advanced. Create new entries for .dita and .xml, setting the following (adjusting the file path as needed for your environment) as the Program:

 C:\Program Files\Oxygen XML Editor 16\diffFiles.exe %base %mine

Now when you tell TortoiseGit to compare DITA or other XML files, it will use the oXygen XML-aware diff rather than TortoiseGitMerge.
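
TortoiseGit isn’t the only way in: plain Git can call the same tool through git difftool. A sketch with the same path assumption as above (mytopic.dita is a placeholder):

git config --global diff.tool oxygen
git config --global difftool.oxygen.cmd '"C:/Program Files/Oxygen XML Editor 16/diffFiles.exe" "$LOCAL" "$REMOTE"'
git difftool HEAD~1 -- mytopic.dita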

There are a couple of limitations. One is that you can’t use oXygen’s diff to do a 3-way merge, which can be useful if you have merge conflicts; however, I never do this with XML files. The other is startup time: TortoiseGitMerge opens almost instantly, while the oXygen diff takes several seconds.

QA Plugin Use Case: Learning Engagement

I thought it would be useful to share a use case for the QA plugin from the Education team at Citrix. In addition to the metrics in the open-source code, we’ve added a number of our own, used to measure the quality of instructional design in our courses. For example, we calculate what we call an “engagement ratio”: the ratio of words to interactions. We find a good target is 250 words per interaction. (A 10,000-word course with 20 interactions has a ratio of 500, a signal to add interactions.) The ratio gives us a single metric that tells us, at least directionally, whether the course will offer a sound experience for the student.

Of course, if the content uses a lot of “click to see more text” interactions, then a low ratio may be misleading. That’s why we also total up the number of each interaction type. Showing these two metrics together gives us a solid understanding of the variety and frequency of interaction in a course.

In addition, we are able to calculate reading time vs. other activities, like videos, labs, and simulations, as well as an estimated total course length. So we have language metrics telling us about terminology and style, interaction metrics telling us about variety and frequency, and timing metrics about the various activity types. Combined, those metrics give us an accurate picture of how engaging a course will be, without our having to read a single page.

But, you know, you should still read the course. 🙂 With the QA plugin, though, you know where to focus, what issues you are likely to encounter, and how much work you are likely to need to get the course ready for release.

If you have a use case for the QA plugin, please let us know! We’d be more than happy to feature it here on ditanauts.

QA Plugin Updated! (Finally, right?)

Hi folks, I’m happy to let you know that we have posted a major update to the QA plugin. The ditanauts team owes a huge debt of gratitude to Don Day and Michael Boses for their work on this update. What’s new, you ask? Well…

  • Reports are prettier. The HTML report we generate uses Google Charts to render visual elements.
  • We create a data file (written in DITA) rather than generating the report HTML directly from the DITA input. With the data file, you can then render whatever you want using normal OT processing. The plugin creates an HTML report and a .csv file from the data file.
  • @chunk set automatically on bookmaps. One of the really annoying things with the old version was that you had to set the @chunk attribute manually before a build. That is no longer the case when building from a bookmap!

I’ve updated the install and run sections of the how-to page; I will be updating the customization section soon.
Let us know what you think!