Information Development World 2015

Last week I attended the Information Development World conference in San Jose. My presentation, “Dynamic chunking of component-authored information,” covered how we present tech docs on my company’s support portal and why we’re doing it that way. Reception was favorable, with interest from several companies and researchers. A representative from the DITA technical committee asked about adding a JSON transform to the Open Toolkit.

There were some interesting presentations, so I thought I’d summarize the key takeaways. The full agenda is here: https://www.eiseverywhere.com/ehome/113382/schedule/

 

Unforgettable: The neuroscience of memorable content (Dr. Carmen Simon)

The user experience of interacting with information is shaped largely by expectations, and information developers can take steps to shape those expectations. Expectations = Beliefs + Tools

 

A radical new way to control the English Language (Dr. George Gopen)

Expectations again: readers have fixed expectations about where to look for what in a text. The key insight is that readers expect the most important words in a sentence at the end. (BC: The concept of information structure from linguistics leads to much the same conclusion.)

His SciAm article from 1990 is recognized as one of 36 classic articles from the publication’s history. His columns for the journal Litigation are short and digestible: http://georgegopen.com/articles/litigation/

 

Open authoring: Content collaboration across disciplines (Ralph Squillace, MS Azure)

MS Azure documentation practices enable broad collaboration using GitHub plus markdown. Docs undergo a freshness review and are updated or discarded every 3 months (BC: we should be able to get reports about potentially stale topics from Git). Key metrics: freshness, performance, satisfaction. They conduct periodic “hackadocs” with SMEs to create/update documentation.
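That freshness report shouldn’t be hard to build. Here’s a minimal Python sketch of the idea, assuming topics are .dita files in a local Git clone and borrowing Azure’s three-month cadence as the staleness threshold:

```python
# Flag potentially stale topics: files whose last commit is older than ~3 months.
# Assumes the script runs from the root of a local Git clone containing .dita files.
import subprocess
from datetime import datetime, timedelta, timezone
from pathlib import Path

STALE_AFTER = timedelta(days=90)

def last_commit_date(path):
    # %cI = committer date in strict ISO 8601 format
    out = subprocess.run(
        ["git", "log", "-1", "--format=%cI", "--", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return datetime.fromisoformat(out) if out else None

def stale_topics(root="."):
    now = datetime.now(timezone.utc)
    for topic in sorted(Path(root).rglob("*.dita")):
        date = last_commit_date(topic)
        if date and now - date > STALE_AFTER:
            yield topic, date

if __name__ == "__main__":
    for topic, date in stale_topics():
        print(f"{date:%Y-%m-%d}  {topic}")
```

A real report would probably also want to distinguish trivial commits (typo fixes) from substantive updates, but last-commit date is a decent first cut.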

 

DocOps (Wade Clements, CA)

Inspired by the DevOps approach. Move from trying to get content perfect before publishing to being able to make corrections and adjustments quickly, and work from data, not anecdotes. Capture referrals from context-sensitive help in the UI to the docs. Metrics use case: predict whether a user who comes to the docs will ultimately open a support case.
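Capturing those help-to-docs referrals can be as simple as tallying referrers in the web server logs. A rough Python sketch, assuming a combined-format access log and a hypothetical help host name:

```python
# Tally which docs pages users land on from context-sensitive help links.
# Assumes a combined-format access log where the referrer is the quoted field
# after the status and size; "help.example.com" is a hypothetical help host.
import re
from collections import Counter

LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+)[^"]*" \d+ \S+ "(?P<referrer>[^"]*)"')
HELP_HOST = "help.example.com"

def referral_counts(lines):
    """Count docs-page hits whose referrer came from the in-product help."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m and HELP_HOST in m.group("referrer"):
            counts[m.group("path")] += 1
    return counts
```

Join those paths against support-case timestamps by session or user ID and you have the raw material for the prediction use case above.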

 

Work smarter not harder (Skip Besthoff, InboundWriter)

By analogy to botany: some pieces of content are perennials (ongoing use, long-term interest), and some are annuals (one-time use). Focus on creating better content (perennials), not more content. The average cost of a piece of content in the enterprise is $900 (BC: according to marketing consultant Jay Baer; I think technical content costs more over its lifecycle; or maybe it’s $900/page-ish). Move from using simple keywords to topic clusters.

 

Going mapless (Don Day, founding chair of OASIS DITA technical committee)

Some use cases for DITA may not require maps for top-level navigation. Going mapless, though, requires robust search, tags/keywords, and topic-to-topic cross-references. Mapless DITA was implemented in wikis for “The Language of” series from XML Press: http://tlowiki.com/ See also expeDITA: http://expedita.info/ (BC: I’m not ready to jettison maps…yet.)

 

Single-sourcing publishing across multiple formats (George Bina, oXygen)

Specifically, publishing from multiple input formats (such as Excel, CSV, Markdown, and SVG) via dynamic transformation to DITA. It’s actually real: https://github.com/oxygenxml/dita-glass

 

Past, present, and future of DITA (Kristen Eberlein, OASIS DITA technical committee)

DITA 1.3 spec is complete and will be officially released in mid-December. It includes several interesting new features: troubleshooting topic type, classification domain/map, SVG domain, doc release notes capability.

Reviews were done using DITAweb: http://about.ditaweb.com/

DITA 2.0 won’t be out for about 5 years. The plan is to include Lightweight DITA.

 


Word Conversion Tool

I initially drafted this post about a Word conversion tool two years ago and, for some forgotten reason, never published it. Eventually the project was scrapped, but we did use the tool for quite a while. I’ve gone back, recast the article in the past tense, and added some reflection. I think there’s still some good stuff in here…

Word. It ain’t going away anytime soon. As an evangelist for DITA, and for structured authoring more generally, I’ve long loathed the Wild West approach to documentation that Word engenders. Okay, sure, it is easy to use. It doesn’t really require any training. I suppose, yes, one could argue that writing in Word lets authors focus on the content rather than on “tagging.” (I think that argument is faulty, as I’m sure you do, but we won’t go into that here.) The point is: if Word is going to be here a while, what might we do about it?

Continue reading

Reporting on your repository with PowerShell, part 2

A couple of months ago, some developers and support engineers were looking over some documentation and said to me, “These procedures are too complicated!” To which I said, “I know! I made them as simple as possible, but I can do only so much within the constraints of the interface.” Then the engineers asked me an astounding question: “Can you give us a complexity measure for each procedure so we know where to start making things simpler?” Because I knew how to use PowerShell to get information out of my set of DITA topics, I calmly said, “Let me look into it,” while inside I was bursting with excitement. Continue reading
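The post walks through the PowerShell version, but the underlying idea is simple enough to sketch in a few lines of Python: count the step, substep, and choice elements in each task topic and rank the topics. The weights here are a hypothetical starting point, not anything from the post:

```python
# Crude "procedure complexity" score for DITA task topics: count steps,
# substeps, and choices, then rank topics from most to least complex.
# The weights are a hypothetical starting point; tune them to taste.
# Note: ET.parse won't resolve DOCTYPE entities, so this assumes plain XML.
import xml.etree.ElementTree as ET
from pathlib import Path

WEIGHTS = {"step": 1, "substep": 2, "choice": 1}  # nested work costs more

def complexity(path):
    """Weighted count of procedure elements in one topic file."""
    root = ET.parse(path).getroot()
    return sum(
        weight * len(root.findall(f".//{tag}"))
        for tag, weight in WEIGHTS.items()
    )

def rank_topics(folder):
    """Return (path, score) pairs, most complex first."""
    scores = {p: complexity(p) for p in Path(folder).rglob("*.dita")}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for path, score in rank_topics("."):
        print(f"{score:4d}  {path}")
```

The top of that ranked list is exactly the “where to start making things simpler” answer the engineers asked for.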

STC Webinar: Simplify DITA Authoring with Constraints

On Tuesday, June 19th at 9pm EDT, I’ll be presenting an STC webinar about constraints, specifically on how to download, install, and customize the ditanauts constraints example plugin. Hope to see you there!

To register, go to: http://www.stc.org/education/online-education/live-seminars/item/simplify-dita-authoring-with-constraints?category_id=53

After the webinar, please feel free to post comments and questions here.

Spambots!

Yesterday we got over 30 comments on a variety of posts, and I was stoked! The comments seemed intelligent at first blush and didn’t contain any links… so I didn’t think they were spam. That is, until I started trying to match them up with the subjects of the posts. For example, on the post about Automating Tasks in a CMIS Repo, which discusses Python, there was a comment that discussed a particular Python API… only, that API did not have anything to do with the post and did not follow logically from the post it was replying to.

There were others that related particular problems with XSLT… but didn’t really ask a question or have anything to do with DITA. Like this one:

"hi Mukul,i have a problem. Can you 
plesae provide me a good suggestion.the expalnation for the problem
is as follows: i have a xslt code which transforms a xml to xsd.
i want to throw an error as the output when i execute the xslt if
the schema generated by the xslt is not valid. So this should stop
the schema generation also. so the output should only be an error
message without the generation of the schemas"

That almost makes sense, except that there are no approved comments by “Mukul”. Or this one:

If you’re pursuing the beenift of XML to get the separation between
form and content, why do you want to reintroduce the requirement to
do output by hand?

OK, fair question… only the original comment was about a Facebook like button. So bizarre.

At any rate, I just deleted a ton of comments from the queue. My sincere apologies if any of them were legit. I’m pretty sure the multi-page poem in Japanese, along with its English translation, was not legit. But still… why?

Review: DITA for Practitioners, Volume 1: Architecture and Technology

Eliot Kimber has done a great job of compiling relevant, actionable guidelines and practices in the first volume of DITA for Practitioners. I fall into the “those with prior DITA experience” category. As a self-taught DITA (and XML, for that matter) user, I found a lot here that filled in the gaps in my knowledge. (Especially helpful was the section on essential terminology.) While I skimmed over some of the basic info in Chapters 2 and 3, new users will find a thorough explanation of how to get up and running, writing and producing output with DITA and the Open Toolkit.

In later chapters, Eliot goes into how to install, run, and make basic customizations to the toolkit. Even though I’ve created lots of plugins, I’m certain I’ll come back to the sections where he explains extension points and best practices for creating Ant targets. Part 2 builds on the foundation set in Part 1, layering in complexities like specialization, compound maps, vocabularies, reuse, and more. (I’m still trying to wrap my brain around Chapter 8 on linking and addressing.)

In short, I wish I’d had this book when I started out implementing DITA four years ago. I’m certainly glad I have it now.

Impressions from DITA NA 2012

DITA NA 2012 has come and gone. This year the conference, held in beautiful San Diego, boasted a record 318 attendees and added an emerging technologies track.

I’ve presented at this conference for the last several years, and my impression this year is that the level of discourse has noticeably risen. There were more technical topics. Presenters shared more best practices, more concrete experiences, and more practical advice than in years past, when many discussions were more or less theoretical. Instead of “this is what will/might/should happen,” I heard more of what did happen and what is happening. It sounds cliché and, yes, self-serving to say it, but this is an exciting time to be “in” DITA.

For me, these were the highlights of the conference:

  • Steve Anderson mentioned the QA plugin in his presentation “Automation and testing DITA OT Content and Customizations”! I was totally stoked.
  • George Bina showed us his RELAX NG plugin, which reproduces the DITA DTDs in an easier-to-manipulate format. By combining RELAX NG with Schematron, you can deeply customize your authoring experience, both constraining elements and attributes and providing inline guidelines to authors.
  • Michael Boses also discussed the awesomeness of Schematron. Ok, fine, I’m convinced. We’ll rewrite the QA plugin in Schematron.
  • Bryan Schnabel showed his XLIFF round-trip plugin, which converts DITA to and from XLIFF. I just converted a document to XLIFF. It was glorious. I am going to be all over this one.
  • Mat Verghese from Citrix laid out a detailed and solid vision for raising the value and esteem of content strategists.
  • Keith Schengili-Roberts, in his keynote, gave me some great ideas for additions to the QA plugin, like calculating Flesch-Kincaid reading scale values. More to come on that front.
  • Mark Baker discussed the use cases for his SPFE architecture, which is a different solution to many of the problems DITA implementors face. I particularly liked the idea of automagically creating links based on string matches. It’d be cool if the QA plugin’s link report could suggest new links based on the content….hmmm…
  • I learned you can directly style XML with CSS! Who knew? There must be some great applications for this.
  • And of course, Eliot Kimber released his new book on implementing DITA. I’ll be posting a review sooner rather than later.

All told, a great conference.