Jun 2016
 

[Ed. Note: This is the third part of a three-part article on the impact of mobile on technical communication.]

How Mobile Will Change Technical Communication, Part Three

By Neil Perlin

What to Start Thinking About

In this section, I’ll discuss technologies that exist but have little presence in technical communication to date. That situation could be because there’s just no fit, but it could be because the fit hasn’t happened yet. (HTML and technical communication were in completely separate silos until Microsoft unexpectedly merged them in 1997 with its release of HTML Help.) So keep an eye on the technologies discussed here.

“True” Apps for Technical Communication

Responsive design lets us create material that can be read on mobile devices, but that material doesn’t look like typical apps. The example on the left is from a responsive output that I created using Flare 11. It looks rough because I simply output an existing project without optimizing it for responsive output, but it is running on a mobile device. The center example is a preliminary app that I did for the New England STC’s Interchange conference in 2015. It looks rough because it was a functional prototype, but it illustrates visual capabilities that responsive output won’t have without additional work. The example on the right is a completed diabetes monitoring app.

Technical communication and apps aren’t the same thing, you say? Perhaps, but there’s no reason why our content can’t be as visual and word-light as an app while adding an app’s responsiveness, location-awareness, and other features.

[Images, left to right: Flare 11 responsive output; Interchange 2015 conference app prototype; diabetes monitoring app]
Technical communicators with whom I discuss apps rarely think of themselves as app developers because of differences in the technologies and the tools. But here’s the main interface for ViziApps, the tool used to create the two apps above (and, in the interest of disclosure, a tool in which I’m a certified consultant and trainer). The functionality differs from what we’re used to in tools like Flare, but the interface will quickly become familiar.

[Image: the ViziApps main interface]

 

If traditional documentation isn’t enough and you need interactivity, more of a visual element, location-sensitivity, and similar features, you may find yourself creating apps in the near future. You’ll be buying and learning new tools and concepts. You’ll also be revisiting issues like what email client users have on their phones (which sounds new but is actually similar to the old issue of what browser, if any, users had on their PCs in the mid-1990s). And you’ll be facing truly new issues like putting help content (data) into a database in order to pass it to or share it with an app.
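
To make that last point concrete, here’s a minimal sketch of help content treated as data that an app can request, rather than as pages. The topic fields and the /api/help endpoint are hypothetical, invented only for illustration; a real implementation would depend on your tool chain and the app’s platform.

```typescript
// A sketch of help content structured as data rather than pages.
// The HelpTopic shape and the /api/help endpoint are hypothetical.

interface HelpTopic {
  id: string;          // stable key the app can request, e.g. "export-settings"
  title: string;
  body: string;        // short, app-friendly text rather than a full page
  keywords: string[];  // used for in-app search
}

// Fetch a single topic for display inside the app's own UI.
async function getHelpTopic(id: string): Promise<HelpTopic> {
  const response = await fetch(`/api/help/${encodeURIComponent(id)}`);
  if (!response.ok) {
    throw new Error(`Help topic "${id}" not found (${response.status})`);
  }
  return (await response.json()) as HelpTopic;
}

// Example: show context-sensitive help for the screen the user is on.
getHelpTopic("export-settings").then((topic) => {
  console.log(topic.title, topic.body);
});
```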

We’re going to have to think differently about our content in order to make it work with apps and take advantage of the capabilities they offer.

New Interfaces

Keyboards and screens have been the main interface for years and work fine, but not for mobile, where users may not have the space for them. (Although technology may be changing the screen issue; see this video for an idea of what new glass technology may do to displays in the years ahead.)

The likely replacement seems to be voice interfaces like Apple’s Siri and the Amazon Echo. Another interesting idea is Google Now on Tap, an intelligent assistant that can offer help about what’s on an app screen without the user having to leave the app.

[Image: Siri voice interface]

Working in this environment means writing content to be spoken instead of read, or both. We may also have to get involved in the technical side of audio, such as the CSS standard’s audio properties like pitch and pause-before. I haven’t heard of any technical communication projects using voice as an interface yet, but the technology is out there waiting. (If you are using voice for the interface on an online help project, please contact me.)
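
As a rough illustration of what the scripting side of spoken content might involve, here’s a minimal sketch using the browser’s Web Speech API, which is a different mechanism from the CSS audio properties just mentioned but exposes similar controls such as pitch and rate. Browser support varies, so treat it purely as a sketch.

```typescript
// A minimal sketch of spoken help using the browser's Web Speech API.
// This is not the CSS aural-property approach; it is a related browser
// mechanism with similar controls (pitch, rate).

function speakHelp(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.pitch = 1.1; // slightly higher than the default of 1
  utterance.rate = 0.9;  // slightly slower, which tends to help comprehension
  window.speechSynthesis.speak(utterance);
}

// Content written to be spoken: short sentences, no visual references.
speakHelp("To export your settings, open the menu, then choose Export.");
```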

Adaptive Content

One looming problem with creating content to be used on multiple platforms or in multiple scenarios is content adaptation. Consider the “click” vs. “tap” issue when content may be used on both desktops and mobile devices, such as for responsive design. How do we deal with this? (You’ll find a discussion of this issue on LinkedIn here.)

One answer is to use intermediate words like “select”, but this can be ambiguous or just sound clunky if users have to select an item from a list and then “select OK”. An alternative is to create computer-driven adaptive content that uses CSS and media queries to test for the device display size and change the text accordingly. Mike Hamilton from MadCap Software and I created a prototype of this for Flare in 2014. The code isn’t pretty but it’s worked nicely in every test I’ve run. (Email me if you’d like to see the code.)

Note that this use of CSS also ties back to my earlier point about becoming more technical.
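
The Flare prototype mentioned above does its work entirely in CSS and media queries. As a rough sketch of the same idea, here’s how the term swap could also be driven from script using the matchMedia API, which evaluates a media query from code. The 768-pixel breakpoint and the device-term class name are assumptions made up for this example.

```typescript
// A sketch of display-size-driven term swapping via the matchMedia API.
// The breakpoint and the "device-term" class name are assumptions; the
// CSS-only prototype mentioned in the text works differently under the hood.

const smallScreen = window.matchMedia("(max-width: 768px)");

function applyDeviceTerms(matches: boolean): void {
  // Every element marked with the class gets "tap" on small screens, "click" otherwise.
  document.querySelectorAll(".device-term").forEach((el) => {
    el.textContent = matches ? "tap" : "click";
  });
}

// Set the terms on load and update them if the viewport changes.
applyDeviceTerms(smallScreen.matches);
smallScreen.addEventListener("change", (event) => applyDeviceTerms(event.matches));
```

In the topic source, the verb just needs to carry that class so the script can find and update it.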

Content Customization

Content customization has been with us for years on the web as user preferences are tracked by various types of analytics software. You see it every time Amazon presents you with a list of options based on your previous purchase decisions. That tracked information can be thought of as a “data halo” (a Cognizant term) around users. Most of it today applies to ecommerce, but it may apply to technical communication as well.

For example, if certain users access the same help page or search on the same subject daily, analytics can keep track of that and let you offer that page or search result automatically when conditions are the same as on previous occasions.

You can collect the analytics data to do this using a general technology like Google Analytics or, if you’re a Flare shop, MadCap’s Pulse add-on.
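
Whichever tool collects the data, the underlying logic is straightforward. Here’s a deliberately simplified, tool-agnostic sketch that counts topic visits in the browser’s localStorage and suggests the most frequently visited topic; a real system would pull the counts from your analytics service rather than from local storage.

```typescript
// A tool-agnostic sketch of "offer the page this user opens most often."
// localStorage stands in for a real analytics back end.

const STORAGE_KEY = "helpTopicVisits";

function recordVisit(topicId: string): void {
  const counts: Record<string, number> = JSON.parse(
    localStorage.getItem(STORAGE_KEY) ?? "{}"
  );
  counts[topicId] = (counts[topicId] ?? 0) + 1;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(counts));
}

function suggestTopic(): string | undefined {
  const counts: Record<string, number> = JSON.parse(
    localStorage.getItem(STORAGE_KEY) ?? "{}"
  );
  // Return the topic this user opens most often, if any.
  return Object.keys(counts).sort((a, b) => counts[b] - counts[a])[0];
}

recordVisit("resetting-your-password");
console.log(suggestTopic()); // "resetting-your-password"
```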

However you do it, it’s a good idea to start collecting analytics data now. If you want to create customized content in the future, you’ll already have the data foundation in place. And even if you don’t think customization is in your future, the collected search data will still tell you what users are looking for and what they can and cannot find, information that will help you improve the online help and the product it describes.

Cross-Device State Preservation

Finally, consider that “mobile experience” users may find themselves starting to read your material on a PC in the office, then going out on the road and wanting to keep reading it on a tablet or phone, or having it read to them while driving. Easy: just re-open the material and read it again. But what if users want to continue where they left off on the desktop, on a completely different device, without having to restart from the top?

I saw the idea mentioned once in 2015 and most recently in a Bright Ideas for 2016 article in the January 2016 issue of Wired. (The idea is so new that it doesn’t even seem to have a name. I just made up the term “cross-device state preservation” in order to have a reference in conversations.) No doc group that I’ve spoken with has gone far enough down the mobile path for this to be an issue, but one lesson from the rise of online help is that things change and that it’s better to do things correctly from the beginning than to have to go back and fix them afterwards.

I’ve spoken with several engineering managers about how cross-device state preservation might work. It will require that the material be stored on a server and dynamically reformatted for whatever device is being used to read it, plus internet access (obviously), plus a login to let the server and formatting software know what material to format for which user, plus syntactically correct code, plus extensive use of metadata. All these requirements make cross-device state preservation the most esoteric of all the new mobile experience technologies.
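
To make those requirements a bit more concrete, here’s a hypothetical sketch of the server round trip: reading position is keyed to a logged-in user and a document, not to any one device. The ReadingState fields and the /api/reading-state endpoint are inventions for illustration only.

```typescript
// A hypothetical sketch of cross-device state preservation.
// The shape and endpoint are made up; the point is that position lives on
// a server, keyed by user and document, with metadata to resolve conflicts.

interface ReadingState {
  userId: string;    // comes from the login mentioned above
  docId: string;     // identifies the material, independent of format
  position: string;  // e.g. an anchor or element id, resolvable on any device
  updatedAt: string; // ISO timestamp so the newest device wins
}

async function saveReadingState(state: ReadingState): Promise<void> {
  await fetch("/api/reading-state", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(state),
  });
}

async function loadReadingState(userId: string, docId: string): Promise<ReadingState | null> {
  const response = await fetch(
    `/api/reading-state?user=${encodeURIComponent(userId)}&doc=${encodeURIComponent(docId)}`
  );
  return response.ok ? ((await response.json()) as ReadingState) : null;
}
```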

Summary

Some of the effects of mobile on technical communication are obvious – new tool and technology skills, huge changes in how we create content, new problems (and new forms of old problems), plus changing business models and intra-company relationships. But the biggest effect will be the emergence of new and challenging lines of work that I expect will take our field in unexpected directions and bring a younger generation into it.

Thanks to…

The following people for their direct and indirect contributions to the subject of mobile and tech comm and this article:

  • Rhonda Truitt and Sally Martir, Huawei, organizers of Huawei’s Mobile Info Think Tank
  • Mike Hamilton, MadCap Software, for his work on CSS-based adaptive content
  • Nicky Bleiel and Bernard Aschwanden, STC, presenters with me at Huawei’s Mobile Info Think Tank
  • David Kay, D.B. Kay and Assoc., presenter at Huawei’s Mobile Info Think Tank
  • Kevin Benedict, Cognizant’s Center for the Future of Work, presenter at Huawei’s Mobile Info Think Tank
  • Joe Barkai, Product and Market Strategist and Catalyst, presenter at Huawei’s Mobile Info Think Tank
  • George Adams, CEO, ViziApps
  • Michael Kuperstein, CTO, ViziApps
  • Charles Cooper, Rockley Assoc., presenter at Huawei’s Mobile Info Think Tank
  • Neal, Carol, and Katherine for serving as an impromptu sounding board at the picnic table at TCUK’15

About the Author

Neil is president of Hyper/Word Services (www.hyperword.com) of Tewksbury, MA. He has many years of experience in technical writing, 31 of them in training, consulting, and developing for online formats and outputs ranging from WinHelp to mobile apps, and tools ranging from RoboHelp and Doc-To-Help to Flare and ViziApps. He has also been working in “mobile” since 1998.

Neil is MadCap-certified in Flare and Mimic, Adobe-certified for RoboHelp, and ViziApps-certified for the ViziApps Studio mobile app development platform. He is an STC Fellow, founded and managed the Bleeding Edge stream at the STC Summit, and was a long-time columnist for STC Intercom, IEEE, and various other publications. You can reach him at nperlin@nperlin.cnc.net.