How conversational interfaces make the internet more accessible for everyone


In 2004, human-computer interaction professor Alan Dix published the third edition of Human-Computer Interaction along with his colleagues, Janet Finlay, Gregory Abowd, and Russell Beale. In a chapter called “The Interaction,” the authors wrote a section on natural language that ran about a page within the roughly 40-page chapter.

“Perhaps the most attractive means of communicating with computers, at least at first glance, is by natural language,” they wrote. “Users, unable to remember a command or lost in a hierarchy of menus, may long for the computer that is able to understand instructions expressed in everyday words!” The possibilities for accessibility here are obvious. An interface that doesn’t depend on users being able to recall specific commands or methods of interaction means the interface is by its nature accessible: each person uses the system in his or her own way, so every use case is accounted for. Users no longer have to translate their intents to actions. Now the intent is the action.

The book section doesn’t end as optimistically: “Unfortunately, however, the ambiguity of natural language makes it very difficult for a machine to understand…. Given these problems, it seems unlikely that a general natural language will be available for some time.”

This story is part of a series on bringing the journalism we produce to as many people as possible, regardless of language, access to technology, or physical capability. Find the series introduction, as well as a list of published stories, here.

Less than 15 years later, conversational interfaces have crept into every facet of our everyday lives, with Slack bots available for anything from setting up meetings across teams to managing your to-do lists to helping you play werewolf in your channels. News has seen its fair share of conversational UI too: Quartz’s conversational news app delivers headlines to your phone that you respond to with pre-defined chat messages (although on the other end is a human being writing those snippets, not a robot), TechCrunch has launched a Messenger bot to personalize the tech news you get, and there’s even NewsBot, a Google Chrome extension that allows you to curate the news stream you want to get from a variety of sources.

To be fair to Dix et al., they weren’t necessarily talking about chat bots when they said that natural language interaction was a long way off. When they stressed the challenges presented by building a natural language interface, they mostly meant an interface capable of understanding anything at all, rather than a restricted subset of our natural language. Chat bots depend on defining a set of commands and actions that the bot can interpret. There may be some leeway when the bot interprets imprecise or vague language like “next week” or “with George,” but ultimately the bot’s capabilities are well-defined by the developer.
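To make that concrete, here is a minimal sketch, in Python, of the kind of restricted command matching a chat bot might do behind the scenes. The command names and patterns are invented for illustration; a real bot would layer far more sophisticated language understanding (and its platform’s own APIs) on top of this idea.

```python
import re

# Hypothetical commands the bot knows how to handle, each with a crude pattern.
COMMANDS = [
    ("schedule_meeting", re.compile(r"\b(meet|meeting|schedule)\b", re.I)),
    ("add_todo",         re.compile(r"\b(remind me|to-?do|task)\b", re.I)),
    ("get_news",         re.compile(r"\b(news|headlines?)\b", re.I)),
]

def interpret(message: str) -> str:
    """Return the first known command whose pattern matches, or a fallback."""
    for command, pattern in COMMANDS:
        if pattern.search(message):
            return command
    return "unknown"  # the bot would answer "Sorry, I don't know how to do that"

print(interpret("Can you schedule a meeting with George next week?"))  # schedule_meeting
print(interpret("What are today's headlines?"))                        # get_news
print(interpret("Help me compile research for an article"))            # unknown
```

However friendly the conversation feels, everything still funnels into that finite command list, which is exactly the distinction the authors were drawing.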

Still, a future where artificial intelligence systems are able to interpret a wide range of natural commands by voice doesn’t seem too far off. Companies have already started investing heavily in natural language processing and the machine learning needed to support it. Apple’s Siri, Google’s Google Now, and Amazon’s Alexa (the AI that lives in the Echo) all interpret voice commands and act as personal assistants, and their features and functionalities only improve as more people use them. Opportunities to support blind users are being explored as we speak, with huge potential as conversational UI seeps further into our everyday lives.

Before we can discuss accessibility in conversational interfaces, however, it’s important to note where these interfaces excel, especially in comparison to graphical interfaces. Conversational interfaces are very good for doing one thing at a time when you know what you want. It would be difficult for a bot to do something like help you compile research before writing an article, because that typically requires a lot of browsing, listing multiple sources at once, and cross-referencing between them. But if you want to book a flight to London for next Tuesday or know who won the primary in your state, that’s something a bot can handle gracefully. Like in a real conversation, you ask someone a question, the person you’re talking to responds with an answer, and the two of you may go back and forth before you arrive at a mutual answer.

Where conversational interfaces really shine, however, and the reason these interfaces have been utilized so much by bots and are so closely related to the field of artificial intelligence, is that you can get better, more helpful answers the more your bot knows about you and the context of your request. If you asked a human being for a recommendation for a place to eat next week, there are a number of facts that would make it easier for that person to answer: where you were, what kinds of food you liked, any allergies you had, how much you’d like to spend, and so on.
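As a rough illustration of that point, here is a toy sketch, with entirely made-up restaurants and preferences, of how a bot that already knows your city, tastes, allergies, and budget can narrow a vague question down to a single answer.

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    """What the bot has learned about you so far (all fields hypothetical)."""
    city: str = ""
    likes: set = field(default_factory=set)
    allergies: set = field(default_factory=set)
    budget: float = float("inf")

# A made-up catalogue the bot can search.
RESTAURANTS = [
    {"name": "Luigi's", "city": "London", "cuisine": "italian",  "allergens": {"gluten"}, "price": 25},
    {"name": "Sakura",  "city": "London", "cuisine": "japanese", "allergens": set(),      "price": 40},
    {"name": "El Toro", "city": "Madrid", "cuisine": "spanish",  "allergens": set(),      "price": 30},
]

def recommend(ctx: UserContext):
    """Filter by everything the bot knows and return one trustworthy answer."""
    matches = [r for r in RESTAURANTS
               if r["city"] == ctx.city
               and r["cuisine"] in ctx.likes
               and not (r["allergens"] & ctx.allergies)
               and r["price"] <= ctx.budget]
    return matches[0]["name"] if matches else None

ctx = UserContext(city="London", likes={"japanese", "italian"},
                  allergies={"gluten"}, budget=50)
print(recommend(ctx))  # Sakura: one result, tailored to the stored context
```

The more fields the context holds, the fewer clarifying questions the bot has to ask, which is what keeps the exchange feeling like a conversation rather than a form.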

Matty Mariansky, co-founder and product designer at Meekan, a Slack bot that schedules meetings across teams, describes the ideal bot as a search engine that gives you one result.

“Searching Google, instead of getting 20 pages of results, you would only get the one perfect result that is perfect for you and the time you are asking and the location you are in, perfect for your situation at this specific moment you are in,” Mariansky says. “It knows everything about you and it gives you the one single result you can trust. Replace that with anything. Any type of application you’re looking for, this would be the endgame, this is where it should go.”

Meekan
Meekan, a meeting scheduling robot, syncs calendars across teams to find the best meeting times for everyone.

This kind of insight is not necessarily unique to conversational interfaces. However, it is a key component in keeping up the conversational illusion that the robot you’re talking to is actually a knowledgeable human being that you can trust. And that’s where the biggest gain in accessibility comes in.

In removing a graphical user interface and replacing it with a message field or even just a voice, users are placed in a much more familiar context. They may not know exactly what they can ask the robot to do (that’s something the bot has to lay out when introducing itself), but they understand the input and can specify what they want in whatever way feels most comfortable to them. Users feel more natural getting feedback from the system, whether that’s an error message (“Sorry, I don’t know how to do that”) or a confirmation (“Okay, Wednesday at 11am it is then”).

Three tools to help you make colorblind-friendly graphics

I am one of the 8% of men of Northern European descent who suffers from red-green colorblindness. Specifically, I have a mild case of protanomaly (a less severe form of protanopia), which means the red-sensing cones in my retinas don’t respond to red wavelengths the way they should. To me some purples appear closer to blue; some oranges and light greens appear closer to yellow; dark greens and browns are sometimes indistinguishable.

Wikimedia Commons
Various color spectrums for different color vision deficiencies.

Most of the time this has little impact on my day-to-day life, but as a news consumer and designer I often find myself struggling to read certain visualizations because my eyes just can’t distinguish the color scheme. (If you’re not colorblind and are interested in experiencing it, check out Dan Kaminsky’s iPhone app DanKam, which uses augmented reality to let you experience the world through different types of color vision.)

As information architects, data visualizers and web designers, we need to make our work accessible to as many people as possible, which includes people with colorblindness.

Color is critical

Color is frequently used to quickly convey meaning. It’s an important choice for any visualization, but picking a palette that’s attractive, informative, and easily distinguishable by colorblind people trips up many designers.

The NPR Visuals team worked through these challenges this spring in a visualization for a story on how school districts spend money.

“Early on, after doing some exploration of the data, we knew we were going to do a district-level choropleth map,” said Katie Park, deputy graphics editor at NPR. “We thought that it would be easier to read with a diverging color palette where the center value was the U.S. average. When you get into divergent color palettes you realize that colorblindness might become an issue.”

NPR Visuals
The NPR graphics team knew they wanted a divergent choropleth map to help show the differences in school spending by district. After considering accessibility for colorblind people, this was the palette they chose.
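For readers who want to try the same idea in code, here is a small sketch of a diverging palette anchored at a national average rather than at the midpoint of the data range. It uses matplotlib’s TwoSlopeNorm and the ColorBrewer “PuOr” palette bundled with matplotlib; the spending figures and the average below are hypothetical, not NPR’s actual numbers.

```python
import numpy as np
import matplotlib as mpl
import matplotlib.colors as mcolors

spending = np.array([6500, 9000, 11000, 12500, 18000])  # hypothetical per-pupil dollars
us_average = 11000                                       # hypothetical national average

# Pin the neutral midpoint of the diverging palette to the national average,
# so values above and below the average get visibly different hues.
norm = mcolors.TwoSlopeNorm(vmin=spending.min(), vcenter=us_average, vmax=spending.max())
cmap = mpl.colormaps["PuOr"]  # a ColorBrewer diverging palette shipped with matplotlib

colors = cmap(norm(spending))  # one RGBA color per district, centered on the average
for value, rgba in zip(spending, colors):
    print(value, mcolors.to_hex(rgba))
```

Centering the scale this way is what lets a reader answer “is this district above or below average?” at a glance, without the palette relying on red versus green.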

Park said that the default color schemes designers might use — like green for positive values and red for negative values — tend to cause problems for colorblind readers.

Meanwhile, common colorblind-friendly palettes like magenta/green or orange/purple are devoid of meaning. Pink, for example, doesn’t convey negativity like red does. Others, like blue/red, are already imbued with too much cultural meaning and would be confusing in a non-political map.

Sometimes you can get away with using a single hue and varying its lightness. But sometimes a project calls for a multi-hue scale or a diverging scale. (If you’re interested in reading more, The New York Times’ Gregor Aisch wrote a blog post about his library chroma.js that goes into more detail on multi-hue scales.)
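If you want to experiment outside the browser, the interpolation chroma.js performs can be sketched in Python with matplotlib: pick a few anchor colors and let the library fill in the in-between values. The hex codes below are example anchors, not a recommendation.

```python
import matplotlib.colors as mcolors

# Option 1: a single hue whose lightness varies from near-white to dark.
single_hue = mcolors.LinearSegmentedColormap.from_list(
    "blues", ["#f7fbff", "#08306b"])

# Option 2: a diverging, orange-to-purple scale through a neutral midpoint.
multi_hue = mcolors.LinearSegmentedColormap.from_list(
    "orange_purple", ["#e66101", "#f7f7f7", "#5e3c99"])

# Sample five evenly spaced steps from each scale to see the interpolated colors.
for cmap in (single_hue, multi_hue):
    steps = [mcolors.to_hex(cmap(x)) for x in (0, 0.25, 0.5, 0.75, 1.0)]
    print(cmap.name, steps)
```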

Three tools to help

There are a few simple tools to help ensure that your projects are colorblind-friendly.

  1. Start by using the color schemes on ColorBrewer, which gives you sequential, diverging, and categorical (sometimes called qualitative) palettes that are colorblind safe. You can use these and modify them to fit your style guide. “I try to find a good compromise between our colors, the colors that work in ColorBrewer, and the things that look good on the page,” Park said, noting that she’ll tinker with the colors in Illustrator until she’s happy with the palette. “Usually my trick is to tweak the shades a little bit so that the greens have a little bit more blue in them,” Park added. “But the problem with that is that if your greens get too blue then you start to look like you have a political map or the colors just don’t read as intuitively.”

    ColorBrewer
    ColorBrewer generates colorblind-friendly color palettes. NPR’s Katie Park starts with the colors here and then tweaks to find colors that look good on the page.
  2. Gregor Aisch’s chroma tool is also useful for optimizing your diverging color palettes. It can help you take two or more colors and generate a full scale of in-between values.

    Color Scale Helper
    The chroma.js-powered Color Scale Helper is an easy way to generate sequential and diverging color palettes.
  3. Before publishing, you should check your work. Color Oracle and Sim Daltonism both let non-colorblind people simulate colorblindness on their screens. (If you’d rather script that check, see the sketch after this list.)

    Flickr/Richard Ricciardi
    Sim Daltonism lets you see your visualizations as though you were colorblind.
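If you’d like to fold that check into a build script rather than eyeballing a palette in a desktop app, one option is the colorspacious Python library, which can simulate color vision deficiency numerically. The palette below is just an example, and this is a sketch, not a replacement for the tools above.

```python
import numpy as np
from colorspacious import cspace_convert  # pip install colorspacious

# An example orange / near-white / purple palette, expressed as sRGB in [0, 1].
palette = np.array([[0.90, 0.36, 0.00],
                    [0.97, 0.97, 0.97],
                    [0.37, 0.24, 0.60]])

# Simulate strong protanomaly (red-weak vision) and clip back into gamut.
cvd_space = {"name": "sRGB1+CVD", "cvd_type": "protanomaly", "severity": 100}
simulated = np.clip(cspace_convert(palette, cvd_space, "sRGB1"), 0, 1)

for original, seen in zip(palette, simulated):
    print(np.round(original, 2), "->", np.round(seen, 2))
```

If two colors that should contrast end up nearly identical after simulation, that’s your cue to go back to ColorBrewer and adjust.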

The great thing about picking a palette is that once you have it, you can use it again and again.

Taking a few minutes before publishing a project to make sure it’s colorblind-friendly is an easy way to give your work a bigger impact.