Accessibility Gotchas: Introduction

Accessibility is about enabling everybody to use your app, even when disabilities or environmental constraints prevent the use of conventional user interfaces. Enabling applications for accessibility opens up new and enthusiastic markets. You can mark your app as accessible so it shows up in searches, and you can crow about the specific features enabled. Accessibility features have broader impact than just accessibility, and taking accessibility into account will lead to generally better apps.

The HTML, WinJS, and Xaml libraries have many accessibility features by default, and if you just create an app with the Visual Studio templates it will be pretty much accessible straight out of the box. The delight of Windows Store apps is that they are highly customizable from that point; the problem is that if you don't think about it, you can accidentally code accessibility out of your app. If you are aware of what to look for, it is pretty easy to keep the app accessible.

In this series I’m going to go over some of the most common gotchas that we encounter in apps. When we’re done you’ll recognize these problems before they bite your code, you’ll be able to harden your code against these problems, and you’ll end up with better apps!

I tend to think of two major classes of accessibility enablement:

  1. Basic design issues that affect the app all the time
  2. Specific accessibility enablement features

The first category includes things like using large enough targets that people with limited motor ability can still hit them, appropriate use (and non-use) of color, high resolution support, and good support for multiple input modes (mouse, keyboard, and touch).

The second category includes support for screen readers and automation and support for high contrast modes.

There isn’t a hard line between these categories, and over time accessibility specific features tend to move into the mainstream.

High resolution support is a good example of one which has moved into the mainstream. Eighteen years ago, a low-vision friend of mine ran his 21 inch monitor in 640x480 mode. More recently, the “large font” settings would magnify text so the screen was easier to read. The same system also allows well-designed apps to run properly on increasingly popular high-resolution displays: when the Surface Pro shipped with a 208ppi screen, inflexible apps displayed teeny-tiny fonts and were difficult to use. Apps which flexibly supported high resolution just looked great (see the Guidelines for window sizes and scaling to screens).

Screen reader support is a good example of one that is becoming more common. The same UI Automation framework that assistive technology products like screen readers use to read and control apps can be used for automated testing and for speech recognition. By supporting UI Automation, your app will be more accessible, more testable, and more usable.

In many cases UI Automation can make good guesses as to what to report based on captions and control types, but in some cases the app needs to provide semantic information. UI Automation can’t guess what an image is supposed to represent. For our first gotcha, next time I’ll talk about an area where several major apps have missed their automation tags and explain the one line of code which will avoid the problem.
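To give a flavor of what providing that semantic information looks like (the specific gotcha and fix are next time's topic), the general pattern in Xaml is to attach an accessible name to the element with the AutomationProperties.Name attached property. The element and file names below are hypothetical, just for illustration:

```xml
<!-- Without an accessible name, a screen reader has nothing useful
     to announce for this image. AutomationProperties.Name gives
     UI Automation a human-readable label to report. -->
<Image Source="Assets/search.png"
       AutomationProperties.Name="Search" />
```

In HTML apps, the img element's alt attribute plays the same role, feeding the accessible name that UI Automation and screen readers report.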

If you want to read ahead check out the Accessibility for Windows Store apps section in the documentation. This section gives a good overview of how several important accessibility systems work, how to implement features for both HTML and Xaml apps, and how to test apps for accessibility.