Testing mobile web experiences


One of the first challenges we faced was deciding on a comprehensive way to test Mileage Stats across its experiences and devices. There are a number of options in the area of testing, and each option comes with its own advantages and disadvantages.

Mobile testing tools/options

Testing a web app on the desktop is challenging enough, but doing so on mobile can easily become costly and overwhelming. The key to maximizing your test budget is having a good understanding of which browsers and devices are indispensable to your project, and which can be replaced by desktop tools and emulators. It’s also important to understand the benefits of each mobile testing option.

There are three primary options when testing on mobile devices.

  • Testing and debugging on desktop browsers
  • Testing on emulators and simulators
  • Testing on hardware devices

Testing on desktop browsers

Desktop browsers can be extremely useful when developing mobile web sites and apps. Although they are no replacement for testing on actual devices, they do provide robust debugging options and can be used to simulate mobile screen sizes.

Pros

  • Desktop browsers are free, familiar, and require no special hardware. Testing on desktop browsers can also be automated using tools such as Visual Studio 2010, Selenium, or Watir.
  • Modern desktop browsers can easily be resized to mimic an average smartphone screen (although several can only be resized down to approximately 480px). Due to the popularity of responsive design, there is also a growing number of tools and extensions to assist in triggering a specific screen size. It’s important to remember, however, that due to the width and position of scroll bars (and other browser chrome), a resized desktop browser can only provide an approximation of the final mobile viewport size.
  • Most desktop browsers provide tools or extensions to swap the default user agent string. This is useful to trigger mobile-specific styling and business logic in cases where these depend on the presence of a specific user agent (see the sketch after this list).
  • Desktop browsers can also be used in conjunction with developer tools or proxies such as Charles (on the Mac) and Fiddler (on Windows) to monitor and debug network traffic and performance issues.
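
For example, if your app branches on the user agent string, spoofing the desktop browser’s user agent lets you exercise the mobile branch without touching a device. The following sketch is purely illustrative; the detection pattern is deliberately simplistic, and real apps typically rely on a device database or more thorough detection logic.

    // Illustrative only: a deliberately simple user agent check. Spoofing the
    // desktop browser's user agent string exercises this branch without a device.
    function isMobileUserAgent(ua) {
      return /Mobile|Android|BlackBerry|Opera Mini|IEMobile/i.test(ua);
    }

    if (isMobileUserAgent(navigator.userAgent)) {
      // Trigger mobile-specific styling hooks.
      document.documentElement.className += ' mobile';
    }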

Cons

  • Desktop browsers are almost always more powerful than their mobile equivalents. They are updated more frequently, so they support the latest HTML features, and they are installed on devices with much faster processors. For these reasons, they will never accurately simulate the environment of a smaller, more constrained device.

Testing on emulators and simulators

Emulators and simulators are often free of charge, and so can prove a useful and economical addition to your test routine. It is important, however, to understand the differences between an emulator and a simulator.

Emulators

An emulator is a piece of software “that translates compiled code from an original architecture to the platform where it is running” (Maximiliano Firtman, Programming the Mobile Web, O’Reilly, 2010). Emulators don’t simply simulate a device’s operating system; they provide a means to run a virtual version of it on your computer (often down to the specific version number). Emulators therefore provide a useful and accurate environment in which to test and debug your mobile app.

Pros

  • Emulators reflect the true web browser (and operating system) environment, so are ideal for testing and debugging visual design, overall rendering, and the execution of simple client-side interactions.
  • Many emulators are provided free of charge by the OEM or platform manufacturer.

Cons

  • Emulators cannot reproduce individual devices from a hardware and CPU point of view. They cannot reveal, for example, that animations will perform poorly due to limited CPU power, or that buttons and links will not respond as expected due to poor touch-screen sensitivity.
  • Likewise, emulators may not represent real-world conditions such as network latency, limited bandwidth, or the device’s ability to determine its location.
  • Emulators are typically bundled into an operating system SDK and they can take considerable time to install and learn to use correctly. They also require regular updates to ensure they always include the most up-to-date operating system versions.
  • Most emulators do not work with automation tools and so must be used manually. The user agent strings provided by emulators often differ from those of the actual devices. Logic that relies on specific user agent strings may not behave as expected on emulators.

Simulators

A simulator is a far less sophisticated tool that is only designed to provide a reasonable facsimile of the experience on a device. Simulators vary greatly in their ability to represent the true device environment. Some, such as the iOS simulator, provide an excellent facsimile of the overall environment, including simulation of the native iOS font. Others may simply simulate the screen size of popular devices.

Pros

  • Most (but not all) simulators are provided free of charge by the OEM or platform manufacturer.

Cons

  • Simulators cannot emulate the actual operating system or web browser found on the device, so they cannot be relied on to test or debug front-end design or functionality (the iOS simulator is an exception, as it provides an excellent browser simulation).
  • Similar to emulators, simulators will not reflect differences in device hardware, performance, CPU, fonts, color gamut, display density, or network speed.

Note

A comprehensive list of mobile simulators and emulators can be found on Maximiliano Firtman’s Mobile Web Programming site. Note that many of these are incorporated into an operating system’s SDK and are therefore not web specific.

Testing on device hardware

Testing on actual device hardware can be costly, but will always provide the most accurate results.

Pros

  • Testing on actual devices provides access to real device capabilities and constraints. These include device CPU, physical size, manipulation options, screen size, dpi, screen quality, and overall device responsiveness. Testing on devices (using a network SIM card) will also enable you to determine the impact of network speed and latency.

Cons

  • Testing on actual devices is expensive. Unless you have access to a local test lab (or a community through which you can borrow devices), you will need to purchase the devices you need.

Note

Even when testing on actual devices with SIM cards, keep in mind that network speed and latency can vary significantly depending on your location and carrier. If this is crucial to your user experience, you should also consider testing your app from different locations.

Using a remote device service

An alternative to purchasing or borrowing devices is to pay a monthly or hourly fee to access remote devices through services such as Device Anywhere and Perfecto Mobile. These services enable you to choose from large collections of devices that are hosted remotely and accessible using a proxy app.

Pros

  • These services offer vast collections of devices and enable you to request devices running on specific operator (cellular) networks. This enables you to test in specific network environments (and witness the impact of network speed and of transcoding designed to compress scripts and markup), and to test on operator-specific variants of a device.

Cons

  • Remote services can be slow to use and subject to network latency, making it difficult to accurately test features such as asynchronous loading, transitions, and animations. And although touch is supported (using the mouse as a proxy), it can be difficult to accurately test the performance of complex gestures such as swipes, long presses, or multi-touch interactions.
  • If used regularly, the cost of these services can be comparable to that of purchasing your own devices, without the benefit of being able to fully manipulate an actual device.

Note

Another cost to factor in is network access. Although most modern smartphones support Wi-Fi, it’s important to also test on an actual operator network. Doing so will enable you to determine the true impact of network speed (e.g., 2G, 3G) on your design and overall app functionality.

Choosing test browsers and devices

It’s important to carefully choose test devices, browsers, and even emulators, as this will enable you to maximize the value of your testing budget. The devices you choose should enable you to test the key features your app requires (such as location detection or HTML5 video) while also reflecting the various platforms and browser versions that will access your app.

The following steps are recommended when determining the most appropriate test browsers and devices for your project. Each step builds on the information derived from the previous step, enabling you to gradually compile a prioritized list of candidate browsers and devices. The final step in the process is to determine how this list maps to the resources at hand. Emulators may, at that point, be substituted for devices that you are unable to purchase, borrow, or lease remotely.

1. Existing traffic

If your mobile app has a desktop equivalent, review that site’s analytics. These will provide a good indication of popular platforms and devices within your existing user base and general region of operation. Where possible, try to also determine the most popular platform versions. This is important, as new devices don’t necessarily ship with the newest OS. Users may also not be aware that they can upgrade their device (or, in some cases won’t have the option to do so). Your analytics package should be able to provide some platform version data, and regularly updated platform version stats can also be found on the Android and BlackBerry developer sites.

Note

Be aware that analytics can be deceptive. Many analytics packages rely on JavaScript (or set this option as the default) and so may not accurately report traffic from devices with poor JavaScript support. If your analytics provider offers a mobile-friendly (typically server-side) version, it may be worth trying it out for a few months. This will enable you to analyze a more representative segment of your traffic patterns.

2. Regional traffic and market

Next, review overall market share and traffic in your region (or the regions you operate in) so that you can focus on the platforms that are most likely to access the site. Certain brands are more popular in certain regions, so this step helps ensure that you don’t spend time testing on platforms that your users aren’t likely to be using. Regional platform data will typically reinforce the data in your desktop site’s analytics.

Good sources for this type of data include StatCounter’s mobile browser stats and the regular releases of market share statistics published by the likes of comScore and Nielsen (although these will often be US only). Jason Grigsby of Cloud Four has also compiled a huge list of traffic and market share resources.

Based on these first two steps, you should be able to devise a list of candidate devices/browsers while also eliminating devices that are completely unrelated to your market or product conditions.

3. Device-specific factors

The next step is to map your device list against the factors that make a good test device. This will help you pick the most useful models rather than simply opting for the cheapest (or sexiest) on the list. A great resource during this step is Device Atlas’ Data Explorer (login required), which enables you to query common device properties across thousands of devices. Another useful tool is GSM Arena, which includes comprehensive (consumer-facing) device specifications, a robust advanced search/filter option, and a popularity meter providing a glimpse of the interest level for each device.

Here are the device-specific factors you should consider:

a) Form factor

Touch screens are increasingly popular, but a good 30% of smartphones also have a keyboard, trackball, or other indirect manipulation mechanism, so you want to make sure to test on several of these form factors.

b) Screen size

This is obviously a big one. You want to be certain that you can test on a range of representative sizes (and orientations). Android devices are particularly handy, as you can easily spoof the default viewport size by adjusting the zoom level. This is a great stop-gap if you only have a few devices on hand.

c) Performance

Devices vary greatly in CPU, memory, and overall performance (including factors such as the quality of the touch screen), so you want to ensure you don’t test only on high-end or only on low-end devices.
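
When comparing candidate devices, even a crude timing probe can put rough numbers on relative CPU performance. The snippet below is an illustrative sketch, not a rigorous benchmark.

    // Illustrative probe: time a chunk of representative work on each candidate
    // device to get a rough feel for relative CPU performance.
    var start = new Date().getTime();
    var total = 0;
    for (var i = 0; i < 1000000; i++) {
      total += Math.sqrt(i);
    }
    console.log('Busy loop took ' + (new Date().getTime() - start) + ' ms');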

d) DPI

Screen dpi also varies quite a bit and can greatly impact legibility. Although it’s hard to mitigate the effects of a low-dpi display, testing on such devices can be hugely useful to get a feel for the many ways your site will look.

e) Screen conditions

This is also one that you can’t do much about, but it is good to keep in mind. Screen condition factors can include overall screen quality (which impacts sensitivity to touch input), variations in color gamut, and the ability for users to adjust contrast. In general, the cheaper the display, the more potential for this type of variation.

4. Project-specific factors

Next you want to double-check your list of browsers and devices, accounting for any project-specific factors. If, for example, your app revolves around venues and businesses that are nearby, you will likely need to test support for the HTML5 geolocation API, as sketched below.
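
A quick feature-detection check (illustrative only) confirms whether a candidate browser exposes the API, and exercises both the success and failure paths:

    // Illustrative sketch: detect geolocation support and exercise both paths.
    // Support for the API does not guarantee a fix, so denial, timeouts, and
    // inaccurate results are worth testing too.
    if (navigator.geolocation) {
      navigator.geolocation.getCurrentPosition(
        function (position) {
          console.log('Location: ' + position.coords.latitude + ', ' +
                      position.coords.longitude);
        },
        function (error) {
          console.log('Geolocation failed with code ' + error.code);
        },
        { timeout: 10000 }  // fail fast rather than hanging the test
      );
    } else {
      console.log('No geolocation API; verify the manual-entry fallback instead.');
    }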

5. Budget

Rounding out the list is budget. In many cases, this will remain a primary consideration, but following the preceding steps should enable you to better justify each purchase and convey to stakeholders the value of testing on each browser or device. At this point, you should also be able to prioritize devices, and understand when it’s safe to simply use an emulator.

Note

Review steps 1 and 2 every few months to ensure conditions have not changed. Don’t presume that a platform or device that is popular today will remain popular. Newly released platforms (and platform versions such as Android 2.2) can take as long as 3-6 months to begin appearing in your logs and analytics.

Why and how to test

The ideal test environment is one that saves time and minimizes unnecessary or redundant testing—especially on actual hardware devices (which can be expensive to purchase and time consuming to test on). This requires not only a good mix of tools, but also a clear understanding of which aspects of your app require the most exhaustive testing, and which tools will enable you to test these features most effectively.

An obvious overall goal is to confirm that application features work as expected on the browsers and devices that are most likely to access your site. What “working as expected” actually means will depend largely on the nature of each bug or problem you uncover.

  • Feature testing of key functionality. Ideally, all users, on all browsers, should be able to perform key application tasks. The usability may vary, but no user should be prevented from completing a task due to an app error or lack of support for their browser.
  • Feature testing enhancements. Many web apps also include non-critical features that are implemented as enhancements through feature detection. For example, Mileage Stats requires that users input the name of their local gas station during a fill-up. On browsers that support the HTML5 geolocation API, this input field is replaced with a prompt to “use my location” to detect nearby gas stations. A list of these stations is populated into a select menu, saving valuable time for users on more capable browsers.
  • Look and experience. Testing the look and experience will also be highly dependent on the browser or device. For example, if the design has been implemented using media queries, certain layout and stylistic enhancements will only appear at certain screen sizes. Other aspects of the design may be implemented across the board, but only work on certain browsers. For example, many Mileage Stats input forms use the placeholder attribute to provide hints within form elements. Browsers that don’t support this attribute will simply ignore it (and that’s fine so long as the same information is available in other ways; one such fallback is sketched after this list). This is not a bug; it’s simply the natural input field behavior and should be communicated to your test team to avoid unnecessary testing and the filing of unresolvable bugs.
  • Performance. Testers should also keep an eye out for performance problems. All devices are not alike, and even smartphones can vary greatly in screen quality, memory, and processor speed. An underpowered device will always perform more poorly than a more capable one, but testing may uncover cases where the performance difference is unacceptable.
  • Usability. Testers should also flag any usability concerns. These may include:
    • Text that is too small or difficult to read due to poor contrast (especially on lower-end devices).
    • Buttons and links that are too close together, or too small to manipulate.
    • Custom widgets and controls such as sliders or carousels that don’t work as expected.
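
To make the placeholder fallback mentioned above concrete, here is a minimal sketch. It is not the actual Mileage Stats source, and the field-hint class name is an assumption; very old browsers may also lack querySelectorAll, so treat this as an illustration of the pattern rather than production code.

    // Illustrative sketch (not the actual Mileage Stats code): if the browser
    // ignores the placeholder attribute, surface each hint as a visible element
    // so the same information remains available in another way.
    var supportsPlaceholder = 'placeholder' in document.createElement('input');

    if (!supportsPlaceholder) {
      var fields = document.querySelectorAll('input[placeholder]');
      for (var i = 0; i < fields.length; i++) {
        var hint = document.createElement('span');
        hint.className = 'field-hint';  // hypothetical CSS class
        hint.appendChild(document.createTextNode(fields[i].getAttribute('placeholder')));
        fields[i].parentNode.insertBefore(hint, fields[i]);
      }
    }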

Your test environment should ideally provide a means to test for all these factors, and include tools to debug problems as you discover them. Although testing on device hardware provides the most accurate results, devices don’t necessarily provide the best debugging environment. It’s therefore perfectly acceptable to use desktop browsers throughout the test process.

  • Aim to discover and resolve routine problems (related to layout, look and feel, and client-side functionality) on the desktop, where you have access to robust, easy-to-use debugging tools.
  • If something is broken on the desktop, you can safely assume it will be broken on mobile as well, so there’s no point wasting time retesting it on actual hardware. Fix the problem on the desktop, then test the feature in question on your target devices.
  • Be sure to test on a WebKit-based desktop browser, and at least one browser that uses an alternate rendering engine. Opera is an excellent choice as its proprietary Presto rendering engine is also found in the very popular Opera Mobile and Opera Mini mobile browsers. Unlike Safari and Chrome, Opera’s desktop browser can also be resized to widths of as little as 50px. You can therefore easily use it to simulate almost any device screen size.

It’s important, however, not to presume that just because a feature works on the desktop, it will work on mobile. Start testing on hardware (or emulators) as soon as working code is available. Mobile devices have varying levels of HTML, CSS, and JavaScript support. They also operate on slower networks that may periodically time out or deliver scripts and content much more slowly than expected. Some network operators also implement transcoders, which may compress and sometimes even alter scripts and markup in an effort to improve page load performance. These factors impact the experience and may introduce unexpected bugs, so the sooner you identify them the better.
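
One way to keep a slow network from silently degrading the experience is to give Ajax requests an explicit timeout and failure path. The sketch below is illustrative: the endpoint is hypothetical, and support for the XMLHttpRequest timeout property varies in older browsers.

    // Illustrative sketch: an Ajax request with an explicit timeout, so a slow
    // operator network fails visibly instead of leaving the app hanging.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/stations', true);  // hypothetical endpoint
    xhr.timeout = 10000;                     // milliseconds
    xhr.ontimeout = function () {
      console.log('Request timed out; offer the user a retry option.');
    };
    xhr.onload = function () {
      console.log('Response received with status ' + xhr.status);
    };
    xhr.send();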

To ensure you catch all relevant bugs, be sure to test in a variety of environments:

  • Test using Wi-Fi and (if possible, several) operator networks. This is particularly important if your app is data-sensitive and may fail if latency causes content to load too slowly, or not at all.
  • Be sure to test at both portrait and landscape orientations. On Android devices, it’s also possible to trigger alternate viewport sizes using the zoom level setting. Resetting the zoom level on a few devices will quickly help you determine how well your design adapts to changing conditions (the diagnostic sketch after this list can help you confirm the resulting viewport sizes).
  • Touch screens are increasingly popular, yet approximately 30% of smartphones have a keyboard (sometimes along with a touch screen). Other devices may include a trackball or joystick. Your app should work regardless of the manipulation method the user chooses.
  • If your app is popular, users may share a link to it with their friends using social networks. Some of these friends may open that link in an embedded web view, directly within a native app. Embedded web views aren’t always implemented with the same level of web standards support or functionality. Testing in embedded web views can therefore prevent unfortunate surprises.
  • If possible, also test on several of the most popular standalone browsers. These include:
    • Opera Mini and Opera Mobile (available for most feature phones and smartphones, and now also in a tablet-optimized version)
    • Firefox Mobile (available for Android)
    • Dolphin (available for Android and iOS)
    • UC Web (a popular Chinese proxy browser, available for most feature phones and smartphones)
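
When cycling through orientations and spoofed viewport sizes, a small diagnostic pasted into the page under test can report the effective viewport whenever it changes. This is an illustrative sketch; older browsers without addEventListener would need an equivalent.

    // Illustrative diagnostic: log the effective viewport size and orientation
    // whenever the window is resized or the device is rotated.
    function reportViewport() {
      var w = window.innerWidth || document.documentElement.clientWidth;
      var h = window.innerHeight || document.documentElement.clientHeight;
      console.log('Viewport: ' + w + 'x' + h +
                  (w > h ? ' (landscape)' : ' (portrait)'));
    }

    window.addEventListener('resize', reportViewport, false);
    window.addEventListener('orientationchange', reportViewport, false);
    reportViewport();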

Debugging on mobile devices

Mobile debugging can be time-consuming, so always pretest mobile bugs on the desktop, just in case there is an obvious solution. It’s also wise to reconfirm support levels related to the feature that has failed. Certain browsers may simply not support that HTML element, attribute, or CSS property. This may need to be explained to a client or other stakeholder, and in extreme cases, the feature may need to be redesigned, but this should not be considered an actual bug.

If you find that you still can’t resolve the problem, you may want to try one of the following remote debugging tools.

  • Mobile Perf Bookmarklet is a collection of performance and debugging bookmarklets for mobile browsers. Included are links to handy tools such as YSlow, DOM Monster, and the Firebug Lite DOM inspector. These tools can be fiddly to use on small screens, so they are most useful on tablets and larger devices (approximately 480px and up). (A sketch of how a bookmarklet works appears after this list.)
  • Adobe Shadow is a remote debugger compatible with recent versions of Android and iOS.
  • Opera Dragonfly is a cross-device, cross-platform debugging environment for Opera browsers.
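
Under the hood, a bookmarklet is simply a javascript: URL saved as a bookmark. The sketch below shows the general shape by injecting a debugging script into the current page; the script URL is illustrative, so check each tool’s site for its current bookmarklet.

    // Illustrative bookmarklet, expanded for readability. In practice the whole
    // thing is collapsed onto a single javascript: line and saved as a bookmark.
    javascript:(function () {
      var script = document.createElement('script');
      script.src = 'https://getfirebug.com/firebug-lite.js';  // illustrative URL
      document.body.appendChild(script);
    })();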

Performing automated testing with Visual Studio 2010

Visual Studio 2010 Ultimate has a feature known as Coded UI Tests. This feature can be used to automate the testing of a mobile web site using a desktop browser.

Pros

  • Coded UI automation can launch most desktop browsers with a spoofed user agent string, allowing you to simulate a device and test the main user scenarios. The automation can also serve as a build validation test, helping you eliminate bugs before releasing code to the test team.
  • Many problems are easy to replicate on the desktop and can often be resolved by tweaking or refactoring markup and styles. Automation can help catch these simple defects early, before code is released to test.
  • Coded UI automation uses a WaitForReady mechanism, which waits for asynchronous Ajax calls to return, allowing you to test web apps that rely on JavaScript.
  • Test automation can resize the desktop browser window to validate UI elements at various screen sizes.
  • JavaScript exceptions can be trapped by the test automation, flagging errors that might occur during user scenarios.
  • Test automation can be used to validate the UI design if one is present; if the website has been wireframed, you can write assertions to validate common design elements.

Cons

  • Desktop browsers cannot reproduce device behavior, so automation will not catch functional defects that are specific to particular devices and platform operating systems.
  • UI and design changes can constrain automation productivity, given the time necessary to write and maintain tests.

Summary

Testing is crucial to success when targeting mobile devices. Making assumptions about what will work is risky. Testing on actual devices is always preferable, but reasonable options exist when there are time and budget constraints.
