
Insight | Jul 11, 2014

The ABC’s of Website Testing

By Antonella Severo

There’s a saying that if there isn’t time for testing, then there isn’t time for development. Testing and development go hand in hand to deliver the best possible product. In this article, we provide a glimpse into our website testing practices for a typical product cycle and explain common testing terms along the way.

Especially in an agile development environment, quality assurance needs to be injected into every part of the process. We typically start our website testing cycle by reviewing the original requirements and writing the test cases. A test case is a set of conditions or variables under which a tester determines whether a system satisfies its requirements or works correctly. We use Google Docs so that the whole team can work on the same sheets at the same time, with real-time updates. While writing the test cases, we also uncover gaps in the requirements that may need to be fleshed out before they go into programming.
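
For illustration, here is a minimal sketch of the shape a single test case might take; the field names and the form in question are hypothetical, not our actual template:

```python
# A minimal sketch of one test case row, with hypothetical field names.
# Real test cases live in a shared spreadsheet; this just shows the shape.
test_case = {
    "id": "TC-001",
    "feature": "Query form",
    "precondition": "User is on the query form page, not logged in",
    "steps": [
        "Leave the required 'Email' field empty",
        "Click the Submit button",
    ],
    "expected_result": "An error message appears and the Email field "
                       "is highlighted in red",
}
```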

As the programmers finish developing blocks of code, they perform low-level testing to ensure that their code produces the expected results. Then they hand that unit over to the QA team and project manager to begin what we loosely refer to as unit and component testing. Unit testing breaks down each possible action to the smallest step a user might take. For example, it might be as simple as a cancel text link on a query form: when this link is clicked, does the user land on the correct page afterwards?
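
As a sketch, such a check might look like this in automated form, using Selenium from Python; the URLs and the element ID are hypothetical placeholders, not our actual code:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_cancel_link_returns_user_to_expected_page():
    # URLs and the element ID below are hypothetical placeholders.
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/query-form")
        driver.find_element(By.ID, "cancel-link").click()
        # The correct landing page comes from the original requirements.
        assert driver.current_url == "https://example.com/"
    finally:
        driver.quit()
```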

QA testers look at all of these individual actions and then move into testing a block of the integrated units to ensure that the whole system works. Verifying that an action can be completed from beginning to end according to the specified business requirements and test cases is commonly referred to as functional testing. During functional testing, we try to create actual data and simulate all possible user roles, such as a person who is not logged in, a prospective buyer, and so on, depending on the website.
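
A sketch of how that role coverage might be automated with pytest and Selenium; the roles, URLs, and login steps are hypothetical, and the `driver` fixture here is a bare-bones stand-in:

```python
import pytest
from selenium import webdriver

ROLES = ["anonymous", "prospective_buyer"]  # hypothetical roles

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def login_as(driver, role):
    # Hypothetical helper: how a role is established depends on the site.
    if role != "anonymous":
        driver.get("https://example.com/login")  # placeholder URL
        # ...site-specific login steps would go here...

@pytest.mark.parametrize("role", ROLES)
def test_query_form_reachable_for_role(driver, role):
    login_as(driver, role)
    driver.get("https://example.com/query-form")  # placeholder URL
    assert "Query" in driver.title  # hypothetical page title check
```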

Going back to our original example of the query form, we isolate and test each independent unit, such as tooltips, cancel links, info links, and disclaimer popups. Then we test the form as a whole. Can we submit it successfully when all the information is entered correctly? This expected behavior is known as the “happy path”: the desired result, which in this case is that the form is submitted and the user receives a confirmation message. However, we also have to test unexpected behavior, known as “unhappy paths”: the actions a user might take that would generate errors or alternate flows. These all have to be considered and integrated into the programming. For example, if a user doesn’t fill out a required field, they should be shown an error message and the required field should be highlighted in red. Our job is to ensure that no user ends up at a dead end.
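
Reusing the `driver` fixture from the sketch above, the happy and unhappy paths for the form might be checked like this; the element IDs, confirmation element, and error class are all hypothetical:

```python
from selenium.webdriver.common.by import By

FORM_URL = "https://example.com/query-form"  # placeholder URL

def submit_form(driver, email):
    # Hypothetical helper: fill the one required field and submit.
    driver.get(FORM_URL)
    driver.find_element(By.ID, "email").send_keys(email)
    driver.find_element(By.ID, "submit").click()

def test_happy_path(driver):
    submit_form(driver, "user@example.com")
    confirmation = driver.find_element(By.ID, "confirmation")
    assert confirmation.is_displayed()  # user sees a confirmation message

def test_unhappy_path_missing_required_field(driver):
    submit_form(driver, "")  # leave the required field empty
    error = driver.find_element(By.ID, "email-error")
    assert error.is_displayed()  # an error message is shown
    field = driver.find_element(By.ID, "email")
    assert "error" in field.get_attribute("class")  # field highlighted red
```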

As the QA team identifies bugs, we submit tickets in our bug tracking system (we currently use FogBugz) and assign them back to the programmer. When a ticket is resolved, the QA team retests the bug and either verifies the fix or rejects it until it is acceptable.

Most of the testing is done on development or testing servers where the new code is being built. On our first pass, we typically test on a stable browser, such as Chrome, and then on a browser that commonly presents problems, such as Internet Explorer 8 on Windows XP. We save full cross-browser testing for the end, when most of the functionality has been verified. Once the new features are resolved and closed, there ideally should be a development cutoff. At this point, the QA team does a full cross-browser sweep, functionally testing the features on a series of browsers, platforms, and devices. Any lingering bugs and cross-browser design or layout issues are identified, logged, fixed, and verified.
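
One way to drive the same functional checks across browsers is to parameterize the driver fixture itself. A sketch follows; the browser matrix here is illustrative, and a real sweep also covers more versions, platforms, and devices:

```python
import pytest
from selenium import webdriver

# Illustrative matrix; the actual sweep spans many more browsers.
BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

@pytest.fixture(params=sorted(BROWSERS))
def driver(request):
    d = BROWSERS[request.param]()
    yield d
    d.quit()

def test_homepage_renders(driver):
    driver.get("https://example.com/")  # placeholder URL
    assert driver.title  # the page loaded and produced a title
```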

One of the last phases of testing is user acceptance testing (UAT). At this point, we deliver the test site to the client for their end-user testing. We provide them with simplified test cases to guide them through testing the new features. The client then returns a list of issues for further resolution.

When the programming is finally deemed stable, right before the release, we go back and test a sampling of website content types and existing features created in previous releases. This is known as regression testing: we seek to uncover bugs in existing areas and ensure that the changes have not broken formerly working parts.
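
In an automated suite, one lightweight way to manage that sampling is to tag the existing-feature tests and rerun only that subset. A sketch using pytest markers; the marker name and test body are our own illustration, reusing the `driver` fixture from the earlier sketches:

```python
import pytest

@pytest.mark.regression
def test_existing_article_pages_still_render(driver):
    # Exercises a content type shipped in a previous release;
    # the URL is a placeholder.
    driver.get("https://example.com/articles/sample-post")
    assert "Page not found" not in driver.title
```

Running `pytest -m regression` then executes only that tagged sample of existing-feature tests.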

It’s now release day, but we’re not done testing yet! Right after the programmers migrate the code to the production (live) site, we go into post-go-live testing mode. As much as possible without creating false data in the live environment, we test all the new features in a controlled manner. Then we do regression testing on existing features on the live site.
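
Since we avoid creating false data in production, read-only smoke checks are one safe option immediately after go-live. A sketch using the requests library; the URL list is a placeholder:

```python
import requests

# Placeholder list of live pages to spot-check right after a release.
LIVE_PAGES = [
    "https://example.com/",
    "https://example.com/query-form",
    "https://example.com/articles",
]

def test_live_pages_respond():
    for url in LIVE_PAGES:
        response = requests.get(url, timeout=10)
        # Read-only check: the page answers without a server error.
        assert response.status_code == 200, (
            f"{url} returned {response.status_code}"
        )
```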

The workflow above describes an ideal situation. In practice, it rarely follows this strict progression, given timelines, client needs, and budgets. But we always strive to cover as much testing ground as possible.
