Introduction

This article gives a brief introduction to various software test techniques. These are all standard techniques widely used in the field of software testing.

Further information can be found in the Standard for Software Component Testing, Working Draft 3.4 (27 April 2001), produced by the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST).

Software test techniques are divided into two groups: black box testing techniques and white box testing techniques. Black box techniques are those in which the tester has no knowledge of the internal workings of the component being tested; testing by end users, for example, is almost always black box testing. White box techniques require knowledge of the component's internals, such as its code; statement testing is one example.

Some test techniques are similar but differ in "strength". For example, exploratory testing could be used for a high-risk component, error guessing for a medium-risk component and ad hoc testing for a low-risk component.

Test techniques that have not been included here are: document reviews (Walkthrough, Inspection, Peer (aka Technical) and Informal), Complexity Analysis, Condition Testing (Branch Condition, Modified Condition Decision and Branch Condition Combination) and LCSAJ Testing.

Equivalence Partitioning

Description

Equivalence Partitioning is a test technique in which inputs and outputs that display similar behaviour are grouped together. By testing one input or output from each group, the whole group is assumed to have been tested. The groups are called “equivalence classes”, and testing based on them is called Equivalence Partitioning.

Example

If an exam has a pass mark of 60%, then the scores 60-100% are assumed to be equivalent: testing any one of them tests the whole group, so it is not necessary to test 70%, 80% and 90% separately.
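
As a rough sketch, the idea can be expressed in code. The exam_result function and the pytest-style tests below are assumptions made purely for illustration; one representative value is tested from each equivalence class.

    import pytest

    # Hypothetical scoring function, used only to illustrate the technique.
    def exam_result(score):
        """Return "pass" for 60% or more, "fail" for less; error outside 0-100%."""
        if not 0 <= score <= 100:
            raise ValueError("score must be between 0 and 100")
        return "pass" if score >= 60 else "fail"

    # One representative value per equivalence class is assumed to cover the class.
    def test_fail_class():        # class: 0-59 (fail)
        assert exam_result(30) == "fail"

    def test_pass_class():        # class: 60-100 (pass)
        assert exam_result(80) == "pass"

    def test_invalid_class():     # class: outside 0-100 (invalid)
        with pytest.raises(ValueError):
            exam_result(150)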

When It Is Used

Whenever there are inputs or outputs in a range. It enables test cases to be objectively created. It is ideally used in combination with Boundary Value Analysis.

Random Testing

Description

Random testing is used when a range of inputs is possible but Equivalence Partitioning is not applied; instead, inputs are chosen at random.

Example

The same as the Equivalence Partitioning example, but the inputs can be any values, e.g. 25%, 56% and 91%.

When It Is Used

Whenever Equivalence Partitioning would apply but there is no time to analyse the scenario and create equivalence classes. It can be used in conjunction with a tool that generates random inputs.
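
A minimal sketch of what such a random-input tool might do, assuming the same hypothetical exam scoring function as in the Equivalence Partitioning sketch above:

    import random

    # Hypothetical scoring function (the same assumption as in the earlier sketch).
    def exam_result(score):
        return "pass" if score >= 60 else "fail"

    # Pick inputs at random instead of analysing equivalence classes.
    def run_random_tests(trials=10, seed=1):
        rng = random.Random(seed)
        for _ in range(trials):
            score = rng.randint(0, 100)   # could be any value, e.g. 25, 56, 91
            # In practice the expected result comes from the requirements,
            # not from the code under test.
            expected = "pass" if score >= 60 else "fail"
            assert exam_result(score) == expected, f"unexpected result for {score}%"

    run_random_tests()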


Boundary Value Analysis

Description

Boundary Value Analysis requires inputs and outputs that have already been partitioned (see Equivalence Partitioning); the technique then tests the values at the boundaries of each partition. This is because defects are more likely to occur at the boundaries of equivalence classes than at any other point.

Example

If an exam has a pass mark of 60%, then it is best to test the following values: 59% (fail), 60% (pass) and 61% (pass). However, the equivalence classes have other boundaries as well, so it is also useful to test: -1% (invalid), 0% (fail), 1% (fail), 99% (pass), 100% (pass) and 101% (invalid).
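
A sketch of these boundary checks, again assuming a hypothetical exam_result function with a 60% pass mark and a valid range of 0-100%:

    # Hypothetical pass/fail function: 60% pass mark, 0-100% valid range.
    def exam_result(score):
        if not 0 <= score <= 100:
            return "invalid"
        return "pass" if score >= 60 else "fail"

    # Values on and either side of each partition boundary.
    boundary_cases = [
        (-1, "invalid"), (0, "fail"), (1, "fail"),
        (59, "fail"), (60, "pass"), (61, "pass"),
        (99, "pass"), (100, "pass"), (101, "invalid"),
    ]

    for score, expected in boundary_cases:
        assert exam_result(score) == expected, f"{score}% gave an unexpected result"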

When It Is Used

Whenever there are inputs or outputs in a range. It enables test cases to be objectively created. Requires an understanding of Equivalence Partitioning.


State Transition Testing

Description

This test technique involves testing a system as it changes from one state to another. The initial state, input, output and final states are defined. Note that this technique is easily scalable, so that for a high-risk component more thorough testing can be achieved by testing the transition from the initial state to the final state via an intermediate state (or more than one intermediate state). See State Transition Table for more detailed information.

Example

Consider a digital watch that can exist in four possible states: display time, change time, display date and change date. When changing from display time to change time, the input would be “press reset” and the output would be “alter time”.
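
The watch example could be driven from a simple transition table, as in the sketch below; the state, input and output names are assumptions chosen only for illustration.

    # Transition table: (current state, input) -> (next state, output).
    # The state, input and output names are illustrative assumptions.
    transitions = {
        ("display time", "press reset"): ("change time", "alter time"),
        ("change time", "press set"):    ("display time", "show time"),
        ("display time", "press mode"):  ("display date", "show date"),
        ("display date", "press reset"): ("change date", "alter date"),
        ("change date", "press set"):    ("display date", "show date"),
    }

    def step(state, event):
        return transitions[(state, event)]

    # One test case: initial state, input, expected output and expected final state.
    final_state, output = step("display time", "press reset")
    assert (final_state, output) == ("change time", "alter time")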

When It Is Used

Whenever a component can exist in certain states.


Cause Effect Graphing

Description

This technique involves analysing the causes and effects on a component. The word “Graphing” is misleading: graphs can be drawn, but rarely are, because the test cases can be created more easily from a table known as a decision table.

Example

A company decides to send a mail shot to everyone in its database. The type of mail sent depends on the following “causes”: the age and gender of the customer. The “effects” are the different types of mail shot.
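
A decision table for this example might be coded as follows; the age bands and mail-shot types are assumptions chosen only to show the cause/effect combinations.

    # Causes: customer age band and gender.  Effects: the type of mail shot sent.
    # The bands and mail types are illustrative assumptions, not from the original.
    def mail_shot(age, gender):
        young = age < 30
        if young and gender == "F":
            return "fashion catalogue"
        if young and gender == "M":
            return "gadget catalogue"
        if not young and gender == "F":
            return "home catalogue"
        return "garden catalogue"

    # One test case per column (rule) of the decision table.
    assert mail_shot(25, "F") == "fashion catalogue"
    assert mail_shot(25, "M") == "gadget catalogue"
    assert mail_shot(45, "F") == "home catalogue"
    assert mail_shot(45, "M") == "garden catalogue"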

When It Is Used

Useful whenever a scenario involves combinations of causes and effects.


Syntax Testing

Description

Testing inputs against the syntax they are required to follow: valid and invalid inputs are derived from the rules that define a well-formed input.

Example

If a field requires the age of a customer, valid entries would be integers (possibly within a range). Invalid entries would be: letters, decimals, special characters (€, @ or TAB) or nothing (leaving the field blank).
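
As a sketch, the age field could be checked against an assumed syntax rule (one to three digits, within 0-120); the rule itself is an illustration, not part of the original example.

    import re

    # Hypothetical syntax rule for the age field: one to three digits, 0-120.
    def is_valid_age(text):
        if not re.fullmatch(r"[0-9]{1,3}", text):
            return False
        return 0 <= int(text) <= 120

    # Valid entries: integers within the assumed range.
    assert is_valid_age("7") and is_valid_age("42")

    # Invalid entries: letters, decimals, special characters, or a blank field.
    for bad in ["abc", "4.5", "€", "@", "\t", ""]:
        assert not is_valid_age(bad)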

When It Is Used

Whenever an input requires a certain syntax. Can also be used to test interfaces between components (integration testing).


Statement Testing

Description

Statement testing involves designing and running test cases so that every executable statement in the code is exercised at least once, and comparing the outcome with the expected result.

Example

For the code “if a then b”, a must be true for all of the code to be exercised (i.e. so that b is also executed).
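
A sketch of what this means in code; the conditional function below is an illustrative stand-in for “if a then b”.

    # Illustrative code for "if a then b".
    def conditional(a):
        result = "b not executed"    # statement 1
        if a:                        # statement 2
            result = "b executed"    # statement 3: only runs when a is true
        return result                # statement 4

    # A single test with a true executes every statement (100% statement coverage);
    # a test with a false would leave statement 3 unexecuted.
    assert conditional(True) == "b executed"

In practice a coverage tool (for example coverage.py) would report which statements were not executed.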

When It Is Used

This technique can be used for any program. It is best used in conjunction with a tool that can measure the coverage and highlight code that has not been tested. Compare with Branch / Decision Testing.


Branch / Decision Testing

Description

Branch Testing and Decision Testing are very similar. Here they will be treated as identical. Branch / Decision Testing involves checking paths through the code.

Example

For code “if a then b”, two tests should be run for 100% coverage: one with a true and one with a false.
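
Continuing the sketch from Statement Testing, both outcomes of the decision are needed for full branch coverage; the function is again purely illustrative.

    # The same illustrative "if a then b" code as in the Statement Testing sketch.
    def conditional(a):
        result = "b not executed"
        if a:
            result = "b executed"
        return result

    # Branch / decision coverage needs both outcomes of the decision:
    assert conditional(True) == "b executed"       # true branch taken
    assert conditional(False) == "b not executed"  # false branch taken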

When It Is Used

As with Statement Testing, it can be used for any program but is best used in conjunction with a tool to measure coverage.


Data Flow Testing

Description

This technique focuses on how variables are used within code. A path is traced from where a variable is initialised to where it is used.

Example

Can be used with any code. Simply select one variable and watch how it changes as the code is executed.
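
A small sketch of tracing one variable from its definition to its uses; the average function and the variable total are assumptions made for illustration.

    # Data flow of the variable "total": defined, re-defined in the loop, then used.
    def average(values):
        total = 0                       # definition of total
        for v in values:
            total = total + v           # use and re-definition of total
        return total / len(values)      # final use of total

    # A test that exercises the definition-use path for "total".
    assert average([2, 4, 6]) == 4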

When It Is Used

This is a useful technique for developers to perform their own tests since they can observe this easily using debugging tools.


Ad Hoc Testing

Description

The tester simply tries things at random. The minimum knowledge required is an understanding of the requirements, so that expected and actual outcomes can be compared.

Example

For an online program in which users need to log in and enter personal information, a tester simply tries this a few times, entering different information into different fields.

When It Is Used

This technique aims to find the most serious defects in the least amount of time, since the overhead of writing test cases is removed. It is useful for informal “Friday afternoon” testing.


Error Guessing

Description

An experienced tester has an intuitive idea of where defects are likely to be found. Error Guessing is similar to Ad Hoc Testing except that the tester first considers where defects might lie. They may also attempt to test the most error-prone or most visible components first.

Example

For the online program described under “Ad Hoc Testing”, a tester might begin with standard input and then try non-standard input, e.g. informal syntax testing, entering an invalid postcode or invalid credit card details, etc.

When It Is Used

Error Guessing is used when one experienced tester is given a product and simply asked to “test it”. Often the project will be under time pressure, so speed has a higher priority than test quality. (Any measure of the quality of the component will be subjective.)


Exploratory Testing

Description

This is similar to Ad Hoc Testing and Error Guessing, but is more in-depth than either. Exploratory Testing begins with a meeting between two experienced testers and a manager. The group focuses on identifying risk-prone areas and then considers what type of testing would be appropriate. The testers then test the product and report back. This procedure is repeated roughly once a day, and at each meeting the manager will typically ask, "What is the most interesting or important defect you have found today?"

Example

For the online program described under, “Ad Hoc Testing”, the team might decide that particularly risky items include: leap year dates, international address formats, unusual credit cards and foreign characters (like “ô”).

When It Is Used

Used in similar conditions to Error Guessing, but on riskier components.