Technical Review
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review.

Test Case
A specific set of test data along with expected results for a particular test condition.

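For illustration, a minimal test case in Python's unittest style; the function under test, the test data, and the expected result are all hypothetical:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

class DiscountTestCase(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Test condition: a valid discount is applied correctly.
        # Test data: price 100.00, discount 10%.
        actual = apply_discount(100.00, 10)
        # Expected result is defined up front, before execution.
        self.assertEqual(actual, 90.00)

if __name__ == "__main__":
    unittest.main()
```
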
Test Maturity Model (TMM)
A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.

Test Process Improvement (TPI)
A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

Test approach
The implementation of the test strategy for a specific project. It typically includes the decisions made, based on the (test) project's goal and the risk assessment carried out, about the starting points for the test process and the test design techniques to be applied.

Test automation
The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking. There are many factors to consider when planning for software test automation. Automation changes the complexion of testing and the test organisation from design through implementation and test execution. There are tangible and intangible elements, and widely held myths about the benefits and capabilities of test automation.

Test case specification
A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.

Test charter
A statement of test objectives, and possibly test ideas. Test charters are, among other things, used in exploratory testing.

Test comparator
A test tool to perform automated test comparison.

Test comparison
The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

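A sketch of both comparison styles, using hypothetical names: dynamic comparison asserts while the test runs, while post-execution comparison diffs a saved actual-results file against a stored baseline:

```python
import filecmp

# Dynamic comparison: check the actual result while the test is running.
def test_addition_dynamic():
    actual = 2 + 3
    assert actual == 5, f"expected 5, got {actual}"

# Post-execution comparison: the test run wrote its output to a file,
# which is compared against an expected-results baseline afterwards.
def compare_after_execution(actual_path="actual_results.txt",
                            expected_path="expected_results.txt"):
    return filecmp.cmp(actual_path, expected_path, shallow=False)
```
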
Test condition
An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element.

Test data
Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

Test data preparation tool
A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

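As a hedged illustration of test data preparation, a small generator that creates records and saves them for use in a test run; the schema and field names are invented for the example:

```python
import csv
import random

def generate_customers(n, seed=42):
    """Generate n synthetic customer records (hypothetical schema)."""
    rng = random.Random(seed)  # seeded so the test data is reproducible
    return [
        {"id": i, "name": f"customer_{i}", "balance": rng.randint(0, 10_000)}
        for i in range(1, n + 1)
    ]

def write_test_data(path, rows):
    """Save the prepared data so a test run can load it as its test data."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name", "balance"])
        writer.writeheader()
        writer.writerows(rows)

write_test_data("customers.csv", generate_customers(100))
```
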
Test design specification
A document specifying the test conditions (coverage items) for a test item, the detailed test approach, and identifying the associated high-level test cases.

Test design tool
A tool that supports the test design activity by generating test inputs from a specification (which may be held in a CASE tool repository, e.g. a requirements management tool) or from specified test conditions held in the tool itself.

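As a rough sketch of what such generation can look like, boundary-value test inputs derived from a specified test condition; the condition and range below are invented:

```python
def boundary_values(lo, hi):
    """Derive boundary-value test inputs from a specified valid range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Test condition: "age must be between 18 and 65 inclusive".
inputs = boundary_values(18, 65)
print(inputs)  # [17, 18, 19, 64, 65, 66]
```
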
Test environment
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test evaluation report
A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

Test execution
The process of running a test on the component or system under test, producing actual result(s).

Test execution phase
The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied.

Test execution schedule
A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

Test execution technique
The method used to perform the actual test execution, either manually or automated.

Test execution tool
A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback.

Test harness
A test environment comprised of stubs and drivers needed to conduct a test.

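A minimal sketch of a harness, assuming a hypothetical component process_order that normally depends on a real payment service: the stub stands in for the missing dependency, and the driver invokes the component and checks the result.

```python
# Stub: replaces a payment service that is not yet available.
class PaymentServiceStub:
    def charge(self, amount):
        # Always succeed with a canned response.
        return {"status": "ok", "amount": amount}

# Component under test (hypothetical).
def process_order(amount, payment_service):
    receipt = payment_service.charge(amount)
    return receipt["status"] == "ok"

# Driver: calls the component under test and checks the outcome.
def run_harness():
    assert process_order(25.00, PaymentServiceStub())
    print("harness run passed")

run_harness()
```
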
Test infrastructure
The organisational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

Test item
The individual element to be tested. There is usually one test object and many test items.

Test level
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.

Test log
A chronological record of relevant details about the execution of tests.

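A toy example of keeping such a record, with each entry timestamped so the log stays chronological; the file name, fields and format are arbitrary:

```python
from datetime import datetime, timezone

def log_test_event(path, test_id, verdict):
    """Append one timestamped entry to a plain-text test log."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(f"{stamp}\t{test_id}\t{verdict}\n")

log_test_event("test.log", "TC-001", "pass")
log_test_event("test.log", "TC-002", "fail")
```
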
Test manager
The person responsible for testing and evaluating a test object; the individual who directs, controls, administers, plans and regulates the evaluation of a test object.

Test object
The component or system to be tested.

Test point analysis (TPA)
A formula-based test estimation method based on function point analysis.

Test procedure specification
A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.

Test process
The fundamental test process comprises planning, specification, execution, recording and checking for completion.

Test run
Execution of a test on a specific version of the test object.

Test specification
A document that consists of a test design specification, test case specification and/or test procedure specification.

Test strategy
A high-level document defining the test levels to be performed and the testing within those levels for a programme (one or more projects).

Test suite
A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

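A sketch of such chaining with a hypothetical in-memory store: the first test's postcondition (a created record) is the second test's precondition.

```python
import unittest

store = {}  # shared state carried from one test to the next

class UserLifecycleSuite(unittest.TestCase):
    def test_1_create_user(self):
        store["user"] = {"name": "alice"}   # postcondition: user exists
        self.assertIn("user", store)

    def test_2_delete_user(self):
        self.assertIn("user", store)        # precondition: user exists
        del store["user"]
        self.assertNotIn("user", store)

def suite():
    s = unittest.TestSuite()
    s.addTest(UserLifecycleSuite("test_1_create_user"))
    s.addTest(UserLifecycleSuite("test_2_delete_user"))
    return s

if __name__ == "__main__":
    unittest.TextTestRunner().run(suite())
```
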
Test type
A group of test activities aimed at testing a component or system regarding one or more interrelated quality attributes. A test type is focused on a specific test objective, e.g. reliability test, usability test, regression test etc., and may take place on one or more test levels or test phases.

Testability
The capability of the software product to enable modified software to be tested.

Tester
A technically skilled professional who is involved in the testing of a component or system.

Testing
The process of exercising software to verify that it satisfies specified requirements and to detect faults.

Thread testing
A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

Top-down testing
An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components.

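A small illustration with invented components: the top-level report builder is tested first, with the lower-level data access layer simulated by a stub until the real one is integrated.

```python
# Top-level component under test (hypothetical).
def build_report(fetch_rows):
    rows = fetch_rows()
    return f"{len(rows)} rows processed"

# Stub simulating the lower-level data access component.
def fetch_rows_stub():
    return [("a", 1), ("b", 2)]

# The top of the hierarchy is exercised before the real
# data layer exists; the stub is swapped out once it does.
assert build_report(fetch_rows_stub) == "2 rows processed"
```
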
Traceability
The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

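One common lightweight representation is a traceability matrix mapping requirement IDs to the test cases that cover them; all identifiers below are invented:

```python
# Requirement -> test cases that cover it.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # uncovered requirement, flagged below
}

uncovered = [req for req, tests in traceability.items() if not tests]
print("requirements without tests:", uncovered)
```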