Testing Dictionary

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the end user to determine whether or not to accept the system.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the end user.

Affinity Diagram: A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.

Automated Testing: That part of software testing that is assisted by software tool(s) and does not require operator input, analysis, or evaluation.
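
For example, a minimal sketch of an automated test using Python's built-in unittest framework; the add function and its expected values are hypothetical:

```python
import unittest

def add(a, b):
    # Hypothetical function under test.
    return a + b

class TestAdd(unittest.TestCase):
    # The tool runs the checks and reports pass/fail results
    # without operator input, analysis, or evaluation.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```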

Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.

Bottom-up Testing: An integration testing technique that tests the low-level components first, using test drivers to call the low-level components in place of the higher-level components that have not yet been developed.
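
For illustration, a minimal sketch of a test driver exercising a low-level component before its callers exist; parse_record and its expected output are hypothetical:

```python
# Low-level component under test (hypothetical).
def parse_record(line):
    name, value = line.split(",")
    return {"name": name.strip(), "value": int(value)}

# Test driver: stands in for the not-yet-developed higher-level
# module and calls the low-level component directly.
def driver():
    result = parse_record("temperature, 42")
    assert result == {"name": "temperature", "value": 42}
    print("parse_record passed")

if __name__ == "__main__":
    driver()
```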

Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
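
A minimal sketch of the idea; the accept_age validator and its 0 to 120 range are hypothetical:

```python
def accept_age(age):
    # Hypothetical validator: accepts ages from 0 to 120 inclusive.
    return 0 <= age <= 120

# Boundary values: minimum, maximum, just inside/outside, typical.
cases = {
    -1: False,   # just outside the lower boundary
    0: True,     # minimum
    1: True,     # just inside the lower boundary
    119: True,   # just inside the upper boundary
    120: True,   # maximum
    121: False,  # just outside the upper boundary
    35: True,    # typical value
}

for value, expected in cases.items():
    assert accept_age(value) == expected, value
print("all boundary cases passed")
```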

Brainstorming: A group process for generating creative and diverse ideas.

Branch Coverage Testing: A test method satisfying coverage criteria that requires each branch of each decision point to be executed at least once.
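
For example, a sketch with one hypothetical decision point; branch coverage requires at least one test through each of its two branches:

```python
def classify(n):
    # One decision point with two branches.
    if n < 0:
        return "negative"
    return "non-negative"

# One test per branch satisfies the coverage criterion:
assert classify(-5) == "negative"      # true branch taken
assert classify(3) == "non-negative"   # false branch taken
print("both branches executed")
```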

Black-box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black box testing indicates whether or not a program meets required specifications by spotting faults of omission — places where the specification is not fulfilled.
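
For instance, a minimal black-box sketch: the tests below are derived from a hypothetical specification ("discount(total) returns a 10% discount on totals of 100 or more, otherwise 0") with no reference to the program's internals; the implementation appears only so the example runs:

```python
def discount(total):
    # Implementation shown only to make the example executable;
    # black-box tests do not look inside it.
    return total // 10 if total >= 100 else 0

# Test cases derived purely from the stated specification:
assert discount(50) == 0     # below the threshold: no discount
assert discount(100) == 10   # at the threshold: 10% applies
assert discount(200) == 20   # above the threshold: 10% applies
print("program meets the stated specification")
```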

Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test.

Cause-and-Effect Diagram: A tool used to analyze a problem by representing the relationship between some effect and its possible cause.

Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
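
A minimal sketch with hypothetical causes and effect: cause C1 (valid user name) and cause C2 (valid password) combine to produce effect E1 (access granted), and the logical relation generates the test cases:

```python
from itertools import product

# Hypothetical causes (inputs) and effect (output):
#   C1: user name is valid
#   C2: password is valid
#   E1: access is granted, where E1 = C1 AND C2
def grant_access(valid_user, valid_password):
    return valid_user and valid_password

# Each combination of causes is checked against the effect the
# graph predicts, yielding a systematic, high-yield test set.
for c1, c2 in product([False, True], repeat=2):
    assert grant_access(c1, c2) == (c1 and c2)
print("all cause combinations produce the expected effect")
```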

Check sheet: A form used to record data as it is gathered.

Clear-box Testing: Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since "white boxes" are considered opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.

Client: The end user that pays for the product received, and receives the benefit from the use of the product.

Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.
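
As a sketch, assuming the common 3-sigma limits around the process mean; the sample measurements are hypothetical:

```python
import statistics

# Hypothetical process measurements, e.g. defects found per build.
samples = [4, 6, 5, 7, 5, 4, 6, 5]

mean = statistics.mean(samples)
sigma = statistics.pstdev(samples)  # population standard deviation

# Points falling outside the 3-sigma control limits suggest
# special-cause variation rather than common-cause variation.
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit
print(f"mean={mean:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
```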

Customer (end user): The individual or organization, internal or external to the producing organization, that receives the product.

Cyclomatic Complexity: A measure of the number of linearly independent paths through a program module.
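
For a single-entry, single-exit module this can be computed as V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components of the control-flow graph), or equivalently as the number of decision points plus one. A small illustration with a hypothetical function:

```python
def sign(n):
    # Two decision points, so cyclomatic complexity is 2 + 1 = 3:
    # three linearly independent paths through the module.
    if n > 0:    # decision 1
        return 1
    if n < 0:    # decision 2
        return -1
    return 0

# One test per independent path exercises the basis set of paths:
assert sign(5) == 1
assert sign(-5) == -1
assert sign(0) == 0
```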

Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values at various points of executing the source program.
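
As an illustration, a hypothetical fragment containing the kind of definition/reference anomaly such analysis detects:

```python
def describe(n):
    if n > 0:
        label = "positive"
    # On the path where n <= 0, "label" is referenced below without
    # ever having been defined: a definition/reference anomaly that
    # data flow analysis finds by tracing definitions along each path.
    return label  # raises UnboundLocalError when n <= 0
```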

Debugging: The act of attempting to determine the cause of the symptoms of malfunctions detected by testing or by frenzied user complaints.

Defect: Operationally, it is useful to work with two definitions of a defect: 1) From the producer's viewpoint: a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that defines the product. 2) From the end user's viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.

Defect Analysis: Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

Defect Density: Ratio of the number of defects to program length (a relative number).
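
For example, with hypothetical figures: a program of 2,000 lines of code in which 5 defects were found has a defect density of 5 / 2 = 2.5 defects per KLOC (thousand lines of code):

```python
defects_found = 5      # hypothetical defect count
lines_of_code = 2000   # hypothetical program length

# Defect density expressed per thousand lines of code (KLOC):
density = defects_found / (lines_of_code / 1000)
print(f"{density} defects per KLOC")  # prints: 2.5 defects per KLOC
```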

Desk Checking: A form of manual static analysis, usually performed by the originator. Source code, documentation, etc., are visually checked against requirements and standards.

Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test data.

Dynamic Testing: Verification or validation performed by executing the system's code.

Error: 1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and 2) a mental mistake made by a programmer that may result in a program fault.

Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

Execution: The process of a computer carrying out an instruction or instructions of a computer program.

Exhaustive Testing: Executing the program with all possible combinations of values for program variables.
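
Even small input spaces make this infeasible in practice: a function of two 32-bit integers already has 2^64 (about 1.8 x 10^19) input combinations. For a deliberately tiny, hypothetical domain it can be done:

```python
from itertools import product

def clamp(n, low, high):
    # Hypothetical function under test.
    return max(low, min(n, high))

# Exhaustive testing over a deliberately small domain: every
# combination of variable values is executed and checked.
domain = range(-2, 3)
for n, low, high in product(domain, repeat=3):
    if low <= high:
        assert low <= clamp(n, low, high) <= high
print("all combinations tested")
```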