Terms Used In Software Testing Part I

This is a more exhaustive article on testing, and if you have never heard of testing before, it is recommended that you read the 'Introduction to Testing' article first. Reading this article after that one will make it more logical and easier to grasp.

Below are terms related to testing that cover many of the aspects, concepts, and activities that make up the testing phase. The terms are arranged in alphabetical order. We have tried to capture all relevant definitions; if you come across a term in the descriptions below that has not been defined elsewhere in the article, please reach out to us using the 'Contact Us' page and we shall include it as well.

ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results, and arbitrariness guides the test execution activity.

acceptance testing: Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.

agile testing: Testing practice for a project using agile methodologies, such as Extreme Programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.

anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.

arc testing / branch testing: A white-box test design technique in which test cases are designed to execute branches.

audit trail: A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out.

back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.

bebugging / error seeding: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects (a worked sketch of this estimate follows at the end of this segment).

benchmark test: (1) A standard against which measurements or comparisons can be made. (2) A test that is used to compare components or systems to each other or to a standard as in (1).

beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
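The remaining-defect estimate behind error seeding is usually a simple proportion: if seeded and native defects are assumed equally likely to be detected, the fraction of seeded defects found approximates the fraction of native defects found. The sketch below is an illustrative Python calculation of that idea; the function name and the sample numbers are hypothetical and not part of the original article.

```python
def estimate_remaining_defects(seeded_total, seeded_found, native_found):
    """Estimate how many native (real) defects remain after a test cycle.

    Assumption: seeded and native defects are equally likely to be detected,
    so the detection ratio measured on seeded defects also applies to native ones.
    """
    if seeded_found == 0:
        raise ValueError("No seeded defects found; the estimate is undefined.")
    detection_ratio = seeded_found / seeded_total        # e.g. 8 / 10 = 0.8
    estimated_native_total = native_found / detection_ratio
    return estimated_native_total - native_found         # defects presumed still latent


# Hypothetical figures: 10 defects seeded, 8 of them found, 40 native defects found.
# Estimated native total ~= 40 / 0.8 = 50, so about 10 native defects remain.
print(estimate_remaining_defects(seeded_total=10, seeded_found=8, native_found=40))
```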
Big-bang testing: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.

Boundary value: An input value or output value which is on the edge of an equivalence partition, or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis / boundary value testing: A black-box test design technique in which test cases are designed based on boundary values (an illustrative sketch follows this segment).

Branch condition coverage / condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as both True and False.

Black-box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system. It can also be described as testing carried out without knowledge of the internal coding of the module/system.

Capability Maturity Model (CMM): A five-level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering, and managing software development and maintenance.

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering, and managing product development and maintenance. CMMI is the designated successor of the CMM.

CASE: Acronym for Computer Aided Software Engineering. Do not confuse this term with the term "test case".

CAST: Acronym for Computer Aided Software Testing.

Compliance testing: The process of testing to determine the compliance of the component or system with applicable standards, regulations, or other requirements.

Component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.

Control flow graph: An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

Cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where:
- L = the number of edges/links in the graph
- N = the number of nodes in the graph
- P = the number of disconnected parts of the graph
For example, a control flow graph with 8 edges, 7 nodes, and 1 connected part has a cyclomatic complexity of 8 - 7 + 2×1 = 3. Cyclomatic complexity comes under white-box testing.

Data-driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.

Data flow coverage: The percentage of definition-use pairs that have been exercised by a test suite.

Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
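As an illustration of boundary value analysis expressed in a data-driven style, the sketch below tests a hypothetical validator for an age field that must accept values from 18 to 65 inclusive. The function name, the accepted range, and the use of pytest are assumptions made for the example, not part of the original article.

```python
import pytest


def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: ages 18..65 inclusive are accepted."""
    return 18 <= age <= 65


# Data-driven boundary value tests: each row holds one input and its expected result.
# The chosen values sit on and just outside the edges of the valid partition.
@pytest.mark.parametrize(
    "age, expected",
    [
        (17, False),  # just below the lower boundary
        (18, True),   # on the lower boundary
        (65, True),   # on the upper boundary
        (66, False),  # just above the upper boundary
    ],
)
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```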
Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase plus the number found afterwards by any other means.

Defect masking: An occurrence in which one defect prevents the detection of another.

Desk checking: Testing of software or a specification by manual simulation of its execution.

Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of code, number of classes, or function points).

Dirty testing: Tests aimed at showing that a component or system does not work. Also known as negative testing, it relates to the testers' attitude rather than to a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.

Dynamic testing: Testing that involves the execution of the software of a component or system.

Efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions.

Elementary comparison testing: A black-box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage.

Entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. a test phase. The purpose of entry criteria is to prevent a task from starting that would entail more (wasted) effort than the effort needed to remove the failed entry criteria.

Equivalence partitioning: A black-box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once (an illustrative sketch follows this segment).

Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered complete when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.

Exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.

Exploratory testing: An informal test design technique in which the tester actively controls the design of the tests as those tests are performed, and uses information gained while testing to design new and better tests.

Field testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Field testing is essentially a synonym for beta testing, which is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Functional testing: Testing based on an analysis of the specification of the functionality of a component or system.

Function Point Analysis (FPA): A method that aims to measure the size of the functionality of an information system. The measurement is independent of the technology. It may be used as a basis for measuring productivity, estimating the resources needed, and controlling the project.

Glass-box testing: Testing based on an analysis of the internal structure of the component or system. In other words, the testing is carried out with knowledge of the internal structure of the system.
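As an illustration of equivalence partitioning, the sketch below assumes a hypothetical discount rule whose order-amount input falls into three partitions: invalid negative amounts, amounts below 100 that receive no discount, and amounts of 100 or more that receive 10%. One representative value is tested from each partition. The function, the partitions, and the chosen values are assumptions made for the example, not part of the original article.

```python
import pytest


def discount(order_amount: float) -> float:
    """Hypothetical rule under test: negative amounts are rejected,
    amounts below 100 get no discount, amounts of 100 or more get 10%."""
    if order_amount < 0:
        raise ValueError("order amount cannot be negative")
    return order_amount * 0.10 if order_amount >= 100 else 0.0


# One representative value per equivalence partition.
def test_invalid_partition():
    with pytest.raises(ValueError):
        discount(-5)                 # representative of the invalid (negative) partition


def test_no_discount_partition():
    assert discount(50) == 0.0       # representative of the below-100 partition


def test_discount_partition():
    assert discount(200) == pytest.approx(20.0)  # representative of the >= 100 partition
```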
Posted By - Amandeep Dhanjal
Posted On - Thursday, October 21, 2010