Terms Used In Software Testing Part II
Error seeding: The process of intentionally adding known defects to those already in the component or system, for the purpose of monitoring the rate of detection and removal and estimating the number of remaining defects.
Gray box testing: Testing carried out with partial knowledge of the internals of the system, for example knowledge of its logical structure.
Impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
Incremental testing: Testing in which components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.
Intake test: A special instance of a smoke test used to decide whether the component or system is ready for detailed, further testing. An intake test is typically carried out at the start of the test execution phase.
Integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
Keyword driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test (see the sketch after this group of terms).
LCSAJ testing: A white box test design technique in which test cases are designed to execute LCSAJs.
LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
Load testing: A test type concerned with measuring the behavior of a component or system under increasing load, e.g. number of parallel users and/or number of transactions, to determine what load can be handled by the component or system.
Link testing: Testing performed to expose defects in the interfaces and in the links between integrated components or systems.
Maintenance testing: Testing the changes to an operational system, or the impact of a changed environment on an operational system.
Module testing: The testing of a single module or component of a system.
Monkey testing: Testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is being used.
Mutation testing: Testing in which small changes (mutants) are deliberately introduced into the program and the tests are run against each mutated version; the proportion of mutants the tests detect is used to assess the thoroughness of the test suite.
Negative testing: Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.
Off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code.
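To make the keyword driven testing entry above more concrete, here is a minimal Python sketch of the idea: a table of test steps holds keywords plus test data, and a control loop interprets each keyword by dispatching it to a small supporting function. The keywords (open_app, enter_value, check_result) and the step data are hypothetical placeholders for this example, not part of any particular tool.

# Minimal keyword-driven testing sketch (hypothetical keywords and data).
# A data table holds keywords plus test data; the control loop below
# interprets each keyword by calling the matching supporting function.

def open_app(name):
    print(f"opening {name}")                  # placeholder action

def enter_value(field, value):
    print(f"setting {field} = {value}")       # placeholder action

def check_result(expected):
    print(f"verifying result == {expected}")  # placeholder check

# Keyword table: maps keyword names to supporting scripts/functions.
KEYWORDS = {
    "open_app": open_app,
    "enter_value": enter_value,
    "check_result": check_result,
}

# Test data, shown inline for brevity; in practice this would live in a
# separate data file maintained alongside the expected results.
test_steps = [
    ("open_app", ["calculator"]),
    ("enter_value", ["operand1", "2"]),
    ("enter_value", ["operand2", "3"]),
    ("check_result", ["5"]),
]

# Control script: interprets the keywords in order.
for keyword, args in test_steps:
    KEYWORDS[keyword](*args)

Because the test steps are plain data, new tests can be added by editing the data table alone, without touching the supporting scripts.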
Pair programming: A software development approach in which lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means that ongoing, real-time code reviews are performed.
Pair testing: Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.
Partition testing: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
Pass: A test is deemed to pass if its actual result matches its expected result.
Pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test.
Path: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
Path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.
Path sensitizing: Choosing a set of input values to force the execution of a given path.
Path testing: A white box test design technique in which test cases are designed to execute paths.
Peer review: A review of a software work product by colleagues of the producer of the product, for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
Performance indicator: A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]
Performance testing: The process of testing to determine the performance of a software product. See also efficiency testing.
Performance testing tool: A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and logged. Performance testing tools normally provide reports based on test logs, and graphs of load against response times.
Phase test plan: A test plan that typically addresses one test phase. See also test plan.
Portability: The ease with which the software product can be transferred from one hardware or software environment to another.
Portability testing: The process of testing to determine the portability of a software product.
Post condition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.
Post-execution comparison: Comparison of actual and expected results, performed after the software has finished running.
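As a small illustration of post-execution comparison (and of the pass decision described above), the following Python sketch compares an actual-results file written by a finished test run against an expected-results file, line by line. The file names, the line-oriented result format, and the exact-match pass criterion are assumptions made for this example, not part of any particular tool.

# Post-execution comparison sketch: compare actual vs. expected results
# after the software under test has finished running. File names and the
# line-oriented result format are assumptions for this example.

def post_execution_compare(actual_path="actual_results.txt",
                           expected_path="expected_results.txt"):
    with open(actual_path) as f:
        actual = f.read().splitlines()
    with open(expected_path) as f:
        expected = f.read().splitlines()

    # Pass criterion assumed here: same number of lines, all matching exactly.
    mismatches = [
        (i + 1, a, e)
        for i, (a, e) in enumerate(zip(actual, expected))
        if a != e
    ]
    if len(actual) != len(expected) or mismatches:
        print(f"FAIL: {len(mismatches)} mismatched lines, "
              f"{len(actual)} actual vs {len(expected)} expected lines")
        return False
    print("PASS: actual results match expected results")
    return True

if __name__ == "__main__":
    post_execution_compare()

In a real tool the comparison would typically be driven by the pass/fail criteria agreed in the test plan, rather than a simple exact line match.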
Posted By: Amandeep Dhanjal | Posted On: Thursday, October 28, 2010