Terms Used In Software Testing Part III

Precondition: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
Predicted outcome: See expected result.
Pretest: See intake test.
Priority: The level of (business) importance assigned to an item, e.g. a defect.
Probe effect: The effect of the measurement instrument on the component or system while it is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are in use.
Problem: See defect.
Problem management: See defect management.
Problem report: See defect report.
Process: A set of interrelated activities which transform inputs into outputs. [ISO 12207]
Process cycle test: A black-box test design technique in which test cases are designed to execute business procedures and processes. [TMap]
Product risk: A risk directly related to the test object. See also risk.
Project: A unique set of coordinated and controlled activities, with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.
Project risk: A risk related to the management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements. See also risk.
Program instrumenter: A software tool used to insert additional code into a program in order to collect information about its behavior during execution.
Pseudo-random: A series that appears to be random but is in fact generated according to some prearranged sequence.
Quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled.
Quality management: Coordinated activities to direct and control an organization with regard to quality.
Direction and control with regard to quality generally include the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement.
Random testing: A black-box test design technique in which test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.
Record/playback tool: A type of test execution tool in which inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.
Recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure.
Recoverability testing: The process of testing to determine the recoverability of a software product.
Regression testing: Testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made.
It is performed when the software or its environment is changed.
Release note: A document identifying test items, their configuration, current status and other delivery information, delivered by development to testing, and possibly to other stakeholders, at the start of a test execution phase.
Reliability: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
Reliability testing: The process of testing to determine the reliability of a software product.
Replaceability: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment.
Requirement: A condition or capability needed by a user to solve a problem or achieve an objective, which must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
Requirements-based testing: An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.
Requirements management tool: A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and detection of violations of pre-defined requirements rules.
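The pseudo-random and random-testing entries above can be made concrete with a minimal Python sketch. The generator below is a simple linear congruential generator (the constants are one commonly published choice); `is_even` is a hypothetical function under test, and the 0-999 input range is an arbitrary assumption for the example.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: a 'pseudo-random' sequence,
    fully determined by the seed (prearranged, hence reproducible)."""
    while True:
        seed = (a * seed + c) % m
        yield seed

def is_even(n):
    """Hypothetical function under test (illustration only)."""
    return n % 2 == 0

# Random-testing sketch: draw pseudo-random inputs and exercise the
# function under test. Re-using the same seed replays the exact inputs,
# so any failure found this way can be reproduced.
gen = lcg(seed=42)
inputs = [next(gen) % 1000 for _ in range(5)]
results = [(x, is_even(x)) for x in inputs]
```

Because the sequence is prearranged, logging only the seed is enough to reproduce a failing run, which is the practical point of using pseudo-random rather than truly random inputs.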
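Regression testing, defined above, is often automated as a saved suite of previously passing cases that is re-run after every change. The Python sketch below is hypothetical; `discount` and its expected results are invented for illustration, not taken from any real project.

```python
def discount(price, rate):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - rate), 2)

# Cases that passed before the latest change, kept as the regression suite.
REGRESSION_SUITE = [
    ((100.0, 0.10), 90.0),
    ((59.99, 0.25), 44.99),
    ((10.0, 0.0), 10.0),
]

def run_regression():
    """Re-run the saved suite; return the cases that no longer pass."""
    return [(args, expected, discount(*args))
            for args, expected in REGRESSION_SUITE
            if discount(*args) != expected]
```

An empty result from `run_regression()` means no regressions were detected in the covered behavior; any entry pinpoints a defect introduced or uncovered by the change.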
Requirements phase: The period of time in the software life cycle during which the requirements for a software product are defined and documented.
Resource utilization: The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions.
Resource utilization testing: The process of testing to determine the resource utilization of a software product.
Result: The consequence/outcome of the execution of a test, including outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result.
Resumption criteria: The testing activities that must be repeated when testing is re-started after a suspension. [After IEEE 829]
Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
Reviewer: A person involved in a review who identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
Review tool: A tool that provides support to the review process.
Typical features include review planning and tracking support, communication support, collaborative reviews, and a repository for collecting and reporting metrics.
Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.
Risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).
Risk-based testing: An approach to testing intended to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.
Risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
Risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.
Risk management: The systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.
Risk mitigation: See risk control.
Robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
[IEEE 610] See also error-tolerance, fault-tolerance.
Robustness testing: Testing to determine the robustness of the software product.
Root cause: An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.
Safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use.
Safety testing: Testing to determine the safety of a software product.
Sanity test: See smoke test.
Scalability: The capability of the software product to be upgraded to accommodate increased loads.
Scalability testing: Testing to determine the scalability of the software product.
Scribe: The person who records, on a logging form, each defect mentioned and any suggestions for process improvement during a review meeting. The scribe has to ensure that the logging form is readable and understandable.
Scripted testing: Test execution carried out by following a previously documented sequence of tests.
Scripting language: A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).
Security: Attributes of software products that bear on their ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126] See also functionality.
Security testing: Testing to determine the security of the software product. See also functionality testing.
Security testing tool: A tool that provides support for testing security characteristics and vulnerabilities.
Security tool: A tool that supports operational security.
Serviceability testing: See maintainability testing.
Severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]
Simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system.
[ISO 2382/1]
Simulator: A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO-178B] See also emulator.
Site acceptance testing: Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

Posted By - Amandeep Dhanjal Posted On - Saturday, October 30, 2010
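The risk, risk analysis, and risk-based testing entries above are commonly operationalized by scoring each identified product risk as impact times likelihood and testing the highest exposures first. The Python sketch below assumes 1-5 ordinal scales and invented risk items; real scales and items vary by organization.

```python
# Hypothetical product risks with impact and likelihood on a 1-5 scale.
risks = [
    {"item": "payment processing", "impact": 5, "likelihood": 3},
    {"item": "report layout",      "impact": 2, "likelihood": 4},
    {"item": "login",              "impact": 4, "likelihood": 2},
]

# Risk analysis: exposure = impact x likelihood.
for r in risks:
    r["exposure"] = r["impact"] * r["likelihood"]

# Risk-based testing: schedule test effort against the highest exposure first.
# Python's sort is stable, so ties keep their original order.
prioritized = sorted(risks, key=lambda r: r["exposure"], reverse=True)
```

Here "payment processing" (exposure 15) would be tested first; the two items tied at 8 follow in their original order.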
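Robustness testing, defined above, feeds invalid inputs to the test object and checks that it fails in a controlled way rather than crashing. A minimal Python sketch, assuming a hypothetical `parse_age` function; the invalid values are illustrative only.

```python
def parse_age(text):
    """Hypothetical function under test: parse an age, rejecting bad input."""
    try:
        age = int(text)
    except (TypeError, ValueError):
        return None          # controlled rejection, not a crash
    return age if 0 <= age <= 150 else None

# Robustness-testing sketch: every invalid input must be rejected
# gracefully, and valid input must still work.
invalid_inputs = ["", "abc", None, "-1", "999", "12.5"]
assert all(parse_age(x) is None for x in invalid_inputs)
assert parse_age("42") == 42
```

The point of the technique is the input list: it deliberately covers empty, malformed, wrongly typed, and out-of-range values, i.e. exactly the inputs a purely functional test would skip.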