This section specifies the plans for producing updates to this software testing plan document itself and the methods for distribution of updates, along with version control and configuration management requirements.

You may use the annotated sample testing plan discussed below:

The following is an annotated sample testing plan outline based on the "IEEE Standard for Software Test Documentation" (IEEE Std 829-1998; https://www.ruleworks.co.uk/testguide/IEEE-std-829-1998.htm).

1. Introduction. This section could have the following subsections:

a. Objectives: A high-level description of the scope, approach, resources, and schedule of the testing activities. It should also include a concise summary of the test plan objectives, the products to be delivered, major work activities, major work products, major milestones, required resources, and the master high-level schedule, budget, and effort requirements.

b. Background

c. Scope: This section specifies the plans for producing updates to this software testing plan document itself and the methods for distribution of updates along with version control and configuration management requirements.

d. References

e. Definitions and Notations

2. Test items. The following documents should be referenced in this section: the requirements specification, design specification, user's guide, operations guide, installation guide, features, defect removal procedures, and verification and validation plans.

This section could then include the following subsections:

a. Program Modules

b. Job Control Procedures

c. User Procedures

d. Operator Procedures

3. Features to be tested

4. Features not to be tested

5. Approach. Testing approaches should be described in sufficient detail to permit accurate test effort estimation. This section should identify the types of testing to be performed and the methods and criteria to be used in performing test activities. The specific methods and procedures for each type of testing should be described in detail. The criteria for evaluating the test results and the techniques that will be used to judge the comprehensiveness of the testing effort should also be discussed.

This section could include the following subsections:

a. Unit Testing

b. Integration Testing

c. Conversion Testing: Test whether all data elements and historical data are converted from the old system format to the new system format (see the sketch after this list).

d. Job Stream Testing: Test whether the system operates in the production environment.

e. Interface Testing

f. Security Testing

g. Recovery Testing: Test whether the system restart/backup/recovery operations work as specified.

h. Performance Testing

i. Regression Testing

j. Acceptance Testing

k. Beta Testing
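
As an illustration of the kind of check a conversion test might automate, the following is a minimal Python sketch that converts a record from a hypothetical fixed-width legacy format into a new dictionary-based format and asserts that every field survives the conversion. The field names, column widths, and sample values are assumptions made purely for illustration; they are not part of the IEEE 829 outline.

# Minimal sketch of a data-conversion check. The legacy layout, field names,
# and sample record are hypothetical; substitute the project's real formats.

def convert_legacy_record(line: str) -> dict:
    """Convert one fixed-width legacy record into the new dictionary format."""
    return {
        "customer_id": int(line[0:6]),      # columns 1-6: zero-padded id
        "name": line[6:26].strip(),         # columns 7-26: name, space-padded
        "balance_cents": int(line[26:36]),  # columns 27-36: balance in cents
    }

def test_conversion_preserves_all_fields():
    legacy = "000042" + "Jane Doe".ljust(20) + "0000012550"
    assert convert_legacy_record(legacy) == {
        "customer_id": 42,
        "name": "Jane Doe",
        "balance_cents": 12550,
    }

if __name__ == "__main__":
    test_conversion_preserves_all_fields()
    print("conversion check passed")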

6. Pass/Fail criteria. One of the most difficult and political problems is deciding when to stop testing. Example criteria for exiting testing include (but are not limited to): the scheduled testing time has expired (a very weak criterion), a predefined number of defects has been discovered, all the formal tests have been executed without detecting any defects (testers may lose motivation under this criterion), or some combination of the above. This section may include the following subsections:

a. Suspension Criteria: the conditions under which all or a portion of the testing activity on the test items associated with this plan should be suspended.

b. Resumption Criteria: the conditions that need to be met to resume testing activities after suspension.

c. Approval Criteria: The conditions for the approval of test results and the formal testing approval process. This section should also define who needs to approve a test deliverable, when it will be approved, and what the backup plan is if an approval cannot be obtained.

d. Metrics: "You cannot control what you cannot measure" (Tom DeMarco). A metric is a measurable indication of some quantitative aspect of a system; it should be measurable, independent of human influence, accountable, and precise. A metric can be a "result" or a "predictor". A result metric measures a completed event or process (e.g., total elapsed time to process a business transaction). A predictor metric is an early warning metric with a strong correlation to some later result (e.g., response time predicted through statistical regression analysis as more terminals are added to a system). The motivation for collecting test metrics is to make the testing process more effective; a small illustration of a predictor metric follows.
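
To make the idea of a predictor metric concrete, here is a small Python sketch that fits a least-squares line to hypothetical measurements of mean response time against the number of attached terminals and extrapolates the response time for a larger terminal count. All of the numbers are invented for illustration only.

# Illustrative predictor metric: fit a least-squares line to hypothetical
# (terminals, response-time) measurements and extrapolate to more terminals.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical measurements: terminals attached vs. mean response time (seconds).
terminals = [10, 20, 30, 40, 50]
response_s = [0.8, 1.1, 1.5, 1.8, 2.2]

slope, intercept = fit_line(terminals, response_s)
predicted = slope * 80 + intercept  # predicted response time at 80 terminals
print(f"predicted response time at 80 terminals: {predicted:.2f} s")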

7. Testing process

a. Test Deliverables: Example test deliverables include (but are not limited to) test cases, test logs, test incident reports, test summary reports, and metrics reports.

b. Testing Tasks and Dependencies: The sequence of tasks in the project work plans should be analyzed for activity and task dependencies.

c. Responsibilities: This section identifies the groups (e.g., developers, testers, operations staff, technical support staff, data administration staff, and users) responsible for managing, designing, preparing, executing, witnessing, checking, and resolving test activities.

d. Defect recording/tracking process: Defect control procedures need to be established to manage each defect from initial identification through to reconciliation (a sketch of such a lifecycle follows this list).

e. Configuration procedures: Assembling a software system involves tools to transform the source components, or source code, into executable programs. Example tools are compilers and linkage editors. Configuration build procedures need to be defined to identify the correct component versions and execute the component build procedures.

f. Issue resolution procedures: Testing issues can arise at any point in the development process and must be resolved successfully. The primary responsibility for issue resolution lies with the project manager, who should work with the project sponsor to resolve issues.

g. Resources

h. Schedule
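
As a sketch of the defect recording/tracking control described in item d above, the following Python fragment models a defect record whose state can only move through an assumed lifecycle from initial identification to closure. The state names and transitions are illustrative assumptions, not something prescribed by IEEE 829.

# Sketch of a defect record with a controlled lifecycle. State names and
# transitions are assumed for illustration; use the project's own workflow.

ALLOWED_TRANSITIONS = {
    "identified": {"assigned"},
    "assigned": {"fixed"},
    "fixed": {"retested"},
    "retested": {"closed", "assigned"},  # reopen (reassign) if the retest fails
    "closed": set(),
}

class DefectRecord:
    def __init__(self, defect_id: int, summary: str):
        self.defect_id = defect_id
        self.summary = summary
        self.state = "identified"
        self.history = ["identified"]

    def move_to(self, new_state: str) -> None:
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

defect = DefectRecord(101, "nightly job stream aborts on an empty input file")
for step in ("assigned", "fixed", "retested", "closed"):
    defect.move_to(step)
print(defect.defect_id, defect.history)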

8. Environmental requirements

a. Hardware

b. Software

c. Security

d. Tools: The special software tools, techniques, and methodologies employed in the testing effort should be listed, along with the purpose and use of each. Plans for the acquisition, training, support, and qualification of each tool or technique should also be included.

e. Publications

f. Risks and Assumptions

9. Change management process.

a. Change request process: Change control is the process by which a modification to a software component is proposed, evaluated, approved or rejected, scheduled, and tracked.

b. Version control process: A method for uniquely identifying each software component needs to be established via a labeling scheme. Software components evolve through successive revisions, and each revision needs to be distinguished. A simple way to distinguish component revisions is with a pair of integers (1.1, 1.2, etc.) that identify the release number and the level number; a small sketch of this scheme follows.
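
A minimal Python sketch of the release.level labeling scheme mentioned in item b: the level number is incremented for a revision within a release, and the release number is incremented (with the level restarted) for a new release. The helper names and the choice to restart the level at 1 are assumptions for illustration.

# Minimal sketch of the release.level labeling scheme (e.g., 1.1, 1.2, 2.1).
# Helper names and the restart-at-1 convention are assumptions for illustration.

def parse_label(label: str) -> tuple[int, int]:
    release, level = label.split(".")
    return int(release), int(level)

def next_revision(label: str) -> str:
    """Bump the level number for a new revision within the same release."""
    release, level = parse_label(label)
    return f"{release}.{level + 1}"

def next_release(label: str) -> str:
    """Bump the release number and restart the level for a new release."""
    release, _ = parse_label(label)
    return f"{release + 1}.1"

assert next_revision("1.1") == "1.2"
assert next_release("1.2") == "2.1"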

10. Plan approvals. This section should include the names, signatures, and dates of the plan approvers.
