SDLC Test Process

Test Planning, Monitoring and Control
Test Analysis and Design
Test Implementation and Execution
Evaluating Exit Criteria and Reporting
Test Closure Activities

The software development life cycle (SDLC) is a framework defining tasks performed at each step in the software development process. SDLC is a structure followed by a development team within the software organization. It consists of a detailed plan describing how to develop, maintain and replace specific software. For testing, the following applies:

Test Planning, Monitoring and Control

For each test level, test planning starts at the initiation of the test process for that level and continues throughout the project until the completion of closure activities for that level. It involves the identification of the activities and resources required to meet the objectives identified in the test strategy. Test planning also includes identifying the methods for gathering and tracking the metrics that will be used to guide the project, determine adherence to plan and assess achievement of the objectives. By determining useful metrics during the planning stages, tools can be selected, training can be scheduled and documentation guidelines can be established.

In order for a Test Manager to provide efficient test control, a testing schedule and monitoring framework needs to be established to enable tracking of test work products and resources against the plan. This framework should include the detailed measures and targets that are needed to relate the status of test work products and activities to the plan and strategic objectives.

Of particular importance is the need to relate the status of test work products and activities to the test basis in a manner that is understandable and relevant to the project and business stakeholders.

Test control is the ongoing activity of comparing actual progress against the plan and reporting the status, including any deviations from the plan. It also involves taking the actions necessary to meet the mission and objectives of the project, including implementing corrective actions as and when required. Test control guides the testing through each of the test levels to fulfil the test process objectives.
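The comparison of actual progress against the plan can be illustrated with a small sketch. The metrics and the 75% target below are hypothetical examples, not standard values; a real monitoring framework would track the measures agreed during test planning.

```python
from dataclasses import dataclass

@dataclass
class TestProgress:
    """Hypothetical progress metrics for one test level."""
    planned_cases: int
    executed_cases: int
    passed_cases: int

    def execution_rate(self) -> float:
        # Fraction of planned test cases executed so far
        return self.executed_cases / self.planned_cases

    def deviation_report(self, expected_rate: float) -> str:
        # Compare actual progress against the planned rate and flag
        # when corrective action may be required (test control)
        actual = self.execution_rate()
        if actual < expected_rate:
            return (f"BEHIND PLAN: {actual:.0%} executed, "
                    f"{expected_rate:.0%} expected")
        return f"ON TRACK: {actual:.0%} executed"

progress = TestProgress(planned_cases=200, executed_cases=120, passed_cases=110)
print(progress.deviation_report(expected_rate=0.75))
# → BEHIND PLAN: 60% executed, 75% expected
```

A deviation report like this is the trigger for the corrective actions described above; the decision of what action to take remains with the Test Manager.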

Test Analysis and Design

Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases. Test analysis and design activities include the following major tasks:

  • Reviewing the test basis (e.g. requirements, risk analysis reports, architecture, design, interface specifications)
  • Evaluating testability of the test basis and test objects
  • Identifying and prioritising test conditions based on analysis of test items, the specification, behaviour and structure of the software
  • Designing and prioritising high level test cases
  • Identifying necessary test data to support the test conditions and test cases
  • Designing the test environment set-up and identifying any required infrastructure and tools
  • Creating bi-directional traceability between test basis and test cases
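The last task, bi-directional traceability, can be sketched as a simple data structure that records each link in both directions, so that the test cases for a requirement and the requirements covered by a test case can both be queried. The identifiers below are illustrative, not taken from any real project.

```python
from collections import defaultdict

class TraceabilityMatrix:
    """Minimal bi-directional mapping between test basis items
    (e.g. requirement IDs) and test case IDs."""

    def __init__(self):
        self._req_to_tests = defaultdict(set)
        self._test_to_reqs = defaultdict(set)

    def link(self, requirement_id: str, test_case_id: str) -> None:
        # Record the link in both directions so either side can be queried
        self._req_to_tests[requirement_id].add(test_case_id)
        self._test_to_reqs[test_case_id].add(requirement_id)

    def tests_for(self, requirement_id: str) -> set:
        return set(self._req_to_tests.get(requirement_id, set()))

    def requirements_for(self, test_case_id: str) -> set:
        return set(self._test_to_reqs.get(test_case_id, set()))

    def uncovered(self, all_requirements) -> set:
        # Requirements with no linked test case: a coverage gap
        return {r for r in all_requirements if not self._req_to_tests.get(r)}

matrix = TraceabilityMatrix()
matrix.link("REQ-1", "TC-01")
matrix.link("REQ-1", "TC-02")
matrix.link("REQ-2", "TC-02")
print(matrix.uncovered({"REQ-1", "REQ-2", "REQ-3"}))  # → {'REQ-3'}
```

In practice this mapping usually lives in a test management tool rather than code, but the principle is the same: every link is navigable in both directions, and gaps in coverage are detectable.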

Test Implementation and Execution

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order (together with any other information needed for test execution), the environment is set up, and the tests are run.

Test implementation and execution has the following major tasks:

  • Finalising, implementing and prioritising test cases (including the identification of test data)
  • Developing and prioritising test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
  • Creating test suites from the test procedures for efficient test execution
  • Verifying that the test environment has been set up correctly
  • Verifying and updating bi-directional traceability between the test basis and test cases
  • Executing test procedures either manually or by using test execution tools, according to the planned sequence
  • Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
  • Comparing actual results with expected results
  • Reporting discrepancies as incidents and analysing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
  • Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
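Several of the tasks above (executing procedures in the planned sequence, comparing actual with expected results, and logging outcomes together with the identity and version of the software under test) can be sketched as a minimal execution loop. All names and the version string here are hypothetical placeholders for real test scripts.

```python
# Minimal sketch of a test execution loop: run each procedure in the
# planned sequence, compare actual with expected results, and log the
# outcome together with the version of the software under test (SUT).

def run_tests(procedures, sut_version: str):
    log = []
    for name, procedure, expected in procedures:
        actual = procedure()                  # execute the test procedure
        status = "PASS" if actual == expected else "FAIL"
        log.append({
            "test": name,
            "sut_version": sut_version,       # record SUT identity/version
            "expected": expected,
            "actual": actual,
            "status": status,                 # discrepancies become incidents
        })
    return log

# Illustrative procedures standing in for real test scripts
procedures = [
    ("addition", lambda: 2 + 2, 4),
    ("upper",    lambda: "ok".upper(), "OK"),
    ("broken",   lambda: 1 + 1, 3),           # deliberate failure
]

for entry in run_tests(procedures, sut_version="1.4.2"):
    print(entry["test"], entry["status"])
```

Each FAIL entry in the log corresponds to a discrepancy that would be reported as an incident and analysed for its cause, and the failing test would later be re-executed to confirm any fix.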

Evaluating Exit Criteria and Reporting

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. Evaluating exit criteria has the following major tasks:

  • Checking test logs against the exit criteria specified in test planning
  • Assessing if more tests are needed or if the exit criteria specified should be changed
  • Writing a test summary report for stakeholders
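Checking test logs against the exit criteria can likewise be sketched in a few lines. The 95% pass-rate threshold and the "critical" severity rule below are assumptions made for the example; real exit criteria are whatever was specified during test planning.

```python
# Illustrative check of a test log against exit criteria defined in
# planning: a minimum pass rate, and no open critical failures.
# The thresholds are example assumptions, not standard values.

def exit_criteria_met(log, min_pass_rate=0.95):
    executed = [e for e in log if e["status"] in ("PASS", "FAIL")]
    if not executed:
        return False                          # nothing executed yet
    passed = sum(1 for e in executed if e["status"] == "PASS")
    pass_rate = passed / len(executed)
    # Any open critical failure blocks exit regardless of pass rate
    open_critical = any(e.get("severity") == "critical"
                        and e["status"] == "FAIL" for e in executed)
    return pass_rate >= min_pass_rate and not open_critical

log = [
    {"status": "PASS"}, {"status": "PASS"},
    {"status": "FAIL", "severity": "minor"},
]
print(exit_criteria_met(log))  # → False (2/3 passed, below 95%)
```

If the check fails, the tasks above apply: either more testing is needed, or the stakeholders agree to change the specified exit criteria; the outcome is then communicated in the test summary report.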

Test Closure Activities

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones, for example when a software system is released, a test project is completed, or a maintenance release has been completed.

Test closure activities include the following major tasks:

  • Checking which planned deliverables have been delivered
  • Closing incident reports or raising change records for any that remain open
  • Documenting the acceptance of the system
  • Finalising and archiving testware, the test environment and the test infrastructure for later reuse
  • Handing over the testware to the maintenance organization
  • Analysing lessons learned to determine changes needed for future releases and projects
  • Using the information gathered to improve test maturity