7.3.6 Design and development validation
Design and development validation shall be performed in accordance with planned arrangements (see 7.3.1) to
ensure that the resulting product is capable of meeting the requirements for the specified application or intended use,
where known. Wherever practicable, validation shall be completed prior to the delivery or implementation of the
product. Records of the results of validation and any necessary actions shall be maintained (see 4.2.4).
Validation of software is aimed at providing reasonable confidence that it will meet its operational requirements.
Before offering the product for customer acceptance, the organization should validate the operation of the
product in accordance with its specified intended use, under conditions similar to the application environment,
as specified in the contract. Any differences between the validation environment and the actual application
environment, and the risks associated with such differences, should be identified and justified as early in the life
cycle as possible, and recorded. In the course of validation, configuration audits or evaluations may be
performed, where appropriate, before release of a configuration baseline. Configuration audits or evaluations
confirm, by examination of the review, inspection and test records, that the software product complies with its
contractual or specified requirements. This may require analysis, simulation or emulation where validation is not
practicable in operational conditions.
In software development, it is important that the validation results and any further actions required to meet the
specified requirements are recorded, and checked when the actions are completed.
In some cases, it may not be possible, or feasible, to validate fully the software product by measurement and
monitoring. An example may be where safety-related software cannot be tested under actual circumstances
without risking serious consequences, or perhaps because the actual circumstances themselves are rare and difficult to reproduce.
The inability to test some software products exhaustively and conclusively may lead the organization to decide
a) how confidence can be gained from the development and tools used, and
b) what types of testing or analysis can be performed to increase confidence that the product will perform
correctly under the “untestable” circumstances, e.g. static code analysis.
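As an illustration of the static-analysis approach mentioned in b), the following sketch uses Python's standard `ast` module to flag bare `except:` clauses, a pattern that can silently mask failures in circumstances that cannot be exercised by testing. The particular rule checked here is illustrative only, not a requirement of this guideline.

```python
import ast

# Hypothetical source fragment under analysis.
SOURCE = """
def read_sensor(dev):
    try:
        return dev.value()
    except:
        return 0.0
"""

def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare `except:` handlers in the source."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

print(find_bare_excepts(SOURCE))  # [5]
```

A real project would apply such checks with an established static-analysis tool rather than ad hoc scripts; the point is that analysis of the source can yield evidence about behaviour that testing cannot reach.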
Whatever methods are used, they should be commensurate with the risk and consequences of design and development failure.
Validation may often be performed by testing. Testing may be required at several levels, from the individual
software item to the complete software product. There are several different approaches to testing, and the
extent of testing and the degree of controls on the test environment, test inputs and test outputs may vary with
the approach, the complexity of the product and the risk associated with the use of the product. Test planning
should address test types, objectives, sequence and scope of testing, test cases, test data and expected
results. Test planning should identify the human and physical resources needed for testing and define the
responsibilities of those involved.
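One lightweight way to make such planning auditable is to record each plan as structured data. The sketch below mirrors the items named above (test type, objective, scope, test cases, expected results, responsibilities); the field names and the example values are assumptions for illustration, not prescribed by this guideline.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    inputs: dict
    expected: object          # expected result, compared when the test is run

@dataclass
class TestPlan:
    test_type: str            # e.g. "unit", "integration", "qualification"
    objective: str
    scope: str
    responsible: str          # person or team responsible for execution
    cases: list[TestCase] = field(default_factory=list)

# Illustrative plan entry.
plan = TestPlan(
    test_type="qualification",
    objective="confirm the product meets its defined requirements",
    scope="complete software product prior to delivery",
    responsible="independent test team",
)
plan.cases.append(TestCase("QT-001", {"voltage": 5.0}, expected=True))
print(len(plan.cases))  # 1
```

Keeping the plan in a machine-readable form allows completeness checks (e.g. every requirement mapped to at least one test case) to be automated.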
Specific testing for software includes establishing, documenting, reviewing and implementing plans for the following types of test:
a) unit tests, i.e. stand-alone tests of software components;
b) integration and system tests, i.e. tests of aggregations of software components (and the complete system);
c) qualification tests, i.e. tests of the complete software product prior to delivery to confirm the software meets
its defined requirements;
d) acceptance tests, i.e. tests of the complete software product to confirm the software meets its acceptance criteria.
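As an example of level a), a stand-alone unit test can be written with Python's standard `unittest` module. The component under test here (a `clamp` function) is hypothetical and chosen only to show the shape of such a test.

```python
import unittest

# Hypothetical component under test.
def clamp(value: float, low: float, high: float) -> float:
    """Constrain value to the closed interval [low, high]."""
    return max(low, min(value, high))

class ClampUnitTest(unittest.TestCase):
    def test_within_range(self):
        self.assertEqual(clamp(5.0, 0.0, 10.0), 5.0)

    def test_clamped_low(self):
        self.assertEqual(clamp(-1.0, 0.0, 10.0), 0.0)

    def test_clamped_high(self):
        self.assertEqual(clamp(11.0, 0.0, 10.0), 10.0)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampUnitTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Integration, qualification and acceptance tests differ mainly in the scope of what is exercised and who approves the results, but the same discipline of documented cases with expected outcomes applies at every level.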
Regression testing should be performed to verify or validate that the capabilities of the software have not been
compromised by a change.
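A minimal sketch of such a regression check, assuming a hypothetical `price_with_tax` function: previously verified results are recorded as a baseline and re-run after every change, so a modification that alters formerly correct behaviour causes a visible failure. The function, tax rate and rounding are assumptions for illustration.

```python
# Hypothetical function under test.
def price_with_tax(net: float, rate: float = 0.20) -> float:
    return round(net * (1.0 + rate), 2)

# Baseline of previously verified results, re-checked after every change.
REGRESSION_BASELINE = [
    ((10.00,), 12.00),
    ((19.99,), 23.99),
    ((0.00,), 0.00),
]

def run_regression() -> bool:
    """Return True only if every baseline case still produces its recorded result."""
    return all(price_with_tax(*args) == expected
               for args, expected in REGRESSION_BASELINE)

print(run_regression())  # True
```

In practice the baseline would be the accumulated suite of unit, integration and system tests, selected and re-executed in proportion to the risk introduced by the change.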
Acceptance tests are those that are performed for the customer's benefit with the aim of determining the
acceptability of the product. Acceptance may be with or without defects or deviations from requirements, by
agreement of the parties involved.
Testing tools and the environment to be used should be qualified and controlled, and any limitations to testing should be identified and recorded.
Testing procedures should cover recording and analysis of results, as well as problem and change management.