8.3.3 Assessment Specification (AsSpec)
Prüfspezifikation
The Assessment Specification contains the description of assessment requirements and goals, assessment methods, assessment criteria derived from the requirements, and the test cases. Coverage of the requirements by the test cases is documented in a coverage matrix. With the help of the Assessment Specification it must be possible to decide whether the assessment has been successful.
An Assessment Specification is generated for each object to be assessed; cf. the definition of Objects to be Assessed and Qualification Requirements in the Assessment Plan.
It is possible to combine several Assessment Specifications into one document.
1. General Information
2. Requirements
2.1. Classification of the Functional Unit with regard to Criticality and IT Security
2.2. Assessment Requirements
3. Assessment Methods
4. Assessment Criteria
4.1. Coverage
4.2. Check Lists
4.3. Termination Criteria
5. Test Cases
5.1. Test Case Description
5.2. Coverage Matrix
5.2.1. Architecture Elements and Interfaces
5.2.2. User-level and Technical Requirements
For chapter 1, see the general schema "General Information".
The classification of the functional unit (System, Segment, SW Unit/HW Unit, SW Component, SW Module, Database) with regard to criticality and IT security has been defined in the development documents. It is adopted from these documents.
This chapter includes general requirements for an assessment. Examples are:
- assessments must be carried out with normal, limit, and faulty data values (a minimal sketch follows this list),
- assessments must be carried out under normal and exceptional conditions (maximum load, component failure, etc.),
- assessments must be carried out with real data,
- each execution option must, if possible, be assessed, as must potentially faulty user input.
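As an illustration of the first requirement, the following Python sketch exercises an object to be assessed with normal, limit, and faulty data values and records whether it reacts as expected. All names and values are hypothetical examples, not part of the V-Modell.

```python
# Minimal sketch: exercising an object to be assessed with normal,
# limit, and faulty data values. All names are hypothetical.

def buffer_level_percent(fill: float, capacity: float) -> float:
    """Object to be assessed: returns the fill level as a percentage."""
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    return 100.0 * fill / capacity

# Each entry: (category, input values, expected result or expected exception).
cases = [
    ("normal", (50.0, 200.0), 25.0),
    ("limit",  (200.0, 200.0), 100.0),    # upper boundary value
    ("limit",  (0.0, 200.0), 0.0),        # lower boundary value
    ("faulty", (10.0, 0.0), ValueError),  # invalid capacity
]

for category, args, expected in cases:
    try:
        result = buffer_level_percent(*args)
        passed = result == expected
    except Exception as exc:
        # A faulty case passes if the expected exception type is raised.
        passed = isinstance(expected, type) and isinstance(exc, expected)
    print(f"{category:6} {args} -> {'PASS' if passed else 'FAIL'}")
```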
An assessment is separated into the sections preparation, execution, and evaluation. If the preparation and the evaluation are sufficiently described in the Assessment Procedure, they can be omitted here.
The preparation for an assessment includes the generation of test data. The methods and procedures for realizing this are determined and described, provided that they are not assumed to be known already.
Methods to execute the assessment include static analysis, test, simulation, proof of correctness, symbolic program execution, review, and inspection. The methods for executing the assessment are derived from the classification of the object to be assessed with respect to criticality and IT security, from measures assigned to the individual levels, and from other quality requirements.
It is important to specify how results are saved and evaluated, in particular with regard to repeated assessments. It must be described which data have to be stored during and after the assessment.
The methods and procedures used for this are determined and described, such as automated comparison routines, personal expertise, or maintaining a chronological log.
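The following sketch shows one possible way to combine an automated comparison routine with a chronological log: each result is compared against its expected value and appended to a time-stamped log file that can be retained for repeated assessments. File name, fields, and tolerance are assumptions for illustration only.

```python
import json
from datetime import datetime, timezone

def compare(actual, expected, tolerance=0.0):
    """Automated comparison routine: exact match, or match within a tolerance for floats."""
    if isinstance(actual, float) and isinstance(expected, float):
        return abs(actual - expected) <= tolerance
    return actual == expected

def log_result(logfile, test_case_id, actual, expected, passed):
    """Append one chronological log entry; the file is kept after the assessment."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test_case": test_case_id,
        "actual": actual,
        "expected": expected,
        "result": "passed" if passed else "failed",
    }
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example use (hypothetical test case and values):
ok = compare(3.1414, 3.1416, tolerance=0.0005)
log_result("assessment_log.jsonl", "TC-017", 3.1414, 3.1416, ok)
```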
This structural item states the criteria for each assessment. They must be established in such a way that the success of the assessment can be evaluated.
It is determined how thorough the assessment has to be (e. g. information about path coverage), so that the suitability of the object to be assessed can be guaranteed. In general, the degree of coverage depends on the criticality of the object to be assessed.
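One possible way to make this dependency explicit is a simple mapping from criticality level to required structural coverage, as in the following sketch. The level names and thresholds are illustrative assumptions, not values prescribed by the V-Modell.

```python
# Illustrative mapping only: criticality levels and thresholds are assumptions.
REQUIRED_COVERAGE = {
    "low":    {"statement": 0.80},
    "medium": {"statement": 1.00, "branch": 0.90},
    "high":   {"statement": 1.00, "branch": 1.00, "path": 0.95},
}

def coverage_sufficient(criticality: str, measured: dict) -> bool:
    """Check whether the measured coverage meets the level required
    for the given criticality of the object to be assessed."""
    required = REQUIRED_COVERAGE[criticality]
    return all(measured.get(kind, 0.0) >= goal for kind, goal in required.items())

print(coverage_sufficient("medium", {"statement": 1.0, "branch": 0.93}))  # True
```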
This contains a list of questions to be processed in the course of the product and activity assessments. The following assessment check lists should be updated at least for each generic object to be assessed; in critical cases, they have to be adapted for individual objects to be assessed as well. They must be formulated so that potential errors are easily discovered. The check lists have to cover the various error sources sufficiently:
- General criteria like completeness, product layout, spelling, consistency within the object to be assessed, uniqueness.
- Development-related criteria like consistency with previous and parallel products, comprehensibility, method conformity.
- Product-specific criteria like feasibility of a requirement, fulfillment of the requirements by a design, commenting of code, refinement level of a specification.
- System- or user-specific criteria like meeting special user requirements, adherence to strict project plan data, ability to be integrated into the organizational environment, harmonious cooperation with existing (sub-)systems.
The following includes examples of some basic questions for the check lists of the various objects to be assessed. As already mentioned above, they have to be completed and interpreted with regard to the individual project. The general "basic" check list and the corresponding "product" check list have to be used as a starting point for objects to be assessed.
"Basic" Check List for each product to be assessed:
(The following questions have in common that they can be answered merely on the basis of the present object to be assessed.)
- Is the product structured according to the product schema?
- Is the product free of syntax errors (e. g. spelling mistakes)?
- Is the product free of contradictory statements?
- Are all statements in the product uniquely expressed?
- Is the product complete?
- Are all contents that are relevant according to the product schema available in adequate detail?
(The following questions refer to the development process of a product, i. e. other related products have to be used as input for the assessment.)
- Is the product consistent with all previous products from which the product to be assessed resulted?
- Is the product free of inconsistencies and contradictions with regard to parallel products it is related to?
"Product" Check Lists for the individual representatives of the corresponding product classes:
(These questions refer to error sources typical for that kind of product, i. e. they refer to the content and the particularities of the corresponding products.)
- User Requirements
- Can the requirements be met, and can their fulfillment be proven?
- Do the User Requirements only contain the requirements wanted by the customer, free of all unnecessary ballast?
- Are the User Requirements free of hidden design decisions?
- Do the user-level requirements correspond with the details in the current analysis?
- Are the specifications from the threat and risk analysis about the requirements for IT security sufficiently guaranteed?
- Can the overall quality requirements be covered, i. e. can the mutual (negative) influence of individual quality criteria, like reliability against efficiency, be compensated?
- System Architecture
- Is the System Architecture compatible with the User Requirements and the Technical Requirements?
- Does the System Architecture plausibly cover all User Requirements?
- Do IT security concept and IT security model meet the requirements for IT security and the requirements of the threat and risk analysis?
- Is the requirements allocation of User Requirements to the elements of the System Architecture unique and complete?
- Were the inheritance rules with regard to criticality and IT security classification correctly applied?
- Do the feasibility studies contain meaningful information?
- Technical Requirements
- Can the requirements be met, and can their fulfillment be proven?
- Do the Technical Requirements correspond with the intentions of the creators of the System Architecture?
- Are the Technical Requirements free of hidden design decisions?
- Were the inheritance rules with regard to criticality and IT security classification correctly applied?
- Are the requirements for software and hardware sufficiently specified?
- Have development and SWMM environment been completely and uniquely described?
- Interface Overview
- Are all interfaces listed?
- Is there consistency with the architecture documents?
- Interface Description
- Are all interfaces identified in the Interface Overview described?
- Are the interfaces sufficiently described?
- Integration Plan
- Are the architecture elements listed in the Integration Plan compatible with those listed in the corresponding architecture document?
- Is the integration/assessment environment compatible with the development environment?
- Are the organizational and deadline specifications compatible with the Project Plan?
- Software Architecture
- Is the process design free of deadlocks?
- Is the access to shared resources sufficiently regulated?
- Is the process design compatible with the applied operating system?
- Were the inheritance rules with regard to criticality and IT security classification correctly applied?
- Were constructive measures according to the corresponding criticality level adhered to?
- Does the SW Architecture meet the User Requirements and the Technical Requirements?
- SW Design: Module, Database
- Were the feasibility studies checked?
- Were constructive measures according to the corresponding criticality level adhered to?
- Is the SW Design made in the sense of a programming specification?
- Data-Dictionary
- Is the information in the Data Dictionary consistent with the Database description?
- Is there consistency with the central data dictionary?
- Implementation Documents: SW Components, Database, SW Modules, SW Units
- Is the used module skeleton uniform?
- Were programming/coding standards adhered to?
- Was the code sufficiently and intelligibly documented?
- Assessment Plan
- Is the Assessment Plan consistent with the specifications in the QA Plan?
- Does the selection of objects to be assessed conform to the tailoring decisions?
- Are the organizational and chronological allocations consistent with the Project Plan?
- Assessment Specification
- Does the Assessment Specification cover the specifications and requirements defined in the Assessment Plan?
- If necessary, is the assessment of interfaces documented in the Assessment Specification?
- Do the assessment methods correspond with the criticality level or IT security level classification allocated to the object to be assessed?
- Can the specified assessment criteria be objectively decided?
- Assessment Procedure
- Is the Assessment Procedure sufficiently specified and documented for critical components?
- Project Manual
- Are the tailoring decisions documented clearly and traceably?
- Project Plan
- Is the time schedule free of deadlocks?
- Is the utilization of the resources (staff and other resources) less than 100 percent?
- Are the contents of the Project Plan compatible with the specifications and procedures in the Project Manual, CM Plan and QA Plan?
- CM Plan
- Does the CM Plan take into consideration the special configuration requirements of the project?
- Are specifications made both with regard to software identification and to hardware identification?
- Configuration Identification Document (CID)
- Is the CID clearly organized so configuration relationships can easily be detected?
- Has it been uniquely stated which configuration units are defined?
- Does the System CID make clear which further CIDs exist?
- Are all items and documents of the configuration completely identified with their version names?
"Activity" Check List for each activity to be assessed:
- Is the activity realized in accordance with the relevant process regulations?
- Are the valid project standards adhered to?
- Are the roles suitably allocated when the activity is realized?
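Purely for illustration, the check lists above could be captured in a simple data structure so that open or negatively answered questions are reported automatically. The representation and example answers below are assumptions, not a format prescribed by the V-Modell.

```python
# Hypothetical representation of a check list; questions are taken from the
# "basic" check list above, answers are filled in during the assessment.
basic_check_list = [
    {"question": "Is the product structured according to the product schema?", "answer": True},
    {"question": "Is the product free of syntax errors?", "answer": True},
    {"question": "Is the product free of contradictory statements?", "answer": None},  # not yet answered
    {"question": "Is the product complete?", "answer": False},
]

# Report every question that is still open or was answered negatively.
open_items = [q["question"] for q in basic_check_list if q["answer"] is not True]
for question in open_items:
    print("open or failed:", question)
```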
Termination criteria state the conditions under which the assessment can be considered successfully terminated. This structural item contains a description of the termination criteria both for a successful assessment (e. g. the required precision has been reached with a maximum deviation of +/- 0.0005) and for an unsuccessful assessment (e. g. the messages "overflow", "division by zero", "insufficient storage").
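A termination criterion of this kind can be decided automatically. The following sketch (hypothetical values and message set) treats a deviation within +/- 0.0005 as success and the error messages listed above as failure.

```python
# Hypothetical termination criterion based on the examples above.
FAILURE_MESSAGES = {"overflow", "division by zero", "insufficient storage"}
MAX_DEVIATION = 0.0005  # required precision from the example above

def assessment_terminated_successfully(actual: float, expected: float, messages: set) -> bool:
    """Successful termination: required precision reached and no failure message raised."""
    if messages & FAILURE_MESSAGES:
        return False
    return abs(actual - expected) <= MAX_DEVIATION

print(assessment_terminated_successfully(1.00042, 1.0, set()))     # True
print(assessment_terminated_successfully(1.0, 1.0, {"overflow"}))  # False
```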
This contains a description of
- what (function, precision, etc.) has to be assessed,
- what is the initial condition for this assessment,
- which input (data and signals with all characteristics required for the assessment, such as time conditions) will be required, and
- what results can be expected (output data and reactions/effects).
The test cases listed must be sufficient to meet, and to decide, the above-mentioned termination criteria.
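One possible way to record these four pieces of information per test case is a small record type, as sketched below. The field names and the example entry are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case: what is assessed, initial condition, input, expected results."""
    identifier: str
    assessed_property: str          # what (function, precision, etc.) is assessed
    initial_condition: str          # state required before execution
    inputs: dict                    # data and signals, incl. time conditions
    expected_results: dict          # output data and reactions/effects
    covered_requirements: list = field(default_factory=list)

# Hypothetical example entry.
tc = TestCase(
    identifier="TC-017",
    assessed_property="precision of the coordinate transformation",
    initial_condition="system initialized, reference data loaded",
    inputs={"coordinates": (52.52, 13.405), "timeout_s": 2},
    expected_results={"max_deviation": 0.0005},
    covered_requirements=["UR-12", "TR-34"],
)
print(tc.identifier, tc.covered_requirements)
```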
Test case descriptions can be specified for each of the following products:
- System, Segment, SW Unit
- SW Component, SW Module, Database
- Operational Information
This matrix documents the coverage of the requirements on the object to be assessed by the individual test cases.
This chapter contains a documentation of the coverage of architecture elements of an object to be assessed (e. g. overlap of integrated items by SW Modules, external and internal interfaces) and of code elements (e. g. branch/condition/path coverage) by the test cases.
It is important that the assessment of the interfaces is also covered by corresponding test cases, in addition to the coverage of the individual objects to be assessed.
This chapter contains a documentation of the coverage of user-level and technical requirements (e. g. by covering equivalence classes and limit values or time and quantity requirements) by the test cases.
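Once each test case lists the requirements it covers, the coverage matrix can be derived mechanically. The following sketch (hypothetical requirement and test case identifiers) builds such a matrix and reports requirements not yet covered by any test case.

```python
# Hypothetical requirement and test case identifiers.
requirements = ["UR-12", "UR-13", "TR-34", "TR-35"]
test_cases = {
    "TC-017": ["UR-12", "TR-34"],
    "TC-018": ["UR-13"],
}

# Coverage matrix: rows = requirements, columns = test cases.
matrix = {req: {tc: req in covered for tc, covered in test_cases.items()}
          for req in requirements}

header = "requirement " + " ".join(f"{tc:>7}" for tc in test_cases)
print(header)
for req, row in matrix.items():
    cells = " ".join(f"{'X' if hit else '-':>7}" for hit in row.values())
    print(f"{req:11} {cells}")

uncovered = [req for req, row in matrix.items() if not any(row.values())]
print("uncovered requirements:", uncovered)  # ['TR-35']
```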