Session
Information Systems and Security
Description
The testing of simulation models has much in common with testing processes in other types of application involving software development. However, there are also important differences associated with the fact that simulation model testing involves two distinct aspects, which are known as verification and validation. Model validation is concerned with investigation of modelling errors and model limitations, while verification involves checking that the simulation program is an accurate representation of the mathematical and logical structure of the underlying model. Success in model validation depends upon the availability of detailed information about all aspects of the system being modelled. It may also depend on the availability of high-quality data from the system which can be used to compare its behaviour with that of the corresponding simulation model. Transparency, high standards of documentation and good management of simulation models and data sets are basic requirements in simulation model testing. Unlike most other areas of software testing, model validation often has subjective elements, with potentially important contributions from face-validation procedures in which experts give a subjective assessment of the fidelity of the model. Verification and validation processes are not simply applied once but must be used repeatedly throughout the model development process, with regression testing principles being applied. Decisions about when a model is acceptable for the intended application inevitably involve some form of risk assessment. A case study concerned with the development and application of a simulation model of a hydro-turbine and electrical generator system is used to illustrate some of the issues arising in a typical control engineering application. Results from the case study suggest that it is important to bring together objective aspects of simulation model testing and the more subjective face-validation aspects in a coherent fashion.
Suggestions are also made about the need for changes in the approach to teaching simulation techniques to engineering students, giving more emphasis to issues of model quality, testing and validation.
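The quantitative side of validation described in the abstract, comparing measured system data with the behaviour of the corresponding simulation model, is often expressed through error metrics such as the root-mean-square error or Theil's inequality coefficient. The following sketch illustrates the idea; the step-response data are invented for illustration and are not taken from the paper's hydro-turbine case study:

```python
import math

def rmse(measured, simulated):
    """Root-mean-square error between measured system output and model output."""
    if len(measured) != len(simulated):
        raise ValueError("series must have equal length")
    return math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / len(measured))

def theil_u(measured, simulated):
    """Theil's inequality coefficient: 0 indicates a perfect fit, 1 the worst case."""
    num = rmse(measured, simulated)
    den = (math.sqrt(sum(m * m for m in measured) / len(measured))
           + math.sqrt(sum(s * s for s in simulated) / len(simulated)))
    return num / den

# Hypothetical step-response samples: measured plant output vs simulated output.
measured = [0.0, 0.42, 0.71, 0.88, 0.96, 1.0]
simulated = [0.0, 0.40, 0.69, 0.87, 0.96, 1.0]

print(round(rmse(measured, simulated), 4))     # small value -> close fit
print(round(theil_u(measured, simulated), 4))  # near 0 -> good agreement
```

Such metrics give an objective complement to the face-validation procedures mentioned in the abstract, but the threshold at which a fit is "good enough" remains an application-specific risk-assessment decision.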
Keywords:
validation, verification, transparency, documentation, control engineering, education
Proceedings Editor
Edmond Hajrizi
ISBN
978-9951-437-48-6
First Page
55
Last Page
64
Location
Durres, Albania
Start Date
28-10-2016 9:00 AM
End Date
30-10-2016 5:00 PM
DOI
10.33107/ubt-ic.2016.8
Recommended Citation
Murray-Smith, David J., "Some Issues in the Testing of Computer Simulation Models" (2016). UBT International Conference. 8.
https://knowledgecenter.ubt-uni.net/conference/2016/all-events/8
Some Issues in the Testing of Computer Simulation Models