Validation of calibration software – as required by ISO 17025, for instance – is a topic that people prefer not to talk about. Often there is uncertainty concerning the following: Which software actually must be validated? If so, who should take care of it? Which requirements must the validation satisfy? How can one proceed efficiently, and how should it be documented? The following blog post explains the background and provides a recommendation for implementation in five steps.
In a calibration laboratory, software is used for everything from supporting the evaluation process to fully automated calibration. Regardless of the degree of automation of the software, validation always refers to the complete process into which the software is integrated. Behind validation, therefore, lies the fundamental question of whether the calibration process fulfils its purpose and achieves all its intended goals – in other words, does it deliver the required functionality with sufficient accuracy?
Anyone carrying out validation tests should be aware of two basic principles of software testing:
Full testing isn’t possible.
Testing is always dependent on the environment.
The former states that testing all possible inputs and configurations of an application is impossible because of the large number of possible combinations. Depending on the application, the user must therefore decide which functions, configurations and quality features are to be prioritised and which are not relevant to them.
Which decision is made often depends on the second point – the operating environment of the software. In practice, requirements and priorities of software use differ from application to application. In addition, there are customer-specific adjustments to the software, for example concerning the contents of the certificate. The individual conditions in the laboratory environment, with a wide array of instruments, also generate variance. The wide range of requirement perspectives and the sheer, endless complexity of the software configurations within customer-specific application areas therefore make it impossible for a manufacturer to test for all the needs of a specific customer.
Correspondingly, taking the aforementioned points into account, validation falls to the user themselves. To make this process as efficient as possible, a procedure following these five points is recommended:
The data for typical calibration configurations should be defined as "test sets".
At regular intervals – typically once a year, but at the very least after any software update – these test sets should be entered into the software.
The resulting certificates can be compared with those from the previous version.
For an initial validation, a cross-check, e.g. via MS Excel, can take place.
The validation evidence should be documented and archived.
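The regression check in steps 1–3 can be sketched in a few lines of code: fixed test sets are run through the evaluation, and the results are compared against the archived results of the previously validated version. This is a minimal illustration only – the function names, the test-set structure and the tolerance are assumptions for the sketch, not part of any real calibration software.

```python
# Illustrative sketch of steps 1-3: re-evaluate defined "test sets" and
# compare against results archived from the previously validated version.
# All names, values and the tolerance below are hypothetical.

TOLERANCE = 1e-9  # acceptable numerical deviation between versions


def evaluate_calibration(test_set):
    """Placeholder for the calculation the calibration software performs:
    here, the measurement error (indicated - reference) at each point."""
    return [round(indicated - reference, 6)
            for indicated, reference in test_set["points"]]


def validate(test_sets, archived_results):
    """Re-run each test set and collect any deviations from the archive."""
    failures = []
    for name, test_set in test_sets.items():
        current = evaluate_calibration(test_set)
        for i, (cur, old) in enumerate(zip(current, archived_results[name])):
            if abs(cur - old) > TOLERANCE:
                failures.append((name, i, cur, old))
    return failures


# Example test set: (indicated value, reference value) pairs in bar
test_sets = {
    "pressure_0-10bar": {"points": [(0.001, 0.0), (5.003, 5.0), (9.998, 10.0)]},
}
# Results archived from the previous, validated software version
archived = {"pressure_0-10bar": [0.001, 0.003, -0.002]}

failures = validate(test_sets, archived)
print("validation passed" if not failures else f"deviations: {failures}")
```

Any non-empty `failures` list would then be investigated and the outcome documented as validation evidence, as described in the last two steps.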
WIKA provides a PDF documentation of the calculations carried out in the software.
Note
For further information on our calibration software and calibration laboratories, go to the WIKA website.