What is method validation?


Validating an analytical method demonstrates that performing the method yields reliable and reproducible results, irrespective of when and by whom it is performed. It proves that the method is suitable for its intended purpose.

A method validation is necessary when a newly established method is to become a routine measurement, when aspects of its practical performance have changed, or when the method is to be used at a different site, as is the case with method transfers.

There are regulatory requirements and guidelines that help take critical points into account during method validation. One of them is ICH Q2(R1). This guideline, "Validation of Analytical Procedures: Text and Methodology", published by the International Conference on Harmonisation, offers help with the classification of different analytical methods and names the parameters to be evaluated.

Analytical methods are classified according to their purpose into methods for identification, determination of content (assay) and testing for impurities. Depending on this classification, different parameters have to be taken into account during method validation. The significant parameters are: accuracy, precision (in the form of repeatability and intermediate precision), specificity, limit of detection (LOD) and limit of quantification (LOQ), linearity, working range and robustness.

Accuracy and precision can be explained with the aid of dartboards. A method can be very precise: all results are close to each other, but unfortunately at the outermost circle of the dartboard. A method can have high accuracy: the results center on the bullseye on average, but unfortunately with a great deal of scattering around it. Only when both good precision and high accuracy have been achieved do the results hit the bullseye and land where they are supposed to be. Such a result is reliable.
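As a minimal sketch, both properties can be quantified from replicate measurements of a sample with a known reference value: accuracy as the mean recovery against that value, precision (repeatability) as the relative standard deviation. The values below are hypothetical.

```python
import statistics

# Hypothetical replicate measurements of a reference sample
# with a known true value of 100.0 (e.g. mg/mL)
true_value = 100.0
replicates = [98.7, 101.2, 99.5, 100.8, 99.9, 100.4]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)

recovery = mean / true_value * 100  # accuracy: closeness of the mean to the true value, in %
rsd = sd / mean * 100               # precision (repeatability): scatter of the replicates, in %

print(f"Mean recovery: {recovery:.1f} %")
print(f"RSD: {rsd:.2f} %")
```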

It’s useful to know the limit of detection and the limit of quantification when determining impurities. Here, the limit of detection is relevant for limit tests (the transition from "nothing can be seen" to "something can be seen"), while the limit of quantification is relevant for quantitative tests (what is the lowest concentration that can still be measured quantitatively with a correct result?).
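ICH Q2(R1) allows, among other approaches, estimating both limits from the calibration curve as LOD = 3.3 σ/S and LOQ = 10 σ/S, where σ is the standard deviation of the response and S is the slope of the calibration line. A short sketch of this calculation, with made-up calibration data:

```python
import numpy as np

# Hypothetical calibration data: concentration vs. measured signal
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # e.g. µg/mL
signal = np.array([0.052, 0.101, 0.198, 0.405, 0.802])

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)  # residual standard deviation of the regression

lod = 3.3 * sigma / slope   # ICH Q2(R1): LOD = 3.3 sigma / S
loq = 10.0 * sigma / slope  # ICH Q2(R1): LOQ = 10 sigma / S

print(f"LOD ≈ {lod:.3f} µg/mL, LOQ ≈ {loq:.3f} µg/mL")
```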

Specificity is essential, especially for methods of identification. When an identification method is used, it has to be ensured that only the analyte that is supposed to be determined in the sample is detected and that cross-reactions are excluded. Otherwise, false-positive results will be the consequence.

The linearity of the method is of crucial importance for quantitative determinations, where the concentration of the examined analyte is unknown and can fluctuate. A method is linear when the measuring signal is directly proportional to the concentration. The unknown concentration of the analyte can be calculated correctly with the help of a regression line.
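As an illustration, fitting a least-squares regression line to hypothetical calibration standards and inverting it to back-calculate an unknown concentration might look like this (all data made up; the correlation coefficient serves as a simple linearity check):

```python
import numpy as np

# Hypothetical calibration standards: known concentrations and measured signals
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # e.g. µg/mL
signal = np.array([0.11, 0.20, 0.41, 0.79, 1.62])

slope, intercept = np.polyfit(conc, signal, 1)  # regression line: signal = slope * conc + intercept
r = np.corrcoef(conc, signal)[0, 1]             # correlation coefficient as a simple linearity check

# Back-calculate the unknown concentration of a sample from its measured signal
sample_signal = 0.55
sample_conc = (sample_signal - intercept) / slope

print(f"r = {r:.4f}")
print(f"Estimated concentration: {sample_conc:.2f} µg/mL")
```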

The working range of a method can be derived from its linearity, accuracy and precision. It is the interval between the lowest and the highest concentration within which an accurate and precise result can be achieved.
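One way to make this concrete: check at each concentration level whether the mean recovery and the repeatability (RSD) meet predefined acceptance criteria; the working range is then the span of levels that pass. The data and the acceptance limits below are purely hypothetical.

```python
import statistics

# Hypothetical triplicate recoveries (%) at each concentration level (e.g. µg/mL)
levels = {
    0.5:  [78.0, 81.5, 75.2],    # too low: poor accuracy near the LOQ
    1.0:  [97.8, 99.1, 101.4],
    5.0:  [100.2, 99.6, 100.9],
    10.0: [99.0, 101.1, 100.3],
    20.0: [96.5, 104.8, 92.1],   # too high: scatter increases (e.g. detector saturation)
}

def level_ok(recoveries):
    # Hypothetical acceptance criteria: mean recovery 95-105 %, RSD <= 3 %
    mean = statistics.mean(recoveries)
    rsd = statistics.stdev(recoveries) / mean * 100
    return 95.0 <= mean <= 105.0 and rsd <= 3.0

passing = [c for c, r in sorted(levels.items()) if level_ok(r)]
print(f"Working range: {passing[0]} to {passing[-1]}")  # here: 1.0 to 10.0
```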

A method is robust when it leads to the same result despite small deviations from the usual performance procedure. A practical example: with a robust SDS-PAGE method, the determined molecular weight will not differ even when precast gels from different suppliers (but with the same gel concentration) are used.