In this short blog article, we would like to explain two terms that are often mixed up during analytical method validation: specificity and selectivity.
According to ICH Q2(R1), the official guideline for method validation, specificity is defined as:
“Specificity is the ability to assess unequivocally the analyte in the presence of components which may be expected to be present.”
But what does this mean? Let's put it in other words: specificity tells us how strongly other substances present in the sample interfere when analysing the analyte. Imagine you are carrying a bunch of keys and only one key among them can open the lock of your door. A method that can identify the correct key for the lock could be termed "specific": the method is specific for that key among the bunch of other keys.
For specificity, the identification of the other keys in the bunch is not required.
For analytical methods, specificity means being able to identify the analyte in a mixture of similar components, where the identity of the other components is not important. It can be demonstrated by analysing
- either a known analyte among a mixture of structurally similar compounds (+) or
- a mixture of structurally similar molecules without the analyte (-).
The second approach may also be termed a "matrix interference" test and is used to check whether the matrix (e.g. the formulation buffer) of the drug influences the results through enhancing or quenching effects. The first approach can, in the case of separation techniques such as HPLC methods, also be regarded as "separation selectivity": the method is checked for whether it can still separate a sample spiked with known amounts of potentially interfering substances.
Quality control (QC) lab methods to be validated according to GMP can be clustered into three main groups: identification tests, impurity tests and assays. The validation parameter specificity is required for all of them. It is easy to see, however, that specificity is absolutely necessary for identification methods: it has to be ensured that only the analyte that is supposed to be determined is detected, and that no cross-reactions with other substances present in the sample occur.
Sometimes the term "selectivity" is used to mean the same thing, although its definition is slightly different. The term is not mentioned in the ICH guideline but is used in the European guideline on bioanalytical method validation, where it is defined as:
“The analytical method should be able to differentiate the analyte(s) of interest and IS [note from the editor: IS = internal standard] from endogenous components in the matrix or other components in the sample.”
In other words, selectivity is like specificity, except that the identification of all components in the mixture is mandatory. In the example above, selectivity requires identifying all the keys in the bunch, not just the one that opens the lock.
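The contrast between the two terms can be sketched in code. The following is a purely illustrative toy, not a real assay: the keys, the target key and the function names are all made up. A "specific" method responds only to the one analyte of interest; a "selective" method differentiates every component in the mixture.

```python
# Hypothetical "sample": a bunch of keys, only one of which opens the door.
bunch = ["car key", "office key", "front door key", "mailbox key"]
target = "front door key"

def specific_method(sample):
    """Specific: detects only the analyte; other components stay unidentified."""
    return [component for component in sample if component == target]

def selective_method(sample):
    """Selective: identifies every component in the mixture individually."""
    return {component: f"identified as '{component}'" for component in sample}

print(specific_method(bunch))   # only the one key of interest
print(selective_method(bunch))  # all four keys, each one identified
```

The specific method ignores the other keys entirely, while the selective method must account for each of them, mirroring the definitions above.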
It is important to understand that the term specificity describes a method's ability to respond to one single analyte only, while selectivity is used when the method responds to several different analytes in the sample. In the context of analytical chemistry, IUPAC recommends the term selectivity.
For analytical methods such as separation techniques, the individual components should be clearly resolved. For chromatographic techniques, the chromatograms should show a clear resolution between the different peaks, as required by the ICH Q2(R1) guideline: "For critical separations, specificity can be demonstrated by the resolution of the two components which elute closest to each other."
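The resolution mentioned in the guideline is usually calculated from the retention times and baseline peak widths of the two closest-eluting peaks. Below is a minimal sketch using the standard formula Rs = 2·(t2 − t1)/(w1 + w2); the numerical values are hypothetical and only serve to show the calculation.

```python
def resolution(t1, t2, w1, w2):
    """Chromatographic resolution between two adjacent peaks.

    t1, t2: retention times of the earlier and later peak
    w1, w2: baseline peak widths, in the same time units
    """
    return 2.0 * (t2 - t1) / (w1 + w2)

# Hypothetical retention times (min) and baseline widths (min)
# for the two components that elute closest to each other:
rs = resolution(t1=4.8, t2=5.4, w1=0.35, w2=0.37)
print(f"Rs = {rs:.2f}")  # prints "Rs = 1.67"
```

A resolution of about 1.5 or higher is commonly regarded as baseline separation, so the hypothetical pair above would pass a typical critical-separation check.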