Purity determinations are part of the quality control of pharmaceuticals, their active pharmaceutical ingredients and, where applicable, their excipients (if these are not already obtained in compendial quality). They include quantitative determinations, which yield a definite value for the quantity or concentration, as well as qualitative or semi-quantitative determinations, whose results only allow a conclusion as to whether an impurity is present at all or whether it lies below a certain limit value. Such determinations are also known as "limit tests". Some will think back, with more or less enthusiasm, to a laboratory practical in drug analysis during their studies when they hear the term "limit test"...
What are limit tests?
Limit tests are purity-testing methods used to detect (more or less frequently occurring) impurities that can only be tolerated in small quantities. In most cases, the substance to be tested is compared with a reference that contains the impurity of interest at a known concentration (comparison test). This allows a semi-quantitative statement to be made about the test substance.
Why are such limit tests important?
The impurities that can be examined using limit tests are very diverse. They range from endotoxins, hepatitis C virus RNA and various (metal) ions to nitrosamines and arsenic, to name just a few examples. Just as diverse are the routes by which these substances can end up in the drug: through certain reagents used during (chemical) synthesis in the manufacturing process, through contaminated water, through uptake from the soil (e.g. in the case of plant-based active ingredients), through bacterial or viral contamination, through release from unsuitable piping, and so on. Depending on the substance class, such impurities may be harmless per se but reduce the therapeutic effect of the drug, or, in excessive concentrations, they can damage health or be carcinogenic or toxic. Their content must therefore be limited. This is why limit values are specified in the monographs of the pharmacopoeias or by the ICH Q3 guidelines.
How are limit tests performed in practice?
If we want to examine an active pharmaceutical ingredient that is covered by a compendial monograph, we first check in the monograph which limit test(s) are to be used, prepare the corresponding test solution and then consult the general methods of the pharmacopoeia to see how the test is to be carried out. The reference solution, which contains the impurity at its limit value, is also described there. Depending on the complexity (and how recent the method is), spectroscopic or chromatographic (TLC, GC, LC, etc.) methods may be used for the analysis, but in the simplest case the evaluation is purely visual. Depending on the chemical reaction, this can result in turbidity (such as the precipitation of chloride with silver nitrate) or a color reaction. Both the sample and the reference solution are treated in the same way, and the degree of turbidity or the color intensity of the test solution is then compared with that of the reference solution.
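The decision rule behind such a comparison test is simple enough to sketch in a few lines. Here is a minimal illustration in Python, assuming we already have numeric turbidity (or color-intensity) readings for both solutions; the function name and the example values are illustrative assumptions, not part of any pharmacopoeial method:

```python
def limit_test(sample_reading: float, reference_reading: float) -> str:
    """Semi-quantitative comparison test: the reference solution contains
    the impurity at exactly its limit value, so the sample passes if its
    turbidity / color intensity does not exceed that of the reference."""
    if sample_reading <= reference_reading:
        return "pass"  # impurity at or below the limit value
    return "fail"      # impurity apparently exceeds the limit value

# Example: a turbidity comparison with made-up instrument readings
print(limit_test(sample_reading=8.2, reference_reading=11.5))   # pass
print(limit_test(sample_reading=14.0, reference_reading=11.5))  # fail
```

Note that the result is deliberately binary: the test says nothing about the actual impurity concentration, only whether it is at most that of the reference.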
And what about the validation of limit tests?
The requirements here are quite plausible: the method must be specific, so that only the impurity of interest is detected within the drug mixture and no false-positive results are obtained when other, structurally similar substances are present. It must also be ensured that the impurity of interest is actually detected and that we do not obtain any false-negative results. The requirement to determine the limit of detection points in the same direction: since these are qualitative / semi-quantitative analyses, the detection limit ensures that our statements about the limit value are correct.
I would rather reserve the term "method validation" for methods developed in-house (i.e. those not listed in the pharmacopoeias). For the limit tests listed in the pharmacopoeias, I would prefer the term "verification". As far as the extent of the verification is concerned, the individual test parameters should be selected according to the complexity of the method. Determinations as simple as sulphated ash may only need a quick verification, while for others, despite their inclusion in the pharmacopoeia and detailed previous round-robin tests, examining further validation parameters such as accuracy and robustness can be useful. And while we are talking about robustness: from a practical point of view, equal treatment of the test and reference solutions during the test is decisive, which is easily explained using a precipitation reaction as an example. A precipitation reaction depends on various factors, including, of course, its duration. If we allow the test and reference solutions to react for different lengths of time, comparability is no longer given. In the event of a positive result for our test solution, we could not know whether we obtained it because the test solution really contained such a high amount of the impurity or because the incubation time of the test solution was unfortunately longer than that of the reference solution...
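The incubation-time argument can be made tangible with a toy model. The sketch below is not real precipitation kinetics; the first-order rate constant, the concentrations and the reading times are all made up purely to show how unequal treatment produces a false positive:

```python
import math

def turbidity(conc: float, minutes: float, k: float = 0.3) -> float:
    """Toy first-order model: turbidity approaches a plateau proportional
    to the impurity concentration as the precipitate forms over time."""
    return conc * (1.0 - math.exp(-k * minutes))

# Reference at the limit value, sample actually below the limit...
ref  = turbidity(conc=10.0, minutes=2.0)   # reference, read after 2 min
ok   = turbidity(conc=8.0,  minutes=2.0)   # sample, read after 2 min
late = turbidity(conc=8.0,  minutes=10.0)  # same sample, read after 10 min

print(ok < ref)    # True: with equal incubation the compliant sample passes
print(late > ref)  # True: read too late, the same sample appears to fail
```

The point is exactly the one made above: with unequal incubation times, a turbidity exceeding the reference no longer tells us whether the impurity content is really too high.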
But… (to what extent) do semi-quantitative limit tests still play a role nowadays?
That's a good question. Some limit tests have certainly proven themselves through their simplicity and reliability, depending on the criticality of the drug, and will probably remain valid. But a look at the trends of recent years, e.g. the USP chapters <232> and <233> that entered into force on January 1, 2018 and replaced the more than 100-year-old colorimetric limit test for heavy metals (the former chapter <231>), clearly shows that changes are absolutely necessary and that some of these semi-quantitative limit tests are simply outdated nowadays.