A Quantitative Assessment Of Large Scale Data Processing For LC/UV And MS Based Compound QC

Pittcon 2010, Orlando, FL, USA, 23rd – 27th January 2010, Mark Bayliss, Joseph Simpkins

In our experience, a significant number of situations force us to manually QC a much greater percentage of our LC/MS, UV, and ELSD compound QC results than should really be necessary, often amounting to 100% QC. The main reasons can be summarized as: (1) target(s) found (green), but the purity or concentration of the sample is too low to be of practical use; (2) targets found but eluting in a region of significant impurity levels, and therefore more challenging for auto-purification; (3) targets eluting within the solvent front or at the end of the chromatographic run, typically with poor integration; (4) targets poorly classified as found, maybe, or not found owing to challenges in signal processing, baselining, peak integration, MS peak classification, poor assignment of adducts, and so on. The major issue, of course, was that we did not really know how prevalent these issues were or to what extent they were causing us to over-QC results. To better understand these effects, we undertook a relatively large-scale review of our results to determine where most of the problem situations occur and to remedy as many as possible. We were also looking to increase the trust we have in our processing and to be able to trap those situations where an analyst needs to make an informed decision and communicate it effectively. This presentation summarizes some of our findings and how we have attempted to solve these issues.
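To illustrate the kind of rule-based triage described above, the following minimal Python sketch flags results for manual review based on target classification, purity, elution position, and co-eluting impurities. This is not the processing logic used in the work presented here; all field names and threshold values are hypothetical assumptions chosen for illustration.

```python
from dataclasses import dataclass

# Hypothetical record for one compound QC result; field names are
# illustrative and do not correspond to any specific vendor software.
@dataclass
class QCResult:
    target_found: str           # "found", "maybe", or "not_found"
    purity_pct: float           # UV purity of the target peak (%)
    target_rt: float            # retention time of the target peak (min)
    run_length: float           # total chromatographic run time (min)
    nearby_impurity_pct: float  # largest impurity co-eluting near the target (%)

def needs_manual_qc(r: QCResult,
                    min_purity: float = 85.0,      # assumed purity cut-off
                    solvent_front: float = 0.5,    # assumed solvent-front window (min)
                    end_margin: float = 0.3,       # assumed end-of-run margin (min)
                    impurity_limit: float = 10.0   # assumed co-eluting impurity limit
                    ) -> bool:
    """Return True if the result should be routed to an analyst for review."""
    if r.target_found != "found":
        return True   # "maybe" / "not found" classifications are always reviewed
    if r.purity_pct < min_purity:
        return True   # target present but too impure or dilute to be of practical use
    if r.target_rt <= solvent_front or r.target_rt >= r.run_length - end_margin:
        return True   # elutes in the solvent front or at end of run: integration suspect
    if r.nearby_impurity_pct >= impurity_limit:
        return True   # significant co-eluting impurity: harder to auto-purify
    return False
```

Under these assumed thresholds, for example, a result classified as found but with 78% purity or eluting at 0.4 min would still be routed to an analyst rather than passed automatically.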
