Many analytes absorb UV light, typically in the 200–300 nm range. The UV detector is therefore popular, since it can detect compounds containing conjugated or isolated double bonds, i.e. compounds with free or conjugated electron pairs. Some 'UV' detectors offer both UV and visible light sources; the principle is the same regardless of the wavelength of light used:
(Figure: UV absorption detector)
A light beam with a known wavelength (λ) and intensity (I) is transmitted through a sample cell. The intensity of the transmitted light (Is) is compared to the intensity transmitted by an empty reference cell (Ir).
As long as the two intensities are the same, no signal is generated. If an absorbing solute is present in the sample cell, the sample intensity and the reference intensity are no longer equal, and a signal is generated. The ratio of the measured intensities, Is/Ir, is called the transmittance (T). The logarithm of 1/T is called the absorbance (A = −log T) and is linearly proportional to the concentration of the solute in the eluent.
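The relationship between the measured intensities and the absorbance can be sketched in a few lines of Python (the function name and example intensities are illustrative, not part of any detector's actual firmware):

```python
import math

def absorbance(i_sample, i_reference):
    """Absorbance from transmitted intensities: T = Is/Ir, A = -log10(T)."""
    t = i_sample / i_reference  # transmittance
    return -math.log10(t)

# Equal intensities: no absorbing solute, so A = 0 (no signal).
print(absorbance(100.0, 100.0))  # 0.0

# Only half the light is transmitted through the sample cell:
print(absorbance(50.0, 100.0))   # ~0.301
```

Note that absorbance is logarithmic: each additional absorbance unit means a tenfold drop in transmitted light.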
According to Beer's Law, the absorbance (A) depends on the concentration of the compound (c), the pathlength of the cell (b), and on the molar extinction coefficient (ε) of the analyte:
A = εbc (Beer's law)
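Since ε and b are constant for a given analyte and flow cell, Beer's law can be rearranged to obtain the concentration from a measured absorbance. A minimal sketch (the function name and the numerical values are illustrative assumptions, not data from the text):

```python
def concentration(absorbance, epsilon, pathlength_cm=1.0):
    """Solve Beer's law A = eps * b * c for c (mol/L).

    epsilon: molar extinction coefficient in L/(mol*cm)
    pathlength_cm: cell pathlength b in cm
    """
    return absorbance / (epsilon * pathlength_cm)

# Hypothetical analyte with eps = 10,000 L/(mol*cm) in a 1 cm cell,
# measured absorbance of 0.5:
c = concentration(0.5, 10_000.0, 1.0)
print(c)  # 5e-05 mol/L
```

This linear dependence of A on c is what makes the UV detector useful for quantitation, provided the wavelength and pathlength stay fixed, as noted below.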
Obviously, the wavelength of the light and the pathlength of the cell must be kept constant during the measurement in order to produce meaningful data.
Most compounds show their best absorbance in the UV region (200–300 nm). When choosing the eluent, it is important to make sure that the eluent itself does not absorb light at the selected detection wavelength; in other words, the absorption must be caused mainly or exclusively by the sample components. Otherwise the eluent contributes a high background absorbance, leaving less of the signal range for the analyte and resulting in lower sensitivity.