Traditional cosmic ray filtering algorithms used in X-ray imaging detectors aboard space telescopes perform event reconstruction based on the properties of activated pixels above a certain energy threshold, within 3×3 or 5×5 pixel sliding windows. This approach can reject up to 98% of the cosmic ray background. However, the remaining unrejected background constitutes a significant impediment to studies of low surface brightness objects, which are especially prevalent in the high-redshift universe. The main limitation of the traditional filtering algorithms is that they ignore the long-range contextual information present in image frames. This becomes particularly problematic when analyzing signals created by secondary particles produced during interactions of cosmic rays with the body of the detector. Such signals may look identical to the energy depositions left by X-ray photons when only the properties within the small sliding window are considered. Additional information is present, however, in the spatial and energy correlations between signals in different parts of the same frame, which can be accessed by modern machine learning (ML) techniques. In this work, we continue the development of an ML-based pipeline for cosmic ray background mitigation. Our latest method consists of two stages: first, a frame classification neural network is used to create class activation maps (CAMs) that localize all events within the frame; second, after event reconstruction, a random forest classifier operating on features derived from the CAMs is used to separate X-ray events from cosmic ray events. The method delivers a > 40% relative improvement over traditional filtering in background rejection in the standard 0.3–10 keV energy range, at the expense of only a small (< 2%) loss of X-ray signal. Our method also provides a convenient way to tune the cosmic ray rejection threshold to adapt to a user's specific scientific needs.
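As an illustrative sketch of the two-stage idea described above, the Python snippet below pairs a small CNN with global average pooling (so frame-level CAMs can be formed from the last convolutional layer) with a random forest trained on per-event features read off the CAM. The layer sizes, the choice of per-event features, and all names (FrameClassifier, class_activation_map, event_features) are hypothetical assumptions for this sketch, not the architecture or feature set actually used in this work.

```python
# Minimal two-stage sketch: CAM-based event localization + random forest classification.
# All architecture details and feature choices here are illustrative assumptions.
import torch
import torch.nn as nn
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class FrameClassifier(nn.Module):
    """Tiny CNN with global average pooling so CAMs can be built from the last conv layer."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        fmaps = self.features(x)                      # (B, 32, H, W)
        logits = self.fc(self.gap(fmaps).flatten(1))  # frame-level class scores
        return logits, fmaps

def class_activation_map(model, frame, target_class):
    """CAM = sum_k w_k * A_k over the last conv feature maps A_k."""
    with torch.no_grad():
        _, fmaps = model(frame.unsqueeze(0))          # (1, 32, H, W)
        weights = model.fc.weight[target_class]       # (32,)
        cam = torch.einsum("c,chw->hw", weights, fmaps[0])
    return cam.numpy()

# Stage 1: localize events within the frame via the CAM (dummy 64x64 frame here).
model = FrameClassifier()
frame = torch.randn(1, 64, 64)
cam = class_activation_map(model, frame, target_class=1)

# Stage 2: per-event features (assumed here: CAM value at the event centroid,
# mean CAM in a 5x5 neighbourhood, total event energy) fed to a random forest.
def event_features(cam, y, x, energy, half=2):
    patch = cam[max(y - half, 0):y + half + 1, max(x - half, 0):x + half + 1]
    return [cam[y, x], patch.mean(), energy]

rng = np.random.default_rng(0)
X = np.array([event_features(cam, *rng.integers(2, 62, 2), rng.random()) for _ in range(200)])
y = rng.integers(0, 2, 200)                           # 0 = cosmic ray, 1 = X-ray (dummy labels)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
p_xray = clf.predict_proba(X)[:, 1]                   # rejection threshold can be tuned per science case
```

Thresholding the random forest's class probability, rather than its hard prediction, is one natural way to expose the tunable cosmic ray rejection threshold mentioned above.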