KEYWORDS: Endoscopy, RGB color model, Education and training, Diseases and disorders, Light sources and illumination, Color, Polyps, Image processing, Image analysis, Deep learning
Wireless capsule endoscopy (WCE) offers a minimally invasive approach to inspecting the gastrointestinal (GI) tract and is crucial for diagnosing conditions such as malnutrition, dehydration, and potential cancers. However, WCE image diagnostics can be compromised by inadequate illumination and by adversarial contrast reduction attacks: intentional efforts to degrade image contrast and mislead automated diagnosis systems. Such degradation can result in misclassifications that negatively impact patient safety. This study examines the effects of contrast degradation on deep learning (DL) models designed for WCE image analysis, emphasizing the adverse impact of substantial, adversarially induced contrast reductions on classification accuracy. To mitigate this vulnerability, we propose a novel texture descriptor: the Color Quaternion Modulus and Phase Patterns (CQ-MPP). This descriptor effectively extracts textural features from WCE images, enabling the identification of potentially cancerous regions even under significantly reduced contrast. The effectiveness of CQ-MPP is evaluated on the Wireless Capsule Endoscopy Curated Colon Disease Dataset. Results show that CQ-MPP maintains good accuracy in detecting cancerous lesions and demonstrates remarkable resilience to adversarial contrast degradation. The method performs reliably even under severe contrast reduction, offering significant potential to improve the safety of GI disease diagnosis via WCE.
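The abstract does not specify how the contrast reduction attack is formulated. As a rough illustration only, the sketch below models it as a linear rescale of pixel values toward the per-channel mean, with a severity parameter controlling how much contrast is removed; the function name and parameterization are assumptions for illustration, not the paper's method.

```python
import numpy as np

def contrast_reduction_attack(image: np.ndarray, severity: float) -> np.ndarray:
    """Compress pixel values toward the per-channel mean to reduce contrast.

    severity=0.0 leaves the image unchanged; severity=1.0 collapses every
    pixel to the channel mean, removing all contrast.
    (Illustrative model only; not the attack formulation from the paper.)
    """
    img = image.astype(np.float32)
    mean = img.mean(axis=(0, 1), keepdims=True)   # per-channel mean
    attacked = mean + (1.0 - severity) * (img - mean)
    return np.clip(attacked, 0, 255).astype(image.dtype)

# Example: degrade an RGB frame to 20% of its original contrast.
frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
degraded = contrast_reduction_attack(frame, severity=0.8)
```

Under this simple model, a severity of 0.8 already washes out most local intensity differences, which is the regime in which the abstract reports DL classifiers failing while texture descriptors such as CQ-MPP remain usable.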
Skin cancer is the most common type of cancer in the United States, with 9,500 new cases diagnosed daily. Although it is among the deadliest forms, early detection and treatment can lead to recovery. Modern medical systems increasingly employ deep learning (DL) vision models as assistive secondary diagnostic tools, a trend driven by the superior performance of convolutional neural networks (CNNs) across a wide range of medical applications. However, recent work has revealed that adding small, faint noise to images can cause these models to make classification errors. Such adversarial attacks can undermine defense measures and hamper the operation of DL models in real-world settings. The objective of this paper is to explore the effects of image degradation on popular off-the-shelf DL vision models. First, we evaluate the effects of adversarial attacks on image classification accuracy, sensitivity, and specificity, and we introduce pepper noise as an adversarial attack that extends the one-pixel attack on deep learning models. Second, we propose a novel texture descriptor, Ordered Statistics Local Binary Patterns (OS-LBP), for recognizing potential skin cancer areas. Third, we demonstrate how OS-LBP mitigates some of the effects of the image degradation caused by adversarial attacks.
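The abstract describes pepper noise as an extension of the one-pixel attack. The canonical one-pixel attack searches for the most damaging pixel (typically via differential evolution); the minimal sketch below instead places the pepper pixels at random, purely to illustrate the perturbation model. The function name, pixel-count parameter, and random placement are assumptions, not the paper's attack.

```python
import numpy as np

def pepper_noise_attack(image: np.ndarray, num_pixels: int,
                        seed: int | None = None) -> np.ndarray:
    """Set `num_pixels` randomly chosen pixels to black (pepper noise).

    With num_pixels=1 this reduces to a randomly placed one-pixel
    perturbation; larger counts generalize it to multi-pixel pepper noise.
    (Illustrative sketch; the paper's attack may choose pixel locations
    adversarially rather than at random.)
    """
    rng = np.random.default_rng(seed)
    attacked = image.copy()
    h, w = attacked.shape[:2]
    ys = rng.integers(0, h, size=num_pixels)
    xs = rng.integers(0, w, size=num_pixels)
    attacked[ys, xs] = 0   # zero broadcasts across color channels, if any
    return attacked

# Example: corrupt 50 pixels of a dermoscopy-sized RGB image.
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
noisy = pepper_noise_attack(img, num_pixels=50, seed=0)
```

Because pepper noise replaces isolated pixels with extreme values, rank-order (ordered-statistics) operations can suppress it, which is consistent with the abstract's motivation for an ordered-statistics variant of LBP.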