Stroke is a devastating, life-threatening medical condition that demands immediate intervention. Timely diagnosis and treatment are paramount to reducing mortality and mitigating the long-term disabilities associated with stroke. This research addresses these critical needs by proposing a real-time stroke detection system based on Deep Learning (DL) combined with Federated Learning (FL), which offers both improved accuracy and privacy preservation. The purpose of this research is to develop an efficient and accurate model capable of distinguishing between stroke and non-stroke cases in real time, assisting healthcare professionals in making rapid, informed decisions. Stroke detection has traditionally relied on manual interpretation of medical images, which is time-consuming and prone to human error. DL techniques have shown significant promise in automating this process, but the need for large, diverse datasets and the accompanying privacy concerns remain challenging. To achieve this goal, our methodology trains the DL model on extensive datasets containing both stroke and non-stroke medical images, enabling the model to learn the complex patterns and features associated with stroke and thereby improving its diagnostic accuracy. Furthermore, we employ Federated Learning, a decentralized training approach, to enhance privacy while maintaining model performance: the model learns from data distributed across multiple healthcare institutions without sharing sensitive patient information. The proposed approach has been executed on NVIDIA platforms, taking advantage of their advanced GPU capabilities to enable real-time processing and analysis. This optimized model has the potential to revolutionize stroke diagnosis and patient care, ultimately saving lives and improving the quality of healthcare services in neurology.
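The decentralized training described above can be pictured as federated averaging: each institution trains on its own data, and only model parameters, weighted by local sample counts, are aggregated by a coordinating server. The following is a minimal NumPy sketch of that aggregation idea, not the paper's implementation; the layer name, client sizes, and weight values are hypothetical.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: weighted average of client parameters.

    client_weights: list of dicts mapping layer name -> np.ndarray
    client_sizes: number of local training samples held by each client
    """
    total = sum(client_sizes)
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = sum(
            w[name] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
    return aggregated

# Two hypothetical hospitals contribute locally trained weights;
# no raw patient images ever leave either site.
w_a = {"conv1": np.full((2, 2), 1.0)}
w_b = {"conv1": np.full((2, 2), 4.0)}
global_w = federated_average([w_a, w_b], client_sizes=[100, 300])
print(global_w["conv1"][0, 0])  # 1.0*0.25 + 4.0*0.75 = 3.25
```

The weighting by sample count means institutions with more data pull the global model further toward their local optimum, which is the standard FedAvg behavior.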
COVID-19 is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It was first identified in December 2019 in Wuhan, China, and the ensuing pandemic has caused a large number of infections and deaths around the world. The coronavirus spreads mainly through airborne droplets produced when an infected person sneezes, coughs, or talks. Pretrained DL models rely on large CNN layers, which require more storage on IoT-embedded devices and hinder real-time detection. This research presents an integrated lightweight DL approach for real-time, multi-task video measurement (social distancing, mask detection, and facial temperature) to control the spread of coronavirus among individuals. All three tasks use a recent YOLO detector, YOLOv7-tiny: an object detection model derived from the original YOLOv7 with a simplified neural network architecture. The trained models have been evaluated in terms of mean average precision, recall, and precision to assess algorithm performance. The proposed approach has been deployed and executed on NVIDIA devices (Jetson Nano, Jetson Xavier AGX) equipped with visible and thermal cameras. The visible camera is used for face mask detection, while the thermal camera is used for facial temperature measurement and social distancing. This research enriches COVID-19 prevention systems through this integrated approach compared to state-of-the-art methodologies, and we obtained promising results for real-time detection. The proposed approach is suitable for a surveillance system that monitors social distancing, detects face masks, and measures facial temperature among individuals.
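Once the detector returns person bounding boxes, a social-distancing check can be as simple as comparing the distances between box centroids against a threshold. The sketch below assumes pixel-space boxes and an illustrative `min_dist_px` threshold; a deployed system would calibrate pixel distance to physical distance using the camera geometry.

```python
import itertools
import math

def distancing_violations(boxes, min_dist_px=150):
    """Flag pairs of detected persons whose bounding-box centroids are
    closer than min_dist_px. Each box is (x1, y1, x2, y2) in pixels."""
    centroids = [((x1 + x2) / 2, (y1 + y2) / 2)
                 for x1, y1, x2, y2 in boxes]
    violations = []
    for (i, a), (j, b) in itertools.combinations(enumerate(centroids), 2):
        if math.dist(a, b) < min_dist_px:
            violations.append((i, j))
    return violations

# Three hypothetical person detections; persons 0 and 1 stand too close.
boxes = [(0, 0, 50, 100), (60, 0, 110, 100), (400, 0, 450, 100)]
print(distancing_violations(boxes))  # [(0, 1)]
```

The pairwise check is O(n²) in the number of detections, which is acceptable for the handful of people typically visible in one surveillance frame.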
The new coronavirus disease (COVID-19) has overwhelmed public health systems around the world. The numbers of infected people and deaths are escalating day by day, which puts enormous pressure on healthcare systems. COVID-19 symptoms include fatigue, cough, and fever. These symptoms also occur in other pneumonias, which complicates identifying COVID-19, especially throughout the influenza season. The rise of the COVID-19 pandemic has made it essential to improve medical image screening for this pneumonia. Rapid identification is a necessary step to stop the spread of this virus and plays a vital role in early detection. With this as a motivator, we applied deep learning techniques to diagnose the coronavirus from chest X-ray images and to implement a robust AI application that classifies COVID-19 pneumonia from non-COVID-19 cases in these images. This paper proposes several deep learning algorithms, including both classification and segmentation methods. Taking advantage of convolutional neural network models, we exploited pre-trained architectures (ResNet50, ResNet101, VGG-19, and U-Net) to extract features from chest X-ray images. Four datasets of chest X-ray images have been employed to assess the performance of the proposed methods; each has been split into 80% for training and 20% for validation of the architectures. The experimental results showed an overall accuracy of 99.42% for the classification approach and 93% for the segmentation approach. The proposed approaches can help radiologists and medical specialists identify infected regions of the respiratory system in the early stages.
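The 80%/20% training/validation protocol can be sketched as a seeded shuffle-and-split over the image arrays. This is an illustrative NumPy helper, not the paper's code; the array shapes, seed, and function name are placeholders.

```python
import numpy as np

def train_val_split(images, labels, val_fraction=0.2, seed=0):
    """Shuffle and split a labeled image dataset 80/20 reproducibly."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n_val = int(len(images) * val_fraction)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return (images[train_idx], labels[train_idx],
            images[val_idx], labels[val_idx])

# 100 small placeholder "X-ray" arrays standing in for a real dataset.
X = np.zeros((100, 64, 64, 3), dtype=np.float32)
y = np.zeros(100, dtype=np.int64)
Xtr, ytr, Xva, yva = train_val_split(X, y)
print(len(Xtr), len(Xva))  # 80 20
```

Fixing the random seed keeps the split reproducible across runs, which matters when comparing several pre-trained backbones on the same validation set.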
Fingerprinting is a form of biometrics, a science that can be used for personal identification. It is one of the most important techniques and security measures for human authentication across the globe due to its uniqueness and individualistic characteristics. Fingerprints are made up of an arrangement of ridges, called friction ridges; each ridge contains pores that are attached to glands under the skin. Several algorithms have proposed different approaches to recreate fingerprint images, but these works encountered poor image quality and the presence of structured noise. In this paper, we present a novel fingerprint system that provides more unique and robust algorithms capable of distinguishing between individuals effectively. A sparse autoencoder (SAE), an unsupervised deep learning model that replicates its input at its output, is used to reconstruct fingerprint images. The architecture is designed and trained on datasets of fingerprint images that are pre-processed to fit the model. Three datasets of fingerprint images have been utilized to validate the robustness of the model; each has been split into 70% for training and 30% for testing. The SAE is fine-tuned and optimized with L2 and sparsity regularization, which increases the efficiency of the learned representation. The sparse autoencoder is a suitable deep learning model for significantly improving the recreation of fingerprint images. The proposed approach showed promising results and can enhance the quality of reproduced fingerprint images, yielding a clear ridge structure and eliminating various overlapping patterns.
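The L2 and sparsity regularization used to fine-tune the SAE typically combine a weight-decay term with a KL-divergence penalty that pushes the mean hidden activation of each unit toward a small target sparsity ρ. The NumPy sketch below illustrates that composite loss; the hyperparameters (λ, β, ρ) and sigmoid hidden activations are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def sparse_ae_loss(x, x_hat, hidden, weights,
                   l2=1e-4, beta=3.0, rho=0.05):
    """Reconstruction MSE + L2 weight decay + KL sparsity penalty.

    hidden: hidden-layer activations in (0, 1), shape (batch, units).
    """
    mse = np.mean((x - x_hat) ** 2)
    weight_decay = l2 * sum(np.sum(w ** 2) for w in weights)
    # Mean activation per hidden unit, clipped away from 0 and 1 so the
    # KL divergence below stays finite.
    rho_hat = np.clip(hidden.mean(axis=0), 1e-8, 1 - 1e-8)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return mse + weight_decay + beta * kl

rng = np.random.default_rng(0)
x = rng.random((8, 16))        # batch of flattened fingerprint patches
hidden = rng.random((8, 4))    # stand-in for sigmoid hidden activations
w = [rng.standard_normal((16, 4))]
loss = sparse_ae_loss(x, x, hidden, w)  # perfect reconstruction: MSE = 0
print(loss > 0)  # True: the two penalties remain positive
```

Driving mean activations toward a small ρ forces each hidden unit to respond to only a few input patterns, which is what lets the SAE learn ridge-like features rather than memorizing whole images.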