Hand x-rays are used for tasks such as detecting fractures and investigating joint pain. The choice of the x-ray view plays a crucial role in a medical expert’s ability to make an accurate diagnosis. This is particularly important for the hand, where the small and overlapping bones of the carpals can make diagnosis challenging, even with proper positioning. In this study, we develop a prototype that uses deep learning models, iterative methods, and a depth sensor to estimate hand and x-ray machine parameters. These parameters are then used to generate feedback that helps ensure proper radiographic hand positioning. The method of this study consists of five steps: detector plane parameter estimation, 2D hand joint landmark prediction, hand joint landmark depth estimation, radiographic positioning parameter extraction, and radiographic protocol constraint verification. Detector plane parameter estimation is achieved by fitting a plane to randomly queried depth points using RANSAC. Google’s MediaPipe HandPose model is used for 2D hand joint landmark prediction, and hand joint depth estimation is determined using the OAK-D Pro sensor. Finally, hand positioning parameters are extracted and evaluated for the selected radiographic viewing protocol. We focus on three commonly used hand positioning protocols: posteroanterior, oblique, and lateral view. The prototype also has a user interface and a feedback system designed for practical use in the x-ray room. Two evaluations are undertaken to validate our prototype. First, a radiology technician rates the tool’s positioning feedback. Second, using a bespoke left-hand x-ray phantom and an x-ray machine, we generate images with and without the prototype guidance for a double-blind study in which the images are rated by a radiologist.
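The detector plane estimation step described above can be illustrated with a minimal RANSAC sketch. This is not the authors' implementation; the function name, iteration count, and inlier threshold are illustrative assumptions, and the depth points are synthetic:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.01, rng=None):
    """Fit a plane to 3D points with RANSAC.

    Returns (normal, d) such that points on the plane satisfy n . x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_inliers = 0
    best_model = None
    for _ in range(n_iters):
        # Sample three points and derive the candidate plane through them.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(p0)
        # Count points within the inlier distance threshold.
        dist = np.abs(points @ normal + d)
        inliers = int((dist < inlier_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

# Synthetic "randomly queried depth points": a flat table at z = 0.5 m
# plus a few off-plane outliers (e.g. a hand above the detector).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
pts = np.column_stack([xy, np.full(200, 0.5)])
pts[:10, 2] += rng.uniform(0.2, 0.5, size=10)  # outliers
normal, d = fit_plane_ransac(pts, rng=1)
```

With the outliers rejected, the recovered normal is (close to) the z-axis and the plane offset matches the table height, which is what makes RANSAC a natural fit for depth data contaminated by the patient's hand.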
Frauke Wilm, Michaela Benz, Volker Bruns, Serop Baghdadlian, Jakob Dexl, David Hartmann, Petr Kuritcyn, Martin Weidenfeller, Thomas Wittenberg, Susanne Merkel, Arndt Hartmann, Markus Eckstein, Carol Immanuel Geppert
Purpose: Automatic outlining of different tissue types in digitized histological specimens provides a basis for follow-up analyses and can potentially guide subsequent medical decisions. The immense size of whole-slide images (WSIs), however, poses a challenge in terms of computation time. In this regard, the analysis of nonoverlapping patches outperforms pixelwise segmentation approaches but still leaves room for optimization. Furthermore, the division into patches, regardless of the biological structures they contain, is a drawback due to the loss of local dependencies.
Approach: We propose to subdivide the WSI into coherent regions prior to classification by grouping visually similar adjacent pixels into superpixels. Afterward, only a random subset of patches per superpixel is classified and patch labels are combined into a superpixel label. We propose a metric for identifying superpixels with an uncertain classification and evaluate two medical applications, namely tumor area and invasive margin estimation and tumor composition analysis.
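The approach above (classify a random subset of patches per superpixel, combine patch labels into a superpixel label, and reject uncertain superpixels) can be sketched as follows. This is a minimal illustration, not the authors' code: the patch classifier is a placeholder callable, and the sample size and rejection threshold are assumed values rather than the metric proposed in the paper:

```python
import numpy as np
from collections import Counter

def classify_superpixels(superpixel_patches, classify_patch,
                         n_samples=8, reject_thresh=0.75, rng=None):
    """Assign one label per superpixel by majority vote over a random
    subset of its patches; low-agreement superpixels are rejected."""
    rng = np.random.default_rng(rng)
    labels = {}
    for sp_id, patches in superpixel_patches.items():
        k = min(n_samples, len(patches))
        subset = rng.choice(len(patches), size=k, replace=False)
        votes = Counter(classify_patch(patches[i]) for i in subset)
        label, count = votes.most_common(1)[0]
        agreement = count / k  # share of sampled patches agreeing
        labels[sp_id] = label if agreement >= reject_thresh else "reject"
    return labels

# Toy example: "patches" are pre-assigned class strings, so the
# classifier is the identity function.
patches_by_sp = {
    0: ["tumor"] * 10,                  # homogeneous superpixel
    1: ["tumor"] * 5 + ["stroma"] * 5,  # ambiguous superpixel
}
result = classify_superpixels(patches_by_sp, lambda p: p, rng=0)
```

Sampling only a subset of patches per superpixel is what yields the speed-up: homogeneous regions need few patch classifications, while the rejection label flags regions where the sampled votes disagree.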
Results: The algorithm has been developed on 159 hand-annotated WSIs of colon resections and its performance is compared with an analysis without prior segmentation. The algorithm shows an average speed-up of 41% and an increase in accuracy from 93.8% to 95.7%. By assigning a rejection label to uncertain superpixels, we further increase the accuracy by 0.4%. While tumor area estimation shows high concordance with the annotated area, the analysis of tumor composition highlights limitations of our approach.
Conclusion: By combining superpixel segmentation and patch classification, we designed a fast and accurate framework for whole-slide cartography that is AI-model agnostic and provides the basis for various medical endpoints.