Open Access | 31 January 2024

Resolution-enhanced multi-core fiber imaging learned on a digital twin for cancer diagnosis
Abstract

Significance

Deep learning enables label-free all-optical biopsies and automated tissue classification. Endoscopic systems provide intraoperative diagnostics in deep tissue and speed up treatment without harmful tissue removal. However, conventional multi-core fiber (MCF) endoscopes suffer from low resolution and artifacts, which hinder tumor diagnostics.

Aim

We introduce a method to enable unpixelated, high-resolution tumor imaging through a given MCF with a diameter of around 0.65 mm, an arbitrary core arrangement, and inhomogeneous transmissivity.

Approach

Image reconstruction is based on deep learning and a digital twin concept: a single-reference-based simulation incorporating the inhomogeneous optical properties of the MCF, followed by transfer learning on a small experimental dataset of biological tissue. The reference provided physical information about the MCF during the training processes.

Results

For the simulated data, hallucination caused by the MCF inhomogeneity was eliminated, and the average peak signal-to-noise ratio and structural similarity were increased from 11.2 dB and 0.20 to 23.4 dB and 0.74, respectively. By transfer learning, the metrics of independent test images experimentally acquired on glioblastoma tissue ex vivo reached up to 31.6 dB and 0.97 at a computing speed of 14 fps.

Conclusions

With the proposed approach, a single reference image was required in the pre-training stage and laborious acquisition of training data was bypassed. Validation on glioblastoma cryosections with transfer learning on only 50 image pairs showed the capability for high-resolution deep tissue retrieval and high clinical feasibility.

1. Introduction

Minimally invasive imaging is important to optogenetics1–3 and cancer diagnostics4–6 since it minimizes the damage to living tissues. Conventional brain cancer diagnosis requires surgical biopsy and resection, histological staining, and observation. The procedure is time-consuming, leading to treatment delay, and offers no visual feedback during surgery, which brings additional risk and complications.7–9 Label-free imaging techniques like autofluorescence4–6,10–12 and Raman spectroscopy12–16 enable locating target tissue in situ for in vivo tumor diagnosis,17–19 where high spatial resolution plays a critical role. Multi-core fibers (MCFs) are often used in endoscopy since they are flexible and ultra-thin (diameter < 1 mm) and provide an efficient way to illuminate and detect in real time,20–24 allowing minimally invasive access directly to deep tissue for intraoperative imaging. However, the fiber structure leads to honeycomb artifacts, which limit the spatial resolution to the core-to-core spacing. Many approaches have been proposed to enhance the resolution of fiber endoscopy, including physical methods,25–28 computational methods,29–33 and deep neural networks (DNNs).34–40 DNNs are advantageous because of their real-time capability and because no sophisticated optical systems are required.41

Convolutional neural networks (CNNs) have greatly promoted the development of image-based medical diagnosis over the last decade, e.g., surgical navigation42 and cancer recognition.43 Given large amounts of training data, CNNs can learn to extract, summarize, and reconstruct histomorphological features of tissue images using convolutional operations. In previous work, we proposed a near-video-rate resolution enhancement method for MCF imaging, which enables all-optical biopsies with minimal invasiveness.34 The learning-based approach inverts the image transmission properties of a given MCF-based endoscope. In reality, however, MCFs differ in core arrangement and transmissivity since glass fibers are not perfectly manufactured, leading to random and inhomogeneous optical properties. As a result, DNN-based reconstruction requires experimental acquisition of an MCF-specific dataset, which is laborious and not easily transferable to clinics. Kim et al.38 proposed a reconstruction method for MCFs with random core arrangement, but the distortion resulting from inhomogeneous transmissivity and the limited clinical data for training remain unsolved.

Here, we present a streamlined process via a digital twin for MCF image retrieval with very few measurements of biological samples, as demonstrated in Fig. 1. In the pre-training stage, a single reference image of the MCF was captured under incoherent widefield illumination, which offers physics priors of the core arrangement and transmission for the data simulation. The reconstruction network was then pre-trained to remove honeycomb artifacts and enhance the image resolution. Subsequently, transfer learning was performed on 50 measured image pairs of brain tumor cryosections. On this basis, we demonstrate high-resolution MCF image retrieval from limited medical data, which is transferable to clinical practice and can significantly improve image-based tumor classification,34 for instance.

Fig. 1

Digital twin concept for high-resolution image retrieval through a randomly selected MCF. (a) Single-reference-based endoscopy simulation and pre-training of U-Net + EDSR network. The MCF-specific reference provided physics priors of inhomogeneous optical properties of the MCF. (b) Endoscopy of biological samples in real-world contexts. Based on the pre-trained network, 50 autofluorescence image pairs of glioblastoma tissue through the same MCF as in (a) were collected for transfer learning.


2. Methods

2.1. Reconstruction Network

A cascaded network consisting of a U-Net44 for depixelation and an enhanced deep super-resolution network (EDSR)45 for super-resolution was used. In previous work, this architecture was shown to enhance image resolution and benefit tumor classification. To make use of the physics priors of the MCF, an extra input channel was added to the network for transmissivity correction of the MCF.
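The extra input channel can be illustrated with a minimal sketch: the raw MCF measurement and the reference image are stacked as two channels (channels-first) before being fed to the network, so the network can learn a per-core transmissivity correction. This is a NumPy-only stand-in, not the actual U-Net + EDSR interface; the function name and the per-channel normalization are our assumptions.

```python
import numpy as np

def make_network_input(mcf_image, reference):
    """Stack an MCF measurement with the MCF reference image as a second
    input channel. Shapes: two (H, W) arrays -> one (2, H, W) array.

    Illustrative sketch only: the normalization scheme is an assumption,
    not taken from the paper.
    """
    assert mcf_image.shape == reference.shape

    def norm(x):
        # Min-max normalize so both channels share the same intensity scale.
        x = x.astype(np.float64)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    return np.stack([norm(mcf_image), norm(reference)], axis=0)

# Toy usage with synthetic data
net_input = make_network_input(np.arange(16.0).reshape(4, 4), np.ones((4, 4)))
```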

2.2. Simulated Dataset

The MCF dataset was simulated with the detected core arrangement and transmission of a randomly selected MCF (Fujikura FIGH-30-650S) based on a simulator.46 In total, 5,000 images from ImageNet47 were used for training, 100 for validation, and 400 for testing.
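The core idea of such a single-reference-based simulation can be sketched as follows: each fiber core samples the object at its position and re-emits a Gaussian spot weighted by that core's (inhomogeneous) transmissivity. This is a simplified toy model, not the cited simulator; the core positions and transmissivities are synthetic stand-ins for the values detected from the reference image.

```python
import numpy as np

def simulate_mcf_image(obj, core_xy, core_trans, size=128, sigma=1.0):
    """Render a pixelated MCF image of a high-resolution object.

    Each core samples the object at its center and re-emits a Gaussian
    spot scaled by that core's transmissivity. In the paper, `core_xy`
    and `core_trans` would come from the single reference image; here
    they are synthetic.
    """
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for (cx, cy), t in zip(core_xy, core_trans):
        sample = obj[int(round(cy)) % size, int(round(cx)) % size]
        img += t * sample * np.exp(
            -((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma**2)
        )
    return img

# Synthetic example: random core positions with inhomogeneous transmission
rng = np.random.default_rng(0)
obj = rng.random((128, 128))
core_xy = rng.uniform(0, 128, size=(300, 2))
core_trans = rng.uniform(0.5, 1.0, size=300)  # per-core transmissivity
mcf_img = simulate_mcf_image(obj, core_xy, core_trans)
```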

2.3. MCF Measurements

The reference image of the MCF used for the simulation was captured under incoherent widefield illumination; it provides the core arrangement and transmission information for inhomogeneity correction and high-resolution retrieval. For validation, autofluorescence images of cryosections of glioblastoma tissue, prepared with a standard protocol,5 were acquired through the same MCF as the reference. The samples were illuminated and imaged through the MCF using a 473 nm laser and camera CAM1 to emulate an endoscopic system [see Fig. 3(a)]. Autofluorescence was detected between 500 and 550 nm. High-resolution ground truth (GT) data were captured simultaneously with camera CAM2.

3. Results

The U-Net + EDSR model trained on the simulated MCF images of ImageNet was tested on two instances: a paper tissue sample and a resolution test chart. Although these image types had not been seen by the model during training, the test results in Figs. 2(f) and 2(o) demonstrate the good generalizability of the U-Net + EDSR. The reconstruction of a paper tissue image using the reference-based approach is shown in Fig. 2(h). For comparison, we present the results of the no-reference-based approach, namely the U-Net + EDSR with a single input, in Figs. 2(f) and 2(g). When an image through an inhomogeneous MCF was tested with the network trained on a homogeneous MCF dataset, distortion and hallucination appeared [see Fig. 2(g)]. The network did not learn how to correct the transmission inhomogeneity from the training data; consequently, the image quality of the reconstruction degraded significantly. In contrast, the reference-based approach learned priors containing MCF transmission information from the MCF-specific reference, and the average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values of the test images increased from 11.2 dB and 0.20 to 23.4 dB and 0.74, respectively, as shown in Figs. 2(l) and 2(m). The reconstructions of the resolution chart using different methods in Figs. 2(o)–2(q) demonstrate that Group 7, Element 6 can be resolved by the reconstruction network. The cross sections in Fig. 2(r) show the imaging contrast.
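For reference, the PSNR values quoted above follow the standard definition (this is a generic metric implementation, not code from the paper):

```python
import numpy as np

def psnr(gt, rec, data_range=1.0):
    """Peak signal-to-noise ratio in dB between ground truth and reconstruction.

    PSNR = 10 * log10(data_range^2 / MSE); higher is better.
    """
    mse = np.mean((gt.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range**2 / mse)

# A uniform error of 0.1 at unit data range gives MSE = 0.01, i.e. ~20 dB
gt = np.zeros((8, 8))
rec = np.full((8, 8), 0.1)
val = psnr(gt, rec)
```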

Fig. 2

Retrieval of simulated MCF images by the pre-trained U-Net + EDSR network. (a) GT of a paper tissue instance. (b) Residual map of (c) and (d). (c) Simulated MCF image with homogeneous and (d) with inhomogeneous core intensity transmission. (e) Reference image containing the core transmissivity, used as an additional input to the network. (f) and (g) Reconstructions of (c) and (d) by the no-reference-based network. (h) Reference-based reconstruction of (d). (i)–(k) Residual maps of (f)–(h) compared with GT. Although the visual difference between (c) and (d) is slight, (c) had a good reconstruction (f), while (d) resulted in image distortion in (g) by the same network. The distortion in (g) and (p), strongly depending on the inhomogeneous transmissivity, was eliminated by the reference-based approach with (e). (l) and (m) Quantitative image quality evaluation on the test sets in terms of PSNR and SSIM. The labels "c, d, f–h" in (m) correspond to the test sets of (c), (d), (f)–(h). (n)–(q) Simulated MCF image of a resolution test chart and the reconstructions using different approaches. (r) Cross sections of the lines in (n)–(q).


To further verify the retrieval of biological samples, the MCF was subsequently used for imaging cryosections of glioblastoma tissue. We captured the autofluorescence images of glioblastoma using the setup in Fig. 3(a), which combines an MCF endoscope and a widefield fluorescence microscope to capture GT and measurement data simultaneously. In this manner, we improved the image reconstruction quality by transfer learning and validated the proposed digital twin ex vivo for application as an in vivo endoscope without additional optical elements. As demonstrated in Fig. 3(b), the results of the pre-trained network were distorted due to hallucination, and artifacts remained. To eliminate the distortion, we used 50 pairs of captured microscopic and endoscopic glioblastoma images and applied transfer learning to the pre-trained network. Despite the limited data size, transfer learning was still able to further enhance the image quality of glioblastoma tissue, and the PSNR and SSIM values of the independent test images increased up to 31.6 dB and 0.97, respectively, at a near-video rate of 14 frames per second computed on an NVIDIA RTX A6000 GPU. The validation on biological samples shows that the reference-based approach enables retrieval of high-resolution images even from a small experimental dataset, which is easily obtainable in clinics.
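The transfer-learning strategy used here can be sketched in miniature: a "pre-trained" feature extractor is kept frozen while only a lightweight head is fine-tuned on a 50-sample dataset. This toy NumPy example is purely illustrative of the small-data fine-tuning idea; the actual work fine-tunes the U-Net + EDSR on endoscopic/microscopic image pairs, and all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "pre-trained" backbone: a fixed random projection standing in
# for the U-Net + EDSR features (hypothetical stand-in).
W_frozen = rng.normal(size=(64, 16))

def features(x):
    return np.tanh(x @ W_frozen)  # frozen: never updated below

# Small experimental set: 50 input/target pairs, as in the paper.
X = rng.normal(size=(50, 64))
y = features(X) @ rng.normal(size=16) + 0.01 * rng.normal(size=50)

# Transfer learning: adapt only the lightweight head on the 50 pairs
# by gradient descent on the mean-squared error.
head = np.zeros(16)
lr = 0.05
F = features(X)
for _ in range(2000):
    grad = F.T @ (F @ head - y) / len(y)  # MSE gradient w.r.t. head
    head -= lr * grad

mse = np.mean((F @ head - y) ** 2)  # should approach the noise floor
```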

Fig. 3

MCF image retrieval of glioblastoma cryosections with transfer learning. (a) Experimental setup for acquiring pairs of microscopic and endoscopic tumor images in autofluorescence with the same MCF as the reference image. CAM, camera; BPF, bandpass filter; L, lens; BS, beam splitter; MO, microscopic objective. (b) Qualitative comparison of image retrieval by the pre-trained network and transfer learning. Residual maps were obtained by comparing reconstruction results with microscopic images. The results solely using the pre-trained network were greatly distorted with artifacts. (c) and (d) Quantitative evaluation: PSNR and SSIM distribution evaluated on 94 measured MCF images of glioblastoma.


4. Conclusions

DNNs enable high-resolution imaging through an MCF with micron resolution. However, this demands expensive data collection, and the image reconstruction strongly depends on the optical properties of the given MCF. That is, experimental acquisition of thousands of MCF-specific image pairs is required for every single endoscope, which is not easily transferable to clinics. Here, a digital-twin-based workflow is proposed to bypass the costly acquisition of biological data by single-reference-based simulation of the optical properties of an arbitrary MCF. In addition, the MCF-specific reference provides physics priors of the MCF inhomogeneity during the training processes. The idea was validated on biological samples by transfer learning. Taking autofluorescence images of glioblastoma as an example, our approach achieves precise retrieval of independent test images and improves PSNR and SSIM values up to 31.6 dB and 0.97, respectively, while requiring only 50 measured image pairs as training data (100 times less data than before). Our reference-based approach shows high feasibility for clinical translation and is capable of image retrieval to improve image-based tumor classification during neurosurgeries.

Disclosures

The authors declare no financial interests.

Code and Data Availability

The data that support the findings of this article are not publicly available due to ethical concerns. Parts of the data are available from the authors upon reasonable request.

Ethical Statement

Written informed consent was obtained from all patients and the study was approved by the ethics committee at TU Dresden (EK 323122008).

Acknowledgments

We thank Tom Glosemeyer and Martin Kroll for valuable discussions. We thank the Else Kröner Fresenius Center and the German Science Foundation (DFG Cz55/47-1, Cz55/48-1) for funding of the project.

References

1. Y. Cai, J. Wu, and Q. Dai, "Review on data analysis methods for mesoscale neural imaging in vivo," Neurophotonics 9(4), 041407 (2022). https://doi.org/10.1117/1.NPh.9.4.041407
2. M. Stibůrek et al., "110 μm thin endo-microscope for deep-brain in vivo observations of neuronal connectivity, activity and blood flow dynamics," Nat. Commun. 14(1), 1897 (2023). https://doi.org/10.1038/s41467-023-36889-z
3. N. Accanto et al., "A flexible two-photon fiberscope for fast activity imaging and precise optogenetic photostimulation of neurons in freely moving mice," Neuron 111(2), 176–189.e6 (2023). https://doi.org/10.1016/j.neuron.2022.10.030
4. R. Galli et al., "Identification of distinctive features in human intracranial tumors by label-free nonlinear multimodal microscopy," J. Biophotonics 12(10), e201800465 (2019). https://doi.org/10.1002/jbio.201800465
5. O. Uckermann et al., "Label-free multiphoton imaging allows brain tumor recognition based on texture analysis—a study of 382 tumor patients," Neuro-Oncol. Adv. 2(1), vdaa035 (2020). https://doi.org/10.1093/noajnl/vdaa035
6. R. Galli et al., "Rapid label-free analysis of brain tumor biopsies by near infrared Raman and fluorescence spectroscopy—a study of 209 patients," Front. Oncol. 9, 1165 (2019). https://doi.org/10.3389/fonc.2019.01165
7. H. Malone et al., "Complications following stereotactic needle biopsy of intracranial tumors," World Neurosurg. 84(4), 1084–1089 (2015). https://doi.org/10.1016/j.wneu.2015.05.025
8. M. L. Huang et al., "Stereotactic breast biopsy: pitfalls and pearls," Tech. Vasc. Interv. Radiol. 17(1), 32–39 (2014). https://doi.org/10.1053/j.tvir.2013.12.006
9. M. D. Krieger et al., "Role of stereotactic biopsy in the diagnosis and management of brain tumors," Semin. Surg. Oncol. 14(1), 13–25 (1998). https://doi.org/10.1002/(SICI)1098-2388(199801/02)14:1<13::AID-SSU3>3.0.CO;2-5
10. A. Dilipkumar et al., "Label-free multiphoton endomicroscopy for minimally invasive in vivo imaging," Adv. Sci. 6(8), 1801735 (2019). https://doi.org/10.1002/advs.201801735
11. A. Abdelfattah et al., "Neurophotonic tools for microscopic measurements and manipulation: status report," Neurophotonics 9(S1), 013001 (2022). https://doi.org/10.1117/1.NPh.9.S1.013001
12. A. Lukic et al., "Endoscopic fiber probe for nonlinear spectroscopic imaging," Optica 4(5), 496 (2017). https://doi.org/10.1364/OPTICA.4.000496
13. J. Trägårdh et al., "Label-free CARS microscopy through a multimode fiber endoscope," Opt. Express 27(21), 30055 (2019). https://doi.org/10.1364/OE.27.030055
14. C. W. Freudiger et al., "Label-free biomedical imaging with high sensitivity by stimulated Raman scattering microscopy," Science 322(5909), 1857–1861 (2008). https://doi.org/10.1126/science.1165758
15. A. Lombardini et al., "High-resolution multimodal flexible coherent Raman endoscope," Light Sci. Appl. 7(1), 10 (2018). https://doi.org/10.1038/s41377-018-0003-3
16. T. Sehm et al., "Label-free multiphoton microscopy as a tool to investigate alterations of cerebral aneurysms," Sci. Rep. 10(1), 12359 (2020). https://doi.org/10.1038/s41598-020-69222-5
17. C. Shu et al., "Label-free follow-up surveying of post-treatment efficacy and recurrence in nasopharyngeal carcinoma patients with fiberoptic Raman endoscopy," Anal. Chem. 93(4), 2053–2061 (2021). https://doi.org/10.1021/acs.analchem.0c03778
18. Y. Sun et al., "Endoscopic fluorescence lifetime imaging for in vivo intraoperative diagnosis of oral carcinoma," Microsc. Microanal. 19(4), 791–798 (2013). https://doi.org/10.1017/S1431927613001530
19. N. Fang et al., "Automatic and label-free detection of meningioma in dura mater using the combination of multiphoton microscopy and image analysis," Neurophotonics 10(3), 035006 (2023). https://doi.org/10.1117/1.NPh.10.3.035006
20. Z. Wen et al., "Single multimode fibre for in vivo light-field-encoded endoscopic imaging," Nat. Photonics 17, 679–687 (2023). https://doi.org/10.1038/s41566-023-01240-x
21. S. Ali, "Where do we stand in AI for endoscopic image analysis? Deciphering gaps and future directions," NPJ Digital Med. 5(1), 184 (2022). https://doi.org/10.1038/s41746-022-00733-3
22. V. R. Muthusamy et al., "The role of endoscopy in the management of GERD," Gastrointest. Endosc. 81(6), 1305–1310 (2015). https://doi.org/10.1016/j.gie.2015.02.021
23. S. A. Vasquez-Lopez et al., "Subcellular spatial resolution achieved for deep-brain imaging in vivo using a minimally invasive multimode fiber," Light Sci. Appl. 7(1), 110 (2018). https://doi.org/10.1038/s41377-018-0111-0
24. S. Turtaev et al., "High-fidelity multimode fibre-based endoscopy for deep brain in vivo imaging," Light Sci. Appl. 7(1), 92 (2018). https://doi.org/10.1038/s41377-018-0094-x
25. R. Kuschmierz et al., "Ultra-thin 3D lensless fiber endoscopy using diffractive optical elements and deep neural networks," Light Adv. Manuf. 2(4), 1 (2021). https://doi.org/10.37188/lam.2021.030
26. N. Badt and O. Katz, "Real-time holographic lensless micro-endoscopy through flexible fibers via fiber bundle distal holography," Nat. Commun. 13(1), 6055 (2022). https://doi.org/10.1038/s41467-022-33462-y
27. E. Scharf et al., "Video-rate lensless endoscope with self-calibration using wavefront shaping," Opt. Lett. 45(13), 3629 (2020). https://doi.org/10.1364/OL.394873
28. R. Kuschmierz et al., "Self-calibration of lensless holographic endoscope using programmable guide stars," Opt. Lett. 43(12), 2997 (2018). https://doi.org/10.1364/OL.43.002997
29. J. Shin et al., "A minimally invasive lens-free computational microendoscope," Sci. Adv. 5(12), eaaw5595 (2019). https://doi.org/10.1126/sciadv.aaw5595
30. D. J. Waterhouse et al., "Quantitative evaluation of comb-structure correction methods for multispectral fibrescopic imaging," Sci. Rep. 8(1), 17801 (2018). https://doi.org/10.1038/s41598-018-36088-7
31. A. Perperidis et al., "Image computing for fibre-bundle endomicroscopy: a review," Med. Image Anal. 62, 101620 (2020). https://doi.org/10.1016/j.media.2019.101620
32. J.-H. Han, S. M. Yoon, and G.-J. Yoon, "Decoupling structural artifacts in fiber optic imaging by applying compressive sensing," Opt. - Int. J. Light Electron Opt. 126(19), 2013–2017 (2015). https://doi.org/10.1016/j.ijleo.2015.05.045
33. S. P. Mekhail et al., "Fiber-bundle-basis sparse reconstruction for high resolution wide-field microendoscopy," Biomed. Opt. Express 9(4), 1843 (2018). https://doi.org/10.1364/BOE.9.001843
34. J. Wu et al., "Learned end-to-end high-resolution lensless fiber imaging towards real-time cancer diagnosis," Sci. Rep. 12(1), 18846 (2022). https://doi.org/10.1038/s41598-022-23490-5
35. J. Shao et al., "Fiber bundle imaging resolution enhancement using deep learning," Opt. Express 27(11), 15880 (2019). https://doi.org/10.1364/OE.27.015880
36. J. Shao et al., "Fiber bundle image restoration using deep learning," Opt. Lett. 44(5), 1080 (2019). https://doi.org/10.1364/OL.44.001080
37. D. Ravì et al., "Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy," Med. Image Anal. 53, 123–131 (2019). https://doi.org/10.1016/j.media.2019.01.011
38. E. Kim et al., "Honeycomb artifact removal using convolutional neural network for fiber bundle imaging," Sensors 23(1), 333 (2022). https://doi.org/10.3390/s23010333
39. J. Sun et al., "Real-time complex light field generation through a multi-core fiber with deep learning," Sci. Rep. 12(1), 7732 (2022). https://doi.org/10.1038/s41598-022-11803-7
40. J. Sun et al., "Quantitative phase imaging through an ultra-thin lensless fiber endoscope," Light Sci. Appl. 11(1), 204 (2022). https://doi.org/10.1038/s41377-022-00898-2
41. Q. Zhang et al., "Learning the matrix of few-mode fibers for high-fidelity spatial mode transmission," APL Photonics 7(6), 066104 (2022). https://doi.org/10.1063/5.0088605
42. M. Pfeiffer et al., "Learning soft tissue behavior of organs for surgical navigation with convolutional neural networks," Int. J. Comput. Assist. Radiol. Surg. 14(7), 1147–1155 (2019). https://doi.org/10.1007/s11548-019-01965-7
43. J. N. Kather et al., "Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer," Nat. Med. 25(7), 1054–1056 (2019). https://doi.org/10.1038/s41591-019-0462-y
44. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: convolutional networks for biomedical image segmentation," Lect. Notes Comput. Sci. 9351, 234–241 (2015). https://doi.org/10.1007/978-3-319-24574-4_28
45. B. Lim et al., "Enhanced deep residual networks for single image super-resolution," in Proc. IEEE Conf. Comput. Vision and Pattern Recognit. Workshops, 136–144 (2017).
46. M. Hughes, "Fibre bundle image processing/core removal (Matlab)," https://ww2.mathworks.cn/matlabcentral/fileexchange/75157-fibre-bundle-simulator (2020).
47. O. Russakovsky et al., "ImageNet large scale visual recognition challenge," Int. J. Comput. Vision 115(3), 211–252 (2015). https://doi.org/10.1007/s11263-015-0816-y

Biography

Tijue Wang is a PhD student at the Laboratory of Measurement and Sensor System Technique, TU Dresden. He received his bachelor's degree in material engineering from the Wuhan University of Science and Technology in 2017 and his master's degree in mechanical engineering from TU Dresden in 2021. He is a member of SPIE.

Jakob Dremel is a research assistant in the Laboratory of the Measurement and Sensor System Technique, TU Dresden. He received his diploma degree in mechatronics from TU Dresden in 2021 with an award for being one of the best students. His current research interests include fiber-based endoscopy and the translation of this technology to the neurosurgery department. He is currently the president of the SPIE+Optica student chapter Dresden.

Jürgen W. Czarske (fellow EOS, Optica, SPIE, IET, IoP) is full chair professor and director at TU Dresden, Germany. He is an international prize-winning inventor of laser-based technologies. His awards include the 2008 Berthold Leibinger Innovation Prize of Trumpf Laser Systems, 2019 Optica Joseph-Fraunhofer-Award/Robert-M.-Burley-Prize, 2020 Laser Instrumentation Award of IEEE Photonics Society, and 2022 SPIE Chandra S Vikram Award. He is vice president of the International Commission for Optics (ICO) and was the general chair of the general congress ICO-25-OWLS-16-Dresden-Germany-2022.

Robert Kuschmierz, PhD, is a member of Optica and SPIE. He studied electrical and mechanical engineering. He did his PhD on interferometric in-process metrology. He received the measurement technique award from the company SICK and the award for outstanding dissertations from Dr.-Ing. Siegfried Werth Foundation. Since 2017, he has been the head of the optical process metrology group at the Laboratory for Measurement and Sensor System Techniques at TU Dresden. His research interests include holography, wavefront shaping and artificial intelligence, especially for minimally invasive, lensless endoscopy.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Tijue Wang, Jakob Dremel, Sven Richter, Witold Polanski, Ortrud Uckermann, Ilker Eyüpoglu, Jürgen W. Czarske, and Robert Kuschmierz "Resolution-enhanced multi-core fiber imaging learned on a digital twin for cancer diagnosis," Neurophotonics 11(S1), S11505 (31 January 2024). https://doi.org/10.1117/1.NPh.11.S1.S11505
Received: 20 September 2023; Accepted: 8 January 2024; Published: 31 January 2024
KEYWORDS: Multicore fiber, Image retrieval, Tissues, Biological imaging, Inhomogeneities, Education and training, Image restoration