Neurophotonics beyond the surface: unmasking the brain’s complexity exploiting optical scattering
Fei Xia, Caio Vaz Rimoli, Walther Akemann, Cathie Ventalon, Laurent Bourdieu, Sylvain Gigan, Hilton B. de Aguiar
Open Access | Published 12 April 2024
Abstract

The intricate nature of the brain necessitates the application of advanced probing techniques to comprehensively study and understand its working mechanisms. Neurophotonics offers minimally invasive methods to probe the brain using optics at cellular and even molecular levels. However, multiple challenges persist, especially concerning imaging depth, field of view, speed, and biocompatibility. A major hindrance to solving these challenges in optics is the scattering nature of the brain. This perspective highlights the potential of complex media optics, a specialized area of study focused on light propagation in materials with intricate heterogeneous optical properties, in advancing and improving neuronal readouts for structural imaging and optical recordings of neuronal activity. Key strategies include wavefront shaping techniques and computational imaging and sensing techniques that exploit scattering properties for enhanced performance. We discuss the potential merger of the two fields as well as potential challenges and perspectives toward longer term in vivo applications.

1.

Introduction

The brain acts as the central regulator in all vertebrate and most invertebrate organisms.1 Comprehensive study of its structure and function is not only paramount to our scientific understanding but also crucial for developing interventions for brain-related pathologies.2,3 In this context, the field of neurophotonics, a domain that capitalizes on optical tools to study the nervous system (Fig. 1), has emerged as a powerful strategy for brain studies. Three defining strengths of optical approaches include: (i) their minimal invasiveness;5–7 (ii) their enhanced specificity when combined with molecular labeling8–12 or label-free optical techniques,6,13–17 allowing for targeted imaging at cellular and molecular levels; and (iii) the possibility of chronically recording the same structures of interest, such as neurons, dendrites, and spines,18 during development, learning, and sensory deprivation.19,20 However, persistent challenges limit the comprehensive use of optical techniques in brain research. In this perspective paper, we focus specifically on optical imaging and sensing tools to probe the brains of animal models. Neuroscientists have proposed a key objective for optical probing of the brain: to develop and integrate advanced optical probing techniques that offer high spatiotemporal resolution and large-scale recording and mapping of neural activity while ensuring safety and minimal invasiveness.21–23 Meeting this objective necessitates advancements in: (1) probing depth, which is especially important given the size variations of the brain, from larger scales in humans to smaller scales in other species8,24–26 [Fig. 2(a)]; (2) expanding the field of view (FOV), allowing for a more holistic capture and understanding of neuronal networks31–33 [Fig. 2(d)]; (3) improving probing speed to capture and interpret dynamic biological activities in both 2D and 3D contexts10,28,34–36 [Fig. 2(b)]; and (4) ensuring biocompatibility, i.e., minimizing phototoxicity and avoiding damage from implanted devices, thereby preserving the brain’s structural and functional integrity during optical investigations37,38 [Fig. 2(c)].

Fig. 1

Overview of diverse brain probing techniques: (a) microscopy: traditional imaging with direct optical access to the brain. (b) Multimode fiber: flexible approach using a fiber optic cable for light delivery and signal collection. (c) GRIN lens: minimally invasive imaging through a small-diameter lens. (d) Head fixed: apparatus for stable imaging with restrained subject movement. (e) Freely moving: setup allowing for natural behavior during imaging with a mobile recording system. Panels (a)–(e) were created with BioRender, Ref. 4.


Fig. 2

Representative advances from tools commonly used in the complex media community to address challenges in optical probing of the brain. (a) Depth: scattering and aberration compensation using computational techniques to enhance reflectance imaging of cortical myelin through the skull in the living mouse brain.27 Before: conventional reflectance microscopy through the mouse skull. After: computational conjugate adaptive optics (AO)-corrected reflectance microscopy of cortical myelin in the mouse brain through the skull. Right panel: 3D reconstruction of label-free structural information through the skull. Scale bar: 40 μm. (b) Speed: fast 3D volumetric imaging with targeted illumination of neurons in the mouse cortex labeled with a calcium indicator (GCaMP6f) to increase the signal-to-noise ratio of recorded neurons. Before: conventional volumetric calcium imaging with an electrically tunable lens and extracted traces after deconvolution. After: illumination-targeted volumetric calcium imaging and extracted traces after deconvolution.28 Scale bar: 50 μm. (c) Biocompatibility: upper panel: enhanced signal at the same laser power enabled by AO.29 Before: low signal-to-background of fluorescence-labeled neurons in the hippocampus at around 1 mm depth imaged transcranially by conventional three-photon fluorescence microscopy. After: high signal-to-background neurons in the hippocampus imaged with AO. Scale bar: 20 μm. Lower panel: brain imaging of deep subcortical neurons labeled with the genetically encoded calcium indicator GCaMP6s using a multimode fiber-based endoscope combined with wavefront shaping for minimally invasive imaging.30 Scale bar: 30 μm. (d) FOV: enlarged FOV with diffraction-limited, high-resolution imaging enabled by computational conjugate AO (after) compared with computational AO without conjugation (before, white boxes).27 Left: image of myelin. Right: phase pattern for aberration correction. SLM, spatial light modulator; DMD, digital micromirror device; MMF, multimode fiber. Panel (a) adapted with permission from Ref. 27 under license CC-BY 4.0. Panel (b) adapted with permission from Ref. 28 under license CC-BY 4.0. (c) Top images adapted with permission from Ref. 29; bottom images adapted with permission from Ref. 30 under license CC-BY 4.0. Panel (d) adapted from Ref. 27 under license CC-BY 4.0.


Here, we review recent advances in techniques and devices popularized in the complex media community that have begun to show promise in addressing some of the key challenges (Fig. 2) and discuss our perspectives on moving forward for in vivo applications.

2.

Opportunities: Bridging the Gap

The complex media field studies light propagation in materials with highly inhomogeneous optical properties. Tools developed in this area include advanced computational models of light scattering in optically heterogeneous media, algorithms for shaping light through diffusive materials, and image recovery methods that exploit scattering information.39 While rooted in fundamental light scattering, its implications naturally extend to neurophotonics due to the highly scattering nature of brain tissue.

Key techniques in the complex media field can be broadly categorized into two groups: wavefront shaping39–42 through complex media and computational imaging and sensing techniques using complex media.43 Wavefront shaping, a technique that modulates the phase and amplitude of incoming light waves using light-shaping devices, such as spatial light modulators (SLMs), is emerging as a promising avenue. Adaptive optics (AO), a wavefront shaping method focused on compensating for low-order light distortion, has already enhanced both the signal intensity and the spatiotemporal resolution across various optical imaging modalities.29,44–57 Looking ahead, recent wavefront shaping techniques that address scattering (higher-order light distortion)50,58–67 have the potential to further improve signal and resolution, especially at depths where scattering becomes a critical limitation [Fig. 1(a)]. Recent insights into local correlations during scattering events, i.e., the memory effect,68 in its chromatic,69,70 shift,71 tilt/angular,72,73 and other74 forms may guide more efficient light manipulation deep within tissues.
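
To make the feedback-based wavefront shaping principle concrete (in the spirit of the sequential optimization schemes of Refs. 60–62), the Python sketch below optimizes the phase of SLM segments one by one to maximize a scalar feedback signal, here modeled by an assumed random transmission vector standing in for the scattering medium. All parameters and the forward model are illustrative assumptions, not an experimental implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_segments = 256                      # number of SLM segments (illustrative)
n_phases = 8                          # phase steps tested per segment
t = (rng.normal(size=n_segments) +    # assumed random "transmission" vector of the medium
     1j * rng.normal(size=n_segments)) / np.sqrt(2 * n_segments)

def feedback(phases):
    """Intensity at the target focus for a given phase mask (proxy for a guide-star signal)."""
    field = np.sum(np.exp(1j * phases) * t)
    return np.abs(field) ** 2

phases = np.zeros(n_segments)
test = 2 * np.pi * np.arange(n_phases) / n_phases

# Sequential (segment-by-segment) optimization: keep the phase that maximizes the feedback.
for k in range(n_segments):
    signals = []
    for p in test:
        trial = phases.copy()
        trial[k] = p
        signals.append(feedback(trial))
    phases[k] = test[int(np.argmax(signals))]

baseline = np.mean([feedback(2 * np.pi * rng.random(n_segments)) for _ in range(100)])
print(f"focus enhancement over random phase masks: {feedback(phases) / baseline:.1f}x")
```

In a real microscope the feedback would come from a guide star or camera rather than a simulated transmission vector, and the loop speed is limited by the modulator and detector rates discussed below.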

The memory effect refers to the phenomenon whereby the optical fields of scattered light remain correlated when certain properties of the incident light, such as position, wavevector direction, polarization, or spectrum, change over a specific range. As illustrated in Fig. 3: “chromatic” refers to changes in the light’s wavelength; “shift” pertains to the lateral or axial displacement of light beams; and “angular” or “tilt” involves changes in the direction of light propagation. The memory effect enables the prediction of how light’s properties change with scattering, facilitating computational or hardware-based tools for enhanced imaging quality through scattering tissues or interfaces with complex optical properties, such as multimode fibers. For instance, by conjugating the light modulation plane to specific locations within the scattering medium [Fig. 2(d)], we might find an optimal balance between enhancements of the signal intensity, FOV, and spatial resolution.75,76 In the brain, despite the relatively dense packing of neurons and vasculature, fluorescence microscopy often reveals a sparser distribution, particularly within a given color channel, as a result of selective fluorescent labeling targeting specific cellular or vascular components or of sparse expression of the fluorophore.26,77–79 Leveraging sparse and compressive sampling or scanning techniques, such as acousto-optic deflectors (AODs)80–83 and digital micromirror devices (DMDs)28,84–89 [Fig. 2(b)], ensures efficient photon utilization. Such methods not only expedite the imaging process but also preserve the photon budget, setting the stage for up to an order of magnitude increase in imaging speed [Fig. 2(b)] and a reduction in laser power [Fig. 2(c)] for faster and physiologically safer recording, given proper guide stars for wavefront shaping.90 A guide star in imaging is akin to its astronomical counterpart; it serves as a reference light source arising from various contrast mechanisms, such as harmonic, photoacoustic, fluorescence, and scattered light,90 within the sample to facilitate the correction of light distortion caused by scattering. By employing a guide star, we can guide the wavefront shaping process to more precisely manipulate incoming light waves. This improves the efficiency of the photon budget of the incoming field, enhancing the focus intensity and signal at greater penetration depths and reducing the laser intensity required, thereby minimizing potential photodamage to biological tissues. It could even help capture faster events, such as millisecond action potentials in neurons10,28,34,91–93 [Fig. 2(b)]. Furthermore, compressive random-access sampling with fast light modulators, such as AODs, permits the integration of fast temporal sampling and wavefront shaping,82,91,92 including adaptive correction of aberrations and scattering over an extended FOV that effectively exceeds the range of the angular memory effect. This is achieved by taking advantage of the fast update rate of AODs to correct multiple local aberrations almost synchronously with the progression of a scanning beam, whether in pixel-by-pixel or random-access scan mode.94 The primary advantage of employing wavefront shaping to enhance the capabilities of state-of-the-art optical microscopy lies in the optimization of the photon budget. This technique enables the strategic redistribution of photons to either augment the imaging speed or expand the FOV, all while maintaining a fixed photon allowance for biological imaging under safe physiological conditions.
However, challenges remain in terms of its shaping speed, which needs to be improved to overcome the temporal decorrelation of the scattered light field [Fig. 3(h)].
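
To illustrate why the memory-effect range matters for the correction FOV [Figs. 3(j) and 3(k)], the sketch below evaluates the classic tilt/tilt speckle correlation for a diffusive slab, C(Δθ) = [kΔθL/sinh(kΔθL)]², and extracts its full width at half maximum. The wavelength and effective thickness are illustrative assumptions, and brain tissue is strongly forward scattering, so this is only an order-of-magnitude illustration: the thicker or more scattering the medium, the narrower the angular range over which a single correction remains valid.

```python
import numpy as np

wavelength = 0.9e-6          # excitation wavelength in meters (illustrative)
k = 2 * np.pi / wavelength
L = 500e-6                   # effective scattering thickness in meters (illustrative)

def tilt_memory_correlation(dtheta, k, L):
    """Speckle correlation vs. tilt angle for a diffusive slab: C = (x/sinh x)^2, x = k*dtheta*L."""
    x = np.maximum(k * dtheta * L, 1e-12)   # avoid 0/0 at dtheta = 0
    return (x / np.sinh(x)) ** 2

dtheta = np.linspace(0, 5e-3, 10000)                 # tilt angles in radians
C = tilt_memory_correlation(dtheta, k, L)
half_angle = dtheta[np.argmin(np.abs(C - 0.5))]      # angle where correlation drops to 1/2
fwhm = 2 * half_angle                                # correlation is symmetric in dtheta
print(f"angular memory-effect FWHM ~ {fwhm * 1e3:.3f} mrad for L = {L * 1e6:.0f} um")
```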

Fig. 3

Optical access to the mouse brain through a scattering medium: (a) schematic of a live mouse highlighting the brain area, (b) inhomogeneous structures within the mouse brain that can cause optical scattering, (c) multimode fiber (MMF), a frequently studied complex scattering medium in the complex media field, which is also often utilized for optical access to the brain, (d) scattering-induced wavefront distortion, (e)–(i) various memory effects: (e) tilt/angular memory effect, (f) lateral shift memory effect, (g) axial shift memory effect, (h) temporal memory effect, (i) chromatic memory effect, and (j) representative quantitative correlation of the wavefront correction pattern for achieving diffraction-limited focusing/imaging in highly scattering brain tissue, demonstrating that the range of the memory effect (defined by the full width at half maximum of the correlation curve) is substantially narrower than in less scattering scenarios, as shown in panel (k). The patterns for correcting wavefront distortion in highly scattering media are more complex in panel (j) than in panel (k). Note that in panels (j) and (k), ζ could be any of the types of memory effect in panels (e)–(i); for the illustrative example we chose ζ=Δx (shift). Panels (a), (b), and (d)–(i) were created with BioRender, Ref. 4.


In the realm of computational imaging and sensing techniques through complex media, image reconstruction43,95,96 and signal processing97–99 methods that exploit the properties of random or scattering media have emerged as potential game-changers. Leveraging the inherently locally correlated nature of scattered light, techniques such as auto-correlation-,100 cross-correlation-,101,102 and patch-connection-based103 image reconstruction methods have been proposed. These aim to directly reconstruct images through highly scattering media, with the potential to achieve a larger FOV at greater depths. Image reconstruction is fundamentally a process of solving optimization problems, which can be categorized into convex and non-convex cases. Convex optimization problems are generally more straightforward to solve because their global minima are easily identifiable. On the other hand, non-convex optimization problems, more common in imaging through scattering tissue, often suffer from multiple local minima, complicating the search for the global minimum. In this challenging landscape, deep learning104,105 emerges as a powerful tool, offering robust methods that learn from data to effectively approximate global optima, opening exciting new avenues for neurophotonics imaging with significant potential to enhance its capabilities. It provides not only an alternative tool for solving optimization problems, such as optimization formulated as unrolled neural networks,101,106,107 but also enhances image reconstruction with deep learning models that generalize better across scattering problems.104 In parallel, the analysis and understanding of speckle (a highly sensitive interference pattern commonly observed when light propagates through complex media108,109) has proven extremely powerful and promising. In the brain, the detected speckle signal can be highly sensitive to various events, such as calcium signaling,110 an indirect indicator of voltage fluctuations, and blood flow.111 Computational imaging techniques have shown promising results in brain imaging. For example, advanced signal processing methods, such as non-negative matrix factorization (NMF), have proven instrumental in calcium imaging experiments by effectively removing noise and isolating signal components.112,113 Additionally, the utility of computational imaging extends to blood flow estimation and reconstruction from speckle patterns observed in brain tissues.114 Furthermore, computational tools such as constrained NMF115 and DeepCAD116 have greatly enhanced denoising and the ability to retrieve signals from high background noise levels.115–119 These advancements are particularly valuable for imaging through highly scattering tissues, where traditional imaging methods struggle to provide clear and reliable data. Although still at an early stage, it is anticipated that computational imaging will continue to enhance the clarity and utility of acquired images, enabling more detailed and accurate studies of neural structures and functions in challenging imaging conditions.
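
As a minimal illustration of how such matrix-factorization approaches can demix functional signals recorded through scattering (in the spirit of Refs. 97 and 112–115), the sketch below applies scikit-learn's NMF to a synthetic movie in which a few calcium-like time courses are mixed onto the detector by random, non-negative "speckle" footprints. All data are simulated, and this is not a reproduction of any published pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_pixels, n_frames, n_sources = 1024, 600, 3

# Ground-truth temporal traces: sparse calcium-like transients with exponential decay.
traces = np.zeros((n_sources, n_frames))
for s in range(n_sources):
    events = rng.random(n_frames) < 0.02
    traces[s] = np.convolve(events.astype(float), np.exp(-np.arange(60) / 15.0))[:n_frames]

# Each source is mixed onto the camera by a random non-negative speckle-like footprint.
footprints = rng.random((n_pixels, n_sources)) ** 4
movie = footprints @ traces + 0.05 * rng.random((n_pixels, n_frames))   # pixels x time

# Low-rank non-negative factorization: movie ~ W @ H, rows of H ~ demixed time traces.
model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(movie)
H = model.components_

# Check recovery: correlate each recovered trace with its best-matching ground-truth trace.
for i in range(n_sources):
    corrs = [np.corrcoef(H[i], traces[s])[0, 1] for s in range(n_sources)]
    print(f"recovered component {i}: best match r = {max(corrs):.2f}")
```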

3.

Challenges and Limitations Toward Longer-Term In Vivo Applications

Although the progress mentioned above is exciting, the broader adoption of these techniques for longer-term in vivo biological studies still faces challenges.

In vivo applications involve imaging and sensing within the brain of a living, behaving animal, which raises a first challenge in terms of recording artifacts linked to movement. The most common in vivo strategy is to fix the animal’s head under a microscope, allowing for good control of the sensory stimuli applied to the animal as well as accurate measurement of its behavior. Neurophotonics techniques developed for in vitro samples can be adapted for head-fixed animals provided that motion is taken into account. This encompasses micrometer- to millimeter-scale motions from heartbeats, respiration, and blood flow, as well as bulk motions induced by body movements and muscle contractions. These motions cause spatiotemporal noise in the tissue’s scattering properties. Temporally, these dynamics are observable down to the millisecond range, and spatially, they can be seen down to the micron level. For example, regarding the bulk motion of the brain, in two-photon imaging experiments in the cortex with a cranial window, motion artifacts of around 2 to 4 μm in the axial direction were observed at distances of less than 150 μm from the optical window.82 When implanting an optical fiber, one expects to encounter similar motion artifacts when exploring shallow regions of the brain. Interestingly, however, fewer motion artifacts are observed when a fiber is implanted in deeper brain regions; this has been observed for two-photon imaging with gradient refractive index (GRIN) lenses.120,121 From a technical standpoint, this poses concerns regarding the stability and speed of wavefront shaping techniques, as well as noise issues in computational imaging and sensing techniques. These factors underscore the need for adaptive imaging solutions that can recalibrate in real time, ensuring consistent performance.122 However, these hurdles, though significant, are not insurmountable. The way forward may involve a co-design philosophy, harmoniously melding wavefront shaping systems, algorithms, and imaging systems. For head-fixed animals, introducing an “animal-in-the-loop” design could be revolutionary. This approach would use real-time feedback from the animal’s physiological and behavioral changes to continually adapt the imaging process, for example, using online motion tracking to adapt the scanning scheme in real time123 or using the heartbeat signal to gate the optical signal and remove heartbeat-related imaging noise.29
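
One simple building block for such online motion tracking is frame-to-reference registration by phase correlation, sketched below with NumPy; the estimated lateral shift could then be fed back to re-center the scan region. This is an illustrative, generic sketch of shift estimation under an assumed pure-translation model, not the implementation of Ref. 123.

```python
import numpy as np

def estimate_shift(reference, frame):
    """Estimate the integer-pixel (dy, dx) translation such that frame ~ np.roll(reference, (dy, dx)),
    using phase correlation (normalized cross-power spectrum)."""
    cross_power = np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak, dtype=float)
    # Wrap shifts larger than half the image size to negative displacements.
    for ax, size in enumerate(corr.shape):
        if shifts[ax] > size // 2:
            shifts[ax] -= size
    return shifts

# Synthetic demo: shift a random "reference" frame and recover the displacement.
rng = np.random.default_rng(2)
reference = rng.random((256, 256))
true_shift = (7, -12)
frame = np.roll(reference, true_shift, axis=(0, 1))
print("estimated (dy, dx):", estimate_shift(reference, frame), "true:", true_shift)
```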

A second challenge is improving depth penetration in brain tissues. In brain imaging, the depth achievable with current technologies varies significantly across different microscopy techniques and contrast mechanisms. For example, we have summarized the depth penetration capabilities of some of the most popular fluorescence microscopy techniques, including one-photon, two-photon, and three-photon excited fluorescence, as follows (in the context of in vivo adult mouse brain imaging).

Conventional one-photon (1P) microscopy is limited to depths of approximately 0.3 to 0.4 mm due to light scattering and absorption in the commonly used visible range of light.124,125 Conventional two-photon (2P) microscopy extends this depth to about 0.6 to 0.8 mm.8,126 Conventional three-photon (3P) microscopy further increases imaging depth to 1.2 to 2.1 mm.127,128 The potential depth limits (Table 1, column 3) for these imaging methods can be estimated based on effective attenuation lengths130 depending on the excitation and detection method.124,126,133

Table 1

Current and potential estimated depth penetration capabilities of one-, two-, and three-photon excited fluorescence microscopy. Visible range of light: 380 to 700 nm; near-infrared I light: 700 to 900 nm. *Indicates optimal imaging windows around 1300 and 1700 nm (within the near-infrared II region, 1000 to 1700 nm, also called the short-wave infrared range, which covers similar or even broader ranges in some definitions).

Fluorescence microscopy | Demonstrated depths so far with high spatial resolution (close to diffraction-limited) | Potential estimated depth limits with high spatial resolution (close to diffraction-limited)
Excitation: one-photon; detection: widefield | 0.1–0.2 mm129 (visible range of light) | 0.6–0.8 mm124,130 (*near-infrared II or short-wave infrared light)
Excitation: one-photon; detection: confocal | 0.3–0.4 mm125 (visible range of light) | 1.5–2 mm124,130 (*near-infrared II or short-wave infrared light)
Excitation: two-photon (temporal focusing); detection: widefield | 0.3–0.4 mm131 (near-infrared I light) | 0.6–0.8 mm124,126,130 (*near-infrared II or short-wave infrared light)
Excitation: two-photon; detection: single-element detector [e.g., photomultiplier tube (PMT)] | 0.6–0.8 mm8,127 (near-infrared I light) | 1.5–2 mm126,130 (*near-infrared II or short-wave infrared light)
Excitation: three-photon; detection: single-element detector (e.g., PMT) | 1.2–2.1 mm127,132 (near-infrared II or short-wave infrared light) | 3–4 mm130,133 (*near-infrared II or short-wave infrared light)
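
To give a feel for how such attenuation-length-based estimates scale, the short calculation below evaluates the surviving ballistic excitation fraction exp(−z/l_e) at a few depths. The numerical values of l_e are rough illustrative assumptions (not measurements from Ref. 130), and the practical depth limit additionally depends on the detection scheme, out-of-focus background, and available laser power, as discussed in Refs. 124, 126, and 133.

```python
import numpy as np

# Illustrative effective attenuation lengths l_e (mm) for adult mouse cortex; these
# numbers are assumptions for the example only, not values taken from Ref. 130.
attenuation_length_mm = {
    "visible (~500 nm)": 0.05,
    "near-infrared I (~900 nm)": 0.15,
    "near-infrared II (~1300 nm)": 0.30,
}

depths_mm = np.array([0.3, 0.8, 1.5])       # depths comparable to the Table 1 entries
for label, l_e in attenuation_length_mm.items():
    ballistic = np.exp(-depths_mm / l_e)    # fraction of unscattered (ballistic) excitation photons
    summary = ", ".join(f"{d:.1f} mm -> {b:.1e}" for d, b in zip(depths_mm, ballistic))
    print(f"{label:28s} {summary}")
```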

Techniques such as wavefront shaping and computational imaging have been developed to mitigate scattering and aberrations, potentially enhancing imaging depth and resolution. These advancements enable more efficient light delivery and collection deep within tissues. In particular, wavefront shaping can be, and has been, coupled with 1P-, 2P-, or 3P-excited fluorescence contrasts, thereby extending the achievable depth of each modality. For example, in a proof-of-concept multiphoton wavefront shaping experiment, an enhancement of at least one order of magnitude of the 2P signal and a gain of two orders of magnitude for the 3P signal were observed.56
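
The larger relative gain for the higher-order process is expected from the power-law dependence of multiphoton excitation on focal intensity. As a hedged back-of-the-envelope relation (ignoring changes in background and focal volume, and not specific to the experimental conditions of Ref. 56):

```latex
S_{n\mathrm{P}} \propto I_{\mathrm{focus}}^{\,n}
\qquad\Longrightarrow\qquad
\frac{S_{n\mathrm{P}}^{\mathrm{shaped}}}{S_{n\mathrm{P}}^{\mathrm{unshaped}}} \approx \eta^{\,n},
```

where η is the focal intensity enhancement produced by wavefront shaping; the same η therefore yields a larger signal gain for 3P (η³) than for 2P (η²).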

One fundamental barrier is the depth beyond which even sophisticated light manipulation or computational imaging strategies become potentially impractical for diffraction-limited focusing and reconstruction (refer to Table 1, column 3). In such cases, minimally invasive fiber optics are the only viable option for high-resolution imaging, especially in scenarios requiring high mobility or minimal interference with the subject (the animal). Devices such as miniature multimode fibers (MMFs) can be used to bypass tissue scattering. Endoscopes incorporating MMFs134–137 [Figs. 1(b), 3(c), and 2(c)] serve as a relay between the animal and a benchtop wavefront shaping microscope, ensuring minimal invasiveness. Pioneering works, such as MMF-based imaging of the mouse brain30,136 and in vivo histology,138 as well as deep learning for image reconstruction through MMFs,139,140 provide glimpses into the potential future of neurophotonics for deep brain imaging.
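
As a schematic illustration of how a calibrated transmission matrix (TM) enables imaging through an MMF or another scattering relay (in the spirit of Ref. 96 and the MMF endoscopes above), the sketch below simulates a random complex TM, propagates a simple object field through it, and recovers the object by pseudo-inverse (least-squares) inversion of the detected speckle field. The TM, object, and noise level are simulated assumptions; a real system requires interferometric calibration and coherent detection.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obj, n_cam = 100, 400     # object (input) modes and camera (output) pixels, illustrative

# Assumed calibrated complex transmission matrix of the fiber / scattering medium.
T = (rng.normal(size=(n_cam, n_obj)) + 1j * rng.normal(size=(n_cam, n_obj))) / np.sqrt(2 * n_obj)

# Simple 1D "object" field at the input side, transmitted through the medium with detection noise.
obj = np.zeros(n_obj, dtype=complex)
obj[20:25] = 1.0
obj[60] = 0.6
speckle = T @ obj + 0.01 * (rng.normal(size=n_cam) + 1j * rng.normal(size=n_cam))

# With the TM known, the object is recovered by regularized (pseudo-inverse) inversion.
obj_rec = np.linalg.pinv(T) @ speckle
err = np.linalg.norm(obj_rec - obj) / np.linalg.norm(obj)
print(f"relative reconstruction error: {err:.3f}")
```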

Advanced wavefront shaping techniques can also help address a third challenge: imaging in freely behaving configurations, which provide access to a wider range of behaviors, such as social interactions and sleep. Freely behaving imaging has been achieved using wavefront-shaping-assisted endoscopes based on fiber bundles.141,142 Another approach relies on miniature microscopes143–147 that allow neuronal activity to be measured using conventional widefield,143,144 two-photon,145,146 and three-photon148–150 imaging methods. However, combining these microscopes with wavefront shaping techniques will require the miniaturization of beam-shaping devices. The ultimate goal would be the development of wireless miniscopes,151,152 freeing subjects from physical restraints and promoting natural behaviors.

On the other hand, computational techniques such as machine learning can generally facilitate a more robust search for the global optimum in the non-convex optimization problems that arise when probing through scattering tissues. Major application directions include: (1) the reconstruction of high-fidelity images from scattered light patterns, effectively “learning” the tissue’s scattering properties to inversely map the captured signals back to their original, unscattered state; (2) denoising images during high-speed imaging or in challenging imaging scenarios; (3) decoding the scattered fields/patterns for biomedical insights; and (4) predicting correction masks in wavefront shaping.

For instance, neural networks have been utilized to predict the unscattered light path, allowing for real-time correction of distorted images caused by tissue scattering.153 This method has the potential to enhance the depth penetration and resolution of imaging modalities, such as two-photon and three-photon microscopy, making it possible to visualize neuronal activity deeper within the brain with unprecedented clarity. At the same time, deep learning has enabled enhanced image quality in challenging imaging scenarios through image denoising, for example, in high-speed voltage imaging154 and high-quality calcium imaging.116,118 This advancement has led to improved neural activity traces, facilitating more accurate spike inference. Furthermore, deep learning models have been applied to interpret the speckle patterns resulting from coherent light scattering, extracting meaningful biological signals, such as cerebral blood flow, from noise, thereby facilitating non-invasive imaging techniques that can monitor brain dynamics.155 Deep learning has also recently been applied to predicting scattering or aberration correction patterns in brain imaging.156
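
To make the denoising direction more concrete, the PyTorch sketch below illustrates the core idea behind self-supervised denoising of functional movies (in the spirit of DeepCAD, Ref. 116): because detection noise is approximately independent between temporally adjacent frames while the signal is strongly correlated, consecutive frames can serve as noisy input/target pairs for training a denoiser without clean ground truth. The tiny network, synthetic data, and hyperparameters here are illustrative assumptions, not the published architecture or training protocol.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "calcium movie": a slowly varying smooth signal plus independent per-frame noise.
n_frames, H, W = 200, 32, 32
t = torch.linspace(0, 8 * math.pi, n_frames).view(-1, 1, 1)
yy, xx = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
clean = torch.relu(torch.sin(t) * torch.exp(-(xx ** 2 + yy ** 2) / 0.2))
noisy = clean + 0.5 * torch.randn(n_frames, H, W)

# Small convolutional denoiser (illustrative; far smaller than DeepCAD's 3D U-Net).
denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Self-supervised training: predict frame t+1 from frame t. The noise is independent across
# frames, so the network cannot reproduce it and is pushed toward the shared clean signal.
inputs = noisy[:-1].unsqueeze(1)    # (N-1, 1, H, W)
targets = noisy[1:].unsqueeze(1)
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(denoiser(inputs), targets)
    loss.backward()
    opt.step()

with torch.no_grad():
    denoised = denoiser(noisy.unsqueeze(1)).squeeze(1)
    print("noisy MSE vs clean:   ", float(((noisy - clean) ** 2).mean()))
    print("denoised MSE vs clean:", float(((denoised - clean) ** 2).mean()))
```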

Looking ahead, machine learning, with its strengths in generalization and robustness, can be invaluable. Algorithms supported by machine learning can process and interpret vast amounts of data rapidly, ensuring that researchers keep pace with the time-varying optical properties of the in vivo environment. As neurophotonics delves deeper into uncharted territories, a symbiotic relationship between industry and academia becomes essential. Industrial stakeholders can develop faster, more stable shapers and more sensitive detectors, whereas academia can push the boundaries of algorithmic and system design innovations.

4.

Conclusion

The merging of complex media research with neurophotonics marks the beginning of an era brimming with significant potential and opportunities. Moving forward, there is a need for collaboration and innovation across disciplines, including algorithm development, machine learning, optics, and neuroscience. This interdisciplinary approach is essential for overcoming existing technical challenges and unraveling a better mechanistic understanding of the brain in currently unexplored regimes. Given the recent advancements in computational imaging and sensing through complex media, wavefront shaping technology, machine learning tools, and the myriad chemical and biological tools developed in neuroscience, we believe there lies a tremendous opportunity to synergize these diverse fields.

Disclosures

The authors declare no conflicts of interest.

Code and Data Availability

Data sharing is not applicable to this article, as no new data was created or analyzed.

Acknowledgments

The authors thank Ruth Sims and Yusaku Hontani for their feedback and proofreading of the manuscript. F. X. and S.G. acknowledge funding from Chan Zuckerberg Initiative (Grant No. 2020-225346). C.V.R. and S.G. acknowledge funding from the Human Frontier Science Program (Award No. RGP003/2020). W.A., C.V., and L.B. acknowledge funding from the National Institutes of Health (NIH) BRAIN Initiative (Grant No. 1U01NS103464) and the Agence nationale de la recherche (ANR): Institut de Convergence Q-Life (Grant No. ANR-17-CONV-0005), ALPINS (Grant No. ANR-15-CE19-0011), EXPECT (Grant No. ANR-17-CE37-0022-01), Programme d'investissements d’avenir (Grant No. ANR-10-LABX-54), Memolife (Grant No. ANR-11-IDEX-0001-02), and Université PSL (Grant No. ANR-10-INSB-04-01) (France-BioImaging infrastructure). C.V. acknowledges funding from ANR MULTIMOD (Grant No. OP19-121-OTP-01). S.G. acknowledges funding from NIH (Grant No. 1RF1NS113251) and a senior fellowship from Institut Universitaire de France. S.G. and H.B.A. acknowledge funding from H2020 Future and Emerging Technologies (Grant No. 863203). H.B.A. acknowledges funding from ANR COCOhRICO (Grant No. ANR-21-CE42-0013).

References

1. 

E. R. Kandel et al., Principles of Neural Science, 4th ed., McGraw-Hill, New York (2000). Google Scholar

2. 

M. Bear, B. Connors and M. A. Paradiso, Neuroscience: Exploring the Brain, Enhanced Edition, Jones & Bartlett Learning (2020). Google Scholar

3. 

T. R. Insel and S. C. Landis, “Twenty-five years of progress: the view from NIMH and NINDS,” Neuron, 80 561 –567 https://doi.org/10.1016/j.neuron.2013.09.041 NERNET 0896-6273 (2013). Google Scholar

4. 

“BioRender: scientific image and illustration software,” https://www.bioRender.com. Google Scholar

5. 

G. Strangman, D. A. Boas and J. P. Sutton, “Non-invasive neuroimaging using near-infrared light,” Biol. Psychiatry, 52 679 –693 https://doi.org/10.1016/S0006-3223(02)01550-0 BIPCBF 0006-3223 (2002). Google Scholar

6. 

A. Villringer and B. Chance, “Non-invasive optical spectroscopy and imaging of human brain function,” Trends Neurosci., 20 435 –442 https://doi.org/10.1016/S0166-2236(97)01132-6 TNSCDR 0166-2236 (1997). Google Scholar

7. 

P. A. Bandettini, “What’s new in neuroimaging methods?,” Ann. N. Y. Acad. Sci., 1156 260 –293 https://doi.org/10.1111/j.1749-6632.2009.04420.x ANYAA9 0077-8923 (2009). Google Scholar

8. 

F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods, 2 932 –940 https://doi.org/10.1038/nmeth818 1548-7091 (2005). Google Scholar

9. 

Handbook of Biological Confocal Microscopy, Springer Science & Business Media (2006). Google Scholar

10. 

J. D. Marshall et al., “Cell-type-specific optical recording of membrane voltage dynamics in freely moving mice resource cell-type-specific optical recording of membrane voltage dynamics in freely moving mice,” Cell, 167 1650 –1662.e15 https://doi.org/10.1016/j.cell.2016.11.021 CELLB5 0092-8674 (2016). Google Scholar

11. 

A. Watakabe et al., “Fluorescent in situ hybridization technique for cell type identification and characterization in the central nervous system,” Methods, 52 367 –374 https://doi.org/10.1016/j.ymeth.2010.07.003 MTHDE9 1046-2023 (2010). Google Scholar

12. 

A. M. Packer, B. Roska and M. Häusser, “Targeting neurons and photons for optogenetics,” Nat. Neurosci., 16 805 –815 https://doi.org/10.1038/nn.3427 NANEFN 1097-6256 (2013). Google Scholar

13. 

B. C. Wilson and S. L. Jacques, “Optical reflectance and transmittance of tissues: principles and applications,” IEEE J. Quantum Electron., 26 2186 –2199 https://doi.org/10.1109/3.64355 IEJQA7 0018-9197 (1990). Google Scholar

14. 

T. E. Matthews et al., “Deep tissue imaging using spectroscopic analysis of multiply scattered light,” Optica, 1 105 https://doi.org/10.1364/OPTICA.1.000105 (2014). Google Scholar

15. 

C. L. Evans and X. S. Xie, “Coherent anti-stokes Raman scattering microscopy: chemical imaging for biology and medicine,” Annu. Rev. Anal. Chem., 1 883 –909 https://doi.org/10.1146/annurev.anchem.1.031207.112754 (2008). Google Scholar

16. 

T. Durduran and A. G. Yodh, “Diffuse correlation spectroscopy for non-invasive, micro-vascular cerebral blood flow measurement,” Neuroimage, 85 51 –63 https://doi.org/10.1016/j.neuroimage.2013.06.017 NEIMEF 1053-8119 (2014). Google Scholar

17. 

M. Jermyn et al., “Intraoperative brain cancer detection with Raman spectroscopy in humans,” Sci. Transl. Med., 7 274ra19 https://doi.org/10.1126/scitranslmed.aaa2384 STMCBQ 1946-6234 (2015). Google Scholar

18. 

G. Szalay et al., “Fast 3D imaging of spine, dendritic, and neuronal assemblies in behaving animals,” Neuron, 92 723 –738 https://doi.org/10.1016/j.neuron.2016.10.002 NERNET 0896-6273 (2016). Google Scholar

19. 

B. F. Grewe and F. Helmchen, “Optical probing of neuronal ensemble activity,” Curr. Opin. Neurobiol., 19 520 –529 https://doi.org/10.1016/j.conb.2009.09.003 COPUEN 0959-4388 (2009). Google Scholar

20. 

A. Zepeda, C. Arias and F. Sengpiel, “Optical imaging of intrinsic signals: recent developments in the methodology and its applications,” J. Neurosci. Methods, 136 1 –21 https://doi.org/10.1016/j.jneumeth.2004.02.025 JNMEDT 0165-0270 (2004). Google Scholar

21. 

L. A. Jorgenson et al., “The BRAIN initiative: developing technology to catalyse neuroscience discovery,” Philos. Trans. R. Soc. B Biol. Sci., 370 20140164 https://doi.org/10.1098/rstb.2014.0164 (2015). Google Scholar

22. 

A. P. Alivisatos et al., “The brain activity map project and the challenge of functional connectomics,” Neuron, 74 970 –974 https://doi.org/10.1016/j.neuron.2012.06.006 NERNET 0896-6273 (2012). Google Scholar

23. 

W. Yang et al., “Simultaneous multi-plane imaging of neural circuits,” Neuron, 89 269 –284 https://doi.org/10.1016/j.neuron.2015.12.012 NERNET 0896-6273 (2016). Google Scholar

24. 

N. G. Horton et al., “In vivo three-photon microscopy of subcortical structures within an intact mouse brain,” Nat. Photonics, 7 205 https://doi.org/10.1038/nphoton.2012.336 NPAHBY 1749-4885 (2013). Google Scholar

25. 

D. A. Sholl, The Organization of the Cerebral Cortex (1956). Google Scholar

26. 

S. M. Sunkin et al., “Allen Brain Atlas: an integrated spatio-temporal portal for exploring the central nervous system,” Nucl. Acids Res., 41 D996 –D1008 (2012). https://doi.org/10.1093/nar/gks1042 Google Scholar

27. 

Y. Kwon et al., “Computational conjugate adaptive optics microscopy for longitudinal through-skull imaging of cortical myelin,” Nat. Commun., 14 105 https://doi.org/10.1038/s41467-022-35738-9 NCAOBW 2041-1723 (2023). Google Scholar

28. 

S. Xiao et al., “Video-rate volumetric neuronal imaging using 3D targeted illumination,” Sci. Rep., 8 1 –10 https://doi.org/10.1038/s41598-018-26240-8 SRCEC3 2045-2322 (2018). Google Scholar

29. 

L. Streich et al., “High-resolution structural and functional deep brain imaging using adaptive optics three-photon microscopy,” Nat. Methods, 18 1253 –1258 https://doi.org/10.1038/s41592-021-01257-6 1548-7091 (2021). Google Scholar

30. 

M. Stibůrek et al., “110 μm thin endo-microscope for deep-brain in vivo observations of neuronal connectivity, activity and blood flow dynamics,” Nat. Commun., 14 1897 (2023). https://doi.org/10.1038/s41467-023-36889-z Google Scholar

31. 

N. J. Sofroniew et al., “A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging,” Elife, 5 e14472 https://doi.org/10.7554/eLife.14472 (2016). Google Scholar

32. 

A. T. Mok et al., “A large field of view two-and three-photon microscope for high-resolution deep tissue imaging,” in Conf. on Lasers and Electro-Opt. (CLEO), 1 –2 (2023). Google Scholar

33. 

C.-H. Yu et al., “Diesel2p mesoscope with dual independent scan engines for flexible capture of dynamics in distributed neural circuitry,” Nat. Commun., 12 6639 https://doi.org/10.1038/s41467-021-26736-4 NCAOBW 2041-1723 (2021). Google Scholar

34. 

Y. Gong et al., “High-speed recording of neural spikes in awake mice and flies with a fluorescent voltage sensor,” Science, 350 1361 –1366 https://doi.org/10.1126/science.aab0810 SCIEAS 0036-8075 (2015). Google Scholar

35. 

T. D. Weber et al., “High-speed multiplane confocal microscopy for voltage imaging in densely labeled neuronal populations,” Nat. Neurosci., 26 1642 –1650 https://doi.org/10.1038/s41593-023-01408-2 NANEFN 1097-6256 (2023). Google Scholar

36. 

J. Wu et al., “Kilohertz two-photon fluorescence microscopy imaging of neural activity in vivo,” Nat. Methods, 17 287 –290 https://doi.org/10.1038/s41592-020-0762-7 1548-7091 (2020). Google Scholar

37. 

J. Icha et al., “Phototoxicity in live fluorescence microscopy, and how to avoid it,” BioEssays, 39 1700003 https://doi.org/10.1002/bies.201700003 BIOEEJ 0265-9247 (2017). Google Scholar

38. 

K. Podgorski and G. Ranganathan, “Brain heating induced by near-infrared lasers during multiphoton microscopy,” J. Neurophysiol., 116 1012 –1023 https://doi.org/10.1152/jn.00275.2016 JONEA4 0022-3077 (2016). Google Scholar

39. 

S. Gigan et al., “Roadmap on wavefront shaping and deep imaging in complex media,” J. Phys. Photonics, 4 042501 https://doi.org/10.1088/2515-7647/ac76f9 (2022). Google Scholar

40. 

H. Yu et al., “Recent advances in wavefront shaping techniques for biomedical applications,” Curr. Appl. Phys., 15 632 –641 https://doi.org/10.1016/j.cap.2015.02.015 1567-1739 (2015). Google Scholar

41. 

S. Gigan, “Optical microscopy aims deep,” Nat. Photonics, 11 14 –16 https://doi.org/10.1038/nphoton.2016.257 NPAHBY 1749-4885 (2017). Google Scholar

42. 

Wavefront Shaping for Biomedical Imaging, Cambridge University Press (2019). Google Scholar

43. 

L. Tian, “Computational imaging in complex media (Conference Presentation),” Proc. SPIE, 10656 106560R https://doi.org/10.1117/12.2302593 PSISDG 0277-786X (2018). Google Scholar

44. 

G. M. Pérez and P. Artal, “Impact of scattering and spherical aberration in contrast sensitivity,” J. Vision, 9 1 –10 https://doi.org/10.1167/9.3.19 (2018). Google Scholar

45. 

K. Wang et al., “Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue,” Nat. Commun., 6 1 –6 https://doi.org/10.1038/ncomms8276 NCAOBW 2041-1723 (2015). Google Scholar

46. 

C. Rodríguez et al., “An adaptive optics module for deep tissue multiphoton imaging in vivo,” Nat. Methods, 18 1259 –1264 https://doi.org/10.1038/s41592-021-01279-0 1548-7091 (2021). Google Scholar

47. 

T. J. Gould et al., “Adaptive optics enables 3D STED microscopy in aberrating specimens,” Opt. Express, 20 20998 –21009 https://doi.org/10.1364/OE.20.020998 OPEXFF 1094-4087 (2012). Google Scholar

48. 

D. Sinefeld et al., “Three-photon adaptive optics for mouse brain imaging,” Front. Neurosci., 16 1 –10 https://doi.org/10.3389/fnins.2022.880859 1662-453X (2022). Google Scholar

49. 

M. J. Booth and B. R. Patton, “Adaptive optics for fluorescence microscopy,” Fluorescence Microscopy, 15 –33, Academic Press (2014). Google Scholar

50. 

Z. Qin et al., “Deep tissue multi-photon imaging using adaptive optics with direct focus sensing and shaping,” Nat. Biotechnol., 40 1663 –1671 https://doi.org/10.1038/s41587-022-01343-w NABIF9 1087-0156 (2022). Google Scholar

51. 

M. Schwertner et al., “Measurement of specimen-induced aberrations of biological samples using phase stepping interferometry,” J. Microsc., 213 11 –19 https://doi.org/10.1111/j.1365-2818.2004.01267.x JMICAR 0022-2720 (2004). Google Scholar

52. 

X. Hao et al., “Aberrations in 4Pi microscopy,” Opt. Express, 25 14049 –14058 https://doi.org/10.1364/OE.25.014049 OPEXFF 1094-4087 (2017). Google Scholar

53. 

S. A. Rahman et al., “Adaptive optics for high-resolution microscopy: wave front sensing using back scattered light,” Proc. SPIE, 8253 82530I https://doi.org/10.1117/12.909845 PSISDG 0277-786X (2018). Google Scholar

54. 

N. Ji, D. E. Milkie and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods, 7 141 –147 https://doi.org/10.1038/nmeth.1411 1548-7091 (2010). Google Scholar

55. 

N. Ji, T. R. Sato and E. Betzig, “Characterization and adaptive optical correction of aberrations during in vivo imaging in the mouse cortex,” Proc. Natl. Acad. Sci. U. S. A., 109 22 –27 https://doi.org/10.1073/pnas.1109202108 (2012). Google Scholar

56. 

D. Sinefeld et al., “Adaptive optics in multiphoton microscopy: comparison of two, three and four photon fluorescence,” Opt. Express, 23 31472 –31483 https://doi.org/10.1364/OE.23.031472 OPEXFF 1094-4087 (2015). Google Scholar

57. 

X. Tao et al., “A three-photon microscope with adaptive optics for deep-tissue in vivo structural and functional brain imaging,” Neural Imaging Sens., 10051 100510R https://doi.org/10.1117/12.2253922 (2017). Google Scholar

58. 

O. Katz et al., “Controlling light in complex media beyond the acoustic diffraction-limit using the acousto-optic transmission matrix,” Nat. Commun., 10 717 https://doi.org/10.1038/s41467-019-08583-6 NCAOBW 2041-1723 (2019). Google Scholar

59. 

B. Rauer et al., “Scattering correcting wavefront shaping for three-photon microscopy,” Opt. Lett., 47 6233 –6236 https://doi.org/10.1364/OL.468834 OPLEDP 0146-9592 (2022). Google Scholar

60. 

I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett., 32 2309 –2311 https://doi.org/10.1364/OL.32.002309 OPLEDP 0146-9592 (2007). Google Scholar

61. 

I. M. Vellekoop, A. Lagendijk and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics, 4 320 –322 https://doi.org/10.1038/nphoton.2010.3 NPAHBY 1749-4885 (2010). Google Scholar

62. 

A. P. Mosk et al., “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics, 6 283 –292 https://doi.org/10.1038/nphoton.2012.88 NPAHBY 1749-4885 (2012). Google Scholar

63. 

J. Dong, F. Krzakala and S. Gigan, “Spectral method for multiplexed phase retrieval and application in optical imaging in complex media,” in ICASSP, IEEE Int. Conf. Acoust., Speech and Signal Process. - Proc., 4963 –4967 (2019). https://doi.org/10.1109/ICASSP.2019.8682329 Google Scholar

64. 

C. Berlage et al., “Deep tissue scattering compensation with three-photon F-SHARP,” Optica, 8 1613 –1619 https://doi.org/10.1364/OPTICA.440279 (2021). Google Scholar

65. 

M. A. May et al., “Fast holographic scattering compensation for deep tissue biological imaging,” Nat. Commun., 12 4340 https://doi.org/10.1038/s41467-021-24666-9 NCAOBW 2041-1723 (2021). Google Scholar

66. 

P. Pozzi et al., “Scattering compensation for deep brain microscopy: the long road to get proper images,” Front. Phys., 8 26 https://doi.org/10.3389/fphy.2020.00026 (2020). Google Scholar

67. 

I. N. Papadopoulos et al., “Scattering compensation by focus scanning holographic aberration probing (F-SHARP),” Nat. Photonics, 11 116 –123 https://doi.org/10.1038/nphoton.2016.252 NPAHBY 1749-4885 (2017). Google Scholar

68. 

I. Freund, M. Rosenbluh and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett., 61 2328 –2331 https://doi.org/10.1103/PhysRevLett.61.2328 PRLTAO 0031-9007 (1988). Google Scholar

69. 

P. Arjmand et al., “Three-dimensional broadband light beam manipulation in forward scattering samples,” Opt. Express, 29 6563 –6581 https://doi.org/10.1364/OE.412640 OPEXFF 1094-4087 (2021). Google Scholar

70. 

L. Zhu et al., “Chromato-axial memory effect through a forward-scattering slab,” Optica, 7 338 –345 https://doi.org/10.1364/OPTICA.382209 (2020). Google Scholar

71. 

B. Judkewitz et al., “Translation correlations in anisotropically scattering media,” Nat. Phys., 11 684 –689 https://doi.org/10.1038/nphys3373 NPAHAX 1745-2473 (2015). Google Scholar

72. 

H. Yılmaz et al., “Customizing the angular memory effect for scattering media,” Phys. Rev. X, 11 031010 https://doi.org/10.1103/PhysRevX.11.031010 (2021). Google Scholar

73. 

S. Schott et al., “Characterization of the angular memory effect of scattered light in biological tissues,” Opt. Express, 23 13505 –13516 https://doi.org/10.1364/OE.23.013505 OPEXFF 1094-4087 (2015). Google Scholar

74. 

G. Osnabrugge et al., “Generalized optical memory effect,” Optica, 4 886 https://doi.org/10.1364/OPTICA.4.000886 (2017). Google Scholar

75. 

K. M. Hampson et al., “Closed-loop multiconjugate adaptive optics for microscopy,” Proc. SPIE, 11248 1124809 https://doi.org/10.1117/12.2544391 PSISDG 0277-786X (2020). Google Scholar

76. 

I. N. Papadopoulos et al., “Dynamic conjugate F-SHARP microscopy,” Light Sci. Appl., 9 110 https://doi.org/10.1038/s41377-020-00348-x (2020). Google Scholar

77. 

X. Zhang et al., “High-resolution mapping of brain vasculature and its impairment in the hippocampus of Alzheimer’s disease mice,” Natl. Sci. Rev., 6 1223 –1238 https://doi.org/10.1093/nsr/nwz124 (2019). Google Scholar

78. 

X. Deng and M. Gu, “Penetration depth of single-, two-, and three-photon fluorescence microscopic imaging through human cortex structures: Monte Carlo simulation,” Appl. Opt., 42 3321 –3329 https://doi.org/10.1364/AO.42.003321 APOPAI 0003-6935 (2003). Google Scholar

79. 

C. Nicholson, “Diffusion and related transport mechanisms in brain tissue,” Rep. Prog. Phys., 64 815 https://doi.org/10.1088/0034-4885/64/7/202 RPPHAG 0034-4885 (2001). Google Scholar

80. 

E. Z. Chong et al., “Fast multiplane functional imaging combining acousto-optic switching and remote focusing,” Proc. SPIE, 10070 100700X https://doi.org/10.1117/12.2251476 PSISDG 0277-786X (2017). Google Scholar

81. 

B. Blochet et al., “Fast wavefront shaping for two-photon brain imaging with multipatch correction,” Proc. Natl. Acad. Sci., 120 (51), e2305593120 (2023). Google Scholar

82. 

W. Akemann et al., “Fast optical recording of neuronal activity by three-dimensional custom-access serial holography,” Nat. Methods, 19 100 –110 https://doi.org/10.1038/s41592-021-01329-7 1548-7091 (2022). Google Scholar

83. 

R. Salomé et al., “Ultrafast random-access scanning in two-photon microscopy using acousto-optic deflectors,” J. Neurosci. Methods, 154 161 –174 https://doi.org/10.1016/j.jneumeth.2005.12.010 JNMEDT 0165-0270 (2006). Google Scholar

84. 

Y. Otsu et al., “Optical monitoring of neuronal activity at high frame rate with a digital random-access multiphoton (RAMP) microscope,” J. Neurosci. Methods, 173 259 –270 https://doi.org/10.1016/j.jneumeth.2008.06.015 JNMEDT 0165-0270 (2008). Google Scholar

85. 

H. B. De Aguiar, S. Gigan and S. Brasselet, “Enhanced nonlinear imaging through scattering media using transmission-matrix-based wave-front shaping,” Phys. Rev. A, 94 043830 https://doi.org/10.1103/PhysRevA.94.043830 (2016). Google Scholar

86. 

C. Gentner et al., “Compressive Raman microspectroscopy parallelized by single-photon avalanche diode arrays,” (2023). Google Scholar

87. 

F. Soldevila et al., “Fast compressive Raman bio-imaging via matrix completion,” Optica, 6 341 –346 https://doi.org/10.1364/OPTICA.6.000341 (2019). Google Scholar

88. 

B. Sturm et al., “High-sensitivity high-speed compressive spectrometer for Raman imaging,” ACS Photonics, 6 1409 –1415 https://doi.org/10.1021/acsphotonics.8b01643 (2019). Google Scholar

89. 

C. Zheng et al., “De-scattering with excitation patterning enables rapid wide-field imaging through scattering media,” Sci. Adv., 7 eaay5496 https://doi.org/10.1126/sciadv.aay5496 STAMCV 1468-6996 (2021). Google Scholar

90. 

R. Horstmeyer, H. Ruan and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics, 9 563 –571 https://doi.org/10.1038/nphoton.2015.140 NPAHBY 1749-4885 (2015). Google Scholar

91. 

Z. Liu et al., “Sustained deep-tissue voltage recording using a fast indicator evolved for two-photon microscopy,” Cell, 185 3408 –3425.e29 https://doi.org/10.1016/j.cell.2022.07.013 CELLB5 0092-8674 (2022). Google Scholar

92. 

V. Villette et al., “Ultrafast two-photon imaging of a high-gain voltage indicator in awake behaving mice,” Cell, 179 1590 –1608.e23 https://doi.org/10.1016/j.cell.2019.11.004 CELLB5 0092-8674 (2019). Google Scholar

93. 

R. R. Sims et al., “Scanless two-photon voltage imaging,” Res. Sq., 24 rs.3.rs-2412371 https://doi.org/10.21203/rs.3.rs-2412371/v1 (2023). Google Scholar

94. 

B. Blochet et al., “Fast wavefront shaping for two-photon brain imaging with multipatch correction,” Proc. Natl. Acad. Sci. U. S. A., 120 e2305593120 https://doi.org/10.1073/pnas.2305593120 (2023). Google Scholar

95. 

H. Zhuang et al., “High speed color imaging through scattering media with a large field of view,” Sci. Rep., 6 32696 https://doi.org/10.1038/srep32696 SRCEC3 2045-2322 (2016). Google Scholar

96. 

S. Popoff et al., “Image transmission through an opaque material,” Nat. Commun., 1 81 https://doi.org/10.1038/ncomms1078 NCAOBW 2041-1723 (2010). Google Scholar

97. 

C. Moretti and S. Gigan, “Readout of fluorescence functional signals through highly scattering tissue,” Nat. Photonics, 14 361 –364 https://doi.org/10.1038/s41566-020-0612-2 NPAHBY 1749-4885 (2020). Google Scholar

98. 

F. Soldevila et al., “Functional imaging through scattering medium via fluorescence speckle demixing and localization,” Opt. Express, 31 21107 –21117 https://doi.org/10.1364/OE.487768 OPEXFF 1094-4087 (2023). Google Scholar

99. 

C. V. Rimoli et al., “Demixing fluorescence time traces transmitted by multimode fibers,” (2023). Google Scholar

100. 

O. Katz, E. Small and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics, 6 549 –553 https://doi.org/10.1038/nphoton.2012.150 NPAHBY 1749-4885 (2012). Google Scholar

101. 

A. d’Arco et al., “Physics-based neural network for non-invasive control of coherent light in scattering media,” Opt. Express, 30 30845 –30856 https://doi.org/10.1364/OE.465702 OPEXFF 1094-4087 (2022). Google Scholar

102. 

L. Zhu et al., “Large field-of-view non-invasive imaging through scattering layers using fluctuating random illumination,” Nat. Commun., 13 1447 https://doi.org/10.1038/s41467-022-29166-y NCAOBW 2041-1723 (2022). Google Scholar

103. 

Y. Zhang et al., “Deep imaging inside scattering media through virtual spatiotemporal wavefront shaping,” (2023). Google Scholar

104. 

Y. Li, Y. Xue and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica, 5 1181 –1190 https://doi.org/10.1364/OPTICA.5.001181 (2018). Google Scholar

105. 

N. Antipa et al., “DiffuserCam: lensless single-exposure 3D imaging,” Optica, 5 1 –9 https://doi.org/10.1364/OPTICA.5.000001 (2018). Google Scholar

106. 

K. Monakhova et al., “Unrolled, model-based networks for lensless imaging,” in NeurIPS 2019 Workshop Solving Inverse Prob. with Deep Networks, (2019). Google Scholar

107. 

K. Monakhova et al., “Learned reconstructions for practical mask-based lensless imaging,” Opt. Express, 27 28075 –28090 https://doi.org/10.1364/OE.27.028075 OPEXFF 1094-4087 (2019). Google Scholar

108. 

J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications, Roberts and Company Publishers (2007). Google Scholar

109. 

J. W. Goodman, “Some fundamental properties of speckle,” JOSA, 66 1145 –1150 https://doi.org/10.1364/JOSA.66.001145 JSDKD3 (1976). Google Scholar

110. 

C. Moretti and S. Gigan, “Readout of fluorescence functional signals through highly scattering tissue,” Nat. Photonics, 14 (6), 361 –364 (2020). Google Scholar

111. 

J. D. Briers, “Laser speckle contrast imaging for measuring blood flow,” Opt. Appl., 37 (2007). Google Scholar

112. 

D. Carbonero et al., “Non-negative matrix factorization for analyzing state dependent neuronal network dynamics in calcium recordings,” (2023). Google Scholar

113. 

J. Friedrich et al., “Fast constrained non-negative matrix factorization for whole-brain calcium imaging data,” in NIPS Workshop Stat. Methods for Understanding Neural Syst., (2015). Google Scholar

114. 

M. M. Qureshi et al., “Quantitative blood flow estimation in vivo by optical speckle image velocimetry,” Optica, 8 1092 –1101 https://doi.org/10.1364/OPTICA.422871 (2021). Google Scholar

115. 

E. A. Pnevmatikakis et al., “Simultaneous denoising, deconvolution, and demixing of calcium imaging data,” Neuron, 89 285 –299 https://doi.org/10.1016/j.neuron.2015.11.037 NERNET 0896-6273 (2016). Google Scholar

116. 

X. Li et al., “Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising,” Nat. Methods, 18 1395 –1400 https://doi.org/10.1038/s41592-021-01225-0 1548-7091 (2021). Google Scholar

117. 

X. Li et al., “Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit,” Nat. Biotechnol., 41 282 –292 https://doi.org/10.1038/s41587-022-01450-8 NABIF9 1087-0156 (2023). Google Scholar

118. 

Y. Zhang et al., “Rapid detection of neurons in widefield calcium imaging datasets after training with synthetic data,” Nat. Methods, 20 747 –754 https://doi.org/10.1038/s41592-023-01838-7 1548-7091 (2023). Google Scholar

119. 

P. Zhou et al., “Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data,” Elife, 7 e28728 https://doi.org/10.7554/eLife.28728 (2018). Google Scholar

120. 


Biography

Fei Xia received her PhD in 2021 from Cornell University, focusing on in vivo deep tissue microscopy. She is now a postdoctoral researcher at the École Normale Supérieure and the French National Centre for Scientific Research in Paris, France, working on computational microscopy and computational optics for machine learning. Her primary research interest lies in developing enhanced optical tools for biomedicine.

Caio Vaz Rimoli holds a PhD in biophysics from the Institut Fresnel (Marseille, France). After a postdoc with S. Gigan and C. Ventalon at the Laboratoire Kastler Brossel and IBENS (École Normale Supérieure, Paris), he obtained a tenured research engineer position at INSERM in 2024, joining the new SAIRPICO project team led by Ludger Johannes and Charles Kervrann at the Institut Curie in Paris, where he works on advanced bioimaging methods.

Walther Akemann earned his PhD from Heinrich Heine University. He completed postdocs at the Helmholtz Association, CEA Saclay, and Université Paris-Sud. He then worked as a staff scientist at the RIKEN Brain Science Institute with Thomas Knöpfel, contributing to pioneering methods for measuring neuronal activity. Currently, he is a postdoc with Laurent Bourdieu and Sylvain Gigan at the École Normale Supérieure in Paris, focusing on advancing in vivo activity recording in 3D cortical networks.

Cathie Ventalon was originally trained in optics. During her postdoc at Boston University, she developed fluorescence imaging methods with optical sectioning. In 2007, she joined the Emiliani lab at Paris Descartes University as a CNRS researcher, focusing on combining fluorescence imaging with spatially selective photoactivation. In 2014, she joined the Bourdieu team at the Institute of Biology of ENS (IBENS), where she develops optical methods for recording and manipulating neuronal activity in unrestrained rodents.

Laurent Bourdieu is head of the “Cortical Dynamics and Coding Mechanisms” team at IBENS, Paris. He was trained in physics at the École Normale Supérieure (ENS). After his PhD at the Institut Curie and his postdoc at Princeton University, he joined the CNRS in Strasbourg to work in the field of biophysics. In 2004, he moved to ENS to study the cortical coding of sensory information and its modulation by top-down processes, using advanced non-linear microscopy.

Sylvain Gigan earned his PhD in physics in 2004 from Université Pierre et Marie Curie in Paris, France, on quantum imaging. He later joined ESPCI ParisTech in Paris as an associate professor and began working on wavefront shaping in complex media at the Institut Langevin. Since 2014, he has been a professor at Sorbonne Université and PI of the Complex Media Optics Lab within the Kastler Brossel Laboratory. He is broadly interested in computational imaging, optical computing, and quantum optics in complex media.

Hilton B. de Aguiar holds a PhD from EPFL in Lausanne, Switzerland, obtained jointly with the Max Planck Institute in Stuttgart, Germany. After holding a junior research chair position at the École Normale Supérieure in Paris, he is currently a CNRS researcher in the Complex Media Optics Lab at the Laboratoire Kastler Brossel. He develops new imaging schemes exploiting vibrational contrast mechanisms, complex-media photonics, and computational tools.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Fei Xia, Caio Vaz Rimoli, Walther Akemann, Cathie Ventalon, Laurent Bourdieu, Sylvain Gigan, and Hilton B. de Aguiar "Neurophotonics beyond the surface: unmasking the brain’s complexity exploiting optical scattering," Neurophotonics 11(S1), S11510 (12 April 2024). https://doi.org/10.1117/1.NPh.11.S1.S11510
Received: 8 November 2023; Accepted: 14 March 2024; Published: 12 April 2024
Keywords: Brain, Neurophotonics, Wavefronts, Light scattering, Biological imaging, Neuroimaging, Geometrical optics
