Since their conception as Diffractive Deep Neural Networks (D2NNs) by Lin et al. in 2018, the field of optical information processing with sub-wavelength patterned thin scatterers has attracted significant attention: it not only leverages the inherent advantages of optical signals, such as parallelism and high speed, but also allows optical information to be processed conveniently in its native domain. Nanoprinting such diffractive networks with two-photon polymerization is of particular interest, as it enables the fabrication of optical inference systems with record-high neuron densities, targeted at the VIS/NIR wavelength regime and fabricated directly on commercial CMOS imaging sensors. Current methods for in silico training of these diffractive neural networks employ numerical models of ‘masked’ detectors, in which only certain areas of the network’s output plane are compared for performance evaluation while the remainder of the detector area is ignored. Such training methods have the advantage of straightforward numerical implementation. However, for fabricated devices, where typically the whole output plane is captured during data acquisition, they can lead to significantly reduced performance: strong background signals appear in the areas that were ‘masked off’ during training, making the readout susceptible to errors in aligning the distinct detector areas. In this contribution we critically discuss training methods for diffractive neural networks that consider the whole output plane of the network. These methods achieve low background noise, and hence high detector-to-background contrast, for 3D-nanoprinted diffractive neural networks, with increased experimental robustness due to reduced susceptibility to detector-alignment errors.
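The distinction between masked-detector training and whole-plane training can be sketched as two loss functions over the simulated output intensity. The layout below (an 8×8 plane with two 2×2 detector regions) and all parameter values are hypothetical, chosen only to illustrate the idea: the masked loss scores only the detector regions, while the whole-plane loss also penalizes background energy outside them.

```python
import numpy as np

def masked_loss(output_intensity, masks, label):
    """Loss comparing only the designated detector regions; everything
    outside the masks is ignored during training (the ‘masked’ approach)."""
    scores = np.array([output_intensity[m].sum() for m in masks])
    # softmax cross-entropy over per-class detector energies
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return -np.log(p[label])

def whole_plane_loss(output_intensity, target_plane):
    """Loss over the full output plane: background energy outside the
    target detector is penalized, raising detector-to-background contrast."""
    return np.mean((output_intensity - target_plane) ** 2)

# toy 8x8 output plane with two 2x2 detector regions (hypothetical layout)
rng = np.random.default_rng(0)
out = rng.random((8, 8))
m0 = np.zeros((8, 8), bool); m0[1:3, 1:3] = True
m1 = np.zeros((8, 8), bool); m1[5:7, 5:7] = True
target = np.zeros((8, 8)); target[m0] = 1.0  # class-0 ground truth pattern

print(masked_loss(out, [m0, m1], label=0))
print(whole_plane_loss(out, target))
```

A gradient-based optimizer minimizing the whole-plane loss drives the off-detector intensity toward zero, which is exactly the background suppression the abstract argues for.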
In this work we investigate the integration of optical and electronic hardware to create ML and DL accelerators with reduced computational complexity and energy consumption. We compare the performance of optical and hybrid optical-electronic neural networks and verify that these computing architectures can perform classification tasks with performance comparable to standard electronic neural networks while reducing computational resource usage by up to a factor of 10.
KEYWORDS: Neural networks, Neurons, Visualization, Information visualization, Machine learning, Data processing, Signal processing, Nanolithography, Printing, 3D modeling
We numerically investigate the performance of optical implementations of deep neural networks for complex-field data processing in the form of multi-layer nanoscale diffractive neural networks trained to perform image classification tasks. We discuss parameter optimization and the limitations that fabrication errors place on the performance of such direct phase-retrieval systems. The diffractive neural networks studied here may have a transformative impact on adaptive optics, data processing, and sensing, and may be crucial for the development of robust and generalized quantitative phase imaging methods.
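The forward model behind such multi-layer diffractive networks is commonly a stack of thin phase masks with free-space propagation between them. A minimal sketch of that pipeline, using the standard angular spectrum method, is shown below; the grid size, layer spacing, pixel pitch, and wavelength are illustrative assumptions, not parameters from this work.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a complex field over distance z in free space
    using the angular spectrum (transfer-function) method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    H = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def forward(field, phase_layers, dx, wavelength, z):
    """Pass a complex input field through a stack of thin phase masks
    (one per diffractive layer), propagating between consecutive layers."""
    for phi in phase_layers:
        field = field * np.exp(1j * phi)   # thin-element phase modulation
        field = angular_spectrum_propagate(field, dx, wavelength, z)
    return np.abs(field) ** 2              # detector records intensity

# toy example: 2 random phase layers on a 64x64 grid (hypothetical values)
rng = np.random.default_rng(1)
layers = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(2)]
inp = np.ones((64, 64), complex)
I = forward(inp, layers, dx=0.5e-6, wavelength=632.8e-9, z=50e-6)
print(I.shape)  # → (64, 64)
```

In training, the phase values of each layer are the free parameters: one differentiates a loss on the output intensity with respect to them, which is why fabrication errors in the printed phase profiles translate directly into performance degradation.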