Neural networks have provided faster and more straightforward solutions for laser modulation. However, their effectiveness remains fragile when facing diverse structured lights and varying output resolutions, because of their specialized end-to-end training and static models. Here, we propose a redefinable neural network (RediNet), realizing customized modulation on diverse structured light arrays through a single general approach. The network input format features a redefinable dimension designation, which ensures RediNet's wide applicability and removes the burden of processing pixel-wise light distributions. The prowess of originally generating arbitrary-resolution holograms with a fixed network is first demonstrated. The versatility is showcased in the generation of 2D/3D foci arrays, Bessel and Airy beam arrays, (perfect) vortex beam arrays, and even snowflake-intensity arrays with arbitrarily built phase functions. A standout application is producing multichannel compound vortex beams, where RediNet empowers a spatial light modulator (SLM) to offer comprehensive multiplexing functionalities for free-space optical communication. Moreover, RediNet has the hitherto highest efficiency, consuming only 12 ms (faster than the mainstream SLM framerate of 60 Hz) for a single hologram.
1. Introduction

Structured light,1 as a huge category of inhomogeneous electromagnetic waves with exceptional physical interpretation and tailored field distribution, has profoundly advanced the frontiers of optical technology and fueled applications from imaging,2,3 microscopy,4–6 communication,7–9 and quantum information10 to nanomanufacturing.11 Concurrently, beam parallelization technology11,12 brings the possibility of tunably splitting lasers into multiple identical or distinct subbeams via programmable devices, such as the spatial light modulator (SLM).13,14 The merging of these two fundamental technologies holds tremendous potential, enabling the generation of parallel structured light arrays with customizable positions, energy proportions, and varied optical properties. The positive outlook of this synergy lies in the fact that flexible structured light arrays pave the way for massive information throughput, higher multiplexing dimensions in communication, and multivoxel processing with a single exposure. In the quest to engineer fully controllable structured light arrays without compromising the intrinsic properties of individual beams, phase modulation is widely adopted, whose competence depends on the design of the phase hologram. Researchers have devised numerous phase-hologram generation strategies. Starting with the generation of a simple three-dimensional (3D) focus array, there are the weighted Gerchberg–Saxton,15 the 3D iterative Fourier transform (3DIFTA),16 and the spherical aberration-compensated automatic differentiation (SACAD)17 algorithms, achieving excellent performance close to theoretical optima.
For nondiffracting beam arrays, the periodic arrangement of specific phase patterns or nanostructures has emerged as a straightforward method.18,19 Vortex beam arrays with varied orbital angular momenta (OAMs) and spatial configurations can be produced through encoded holographic gratings20 or by subtly overlaying multiple spiral phase patterns.21 These methods commonly take the pixel-wise or beam-wise distribution as the target and contain a back-and-forth iterative process to minimize the error. Nevertheless, this kind of classic scheme has difficulty converging correctly when facing a 3D target or the complex-amplitude target introduced by a structured light distribution. Burdensome loss-function setup and a priori regularization undermine the simplicity as well. Moreover, the speed of the closed-loop iterative optimization in these algorithms presents a bottleneck for real-time deployment and a long-standing challenge that remains to be addressed. On that account, neural networks have drawn massive attention in pixel-wise image tasks due to the prosperity of hardware and algorithms in recent years.22,23 With ingenious architectures filled with an enormous quantity of parameters, neural networks provide rapid solutions for planar and volumetric phase retrieval,24,25 lensless imaging,26 and atmospheric-turbulence adaptive correction.27 Their remarkable efficiency and accuracy originate from direct end-to-end training and a one-way computation flow. However, the reliance on data-based training and application-specific network topology is also what limits their universality. A neural network is likely to collapse when confronted with obvious variations in the input, the output, or even a different resolution. A single neural network that can undertake the modulation tasks in diversiform structured light array applications is still lacking.
To this end, in this work, we propose a novel and efficient redefinable neural network architecture, termed RediNet, which transcends the dilemmas of established neural network models and fulfills the speed, flexibility, and resolution requirements simultaneously. Facing different target structured light array species [Fig. 1(a)], our strategy seeks and exploits the sparsity in the arrayed light field, unifying numerous optical properties in a general framework rather than training a dedicated network for every kind of target [Fig. 1(b)]. We also eschew the traditional method of targeting the pixelated complex amplitude. Instead, using analytic characteristic phase functions (CPFs), a parameter space [Fig. 1(c)] is built whose dimension designations can be flexibly redefined. A concise neural network is trained as an inverse-problem solver for the primitive function, taking the preset parameter space as the Fourier-series coefficients. Next, mapping the CPFs into the primitive function, a computer-generated hologram (CGH) for arbitrary structured light parallelization is generated, covering the abilities of multiple neural networks and other algorithms [Fig. 1(d)]. We demonstrate that parallel arrays containing almost all kinds of structured light that can be generated with a phase CGH have been realized, without any iterative optimization or retraining of the network. Specifically, we construct a simple experimental setup, generating diverse CGHs to produce 2D and 3D focal arrays, Bessel and Airy beam arrays, Laguerre–Gaussian (LG) mode beam arrays, vortex and perfect vortex beam arrays, and snowflake-intensity arrays with arbitrarily built phase functions. Because of the lightweight model (about 22.4 million network parameters), the neural network finishes a prediction in 2.9 ms, while also allowing for post hoc determination of the CGH resolution (taking a total of 6.7 ms and 12 ms for the lower- and higher-resolution CGHs tested, respectively).
We believe that the fine resolution, high speed, and unprecedented universality of RediNet offer a practical solution for the design of structured light arrays in next-generation optical communication, parallel laser direct writing, optical traps, and so on.

2. Method

2.1. Architecture of RediNet

The structure of RediNet is shown in Fig. 2. The typical neural networks employed in phase modulation regard the pixel-wise light distribution and the corresponding pixel-wise CGH as the input and output. Although effective and straightforward, this direct approach often encourages an oversized and overspecialized network.24,25 To circumvent this issue, we propose an indirect three-step procedure to isolate the variation of structured light species and resolution. The neural network serves as the computation kernel [Fig. 2(b)], while the parameter space acquisition [Fig. 2(a)] and primitive-function mapping [Fig. 2(c)] are externalized as pre- and postprocessing. In the first step, the preprocessing determines the CPFs from the properties of the structured light and constructs a uniform parameter space. A CPF is essentially an exact phase distribution for producing a structured light beam with a single optical property, as listed in Fig. 2(a). Most structured lights have corresponding analytic CPFs, which are linear in the modulation parameters. In the examples shown in Fig. 2(a), the CPF for direction shifting is the blazed grating phase $2\pi(ux + vy)$, that for the vortex beam with topological charge (TC) $l$ is the spiral phase $l\theta$, and that for the Airy beam with modulation parameter $\alpha$ is the cubic phase $\alpha(x^3 + y^3)$, where $x$, $y$, $r$, and $\theta$ are the Cartesian or polar coordinates. Beyond these cases, some important CPFs defy this linear property, but their first-order approximations can serve as acceptable CPFs. For instance, when the axial defocusing amount $\Delta z$ is negligible relative to the focal length $f$, a proper defocusing CPF approximation is $-\pi \Delta z\, r^2 / (\lambda f^2)$.
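As a concrete illustration, the example CPFs above can be evaluated on a pixel grid. This is a minimal sketch; the grid size, normalization, and parameter names (u, v, l, alpha) are our own illustrative choices, not the authors' code.

```python
import numpy as np

def cpf_examples(n=64):
    """Evaluate example characteristic phase functions (CPFs) on an n x n grid.
    Parameter names (u, v, l, alpha) are illustrative, not from the paper's code."""
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    theta = np.arctan2(y, x)                           # polar angle
    blazed = lambda u, v: 2 * np.pi * (u * x + v * y)  # direction shifting
    spiral = lambda l: l * theta                       # vortex beam with TC l
    cubic = lambda alpha: alpha * (x**3 + y**3)        # Airy beam
    return blazed, spiral, cubic

blazed, spiral, cubic = cpf_examples()
```

Each CPF is linear in its modulation parameter, which is the property the parameter-space construction relies on.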
More examples of CPFs and corresponding parameters are listed in the Supplementary Material. The linear modulation parameters introduced above are the key elements controlling the structured light properties (e.g., altering the TC changes the OAM carried by a vortex beam). Based on them, describing the structured light array using CPFs and modulation parameters is similar to describing several vectors in a multidimensional linear space with a set of basis vectors and coordinates. In this concept, we can build the multidimensional parameter space, in which the “coordinates” of the structured light array are stored; each dimension represents an independent modulation parameter and is intrinsically linked to the corresponding CPF. In addition, the dimensions in the parameter space have equal standing and are redefinable. Typically, this parameter space is more compact than the complex-amplitude profile on a pixel-wise basis. For visualization clarity, our exploration is confined to the 3D parameter space, depicted as the left cube in Fig. 2(b). Nonetheless, this framework is scalable and naturally adaptable to parameter spaces of higher or lower dimensions. The second step focuses on harnessing the neural network to calculate an effective primitive function. As described above, the CPFs and the parameter space are sufficient to determine a certain structured light array, but the exact solution for the CGH is still elusive. Here, we present a mathematical framework to link the target parameter space and the CGH (see the Supplementary Material for a more detailed description).
Like the analysis of the Dammann grating,28 the discrete and periodic nature of the array enables us to represent the CGH as a weighted summation of the complex amplitudes of the CPFs' multiples,

$$e^{i\varphi(x,y)} = \sum_{n_1}\cdots\sum_{n_D} A_{n_1 \cdots n_D}\, \exp\!\Big(i\sum_{d=1}^{D} n_d\,\psi_d(x,y)\Big), \tag{1}$$

where $\varphi$ is the 2D phase distribution on the CGH, $D$ is the dimension of the parameter space ($D=3$ here), $n_d$ is the coordinate of the $d$th dimension in the parameter space, $\psi_d$ is the CPF of the $d$th dimension, and $i$ is the imaginary unit. In Eq. (1), $\exp(i\sum_d n_d\psi_d)$ is the modulated complex amplitude corresponding to each independent structured light defined in the parameter space. Crucially, the complex coefficient $A_{n_1\cdots n_D}$ is the weight of each structured light. Its modulus directly relates to the energy proportion of the beam with coordinates $(n_1,\dots,n_D)$ and is preset in the target parameter space. Equation (1) is noted to be very similar to a Fourier series. Assume there is a multidimensional primitive function $g(t_1,\dots,t_D)$, whose moduli all equal 1 and whose angle is $\Phi(t_1,\dots,t_D)$. The value of $\Phi$ is subtly arranged to meet the criterion that the Fourier-series coefficients of $g$ share the same moduli with $A$. Then, the Fourier series of the primitive function becomes exactly the right side of Eq. (1) when we consider $t_d$ as $\psi_d(x,y)$, and it indicates

$$\varphi(x,y) = \Phi\big(\psi_1(x,y), \dots, \psi_D(x,y)\big). \tag{2}$$

In this way, $\varphi$ is converted into a composite-function form. The inner functions are the CPFs and the outer one is the primitive function $\Phi$. In the following, our claims about the primitive function refer only to the angle part $\Phi$ for simplicity. To determine the values in $\Phi$, we introduce a neural network, illustrated in Fig. 2(b), where the input and output are shown with particular emphasis. The input of the network is the aforementioned 3D parameter space, with the same eight-pixel resolution in every dimension. The output is the primitive function with 64-pixel resolution in every dimension. The network adopts the well-established U-net structure29 and integrates resblocks.30 The U-net framework features an encoder-decoder configuration and contains skip connections that simultaneously capture both overall trends and local details.
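The composite-function mapping of Eq. (2) amounts to a lookup: each CGH pixel takes the primitive-function value at the coordinates given by its CPF values. The following is a minimal nearest-neighbor sketch under our own assumptions (a 2π period per dimension and illustrative array shapes), not the authors' implementation.

```python
import numpy as np

def map_cgh(phi_vol, cpfs):
    """Map a 3D primitive-function phase volume to a 2D CGH (a sketch of Eq. (2)).
    phi_vol: (n, n, n) phase samples covering one 2*pi period per dimension.
    cpfs: three (H, W) CPF arrays evaluated on the desired CGH pixel grid."""
    n = phi_vol.shape[0]
    # wrap each CPF value into [0, 2*pi) and convert to a nearest sample index
    idx = [np.minimum((np.mod(c, 2 * np.pi) / (2 * np.pi) * n).astype(int), n - 1)
           for c in cpfs]
    return phi_vol[idx[0], idx[1], idx[2]]

# The CGH resolution follows the CPF grids, not the network output resolution.
phi_vol = 2 * np.pi * np.random.rand(64, 64, 64)
yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
cgh = map_cgh(phi_vol, [10 * np.pi * xx, 10 * np.pi * yy, 3 * np.arctan2(yy, xx)])
```

Note that a 256 x 256 CGH is obtained from a 64-sample-per-dimension primitive function simply by evaluating the CPFs on a finer grid.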
The loss function in training is first the classical L2 loss and then shifts to a periodicity-aware loss evaluated on the wrapped difference between the predicted value of the network and the ground truth. The latter loss function can properly evaluate periodic phase values. Even though common 3D convolutional networks are parameter-dense, RediNet is exceptionally lightweight, consisting of a mere 22.4 million network parameters. This conciseness translates to a swift prediction time, typically under 3 ms on a consumer-grade graphics processor. Details of the neural network structure and training can be found in the Supplementary Material. In the third step, as shown in Fig. 2(c), the postprocessing maps a 2D CGH from the 3D primitive function output by the network. For each pixel, the mapping procedure is the same, similar to a composite-function evaluation. For a pixel $(x, y)$, we compute the values of the three CPFs at that pixel. Taking them as 3D coordinates, we evaluate the CGH phase at that pixel as instructed in Eq. (2). Repeating this procedure for every pixel yields the whole CGH.

2.2. Generation of the 3D Primitive Function Data Set

Network training demands a data set containing 3D Fourier-series coefficients and the corresponding primitive functions, but there is no suitable data set, to our knowledge. Therefore, we employed an iterative algorithm to generate the data set, which consists of randomly distributed parameter spaces and the corresponding primitive functions as ground truth. Normally, nonlinear optimization is needed for this kind of ill-posed problem, so we devise a parallel iterative Fourier transform procedure. The constraints in the iteration contain two crucial criteria: the primitive function (i.e., the angle) needs to be real-valued, and the moduli of the Fourier-series coefficients need to be identical to the given parameter space. The details of this iteration algorithm can be found in the Supplementary Material.
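A minimal sketch of such a data-generation iteration, as we read it (not the authors' exact algorithm): alternate between enforcing a phase-only primitive function and imposing the target moduli on its low-order Fourier coefficients.

```python
import numpy as np

def make_pair(target_mod, iters=50, seed=0):
    """Generate one (parameter space, primitive function) training pair.
    target_mod: (m, m, m) nonnegative coefficient moduli (the parameter space).
    Returns the (n, n, n) real-valued primitive-function phase. Sketch only;
    sizes, iteration count, and coefficient indexing are illustrative."""
    rng = np.random.default_rng(seed)
    n, m = 64, target_mod.shape[0]
    g = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n, n)))  # phase-only start
    for _ in range(iters):
        C = np.fft.fftn(g) / g.size                        # Fourier coefficients
        # impose target moduli on the constrained (low-order) coefficients
        C[:m, :m, :m] = target_mod * np.exp(1j * np.angle(C[:m, :m, :m]))
        g = np.fft.ifftn(C) * g.size
        g = np.exp(1j * np.angle(g))                       # re-impose unit modulus
    return np.angle(g)

phase = make_pair(np.random.rand(8, 8, 8))
```

The two projection steps correspond directly to the two constraints named above: unit modulus in the primitive-function domain and target moduli in the coefficient domain.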
The generated data set is saved as 3D matrices in MAT files (with the commercial software MATLAB) and can be loaded and converted to tensors. Code for generating the data set and several pairs of data samples are available at https://github.com/LiHengyang1/RediNet.

2.3. Computational Environment

The calculation, including data set generation, training, prediction, CGH generation, and evaluation, is performed on a personal desktop with an Intel Core i7-13700 processor, 32 GB of memory, and an Nvidia RTX 3080 Ti graphics processor. Network training and prediction are based on PyTorch 1.12, and the pre- and postprocessing stages are performed in MATLAB R2023a. All the comparison studies are performed in the same computational environment.

2.4. Phase Modulation System

The laser holographic system consists of an Nd:YAG laser with a central wavelength of 1064 nm. The beam is expanded to 7.9 mm diameter and collimated. A phase-only reflective liquid crystal SLM (X13138, Hamamatsu) is utilized for the phase modulation. In the experiment, only the central circular portion of the SLM is used to modulate the wavefront, while the other pixels are set to a blazed grating. A phase CGH with 8-bit depth is loaded onto the SLM, corresponding to a phase modulation depth from 0 to $2\pi$. A 12-bit-depth camera is used to capture the intensity distribution. Pictures of the experimental setup are in the Supplementary Material. To determine the phase pattern on the focal plane, a reference beam is used for interference. By deliberately introducing an angle between the modulated beam and the reference beam, the phase distribution can be solved from a single fringe intensity picture. The detailed approach is given in the Supplementary Material.

3. Results

3.1. Customizing Structured Light Arrays with RediNet

RediNet is applied to the generation of myriad structured light arrays with diverse distributions.
We should underline that the network parameters of RediNet remain constant, and no iterative procedures are performed throughout the experiments. As mentioned above, the experimental setup is easily accessible, where a phase SLM is used as the only modulation device. First, we have generated focus arrays on the focal plane and in 3D space. As shown in Fig. 3(a), by utilizing only two dimensions of the parameter space as $x$- and $y$-direction shifting, a symmetric equal-energy four-foci array can be generated with an intensity root-mean-square error (RMSE) of 0.051. By additionally including the modulation of the axial defocusing in the third dimension of the parameter space, we can sculpt a 3D focus array with four layers, a total of 19 foci randomly distributed, and an intensity RMSE of 0.193. At this point, the dimensions in the parameter space exactly correspond to the physical space dimensions, so the intuitive graphical match between them is apparent in Fig. 3(a). As mentioned before, we used an approximate CPF for axial defocusing, so the positions of the four planes show a slight deviation along the optical axis. With the 400-mm-focal-length lens used in the experiments, the actual four planes lie slightly away from their targets of 24, 12, 0, and a fourth target depth; for instance, the plane targeted at 12 is measured at 12.4. In practical applications, this discrepancy is negligible when the defocusing magnitude is small, and it is virtually nonexistent in tight-focusing situations, as confirmed by the theoretical axial defocusing CPF.31 RediNet can produce parallel arrays of LG mode beams by imitating their phase distributions. The corresponding modulation phase patterns for different orders of LG modes are given in Fig. 3(b), encoded as a combination of spiral phase and radial step phase.
Since the radial step phase is difficult to express as a linear function of a parameter, all its inner phase rings are considered invariants, and the outermost ring is independently manipulated by one dimension in the parameter space whose values only "switch" between the binary numbers 0 and 1. Using the generated CGHs, a fundamental-mode Gaussian beam is sculpted into beam arrays with LG00, LG01, LG10, and LG11 modes and LG22, LG23, LG32, and LG33 modes at different positions. Researchers have experimentally produced a variety of nondiffracting beams whose transverse light field patterns remain almost unchanged after long-distance propagation.32 This unique feature has propelled a range of practical applications, from optical micromanipulation33 and laser drilling34 to light bullets.35 Using RediNet, we multiplexed three Bessel beams and three Airy beams, respectively. Three identical Bessel beams are generated with different propagation directions, as shown in Fig. 3(c). Three Airy beams are distributed at different positions, and the modulation parameter in the CPF takes three different values, which leads to observably different bending trajectories and directions. At the same time, a larger absolute value of the parameter contributes to the more pronounced sidelobes of the Airy beam that appear on the focal plane. The light field expression of a vortex beam has an $e^{il\theta}$ term, so it possesses a spiral phase pattern. Not limited to optical traps and microstructure fabrication, extensive applications could benefit from RediNet owing to its customized control of various vortex beam arrays. There are similar circular intensity patterns in Figs. 4(a) and 4(b), yet they emerge from different modulation strategies. In Fig. 4(a), we utilize a radially linear CPF (proportional to $r$), which mimics a conical lens, making the intensities on the focal plane transform into rings with radii proportional to the parameter, but without the spiral phase property.
In contrast, in Fig. 4(b), we used the angularly linear CPF $l\theta$ to obtain a vortex beam array. The smaller donut distributions on the diagonal and the larger ones on either side carry different TCs. The ring radius of a conventional vortex beam depends on the TC $l$, as can be observed in Fig. 4(b). To circumvent this dependence, the perfect vortex beam was introduced, whose ring radius is independent of the TC.36 It can be realized by overlaying spiral and conical phases, similar to combining the effects of Figs. 4(a) and 4(b). This implementation is realized in Fig. 4(c), where perfect vortex beams at different positions carry different TCs but have nearly identical ring radii and widths. Moreover, RediNet allows for independent modulation of the angular and radial CPFs at the same time. In Fig. 4(d), four perfect vortex beams are shown on the left side, with ring radii and TCs all differing from one another. On the right side, there are four concentric rings with TCs progressively changing as the ring radius increases. Dissimilar to the vortex beam, the helico-conical beam produces an unclosed helical intensity distribution on the focal plane.37 It is obtained by multiplying the radial and angular CPFs, leading to a nonseparable term. To deal with this special multiplicative relation, we take the cross-product term as a whole. In Fig. 4(e), an array of four helico-conical beams is realized, forming dual sets of unclosed helical trajectories surrounding each other. Arrays with a large number of structured light species have been enumerated in Figs. 3 and 4, and they basically all correspond to families of wave-equation solutions, so the CPFs and the formed structured light fields have exact physical meanings and properties. Beyond them, structured light can be a broader concept, not limited to physically grounded solutions. In its architectural essence, RediNet does not distinguish the physical meaning of structured lights.
Any CPFs are equally input and multiplexed with the same principles, and this arbitrariness is the quintessence of the network's redefinability. To verify this interesting capability, a snowflake beam array is generated. First, we arbitrarily build two CPFs and add direction shifting. Then, the phase CGH is generated in just the same way as for other commonly seen structured light arrays. The obtained intensity distribution is shown in Fig. 4(f), where different hexagonal periodic structures appear at different locations, similar to snowflakes with different structures. In this implementation, complicated patterns are controlled by adjusting only two parameters, indicating a vector-graph-like property whereby a small number of parameters is enough to control a huge number of pixels.

3.2. Modulating a Multichannel Compound Vortex Beam Array with RediNet

Recently, many studies have explored the possibilities of OAM in free-space optical communication.38,39 Because of their orthogonality, vortex beams with different OAMs can be used as separate signal channels or coding formats, or even both.40,41 Going even further, a compound vortex beam array is generated with RediNet, offering a novel example of concurrent control of spatial position, energy proportion, and multiple encoded OAM states. At the implementation level, previous OAM multiplexing experiments have split multiple vortex beams to discrete spatial positions42 or gathered them into a singular compound beam.43 The compound vortex beam array here is a much more integrated solution thanks to the versatility of RediNet. With only a phase SLM and a fundamental-mode Gaussian incidence, RediNet can produce a multichannel vortex beam array, with each channel simultaneously carrying the dual OAM states shown in Fig. 5(a). To realize the array in Fig. 5(b), we designate $x$- and $y$-shifting CPFs in two dimensions and the angular CPF in the third dimension of the parameter space.
Zero or two nonzero values are associated with each exact $(x, y)$ coordinate, which implies that zero or two vortex beams are generated at each exact position on the focal plane. In Fig. 5(b), randomly distributed eight-channel compound vortex beams are shown, with each channel simultaneously carrying a positive and a negative OAM state. This phenomenon finds its physical counterpart in multiposition vortex beam interference experiments. For instance, in Fig. 5(c), compound vortex beams with OAM states 3 and −4 interfere, resulting in seven petals corresponding to their TC gap. The experimental intensity and phase distributions are compared with the simulation results and are basically consistent. We added an overall conical phase to the CGH to achieve a compound perfect vortex beam array. As a result, uniformly sized ring radii are realized, potentially simplifying the engineering effort of coupling the beams into the same model of specially designed waveguides.44 It is also worth mentioning that RediNet takes roughly 12 ms to generate this phase CGH, which is faster than the mainstream SLM frame period (16.7 ms, i.e., 60 Hz). This efficiency is crucial for optical communication and other real-time scenarios, outperforming the iteration algorithms encumbered by their compute-store-use procedure.

3.3. Flexibility, Speed, and Accuracy of RediNet

A uniform and redefinable parameter space underpins the versatility of RediNet. Within conventional neural networks related to spatial 3D light fields, the $z$ axis (wave-vector direction) often plays a specialized role due to its connection with wave propagation characteristics. In contrast, RediNet's architecture endows each dimension with equal standing. This symmetry permits users to arbitrarily redefine the designation of each dimension, thereby designing the parameter space according to their own rules.
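In code, this equal standing means the parameter-space tensor can be permuted together with its axis designations without changing the encoded target; a toy check with numpy (shapes and designation names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
space = rng.random((8, 8, 8))                  # toy 8^3 parameter space
names = ["x-shift", "y-shift", "ring-radius"]  # hypothetical axis designations

perm = (2, 0, 1)                               # permute values AND names together
space_p = np.transpose(space, perm)
names_p = [names[i] for i in perm]

def lookup(sp, axis_names, coord):
    """Read a value by named coordinates, independent of the axis order."""
    return sp[tuple(coord[n] for n in axis_names)]

coord = {"x-shift": 2, "y-shift": 5, "ring-radius": 1}
same = lookup(space, names, coord) == lookup(space_p, names_p, coord)  # True
```

Permuting the values alone would scramble the value-parameter correspondence; permuting values and designations together preserves it, which is the symmetry exploited in the following example.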
Here, to verify this feature, we provide an illustrative example of exchanging the designation of dimensions [Fig. 6(a)]: first, we define the three dimensions as $x$ shifting, $y$ shifting, and the radius of the ring focus, respectively, fill some arbitrary values into the target parameter space, and save it as a 3D matrix. Accordingly, an intensity distribution on the image plane is obtained. In the second step, the parameter designations of the three dimensions are kept unchanged, but the values of the parameter space, i.e., the 3D matrix, are permuted. Now the correspondence between values and parameters is disordered, so the obtained light intensity distribution differs from that obtained in the first step. In the third step, the parameter designations and the values are permuted together from the initial state. Although the values are now transferred to new positions in the parameter space, the value-parameter correspondence is restored to its initial alignment. Consequently, the resulting intensity distribution is identical to that of the initial state. In the realm of neural networks, the resolution of the output is typically immutable, firmly anchored by the network's architecture, and infeasible to adjust after training is completed. If a high-resolution CGH is required, one needs to reconstruct and retrain a larger-scale network, bringing about an increase in training cost. RediNet, however, has an unusual architecture liberated from the confinement of output CGH resolution. The mapping procedure allows the resolution of the primitive function to remain static, while the resolution of the CGH can be arbitrarily adjusted through different CPF resolutions. In this way, the power of resolution control over the CGH is ceded to the explicit CPFs, which are simple and exact formulations. This exotic property of providing arbitrary-resolution CGHs with a fixed network is verified in Fig. 6(b).
RediNet accomplishes a prediction task, and then the single output primitive function generates three different CGHs with different resolutions. Since the mapping from the CPFs to the CGHs is a serial procedure, the time consumption increases proportionally with the resolution. From the zoomed-in figures, it is evident that the three CGHs share the same pattern, whereas the highest-resolution one keeps richly detailed phase variations, allowing more accurate manipulation of the spatial frequency. Unlike upsampling a bitmap, the approach and performance of moving from low to high resolution are similar to vector-graphic scaling, based on a set of control parameters and fundamental elements rather than relying on the low-resolution image itself. Figures 6(a) and 6(b) show the flexibility coming from the input preprocessing and output postprocessing of the network, respectively. Furthermore, RediNet is fast due to its small-scale input as well as its simple network structure. We restrict the application scenario to 3D multifocal arrays to compare our method with some established algorithms. Among them, NOVOCGH utilizes a nonconvex optimization method, mainly aiming at the phase retrieval of arbitrary light-field distributions on multiple transverse planes;45 the 3DIFTA method adopts Gerchberg-Saxton-like iterations and the 3D Fourier transform relationship between the Ewald cap and the 3D image space;16 the SACAD algorithm uses automatic differentiation to obtain high-quality 3D focal arrays inside a material;17 and DeepCGH also relies on a neural network to realize multidepth light-field reconstruction, which can output complex CGHs in both amplitude and phase.46 RediNet consumes significantly less time than the other algorithms in generating a CGH for the seven-layer focus arrays at both tested resolutions.
This swift performance underscores RediNet's computational efficiency, although it admittedly compromises the complexity of the target light field due to the small resolution of the parameter space. Finally, a brief assessment of the accuracy of RediNet is illustrated. The trends of diffraction efficiency and RMSE with the number of subbeams in an array, $N$, are shown in Fig. 6(d). As $N$ becomes larger, the diffraction efficiency decreases and the RMSE increases. When $N$ is 50, the diffraction efficiency is about 70%, and the RMSE does not exceed 0.2. In the training data set we used, data with $N$ larger than 32 are nonexistent. The blue area on the right side of Fig. 6(d) shows the performance beyond the largest $N$ in the training data, where the trends of diffraction efficiency and RMSE remain basically unchanged, reflecting the generalization of the network. As a regression task, the correspondence between the input coefficients and those expanded from the output is illustrated in Fig. 6(e). The regression plots for three different cases are given, revealing a satisfactory input-output correlation. When an input value is negligible (a beam with energy below 10% of the maximum), the network may handle it as 0. The overall correlation coefficient is 0.9711.

4. Discussion and Conclusion

The effectiveness of RediNet has been experimentally demonstrated, yet its capabilities can be further extended, addressing some limitations we have discovered. First, the 3D parameter spaces and primitive functions used here have small effective resolutions (8 and 64 pixels per dimension, respectively). There is no obstacle to expanding them to higher resolutions within our framework, enabling larger-scale arrays as well as denser steps of the CPF values. Second, limited by the difficulty of visualizing higher dimensions, only the 3D RediNet is reported in this article, but Eqs. (1) and (2) substantiate the method's applicability to higher-dimensional extensions.
This means that more structured light species can be multiplexed concurrently, potentially yielding more complex or even entirely new structured lights. Third, continuously integrating innovations from research on high-functioning neural networks is an effective way to promote performance and is what we will actively pursue in the future. In terms of computational acceleration, as shown in Fig. 6(b), our analysis reveals a marked disparity in computational efficiency between the network prediction and the mapping process. As delineated in Fig. 2(c), the mapping process in our code is executed serially over pixel positions. This is a task of accessing storage by address, which is expected to be parallelized by specially designed code or hardware, contributing to a huge computational speedup. In summary, we have demonstrated RediNet, a versatile, noniterative, and resolution-flexible strategy for generating numerous kinds of structured light arrays based on the phase CGH. There are two critical features compared with conventional neural networks targeting beam-modulation problems. One is that RediNet takes its input in a uniform parameter space, where the designation of each dimension can be redefined, ensuring that RediNet adapts to almost all structured light arrays as long as they are theoretically achievable with a phase hologram. The other is that the CGH resolution is decoupled from the network architecture, so that a compact, static network can generate CGHs of arbitrary resolution. By mathematically distilling key information from the light field, RediNet seemingly steps away from the common end-to-end concept in deep learning, but it exploits the sparsity of the arrayed light field, obtaining a remarkable reduction in network complexity and achieving the isolation of the fixed computation kernel from variations.
Therefore, this concise, one-way algorithm may allow researchers to deploy the program on low-cost processors rather than expensive high-performance computers for real-time CGH generation. We anticipate that RediNet can accelerate the application of structured light arrays in high-dimensional free-space optical communication, single-exposure parallel laser direct writing, flexible high-throughput medical imaging, optical traps, and so on.

Code and Data Availability

The authors declare that the data supporting the findings of this study are available within the manuscript and its Supplementary Material. The algorithms and codes supporting the findings of this study are available in the main text and on GitHub (https://github.com/LiHengyang1/RediNet).

Author Contributions

H.L. and J.X. proposed the idea and initiated the project. H.L. mainly conducted the experiments and simulations and wrote the manuscript. Z.W. helped with simulations. J.X., H.Z., and C.H. helped with experiments. Y.X., X.T., C.W., G.X., and Y.Q. supervised the project, and G.X. edited the manuscript.

Acknowledgments

This work was supported by the Innovation Project of Optics Valley Laboratory (Grant No. OVL2023PY006); the National Natural Science Foundation of China (Grant No. 62275097); the Key Research and Development Project of Hubei Province, China (Grant No. 2020AAA003); and the Major Program (JD) of Hubei Province (Grant No. 2023BAA015).

References

1. A. Forbes, M. De Oliveira, and M. R. Dennis, “Structured light,” Nat. Photonics 15, 253–262 (2021). https://doi.org/10.1038/s41566-021-00780-4
2. F. O. Fahrbach and A. Rohrbach, “Propagation stability of self-reconstructing Bessel beams enables contrast-enhanced imaging in thick media,” Nat. Commun. 3, 632 (2012). https://doi.org/10.1038/ncomms1646
3. R. Ivaškevičiūtė-Povilauskienė et al., “Terahertz structured light: nonparaxial Airy imaging using silicon diffractive optics,” Light Sci. Appl. 11, 326 (2022). https://doi.org/10.1038/s41377-022-01007-z
4. Z. Chen et al., “Multifocal structured illumination optoacoustic microscopy,” Light Sci. Appl. 9, 152 (2020). https://doi.org/10.1038/s41377-020-00390-9
5. G. Kim et al., “Metasurface-driven full-space structured light for three-dimensional imaging,” Nat. Commun. 13, 5920 (2022). https://doi.org/10.1038/s41467-022-32117-2
6. T. Vettenburg et al., “Light-sheet microscopy using an Airy beam,” Nat. Methods 11, 541–544 (2014). https://doi.org/10.1038/nmeth.2922
7. T. Lei et al., “Massive individual orbital angular momentum channels for multiplexing enabled by Dammann gratings,” Light Sci. Appl. 4, e257 (2015). https://doi.org/10.1038/lsa.2015.30
8. J. Wang et al., “Orbital angular momentum and beyond in free-space optical communications,” Nanophotonics 11, 645–680 (2022). https://doi.org/10.1515/nanoph-2021-0527
9. A. E. Willner et al., “Optical communications using orbital angular momentum beams,” Adv. Opt. Photonics 7, 66 (2015). https://doi.org/10.1364/AOP.7.000066
10. A. Mair et al., “Entanglement of the orbital angular momentum states of photons,” Nature 412, 313–316 (2001). https://doi.org/10.1038/35085529
11. P. S. Salter and M. J. Booth, “Adaptive optics in laser processing,” Light Sci. Appl. 8, 110 (2019). https://doi.org/10.1038/s41377-019-0215-1
12. S. Hasegawa et al., “Massively parallel femtosecond laser processing,” Opt. Express 24, 18513 (2016). https://doi.org/10.1364/OE.24.018513
13. C. Maurer et al., “What spatial light modulators can do for optical microscopy,” Laser Photonics Rev. 5, 81–101 (2011). https://doi.org/10.1002/lpor.200900047
14. Y. Yang, A. Forbes, and L. Cao, “A review of liquid crystal spatial light modulators: devices and applications,” Opto-Electron. Sci. 2, 230026 (2023). https://doi.org/10.29026/oes.2023.230026
15. R. Di Leonardo, F. Ianni, and G. Ruocco, “Computer generation of optimal holograms for optical trap arrays,” Opt. Express 15, 1913–1922 (2007). https://doi.org/10.1364/OE.15.001913
16. H. Zhang et al., “Modulation of high-quality internal multifoci based on modified three-dimensional Fourier transform,” Opt. Lett. 48, 900–903 (2023). https://doi.org/10.1364/OL.479102
17. H. Li et al., “Comprehensive holographic parallel beam modulation inside material based on automatic differentiation,” Opt. Laser Technol. 167, 109656 (2023). https://doi.org/10.1016/j.optlastec.2023.109656
18. Y. Lu et al., “Arrays of Gaussian vortex, Bessel and Airy beams by computer-generated hologram,” Opt. Commun. 363, 85–90 (2016). https://doi.org/10.1016/j.optcom.2015.11.001
19. E. Stankevičius et al., “Bessel-like beam array formation by periodical arrangement of the polymeric round-tip microstructures,” Opt. Express 23, 28557–28566 (2015). https://doi.org/10.1364/OE.23.028557
20. S. Fu, T. Wang, and C. Gao, “Perfect optical vortex array with controllable diffraction order and topological charge,” J. Opt. Soc. Am. A 33, 1836–1842 (2016). https://doi.org/10.1364/JOSAA.33.001836
21. J. Lin et al., “Collinear superposition of multiple helical beams generated by a single azimuthally modulated phase-only element,” Opt. Lett. 30, 3266–3268 (2005). https://doi.org/10.1364/OL.30.003266
22. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015). https://doi.org/10.1038/nature14539
23. S. Minaee et al., “Image segmentation using deep learning: a survey,” IEEE Trans. Pattern Anal. Mach. Intell. 44(7), 3523–3542 (2021). https://doi.org/10.1109/TPAMI.2021.3059968
24. R. Horisaki, R. Takagi, and J. Tanida, “Deep-learning-generated holography,” Appl. Opt. 57, 3859–3863 (2018). https://doi.org/10.1364/AO.57.003859
25. X. Wang et al., “Phase-only hologram generated by a convolutional neural network trained using low-frequency mixed noise,” Opt. Express 30, 35189–35201 (2022). https://doi.org/10.1364/OE.466083
26. A. Sinha et al., “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017). https://doi.org/10.1364/OPTICA.4.001117
27. K. Wang et al., “Deep learning wavefront sensing and aberration correction in atmospheric turbulence,” PhotoniX 2, 8 (2021). https://doi.org/10.1186/s43074-021-00030-4
28. H. Dammann and K. Görtler, “High-efficiency in-line multiple imaging by means of multiple phase holograms,” Opt. Commun. 3, 312–315 (1971). https://doi.org/10.1016/0030-4018(71)90095-2
29. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” (2015).
30. K. He et al., “Deep residual learning for image recognition,” (2015).
31. B. Richards and E. Wolf, “Electromagnetic diffraction in optical systems, II. Structure of the image field in an aplanatic system,” Proc. R. Soc. Lond. A 253, 358–379 (1959). https://doi.org/10.1098/rspa.1959.0200
32. J. Durnin, “Exact solutions for nondiffracting beams. I. The scalar theory,” J. Opt. Soc. Am. A 4, 651–654 (1987). https://doi.org/10.1364/JOSAA.4.000651
33. V. Garcés-Chávez et al., “Simultaneous micromanipulation in multiple planes using a self-reconstructing light beam,” Nature 419, 145–147 (2002). https://doi.org/10.1038/nature01007
34. F. Courvoisier, R. Stoian, and A. Couairon, “[Invited] Ultrafast laser micro- and nano-processing with nondiffracting and curved beams,” Opt. Laser Technol. 80, 125–137 (2016). https://doi.org/10.1016/j.optlastec.2015.11.026
35. A. Chong et al., “Airy–Bessel wave packets as versatile linear light bullets,” Nat. Photonics 4, 103–106 (2010). https://doi.org/10.1038/nphoton.2009.264
36. P. Vaity and L. Rusch, “Perfect vortex beam: Fourier transformation of a Bessel beam,” Opt. Lett. 40, 597–600 (2015). https://doi.org/10.1364/OL.40.000597
37. C. A. Alonzo, P. J. Rodrigo, and J. Glückstad, “Helico-conical optical beams: a product of helical and conical phase fronts,” Opt. Express 13, 1749–1760 (2005). https://doi.org/10.1364/OPEX.13.001749
38. M. Krenn et al., “Twisted light transmission over 143 km,” Proc. Natl. Acad. Sci. U. S. A. 113, 13648–13653 (2016). https://doi.org/10.1073/pnas.1612023113
39. J. Wang et al., “Terabit free-space data transmission employing orbital angular momentum multiplexing,” Nat. Photonics 6, 488–496 (2012). https://doi.org/10.1038/nphoton.2012.138
40. Z. Feng et al., “High-density orbital angular momentum mode analyzer based on the mode converters combining with the modified Mach–Zehnder interferometer,” Opt. Commun. 435, 441–448 (2019). https://doi.org/10.1016/j.optcom.2018.11.068
41. D. Mamadou et al., “High-efficiency sorting and measurement of orbital angular momentum modes based on the Mach–Zehnder interferometer and complex phase gratings,” Meas. Sci. Technol. 30, 075201 (2019). https://doi.org/10.1088/1361-6501/ab0e62
42. S. Li and J. Wang, “Experimental demonstration of optical interconnects exploiting orbital angular momentum array,” Opt. Express 25, 21537–21547 (2017). https://doi.org/10.1364/OE.25.021537
43. H. Huang et al., “100 Tbit/s free-space data link enabled by three-dimensional multiplexing of orbital angular momentum, polarization, and wavelength,” Opt. Lett. 39, 197–200 (2014). https://doi.org/10.1364/OL.39.000197
44. L. Yan, P. Kristensen, and S. Ramachandran, “Vortex fibers for STED microscopy,” APL Photonics 4, 022903 (2018). https://doi.org/10.1063/1.5045233
45. J. Zhang et al., “3D computer-generated holography by non-convex optimization,” Optica 4, 1306–1313 (2017). https://doi.org/10.1364/OPTICA.4.001306
46. M. Hossein Eybposh et al., “DeepCGH: 3D computer-generated holography using deep learning,” Opt. Express 28, 26636–26650 (2020). https://doi.org/10.1364/OE.399624
Biography

Hengyang Li received his BSc degree from Huazhong University of Science and Technology, Wuhan, China, in 2021, where he is currently pursuing a PhD in the Department of Laser Science and Technology. His research interests include holography, laser nanomanufacturing, computational imaging, and tunable laser resonators.

Jiaming Xu received his BSc degree from Huazhong University of Science and Technology, Wuhan, China, in 2019 and is currently pursuing a PhD in the Department of Laser Science and Technology. His research interests encompass holography, laser nanomanufacturing, and computational imaging.

Huaizhi Zhang received his doctoral degree from Huazhong University of Science and Technology, Wuhan, China, in 2023. His research interests include laser nanomanufacturing, computational imaging, and optical communication technology.

Zining Wan received his BSc degree from Beijing Forestry University, Beijing, China, in 2021. He is currently pursuing a master’s degree at the State Key Laboratory of Media Integration and Communication, Communication University of China, Beijing, China. His research interests include computational neuroscience, machine learning, pattern recognition, and binaural sound localization.

Gang Xu is a professor in the School of Optical and Electronic Information at Huazhong University of Science and Technology, Wuhan, China. His research interests include ultrafast lasers, nonlinear optics, fiber lasers, and soliton dynamics. He has published more than 50 peer-reviewed journal papers and 40 international conference papers.

Yingxiong Qin is a professor and the director of the Department of Laser Science and Technology at Huazhong University of Science and Technology, Wuhan, China. He received his PhD in optics engineering in 2008. His research interests include holography, freeform optics, gas laser systems, and high-power laser manufacturing.