The field of artificial intelligence and machine learning (AI/ML) has experienced unprecedented growth over the last decade, driven by computationally demanding applications. The required computing power has so far been provided by general-purpose digital hardware such as central processing units (CPUs) and graphics processing units (GPUs). As the potential for continued technological advances in digital electronics is brought into question, research is turning to alternative paradigms such as application-specific analog hardware. Both electronic and photonic analog hardware are being actively investigated, with promising results showing advantages in processing speed and/or energy efficiency. However, a systematic comparison of these different hardware platforms in terms of high-level computing performance is missing. In this work, we compare these hardware platforms, focusing on use cases with different requirements in terms of, e.g., compute capacity, efficiency, and density. The comparison highlights the current advantages and the key challenges to be addressed in each field.
Integrated photonic circuits offer a promising platform for implementing matrix-vector multiplication in optical feedforward neural networks. The most common implementations rely on thermal phase shifters, which are inevitably susceptible to effects such as thermal and electrical crosstalk. Although deterministic, crosstalk-induced distortions have been challenging to incorporate accurately into physics-based analytical models. Additionally, analog hardware platforms suffer from fabrication deviations that can significantly degrade computing performance, thus limiting the scalability of the implemented matrix size. In contrast, data-driven modeling techniques have been shown to be promising approaches for modeling such circuits, yet they rely on black-box, physics-agnostic modeling, require massive and unscalable amounts of training data, and cannot guarantee physically plausible results. Going beyond black-box data-driven modeling, while still taking advantage of the information captured by the data, we investigate the advantages of using physics-informed machine learning for photonic meshes. We analyze the ability of this approach to provide more accurate, less data-hungry, and physically plausible models of programmable photonic meshes. Moreover, we explore the potential to extract knowledge from the trained model.
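As a rough illustration of the physics-informed idea, the sketch below fits a learned heater-to-phase crosstalk map to measured transfer matrices while penalizing deviations from unitarity, the constraint a lossless mesh must satisfy. This is a minimal sketch of one possible formulation, not the model used in the work; all names, shapes, and the 2x2 single-MZI setting are illustrative assumptions.

```python
import torch

def mzi_transfer(theta: torch.Tensor) -> torch.Tensor:
    """Ideal lossless 2x2 MZI transfer matrix for internal phase theta."""
    j = torch.complex(torch.tensor(0.0), torch.tensor(1.0))
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    return j * torch.exp(j * theta / 2) * torch.stack(
        [torch.stack([s, c]), torch.stack([c, -s])]
    )

class CrosstalkModel(torch.nn.Module):
    """Hypothetical linear map from heater drive phases to effective
    phases, capturing thermal crosstalk between phase shifters."""
    def __init__(self, n_heaters: int):
        super().__init__()
        self.coupling = torch.nn.Parameter(torch.eye(n_heaters))

    def forward(self, drives: torch.Tensor) -> torch.Tensor:
        return self.coupling @ drives

def physics_informed_loss(model, drives, measured_T, weight=0.1):
    """Data misfit plus a physics prior (unitarity of a lossless mesh)."""
    theta = model(drives)[0]          # effective phase of the probed MZI
    T = mzi_transfer(theta)
    data_loss = (T - measured_T).abs().pow(2).mean()
    eye = torch.eye(2, dtype=T.dtype)
    unitarity = (T.conj().T @ T - eye).abs().pow(2).mean()
    return data_loss + weight * unitarity
```

In practice one would sweep the heater drives over many settings and minimize this loss with a standard optimizer; the unitarity weight trades raw data fit against physical plausibility, which is what distinguishes this approach from purely black-box fitting.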
Silicon microring resonators (MRRs) have shown strong potential for acting as the nonlinear nodes of photonic reservoir computing (RC) schemes. By using nonlinearities within a silicon MRR, such as those caused by free-carrier dispersion (FCD) and thermo-optic (TO) effects, it is possible to map the input data of the RC to a higher-dimensional space. Furthermore, by adding an external waveguide between the through and add ports of the MRR, it is possible to implement a time-delay RC (TDRC) with enhanced memory: the signal exiting the through port is fed back into the add port with the delay applied by the external waveguide, effectively adding memory. In a TDRC, the (virtual) nodes are multiplexed in time, and their respective time evolutions are detected at the drop port. The performance of MRR-based TDRC depends strongly on the amount of nonlinearity in the MRR. The nonlinear effects, in turn, depend on the physical properties of the MRR, which determine the lifetimes of these effects. Another factor to take into account is the stability of the MRR response, as strong time-domain discontinuities at the drop port are known to emerge from FCD nonlinearities due to self-pulsing (highly nonlinear behaviour). However, quantifying the right amount of nonlinearity that the RC needs to achieve optimum performance on a given task is challenging. Therefore, further analysis is required to fully understand the nonlinear dynamics of this TDRC setup. Here, we quantify the linear and nonlinear memory capacity of the described microring-based TDRC scheme as a function of the time constants of the generated free carriers and of the thermal (TO) effects. We analyze the properties of the TDRC dynamics that determine the parameter space, in terms of input signal power and frequency detuning range, over which conventional RC tasks can be satisfactorily performed by the TDRC scheme.
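To make the memory-capacity metric concrete, the following sketch computes the standard linear memory capacity MC = Σ_k r²(y_k, u(n−k)) from node responses using one ridge-regression readout per delay. The toy echo-state reservoir at the end merely stands in for the simulated MRR dynamics; all names and hyperparameters are illustrative placeholders.

```python
import numpy as np

def memory_capacity(states: np.ndarray, u: np.ndarray,
                    max_delay: int = 30, ridge: float = 1e-6) -> float:
    """MC = sum_k r^2(y_k, u(n-k)), one ridge readout per delay k."""
    T, N = states.shape
    X = np.hstack([states, np.ones((T, 1))])        # add a bias column
    mc = 0.0
    for k in range(1, max_delay + 1):
        Xk, target = X[k:], u[:-k]                  # reconstruct u(n - k)
        w = np.linalg.solve(Xk.T @ Xk + ridge * np.eye(N + 1),
                            Xk.T @ target)
        r = np.corrcoef(Xk @ w, target)[0, 1]
        mc += r ** 2
    return mc

# Toy echo-state reservoir standing in for the simulated MRR dynamics:
rng = np.random.default_rng(0)
T, N = 2000, 50
u = rng.uniform(-1, 1, T)
W = rng.normal(size=(N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()       # spectral radius 0.9
w_in = rng.normal(size=N)
states = np.zeros((T, N))
for n in range(1, T):
    states[n] = np.tanh(W @ states[n - 1] + w_in * u[n])
print(f"Linear memory capacity: {memory_capacity(states, u):.2f}")
```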
KEYWORDS: Machine learning, Control systems, Photonics systems, Design and modelling, Control systems design, Frequency combs, Telecommunications, Raman amplifiers, Optical circuits, Neural networks
Machine learning techniques are proving to be very useful for the design of optical amplifiers, noise characterization of frequency combs, optimization of fiber-optic communication systems, inverse design of photonic components, and quantum-noise-limited signal detection. In this talk, we will review some of the successful applications of machine learning in photonics and look into what is next in this emerging field. More specifically, we will look into how reinforcement learning can be used for the generation of programmable pulse shapes, which has a broad range of applications in classical and quantum engineering.
Data protection and confidentiality have become a serious concern in today's world. Their security is guaranteed by cryptographic protocols, which rely heavily on random numbers as a safeguard against predictability. Classically, randomness is generated via complex but deterministic algorithms, which are vulnerable to attacks. Quantum Random Number Generators (QRNGs) have emerged as a promising solution, as they provide true random numbers based on the intrinsically non-deterministic nature of quantum mechanics. However, critical challenges for QRNGs are the certification and quantification of their genuine randomness, especially in the presence of untrusted devices, and their compactness for systematic deployment. In this feasibility study, to address these challenges, we propose to use a silicon-photonic platform, leveraging the concept of quantum contextuality for a semi-device-independent generator. In particular, we use the Klyachko-Can-Binicioglu-Shumovsky (KCBS) inequality to assess a fundamental property of quantum measurements: that their outcomes depend on the specific measurement context.
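For reference, the standard form of the KCBS inequality tested in such a scheme is:

```latex
% KCBS inequality for five dichotomic observables A_1,...,A_5 (outcomes
% +-1), with A_i and A_{i+1} jointly measurable (indices modulo 5):
\[
  \sum_{i=1}^{5} \langle A_i A_{i+1} \rangle \;\geq\; -3
  \qquad \text{(noncontextual hidden-variable bound)}
\]
% Quantum mechanics on a qutrit can reach
\[
  \sum_{i=1}^{5} \langle A_i A_{i+1} \rangle \;=\; 5 - 4\sqrt{5} \;\approx\; -3.944,
\]
% so an observed value below -3 certifies contextuality and can be used
% to lower-bound the genuine randomness of the measurement outcomes.
```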
We propose a methodology to analyze a 3×3 Mach-Zehnder-based neuromorphic optical network used as a programmable logic gate. The investigated approach starts from the electromagnetic simulation of the integrated optical elements, then moves to the description of the thermal heaters, including thermal crosstalk, and finally addresses the definition of the logical levels.
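A minimal numerical sketch of the crosstalk step in such a flow is shown below. The mesh topology, crosstalk coefficients, and thresholding of output powers into logical levels are illustrative assumptions, not the values or models used in the study.

```python
import numpy as np

def mzi(theta: float) -> np.ndarray:
    """Ideal lossless MZI (two 50:50 couplers, internal phase theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return 1j * np.exp(1j * theta / 2) * np.array([[s, c], [c, -s]])

def embed(u2: np.ndarray, ports: tuple) -> np.ndarray:
    """Embed a 2x2 block into a 3x3 identity, acting on the given ports."""
    U = np.eye(3, dtype=complex)
    i, j = ports
    U[np.ix_([i, j], [i, j])] = u2
    return U

# Hypothetical thermal-crosstalk matrix: diagonal = self-heating,
# off-diagonal = parasitic heating of neighbouring phase shifters.
XT = np.array([[1.00, 0.08, 0.02],
               [0.08, 1.00, 0.08],
               [0.02, 0.08, 1.00]])

target_phases = np.array([np.pi / 2, np.pi, np.pi / 4])
effective_phases = XT @ target_phases     # what the mesh actually sees

# Assumed 3x3 topology: MZIs on ports (0,1), (1,2), (0,1) in sequence.
U = (embed(mzi(effective_phases[2]), (0, 1))
     @ embed(mzi(effective_phases[1]), (1, 2))
     @ embed(mzi(effective_phases[0]), (0, 1)))

# Output power distribution for light injected into port 0; thresholding
# these powers against chosen logical levels yields the gate's output.
print(np.abs(U[:, 0]) ** 2)
```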
Machine learning (ML) is becoming a ubiquitous and powerful tool helping to address challenges in countless fields. Applications of ML to optics challenges have been extensively studied in recent years, opening up new research directions. In particular, here we review some of our current efforts and provide examples of successful applications of ML to the characterization of photonic devices, the design and modeling of optical subsystems, and complete end-to-end optical system optimization. ML and statistical tools can yield additional insight from measurement data, e.g., by targeted filtering of noise sources. They have also been shown to complement complex or inaccurate physics-based models through black- and grey-box modeling of photonic components or subsystems. Such ML-aided models have enabled easier optimization and design (including inverse design) of optical systems.
KEYWORDS: Neural networks, Signal detection, Data centers, Signal processing, Receivers, Optoelectronics, Numerical analysis, Data communications, Computer architecture
The substantial increase in communication throughput, driven by the ever-growing machine-to-machine communication within and between data centers, is straining short-reach communication links. To satisfy this demand while still complying with strict energy-consumption and latency requirements, several directions are being investigated, with a strong focus on equalization techniques for intensity-modulation/direct-detection (IM/DD) transmission. In particular, the key challenge equalizers need to address is the inter-symbol interference introduced by fiber dispersion when making use of the low-loss transmission window at 1550 nm. Standard digital equalizers such as feed-forward equalizers (FFEs) and decision-feedback equalizers (DFEs) can provide only limited compensation. Therefore, more complex approaches, either relying on maximum likelihood sequence estimation (MLSE) or using machine-learning tools such as neural network (NN) based equalizers, are being investigated. Among the different NN architectures, the most promising approaches are based on NNs with memory, such as time-delay feedforward NNs (TD-FNNs), recurrent NNs (RNNs), and reservoir computing (RC). In this work, we review our recent numerical results comparing TD-FNN and RC equalizers and benchmark their performance for 32-GBd on-off keying (OOK) transmission. Special focus is dedicated to analyzing the memory properties of the reservoir and their impact on full-system performance. Experimental validation of the numerical findings is also provided, together with a review of our recently proposed receiver architecture relying on hybrid optoelectronic processing. By spectrally slicing the received signal, independently detecting the slices, and jointly processing them with an NN-based equalizer (either TD-FNN or RC), a significant reach extension is shown both numerically and experimentally.
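As a self-contained illustration of the RC equalizer concept (not the 32-GBd IM/DD setup of the work), the sketch below trains a ridge-regression readout on a toy echo-state reservoir driven by an ISI-impaired OOK sequence. The channel taps, reservoir size, and noise level are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20000
bits = rng.integers(0, 2, T).astype(float)
# Dispersion-like ISI: each received sample mixes neighbouring symbols.
taps = np.array([0.1, 0.25, 1.0, 0.25, 0.1])
rx = np.convolve(bits, taps, mode="same") + 0.05 * rng.normal(size=T)

# Echo-state reservoir: its fading memory supplies the temporal context
# that a memoryless symbol-by-symbol decision lacks.
N = 100
W = rng.normal(size=(N, N))
W *= 0.8 / np.abs(np.linalg.eigvals(W)).max()      # spectral radius 0.8
w_in = rng.normal(size=N)
states = np.zeros((T, N))
for n in range(1, T):
    states[n] = np.tanh(W @ states[n - 1] + w_in * rx[n])

# Ridge-regression readout: train on the first half, test on the second.
X = np.hstack([states, np.ones((T, 1))])
tr = slice(500, T // 2)                            # skip initial transient
w = np.linalg.solve(X[tr].T @ X[tr] + 1e-4 * np.eye(N + 1),
                    X[tr].T @ bits[tr])
decisions = (X[T // 2:] @ w) > 0.5
ber = np.mean(decisions != bits[T // 2:].astype(bool))
print(f"BER = {ber:.4f}")
```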
We experimentally demonstrated 10 GHz frequency comb spectral broadening in an AlGaAsOI nano-waveguide with a peak power of only a few watts. The spectrally broadened 10 GHz frequency comb exhibits a high optical signal-to-noise ratio (OSNR) at the output of the nano-waveguide. To the best of our knowledge, this is the first photonic-chip-based frequency comb, obtained by spectral broadening of a 10 GHz mode-locked laser comb in an AlGaAsOI nano-waveguide, with sufficient comb output power to support several hundred Tbit/s of optical data.
Space-division multiplexing (SDM) is being widely investigated as a means of enhancing capacity by exploiting space as an additional degree of multiplexing freedom, both in optical fiber communication and in on-chip interconnects. Basic components for processing spatial modes are critical for SDM applications. Here, we present such building blocks implemented on the silicon-on-insulator (SOI) platform. These include fabrication-tolerant wideband (de)multiplexers, ultra-compact mode converters and (de)multiplexers designed by topology optimization, and mode filters using one-dimensional (1D) photonic crystal silicon waveguides. We furthermore use the fabricated devices to demonstrate on-chip point-to-point mode-division-multiplexing transmission and all-optical signal processing via mode-selective wavelength conversion. Finally, we report an efficient silicon photonic integrated circuit mode (de)multiplexer for few-mode fibers (FMFs).