Mask optimization has become a vital challenge in the VLSI manufacturing flow, primarily because the critical dimension of integrated circuits is now much smaller than the wavelength of the light source. Inverse lithography technology (ILT), a notable resolution enhancement technique, is known for its efficacy in improving mask printability, but its heavy computational cost has hindered its broad adoption. In this paper, we introduce a detail-enhanced Pix2Pix network, built on GAN principles, to accelerate the ILT process. The network generates quasi-optimal masks from given target layouts, thereby reducing the number of traditional ILT iterations required to produce high-quality masks. Our experiments on the ICCAD 2013 benchmarks show that, compared with the latest cGAN-based method, our approach reduces L2 error, PVB, and runtime by 7.2%, 5.9%, and 18.4%, respectively. Our approach thus not only expedites the ILT process but also delivers better printability.
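For illustration only, the following is a minimal sketch of the generate-then-refine flow described above, assuming a PyTorch-style toy generator and a trivial differentiable litho proxy; the real detail-enhanced Pix2Pix architecture and ILT solver are not reproduced here.

```python
# Hypothetical sketch of the generate-then-refine flow: a small network
# predicts a quasi-optimal mask, then a few gradient-based "ILT" steps
# refine it. Architecture and litho proxy are illustrative placeholders.
import torch
import torch.nn as nn

class MaskGenerator(nn.Module):
    """Toy stand-in for the detail-enhanced Pix2Pix generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, layout):
        return self.net(layout)

def ilt_refine(mask, target, steps=10, lr=0.5):
    """Placeholder gradient-based refinement toward the target layout."""
    mask = mask.detach().clone().requires_grad_(True)
    for _ in range(steps):
        loss = ((torch.sigmoid(4 * (mask - 0.5)) - target) ** 2).sum()
        loss.backward()
        with torch.no_grad():
            mask -= lr * mask.grad
            mask.grad.zero_()
    return mask.detach()

target = torch.rand(1, 1, 64, 64).round()       # dummy target layout
quasi_optimal = MaskGenerator()(target)         # network prediction
final_mask = ilt_refine(quasi_optimal, target)  # few remaining ILT-style steps
```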
Network systems have undergone a significant transformation with the growing adoption of RDMA for low-latency communication in data centers. Unfortunately, studies have shown that RDMA one-sided operations are subject to security risks such as packet eavesdropping, packet injection, and packet tampering. New RDMA designs therefore take security features into account, yet most of them still neglect efficiency in some respects. We propose SEC-RDMA, a scheme that is compatible with the original RoCEv2 protocol and provides confidentiality and authentication for one-sided operations during RDMA transmission, focusing on the efficiency of two critical aspects: hard-wired key management and message-based packet authentication. We implement the scheme on an FPGA-based RDMA network interface card to demonstrate its viability. In tests with this implementation, message-based packet authentication takes roughly 84.6% less time than the packet-based alternative, and hard-wired key management takes approximately 85.5% less time than the typical QP-level key exchange strategy. The SEC-RDMA implementation adds 45K LUTs and 29K registers to the FPGA-based RDMA network interface card.
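As a rough illustration of why message-based authentication saves work, the sketch below compares one MAC per packet with one MAC per message; HMAC-SHA256, the key handling, and the payload size are placeholder assumptions, not the actual hardware design.

```python
# Illustrative comparison of per-packet vs. message-based authentication.
# HMAC-SHA256 stands in for whatever MAC the hardware uses; packet size
# and key handling here are assumptions for the sketch only.
import hmac, hashlib

KEY = b"per-connection-key"   # hypothetical key; SEC-RDMA manages keys in hardware
MTU = 4096                    # assumed RoCEv2 payload size per packet

def packet_based_auth(message: bytes) -> list:
    """One MAC per packet: authentication cost grows with packet count."""
    packets = [message[i:i + MTU] for i in range(0, len(message), MTU)]
    return [hmac.new(KEY, p, hashlib.sha256).digest() for p in packets]

def message_based_auth(message: bytes) -> bytes:
    """One MAC per message: a single tag covers the whole transfer."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

msg = b"\x00" * (64 * MTU)            # a 64-packet RDMA message
tags = packet_based_auth(msg)         # 64 MAC computations
tag = message_based_auth(msg)         # 1 MAC computation
print(len(tags), "packet tags vs. 1 message tag")
```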
The purpose of this paper is to use digital holographic imaging to capture holograms of underwater particles. Digital holography consists of two steps: wavefront recording and wavefront reconstruction. In the experiments on underwater particles, several technical issues, such as the principle of digital holography, zero-order term suppression, and the reconstruction algorithm, are discussed. The complex amplitude of the underwater particles is reconstructed numerically using the Kirchhoff-Fresnel integral formula, which enables reconstruction of the object wavefield in a plane at a distance d from the hologram. Imaging underwater particles with digital holography is a valuable technique for studying underwater particle fields, as it offers a large depth of field, requires no mechanical focusing, provides high resolution, and records the wavefront information of all particles in the entire volume. Finally, our laboratory results demonstrate that a digital holographic imaging setup of modest complexity can image underwater particles.
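A minimal numerical-reconstruction sketch follows, assuming a Fresnel transfer-function propagation (a standard approximation of the Kirchhoff-Fresnel integral) and illustrative wavelength, pixel pitch, and distance values rather than the experimental parameters.

```python
# Minimal reconstruction sketch: propagate the recorded hologram back by a
# distance d with the Fresnel transfer function, yielding the complex
# amplitude (intensity and phase) in the reconstruction plane.
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, d):
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, pitch)
    fy = np.fft.fftfreq(ny, pitch)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Fresnel transfer function H = exp(ikd) * exp(-i*pi*lambda*d*(fx^2 + fy^2))
    H = np.exp(1j * k * d) * np.exp(-1j * np.pi * wavelength * d * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(hologram) * H)   # complex amplitude at distance d

holo = np.random.rand(512, 512)                       # dummy hologram
u = fresnel_reconstruct(holo, 532e-9, 3.45e-6, 0.05)  # illustrative parameters
intensity, phase = np.abs(u) ** 2, np.angle(u)
```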
Fluorescence spectroscopy and absorption spectroscopy are common physical methods for water quality monitoring and analysis. However, absorption spectroscopy is inferior in sensitivity and selectivity, while only a limited range of organic contaminants emit fluorescence, which constrains the analytical range of fluorescence spectroscopy. Herein, a novel feature extraction method is proposed for the joint analysis of fluorescence and absorption spectroscopy to predict the category of water contaminants. The three-dimensional fluorescence spectra and absorption spectra of eight typical substances were studied. We extracted the outline of each three-dimensional fluorescence spectrum along the emission wavelength axis and then transformed it into a wavenumber spectrum. The symmetry axis and the Stokes shift between the fluorescence emission peak and the absorption peak in the wavenumber spectra were taken as two features; in theory, they depend only on the molecular structures of the substances. Moreover, four integral parameters over absorption spectral ranges corresponding to different functional groups were introduced to extend the analytical coverage to more contaminants, including some non-fluorescent substances. Furthermore, we conducted long-term monitoring of river water near a dyeing and printing plant to demonstrate the predictive potential of this method. As an early warning system, the rapid prediction results can guide more targeted and detailed analysis and treatment.
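The sketch below illustrates the two peak-based features on synthetic spectra: conversion to a wavenumber axis, then the Stokes shift and the symmetry axis (midpoint) between the absorption and emission peaks; the spectra and the simple peak-finding are assumptions for illustration, not the paper's processing.

```python
# Hedged sketch of the two spectral features: convert emission and
# absorption spectra to wavenumber, then take the Stokes shift and the
# symmetry axis between the two peaks. Example spectra are synthetic.
import numpy as np

def to_wavenumber(wavelength_nm, intensity):
    """Convert a spectrum from wavelength (nm) to wavenumber (cm^-1)."""
    wn = 1e7 / np.asarray(wavelength_nm)
    order = np.argsort(wn)
    return wn[order], np.asarray(intensity)[order]

def stokes_features(em_wl, em_int, abs_wl, abs_int):
    em_wn, em_i = to_wavenumber(em_wl, em_int)
    ab_wn, ab_i = to_wavenumber(abs_wl, abs_int)
    em_peak = em_wn[np.argmax(em_i)]
    ab_peak = ab_wn[np.argmax(ab_i)]
    stokes_shift = ab_peak - em_peak            # absorption peak at higher wavenumber
    symmetry_axis = 0.5 * (ab_peak + em_peak)   # midpoint between the two peaks
    return stokes_shift, symmetry_axis

wl = np.arange(300, 701)                        # nm
em = np.exp(-((wl - 450) / 30) ** 2)            # synthetic emission outline
ab = np.exp(-((wl - 400) / 25) ** 2)            # synthetic absorption spectrum
print(stokes_features(wl, em, wl, ab))
```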
In this paper, a novel approach was implemented to image marine plankton using in-line digital holography. Digital holography can image all plankton within a certain volume and records more information, including both intensity and phase. Moreover, the lensless system introduces no lens aberrations and reduces the complexity of the setup. For hologram reconstruction, numerical algorithms were developed based on angular spectrum theory. In the experiments on marine plankton, several technical issues, such as the reconstruction algorithm, numerical refocusing, and zero-order term suppression, were discussed. By varying the reconstruction distance in steps, we can obtain reconstructed images layer by layer at different distances, which demonstrates that digital holographic imaging is capable of digital refocusing. Digital holographic imaging has clear advantages over other optical methods for the analysis of marine plankton, and it can support further microorganism identification in oceanographic observation when combined with digital image processing and microscopy techniques.
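The following is a hedged sketch of angular-spectrum reconstruction with digital refocusing, propagating a single hologram to a series of distances; the optical parameters and the simple DC subtraction standing in for zero-order suppression are illustrative assumptions.

```python
# Sketch of angular-spectrum reconstruction with digital refocusing: the
# same hologram is propagated to a series of distances, giving one
# reconstructed layer per step. Parameter values are illustrative only.
import numpy as np

def angular_spectrum(hologram, wavelength, pitch, d):
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, pitch)
    fy = np.fft.fftfreq(ny, pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * d * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0                       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

holo = np.random.rand(512, 512)
holo = holo - holo.mean()                # crude DC subtraction as zero-order suppression stand-in
layers = [np.abs(angular_spectrum(holo, 532e-9, 3.45e-6, d)) ** 2
          for d in np.arange(0.02, 0.08, 0.005)]   # refocus layer by layer
```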
Hotspot classification is an important step of hotspot management. Under possible center-shifting conditions, conventional hotspot classification, which calculates pattern similarity by directly overlaying two hotspot patterns, is not effective. This paper proposes a hotspot classification method based on higher-order local autocorrelation (HLAC). First, we extract features of the hotspot patterns using the HLAC method. Second, principal component analysis (PCA) is applied to the features for dimension reduction. Third, the resulting low-dimensional feature vectors of the hotspot patterns are used in a pre-clustering step. Finally, detailed clustering is carried out using pattern similarity calculated by discrete 2-D correlation. Because the HLAC-based features are shift-invariant, the center-shifting problem caused by defect location inaccuracy is overcome during pre-clustering. Experimental results show that the proposed method classifies hotspots effectively under center-shifting conditions and greatly speeds up the classification process.
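A simplified sketch of the pre-clustering pipeline is shown below, using a small subset of HLAC-style masks, PCA, and k-means; the mask set, pattern sizes, and cluster count are illustrative assumptions, not the paper's configuration.

```python
# Simplified pre-clustering pipeline: a few HLAC-style local autocorrelation
# features (subset of the standard mask set), PCA for dimension reduction,
# then k-means pre-clustering. Shapes and parameters are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# each mask is a list of (dy, dx) offsets whose pixel values are multiplied
HLAC_MASKS = [[(0, 0)], [(0, 0), (0, 1)], [(0, 0), (1, 0)],
              [(0, 0), (1, 1)], [(0, 0), (0, 1), (1, 0)]]

def hlac_features(img):
    h, w = img.shape
    feats = []
    for mask in HLAC_MASKS:
        prod = np.ones((h - 2, w - 2))
        for dy, dx in mask:
            prod = prod * img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        feats.append(prod.sum())         # sum over all positions: shift-invariant
    return np.array(feats)

patterns = np.random.rand(40, 64, 64)    # 40 dummy hotspot patterns
X = np.stack([hlac_features(p) for p in patterns])
X_low = PCA(n_components=3).fit_transform(X)                  # dimension reduction
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X_low)   # pre-clustering
```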
KEYWORDS: Design for manufacturing, Finite element methods, Process modeling, Computer aided design, Manufacturing, Optimization (mathematics), Data modeling, Convolution, Personal protective equipment, Semiconducting wafers
A layout design that passes the design rule check (DRC) may still have manufacturing problems today, especially around critical patterns. A design-for-manufacturability (DfM) model that can simulate the process from designed layout to wafer and predict the final contours is therefore necessary. A new kind of DfM model, called the free-element model (FEM), is proposed in this paper. The framework of FEM is borrowed from the forward process model, which is essentially a set of convolution kernels in matrix form, but the unknown variables are the kernel elements instead of the process parameters. The modeling process is cast as a non-linear optimization problem with equality constraints on the norm-2 of each kernel and on the inner product of any two kernels, so that the optimized kernels remain normalized and mutually orthogonal. A gradient-based method with a Lagrange penalty function is used to solve the optimization problem and minimize the difference between simulated contours and real contours. The dimension of the kernels in FEM is determined by the cutoff frequency and the ambit. Since the kernels are obtained by optimization instead of by decomposition of the transmission cross coefficient (TCC), every kernel element becomes a factor that describes the process. FEM is therefore more flexible: all effects that can be folded into convolution kernels, such as resist deviation and process asymmetry, are incorporated naturally. No confidential process parameters, such as NA and defocus, appear explicitly in FEM, so the encapsulated FEM is suitable for IC manufacturers to publish. Enhancements and supplements to FEM, including the sufficiency of test patterns, are also discussed. In our experiments, DfM models for two process lines were generated from test patterns. The results show that the simulated shapes have an area error of less than 2% compared to the real shapes of the test patterns, and an area error of less than 3% compared to the shapes in typical blocks chosen from a chip for verification. The root mean square contour deviation between the simulation results from FEM and from a conventional lithographic model is 10 nm in a 65 nm process.
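The following toy sketch illustrates the FEM fitting idea under stated assumptions: kernel elements as free variables, a SOCS-style sum of squared convolutions as the simulator, and a penalty term enforcing unit norm and pairwise orthogonality of the kernels; sizes, loss, and penalty weight are placeholders, not the paper's settings.

```python
# Toy FEM-style fitting: kernel elements are the unknowns, the simulated
# intensity is a sum of squared convolutions, and a quadratic penalty
# pushes the kernel Gram matrix toward the identity (normalization +
# orthogonality). Sizes and weights are illustrative.
import torch
import torch.nn.functional as F

K, S = 3, 9                                   # number of kernels, kernel size
kernels = torch.randn(K, 1, S, S, requires_grad=True)

def simulate(layout, kernels):
    fields = F.conv2d(layout, kernels, padding=S // 2)   # one field per kernel
    return (fields ** 2).sum(dim=1, keepdim=True)        # SOCS-style intensity

layout = (torch.rand(1, 1, 64, 64) > 0.5).float()        # dummy test pattern
target = layout                                          # dummy "real contour" image

opt = torch.optim.Adam([kernels], lr=1e-2)
mu = 10.0                                                 # penalty weight
for _ in range(200):
    flat = kernels.view(K, -1)
    gram = flat @ flat.t()                                # pairwise inner products
    penalty = ((gram - torch.eye(K)) ** 2).sum()          # unit norm + orthogonality
    loss = ((simulate(layout, kernels) - target) ** 2).mean() + mu * penalty
    opt.zero_grad(); loss.backward(); opt.step()
```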
As the most important RET (resolution enhancement technology), OPC (optical proximity correction) has been widely used in today's IC manufacturing and is still developing rapidly in both principle and practice. In this invited paper, the key techniques of OPC are classified and overviewed, and the progress of OPC technology published in major SPIE symposia in recent years is reviewed. Recent research results from the Zhejiang University team are described and highlighted. An OPC tool suite named ZOPC, designed to allow new OPC techniques to be integrated into one platform, is presented. The framework of ZOPC and its working scheme are demonstrated with real examples.
To reduce design spin time, OPC-unfriendly spots in an IC layout should be identified by designers before tapeout. This can be done by first running a "trial OPC" step on the layout and then running an ORC step to verify the result. In this paper we introduce a specialized cell-wise OPC method that uses an edge bias model to improve accuracy while retaining its speed advantage: correction is dozens of times faster than a traditional model-based OPC method. This makes the algorithm a good choice for "trial OPC".
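As a hypothetical illustration of an edge-bias model for cell-wise trial OPC, the sketch below looks up a pre-tabulated bias from an edge's line width and neighboring space instead of running full model-based simulation; the table values and binning are invented for illustration.

```python
# Hypothetical edge-bias lookup for fast, cell-wise "trial OPC": each edge
# segment receives a pre-tabulated bias keyed by its line width and the
# space to the neighboring feature. All numbers are synthetic.
import numpy as np

WIDTH_BINS = np.array([60, 90, 120, 200])     # nm
SPACE_BINS = np.array([60, 90, 120, 200])     # nm
BIAS_TABLE = np.array([[6, 4, 3, 2],          # nm of edge bias per (width, space) bin
                       [4, 3, 2, 1],
                       [3, 2, 1, 1],
                       [2, 1, 1, 0]])

def edge_bias(width_nm, space_nm):
    i = min(np.searchsorted(WIDTH_BINS, width_nm), len(WIDTH_BINS) - 1)
    j = min(np.searchsorted(SPACE_BINS, space_nm), len(SPACE_BINS) - 1)
    return BIAS_TABLE[i, j]

print(edge_bias(80, 100))    # bias, in nm, applied to that edge segment
```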
The correction accuracy of model-based OPC (MB-OPC) depends critically on its edge offset calculation scheme. In a normal MB-OPC algorithm, only the impact of the current edge is considered when calculating each edge offset. As the k1 process factor decreases and design complexity increases, however, the interaction between edge segments becomes much stronger. As a result, the normal MB-OPC algorithm may converge slowly or not at all, and controlling the edge placement error (EPE) becomes harder. To address this issue, a new kind of MB-OPC algorithm based on the MEEF matrix, also called matrix OPC, was introduced. In this paper, a variant of the matrix OPC algorithm suitable for kernel-based lithography models is proposed. Compared with the MEEF-matrix-based algorithm, it requires less computation in matrix construction. A sparsity control scheme and an RT reuse scheme are also used to keep the correction speed close to that of a normal MB-OPC algorithm while retaining its advantages in EPE control.
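The toy example below illustrates the matrix-OPC idea on synthetic numbers: collect EPEs in a vector, build a sensitivity matrix of segment interactions, drop weak couplings (sparsity control), and solve for all segment moves at once; the matrix and values are not from the paper.

```python
# Toy matrix-OPC step: entry (i, j) of A is the change of EPE at segment i
# per unit move of segment j. Solving A * moves = -epe corrects all
# segments simultaneously, accounting for their interactions. Synthetic data.
import numpy as np

n = 6
epe = np.array([4.0, -3.0, 2.5, -1.0, 0.5, -2.0])   # nm, synthetic EPEs
# synthetic sensitivity matrix: strong self term, weaker neighbor coupling
A = np.eye(n) + 0.3 * np.eye(n, k=1) + 0.3 * np.eye(n, k=-1) + 0.05 * np.eye(n, k=2)

A_sparse = np.where(np.abs(A) > 0.1, A, 0.0)        # sparsity control: drop weak couplings
moves = np.linalg.solve(A_sparse, -epe)             # simultaneous segment moves

# a normal MB-OPC step would ignore coupling entirely:
moves_normal = -epe / np.diag(A)
print(moves, moves_normal)
```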
SOFT (Smooth OPC Fixing Technique) is a new OPC flow developed from the basic OPC framework. It provides a new method to reduce the computational cost and complexity of ECO-OPC (engineering change order optical proximity correction). In this paper, we introduce polygon comparison to extract the necessary, but possibly lost, fragmentation and offset information from the previous post-OPC layout. By reusing these data, we can start the modification of each segment from a more accurate initial offset. In addition, the fragmentation used at the patch boundary in the previous OPC run becomes available, allowing engineers to stitch the regional ECO-OPC result back into the full post-OPC layout seamlessly. To handle the ripple effect in OPC, we compare each segment's movement in each loop, which largely frees the fixing speed from the limitation of patch size. We handle layout re-modification, especially for three basic kinds of ECO-OPC processes, while maintaining design closure elsewhere. Our experimental results show that, by utilizing the previous post-OPC layout, full-chip ECO-OPC achieves over 5X acceleration, and the regional ECO-OPC result can be stitched back into the whole layout seamlessly, with the ripple effect of the lithography interaction accounted for.