Mask Error Enhancement Factor (MEEF) has been a standard measure of mask quality [1]. One of the key
assumptions in the construction of MEEF is that mask CD uniformity does not depend on the shape of the mask
feature and can be considered a constant for a given mask process. This assumption is no longer valid for
small (<100 nm), curvilinear, or diagonal features. In this paper we extend the definition of MEEF to be valid for all
mask shapes and call the new metric extended MEEF, or eMEEF. We also demonstrate, using the example of ILT features, that
eMEEF improves the predictability of mask and wafer CD uniformity, sometimes changing the overall conclusion about
mask/wafer manufacturability.
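For reference, the conventional MEEF that eMEEF generalizes is the wafer CD change per unit mask CD change, with the mask error scaled to wafer dimensions by the exposure-tool demagnification. A minimal sketch (the 4x magnification and the example values are illustrative, not taken from the paper):

```python
def meef(delta_cd_wafer_nm, delta_cd_mask_nm, magnification=4.0):
    """Conventional MEEF: wafer CD change per unit mask CD change,
    with the mask CD error scaled to wafer dimensions by the system
    magnification (typically 4x for optical lithography)."""
    return delta_cd_wafer_nm / (delta_cd_mask_nm / magnification)

# Example: a 4 nm mask CD error (1 nm at wafer scale after 4x
# demagnification) producing a 2.5 nm wafer CD error gives MEEF = 2.5.
print(meef(2.5, 4.0))
```

A MEEF above 1 means mask errors are amplified on the wafer, which is why its shape dependence for small features matters.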
Model-Based Mask Data Preparation (MB-MDP) has been discussed in the literature for its benefits in reducing mask
write times [1][2]. By being model based (i.e., simulation based), overlapping shots, per-shot dose modulation, and
circular and other character projection shots are enabled. This reduces variable shaped beam (VSB) shot count for
complex mask shapes, and particularly ideal ILT shapes [3]. In this paper, the authors discuss another even more
important aspect of MB-MDP. MB-MDP enhances CD Uniformity (CDU) on the mask, and therefore on the wafer.
Mask CDU is improved for sub-80nm features on mask through the natural increase in dose that overlapping provides,
and through per-shot dose modulation. The improvement in CDU comes at the cost of some additional write time for the
less complex EUV masks with only rectangular features, since those masks do not carry the large write times that come
from complex SRAFs. For ArF masks at the critical layers of the 20 nm logic node and below, complex SRAFs are
unavoidable. For these shapes, MB-MDP enhances CDU while simultaneously reducing write times. Simulated and
measured comparisons of the conventional and MB-MDP methodologies are presented.
High resolution airborne multispectral and thermal infrared imagery was acquired over the Mojave River, California with
the Utah State University airborne remote sensing system, integrated with the LASSI imaging lidar, also built and operated at USU. The data were acquired in pre-established mapping blocks over a two-day period, covering approximately 144 km of the Mojave River floodplain and riparian zone, approximately 1500 meters in width. The
multispectral imagery (green, red, and near-infrared bands) was ortho-rectified using the lidar point cloud data through a direct geo-referencing technique. The thermal infrared imagery was rectified to the multispectral ortho-mosaics. The lidar point cloud data were classified to separate ground-surface returns from vegetation returns, as well as structures such as buildings and bridges. One-meter DEMs were produced from the surface returns, along with vegetation canopy height
also at 1-meter grids. Two surface energy balance models that use remote sensing inputs were applied to the high
resolution imagery, namely the SEBAL and the Two Source Model. The model parameterizations were slightly modified to accept high resolution imagery (1-meter) as well as the lidar-based vegetation height product, which was used to estimate the aerodynamic roughness length. Both models produced very similar results in terms of latent heat fluxes (LE). Instantaneous LE values were extrapolated to daily evapotranspiration rates (ET) using the reference ET
fraction, with data obtained from a local weather station. Seasonal rates were obtained by extrapolating the reference ET
fraction according to the seasonal growth habits of the different species. Vegetation species distribution and area were
obtained from classification of the multispectral imagery. Results indicate that cottonwood and salt cedar (tamarisk) had
the highest evapotranspiration rates followed by mesophytes, arundo, mesquite and desert shrubs. This research showed that high-resolution airborne multispectral and thermal infrared imagery integrated with precise full-waveform lidar data can be used to estimate evapotranspiration and water use by riparian vegetation.
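The reference-ET-fraction extrapolation described above can be sketched as follows. The latent-heat-of-vaporization constant and the example values are assumptions for illustration, not values from the study:

```python
LAMBDA_V = 2.45e6  # latent heat of vaporization of water, J/kg (approx.)

def le_to_mm_per_hr(le_w_m2):
    """Convert an instantaneous latent heat flux (W/m^2) to an
    equivalent evapotranspiration rate in mm of water per hour.
    1 kg of water over 1 m^2 corresponds to a 1 mm depth."""
    return le_w_m2 * 3600.0 / LAMBDA_V

def daily_et_mm(le_inst_w_m2, et_ref_inst_mm_hr, et_ref_daily_mm):
    """Extrapolate instantaneous LE to a daily ET total via the
    reference ET fraction (ETrF), assumed constant over the day.
    Reference ET values would come from a local weather station."""
    etrf = le_to_mm_per_hr(le_inst_w_m2) / et_ref_inst_mm_hr
    return etrf * et_ref_daily_mm
```

Seasonal totals then follow by accumulating daily values weighted by each species' growth habit, as the abstract describes.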
The Eyesafe Ladar Test-bed (ELT) is a raster scanning, single-beam, energy-detection ladar with the capability
of digitizing and recording the return pulse waveform at 2 GHz in the field for off-line 3D point cloud formation
research in the laboratory. The ELT serves as a prime tool in understanding the behavior of ladar waveforms.
Signal processing techniques have been applied to the ELT waveform in an effort to exploit the signal with
respect to noise reduction, range resolution improvement, and ability to discriminate between two surfaces of
similar range.
This paper presents a signal processing method used on the ELT waveform. In the processing, three deconvolution
techniques were investigated: the Wiener filter, Richardson-Lucy deconvolution, and a new method that
synthesizes the surface response using least squares minimization. Range error and range resolution are reported
for these methods.
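Of the three techniques, the Wiener filter has the most compact closed form. A minimal frequency-domain sketch, assuming the system pulse shape is known and the noise-to-signal power ratio is a constant (in practice it would be estimated from the data):

```python
import numpy as np

def wiener_deconvolve(waveform, pulse, nsr=0.01):
    """Frequency-domain Wiener deconvolution of a recorded waveform by
    a known system pulse shape. `nsr` is an assumed constant
    noise-to-signal power ratio regularizing the inverse filter."""
    n = len(waveform)
    H = np.fft.rfft(pulse, n)                 # system response spectrum
    Y = np.fft.rfft(waveform, n)              # recorded waveform spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener inverse filter
    return np.fft.irfft(G * Y, n)
```

Sharpening the recorded return this way is what improves range resolution and the separability of two closely spaced surfaces.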
The development of an experimental full-waveform LADAR system has been enhanced with the assistance of the
LadarSIM system simulation software. The Eyesafe LADAR Test-bed (ELT) was designed as a raster scanning,
single-beam, energy-detection LADAR with the capability of digitizing and recording the return pulse waveform
at up to 2 GHz for 3D off-line image formation research in the laboratory. To assist in the design phase, the
full-waveform LADAR simulation in LadarSIM was used to simulate the expected return waveforms for various
system design parameters, target characteristics, and target ranges. Once the design was finalized and the ELT
constructed, the measured specifications of the system and experimental data captured from the operational
sensor were used to validate the behavior of the system as predicted during the design phase.
This paper presents the methodology used, and lessons learned from this "design, build, validate" process.
Simulated results from the design phase are presented, and these are compared to simulated results using measured
system parameters and operational sensor data. The advantages of this simulation-based process are also
presented.
A new experimental full-waveform LADAR system has been developed that fuses a pixel-aligned color imager within
the same optical path. The Eye-safe LADAR Test-bed (ELT) consists of a single beam energy-detection LADAR that
raster scans within the same field of view as an aperture-sharing color camera. The LADAR includes a pulsed 1.54 μm
erbium-doped fiber laser, a high-bandwidth receiver, a fine steering mirror for raster scanning, and a ball-joint gimbal
mirror for steering over a wide field of regard. The system has a 6-inch aperture, and the LADAR has a pulse
rate of up to 100 kHz. The color imager is folded into the optical path via a cold mirror. A novel feature of the ELT is its
ability to capture LADAR and color data that are registered temporally and spatially. This allows immediate direct
association of LADAR-derived 3D point coordinates with pixel coordinates of the color imagery. The mapping allows
accurate pointing of the instrument at targets of interest and immediate insight into the nature and source of the LADAR
phenomenology observed. The system is deployed on a custom van designed to enable experimentation with a variety of
objects.
In this work we examine the dynamic implications of active and attentive scanning for LADAR based automatic
target/object recognition, and show that a dynamically constrained, scanner-based ATR system's ability to
identify objects in real-time is improved through attentive scanning. By actively and attentively scanning only
salient regions of an image at the density required for recognition, the amount of time it takes to find a target
object in a random scene is reduced. A LADAR scanner's attention is guided by identifying areas-of-interest using
a visual saliency algorithm on electro-optical images of a scene to be scanned. Identified areas-of-interest are
inspected in order of decreasing saliency by scanning the most salient area and saccading to the next most salient
area until the object-of-interest is recognized. No ATR algorithms are used; instead, an object is considered to
be recognized when a threshold density of pixels-on-target is reached.
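The saliency-ordered inspection loop can be sketched as follows; the region representation and the recognition criterion (a pixels-on-target threshold, as in the abstract) are simplified stand-ins:

```python
def attentive_scan(regions, density_threshold):
    """Inspect candidate regions in order of decreasing saliency,
    saccading to the next region until one accumulates enough
    pixels-on-target. `regions` is a list of
    (saliency, pixels_on_target) pairs -- hypothetical stand-ins for
    the saliency map and the scan result for each area-of-interest."""
    scanned = 0
    for saliency, pixels in sorted(regions, reverse=True):
        scanned += 1
        if pixels >= density_threshold:
            return scanned  # number of regions scanned before recognition
    return None  # object not found in any salient region
```

Because only the few most salient regions are scanned at full density, the expected time to reach the recognition threshold drops relative to a uniform raster of the whole scene.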
The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include the modeling of the radiometry of Ladar beam footprints. This paper will discuss our validation of the radiometric model and present a practical approach to future validation work.
In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were gathered first, and then the unknown parameters of the system were determined from simulation test scenarios. This was done so as to isolate as many unknown variables as possible and then build on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied.
Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.
The autonomous close-in maneuvering necessary for the rendezvous and docking of two spacecraft requires a relative navigation sensor system that can determine the relative position and orientation (pose) of the controlled spacecraft with respect to the target spacecraft. Lidar imaging systems offer the potential for accurately measuring the relative six degree-of-freedom positions and orientations and the associated rates.
In this paper, we present simulation results generated using a high fidelity modeling program. A simulated lidar system is used to capture close-proximity range images of a model target spacecraft, producing 3-D point cloud data. Each sequentially gathered point cloud is compared with the previous one using a real-time, point-plane, correspondence-less variant of the Iterative Closest Points (ICP) algorithm. The resulting range and pose estimates are used in turn to prime the next time-step iteration of the ICP algorithm. Results from detailed point-plane simulations are presented, and the implications for real-time implementation are discussed.
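A single linearized point-to-plane least-squares step of the kind used inside such ICP variants can be sketched as follows. The correspondence search is omitted, and the small-angle rotation parameterization is an assumption of this sketch, not a description of the paper's exact variant:

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP step: solve for a small
    rotation (rx, ry, rz) and translation (tx, ty, tz) minimizing
    sum(((R @ p + t - q) . n)^2) under the small-angle approximation.
    Correspondences are assumed given: src[i] pairs with dst[i],
    whose surface normal is normals[i]."""
    A = np.hstack([np.cross(src, normals), normals])  # N x 6 Jacobian
    b = np.einsum('ij,ij->i', dst - src, normals)     # signed residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # (rx, ry, rz, tx, ty, tz)
```

Iterating this step, and seeding each new frame with the previous frame's pose estimate, is what keeps the tracking loop real-time.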
This paper presents a generic simulation model for a ladar scanner with up to three scan elements, each having a steering, stabilization and/or pattern-scanning role. Of interest is the development of algorithms that automatically generate commands to the scan elements given beam-steering objectives out of the ladar aperture, and the base motion of the sensor platform. First, a straight-forward single-element body-fixed beam-steering methodology is presented. Then a unique multi-element redirective and reflective space-fixed beam-steering methodology is explained. It is shown that standard direction cosine matrix decomposition methods fail when using two orthogonal, space-fixed rotations, thus demanding the development of a new algorithm for beam steering. Finally, a related steering control methodology is presented that uses two separate optical elements mathematically combined to determine the necessary scan element commands. Limits, restrictions, and results on this methodology are presented.
Ladar systems are an emerging technology with applications in many fields. Consequently, simulations for these systems have become a valuable tool in the improvement of existing systems and the development of new ones. This paper discusses the theory and issues involved in reliably modeling the return waveform of a ladar beam footprint in the Utah State University LadarSIM simulation software. Emphasis is placed on modeling system-level effects that allow
an investigation of engineering tradeoffs in preliminary designs, and validation of behaviors in fabricated designs. Efforts have been made to decrease the necessary computation time while still maintaining a usable model. A full-waveform simulation is implemented that models the optical signals received at the detector, followed by the electronic signals and discriminators commonly encountered in contemporary direct-detection ladar systems. Waveforms are modeled using a novel hexagonal sampling process applied across the ladar beam footprint. Each sample is weighted
using a Gaussian spatial profile for a well-formed laser footprint. Model fidelity is also improved by using a bidirectional reflectance distribution function (BRDF) for target reflectance. Once photons are converted to electrons, waveform processing is used to detect first, last, or multiple return pulses. The detection methods discussed in this paper are a threshold detection method, a constant fraction method, and a derivative zero-crossing method. Various detection phenomena, such as range error, walk error, dropouts, and false alarms, can be studied using these detection methods.
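The simplest of these discrimination methods, threshold detection, can be sketched as follows; the linear interpolation between bracketing samples is an illustrative refinement, not necessarily what LadarSIM implements:

```python
import numpy as np

def threshold_detect(waveform, threshold, dt_ns):
    """First-return threshold detection: report the time (ns) at which
    the waveform first crosses the threshold, linearly interpolating
    between the bracketing samples to reduce quantization error."""
    above = np.nonzero(waveform >= threshold)[0]
    if above.size == 0:
        return None  # dropout: no return detected
    i = above[0]
    if i == 0:
        return 0.0
    # interpolate the crossing time between samples i-1 and i
    frac = (threshold - waveform[i - 1]) / (waveform[i] - waveform[i - 1])
    return (i - 1 + frac) * dt_ns
```

A fixed threshold exhibits walk error, since stronger returns cross it earlier relative to their peak; the constant-fraction and zero-crossing methods mentioned above exist largely to suppress that effect.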
USU LadarSIM Release 2.0 is a ladar simulator that has the ability to feed high-level mission scripts into a processor
that automatically generates scan commands during flight simulations. The scan generation depends on specified flight
trajectories and scenes consisting of terrain and targets. The scenes and trajectories can either consist of simulated or
actual data. The first modeling step produces an outline of scan footprints in xyz space. Once mission goals have been
analyzed and it is determined that the scan footprints are appropriately distributed or placed, specific scans can then be
chosen for the generation of complete radiometry-based range images and point clouds. The simulation is capable of
quickly modeling ray-trace geometry associated with (1) various focal plane arrays and scanner configurations and (2)
various scenes and trajectories associated with particular maneuvers or missions.
In recent years, NASA's interest in autonomous rendezvous and docking operations with impaired or non-cooperative spacecraft has grown extensively. In order to maneuver and dock, a servicing spacecraft must be able to determine the relative 6 degree-of-freedom (6 DOF) motion between the vehicle and the target spacecraft. One method to determine the relative 6 DOF position and attitude is through lidar imaging. A flash lidar sensor system can capture close-proximity range images of the target spacecraft, producing 3-D point cloud data sets. These sequentially collected point-cloud data sets can be compared to a point cloud image of the target at a known location using a point correspondence-less variant of the Iterative Closest Points (ICP) algorithm to determine the relative 6 DOF displacements. Simulation experiments indicate that the MSE, angular error, mean, and standard deviations for position and orientation estimates did not vary as a function of position and attitude. Furthermore, the computational times required by this algorithm were comparable to those of previously reported point-to-point and point-to-plane ICP variants.
An integrated ladar/EO imager has been developed that synchronizes and aligns CMOS digital camera readouts with the scan motion of a time-of-flight pulsed ladar. A prototype has been developed at the Utah State University Center for Advanced Imaging Ladar that reads out a 13 by 13 patch of RGB pixels within the subtended angle of a single ladar beam footprint. The readout location for the patch is slaved to the ladar and follows the ladar beam as it is scanned within the field-of-view. As the scanning occurs, the x-y-z position of each footprint and associated image patch is determined via the ladar. Multiple patches can then be mosaicked to build up a 3D image composed of 3D texture elements (texels) or 3D splats. Because of its ability to produce texels on-the-fly, the system is called a Texel Camera. The approach precludes mismatched occlusions and other ill effects when motion occurs in the scene. The existing prototype consists of a single-channel flying-spot ladar running at approximately 470 shots/second and a color imager running at approximately 160 times the shot rate. Other designs are in development that employ line-flash and array-flash ladar components that will run at pixel rates up to two orders of magnitude faster. The ability to create high-fidelity combined ladar/EO data sets in real time will be advantageous for time-critical applications such as cruise missile automatic target recognition. The design has the potential for applications in space rendezvous and docking, airborne automatic target recognition, surveillance from a tripod, and others that benefit from real-time 3D imagery creation.