We present an exploration of collection geometries for producing three-dimensionally (3D) focused synthetic aperture radar (SAR) derived point clouds. We consider collection geometries that can be produced by a series of continuous curves, such as multiple flight paths of a fixed-wing aircraft or multiple passes of a satellite orbiting the Earth. As part of our analysis, we use sparse methods to reconstruct undersampled radar data. We use back-projection to focus the radar data into the spatial domain, onto a uniform volumetric grid. Additionally, we use a 3D resonance-finding algorithm to extract scattering centers from volumetric radar data to produce 3D point clouds. Our analysis is based upon synthetic radar data produced using parameters derived from our laboratory's indoor turntable inverse synthetic aperture radar (ISAR) system. A key point of our analysis is to determine how many repeat passes are required to achieve a given fidelity of an object's 3D representation. The analysis includes a comparison with interferometric methods, particularly with regard to fidelity and point cloud density. We use a digital model of a civilian pickup truck that has been validated for use in synthetic prediction, both as a full-size vehicle in outdoor collects and as a reduced-scale model measured indoors in our lab. Future research directions are also discussed.
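The back-projection step described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name, array shapes, interpolation scheme, and the Ka-band center frequency are all assumptions made for the sketch.

```python
import numpy as np

def backproject(range_profiles, ant_pos, range_bins, voxels, fc=3.5e10):
    """Focus range-compressed pulses onto a 3D voxel grid (hypothetical sketch).

    range_profiles : (n_pulses, n_bins) complex range-compressed data
    ant_pos        : (n_pulses, 3) antenna phase-center positions [m]
    range_bins     : (n_bins,) range corresponding to each bin [m]
    voxels         : (n_vox, 3) voxel center coordinates [m]
    fc             : center frequency [Hz] (Ka-band value assumed here)
    """
    c = 299792458.0
    image = np.zeros(voxels.shape[0], dtype=complex)
    for p in range(range_profiles.shape[0]):
        # Range from this pulse's antenna position to every voxel
        r = np.linalg.norm(voxels - ant_pos[p], axis=1)
        # Linearly interpolate the complex range profile at each voxel's range
        re = np.interp(r, range_bins, range_profiles[p].real)
        im = np.interp(r, range_bins, range_profiles[p].imag)
        # Remove the residual carrier phase and accumulate coherently
        image += (re + 1j * im) * np.exp(1j * 4 * np.pi * fc * r / c)
    return image
```

Because the accumulation is coherent, returns from a true scatterer add in phase across pulses while other voxels average toward zero, which is what makes the subsequent scattering-center extraction on the volumetric grid possible.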
We present an analysis of image reconstruction quality that includes the use of traditional and deep-learning quality metrics for sparse reconstructions of three-dimensionally (3D) focused synthetic aperture radar (SAR) data. A major goal of our analysis is to explore the utility of various metrics in 3D focused scenarios. We make use of synthetic prediction to help fully span the large parameter space of a two-dimensional cross-range aperture. The analysis, including the synthetic prediction, will help guide future measurements of scale models in our compact radar range.
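As an example of the traditional metrics mentioned above, a peak signal-to-noise ratio (PSNR) comparison between a reference image and a sparse reconstruction can be sketched as below. This is an illustrative assumption of one such metric applied to image magnitudes, not necessarily the specific metric set used in the paper.

```python
import numpy as np

def psnr(reference, reconstruction):
    """PSNR in dB between two (possibly complex) images, compared by magnitude.

    Illustrative sketch: uses the reference peak as the dynamic-range ceiling.
    """
    ref = np.abs(np.asarray(reference)).astype(float)
    rec = np.abs(np.asarray(reconstruction)).astype(float)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0.0:
        return np.inf  # identical images
    return 20.0 * np.log10(ref.max() / np.sqrt(mse))
```

Higher PSNR indicates a reconstruction closer to the fully sampled reference; for sparse reconstructions it can be tracked as a function of the undersampling factor.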
We present experiments to explore the use of deep neural network classification models for estimating the orientation of objects with linear structures from polarimetric radar data. We derive all radar data from two physical model aircraft and their corresponding computerized surface models. We make extensive use of synthetic prediction to help fully span the large parameter space, as is consistent with best practice. Synthetic predictions are based upon a linear quad-polarized (H: horizontal, V: vertical) Ka-band stepped-frequency inverse synthetic aperture radar (ISAR) turntable measurement system located inside the Air Force Research Laboratory (AFRL) Sensors Directorate's Indoor Range. The use of multiple polarimetric channels in a deep learning classification framework is shown to significantly help estimate orientation when the co-polarization channels significantly differ from each other. Future research directions are discussed.
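A preprocessing step for such a multi-channel classifier might look like the sketch below: the four linear polarimetric channels (HH, HV, VH, VV) are stacked into one multi-channel input, and a simple co-polarization difference cue is computed. The function names, dB conversion, and shapes are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def stack_quad_pol(hh, hv, vh, vv):
    """Stack channel magnitudes (in dB) into a (4, H, W) real-valued tensor,
    a common input layout for CNN classifiers (assumed here)."""
    mags = [20.0 * np.log10(np.abs(c) + 1e-12) for c in (hh, hv, vh, vv)]
    return np.stack(mags, axis=0)

def copol_difference(hh, vv):
    """Mean absolute dB difference between the co-pol channels.

    Large values indicate orientation-sensitive linear structure, the regime
    in which the abstract reports polarimetry helps most.
    """
    d = (20.0 * np.log10(np.abs(hh) + 1e-12)
         - 20.0 * np.log10(np.abs(vv) + 1e-12))
    return float(np.mean(np.abs(d)))
```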