This paper describes a method to represent the predicted accuracy of an arbitrary 3d geospatial product from a specific type or class of products; for example, the class of 3d point clouds generated from EO imagery by vendor “abc” within date range “xyz”. The predicted accuracy is based on accuracy assessments of previous products from the same type or class of products; in particular, on corresponding sample statistics of geolocation error computed using ground truth or surveyed geolocations. The representation of predicted accuracy is theoretically rigorous, flexible, and practical, and is based on the underlying concepts of Mixed Gaussian Random Fields (MGRF). It also allows the representation of predicted accuracy to vary over the product, accommodating increased geolocation uncertainty for a priori “problem areas” in the product. The MGRF-based approach for the representation of predicted accuracy is particularly applicable to 3d geospatial products that do not have product-specific predicted accuracies generated with the product itself. This is the typical situation, particularly for commodities-based geospatial products. The paper also describes a method for the near-optimal adjustment of a geospatial product based on its predicted accuracy and its fusion with other products.
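The idea of predicted accuracy that varies over a product can be illustrated with a small simulation: spatially correlated geolocation errors whose standard deviation is inflated inside a flagged "problem area". This is a minimal sketch of the concept only, not the paper's MGRF formulation; all names, the exponential correlation model, and the numeric values are illustrative assumptions.

```python
import numpy as np

def simulate_error_field(coords, sigma, corr_length, rng):
    """Draw one sample of spatially correlated geolocation errors.

    coords      : (n, 2) array of horizontal positions
    sigma       : (n,) per-point error standard deviations (may vary
                  over the product, e.g. inflated in "problem areas")
    corr_length : e-folding distance of the exponential correlation
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    corr = np.exp(-d / corr_length)          # positive definite correlation
    cov = np.outer(sigma, sigma) * corr      # scale to a covariance matrix
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(sigma)))
    return L @ rng.standard_normal(len(sigma))

rng = np.random.default_rng(0)
xy = np.array([[x, y] for x in range(5) for y in range(5)], float)
sig = np.where(xy[:, 0] >= 3, 3.0, 1.0)      # inflated sigma in a "problem area"
err = simulate_error_field(xy, sig, 2.0, rng)
```

Repeated draws reproduce both the specified per-point variances and the spatial correlation between nearby errors.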
This paper presents recommended principles and processes for the Quality Assurance (QA) and Quality Control (QC) of estimators and their outputs in Geolocation Systems. Relevant estimators include both batch estimators, such as Weighted Least Squares (WLS) estimators, and (near) real-time sequential estimators, such as Kalman filters. The estimators typically solve for (estimate) the value of a state vector X_true containing 3d geolocations and/or corrections to the sensor metadata corresponding to the measurements supplied to the estimator. Along with a best estimate X of X_true, the estimator outputs predicted accuracy, typically an error covariance matrix CovX corresponding to the error in the solution X. It is essential that the estimator output a reliable and near-optimal estimate X as well as a reliable error covariance matrix CovX. This paper presents various procedures, including detailed algorithms, to help ensure that this is the case, and if not, flag the problem along with supporting metrics. The majority of the QA/QC procedures involve data internal to the estimator, such as measurement residuals, and can be built into the estimator. Examples include measurement editing, solution convergence detection, and confidence interval tests based on the reference variance.
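The confidence interval test on the reference variance mentioned above is standard WLS practice and can be sketched as follows. The synthetic problem, the two-sided test structure, and the Wilson-Hilferty chi-square quantile approximation are illustrative choices, not the paper's specific algorithm.

```python
import numpy as np
from statistics import NormalDist

def chi2_ppf(p, k):
    """Wilson-Hilferty approximation to the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    return k * (1.0 - 2.0 / (9 * k) + z * (2.0 / (9 * k)) ** 0.5) ** 3

def reference_variance_test(A, W, y, x_hat, alpha=0.05):
    """Two-sided test that the a posteriori reference variance is
    consistent with 1.0 (i.e. the measurement weights W are realistic)."""
    v = y - A @ x_hat                      # measurement residuals
    dof = A.shape[0] - A.shape[1]
    s0_sq = float(v @ W @ v) / dof         # reference variance
    lo = chi2_ppf(alpha / 2, dof) / dof
    hi = chi2_ppf(1 - alpha / 2, dof) / dof
    return s0_sq, (lo <= s0_sq <= hi)

# Synthetic WLS problem with correctly modeled measurement noise
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 3))
x_true = np.array([1.0, -2.0, 0.5])
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(60)
W = np.eye(60) / sigma**2                  # weights = inverse measurement variance
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
s0_sq, ok = reference_variance_test(A, W, y, x_hat)
```

A reference variance far from 1.0 flags optimistic or pessimistic measurement weighting, one of the internal QA/QC checks the abstract describes.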
This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to
investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of
feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a
quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude
(pointing) information. In particular, tie points are automatically measured between adjacent frames using standard
optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based
on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from
motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed.
Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error
propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check
points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used
other than for evaluation of solution errors and corresponding accuracy.
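The final extraction step, a least squares intersection of rays from registered frames with rigorous error propagation, can be sketched as follows. The two-frame geometry is synthetic, and mapping all registration error into an uncorrelated per-ray perpendicular miss is a simplifying assumption, not the test bed's error model.

```python
import numpy as np

def triangulate(centers, dirs, sigma_m):
    """Least squares intersection of image rays with error propagation.

    centers : (k, 3) sensor positions; dirs : (k, 3) unit ray directions.
    Each ray contributes the condition (I - d d^T)(X - c) = 0; assuming
    an uncorrelated perpendicular miss of sigma_m meters per ray, the
    predicted accuracy of X is sigma_m^2 (A^T A)^{-1}.
    """
    A_rows, b_rows = [], []
    for c, d in zip(centers, dirs):
        P = np.eye(3) - np.outer(d, d)     # projector orthogonal to the ray
        A_rows.append(P)
        b_rows.append(P @ c)
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    cov = sigma_m**2 * np.linalg.inv(A.T @ A)   # predicted accuracy of X
    return X, cov

# Two frames separated in imaging geometry, viewing the same ground point
truth = np.array([100.0, 50.0, 10.0])
centers = np.array([[0.0, 0.0, 500.0], [300.0, 0.0, 500.0]])
dirs = np.array([(truth - c) / np.linalg.norm(truth - c) for c in centers])
X, cov = triangulate(centers, dirs, sigma_m=0.5)
```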
The specification of geolocation accuracy requirements and their validation are essential for the proper performance of a
Geolocation System and for trust in resultant three dimensional (3d) geolocations. The same is true for predicted
accuracy requirements and their validation, which assume that each geolocation produced (extracted) by the system is
accompanied by an error covariance matrix that characterizes its specific predicted accuracy.
The extracted geolocation and its error covariance matrix are standard outputs of (near) optimal estimators, either
associated (internally) with the Geolocation System itself, or with a “downstream” application that inputs a subset of
Geolocation System output, such as sensor data/metadata: for example, a set of images and corresponding metadata of
the imaging sensor’s pose and its predicted accuracy. This output allows for subsequent (near) optimal extraction of
geolocations and associated error covariance matrices based on the application’s measurements of pixel locations in the
images corresponding to objects of interest. This paper presents recommended methods and detailed equations for the
specification and validation of both accuracy and predicted accuracy requirements for a general Geolocation System.
accuracy requirements. The specification/validation of accuracy requirements is independent of the specification/validation of predicted
accuracy requirements. The methods presented in this paper are theoretically rigorous yet practical.
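Two such independent checks can be sketched as follows: a sample accuracy statistic (CE90) compared against a requirement, and a normalized-error consistency test of the predicted covariances. The thresholds, synthetic check-point data, and pass criteria below are illustrative, not the paper's recommended values.

```python
import numpy as np

def ce90_sample(dx, dy):
    """Sample CE90: 90th percentile of horizontal radial check-point error."""
    return float(np.percentile(np.hypot(dx, dy), 90))

def predicted_accuracy_consistent(errors, covs, tol=0.25):
    """Mean normalized squared error e^T C^{-1} e should be near the
    state dimension if the predicted covariances are realistic."""
    n, dim = errors.shape
    nse = np.mean([e @ np.linalg.solve(C, e) for e, C in zip(errors, covs)])
    return abs(nse / dim - 1.0) < tol

# Synthetic check-point errors drawn from the predicted covariance itself,
# so both the accuracy statistic and the consistency test should behave
rng = np.random.default_rng(2)
C = np.diag([4.0, 4.0, 1.0])               # predicted 3d error covariance (m^2)
errors = rng.multivariate_normal(np.zeros(3), C, size=400)
covs = [C] * 400
ce90 = ce90_sample(errors[:, 0], errors[:, 1])
ok = predicted_accuracy_consistent(errors, covs)
```

The two checks are computed from the same check points but validate different requirements: the first scores the errors themselves, the second scores the error covariance matrices that accompany them.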
In order to better understand the issues associated with Full Motion Video (FMV) geopositioning and to develop corresponding strategies and algorithms, an integrated test bed is required. It is used to evaluate the performance of various candidate algorithms associated with registration of the video frames and subsequent geopositioning using the registered frames. Major issues include reliable error propagation or predicted solution accuracy, optimal vs. suboptimal vs. divergent solutions, robust processing in the presence of poor or non-existent a priori estimates of sensor metadata, difficulty in the measurement of tie points between adjacent frames, poor imaging geometry including small field-of-view and little vertical relief, and no control (points). The test bed modules must be integrated with appropriate data flows between them. The test bed must also ingest/generate real and simulated data and support evaluation of corresponding performance based on module-internal metrics as well as comparisons to real or simulated “ground truth”. Selection of the appropriate modules and algorithms must be either operator-specified or fully automatic. An FMV test bed has been developed and continues to be improved with the above characteristics. The paper describes its overall design as well as key underlying algorithms, including a recent update to “A matrix” generation, which allows for the computation of arbitrary inter-frame error cross-covariance matrices associated with Kalman filter (KF) registration in the presence of dynamic state vector definition, necessary for rigorous error propagation when the contents/definition of the KF state vector changes due to added/dropped tie points. Performance of a tested scenario is also presented.
KEYWORDS: Monte Carlo methods, Error analysis, Statistical analysis, Computer simulations, Data modeling, Roads, Statistical modeling, Correlation function, Mining, Complex systems
Geostatistical modeling of spatial uncertainty has its roots in the mining, water and oil reservoir exploration communities, and has great potential for broader applications as proposed in this paper. This paper describes the underlying statistical models and their use in both the estimation of quantities of interest and the Monte-Carlo simulation of their uncertainty or errors, including their variance or expected magnitude and their spatial correlations or inter-relationships. These quantities can include 2D or 3D terrain locations, feature vertex locations, or any specified attributes whose statistical properties vary spatially. The simulation of spatial uncertainty or errors is a practical and powerful tool for understanding the effects of error propagation in complex systems. This paper describes various simulation techniques and trades off their generality against complexity and speed. One technique recently proposed by the authors, Fast Sequential Simulation, has the ability to simulate tens of millions of errors with specifiable variance and spatial correlations in a few seconds on a laptop computer. This ability allows for the timely evaluation of resultant output errors or the performance of a “down-stream” module or application. It also allows for near-real time evaluation when such a simulation capability is built into the application itself.
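A minimal instance of sequential simulation (not necessarily the authors' Fast Sequential Simulation algorithm) generates errors along a one-dimensional traverse in O(n), with a specifiable variance and an exponential spatial correlation:

```python
import numpy as np

def sequential_sim(n, sigma, corr_dist, spacing, rng):
    """O(n) simulation of errors with standard deviation `sigma` and
    correlation exp(-d / corr_dist) between samples a distance d apart
    along a 1D traverse with uniform `spacing`."""
    rho = np.exp(-spacing / corr_dist)       # lag-1 correlation
    e = np.empty(n)
    e[0] = sigma * rng.standard_normal()     # stationary start
    w = sigma * np.sqrt(1.0 - rho**2) * rng.standard_normal(n - 1)
    for i in range(1, n):                    # each draw conditions only
        e[i] = rho * e[i - 1] + w[i - 1]     # on the previous sample
    return e

rng = np.random.default_rng(3)
e = sequential_sim(200_000, sigma=2.0, corr_dist=50.0, spacing=10.0, rng=rng)
```

Conditioning each new error only on the previous sample is what makes the method fast; richer conditioning neighborhoods generalize the same idea to 2D and 3D fields.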
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for
subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being
developed for such registration is presented that models relevant error sources in terms of the expected magnitude and
correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori
information of the sensor’s trajectory and attitude (pointing) information, in order to best deal with non-linearity effects.
Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for
corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori
accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered
sensor data and its a posteriori accuracy information are then made available to “down-stream” Multi-Image
Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image
optimal solution, including reliable predicted solution accuracy, is then performed for the object’s 3D coordinates. This
paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It
makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as
estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust,
direct search-based technique.
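The flavor of a direct search-based technique can be conveyed with a toy example: given GPS-derived sensor positions and tie-point rays, the unknown yaw of a second frame is recovered by brute-force search, scoring each candidate angle by the total miss distance between corresponding rays. The single-angle search, the synthetic scene, and all numeric values are illustrative assumptions, not the paper's method.

```python
import numpy as np

def Rz(t):
    """Rotation about the z axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def miss(c1, d1, c2, d2):
    """Perpendicular miss distance between two (nearly) skew rays."""
    n = np.cross(d1, d2)
    nn = np.linalg.norm(n)
    if nn < 1e-12:
        return np.linalg.norm(np.cross(c2 - c1, d1))
    return abs((c2 - c1) @ n) / nn

# Synthetic scene: ground tie points, two frames, frame-2 yaw unknown
rng = np.random.default_rng(4)
pts = rng.uniform([-50, -50, 0], [50, 50, 5], size=(20, 3))
c1, c2 = np.array([0.0, 0.0, 400.0]), np.array([120.0, 30.0, 400.0])
yaw_true = 0.31
d1 = [(p - c1) / np.linalg.norm(p - c1) for p in pts]
d2_body = [Rz(-yaw_true) @ ((p - c2) / np.linalg.norm(p - c2)) for p in pts]

# Direct search: score each candidate yaw by total tie-point miss distance
cands = np.linspace(-0.5, 0.5, 2001)
scores = [sum(miss(c1, a, c2, Rz(t) @ b) for a, b in zip(d1, d2_body))
          for t in cands]
yaw_hat = cands[int(np.argmin(scores))]
```

The search avoids Essential Matrix estimation entirely; the cost surface is evaluated directly, which is what gives this family of techniques its robustness to noise.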
KEYWORDS: Cameras, Atomic force microscopy, Video, Field emission displays, 3D image processing, Global Positioning System, Lithium, Airborne remote sensing, Video acceleration, Error analysis
In this paper we demonstrate a technique for extracting 3-dimensional data from 2-dimensional GPS-tagged video. We call our method Minimum Separation Vector Mapping (MSVM), and we evaluate its performance against traditional Structure From Motion (SFM) techniques in the field of GPS-tagged aerial imagery, including GPS-tagged full motion video (FMV). We explain how MSVM is better posed to natively exploit the a priori content of GPS tags when compared to SFM. We show that given GPS-tagged images and moderately well known intrinsic camera parameters, our MSVM technique consistently outperforms traditional SFM implementations under a variety of conditions.
KEYWORDS: Error analysis, Sensors, Matrices, Statistical analysis, Filtering (signal processing), Chemical elements, Modeling, 3D image processing, Data modeling, Correlation function
Whether statistically representing the errors in the estimates of sensor metadata associated with a set of images, or statistically representing the errors in the estimates of 3D location associated with a set of ground points, the corresponding “full” multi-state vector error covariance matrix is critical to exploitation of the data. For sensor metadata, the individual state vectors typically correspond to sensor position and attitude of an image. These state vectors, along with their corresponding full error covariance matrix, are required for optimal down-stream exploitation of the image(s), such as for the stereo extraction of a 3D target location and its corresponding predicted accuracy. In this example, the full error covariance matrix statistically represents the sensor errors for each of the two images as well as the correlation (similarity) of errors between the two images. For ground locations, the individual state vectors typically correspond to 3D location. The corresponding full error covariance matrix statistically represents the location errors in each of the ground points as well as the correlation (similarity) of errors between any pair of the ground points. It is required in order to compute reliable estimates of relative accuracy between arbitrary ground point pairs, and for the proper weighting of the ground points when used as control, in for example, a fusion process. This paper details the above, and presents practical methods for the representation of the full error covariance matrix, ranging from direct representation with large bandwidth requirements, to high-fidelity approximation methods with small bandwidth requirements.
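One high-fidelity, small-bandwidth representation stores only each ground point's 3x3 covariance plus the parameters of a positive definite inter-point correlation function; any block of the full matrix can then be reconstructed on demand. The exponential correlation model and the specific construction below are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def full_covariance(block_covs, coords, corr_dist):
    """Reconstruct the full multi-state error covariance from per-point
    3x3 covariances plus an exponential inter-point correlation model.

    Construction: full = blkdiag(L_i) (R kron I3) blkdiag(L_i)^T with
    C_i = L_i L_i^T, which is positive definite whenever R is.
    """
    n = len(block_covs)
    Ls = [np.linalg.cholesky(C) for C in block_covs]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    R = np.exp(-d / corr_dist)               # positive definite correlation
    full = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            full[3*i:3*i+3, 3*j:3*j+3] = R[i, j] * (Ls[i] @ Ls[j].T)
    return full

coords = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0]])
blocks = [np.diag([4.0, 4.0, 1.0]),
          np.diag([9.0, 4.0, 1.0]),
          np.diag([4.0, 9.0, 2.25])]
full = full_covariance(blocks, coords, corr_dist=100.0)
```

Relative accuracy between points i and j then follows directly from the reconstructed blocks as C_i + C_j - C_ij - C_ji, which is the use case the abstract highlights.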
The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline,
or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features.
The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free
public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create
enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of
automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual
truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial
uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for
conflation methods. Performance results are compiled for DCGIS street centerline features.
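The evaluation approach can be sketched end to end: perturb a truth layer with simulated error to create a second layer with known correspondences, run a matcher, and score it. The naive nearest-neighbor matcher, the point features standing in for street centerlines, and all parameters below are illustrative stand-ins for an actual conflation tool.

```python
import numpy as np

rng = np.random.default_rng(5)

# "Truth" layer of feature centroids; second layer is a perturbed copy
# with known correspondences, standing in for an independent source
truth = rng.uniform(0, 1000, size=(200, 2))
perturbed = truth + rng.normal(0.0, 3.0, size=truth.shape)

def nearest_match(layer_a, layer_b, max_dist):
    """Naive matcher: pair each A feature with its nearest B feature
    if within max_dist; returns (i, j) index pairs."""
    d = np.linalg.norm(layer_a[:, None, :] - layer_b[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return [(i, j) for i, (j, dm) in enumerate(zip(nearest, d.min(axis=1)))
            if dm <= max_dist]

pairs = nearest_match(truth, perturbed, max_dist=10.0)
correct = sum(i == j for i, j in pairs)    # known correspondence: i <-> i
precision = correct / len(pairs)
recall = correct / len(truth)
```

Because the simulated layer's correspondences are known by construction, precision and recall are computed without any manual truthing, which is the labor savings the paper targets.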
Geolocation of objects or points of interest on the ground from airborne sensors enables
many useful applications. While many commercial handheld cameras today perform
rudimentary geo-tagging of images, few outside of commercial or military tactical airborne sensors
have implemented the methods necessary to produce full three-dimensional coordinates as well as
perform rigorous metric error propagation to estimate the uncertainties of those calculated
coordinates. The critical ingredients for this fully metric capability include careful characterization
of the sensor system, capturing and disseminating a complete metadata profile with the imagery, and
having a validated sensor model to support the necessary transformations between the image space
and the ground space. This paper describes important characteristics of metadata and the methods of
geopositioning that can be applied, including their advantages and limitations. In addition, it
presents the benefits of using active sensors and some recent efforts focusing on geopositioning from
full-motion video (FMV) sensors.
A video-stream associated with an Unmanned System or Full Motion Video can support the extraction of ground
coordinates of a target of interest. The sensor metadata associated with the video-stream includes a time series of
estimates of sensor position and attitude, required for down-stream single frame or multi-frame ground point extraction,
such as stereo extraction using two frames in the video-stream that are separated in both time and imaging geometry.
The sensor metadata may also include a corresponding time history of sensor position and attitude estimate accuracy
(error covariance). This is required for optimal down-stream target extraction as well as corresponding reliable
predictions of extraction accuracy. However, for multi-frame extraction, this is only a necessary condition. The
temporal correlation of estimate errors (error cross-covariance) between an arbitrary pair of video frames is also
required. When the estimates of sensor position and attitude are from a Kalman filter, as typically the case, the
corresponding error covariances are automatically computed and available. However, the cross-covariances are not.
This paper presents an efficient method for their exact representation in the metadata using additional, easily computed,
data from the Kalman filter. The paper also presents an optimal weighted least squares extraction algorithm that
correctly accounts for the temporal correlation, given the additional metadata. Simulation-based examples are presented
that show the importance of correctly accounting for temporal correlation in multi-frame extraction algorithms.
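The exact cross-covariance representation rests on a standard Kalman filter identity: for filtered errors at steps k > j, E[e_k e_j^T] is the product of the easily computed factors A_i = (I - K_i H_i) Phi_i for i = j+1..k, applied to the filtered covariance P_j. A scalar sketch with illustrative dynamics and noise values:

```python
import numpy as np

# Scalar Kalman filter: x_{k+1} = phi * x_k + w,  z_k = x_k + v
phi, q, r = 0.95, 0.04, 0.25
n_steps = 6

def kf_gains(p0):
    """Run the covariance recursion; store filtered P_k and the
    factors A_k = (1 - K_k) * phi needed for cross-covariances."""
    P, Ps, As = p0, [], []
    for _ in range(n_steps):
        P_pred = phi * P * phi + q
        K = P_pred / (P_pred + r)
        P = (1 - K) * P_pred
        Ps.append(P)
        As.append((1 - K) * phi)
    return Ps, As

def cross_cov(Ps, As, k, j):
    """E[e_k e_j] for filtered errors, k >= j: (prod A_i, i=j+1..k) P_j."""
    out = Ps[j]
    for i in range(j + 1, k + 1):
        out = As[i] * out
    return out

Ps, As = kf_gains(p0=1.0)
c52 = cross_cov(Ps, As, 5, 2)    # cross-covariance between frames 5 and 2
```

Only the A_k factors need to be carried in the metadata alongside the usual covariances, which is what makes the exact representation inexpensive to disseminate.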
Sufficient conditions for strictly positive definite correlation functions are developed. These functions are associated
with wide-sense stationary stochastic processes and provide practical models for various errors affecting tracking, fusion,
and general estimation problems. In particular, the expected magnitude and temporal correlation of a stochastic error
process are modeled such that the covariance matrix corresponding to a set of errors sampled (measured) at different
times is positive definite (invertible), a necessary condition for many applications. The covariance matrix is generated
using the strictly positive definite correlation function and the sample times. As a related benefit, a large covariance
matrix can be naturally compressed for storage and dissemination by a few parameters that define the specific correlation
function and the sample times. Results are extended to wide-sense homogeneous multi-variate (vector-valued) random
fields. Corresponding strictly positive definite correlation functions can statistically model fiducial (control point) errors
including their inter-fiducial spatial correlations. If an estimator does not model correlations, its estimates are not
optimal, its corresponding accuracy estimates (a posteriori error covariance) are unreliable, and it may diverge. Finally,
results are extended to approximate error covariance matrices corresponding to non-homogeneous, multi-variate random
fields (a generalization of non-stationary stochastic processes). Examples of strictly positive definite correlation
functions and corresponding error covariance matrices are provided throughout the paper.
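As a concrete example, the exponential correlation function rho(tau) = exp(-|tau| / T) is strictly positive definite, so the covariance matrix it generates at arbitrary distinct sample times is invertible; the entire matrix is determined by (sigma, T) and the sample times, which is the compression noted above. The specific times and parameters below are illustrative:

```python
import numpy as np

def covariance_from_correlation(times, sigma, T):
    """Covariance matrix for errors sampled at `times`, with standard
    deviation `sigma` and the strictly positive definite exponential
    correlation rho(tau) = exp(-|tau| / T)."""
    tau = np.abs(times[:, None] - times[None, :])
    return sigma**2 * np.exp(-tau / T)

times = np.array([0.0, 0.7, 1.1, 4.0, 4.2, 9.5])   # irregular sample times
C = covariance_from_correlation(times, sigma=1.5, T=3.0)
L = np.linalg.cholesky(C)    # succeeds: C is positive definite (invertible)
```

The 6x6 matrix here is recoverable from just two scalars plus the six sample times, rather than the 21 unique matrix elements.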
BAE SYSTEMS is developing a "4D Registration" capability for DARPA's Dynamic Tactical Targeting program. This will further advance our automatic image registration capability to use moving objects for image registration, and extend our current capability to include the registration of non-imaging sensors. Moving objects produce signals that are identifiable across multiple sensors such as radar moving target indicators, unattended ground sensors, and imaging sensors. Correspondences of those signals across sensor types make it possible to improve the support data accuracy for each of the sensors involved in the correspondence. The amount of accuracy improvement possible, and the effects of the accuracy improvement on geopositioning with the sensors, is a complex problem. The main factors that contribute to the complexity are the sensor-to-target geometry, the a priori sensor support data accuracy, sensor measurement accuracy, the distribution of identified objects in ground space, and the motion and motion uncertainty of the identified objects. As part of the 4D Registration effort, BAE SYSTEMS is conducting a sensitivity study to investigate the complexities and benefits of multisensor registration with moving objects. The results of the study will be summarized.
An algorithm that significantly reduces the time required to photogrammetrically extract valid double-line drains is described. The algorithm automatically adjusts an initial double-line drain delineation for accuracy and consistency. It is based on a rigorous, optimal solution technique and is currently implemented on the Defense Mapping Agency Digital Production System.