Image matching is the first step in almost any 3D computer vision task and has therefore received extensive attention. In this paper, the problem is addressed from a novel perspective that differs from the classic stereo matching paradigm: two images of different resolutions, one high resolution and one low resolution, are matched. Since the high-resolution image corresponds to only a small region of the low-resolution one, the matching task consists in finding the small region of the low-resolution image that maps to the whole high-resolution image under a plane similarity transformation. Interest points are matched using the local area correlation coefficient and then rectified by the similarity transform. Experiments show that our matching algorithm handles scale changes up to a factor of 6 and successfully matches points between two images under large scale differences.
In this paper, image inpainting based on different displacement view images is proposed: the problem of filling in the occluded or damaged regions of an image with visible information from images taken at other displaced viewpoints. The key problems are how to bring the visible information in the different views into consistency with the target view and how to use the available information to repair the target image. The first step of our method is to segment all images into different scene planar regions and to transform every region into the current view by a projective transformation computed from the matched points, so that the visible information can be used directly; a new inpainting algorithm based on image fusion with spatial frequency is then applied. Experiments show good and harmonious results for the repaired images.
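As an illustration of the two core steps, warping and spatial-frequency fusion, here is a hedged sketch; the use of smoothed Laplacian magnitude as the spatial frequency measure and the per-pixel selection rule are assumptions, not necessarily the paper's exact fusion algorithm.

```python
import cv2
import numpy as np

def repair_from_views(target, damaged_mask, views, matches):
    """Fill `damaged_mask` pixels of a grayscale `target` from other views.

    views:   grayscale images showing the same planar scene region
    matches: one (pts_src, pts_dst) pair of Nx2 float32 arrays per view,
             the matched points used to estimate each projective transform
    """
    h, w = target.shape
    warped, freq = [], []
    for view, (pts_src, pts_dst) in zip(views, matches):
        # Projective transformation computed from the matched points.
        H, _ = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)
        wv = cv2.warpPerspective(view, H, (w, h))
        warped.append(wv)
        # Local spatial frequency: smoothed Laplacian magnitude (an assumption).
        freq.append(cv2.blur(np.abs(cv2.Laplacian(wv, cv2.CV_32F)), (9, 9)))

    # For each damaged pixel, take the warped view with the sharpest detail.
    choice = np.argmax(np.stack(freq), axis=0)
    stack = np.stack(warped)
    repaired = target.copy()
    ys, xs = np.nonzero(damaged_mask)
    repaired[ys, xs] = stack[choice[ys, xs], ys, xs]
    return repaired
```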
Matching features computed in images is an important process in multiview image analysis. When the motion between two images is large, the matching problem becomes very difficult. In this paper, we propose a contour matching algorithm based on geometric constraints. Under the assumption that the contours are obtained from images of a static scene taken by a moving camera, we apply the epipolar constraint between the two sets of contours and compute the corresponding points on the contours. From the initial epipolar constraint obtained by corner point matching, candidate contours are selected according to the epipolar geometry and the linear relation among the tangent vectors of the contours. To reduce the possibility of false matches, the curvature at matched points on a contour is also used as a selection criterion. The initial epipolar constraint is then refined from the matched sets of contours. The algorithm can be applied to one or two pairs of images. All of the processing is fully automatic and has been successfully implemented and tested on various synthetic images.
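A minimal sketch of the two selection tests, assuming a fundamental matrix F has already been estimated from the corner matches (the initial epipolar constraint); the contour sampling, correspondence ordering, and thresholds are illustrative.

```python
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Distance of each point in pts2 to the epipolar line F @ x1 of its
    counterpart in pts1 (pts1, pts2: Nx2 pixel coordinate arrays)."""
    ones = np.ones((len(pts1), 1))
    lines = np.hstack([pts1, ones]) @ F.T        # lines a*x + b*y + c = 0
    num = np.abs(np.sum(np.hstack([pts2, ones]) * lines, axis=1))
    return num / np.hypot(lines[:, 0], lines[:, 1])

def curvature(contour):
    """Signed curvature along an Nx2 contour via finite differences."""
    dx, dy = np.gradient(contour[:, 0]), np.gradient(contour[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.maximum((dx ** 2 + dy ** 2) ** 1.5, 1e-9)

def contours_compatible(F, c1, c2, dist_tol=2.0, curv_tol=0.05):
    """Accept a candidate contour pair if sampled points satisfy the
    epipolar constraint and their curvatures agree (the false-match
    filter). Assumes c1[i] and c2[i] are tentative correspondences; in
    practice these come from intersecting epipolar lines with c2."""
    n = min(len(c1), len(c2))
    d = epipolar_distance(F, c1[:n], c2[:n])
    k = np.abs(curvature(c1[:n]) - curvature(c2[:n]))
    return np.median(d) < dist_tol and np.median(k) < curv_tol
```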
KEYWORDS: Cameras, Calibration, Machine vision, Computer vision technology, 3D image processing, Sensors, 3D metrology, 3D modeling, Reconstruction algorithms, 3D vision
We describe the principles of building a moving vision platform (a rig) that, once calibrated, can thereafter self-adjust to changes in its internal configuration and maintain a Euclidean representation of the 3D world using only projective measurements. The calibration paradigm is termed "Omni-Rig". We assume that after calibration the cameras may change critical elements of their configuration, including internal parameters and rotations. We show theoretically that knowing only the relative positions of a set of cameras is sufficient for Euclidean calibration, even with varying focal lengths and unknown rotations; no other information about the world is required.
A factorization method is proposed for recovering camera motion and object shape from line correspondences observed in multiple images under perspective projection. As in any factorization-based approach for perspective images, including those based on point correspondences, scaling parameters called projective depths must be estimated in order to obtain a measurement matrix that can be decomposed into motion and shape. One possible approach, proposed by Sturm and Triggs [Sturm-96], is to compute the projective depths from linear relations between multiple images, such as the fundamental matrix for two images or the trifocal tensor for three. However, estimating fundamental matrices or the trifocal tensor can be unstable when measurement noise is large or when the cameras and object points are in nearly critical configurations. In this paper, we first develop the line imaging geometry from the point imaging geometry, form a measurement matrix just as for points, and propose an algorithm that iteratively estimates the projective depths so that the measurement matrix becomes as close as possible to rank 6. Finally, we factorize the measurement matrix into two matrices and exploit the Plücker coordinates of the lines to recover the camera motion and shape in 3D projective space. This estimation process requires no linear relations between the images and is therefore robust against measurement noise. The validity of the proposed method is confirmed by experiments with synthetic and real images.
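The abstract does not spell out the depth update; the sketch below shows one plausible form of the iteration, re-estimating each projective depth from the current rank-6 approximation of the measurement matrix, and should be read as an illustration rather than the authors' exact algorithm.

```python
import numpy as np

def rank6_line_factorization(lines, n_iters=30):
    """lines: (F, L, 3) array of image line coordinates l_ij for F views
    and L lines. Iteratively estimates projective depths lam[i, j] so the
    3F x L measurement matrix W (blocks lam[i, j] * l_ij) approaches
    rank 6, then factorizes W into motion and Pluecker-line shape."""
    F, L, _ = lines.shape
    lam = np.ones((F, L))
    for _ in range(n_iters):
        # Depth-scaled measurement matrix, three rows per image.
        W = (lam[:, :, None] * lines).transpose(0, 2, 1).reshape(3 * F, L)
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        W6 = (U[:, :6] * s[:6]) @ Vt[:6]                  # closest rank-6 matrix
        blocks = W6.reshape(F, 3, L).transpose(0, 2, 1)   # back to (F, L, 3)
        # Re-estimate each depth by projecting the rank-6 block onto the
        # measured line, then fix the overall scale.
        lam = np.sum(blocks * lines, axis=2) / np.sum(lines * lines, axis=2)
        lam /= np.abs(lam).mean()
    W = (lam[:, :, None] * lines).transpose(0, 2, 1).reshape(3 * F, L)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :6] * np.sqrt(s[:6])                         # motion factor, 3F x 6
    S = np.sqrt(s[:6])[:, None] * Vt[:6]                  # shape factor, 6 x L
    return M, S, lam
```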
The paper describes a new method of self-calibration for a moving vision platform (a rig) whose camera centers relative to each other are known throughout the motion. Given the projective structure computed from corresponding points across the set of images acquired by the cameras, the varying internal and external parameters can be computed using only the positions of the camera centers. We show that maximal generality is reached when the rig consists of 5 cameras whose centers of projection are given. Furthermore, if the only goal is to recover the Euclidean structure from a projective structure, the internal and external parameters need not be computed; the process is then linear and hence more convenient and stable.
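To make the linearity of the Euclidean upgrade concrete, here is a generic sketch, not necessarily the authors' formulation: with the camera centers known in Euclidean coordinates, the 4 x 4 space homography H mapping the projective reconstruction onto the Euclidean frame follows from a direct linear transform over the center correspondences; five centers in general position supply 15 equations for the 15 degrees of freedom of H.

```python
import numpy as np

def upgrade_homography(centers_proj, centers_eucl):
    """centers_proj, centers_eucl: (N, 4) homogeneous 3D points, N >= 5.
    Returns H (4x4) with centers_eucl ~ H @ centers_proj (up to scale)."""
    rows = []
    for X, Xp in zip(centers_proj, centers_eucl):
        # Eliminate the per-point scale: Xp[b]*(H X)[a] - Xp[a]*(H X)[b] = 0
        for a, b in [(0, 1), (0, 2), (0, 3)]:
            row = np.zeros(16)
            row[4 * a: 4 * a + 4] = Xp[b] * X
            row[4 * b: 4 * b + 4] = -Xp[a] * X
            rows.append(row)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(4, 4)        # null vector holds the entries of H

def to_euclidean(points_proj, H):
    """Apply the upgrade to reconstructed points ((N, 4) homogeneous)."""
    pts = points_proj @ H.T
    return pts[:, :3] / pts[:, 3:4]
```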
Predicting image features in one image from image features in two other images has wide potential application in computer vision, for example in visual recognition, model-based vision, animation and view synthesis, and object detection and tracking. In this paper, we provide a novel method to predict image features in the third image based on the trifocal tensor.
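A minimal sketch of the standard point transfer relation underlying such prediction, x3^k ~ x1^i l2_j T[i, j, k], where l2 is any line through the matched point in the second view; the vertical line chosen here is an assumption (in practice a line away from the epipolar line, e.g. its perpendicular, is preferred).

```python
import numpy as np

def transfer_point(T, x1, x2):
    """Predict the third-view position of a feature from its positions in
    views 1 and 2 using the trifocal tensor T (shape (3, 3, 3)).
    x1, x2: (x, y) pixel coordinates. Returns (x, y) in view 3."""
    p1 = np.array([x1[0], x1[1], 1.0])      # homogeneous point in view 1
    l2 = np.array([1.0, 0.0, -x2[0]])       # a line through x2 (here x = x2_x)
    # Point-line-point contraction: x3^k = sum_ij p1[i] * l2[j] * T[i, j, k]
    p3 = np.einsum('i,j,ijk->k', p1, l2, T)
    return p3[:2] / p3[2]
```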