Many applications in computer vision require calibrated cameras, but determining camera calibration parameters is a tedious task: common methods require custom-built calibration patterns that must be photographed from many different perspectives. This research introduces a novel auto-calibration method that reduces this effort to a minimum. The method utilizes a neural network framework and learns the parameters through backpropagation and gradient descent. Three views of the same arbitrarily textured flat surface serve as input; two of the views are transformed to match the third, reference view by plane homographies. Feature maps are extracted from the views and used to compare them. Intrinsic, extrinsic, and distortion parameters can then be learned by maximizing the similarity between the transformed views and the reference view. The results show that the method is able to recover the calibration parameters of artificially distorted images. Results with real camera images are comparable to those of common methods that require planar calibration patterns, which makes the proposed method a quick alternative.
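The core idea described in the abstract, warping one view onto a reference view with a homography parameterized by the camera model and adjusting the parameters to minimize a photometric difference, can be sketched as follows. This is a minimal illustration and not the authors' implementation: it uses a synthetic rotation-only two-view setup (where the views are related by `H = K R K⁻¹`), a single unknown focal length, a raw pixel difference instead of learned feature maps, and a coarse parameter search in place of the paper's backpropagation-based gradient descent. All names and numeric values are invented for the example.

```python
import numpy as np

def K(f, cx, cy):
    """Pinhole intrinsic matrix with focal length f and principal point (cx, cy)."""
    return np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])

def warp(img, H):
    """Warp img by homography H using inverse mapping and bilinear sampling."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ pts          # pull each output pixel from the source
    sx, sy = src[0] / src[2], src[1] / src[2]
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    ax, ay = np.clip(sx - x0, 0, 1), np.clip(sy - y0, 0, 1)
    out = (img[y0, x0] * (1 - ax) * (1 - ay) + img[y0, x0 + 1] * ax * (1 - ay)
           + img[y0 + 1, x0] * (1 - ax) * ay + img[y0 + 1, x0 + 1] * ax * ay)
    return out.reshape(h, w)

# Synthetic arbitrarily textured plane seen by a rotating camera.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w]
ref = np.sin(0.6 * xs) + np.sin(0.55 * ys) + np.sin(0.4 * (xs + ys))
f_true, cx, cy = 80.0, 32.0, 32.0
t = 0.1  # rotation about the y-axis, in radians
R = np.array([[np.cos(t), 0, np.sin(t)],
              [0, 1, 0],
              [-np.sin(t), 0, np.cos(t)]])
view = warp(ref, K(f_true, cx, cy) @ R @ np.linalg.inv(K(f_true, cx, cy)))

def photometric_loss(f, m=12):
    """Transform `view` back toward `ref` with candidate f; compare interiors only."""
    H = K(f, cx, cy) @ R @ np.linalg.inv(K(f, cx, cy))
    back = warp(view, np.linalg.inv(H))
    return np.mean((back[m:-m, m:-m] - ref[m:-m, m:-m]) ** 2)

# A coarse search over f stands in for the paper's gradient descent:
# the photometric loss is smallest when the candidate f matches f_true.
candidates = np.arange(40.0, 131.0, 10.0)
f_est = candidates[np.argmin([photometric_loss(f) for f in candidates])]
print(f_est)
```

In the paper's formulation the warp would be differentiable, the comparison would run on extracted feature maps, and all intrinsic, extrinsic, and distortion parameters would be updated jointly by gradient descent; the sketch above only shows why a photometric objective over homography-warped views constrains the calibration at all.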