KEYWORDS: Digital watermarking, 3D image processing, Digital image processing, Image processing, 3D visualizations, Visualization, Image compression, Digital filtering, Image filtering, Image quality, Geometrical optics, Digital imaging
A conventional camera loses a large amount of the information available from a scene because it does not record the individual rays passing through a point; it merely keeps the sum of the intensities of all rays passing through that point. Plenoptic images can be exploited to provide a 3D representation of the scene, and watermarking such images helps protect their ownership. In this paper we propose a method for watermarking plenoptic images to achieve this aim. The performance of the proposed method is validated by experimental results, and a compromise is struck between imperceptibility and robustness.
We propose to combine the Kinect and Integral-Imaging technologies for the implementation of an Integral Display. The Kinect device permits the determination, in real time, of the (x,y,z) position of the observer relative to the monitor. Thanks to its active IR technology, the Kinect provides the observer's position even in dark environments. On the other hand, the SPOC 2.0 algorithm permits the calculation of microimages adapted to the observer's 3D position. The smart combination of these two concepts permits the implementation, for the first time to our knowledge, of an Integral Display that provides the observer with color 3D images of real scenes, viewed with full parallax and adapted dynamically to the observer's 3D position.
KEYWORDS: 3D image processing, Video, 3D displays, Integral imaging, Cameras, Device simulation, 3D image enhancement, Integral transforms, Algorithm development, Zoom lenses
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, while choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. This method improves the quality of displayed 3D images and videos.
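The core transformation from an integral image (an array of elemental images) to a plenoptic image (an array of microimages) amounts to exchanging the spatial and angular coordinates of the sampled lightfield. A minimal sketch of this transposition, assuming a grayscale integral image with a regular grid of equally sized elemental images (the function name and grid parameters are illustrative, not the paper's notation):

```python
import numpy as np

def integral_to_plenoptic(integral_img, n_v, n_h):
    """Transpose an integral image (grid of n_v x n_h elemental images)
    into a plenoptic image (grid of microimages) by exchanging the
    spatial and angular coordinates of the sampled lightfield."""
    H, W = integral_img.shape
    p_v, p_h = H // n_v, W // n_h  # pixels per elemental image
    # Decompose into a 4D lightfield L[v, u, y, x]: elemental-image
    # index (v, u), pixel index (y, x) within each elemental image.
    lf = integral_img.reshape(n_v, p_v, n_h, p_h).transpose(0, 2, 1, 3)
    # Swap angular and spatial axes: microimage (y, x) collects the
    # pixel (y, x) from every elemental image.
    micro = lf.transpose(2, 3, 0, 1)
    # Tile back into a 2D plenoptic image of p_v x p_h microimages,
    # each n_v x n_h pixels.
    return micro.transpose(0, 2, 1, 3).reshape(p_v * n_v, p_h * n_h)
```

Applying the same transposition twice recovers the original integral image, which makes the mapping lossless.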
We propose the fusion of two concepts that are very successful in the area of 3D imaging and sensing. Kinect technology permits the registration, in real time but at low resolution, of accurate depth maps of large, opaque, diffusing 3D scenes. Our proposal consists of transforming the sampled depth map provided by the Kinect into an array of microimages whose position, pitch, and resolution are in good accordance with the characteristics of an integral-imaging monitor. By projecting this information onto such a monitor we are able to produce 3D images with continuous perspective and full parallax.
KEYWORDS: Cameras, 3D displays, 3D image processing, Microlens, Integral imaging, Sensors, 3D image reconstruction, Image processing, Imaging systems, Microlens array
Plenoptic cameras capture a sampled version of the map of rays emitted by a 3D scene, commonly known as the Lightfield. These devices have been proposed for multiple applications, such as calculating different sets of views of the 3D scene, removing occlusions, and changing the focused plane of the scene. They can also capture images that can be projected onto an integral-imaging monitor to display 3D images with full parallax. In this contribution, we report a new algorithm for transforming the plenoptic image so as to choose which part of the 3D scene is reconstructed in front of and behind the microlenses in the 3D display process.
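Choosing which depth plane sits at the microlens plane can be done by shearing the sampled lightfield: each angular view (one pixel per microimage) is translated by an amount proportional to its angular index, then the microimages are reassembled. The sketch below is a simple integer-shift version of that idea under these assumptions (square microimages, circular `np.roll` shifts as a stand-in for proper boundary handling); it is not the paper's exact algorithm:

```python
import numpy as np

def shift_reference_plane(plenoptic, n_px, d):
    """Shear the lightfield so a different depth plane of the scene is
    reconstructed at the microlens plane. plenoptic is tiled as a grid
    of n_px x n_px microimages; d sets shift per angular index, and its
    sign selects reconstruction in front of or behind the microlenses."""
    H, W = plenoptic.shape
    rows, cols = H // n_px, W // n_px
    # Extract the angular views: lf[v, u] holds the (v, u) pixel taken
    # from every microimage, i.e. one perspective view of the scene.
    lf = plenoptic.reshape(rows, n_px, cols, n_px).transpose(1, 3, 0, 2)
    c = (n_px - 1) / 2.0  # measure angular index from the central view
    out = np.empty_like(lf)
    for v in range(n_px):
        for u in range(n_px):
            # Translate each view proportionally to its angular offset;
            # this re-centers the lightfield on a new reference plane.
            dy = int(round((v - c) * d))
            dx = int(round((u - c) * d))
            out[v, u] = np.roll(lf[v, u], (dy, dx), axis=(0, 1))
    # Reassemble the sheared views into a tiled plenoptic image.
    return out.transpose(2, 0, 3, 1).reshape(H, W)
```

With d = 0 the image is unchanged; increasing |d| moves the reconstructed plane progressively farther from the microlens plane.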
KEYWORDS: Cameras, Microlens, Integral imaging, 3D image processing, Microlens array, Sensors, Near field, 3D displays, 3D visualizations, Photographic lenses
One of the differences between near-field integral imaging (NInI) and far-field integral imaging (FInI) is the ratio between the number of elemental images and the number of pixels per elemental image. While in NInI the 3D information is codified in a small number of elemental images (with many pixels each), in FInI the information is codified in many elemental images (with only a few pixels each). The latter codification is similar to the one needed for projecting the InI field onto a pixelated display when the aim is to build an InI monitor. For this reason, FInI cameras are especially well adapted for capturing the InI field for display purposes. In this contribution we investigate the relations between the images captured in NInI and FInI modes, and develop an algorithm that permits the projection of NInI images onto an InI monitor.
KEYWORDS: Imaging systems, Cameras, 3D image processing, Integral imaging, Image resolution, 3D displays, 3D image reconstruction, Stereoscopic cameras, Data processing, Light
We present an analysis and comparison of the lateral and depth resolution in the reconstruction of 3D scenes from images obtained either with a classical two-view stereoscopic camera or with an Integral Imaging (InI) pickup setup. Since both systems belong to the general class of multiview imaging systems, the best analytical tools for the calculation of lateral and depth resolution are the ray-space formalism and the classical tools of Fourier information processing. We demonstrate that InI is the optimum system for sampling the spatio-angular information contained in a 3D scene.
In multi-view three-dimensional imaging, capturing the elemental images of distant objects requires the use of a field-like lens that projects the reference plane onto the microlens array. In this case, the spatial resolution of the reconstructed images is determined by the spatial density of the microlenses in the array. In this paper we report a simple method, based on taking two snapshots, to double the 2D pixel density of the reconstructed scenes. Experiments are reported that support the proposed approach.