We are currently researching a next-generation broadcasting system called Super Hi-Vision (SHV). We have proposed
the following video parameters for SHV: 33-million-pixel (33M-pixel) resolution, 120-Hz frame frequency, and wide
gamut color system. To capture SHV images, we investigate a 33M-pixel image-capturing system operable at
a frame frequency of 120 Hz. The system consists of four CMOS image sensors that can output not only 33M pixels at
a 60-Hz frame frequency but also half of the 33M pixels, in either the odd or the even lines, at a 120-Hz frame frequency. Two
image sensors are used for the green channel (G1 and G2), and the other two are assigned to each of the red and blue
channels. The G1 sensor outputs the odd lines, while the G2 sensor outputs the even lines. A combination of G1 and G2
outputs 33M-pixel green images. The red and blue sensors scan odd lines and even lines, respectively. Subsequently, the
unscanned lines are interpolated in the vertical direction. In addition, we design a prism for wide color gamut
reproducibility. We develop a prototype and evaluate its resolution, image lag, and color reproducibility. The
performance of the proposed system is found to be satisfactory for capturing 33M-pixel images at 120 Hz.
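The line interleaving and vertical interpolation described above can be sketched as follows. This is a minimal numpy illustration; the function names and the simple two-line averaging filter are assumptions for clarity, not the prototype's exact interpolation.

```python
import numpy as np

def combine_green(g1_odd, g2_even):
    """Interleave the G1 sensor's odd lines and the G2 sensor's even lines
    into a full-resolution green frame (1-based odd lines occupy 0-based
    even row indices)."""
    h = g1_odd.shape[0] + g2_even.shape[0]
    full = np.empty((h, g1_odd.shape[1]), dtype=g1_odd.dtype)
    full[0::2] = g1_odd
    full[1::2] = g2_even
    return full

def interpolate_missing_lines(scanned):
    """Fill the unscanned lines of the red (or blue) channel by vertically
    averaging the two neighbouring scanned lines (edge line replicated)."""
    h2, w = scanned.shape
    full = np.empty((h2 * 2, w), dtype=np.float64)
    full[0::2] = scanned
    below = np.vstack([scanned[1:], scanned[-1:]])  # next scanned line
    full[1::2] = (scanned + below) / 2.0
    return full

# Tiny demonstration frames
full_green = combine_green(np.array([[1., 1.], [3., 3.]]),
                           np.array([[2., 2.], [4., 4.]]))
red_full = interpolate_missing_lines(np.array([[0.], [2.], [4.]]))
```

The green channel needs no interpolation because G1 and G2 together cover every line; only the red and blue channels rely on the vertical interpolation step.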
KEYWORDS: 3D image processing, Imaging systems, Integral imaging, 3D displays, 3D image reconstruction, LCDs, 3D vision, Staring arrays, Multichannel imaging systems, Image quality
We developed a three-dimensional (3-D) imaging system with an enlarged horizontal viewing angle for integral
imaging that uses our previously proposed method for controlling the ratio of the horizontal to vertical viewing
angles by tilting the lens array used in a conventional integral imaging system. This ratio depends on the tilt
angle of the lens array. We conducted an experiment to capture and display 3-D images and confirmed the
validity of the proposed system.
Integral 3D television based on integral imaging requires huge amounts of information. Earlier, we built an Integral 3D
television using Super Hi-Vision (SHV) technology, with 7680 pixels horizontally and 4320 pixels vertically. Here we
report on an improvement of image quality by developing a new video system with an equivalent of 8000 scan lines and
using this for Integral 3D television. We conducted experiments to evaluate the resolution of 3D images using this
prototype equipment and were able to show that by using the pixel-offset method we have eliminated aliasing that was
produced by the full-resolution SHV video equipment. As a result, we confirmed that the new prototype is able to
generate 3D images with a depth range approximately twice that of Integral 3D television using the full-resolution SHV.
We present a method of changing the ratio of the horizontal to vertical viewing angles by rotating a lens array in integral
imaging. We arranged elemental images whose width and height are not equal to the pitch of an elemental
lens, keeping the total number of pixels in each elemental image invariant. Additionally, we rotated the lens
array to prevent the specially shaped elemental images from overlapping. We enlarged the horizontal viewing angle by
arranging elemental images whose width is larger than both their height and the pitch of the elemental lens. We
investigated the arrangement of these images and found that rotating the lens array changed the ratio of the
horizontal to vertical viewing angles.
An integral 3DTV system needs high-density elemental images to increase the reconstructed 3D image's resolution,
viewing zone, and depth representability. The dual green pixel-offset method, which uses two green
channels of images, is a means of achieving ultra high-resolution imagery. We propose a precise and easy method
for detecting the pixel-offset distance when a lens array is mounted in front of the integral imaging display. In
this method, pattern luminance distributions based on sinusoidal waves are displayed on each panel of green
channels. The difference between phases (amount of phase variation) of these patterns is conserved when the
patterns are sampled and transformed to a lower frequency by aliasing with the lens array. This allows the
pixel-offset distance of the display panel to be measured in a magnified state. The contrast and the amount of
phase variation of the pattern depend on the pattern frequency in opposing ways, creating a trade-off.
We derived a way to find the optimal spatial frequency of the pattern by regarding the product of the contrast
and amount of phase variation of the patterns as an indicator of accuracy. We also evaluated the pixel-offset
detection method in an experiment with the developed display system. The results demonstrate that the resolution
characteristics of the projected image were refined. We believe that this method can be used to improve
the resolution characteristics of the depth direction of integral imaging.
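The core property the method relies on, that the phase difference between two sinusoidal patterns survives subsampling to a lower aliased frequency, can be demonstrated with a toy 1-D model. The subsampling factor, pattern frequency, and phase offset below are illustrative values, and the lens array is modelled crudely as point subsampling:

```python
import numpy as np

def phase_at_peak(signal):
    """Phase of the dominant non-DC frequency component of a 1-D signal."""
    spec = np.fft.rfft(signal)
    k = np.argmax(np.abs(spec[1:])) + 1
    return np.angle(spec[k])

n, step = 200, 5          # panel pixels; subsampling factor (illustrative)
f = 0.44                  # pattern frequency in cycles/pixel
dphi = 0.6                # true phase offset between the two green panels
x = np.arange(n)
p1 = np.cos(2 * np.pi * f * x)          # pattern on panel G1
p2 = np.cos(2 * np.pi * f * x + dphi)   # pattern on panel G2

# Subsampling aliases 2.2 cycles/sample down to 0.2 cycles/sample; since
# the alias does not spectrally fold, the phase offset is preserved.
s1, s2 = p1[::step], p2[::step]
measured = phase_at_peak(s2) - phase_at_peak(s1)
```

Because the phase offset is preserved while the pattern is transformed to a much lower frequency, the offset can be read out from the coarse, magnified observation, which is the basis of the detection method above.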
We present a method of generating stereoscopic images from moving pictures captured by a single high-definition
television camera mounted on the Japanese lunar orbiter Kaguya (Selenological and Engineering Explorer, SELENE).
Because objects in the moving pictures appear to move vertically, the time offset between frames produces vertical
disparity. This vertical disparity is converted into horizontal disparity by rotating the images by 90 degrees,
allowing stereoscopic images to be created by using the rotated images as the left- and right-eye images.
However, this causes spatial distortion resulting from the axially asymmetric positions of the corresponding
left and right cameras. We reduced this distortion by applying a depth map obtained by assuming that the lunar
surface is spherical. We confirmed that this correction method provides more acceptable views of the Moon.
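The rotation step that turns the time-offset vertical disparity into horizontal binocular disparity can be sketched directly. The function name and rotation direction are illustrative assumptions:

```python
import numpy as np

def stereo_pair_from_sequence(frame_t, frame_t_plus, clockwise=True):
    """Make a left/right stereo pair from two time-offset frames whose
    apparent motion (and hence disparity) is vertical: rotating both frames
    by 90 degrees turns that vertical disparity into horizontal disparity."""
    k = -1 if clockwise else 1
    return np.rot90(frame_t, k), np.rot90(frame_t_plus, k)

# A marker object appears two rows lower in the later frame (vertical
# disparity of 2 pixels caused by the time offset).
frame_t = np.zeros((5, 4)); frame_t[1, 2] = 1.0
frame_tp = np.zeros((5, 4)); frame_tp[3, 2] = 1.0
left, right = stereo_pair_from_sequence(frame_t, frame_tp)

lpos = np.argwhere(left == 1.0)[0]    # marker position in the left image
rpos = np.argwhere(right == 1.0)[0]   # marker position in the right image
```

After rotation the marker sits on the same row in both images but at columns offset by the original vertical disparity, which is exactly the horizontal disparity a stereoscopic display needs.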
KEYWORDS: Modulation transfer functions, Spatial frequencies, 3D image processing, Image processing, Integral imaging, Image restoration, Modulation, 3D image reconstruction, Imaging systems, 3D displays
An integral imaging system uses a lens array to capture an object and display a three-dimensional (3-D) image of
that object. In principle, a 3-D image is generated at the depth position of the object, but for an object located
away from the lens array in the depth direction, the modulation transfer function (MTF) of the integral imaging
system will be degraded. In this paper, we propose a method that uses pupil modulation and depth-control
processing to alleviate this degraded MTF. First, to alleviate changes in the MTF due to differences in depth
when capturing the object, we use a pupil-modulated elemental lens to obtain an elemental image. Next, we use
a filter having characteristics opposite those of the MTF characteristics of the pupil-modulated elemental lens
to restore the degraded image. Finally, we apply depth-control processing to the restored elemental image to
generate a reconstructed image near the lens array. This method can alleviate degradation in the MTF of the
integral imaging system when an object is located at a distance from the lens array. We also show results of
computer simulations that demonstrated the effectiveness of the proposed method.
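The restoration step, applying a filter with characteristics opposite those of the elemental lens's MTF, can be sketched as a regularised inverse (Wiener-type) filter. This toy demonstration uses a Gaussian PSF as a stand-in for the depth-invariant response of the pupil-modulated lens; the parameter values are illustrative, not the paper's:

```python
import numpy as np

def wiener_restore(blurred, otf, nsr=1e-3):
    """Restore an image degraded by a known OTF with a regularised
    inverse filter (characteristics opposite those of the system MTF)."""
    inv = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * inv))

n = 32
c = np.abs(np.fft.fftfreq(n) * n)        # wrap-around pixel distances
psf = np.exp(-(c[:, None] ** 2 + c[None, :] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()                         # origin-centred Gaussian PSF
otf = np.fft.fft2(psf)

img = np.zeros((n, n))
img[10:20, 8:24] = 1.0                   # simple test pattern
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_restore(blurred, otf)

mse_blurred = np.mean((blurred - img) ** 2)
mse_restored = np.mean((restored - img) ** 2)
```

The regularisation term keeps the filter stable at frequencies where the MTF is near zero; with a noiseless input the restored image is strictly closer to the original than the blurred one.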
We are studying electronic holography and have already developed a real-time color holography system for live
scenes that comprises three functional blocks: a capture block, a processing block, and a display block. One issue
with such systems is that half of the captured 3-D information is lost in the half-zone-plate processing performed
in the processing block, which halves the resolution of the reconstructed 3-D objects. This issue affects
not only our system but all similar systems, because even current electronic display devices lack sufficient
resolution for holograms. In this paper, we propose using a semi-lens lens array (SLLA), in which the optical axis
of each elemental lens lies not at the center of the lens but at its edge, in the capture block to solve this issue.
We also describe the processing block for the SLLA. Basic experimental results show that the SLLA outperforms
a general lens array.
We are studying electronic holography and have developed a real-time color holographic movie system comprising three functional blocks: a capture block, a processing block, and a display block. We introduce the system and its technology in this paper. The first block, the capture block, uses integral photography (IP) technology to capture color 3-D objects in real time. It mainly consists of a lens array with approximately 120(W)x67(H) convex lenses and a video camera with 1920(W)x1080(H) pixels to capture IP images. In addition, an optical system that reduces crosstalk between elemental images is mounted. The second block, the processing block, consists of two general-purpose personal computers that generate holograms from IP images in real time. Three half-zone-plate holograms for the red, green, and blue (RGB) channels are generated for each frame by using the fast Fourier transform. The last block, the display block, mainly consists of three liquid crystal displays for displaying the holograms and three RGB laser sources for reconstructing the color 3-D objects. This block is a single-sideband holography display, which cuts off the conjugate and carrier images from the primary images. All blocks work in real time, i.e., at 30 frames per second.
KEYWORDS: Holograms, Digital signal processing, 3D image reconstruction, Lenses, Image processing, Photography, Field programmable gate arrays, Near field diffraction, Cameras, 3D image processing
Holography is a 3-D display method that fully satisfies the visual characteristics of the human eye. However, the
hologram must be developed in a darkroom under laser illumination. We attempted hologram generation under white
light by adopting an integral photography (IP) technique as the input. In this research, we developed a hardware
converter to convert IP input (with 120×66 elemental images) to a hologram with high definition television (HDTV)
resolution (approximately 2 million pixels). This conversion could be carried out in real time. In this conversion method,
each elemental image can be independently extracted and processed. Our hardware contains twenty 300-MHz floating-point
digital signal processors (DSPs) operating in parallel. We verified real-time conversion operations by the
implemented hardware.
Single-sideband holography with half-zone-plate processing is a well-known method of displaying computer-generated holograms (CGHs) using electronic devices, such as liquid crystal displays (LCDs), that do not have narrow pixel intervals. Half-zone-plate processing permits only the primary images to pass through a single-sideband spatial filter, cutting off the conjugate and carrier images; however, this method has a problematic restriction: the objects being shot must lie entirely either in front of or behind the hologram. This paper describes a new approach to placing objects on both sides of the hologram simultaneously, eliminating this restriction. The underlying idea is that when the half-zone plate permits the primary images in front of the hologram to pass through a single-sideband spatial filter, the conjugate images cannot pass through it. When we prepare a half-zone plate on the opposite side as well, the primary images on both sides of the hologram can pass through but the conjugate images cannot. This approach not only doubles the usable object region but also reduces computational time, because objects can be placed close to the hologram. We implemented this approach, tested it, and confirmed its effectiveness.
Holography is one of the most promising candidates to realize a fully realistic 3D video communication system. We
propose a hologram-generation method that uses depth maps of real scenes. In this study, we employed a static laser
scanner to capture a depth map of real objects at a resolution of 0.4 mm. A Fresnel hologram was then calculated off-line
on a computer. We used two types of spatial light modulators (SLMs): a transparent LCD with a 12-µm pixel pitch and a
reflective LCD panel with a 10.4-µm pixel pitch. By illuminating the hologram with a He-Ne laser, we observed that 3D
images of the real objects were reconstructed in space with a depth range of approximately 5 cm.
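Computing a Fresnel hologram from depth-map points amounts to summing a parabolic-phase (Fresnel-approximation) wave per scene point over the SLM grid. The sketch below uses a 10-µm pixel pitch and the 633-nm He-Ne wavelength as illustrative values consistent with the setup described, not the authors' exact computation:

```python
import numpy as np

def fresnel_hologram(points, n=128, pitch=10e-6, wavelength=633e-9):
    """Real-valued Fresnel hologram of point sources on an n x n SLM grid.
    `points` is a list of (x, y, z, amplitude) tuples in metres, with z the
    distance from the hologram plane; each point contributes a parabolic
    (Fresnel-approximation) phase term."""
    k = 2 * np.pi / wavelength
    u = (np.arange(n) - n / 2) * pitch       # SLM pixel coordinates (m)
    U, V = np.meshgrid(u, u)
    field = np.zeros((n, n), dtype=complex)
    for x, y, z, amp in points:
        r2 = (U - x) ** 2 + (V - y) ** 2
        field += amp * np.exp(1j * k * r2 / (2 * z))
    return np.real(field)                    # interference (zone-plate) pattern

# One on-axis point 5 cm from the hologram plane, matching the reported
# depth range of the reconstruction.
H = fresnel_hologram([(0.0, 0.0, 0.05, 1.0)])
```

For a single on-axis point the result is the familiar Fresnel zone plate; a full depth map is simply the superposition of one such term per sampled surface point, which is why the off-line computation scales with the number of scene points.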
The visual-resolution characteristics of an array, comprising many elemental afocal optical units, for an optical viewer are investigated. First, it is confirmed by wave optics that lightwaves exiting the array will converge, forming a three-dimensional optical image. Next, it is shown that the convergence point and resolution characteristics depend on the angular magnification of the afocal unit. When the magnification is 1.0, the lightwaves focus on the convergence point, and the resolution characteristics depend only on the diffraction of the afocal units. At magnifications other than 1.0, the lightwaves do not focus on the convergence point, decreasing the resolution as a result. To clarify this result quantitatively, we determined the viewing distances at which the alignment of the afocal units is not perceptible when an optical image formed by the array is viewed by an observer, and we calculated the modulation transfer functions normalized by the viewing distance.
When designing a system capable of capturing and displaying three-dimensional (3-D) moving images in real time by the integral imaging (II) method, one challenge is to eliminate pseudoscopic images. To overcome this problem, we propose a simple system with an array of three convex lenses. This paper first uses geometrical optics to describe the lateral magnification of the elemental optics and the expansion of an elemental image, confirming that the elemental optics satisfies the conditions under which pseudoscopic images can be avoided. In the II method, adjacent elemental images must not overlap, a condition also satisfied by the proposed optical system. Next, the paper describes an experiment carried out to acquire and display 3-D images. The real-time system we have constructed comprises an elemental optics array with 54(H) x 59(V) elements, a CCD camera to capture the group of elemental images created by the lens array, and a liquid crystal panel to display these images. The experimental results confirm that the system produces orthoscopic images in real time and so is effective for
real-time application of the II method.
The authors describe visual resolution characteristics of an array comprising many afocal optical units. It is shown by wave optics that light beams exiting the array will converge, forming a three-dimensional optical image. It is also clarified that the converging point and resolution are dependent on the angular magnification of the afocal unit. When the magnification is 1.0, the optical wave focuses on the converging point, and the resolution is dependent only on the diffraction of the afocal unit. If the magnification is not 1.0, the optical wave does not focus on the converging point, affecting the resolution as a result. Based on this, we have obtained viewing distances and object distances at which the effects on the resolution by diffraction or defocusing are not perceptible when an optical image formed by the array is viewed by an observer.
KEYWORDS: Holograms, 3D image reconstruction, LCDs, Photography, Lenses, Holography, 3D image processing, IP cameras, Near field diffraction, Light sources
This paper describes a method of generating holograms by calculation from an image captured using the integral photography (IP) technique. In order to reduce the calculation load in hologram generation, a new algorithm that shifts the optical field along the exit plane of the microlenses in a lens array is proposed. We also explain the aliasing that occurs when a hologram is generated by IP. Furthermore, we suggest an elemental-image size and microlens focal length at which aliasing does not occur. Finally, we use the algorithm to calculate a hologram from an IP image of a real object captured with an IP camera, confirming by optical reconstruction that a three-dimensional image can be formed from the hologram.
An afocal lens array is proposed to form three-dimensional (3D) images. The array, which is composed of many afocal optical units, can form an image whose depth position depends on the angular magnification of the unit. The position of an image formed by the whole array differs from the position at which an image is formed by a single afocal unit, except when the angular magnification is 1.0. In particular, when the angular magnification has a negative value, the optical image has a negative longitudinal magnification, i.e., it is a 3D image with inverted depth. When used for integral imaging, the array can control the depth position and avoid pseudoscopic images with reversed depth.
Integral photography (IP) or integral imaging is a way to create natural-looking three-dimensional (3-D) images with full parallax. Integral three-dimensional television (integral 3-D TV) uses a method that electronically presents 3-D images in real time based on this IP method. The key component is a lens array comprising many micro-lenses for shooting and displaying. We have developed a prototype device with about 18,000 lenses using a super-high-definition camera with 2,000 scanning lines. Positional errors of these high-precision lenses as well as the camera's lenses will cause distortions in the elemental image, which directly affect the quality of the 3-D image and the viewing area. We have devised a way to compensate for such geometrical position errors and used it for the integral 3-D TV prototype, resulting in an improvement in both viewing zone and picture quality.
This paper describes a new means to control the depth positions of 3-D images for integral photography. A GRIN lens array is set in front of a lens array for image capturing. The length of each elemental GRIN lens composing the array is half of one period of the optical path. The GRIN lens array also avoids pseudoscopic effects with reversed depth. The depth position of the 3-D images is controlled by adjusting the distance between the GRIN lens array and the capturing lens array, thus producing 3-D images without depth distortion.
This paper describes a holographic display using liquid crystal panels from which holographic images can be perceived with both eyes. In the display, the hologram plane is composed of two high-resolution liquid crystal panels, each with a pixel pitch of 10 microns (both horizontally and vertically) and 3840 (horizontal) × 2048 (vertical) pixels. The horizontal viewing zone is doubled by applying the viewing-zone enlargement method, which uses higher-order diffraction beams, to the high-resolution liquid crystal panels. In addition, obstacles resulting from conjugate beams are eliminated using the modified single-sideband method. As a result, the viewing zone of the display is 6.5 cm, which is equivalent to the distance between pupils, at a viewing distance of 90 cm. Thus, moving three-dimensional holographic images free of conjugate-beam obstacles could be perceived with both eyes.
In an integral three-dimensional television (integral 3-D TV) system, 3-D images are reconstructed by integrating the light beams from elemental images captured by a pickup system. In this system, 160(H) x 118(V) elemental images are used for reconstruction. We use a camera with 2000 scanning lines for the pickup system and a high-resolution liquid crystal display for the display system and have achieved an integral 3-D TV system with approximately 3000(H) x 2000(V) effective pixels. Comparisons with the theoretical resolution and viewing angle are performed, and it is shown that the resolution and viewing angle of 3-D images are improved by about 2 times and 1.5 times, respectively, compared to the previous system. The accuracy of alignment of the microlenses is another factor that should be considered for an integral 3-D TV system. If the lens array of the pickup system or display system is not aligned accurately, positional errors of elemental images may occur, which cause the 3-D image to be reconstructed at an incorrect position. The relation between positional errors of elemental images and the reconstructed image is also shown. As a result, 3-D images reconstructed far from the lens array are greatly influenced by such positional errors.
This paper proposes a new function of the two-dimensional lens array that is composed of many gradient-index lenses. The length of the lenses is an odd-integer multiple of the half period of the optical path. The array produces pseudoscopic three-dimensional (3D) images with reversed depth. Two lens arrays are positioned at a suitable distance so that orthoscopic 3D images with correct depth are formed in front of the lens arrays. The combined array captures, transmits and displays 3D images without other devices. A diffuser or opto-electronic amplifier can be inserted at the specific plane within the lens array.
KEYWORDS: 3D image processing, 3D displays, Image fusion, Principal component analysis, Cameras, Factor analysis, Error analysis, Analytical research, 3D imaging standards, 3D vision
In order to identify the conditions which make stereoscopic images easier to view, we analyzed the psychological effects using a stereoscopic HDTV system, and examined the relationship between this analysis and the parallax distribution patterns. First, we evaluated the impression of 3-D pictures of the standard 3-D test chart and past 3-D video programs using some evaluation terms. Two factors were thus extracted, the first related to the sense of presence and the second related to ease of viewing. Secondly, we applied principal component analysis to the parallax distribution of the stereoscopic images used in the subjective evaluation tests, in order to extract the features of the parallax distribution, then we examined the relationship between the factors and the features of the parallax distribution. The results indicated that the features of the parallax distribution are strongly related to ease of viewing, and for ease of viewing 3-D images, the upper part of the screen should be located further away from the viewer with less parallax irregularity, and the entire image should be positioned at the back.
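The feature-extraction step, principal component analysis over per-image parallax distributions, can be sketched with plain numpy. The synthetic gradient maps below (a shared top-to-bottom depth trend at varying strengths) are illustrative, not the study's data:

```python
import numpy as np

def parallax_pca(disparity_maps, n_components=2):
    """PCA over flattened per-image parallax (disparity) distributions,
    computed via SVD of the centred data matrix. The components summarise
    how parallax is distributed across the screen (e.g. upper vs. lower
    part of the image); the scores locate each image in that feature space."""
    X = np.array([d.ravel() for d in disparity_maps], dtype=float)
    X -= X.mean(axis=0)                          # centre each feature
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    return scores, Vt[:n_components], S

# Four parallax maps sharing one top-to-bottom gradient with different
# strengths, so a single principal component should dominate.
grad = np.linspace(-1.0, 1.0, 6)[:, None] * np.ones((1, 8))
maps = [a * grad for a in (0.0, 1.0, 2.0, 3.0)]
scores, components, S = parallax_pca(maps)
```

With real material, the extracted components can then be correlated against the subjective factors (sense of presence, ease of viewing) as described above.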
This paper surveys the technical requirements essential for 3-D television, particularly for a stereoscopic HDTV system based on binocular parallax. HDTV provides high definition as well as a wide angle of view, features that can readily serve as core technologies for a 3-D television system. With its high applicability and ability to provide high-quality pictures, HDTV is the most promising technology for 3-D television at present.
This paper focuses mainly on technologies for display, signal processing, and image pickup needed to put these stereoscopic pictures to practical use as a broadcasting system. Studies on the characteristics of stereoscopic images in psychophysical experiments are also discussed.