This paper surveys the technical requirements essential for 3-D television, particularly a stereoscopic HDTV system based on binocular parallax. HDTV provides high definition as well as a wide angle of view, features that can readily serve as core technologies for a 3-D television system. With its high applicability and its ability to provide high-quality pictures, HDTV is the most promising technology for 3-D television at present.
This paper focuses mainly on technologies for display, signal processing, and image pickup needed to put these stereoscopic pictures to practical use as a broadcasting system. Studies on the characteristics of stereoscopic images in psychophysical experiments are also discussed.
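The geometry behind binocular parallax can be summarized by the standard relation between on-screen parallax and perceived depth. The sketch below is a generic illustration of that relation, not a parameterization of the HDTV system discussed in the paper; the eye separation, viewing distance, and parallax values are assumptions.

```python
# Illustrative sketch (not from the paper): the standard viewing geometry that
# relates on-screen parallax to perceived depth in a binocular-parallax display.
# All symbols (eye separation e, viewing distance D, screen parallax p) are
# generic assumptions, not parameters of the system described above.

def perceived_distance(p_m, e_m=0.065, d_m=3.0):
    """Distance from the viewer at which homologous points with screen
    parallax p_m (metres, positive = uncrossed) appear to fuse.
    From similar triangles: z = D * e / (e - p)."""
    return d_m * e_m / (e_m - p_m)

# Points with zero parallax appear on the screen plane; a small uncrossed
# parallax pushes the fused point behind the screen.
print(perceived_distance(0.0))    # 3.0 m (on the screen)
print(perceived_distance(0.01))   # ~3.55 m (behind the screen)
print(perceived_distance(-0.01))  # ~2.6 m (in front of the screen)
```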
Stereoscopic imaging systems require a means of creating viewing zones so that viewers perceive images with a certain depth. A holographic screen is a kind of holographic optical element designed for both image projection and viewing zone creation. In the transmission type it behaves like a lens combined with a diffuser, and in the reflection type like a diffusing or simple spherical mirror. Making a full-color holographic screen with the desired properties requires an extra step: creating chirp-type fringes in the developing stage or stacking three primary-color screens for the reflection type, and aligning a long, narrow slit-type diffuser as the object in the recording process for the transmission type. The holographic screen is an analogue screen that can create many separate viewing zones for many viewers, with reasonable depth. The viewing zone size is proportional to the exit pupil size of the projector objective, so it can be controlled to any desired size. Screens of any desired size can also be made; however, the focal length is proportional to the screen size, which is why the screen can be used only in projection-type stereoscopic systems. The viewing angle obtainable with the screen can be up to 70 degrees in reflection-mode operation, which is large enough for multiview image projection. An image combiner for automotive head-up displays is an example of a non-stereoscopic application.
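The stated proportionality between viewing zone size and exit pupil size follows if the screen is assumed to image the projector's exit pupil onto the viewing plane. The sketch below works through that assumed imaging relation with illustrative numbers; it is not a specification of any screen described in the paper.

```python
# Minimal sketch, assuming the holographic screen images the projector's exit
# pupil onto the viewing plane (consistent with the stated proportionality,
# but the exact optical layout is not taken from the paper).

def viewing_zone_width(exit_pupil_mm, projection_dist_m, viewing_dist_m):
    """Lateral magnification of the exit pupil by the screen gives the
    width of one viewing zone: w = pupil * (viewing / projection)."""
    return exit_pupil_mm * (viewing_dist_m / projection_dist_m)

# A 40 mm exit pupil projected from 2 m and viewed from 3 m:
print(viewing_zone_width(40.0, 2.0, 3.0))  # 60.0 mm, roughly one eye spacing
```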
We have been developing stereoscopic displays using Holographic Optical Elements (HOEs) for application to new Head Mounted Displays (HMDs). An HOE is a grating made by holographic techniques: its interference fringes are recorded by a two-beam laser interference method, and its optical function can provide binocular parallax images. We have already developed two kinds of stereoscopic displays using HOEs: a new HMD using a diverging HOE, and a Retinal Projection Display using a converging HOE. In this report, we introduce the concept and experimental prototypes of these displays.
First, we introduce an HOE that is well suited as a combiner for the HMD. A single piece of HOE with recorded interference fringes can separate the stereoscopic images to the left and right eyes. In addition, visible rays pass through the HOE with high transmittance, while the virtual images are diffracted with high efficiency. Using these optical characteristics of the HOE, we built a new HMD with which the real world can be observed with a wide field of view and the virtual images with high intensity.
Second, we propose the Retinal Projection Display. We completed a new optical system using a converging HOE that has the features of the Maxwellian view. The virtual images can be seen without ocular accommodation because the focal depth of the image is extremely deep. Coherent parallel laser rays modulated by a Spatial Light Modulator (LCD or DMD) are converged at the center of the crystalline lens and projected directly onto the retina. This condition is the same as pan focus. If this characteristic is applied to the binocular condition, it is very well suited to a binocular-parallax stereoscopic display, because the problem of the dissociation between accommodation and convergence may be solved.
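Why the Maxwellian view yields such a large depth of focus can be sketched with ordinary geometric optics: retinal blur scales with the effective aperture at the eye, and the converged beam is far narrower than the natural pupil. The numbers below are generic assumptions for illustration, not measurements from the prototype.

```python
# Rough sketch (standard geometric optics, not taken from the paper) of why a
# Maxwellian-view display has a very large depth of focus: the retinal blur of
# a defocused point scales with the effective pupil size, and in Maxwellian
# view the beam entering the eye is only a fraction of a millimetre wide.

def blur_angle_rad(aperture_m, focus_dist_m, object_dist_m):
    """Angular diameter of the geometric blur circle for an eye focused at
    focus_dist_m while viewing a point at object_dist_m."""
    return aperture_m * abs(1.0 / object_dist_m - 1.0 / focus_dist_m)

# Eye focused at 0.5 m, image presented optically at 2 m:
print(blur_angle_rad(3e-3, 0.5, 2.0))    # ~4.5e-3 rad with a 3 mm natural pupil
print(blur_angle_rad(0.3e-3, 0.5, 2.0))  # ~4.5e-4 rad with a 0.3 mm converged beam
```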
A three-dimensional video system based on integral photography using a micro-lens array is described. Four problems with the system are identified, and technical means to solve them are proposed. In particular, resolution characteristics, including those of pickup and display, are described in detail. The experimental system, using a television camera and a liquid crystal display (LCD), provides full-color, autostereoscopic 3-D images with full parallax in real time.
A computed three-dimensional (3-D) display system based on integral imaging is presented. The 3-D image is reconstructed by numerical processing of an optically observed image array formed by a micro-lens array. The reconstruction algorithm is simple and yields images viewed from arbitrary directions. Computer-based image retrieval makes it possible to improve image quality in terms of contrast, brightness, and resolution. We show experimental results of 3-D image reconstruction to test and verify the performance of the algorithms.
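One common way to obtain a view from an arbitrary direction out of such an elemental-image array is to collect the pixel at the same relative offset inside every elemental image. The sketch below illustrates that standard computational integral-imaging step with an assumed array layout; the paper's own algorithm may differ in detail.

```python
import numpy as np

# Minimal sketch of one standard way to synthesize a directional view from an
# elemental-image array: take the pixel at the same offset (u, v) inside every
# elemental image. The array layout and names are assumptions for illustration.

def extract_view(elemental_images, u, v):
    """elemental_images: array of shape (Ly, Lx, Py, Px) holding one Py x Px
    elemental image per micro-lens on an Ly x Lx lens array.
    Returns an Ly x Lx view corresponding to viewing direction (u, v)."""
    return elemental_images[:, :, v, u]

# Example with random data: a 30 x 40 lens array, 16 x 16 pixels per lens.
ei = np.random.rand(30, 40, 16, 16)
view = extract_view(ei, u=8, v=8)   # roughly central viewing direction
print(view.shape)                   # (30, 40)
```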
This paper describes in detail the historical development of the ICVision system, which is based on the partial pixel architecture. The partial pixel architecture allows the realization of three-dimensional (3-D) displays that are functionally equivalent to real-time holographic stereograms. As such, the architecture permits the simultaneous presentation of multiple stereoscopic images so that motion parallax is discernible in the resultant 3-D scene. The key innovation of the architecture is that each pixel is subdivided into partial pixels, which in turn can be implemented as individual diffraction gratings.
In addition to describing the partial pixel architecture, this paper presents the details of several demonstration devices including a static device developed for image evaluation, and two dynamic systems based on liquid crystal devices.
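To make the idea concrete, the sketch below shows one way view images could be mapped onto partial pixels, assuming each display pixel is split into one partial pixel per view and each partial pixel's grating steers its light toward the corresponding viewing zone. The horizontal-split layout and array shapes are assumptions for illustration, not the ICVision device geometry.

```python
import numpy as np

# Sketch of mapping view images onto a partial-pixel display: each physical
# pixel is split horizontally into N partial pixels, and partial pixel k
# carries the intensity of view k (its grating steers the light to viewing
# zone k). The layout here is assumed, not taken from the demonstration devices.

def assemble_partial_pixel_frame(views):
    """views: array of shape (N, H, W) with N view images.
    Returns an (H, W * N) frame with the N partial pixels of each display
    pixel laid out side by side."""
    n, h, w = views.shape
    frame = np.transpose(views, (1, 2, 0))  # (H, W, N): views interleaved per pixel
    return frame.reshape(h, w * n)

views = np.random.rand(4, 480, 640)               # 4 stereoscopic views
print(assemble_partial_pixel_frame(views).shape)  # (480, 2560)
```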
Holography, in the form of interactive holographic video, promises to fulfil some of the long-standing hopes for "no glasses" 3D image communication. However, the computational demands of holographic video remain extremely high and threaten to push response rates below acceptable levels. Faster image response is made possible by using holographic stereograms instead of true, fully computed holograms. Holographic stereograms find their inspiration in older autostereoscopic imaging technologies that date back to as early as 1862.
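The scale of the computation being avoided can be seen in the classical point-source method of computing a hologram, where every scene point contributes a fringe term at every hologram sample. The sketch below is a generic back-of-the-envelope illustration of that cost, with all sizes assumed; it is not code or data from the paper.

```python
import numpy as np

# Back-of-the-envelope sketch (not from the paper) of the cost a holographic
# stereogram avoids: a point-source computed hologram accumulates one fringe
# term per scene point per hologram sample, so the work grows as
# (number of points) x (number of samples). All sizes below are assumptions.

wavelength = 633e-9                     # HeNe red, metres
pitch = 10e-6                           # hologram sample pitch
nx = ny = 256                           # tiny hologram for the demo
x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(x, y)

points = [(0.0, 0.0, 0.05), (2e-4, -1e-4, 0.06)]   # (x, y, z) scene points

hologram = np.zeros((ny, nx))
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    hologram += np.cos(2 * np.pi * r / wavelength)  # fringe from one point

# Work ~ len(points) * nx * ny evaluations; a million-point scene on a
# full-resolution hologram is what pushes response times past interactive rates.
print(hologram.shape)
```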
This paper deals with practical and theoretical issues related to motion parallax. Motion parallax implies that the perception of depth can be extracted from a temporal sequence of images that contain different perspectives. The present paper will focus on the relative effectiveness of motion parallax as compared to stereoscopic depth perception. It will be argued that motion parallax alone will generate a strong sense of depth, even in the absence of stereoscopic cues. Two studies directly comparing motion parallax and stereoscopy will be presented showing that, under certain conditions, these cues can be equally efficient and that there can be an additive effect when both cues are present. A theoretical discussion on the effect of optical distortions and how such distortions can influence motion parallax from a viewer's perspective will follow. Particular emphasis will be placed on the optical distortions produced by progressive addition lenses used to correct for presbyopia. Finally, research avenues will be proposed to answer some of the theoretical and practical issues related to motion parallax in our daily activities.
It is important to know how motion affects the perception of depth when moving images are presented on a three-dimensional display. The effect of motion on binocular depth perception has at least three aspects. First, differences in image motion produce differences in the temporal frequency content of the stimulation, which may influence depth perception. Second, the mechanism of motion analysis may interact with that of binocular depth perception. Third, temporal changes of monocular and binocular depth cues provide information about motion in depth. Psychophysical studies related to these aspects have revealed the importance of considering motion signals in the visual system's depth processing. Reviewing these studies, we discuss possible interactions between motion and depth perception, in addition to the general effect of the temporal characteristics of moving stimuli. We also discuss the possible contribution of a variety of depth cues to the perception of motion in depth.
The 3D research project at the Telecommunication Advancement Organization (TAO) in Japan has been developing a 3D display system in which natural observation is realized. Recent results are reported here.
The research is classified into three main parts. The first is the development of a 3D display apparatus that realizes the super-multi-view condition. The second is the development of an image processing system that generates the super-multi-view video signals. The third is research to measure the states of eye control when super-multi-view 3D images are observed, together with the development of the corresponding measuring system.
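As it is usually stated, the super-multi-view condition requires the pitch between adjacent viewpoints at the viewing position to be smaller than the eye pupil, so that two or more parallax images enter each pupil at once. The sketch below applies that rule of thumb with assumed numbers; it does not reflect the TAO system's actual parameters.

```python
# Sketch of the super-multi-view condition as it is usually stated: the pitch
# of adjacent viewpoints at the viewing position must be smaller than the eye
# pupil so that two or more parallax images enter each pupil at once. The
# numbers below are generic assumptions, not TAO system specifications.

def views_required(viewing_zone_width_mm, pupil_diameter_mm=3.0):
    """Minimum number of views across the viewing zone so that the view
    pitch does not exceed half the pupil diameter (>= 2 views per pupil)."""
    pitch = pupil_diameter_mm / 2.0
    return int(viewing_zone_width_mm // pitch) + 1

# A 300 mm wide viewing zone with a 3 mm pupil needs on the order of 200 views.
print(views_required(300.0))   # 201
```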
The ongoing integration of IT systems is offering computer users a wide range of new (networked) services and access to an explosively growing host of multimedia information. Consequently, apart from the currently established computer tasks, the access and exchange of information is generally considered the driving computer application of the next decade. Major advances are required in the development of the human-computer interface in order to enable end-users with the most varied backgrounds to master the added functionality of their computers with ease, and to make the use of computers in general an enjoyable experience. Research efforts in computer science are concentrated on user interfaces that support the highly evolved human perception and interaction capabilities better than today's 2D graphical user interfaces with mouse and keyboard. Thanks to the growing computing power of general-purpose computers, highly interactive interfaces supporting a variety of input and output modalities are becoming feasible. Multimodal interaction not only makes working with a computer more "natural" and "intuitive" but can substantially help to disambiguate the exchange of information in both directions between the user and the computer. Recent approaches aim to create a (virtual) three-dimensional interaction environment in which users find a clear arrangement of the media objects, paths, and links used, as well as highly responsive tools supporting continuous interaction. The article overviews the current concepts in general and describes an approach developed by the authors in greater detail as a case study of a 3D PC (the mUltimo3D project). This particular approach uses 3D displays that do not require stereo glasses to present a 3D graphical user interface. A newly developed 3D display makes it possible to seamlessly integrate the virtual interaction space into the real working space. Video trackers remotely sensing the user offer new modalities for unencumbered interaction with the displayed objects.
For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media instead of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the need for a neutral representation method independent of input/output systems. The following discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach offers a simple and straightforward answer to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for compressing the tremendous amount of data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
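A common way to make the ray-based idea concrete is to index each captured ray by the camera position on a capture plane and the pixel direction within that camera, so that synthesizing a novel view amounts to selecting the appropriate rays from the table. The sketch below uses nearest-neighbour selection on an assumed 4-D layout purely for illustration; the paper's actual ray-data representations are more elaborate.

```python
import numpy as np

# Minimal sketch of a ray-based representation: rays captured by a regular
# camera array are stored as a 4-D table indexed by camera position (u, v) and
# pixel direction (s, t); a novel view on the camera plane is synthesized by
# picking, for every output pixel, the nearest captured ray. The data layout
# is an assumption for illustration, not the paper's actual ray-data format.

def render_on_camera_plane(rays, u_new, v_new):
    """rays: array of shape (Nv, Nu, Nt, Ns) of ray intensities.
    u_new, v_new: continuous position of the virtual camera in array units.
    Nearest-neighbour selection in (u, v); finer methods would interpolate."""
    nv, nu = rays.shape[:2]
    ui = min(max(int(round(u_new)), 0), nu - 1)
    vi = min(max(int(round(v_new)), 0), nv - 1)
    return rays[vi, ui]

rays = np.random.rand(8, 8, 120, 160)                  # 8 x 8 cameras, 120 x 160 px
print(render_on_camera_plane(rays, 3.6, 4.2).shape)    # (120, 160)
```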
We describe streaming 3D video on the Argus sensor space. Argus is a Beowulf-style distributed computer with 64 processors and 64 video camera/capture pairs. Argus is a test-bed for comparing sensor space modeling and reconstruction algorithms. We describe the implementation of tomographic and stereo triangulation algorithms on this space and consider mappings from the sensor space to associated display spaces.
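Of the two algorithm families mentioned, stereo triangulation has a compact textbook form for rectified camera pairs: depth follows from the disparity between matched pixels, the focal length, and the baseline. The sketch below shows that relation only as background; it is not the Argus implementation.

```python
# Textbook rectified-stereo triangulation, included only to illustrate the
# second algorithm family mentioned above; it is not the Argus implementation.
# Depth follows from similar triangles: Z = f * B / d for disparity d.

def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Depth (metres) of a point seen with the given pixel disparity by two
    rectified cameras with focal length focal_px (pixels) and baseline_m."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Two cameras 10 cm apart, 800 px focal length, 16 px disparity -> 5 m.
print(triangulate_depth(16.0, 800.0, 0.10))   # 5.0
```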
Volume holographic imaging elements are capable of extracting two-dimensional narrow-band slices from extended three-dimensional polychromatic radiators. We demonstrate the use of this property for imaging in the context of the 90° volume holographic geometry.
In this work we describe a method for optoelectronic encryption of 3D images based on digital holography. Phase-shift digital holography is used to record the complex amplitude distribution of the Fresnel diffraction patterns generated by a 3D object illuminated with coherent light. First, we review a technique to optically encrypt the information contained in a 2D object by using digital holography. The method is then extended to encrypt 3D information. In both cases encryption is performed by using random phase codes as key functions. Decryption can be carried out optically or digitally. We also show that, after decryption, 3D objects can be reconstructed digitally with different perspectives by performing simple operations on the decrypted hologram. Experimental results are presented.
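The core idea can be sketched numerically: the object field is multiplied by a random phase key, Fresnel-propagated to produce a noise-like encrypted field, and recovered by back-propagating and applying the conjugate key. The simulation below uses a Fresnel transfer-function propagator with assumed grid, wavelength, and distance values; it omits the phase-shift recording step and illustrates only the principle, not the authors' optical setup.

```python
import numpy as np

# Numerical sketch of the encryption idea, under simplifying assumptions: a
# 2-D object field is multiplied by a random phase key and Fresnel-propagated;
# decryption back-propagates the field and removes the key with its conjugate.
# Phase-shift recording of the complex field is not simulated here, and the
# grid/wavelength/distance values are illustrative only.

n = 256
pitch = 10e-6                     # sample pitch (m)
wavelength = 633e-9
z = 0.05                          # propagation distance (m)

fx = np.fft.fftfreq(n, d=pitch)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))  # Fresnel transfer fn

def propagate(field, transfer):
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

rng = np.random.default_rng(0)
key = np.exp(2j * np.pi * rng.random((n, n)))      # random phase code (the key)

obj = np.zeros((n, n)); obj[96:160, 96:160] = 1.0  # simple test object
encrypted = propagate(obj * key, H)                # noise-like encrypted field

decrypted = propagate(encrypted, np.conj(H)) * np.conj(key)
print(np.allclose(np.abs(decrypted), obj, atol=1e-6))   # True: object recovered
```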