KEYWORDS: Acoustics, 3D volumetric display, Visualization, 3D volumetric displays, Particles, Particle systems, Holography, Plasmonics, 3D displays, 3D visualizations
Current display approaches, such as VR, allow us to glimpse multimodal 3D experiences, but users need to wear headsets and other devices to trick the brain into believing that the content they see, hear or feel is real. Light-field, holographic or volumetric displays avoid the use of headsets, but they constrain the user’s ability to interact with them (e.g. content is out of reach of the user’s hands, and the user is confined to specific viewing locations) and, most importantly, still cannot simultaneously deliver sound and touch. In this talk, we will present the Multimodal Acoustic Trapping Display (MATD): a mid-air volumetric display that can simultaneously deliver visual, tactile and audio content using phased arrays of ultrasound transducers. The MATD uses ultrasound to trap, quickly move and colour a small particle in mid-air, creating coloured volumetric shapes visible to the naked eye. Exploiting the pressure delivered by the ultrasound waves, the MATD can also create points of high pressure that our bare hands can feel, and induce air vibrations that produce audible sound. The system demonstrates particle speeds of up to 8.75 m/s and 3.75 m/s in the vertical and horizontal directions, respectively. In addition, our technique offers opportunities for non-contact, high-speed manipulation of matter, with applications in computational fabrication and biomedicine.
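The core phased-array operation — choosing per-transducer phase delays so all emitted waves arrive in phase at a chosen focal point — can be sketched in a few lines. This is an illustrative helper only, not the MATD control code; the 40 kHz drive frequency and 343 m/s speed of sound are assumed typical values.

```python
import math

def focus_phases(transducers, focal_point, freq=40e3, c=343.0):
    """Phase delay (radians) for each transducer so that its emission
    arrives in phase at focal_point -- the basic focusing step behind
    an ultrasonic phased-array trap (hypothetical helper)."""
    k = 2 * math.pi * freq / c  # wavenumber
    phases = []
    for pos in transducers:
        d = math.dist(pos, focal_point)       # path length to the focus
        phases.append((-k * d) % (2 * math.pi))  # compensate travel phase
    return phases
```

Transducers equidistant from the focal point receive identical phases, so their pressure contributions add constructively there.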
Electroholography enables the projection of three-dimensional (3-D) images using a spatial-light modulator. The extreme computational complexity and load involved in generating a hologram make real-time production of holograms difficult. Many methods have been proposed to overcome this challenge and realize real-time reconstruction of 3-D motion pictures. We review two real-time reconstruction techniques for aerial-projection holographic displays. The first reduces the computational load required for a hologram by using an image-type computer-generated hologram (CGH) because an image-type CGH is generated from a 3-D object that is located on or close to the hologram plane. The other technique parallelizes CGH calculation via a graphics processing unit by exploiting the independence of each pixel in the holographic plane.
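The per-pixel independence that makes GPU parallelization of CGH calculation possible can be illustrated with a minimal point-cloud hologram sketch. All names and parameters here are illustrative assumptions, not the authors' implementation.

```python
import math

def cgh_pixel(x, y, points, wavelength=633e-9):
    """Contribution of all object points to one hologram pixel.
    Each pixel depends only on the object data, never on other
    pixels -- this independence is what a GPU exploits."""
    k = 2 * math.pi / wavelength
    total = 0.0
    for (px, py, pz, amp) in points:
        r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
        total += amp * math.cos(k * r)  # interference fringe term
    return total

def cgh(width, height, pitch, points):
    # Every pixel is computed independently -> trivially parallel.
    return [[cgh_pixel(ix * pitch, iy * pitch, points)
             for ix in range(width)] for iy in range(height)]
```

In a real GPU implementation each thread would evaluate one `cgh_pixel` call; the serial double loop above is only for readability.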
We designed and developed a control circuit for a three-dimensional (3-D) light-emitting diode (LED) array to be used in volumetric displays exhibiting full-color dynamic 3-D images. The circuit was implemented on a field-programmable gate array; therefore, pulse-width modulation, which requires high-speed processing, could be performed in real time. We experimentally evaluated the developed system by measuring the luminance of an LED as its input was varied, and confirmed that the system operates as intended. In addition, we demonstrated that the volumetric display exhibits different full-color dynamic two-dimensional images in two orthogonal directions. Each of the exhibited images could be seen only from its prescribed viewpoint. Such directional characteristics of the system are beneficial for applications including digital signage, security systems, art, and amusement.
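The pulse-width modulation the circuit performs per LED can be modelled in a few lines. This is a software sketch of the technique only; the actual system implements it as high-speed logic on the FPGA.

```python
def pwm_wave(duty, period=256):
    """One PWM period as 0/1 samples: the LED is on for `duty` ticks
    out of `period`, so its time-averaged brightness is duty/period
    (illustrative model of the FPGA's per-LED PWM)."""
    return [1 if t < duty else 0 for t in range(period)]
```

For example, `pwm_wave(64)` drives the LED at 64/256 = 25% of full brightness; an 8-bit `duty` value gives 256 brightness levels per color channel.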
KEYWORDS: Field programmable gate arrays, Microscopy, Image processing, Real time image processing, Optical imaging, Imaging systems, Cameras, Particles, Image resolution, Digital signal processing, Prototyping
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable image capture at frame rates 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for cell sorting, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with a sampling rate of up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal, which encodes grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We employed a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in the real-time identification of small particles (beads), serving as virtual biological cells, flowing through a microfluidic channel.
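The first pre-processing step — cutting the continuous 1-D ADC sample stream into 2-D image frames — can be sketched as follows. This is a hypothetical helper with a fixed scan geometry, not the FPGA firmware; in the real system this reshaping runs in hardware at multi-gigabyte-per-second rates.

```python
def reconstruct_frames(samples, line_len, lines_per_frame):
    """Reshape a serial ADC sample stream into 2-D frames:
    consecutive runs of `line_len` samples form scan lines, and
    `lines_per_frame` lines form one image (illustrative sketch)."""
    frame_len = line_len * lines_per_frame
    nframes = len(samples) // frame_len  # drop any trailing partial frame
    frames = []
    for f in range(nframes):
        base = f * frame_len
        frames.append([samples[base + r * line_len: base + (r + 1) * line_len]
                       for r in range(lines_per_frame)])
    return frames
```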
We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has its own projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically unlimited, and no meaningful pattern can be seen outside the projection directions. In this paper, we extend the algorithm to record multiple 2-D projection patterns in color. There are two common approaches to color mixing: additive and subtractive. Additive color mixing, used to mix light, is based on RGB primaries; subtractive color mixing, used to mix inks, is based on CMY primaries. We devised two coloring methods, one based on additive mixing and one on subtractive mixing. We performed numerical simulations of the coloring methods and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from an 8×8×8 array of light-emitting diodes (LEDs), whose lighting patterns are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art and so forth.
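The two coloring approaches can be sketched with the standard mixing formulas: additive mixing sums light per RGB channel, while subtractive mixing sums absorptions in CMY and converts back. This is a simplified model with names of our own choosing, not the paper's algorithm.

```python
def additive_mix(colors):
    """Additive (light) mixing: channel-wise sum of RGB, clipped to 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

def subtractive_mix(colors):
    """Subtractive (ink) mixing: convert each RGB color to CMY
    absorptions, sum them, clip, and convert back (simplified model)."""
    cmy = [tuple(255 - v for v in c) for c in colors]
    mixed = tuple(min(255, sum(c[i] for c in cmy)) for i in range(3))
    return tuple(255 - v for v in mixed)
```

For example, red and green light mix additively to yellow, while cyan and yellow inks mix subtractively to green.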
KEYWORDS: Computer generated holography, RGB color model, 3D image reconstruction, Holograms, Diffraction, Chromium, Digital holography, 3D image processing, Spatial light modulators, 3D displays
A calculation reduction method for color digital holography (DH) and computer-generated holograms (CGHs) using color space conversion is reported. Color DH and color CGHs are generally calculated in RGB space. We calculate color DH and CGHs in other color spaces (e.g., YCbCr) to accelerate the calculation. In YCbCr color space, an RGB image or RGB hologram is converted to a luminance component (Y), a blue-difference chroma component (Cb), and a red-difference chroma component (Cr). The human eye readily perceives even small differences in the luminance component, but is far less sensitive to differences in the chroma components. In this method, the luminance component is therefore sampled at full resolution while the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color DH and CGHs. We compute diffraction calculations from the components, and then convert the diffracted results from YCbCr color space back to RGB color space. The proposed method, which can in theory accelerate the calculation by up to a factor of 3, runs more than twice as fast as the calculation in RGB color space.
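The conversion and chroma down-sampling steps can be sketched as follows, using ITU-R BT.601 coefficients in a simplified floating-point form. This is an illustrative model, not the authors' code; the BT.601 coefficient choice is our assumption.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 RGB -> YCbCr conversion (float, no offset).
    Y carries luminance; Cb and Cr carry color differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)
    return y, cb, cr

def downsample(plane, factor=2):
    """Keep every `factor`-th sample in both directions. Applying this
    to Cb and Cr shrinks their diffraction calculations by factor**2
    while Y stays at full resolution."""
    return [row[::factor] for row in plane[::factor]]
```

With 2× down-sampling, the two chroma diffraction calculations each cost a quarter of the luminance one, which is where the theoretical speedup comes from.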
We present a special-purpose computer named HORN (HOlographic ReconstructioN) for fast calculation of computer-generated holograms (CGHs). The HORN realizes parallel processing of the CGH calculation by using field-programmable gate arrays. The latest version, HORN-7, can reconstruct holographic images more clearly than previous HORNs because it can compute CGHs as phase-only holograms (kinoforms). In addition, the HORN-7 can directly output calculated CGHs to a spatial-light modulator via the Digital Visual Interface. In this paper, we demonstrate real-time reconstruction of holographic motion pictures by the HORN-7. We calculated CGHs of 1,920 × 1,080 pixels from object data of ~6,000 points, and succeeded in reconstructing holographic motion pictures from the calculated CGHs at a rate of ~7 frames per second.
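The phase-only (kinoform) encoding mentioned above can be sketched as follows: the complex object field at each hologram pixel is reduced to its argument, discarding the amplitude. This is an illustrative sketch of the general technique, not the HORN-7 pipeline.

```python
import cmath
import math

def kinoform(field):
    """Phase-only (kinoform) hologram: keep only the phase angle of
    the complex field at each pixel, mapped into [0, 2*pi). Amplitude
    is discarded, which suits phase-modulating SLMs (sketch)."""
    return [[cmath.phase(v) % (2 * math.pi) for v in row] for row in field]
```

Discarding amplitude trades some reconstruction fidelity for much higher light efficiency, since a phase-only SLM does not absorb light the way an amplitude hologram does.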