Single-frame three-dimensional imaging using spectral-coded patterns and multispectral snapshot cameras

Chen Zhang, Anika Brahm, Andreas Breitbarth, Maik Rosenberger, Gunther Notni

Abstract
We present an approach for single-frame three-dimensional (3-D) imaging using multiwavelength array projection and a stereo vision setup of two multispectral snapshot cameras. Thus a sequence of aperiodic fringe patterns at different wavelengths can be projected and detected simultaneously. For the 3-D reconstruction, a computational procedure for pattern extraction from multispectral images, denoising of multispectral image data, and stereo matching is developed. In addition, a proof-of-concept is provided with experimental measurement results, showing the validity and potential of the proposed approach.

1. Introduction

High-speed 3-D imaging meets the increasing demand for real-time capability in nondestructive industrial inspection, human–machine interaction, biomedical and security applications, etc. The common real-time solutions for three-dimensional (3-D) imaging, e.g., passive stereo,1,2 time-of-flight (ToF) cameras,3,4 and focal tomography,5 are limited in depth resolution and thus not suitable for tasks with high-accuracy requirements. A well-established high-accuracy 3-D imaging technique is pattern-projection-based stereo photogrammetry. It solves the stereo matching problem with pixel-level virtual features created by the projection of a sequence of light patterns. The pattern projection method can therefore realize pixel matching even at a wide baseline, thus improving the depth resolution. Moreover, the virtual features ensure high measurement robustness on nontextured or sparsely textured surfaces. For such 3-D imaging systems, there are two possible approaches to raise the 3-D frame rate. The first approach is to minimize the number N of patterns needed to compute a single 3-D image, ideally down to N = 1. Previous 3-D techniques based on single-frame pattern projection can be classified into two groups according to the pattern type: monochromatic patterns with phase modulation in the frequency domain6–8 and composite RGB fringe patterns, in which the phase shift is coded with RGB colors.9–11 However, the decoding of monochromatic patterns produces numerous artifacts on objects with shape discontinuities or very sharp edges, and a major challenge of RGB fringe projection is that phase map unwrapping becomes difficult without additional patterns (e.g., gray codes). Moreover, these techniques are affected by nonuniform surface properties and are thus limited in their range of applications.

The other way to real-time 3-D imaging is to speed up both the pattern projection and the single-image acquisition. Typical high-speed projection techniques are laser speckle projection with acousto-optical deflection,12 multiaperture or array projection,13,14 and GOBO projection technologies.15,16 With these mechanical or analog projection principles, a 2-D projection rate of up to 100 kHz can be realized, enabling a 3-D frame rate of up to 10 kHz. However, a drawback is that correspondingly expensive high-speed cameras are needed, which also entail a high data-transfer effort.

Currently, multispectral cameras,17 especially miniaturized multispectral snapshot cameras,18–20 offer possibilities for real-time spectral imaging. They realize simultaneous image data acquisition in multiple spectral bands. In this contribution, we demonstrate an approach for active single-frame 3-D imaging based on multiwavelength pattern projection and a stereo vision setup of two multispectral snapshot cameras. Furthermore, we present a proof-of-concept with first experimental results.

2. Approach to Multiwavelength Pattern Projection

The basic concept of the proposed approach is to project various patterns at different wavelengths and to detect these patterns simultaneously in the corresponding spectral bands of two multispectral snapshot cameras in a stereo arrangement (see Fig. 1). To this end, the projector's spectral characteristics must be adapted to the spectral bands of the cameras. In this way, the 3-D reconstruction from a single stereo image pair can be performed using a sequence of patterns, which enhances the stability and robustness of the stereo matching. At the same time, the data-transfer effort is much lower than in high-speed 3-D imaging techniques using temporal pattern sequences, enabling a lower hardware utilization.

Fig. 1. Sketch of the single-frame active stereo 3-D imaging system.

2.1. Multispectral Snapshot Camera

Nowadays, miniaturized multispectral snapshot cameras are mainly based on the principle of an on-chip multispectral filter array21 (MSFA). MSFAs are generally available with up to 25 spectral bands in the visible and near-infrared (NIR) spectral range. They are composed of multiple mosaic filter elements, each subelement of which corresponds to a specific spectral band. The MSFA is mounted pixel-synchronously in front of a monochromatic image sensor, as shown in Fig. 2(a). Because each sensor pixel captures only one spectral component, a demosaicing algorithm is necessary to recover the missing spectral components at the individual pixels.

Fig. 2. Multispectral snapshot camera: (a) schematic of an MSFA-based multispectral camera and (b) spectral responses of the Silios multispectral NIR camera.22

Figure 2(b) shows the spectral responses of the Silios multispectral cameras used in this work.22 This MSFA is composed of one panchromatic neutral band and eight spectral bands in the red to NIR spectral range. Their central wavelengths lie between 650 and 930 nm, and they have a full-width at half-maximum (FWHM) of about 40 nm. Hence, the spectral characteristics of the projector should be designed according to this MSFA.

Assuming a linear transfer function of the image sensor, the spectral response value I of an ideal digital camera can be formulated as

Eq. (1)

$$I = \int_\lambda L(\lambda)\, o(\lambda)\, s_\mathrm{cam}(\lambda)\, r(\lambda)\, \mathrm{d}\lambda,$$

where λ is the wavelength, L(λ) is the spectral power distribution of the illumination, o(λ) is the spectral transmission of the camera optics, s_cam(λ) is the spectral sensitivity of the image sensor, and r(λ) is the spectral reflectance of the object. Using a discrete representation of these spectral functions, Eq. (1) can be written in the following matrix notation:

Eq. (2)

$$I = M \cdot r,$$
where the row vector M denotes the element-wise product of the discretized forms of L(λ), o(λ), and s_cam(λ), and r is the spectral reflectance in the form of a column vector. Finally, the mathematical expression of the eight-band multispectral image acquisition is obtained by extending Eq. (2) to multiple bands:

Eq. (3)

$$I_\lambda = M_\lambda \cdot r, \quad \text{with} \quad I_\lambda = [I_1, \ldots, I_8]^T, \quad M_\lambda = \begin{bmatrix} M_1 \\ \vdots \\ M_8 \end{bmatrix},$$
where Iλ denotes the vector of spectral response values at an image pixel and Mλ is the overall spectral sensitivity matrix of the multispectral camera; each row of Mλ corresponds to one spectral response curve in Fig. 2(b).
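For illustration, the following sketch evaluates this discrete image-formation model with synthetic spectral data; the Gaussian band shapes, the sampling grid, and the reflectance are assumptions standing in for the real camera calibration data.

```python
import numpy as np

# Spectral sampling grid in nm and band centers roughly as in Fig. 2(b);
# all curves here are synthetic stand-ins for the camera calibration data.
wavelengths = np.arange(600.0, 1001.0, 10.0)
centers = np.linspace(650.0, 930.0, 8)
sigma = 40.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM of 40 nm

# Overall sensitivity matrix M_lambda: one row per band, collapsing
# illumination, optics, and sensor sensitivity into one Gaussian curve.
M = np.exp(-0.5 * ((wavelengths[None, :] - centers[:, None]) / sigma) ** 2)

# A synthetic object reflectance r and the resulting eight band values,
# i.e., I_lambda = M_lambda . r as in Eq. (3).
r = np.random.default_rng(0).uniform(0.2, 0.8, wavelengths.size)
I = M @ r
print(I.round(3))
```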

2.2. Multiwavelength Array Projector with Aperiodic Sinusoidal Fringe Patterns

For the realization of a simultaneous multiwavelength projection, the multiaperture principle14 is applied. In the implementation of this principle, it is unavoidable that some projection units project off-axis. The use of pseudostatistical patterns is therefore advantageous for the projection optics design and the alignment of the single projection units: with these patterns, dispersion at the projector lens has no influence on the 3-D results, and neither the patterns' characteristics nor their mutual relations need to be controlled precisely. In this work, aperiodic sinusoidal fringe patterns23 are used because they are simpler to fabricate than the commonly used speckle patterns. Experimental investigations24 have shown that reasonable 3-D measurement results can be obtained with N ≥ 6 vertical patterns. With the adaptation that these patterns are spectrally coded instead of being temporally projected, an array projector is developed that consists of N projection units projecting N different spectral-coded fringe patterns simultaneously:

Eq. (4)

$$I_\lambda^\mathrm{proj}(x, y) = a_\lambda(x) + b_\lambda(x) \sin[c_\lambda(x)\, x + d_\lambda(x)], \quad \lambda = 1, \ldots, N,$$
with spatially and spectrally varying properties aλ(x), bλ(x), cλ(x), and dλ(x).

In the case of aperiodic patterns, the offset aλ(x) and amplitude bλ(x) are properties of the spectral light sources in the individual projection units, and these light sources should be adjusted to the same brightness level. The period length 2π/cλ(x) and phase shift dλ(x) of each pattern are generated using the random method of Ref. 23. First, N spatial-frequency spectra are generated with a pseudorandom number generator. Then a bandpass filter is applied to these spectra in order to control the maximum and minimum half-period lengths of the corresponding intensity profiles. At the middle working distance, the maximum and minimum half-period lengths of the projected fringes, as observed by the cameras, should be 20 and 10 pixels, respectively. Finally, the filtered spectra are transformed back into the spatial domain to generate aperiodic intensity profiles. Figure 3 shows the intensity profiles at different wavelengths within the marked horizontal line segment. Nevertheless, the fabrication of such patterns with continuously varying transmission is technically difficult. As a simplification, the intensity profiles in Fig. 3 are binarized so that the patterns can be fabricated as binary slides, and the sinusoidal profile shapes are produced by slightly defocusing the projection.
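The following sketch illustrates this generation procedure (pseudorandom frequency spectrum, bandpass limiting of the half-period lengths, inverse transform, binarization); the profile length, period bounds, and binarization threshold are illustrative assumptions, not the exact parameters of Ref. 23.

```python
import numpy as np

def aperiodic_profile(length=1280, half_period=(10, 20), seed=0):
    """One binarized aperiodic fringe profile (one row of one slide)."""
    rng = np.random.default_rng(seed)
    # Pseudorandom complex spectrum: random amplitude and phase per bin.
    spectrum = rng.standard_normal(length) + 1j * rng.standard_normal(length)
    # Band-pass: keep bins whose half period length/(2k) is in the range.
    k = np.abs(np.fft.fftfreq(length) * length)
    keep = (k >= length / (2 * half_period[1])) & \
           (k <= length / (2 * half_period[0]))
    spectrum[~keep] = 0.0
    # Back-transform to the spatial domain; the real part is the profile.
    profile = np.fft.ifft(spectrum).real
    # Binarize for slide fabrication; projector defocus restores the
    # sinusoidal shape optically.
    return (profile > np.median(profile)).astype(np.uint8)

patterns = [aperiodic_profile(seed=s) for s in range(8)]   # N = 8 patterns
```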

Fig. 3. Spectral-coded aperiodic sinusoidal fringe patterns (N = 8).

Figure 4 shows the schematic of the multiwavelength array projector. Figure 4(a) shows the optical setup of a single projection unit: a gold-coated concentrator shapes the light emitted by a light-emitting diode (LED) into a homogeneous beam illuminating a slide with an aperiodic fringe pattern. Figure 4(b) shows the arrangement of the projection units; the projector consists of eight pipe projection units with different high-power LEDs (3 to 5 W). Figure 4(c) shows the central wavelengths of the LEDs, which have an FWHM of ca. 50 nm. The cameras and the single projection units are adjusted for a specified middle working distance with overlapping viewing and illumination fields, with the projection units slightly defocused in order to generate the sinusoidal intensity profiles. The LEDs emit continuously, so that all patterns can be extracted from a single multispectral image.

Fig. 4. Optical setup of the multiwavelength array projector: (a) sketch of a single projection unit,14 (b) N = 8 multiwavelength array projector, and (c) spectral bands of the multiwavelength array projector.

3. Experimental Setup

Figure 5 shows the experimental setup consisting of two Silios multispectral NIR cameras and one multiwavelength array projector at the center. The cameras are synchronized and operate at a frame rate of 60 Hz; the full resolution of the CMOS sensor is 1280 × 1024 pixels. The subimages at each spectral band have a reduced resolution of about 0.146 megapixels and must be restored to the full image sensor resolution by demosaicing.

Fig. 5. Single-frame 3-D imaging system: experimental setup.

The stereo vision setup has a triangulation angle of ca. 18 deg and a baseline of ca. 480 mm. The measurement volume is about 300 mm × 300 mm × 300 mm, and the middle working distance is 1.5 m. Because objectives optimized for the NIR spectral range are used, the cross-channel image distortion due to chromatic aberration can be neglected. The geometric calibration of the stereo camera setup is performed using Zhang's camera calibration method25 with the middle spectral band at 730 nm.

4. 3-D Reconstruction with Multiwavelength Pattern Projection

4.1. Pattern Extraction and Image Data Denoising

First, a dark-signal correction of the multispectral images is performed, and the full sensor resolution is recovered by demosaicing, for which we used a simple bilinear interpolation method. Because of the high spectral crosstalk in some bands, caused by the irregular shapes of their spectral response curves [see Fig. 6(a)], the fringes appear smeared in the images owing to the cross-channel mixing of the different spectral patterns, as shown in Fig. 7(a). In order to recover the designed spatial modulation of the patterns, a computational crosstalk compensation is performed. As a result, a virtual multispectral image cube with lower crosstalk is reconstructed by a linear combination of the spectral bands, whose weighting coefficients are determined from the real spectral sensitivity data of the multispectral image sensors.
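A minimal sketch of the bilinear demosaicing step, assuming a 3 × 3 mosaic layout (eight spectral bands plus one panchromatic band); the position of each band within the mosaic is an assumption for illustration.

```python
import cv2
import numpy as np

def demosaic_band(raw, band_row, band_col, pattern=3):
    """Recover one spectral band to full resolution by bilinear upscaling."""
    h, w = raw.shape
    sub = raw[band_row::pattern, band_col::pattern]   # sparse band samples
    return cv2.resize(sub, (w, h), interpolation=cv2.INTER_LINEAR)

# Synthetic raw frame at the sensor resolution of Sec. 3 (1280 x 1024).
raw = np.random.default_rng(1).random((1024, 1280)).astype(np.float32)
cube = np.stack([demosaic_band(raw, r, c)             # 9-band image cube
                 for r in range(3) for c in range(3)])
```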

Fig. 6. Crosstalk compensation: (a) original spectral response curves, (b) ideal Gaussian spectral response curves, and (c) corrected spectral response curves.

For the crosstalk compensation, a virtual spectral sensitivity matrix Mλv is defined, in which the spectral response curve of each band has a narrow Gaussian shape with the same FWHM26 [see Fig. 6(b)]. Subsequently, a correction matrix T containing the weighting coefficients of the original spectral bands is calculated from a linear mapping between the original spectral sensitivity matrix Mλ and the ideal virtual spectral sensitivity matrix Mλv. This is done by minimizing the cost function f:

Eq. (5)

$$f = \left\| M_\lambda^v - T \cdot M_\lambda \right\|^2.$$

To verify the obtained correction matrix T, it is applied to the original spectral response curves Mλ; the resulting corrected spectral response curves are shown in Fig. 6(c). In our experiment, the corrected spectral response curves exhibit a mean correlation coefficient of 0.989 with the ideal Gaussian spectral response curves Mλv, confirming the validity of the linear method. Using the correction matrix T, the eight raw spectral values Iλr = [I1r, …, I8r]T at a pixel can be converted into virtual spectral data Iλv = [I1v, …, I8v]T:

Eq. (6)

$$I_\lambda^v = T \cdot I_\lambda^r.$$
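A minimal sketch of Eqs. (5) and (6): the correction matrix T is obtained as a least-squares fit between synthetic stand-ins for the measured and the ideal Gaussian sensitivity matrices and then applied to the raw spectral values of a pixel.

```python
import numpy as np

wl = np.arange(600.0, 1001.0, 2.0)              # wavelength grid in nm
centers = np.linspace(650.0, 930.0, 8)
rng = np.random.default_rng(2)

def gaussians(fwhm):
    """Bank of Gaussian response curves, one row per spectral band."""
    s = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / s) ** 2)

M_ideal = gaussians(20.0)                       # virtual target matrix M_v
M_real = gaussians(40.0) + 0.08 * rng.random((8, wl.size))  # with "crosstalk"

# Least-squares fit of min_T || M_ideal - T @ M_real ||^2 via the
# transposed system M_real.T @ T.T = M_ideal.T, cf. Eq. (5).
T = np.linalg.lstsq(M_real.T, M_ideal.T, rcond=None)[0].T

# Apply T to the raw spectral values of one pixel, cf. Eq. (6).
I_raw = M_real @ rng.uniform(0.2, 0.8, wl.size)
I_virtual = T @ I_raw
```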

By applying this crosstalk compensation at each pixel, the reconstructed virtual image appears as if the sensor had the corrected spectral response curves of Fig. 6(c). Figure 7 shows the performance of the crosstalk compensation at the 890-nm band, which suffers from high spectral crosstalk. The image after crosstalk compensation exhibits a higher fringe contrast and less smearing. Furthermore, the sensitivity difference between the stereo multispectral cameras is reduced because the crosstalk compensation of both cameras targets the same ideal spectral response curves. The next step is to filter out the ambient light and the high-frequency sensor noise. For this purpose, the bandpass filtering method of Guo and Huang27 is implemented. First, the image is multiplied pixel-wise with a Hamming window in order to mitigate artifacts from the image margins; the image is then Fourier transformed, and a horizontal bandpass filter, designed from the known bandwidth range of the fringe patterns, is applied to extract the projected patterns.
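A sketch of this pattern-extraction step: spatial Hamming windowing, 2-D Fourier transform, a horizontal bandpass matched to the fringe bandwidth, and inverse transform. The cutoff frequencies follow from the designed half-period range of 10 to 20 pixels; the implementation details are assumptions rather than the exact filter of Ref. 27.

```python
import numpy as np

def extract_fringes(img, half_period=(10, 20)):
    """Band-pass extraction of the horizontal fringe modulation."""
    h, w = img.shape
    # Hamming window against leakage from the image margins.
    window = np.outer(np.hamming(h), np.hamming(w))
    F = np.fft.fftshift(np.fft.fft2(img * window))
    # Horizontal spatial frequency of each FFT bin in cycles per pixel.
    fx = np.abs(np.fft.fftshift(np.fft.fftfreq(w)))[None, :]
    lo, hi = 1.0 / (2 * half_period[1]), 1.0 / (2 * half_period[0])
    mask = (fx >= lo) & (fx <= hi)        # keep fringe band, reject DC/noise
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real

fringes = extract_fringes(np.random.default_rng(3).random((256, 256)))
```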

Fig. 7. Crosstalk compensation at the 890-nm band: (a) raw image and (b) after crosstalk compensation.

4.2. Stereo Matching and 3-D Reconstruction

After the preprocessing, the projected fringe patterns are isolated from the raw image data. In preparation for the pixel matching, the stereo images are rectified using Hartley's method.28 In the stereo rectification, a pair of 2-D projective transformations is determined from the fundamental matrix and a set of control point pairs and applied to the two images in order to align the epipolar lines, while keeping the rectification-induced image distortion minimal for both cameras. After the rectification, the stereo images are registered line by line, as shown in Fig. 8.
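A sketch of such an uncalibrated rectification using OpenCV, whose cv2.stereoRectifyUncalibrated implements a Hartley-style method; the point correspondences pts1 and pts2 are assumed to be available, e.g., from coarse feature matching.

```python
import cv2
import numpy as np

def rectify_pair(img1, img2, pts1, pts2):
    """Hartley-style uncalibrated rectification from point correspondences."""
    # Fundamental matrix with RANSAC outlier rejection.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    in1 = pts1[inlier_mask.ravel() == 1]
    in2 = pts2[inlier_mask.ravel() == 1]
    h, w = img1.shape[:2]
    # Pair of 2-D projective transformations H1, H2 that align the epipolar
    # lines of both images with the image rows.
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(in1, in2, F, (w, h))
    if not ok:
        raise RuntimeError("rectification failed")
    return (cv2.warpPerspective(img1, H1, (w, h)),
            cv2.warpPerspective(img2, H2, (w, h)))
```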

Fig. 8. Stereo rectification.

Furthermore, the depth range of the 3-D imaging system provides a further restriction for the search of corresponding points.29 As shown in Fig. 9, the viewing ray of the first camera containing the 3-D object point P is bounded by Pnear and Pfar, which correspond to the lower and upper boundaries of the depth range. Thus the search area for the corresponding point P(2) of image point P(1) is restricted to an epipolar segment. By applying the rectification transformations, this epipolar segment is mapped to a horizontal line segment in the rectified image of the second camera. In this way, the stereo matching is accelerated, and the risk of gross matching errors, which can occur with a small number of statistical patterns, is reduced.
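For a rectified setup, the depth bounds translate directly into a disparity search interval via d = f·B/Z. The following sketch computes this interval; the focal length in pixels and the depth bounds are rough assumptions based on the setup of Sec. 3.

```python
def disparity_search_range(f_px, baseline_mm, z_near_mm, z_far_mm):
    """Disparity interval implied by the depth bounds (d = f * B / Z)."""
    d_max = f_px * baseline_mm / z_near_mm   # nearest point, largest disparity
    d_min = f_px * baseline_mm / z_far_mm    # farthest point, smallest one
    return d_min, d_max

# Rough numbers in the spirit of Sec. 3: 480-mm baseline, working distance
# around 1.5 m with a 300-mm depth range; f_px is a hypothetical value.
d_min, d_max = disparity_search_range(f_px=2500.0, baseline_mm=480.0,
                                      z_near_mm=1350.0, z_far_mm=1650.0)
```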

Fig. 9. Illustration of the depth-range-based search area restriction.

The stereo pixel matching is realized by calculating the normalized cross correlation between the sequence of eight spectral values I1(1), …, I8(1) at each pixel of the first camera and the spectral value sequences I1(2), …, I8(2) at all pixels of the corresponding line segment in the second camera. The matched points should exhibit the highest correlation coefficient ρ:

Eq. (7)

$$\rho = \frac{\sum_{\lambda=1}^{8} \left[ I_\lambda^{(1)} - \bar{I}^{(1)} \right] \left[ I_\lambda^{(2)} - \bar{I}^{(2)} \right]}{\sqrt{\sum_{\lambda=1}^{8} \left[ I_\lambda^{(1)} - \bar{I}^{(1)} \right]^2} \, \sqrt{\sum_{\lambda=1}^{8} \left[ I_\lambda^{(2)} - \bar{I}^{(2)} \right]^2}}, \quad \text{with} \quad \bar{I}^{(k)} = \frac{1}{8} \sum_{\lambda=1}^{8} I_\lambda^{(k)}, \quad k \in \{1, 2\}.$$

Moreover, subpixel accuracy of up to 1/10 pixel is achieved in the pixel matching by linear interpolation of the spectral values along the same row in all spectral subimages. The difference between the x-coordinates of a corresponding point pair is the so-called disparity value. The disparity map is denoised by median filtering and completed by filling small gaps by extrapolation. Finally, the disparity values are converted into homogeneous 3-D points via a mapping matrix20 that can be calculated from the calibration data of the stereo vision system.
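The following sketch illustrates the matching of Eq. (7) for a single pixel: the eight-element spectral vector of the first image is correlated against all candidates on the restricted row segment of the second image, and the correlation peak is refined to subpixel precision. The peak refinement here uses a simple parabolic fit as a stand-in for the linear interpolation of spectral values described above; all image data are synthetic.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two spectral vectors, Eq. (7)."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_pixel(cube1, cube2, row, col, d_min, d_max):
    """Disparity of one pixel by correlating along the restricted segment."""
    ref = cube1[:, row, col]
    cols = np.arange(max(col - int(d_max), 0), col - int(d_min) + 1)
    scores = np.array([ncc(ref, cube2[:, row, c]) for c in cols])
    i = int(scores.argmax())
    # Parabolic refinement of the correlation peak (stand-in for the
    # linear spectral interpolation described in the text).
    if 0 < i < len(cols) - 1:
        denom = scores[i - 1] - 2.0 * scores[i] + scores[i + 1]
        offset = 0.5 * (scores[i - 1] - scores[i + 1]) / denom if denom else 0.0
    else:
        offset = 0.0
    return col - (cols[0] + i + offset)        # disparity in pixels

# Synthetic 8-band rectified stacks with a known 7-pixel disparity.
cube1 = np.random.default_rng(4).random((8, 64, 128))
cube2 = np.roll(cube1, -7, axis=2)
print(match_pixel(cube1, cube2, row=32, col=64, d_min=0.0, d_max=20.0))
```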

5. 3-D Measurement Results and Discussion

For the characterization of the system accuracy, we measured a prism and a sphere with opaque, diffuse surfaces [see Fig. 10(a)]; Fig. 10(b) shows the 3-D measurement results. The obtained 3-D point clouds exhibit a high measurement completeness but show some small periodic artifacts, possibly a consequence of residual cross-channel spectral crosstalk. For the prism, the plane-fitting standard deviations of the faces F1 to F5 were calculated, resulting in a mean standard deviation of 0.284 mm; the measurement of the sphere yields a sphere-fitting standard deviation of 0.337 mm. Additionally, Fig. 11 shows the qualitative measurement result for a free-form female figure. These experimental results indicate that the quality of the spectral-coded aperiodic fringe patterns is sufficient for the cross-correlation-based stereo matching and 3-D reconstruction, and that the proposed approach achieves reasonable 3-D accuracy and robustness on nontextured diffuse objects under a wide-baseline camera configuration.

Fig. 10. Sample measurement: (a) photos of the test prism and sphere and (b) false-color representation of the 3-D results.

Fig. 11. 3-D measurement of a female figure: (a) object and (b) false-color representation of the 3-D result.

The main current limitation of this system is the reduced spatial resolution of the multispectral snapshot cameras in each band. The reduced camera resolution lowers the achievable disparity resolution and thereby the depth resolution of the reconstructed 3-D image. Thus an improved demosaicing algorithm, such as that in Ref. 30, is needed to enhance the depth accuracy. Further possible approaches for a high-quality restoration of spatially downsampled images may be the adaptation of compressive sensing methods.31–33 However, these require customized MSFAs with spatially pseudorandomized filter arrangements or synthetic coded apertures, which demand more elaborate fabrication techniques and lead to higher costs.

In addition, the fixed-pattern noise, especially the photoresponse nonuniformity in the individual spectral bands of the image sensors, can also cause minor artifacts in the stereo matching and thus in the 3-D measurement results. To address this problem, an in-depth sensor characterization in compliance with the EMVA 1288 standard34 will be needed as a basis for a pixel-wise correction of the spatial nonuniformity.

Moreover, we observed that the spectral crosstalk in the 850-, 890-, and 930-nm bands of the multispectral cameras is markedly stronger. The achievable signal-to-noise ratio of these bands is lower due to the limited saturation capacity of the sensor. This results in artifacts during the pixel matching process and degrades the matching precision. Hence, progress in multispectral sensor technology with respect to the spectral characteristics is another crucial prerequisite for further development of the proposed approach.

6. Summary

In this work, we presented the principle and design of an optical 3-D sensor based on multispectral snapshot cameras and multiwavelength pattern projection. Its benefit is the realization of single-frame 3-D imaging with high spatial resolution, high depth accuracy, and high measurement robustness even on nontextured objects. With this sensor principle, the 3-D frame rate corresponds directly to the 2-D frame rate of the multispectral cameras and can be raised significantly by using high-speed image sensors. The first measurement results obtained with the prototype setup show the validity and potential of the proposed approach. Future research will focus on improving the image processing algorithms, especially the demosaicing and the fixed-pattern noise correction, as well as the multispectral camera technology with regard to the signal-to-noise ratio and the spectral crosstalk between the individual bands.

Acknowledgments

This work has been sponsored by the German Federal Ministry of Education and Research in the program Zwanzig20—Partnership for Innovation as part of the research alliance 3Dsensation (Project No. 03ZZ0462). The authors sincerely thank SILIOS Technologies for providing the spectral calibration data of the multispectral snapshot cameras.

References

1. S. Forstmann et al., "Real-time stereo by using dynamic programming," in IEEE CVPR Workshop, 29–36 (2004). https://doi.org/10.1109/CVPR.2004.428
2. N. Sabater, J.-M. Morel, and A. Almansa, "How accurate can block matches be in stereo vision?," SIAM J. Imaging Sci., 4, 472–500 (2011). https://doi.org/10.1137/100797849
3. S. Foix, G. Alenya, and C. Torras, "Lock-in time-of-flight (ToF) cameras: a survey," IEEE Sens. J., 11(9), 1917–1926 (2011). https://doi.org/10.1109/JSEN.2010.2101060
4. Y. He et al., "Depth errors analysis and correction for time-of-flight (ToF) cameras," Sensors, 17(1), 92 (2017). https://doi.org/10.3390/s17010092
5. P. Llull et al., "Image translation for single-shot focal tomography," Optica, 2(9), 822–825 (2015). https://doi.org/10.1364/OPTICA.2.000822
6. M. Takeda and K. Mutoh, "Fourier transform profilometry for the automatic measurement of 3-D object shapes," Appl. Opt., 22(24), 3977–3982 (1983). https://doi.org/10.1364/AO.22.003977
7. C. Guan, L. G. Hassebrook, and D. L. Lau, "Composite structured light pattern for three-dimensional video," Opt. Express, 11(5), 406–417 (2003). https://doi.org/10.1364/OE.11.000406
8. P. Cao et al., "3D shape measurement based on projection of triangular patterns of two selected frequencies," Opt. Express, 22(23), 29234–29248 (2014). https://doi.org/10.1364/OE.22.029234
9. Z. Zhang et al., "Absolute phase calculation from one composite RGB fringe pattern image by windowed Fourier transform algorithm," Proc. SPIE, 7855, 78550I (2010). https://doi.org/10.1117/12.868767
10. Z. Wang et al., "Absolute phase calculation from one composite RGB fringe pattern image by wavelet transform algorithm," Proc. SPIE, 8200, 82000H (2011). https://doi.org/10.1117/12.904843
11. Y. Wang et al., "A novel color encoding fringe projection profilometry based on wavelet ridge technology and phase-crossing," Int. J. Performability Eng., 14(5), 917–926 (2018). https://doi.org/10.23940/ijpe.18.05.p10.917926
12. M. Schaffer et al., "High-speed three-dimensional shape measurements of objects with laser speckles and acousto-optical deflection," Opt. Lett., 36(16), 3097–3099 (2011). https://doi.org/10.1364/OL.36.003097
13. S. Heist et al., "Array projection of aperiodic sinusoidal fringes for high-speed three-dimensional shape measurement," Opt. Eng., 53(11), 112208 (2014). https://doi.org/10.1117/1.OE.53.11.112208
14. A. Brahm et al., "Fast 3D NIR systems for facial measurement and lip-reading," Proc. SPIE, 10220, 102200P (2017). https://doi.org/10.1117/12.2263283
15. S. Heist et al., "High-speed three-dimensional shape measurement using GOBO projection," Opt. Lasers Eng., 87, 90–96 (2016). https://doi.org/10.1016/j.optlaseng.2016.02.017
16. J.-S. Hyun, G. T.-C. Chiu, and S. Zhang, "High-speed and high-accuracy 3D surface measurement using a mechanical projector," Opt. Express, 26(2), 1474–1487 (2018). https://doi.org/10.1364/OE.26.001474
17. C. Zhang et al., "A novel 3D multispectral vision system based on filter wheel cameras," in IEEE Int. Conf. Imaging Systems and Techniques (IST), 267–272 (2016). https://doi.org/10.1109/IST.2016.7738235
18. B. Geelen, N. Tack, and A. Lambrechts, "A compact snapshot multispectral imager with a monolithically integrated per-pixel filter mosaic," Proc. SPIE, 8974, 89740L (2014). https://doi.org/10.1117/12.2037607
19. P.-J. Lapray et al., "Multispectral filter arrays: recent advances and practical implementation," Sensors, 14(11), 21626–21659 (2014). https://doi.org/10.3390/s141121626
20. S. Heist et al., "5D hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light," Opt. Express, 26(18), 23366–23379 (2018). https://doi.org/10.1364/OE.26.023366
21. A. Lambrechts et al., "A CMOS-compatible, integrated approach to hyper- and multispectral imaging," in IEEE Int. Electron Devices Meeting, 10.5.1–10.5.4 (2014). https://doi.org/10.1109/IEDM.2014.7047025
22. SILIOS Technologies, "Micro-optics supplier," (2018), http://www.silios.com/ (accessed September 2018).
23. S. Heist et al., "Theoretical considerations on aperiodic sinusoidal fringes in comparison to phase-shifted sinusoidal fringes for high-speed three-dimensional shape measurement," Appl. Opt., 54(35), 10541–10551 (2015). https://doi.org/10.1364/AO.54.010541
24. S. Heist et al., "Experimental comparison of aperiodic sinusoidal fringes and phase-shifted sinusoidal fringes for high-speed three-dimensional shape measurement," Opt. Eng., 55(2), 024105 (2016). https://doi.org/10.1117/1.OE.55.2.024105
25. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., 22(11), 1330–1334 (2000). https://doi.org/10.1109/34.888718
26. V. Sauget et al., "Application note for CMS camera and CMS sensor users: post-processing method for crosstalk reduction in multispectral data and images," (2018), http://www.silios.com/ (accessed September 2018).
27. H. Guo and P. S. Huang, "3-D shape measurement by use of a modified Fourier transform method," Proc. SPIE, 7066, 70660E (2008). https://doi.org/10.1117/12.798170
28. R. I. Hartley, "Theory and practice of projective rectification," Int. J. Comput. Vision, 35(2), 115–127 (1999). https://doi.org/10.1023/A:1008115206617
29. C. Bräuer-Burchardt et al., "Fringe projection based high-speed 3D sensor for real-time measurements," Proc. SPIE, 8082, 808212 (2011). https://doi.org/10.1117/12.889459
30. A. V. Kanaev et al., "Imaging with multi-spectral mosaic-array cameras," Appl. Opt., 54(31), F149–F157 (2015). https://doi.org/10.1364/AO.54.00F149
31. C. V. Correa, H. Arguello, and G. R. Arce, "Snapshot colored compressive spectral imager," J. Opt. Soc. Am. A, 32(10), 1754–1763 (2015). https://doi.org/10.1364/JOSAA.32.001754
32. T.-H. Tsai et al., "Spectral-temporal compressive imaging," Opt. Lett., 40(17), 4054–4057 (2015). https://doi.org/10.1364/OL.40.004054
33. V. Farber et al., "Compressive 4D spectro-volumetric imaging," Opt. Lett., 41(22), 5174–5177 (2016). https://doi.org/10.1364/OL.41.005174
34. European Machine Vision Association, "EMVA Standard 1288—Standard for characterization of image sensors and cameras," (2016).

Biography

Chen Zhang received his BS and MS degrees in mechanical engineering from Ilmenau University of Technology in 2013 and 2014, respectively. Since 2015, he has been a PhD student in the Group for Quality Assurance and Industrial Image Processing at the Ilmenau University of Technology. He is working on the development and improvement of multimodal (3-D and multispectral) imaging systems.

Anika Brahm received her BEng and MEng degrees in laser and optotechnologies from the University of Applied Sciences, Jena, Germany, in 2006 and 2009, respectively, and her PhD from Friedrich Schiller University, Jena, Germany, in 2015. In 2009, she joined Fraunhofer Institute for Applied Optics and Precision Engineering (IOF), Jena, Germany, and worked in the field of terahertz technologies. Now, she is working as a scientist in the 3-D Metrology Group of the Optical Systems Department at the Fraunhofer IOF with focus on optical metrology, optical system design, and development of 3-D measurement systems.

Andreas Breitbarth received his diploma and PhD degrees in computer science from Friedrich Schiller University, Jena, Germany, in 2008 and 2015, respectively. From 2009 to 2015, he was with the Optical Systems Department at the Fraunhofer IOF, Jena, Germany. Since 2015, he has been a member of the Group for Quality Assurance and Industrial Image Processing at the Ilmenau University of Technology. He works on the development and improvement of three-dimensional imaging systems.

Maik Rosenberger studied mechanical engineering with a specialization in measurement and sensor technology at the Ilmenau University of Technology. From 2005, he worked as a project engineer in several subprojects at the Steinbeis Transferzentrum in Ilmenau. After his graduation, he became head of the research group QualiMess. Currently, he works on several projects concerning image processing and measurement technologies.

Gunther Notni received his diploma and PhD degrees in physics from Friedrich Schiller University, Jena, Germany, in 1988 and 1992, respectively. Since 1992, he has been a staff member at the Fraunhofer IOF, where he is the head of the Optical Systems Department. His research interests include 3-D shape measurement, surface characterization, interferometry, THz imaging, and phase conjugation. Since October 2014, he has also been a professor at the Ilmenau University of Technology, where he heads the Group for Quality Assurance and Industrial Image Processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Chen Zhang, Anika Brahm, Andreas Breitbarth, Maik Rosenberger, and Gunther Notni "Single-frame three-dimensional imaging using spectral-coded patterns and multispectral snapshot cameras," Optical Engineering 57(12), 123105 (31 December 2018). https://doi.org/10.1117/1.OE.57.12.123105
Received: 24 September 2018; Accepted: 7 December 2018; Published: 31 December 2018
Keywords: cameras, 3D image processing, 3D image reconstruction, stereoscopy, image sensors, fringe analysis, projection systems