This paper introduces a snapshot spectral volumetric imaging approach based on light field image slicing and encoding. The light field information is sliced and encoded, spectrally dispersed, and then captured as aliased data through an array reimaging lens; a four-dimensional data hypercube, containing the scene's three-dimensional spatial information and one-dimensional spectral information, is then reconstructed using deep learning-based algorithms. The approach uses the Snapshot Compressed Imaging Mapping Spectrometer (SCIMS) principle for initial light field spectral data acquisition. Reconstruction employs both classical algorithms, such as the Alternating Direction Method of Multipliers (ADMM) and Generalized Alternating Projection (GAP), and deep learning methods such as LRSDN and PnP-DIP. Simulation experiments reveal that classical compressive sensing-based spectral reconstruction algorithms perform poorly, particularly degrading digital refocusing of individual spectral bands in the light field images. In contrast, the deep learning algorithms show significant improvements, effectively extracting and preserving the spatial distribution characteristics of the light field data and thus robustly recovering the light field information. These results validate the effectiveness of the proposed spectral volumetric imaging approach and the deep learning-based reconstruction methods. In future research, we will refine the mathematical model, integrate the spatial and spectral correlations of light field imaging, and develop specialized deep neural network algorithms to further improve the reconstruction of light field spectral data.
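As a rough illustration of the classical reconstruction baseline named above, the sketch below shows a generic GAP-style iteration for a compressive measurement y = Φx: a Euclidean projection onto the measurement-consistent set alternated with a sparsity-promoting step. This is a minimal toy example with illustrative names, not the paper's actual SCIMS sensing model (which involves slicing, dispersion, and array reimaging); in PnP variants, the thresholding step would be replaced by a learned denoiser.

```python
import numpy as np

# Minimal GAP-style sketch for y = Phi @ x (illustrative names/sizes).
rng = np.random.default_rng(0)
n, m = 64, 32                                   # signal length, measurements
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0   # sparse ground truth
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x_true

PhiPhiT_inv = np.linalg.inv(Phi @ Phi.T)        # for the projection step
x = np.zeros(n)
for _ in range(100):
    # Project onto the measurement-consistent set {x : Phi x = y}
    x = x + Phi.T @ (PhiPhiT_inv @ (y - Phi @ x))
    # Sparsity prior via soft-thresholding (a denoiser in PnP variants)
    x = np.sign(x) * np.maximum(np.abs(x) - 0.01, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(round(rel_err, 3))
```

The projection step uses (ΦΦᵀ)⁻¹ once per iteration, which is cheap here because m is small; real snapshot spectral systems exploit structured Φ so this inverse is diagonal or avoidable.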
KEYWORDS: 3D modeling, Hyperspectral imaging, RGB color model, Volume rendering, Reflection, Data modeling, Education and training, Cameras, Neural networks, 3D image processing
This paper presents a Neural Radiance Fields (NeRF)-based method for the 3D reconstruction of hyperspectral images, aiming to improve reconstruction quality and broaden its application areas. Hyperspectral imaging technology provides rich optical information across many spectral bands, far exceeding traditional RGB images, and can reveal more of an object's material properties. Hyperspectral images also capture more texture, detailed structure, and high-frequency information, making them advantageous for 3D reconstruction. The proposed method learns an object's 3D spatial density distribution and spectral information through a neural network, achieving high-quality 3D image generation from any viewpoint. The NeRF-based hyperspectral 3D reconstruction method has broad application prospects in fields such as remote sensing, cultural heritage preservation, and agricultural monitoring. By fully exploiting hyperspectral data, the NeRF model can generate 3D images with richer details and more realistic target objects, expanding the potential of hyperspectral imaging in 3D reconstruction. Future research can further optimize NeRF algorithms and models to better leverage hyperspectral information, improving the efficiency and accuracy of 3D reconstruction data processing, meeting more application needs, and promoting the development and application of hyperspectral 3D reconstruction technology.
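To make the "density plus spectral information" idea concrete, the sketch below shows standard NeRF volume rendering along one ray, with the usual 3-channel RGB radiance generalized to B spectral bands. This is a hedged toy example with illustrative names and random inputs, not the paper's implementation: in a full system the per-sample density `sigma` and band radiances would come from a trained MLP queried at sample positions.

```python
import numpy as np

def render_ray(sigma, radiance, deltas):
    """Composite N samples along a ray into one B-band spectrum.

    sigma: (N,) volume densities; radiance: (N, B) per-band radiances
    in [0, 1]; deltas: (N,) spacings between consecutive samples.
    """
    alpha = 1.0 - np.exp(-sigma * deltas)            # per-sample opacity
    # Transmittance T_i = prod_{j<i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha))[:-1])
    weights = trans * alpha                          # compositing weights
    return weights @ radiance                        # rendered (B,) spectrum

# Illustrative random inputs: 16 samples per ray, 31 spectral bands
rng = np.random.default_rng(1)
N, B = 16, 31
sigma = rng.uniform(0.0, 2.0, N)
radiance = rng.uniform(0.0, 1.0, (N, B))
deltas = np.full(N, 0.1)
spectrum = render_ray(sigma, radiance, deltas)
print(spectrum.shape)
```

Because the compositing weights sum to at most one, each band of the rendered spectrum stays within the radiance range, so the same optimization loop used for RGB NeRF applies per band without modification.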