KEYWORDS: Non line of sight propagation, Image restoration, Optical flow, Cameras, Single photon avalanche diodes, Relays, Reflection, Sensors, Education and training, Data hiding
Non-line-of-sight (NLOS) imaging captures scenes that are not directly visible to the camera by using transient sensors to collect time-resolved signals from which hidden scenes are reconstructed. Existing neural implicit NLOS methods represent surfaces with surface normals alone rather than depth, which limits how accurately the surface details of reconstructed objects can be refined. To address this, we propose a neural implicit learning approach that incorporates depth information into the optimization. Depth is extracted by fusing albedo maps from different viewpoints, which are generated from transient images and optical flow; this fused data improves the accuracy and quality of the reconstruction. We further introduce a depth loss that encourages smoother object surfaces while constraining the signed distance function (SDF) regression, so the reconstructed surfaces are both smooth and accurately defined. Experiments on synthetic and real datasets show that our method consistently delivers high-quality reconstructions of hidden objects and outperforms existing techniques in precision and detail.
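The abstract gives no formulas, but a depth loss combined with an SDF constraint is commonly realized as a depth-matching term plus an eikonal regularizer. A minimal PyTorch sketch under that assumption, where `sdf_net`, `depth_pred`, and `depth_fused` are hypothetical placeholders for the implicit network, the rendered depth, and the depth fused from multi-view albedo maps:

```python
import torch

def depth_sdf_losses(sdf_net, surface_pts, depth_pred, depth_fused, lambda_eik=0.1):
    """Illustrative sketch: depth supervision plus an eikonal SDF regularizer."""
    # Depth term: rendered depth should match the depth fused from multi-view albedo maps.
    depth_loss = torch.nn.functional.l1_loss(depth_pred, depth_fused)

    # Eikonal term: the SDF gradient should have unit norm, keeping the regression well behaved.
    pts = surface_pts.clone().requires_grad_(True)
    sdf_vals = sdf_net(pts)
    grads = torch.autograd.grad(sdf_vals.sum(), pts, create_graph=True)[0]
    eikonal_loss = ((grads.norm(dim=-1) - 1.0) ** 2).mean()

    return depth_loss + lambda_eik * eikonal_loss
```

The weighting `lambda_eik` and the sampling of `surface_pts` are assumptions; the paper's actual loss design may differ.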
Image dehazing is an active topic in image processing and computer vision; it aims to recover the details and texture of the original scene from foggy images and produce clear, fog-free results. Most existing methods are suited to scenes with light fog: as the fog concentration increases, their reconstruction quality drops significantly, with loss of detail and distortion. In addition, most existing algorithms require large foggy-image datasets and long training times, which limits their practicality. To address these issues, this paper proposes an image dehazing model based on a small-sample multi-attention mechanism and multi-frequency branch fusion (MFBF-Net). The model effectively extracts high-frequency and low-frequency detail from the image and reconstructs the real scene as faithfully as possible. Experimental results show that the proposed model achieves good dehazing performance on small-sample datasets and performs well across fog of different concentrations.
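The abstract does not specify how the high- and low-frequency branches are separated; one common choice, assumed here purely for illustration, is a Gaussian decomposition in which the blurred image forms the low-frequency branch and the residual forms the high-frequency branch. A minimal PyTorch sketch of that split (each branch would then feed its own attention sub-network before fusion):

```python
import torch
import torch.nn.functional as F

def split_frequencies(img, kernel_size=11, sigma=3.0):
    """Split a batch of images (B, C, H, W) into low- and high-frequency parts.

    The low-frequency branch is a Gaussian blur; the high-frequency branch is
    the residual, which carries edges and fine texture.
    """
    coords = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    gauss_1d = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    gauss_1d = gauss_1d / gauss_1d.sum()
    kernel = (gauss_1d[:, None] * gauss_1d[None, :]).view(1, 1, kernel_size, kernel_size)
    kernel = kernel.repeat(img.shape[1], 1, 1, 1).to(img)

    low = F.conv2d(img, kernel, padding=kernel_size // 2, groups=img.shape[1])
    high = img - low
    return low, high
```

The kernel size and sigma are illustrative values, not parameters reported by the paper.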
Non-line-of-sight (NLOS) imaging through fog has been extensively studied in optics and computer vision. However, strong backscattering and diffuse reflection from dense fog disrupt the temporal-spatial correlations of photons returning from the target, so the reconstruction quality of most existing methods drops significantly under dense fog. In this study, we formulate the optical imaging process in a foggy environment and propose a hybrid intelligent enhancement perception (HIEP) system based on Time-of-Flight (ToF) methods and a physics-driven Swin transformer (ToFormer) to eliminate scattering effects and reconstruct targets under heterogeneous fog with varying optical thickness. We also assembled a prototype of the HIEP system and built the Active Non-Line-of-Sight Imaging Through Dense Fog (NLOSTDF) dataset to train the reconstruction network. Experimental results show that even in short-distance dense-fog scenarios, with optical thickness up to 2.5 and imaging distances under 6 meters, our approach produces clear images of the target scene and surpasses existing optical and computer-vision methods.
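For context on the reported optical thickness of 2.5, the Beer-Lambert law relates optical thickness to the fraction of unscattered (ballistic) photons; a small illustrative sketch, where the extinction coefficient is a hypothetical value chosen so that the optical thickness reaches 2.5 over a 6 m path:

```python
import math

def ballistic_fraction(extinction_coeff_per_m, path_length_m):
    """Beer-Lambert law: optical thickness tau = extinction coefficient * path length;
    the unscattered (ballistic) photon fraction decays as exp(-tau)."""
    tau = extinction_coeff_per_m * path_length_m
    return tau, math.exp(-tau)

# At roughly the reported conditions (tau up to 2.5 over a path under 6 m),
# only exp(-2.5) ~ 8% of photons arrive unscattered, which is why
# backscattering dominates and purely optical methods degrade.
tau, fraction = ballistic_fraction(extinction_coeff_per_m=2.5 / 6.0, path_length_m=6.0)
print(f"tau = {tau:.2f}, ballistic fraction = {fraction:.3f}")
```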