Poor lighting conditions in the real world can produce ill-exposed images that suffer from degraded aesthetic quality and information loss for post-processing. Recent exposure correction works address this problem by learning a mapping from images of multiple exposure intensities to well-exposed images. However, this approach requires a large amount of paired training data, which is difficult to obtain in scenarios where such data are inaccessible. This paper presents a highly robust exposure correction method based on self-supervised learning. Specifically, two sub-networks are designed to handle the under- and over-exposed regions of ill-exposed images, respectively. This hybrid architecture enables adaptive ill-exposure correction. A fusion module then combines the under-exposure-corrected image and the over-exposure-corrected image into a well-exposed image with vivid color and clear textures. Notably, the training process is guided by histogram-equalized images through a histogram equalization prior (HEP), so the presented method requires only ill-exposed images as training data. Extensive experiments on real-world image datasets validate the robustness and superiority of this technique.
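To make the pipeline concrete, the sketch below illustrates the two-branch correction plus fusion idea with a histogram-equalization-guided loss, in PyTorch. It is a minimal illustration only: the module names, layer sizes, fusion weighting, and the L1 loss against the histogram-equalized input are assumptions for exposition, not the authors' actual architecture or training objective.

```python
# Minimal sketch (assumed architecture, not the paper's code): two correction
# branches for under-/over-exposure, a fusion module, and a self-supervised
# loss against the histogram-equalized input (histogram equalization prior).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CorrectionBranch(nn.Module):
    """Small convolutional branch that predicts a corrected image for one
    exposure failure mode (under- or over-exposure)."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


class FusionModule(nn.Module):
    """Predicts per-pixel weights to blend the two corrected images."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, under_fixed, over_fixed):
        w = self.net(torch.cat([under_fixed, over_fixed], dim=1))
        return w * under_fixed + (1.0 - w) * over_fixed


def histogram_equalize(img):
    """Per-image, per-channel histogram equalization of a batch in [0, 1],
    used here as the self-supervised training target."""
    out = torch.empty_like(img)
    for b in range(img.shape[0]):
        for c in range(img.shape[1]):
            chan = img[b, c]
            hist = torch.histc(chan, bins=256, min=0.0, max=1.0)
            cdf = torch.cumsum(hist, dim=0)
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-8)
            idx = (chan * 255).long().clamp(0, 255)
            out[b, c] = cdf[idx]
    return out


# One training step: only ill-exposed inputs are needed; the target is the
# histogram-equalized version of the input itself.
under_net, over_net, fusion = CorrectionBranch(), CorrectionBranch(), FusionModule()
params = (list(under_net.parameters()) + list(over_net.parameters())
          + list(fusion.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

ill_exposed = torch.rand(2, 3, 64, 64)  # stand-in for a batch of ill-exposed images
corrected = fusion(under_net(ill_exposed), over_net(ill_exposed))
loss = F.l1_loss(corrected, histogram_equalize(ill_exposed))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The key property the sketch illustrates is that no well-exposed ground truth appears anywhere in the loss; the histogram-equalized input stands in as the supervision signal.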