Underwater images commonly suffer from color cast, reduced contrast, and blurring, which significantly hinder their use in downstream visual tasks. To address these quality issues, we present an underwater image restoration method guided by a physics model, integrating a generative adversarial network with a Unified Transformer framework. The network consists of three parts: a physical model parameter estimation network (Par-subnet), an encoder–decoder, and discriminators. First, the parameter estimation network computes the parameters required by the physical model, yielding a preliminarily restored underwater image through inversion. We enhance the encoder–decoder by incorporating the Unified Transformer and a Reinforced Swin-Convs Transformer, improving its capacity to capture both global and local features. In parallel, a degradation quantization module assesses the degraded regions of the underwater image, further improving the restoration performance of the encoder–decoder. Given the limited availability of real underwater reference images, we employ a reliable bank mechanism that selects the best-quality samples from the model outputs and the reference images for training the discriminators, enabling a transition from supervised to semi-supervised learning. Comparative experiments on multiple benchmark datasets show that our method outperforms existing methods on both quantitative and qualitative metrics, verifying its reliability and efficacy.
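The inversion step the abstract describes is typically based on the simplified underwater image formation model, I(x) = J(x)·t(x) + B·(1 − t(x)), where t is the transmission map and B the background (veiling) light estimated by the Par-subnet. As a minimal sketch (the function name and the use of NumPy are our own assumptions, not the paper's implementation), the coarse restoration can be obtained by solving for J:

```python
import numpy as np

def invert_formation_model(observed, transmission, background, eps=1e-3):
    """Invert the simplified formation model I = J*t + B*(1 - t)
    to recover a coarse scene radiance J (illustrative sketch only).

    observed:     H x W x 3 image with values in [0, 1]
    transmission: H x W per-pixel transmission map t in (0, 1]
    background:   length-3 background (veiling) light B
    """
    # Clamp t away from zero to keep the division stable.
    t = np.clip(transmission, eps, 1.0)[..., None]
    restored = (observed - background * (1.0 - t)) / t
    # Restored radiance is clipped back to the valid intensity range.
    return np.clip(restored, 0.0, 1.0)
```

In the described pipeline this coarse result would then be refined by the Transformer-based encoder–decoder rather than used directly as the final output.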