# PINE: Physics-Informed Neural Enhancement for Underwater Video via Implicit Representation
**A physics-informed implicit neural framework for underwater video enhancement**
[🌐 Project Page](https://jinxinshao.github.io/PINE/)
[📄 Paper (Google Drive)](https://drive.google.com/file/d/1o4hoxZ1zBW1VEYeTqmAIkhd5wECrUkYV/view?usp=drive_link)
[📜 License: MIT](LICENSE)

**[Jinxin Shao](https://jinxinshao.github.io/)**
*Submitted 2025*
## 🌊 Overview
PINE (Physics-Informed Neural Enhancement) is a novel framework for underwater video enhancement that leverages implicit neural representations guided by physical underwater imaging models. Our method addresses the fundamental challenges of underwater video restoration by incorporating:
- 🌈 Wavelength-dependent attenuation modeling
- 💨 Backscatter compensation
- ⏱️ Temporal consistency across frames
- 🧠 Unified neural architecture
Traditional underwater enhancement methods often struggle with color distortion, low contrast, and temporal inconsistencies. PINE overcomes these limitations by embedding physical priors into implicit neural representations, achieving state-of-the-art performance with fewer parameters.
## ✨ Key Features
Our approach integrates the underwater image formation model directly into the neural architecture:
- Models wavelength-dependent light attenuation
- Accounts for depth-varying backscatter
- Preserves physical consistency across the color spectrum
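The physics described above follows the standard underwater image formation model, in which each color channel attenuates at its own rate and backscatter grows with depth. The NumPy sketch below is an illustration only, with made-up attenuation coefficients and veiling-light values, not the coefficients used by PINE; it shows how the forward model and its inversion relate:

```python
import numpy as np

# Illustrative per-metre attenuation coefficients (R, G, B); red light
# attenuates fastest in water. These values are assumptions, not PINE's.
BETA = np.array([0.60, 0.25, 0.10])
B_INF = np.array([0.05, 0.35, 0.45])  # assumed veiling light (backscatter at infinity)

def degrade(J, depth):
    """Forward model: I = J * t + B_inf * (1 - t), with t = exp(-beta * depth)."""
    t = np.exp(-BETA * depth[..., None])   # per-channel transmission, shape (..., 3)
    return J * t + B_INF * (1.0 - t)

def restore(I, depth):
    """Invert the same model to recover the scene radiance J."""
    t = np.exp(-BETA * depth[..., None])
    return (I - B_INF * (1.0 - t)) / t

# Round trip on a toy 2x2 RGB image at a uniform 3 m depth
J = np.random.rand(2, 2, 3)
depth = np.full((2, 2), 3.0)
I = degrade(J, depth)
J_hat = restore(I, depth)
```

With known depth and coefficients the inversion is exact; the hard part, which the learned model handles, is that neither is known for real footage.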
### 🎯 Implicit Neural Representation
- Continuous spatial-temporal representation
- Parameter-efficient architecture
- Natural handling of arbitrary resolutions
### 🎬 Temporal Consistency
- Maintains coherent enhancement across video frames
- No optical flow estimation required
- Smooth transitions without flickering artifacts
### ⚡ Efficient Architecture
- Fewer parameters than existing methods
- Real-time processing capability
- Scalable to high-resolution videos
## 🔍 Methodology
*Overview of the PINE framework: physics-informed implicit neural representation for underwater video enhancement.*
Our method consists of three key components:
- Physical Modeling Module: Incorporates the underwater image formation model with wavelength-dependent attenuation
- Implicit Neural Representation: Encodes video frames as continuous functions using coordinate-based networks
- Temporal Consistency Module: Ensures smooth transitions across frames without requiring explicit motion estimation
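To make the second component concrete: a coordinate-based network maps continuous spatial-temporal coordinates (x, y, t) to RGB values, so the same representation can be sampled at any resolution. The toy NumPy sketch below uses random weights and hypothetical layer sizes, not PINE's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(coords, num_freqs=4):
    """Map (x, y, t) coordinates to Fourier features, as coordinate networks do."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    angles = coords[..., None] * freqs                       # (..., 3, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)               # (..., 3 * 2 * num_freqs)

# Tiny randomly initialised MLP: encoded coords -> RGB (sizes are illustrative)
W1 = rng.normal(size=(24, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 3));  b2 = np.zeros(3)

def inr(coords):
    h = np.maximum(positional_encoding(coords) @ W1 + b1, 0.0)   # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))                  # sigmoid -> [0, 1] RGB

# Query the same representation at two different spatial resolutions, same frame t=0.5
lo = np.stack(np.meshgrid(np.linspace(0, 1, 8),  np.linspace(0, 1, 8),  [0.5],
                          indexing="ij"), -1).reshape(-1, 3)
hi = np.stack(np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), [0.5],
                          indexing="ij"), -1).reshape(-1, 3)
```

Because the network is a function of continuous coordinates, "resizing" is just sampling a denser coordinate grid, which is what makes the representation resolution-agnostic.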
## 📊 Results

### Quantitative Comparison

Performance on the UVEB benchmark dataset (↑ indicates higher is better):
### Qualitative Results

*Qualitative comparison: Input (Degraded) vs. LANet (CVPR 2022) vs. PINE (Ours).*
## 🎬 Video Comparisons
For video comparisons and more results, please visit our [Project Page](https://jinxinshao.github.io/PINE/).
## 🚀 Installation
### Prerequisites
### Setup
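Setup commands are not yet published; the sketch below assumes a standard PyTorch workflow, and the repository URL, environment name, Python version, and `requirements.txt` are all placeholders until the official code is released:

```bash
# Hypothetical setup; assumes Python 3.x, PyTorch, and a CUDA-capable GPU
git clone https://github.com/jinxinshao/PINE.git
cd PINE
conda create -n pine python=3.10 -y
conda activate pine
pip install -r requirements.txt
```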
## 💻 Quick Start
### Training
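No training command is published yet; the line below is a hypothetical sketch, with the script name, config path, and flags as placeholders:

```bash
# Hypothetical training command; script name and flags are placeholders
python train.py --config configs/uveb.yaml --data_root /path/to/UVEB
```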
### Testing
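Likewise, evaluation would typically look as follows; the script name, config, and checkpoint path are assumptions:

```bash
# Hypothetical evaluation command; script name and flags are placeholders
python test.py --config configs/uveb.yaml --checkpoint checkpoints/pine_best.pth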
### Inference on Your Own Videos
```bash
# Enhance your underwater video
```
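A plausible command-line interface for this step is sketched below; `inference.py` and its flags are hypothetical until the code is released:

```bash
# Hypothetical inference command; script name and flags are placeholders
python inference.py --input my_video.mp4 --output enhanced.mp4 \
    --checkpoint checkpoints/pine_best.pth
```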
## 📁 Datasets
We evaluate PINE on the following benchmarks:
- **UVEB** (Underwater Video Enhancement Benchmark)
### Data Preparation
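The expected directory layout is not documented yet; a conventional arrangement for paired video enhancement data might look like the following (folder names are assumptions):

```
data/
└── UVEB/
    ├── train/
    │   ├── input/   # degraded underwater clips
    │   └── gt/      # reference clips
    └── test/
        └── input/
```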
## 📖 Citation
If you find this work helpful, please consider citing:
```bibtex
@article{shao2025pine,
  title={PINE: Physics-Informed Neural Enhancement for Underwater Video via Implicit Representation},
  author={Shao, Jinxin},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```
## 🙏 Acknowledgments
This research was supported by [Your Institution/Funding]. We also thank the authors of related open-source projects for their contributions.

**Contact:** Dr. Jinxin Shao. For questions or collaborations, feel free to open an issue or contact me directly.
## 📜 License
This project is licensed under the MIT License - see the LICENSE file for details.
### ⭐ Star History
If you find this project helpful, please consider giving it a star! ⭐
[⭐ Star History Chart](https://star-history.com/#jinxinshao/PINE&Date)
---
**Made with ❤️ by the Underwater Vision Community**