Image Enhancement



Image Enhancement Based on Pigment Representation

[Image: qualitative comparison for tone mapping]
Fig. 1. Comparisons of the (a) 1D LUT-based method, (b) 3D LUT-based method, and (c) the proposed pigment representation-based method. In (c), the input image is first converted into a set of pigments, which are then transformed using pigment reprojection functions. The reprojected pigments are subsequently combined to reconstruct the enhanced image.
[Image: pigment representation framework]
Fig. 2. Overall framework of the proposed pigment representation-based image enhancement method.

We develop deep learning-based image enhancement methods that adaptively improve visual quality across diverse conditions. Our core approach transforms input RGB colors into a high-dimensional pigment representation customized for each image, enabling complex color mappings beyond those achievable in conventional pre-defined color spaces such as RGB or CIE LAB. The pigment-based method consists of five stages: a visual encoder, pigment expansion, pigment reprojection, pigment blending, and RGB reconstruction. In parallel, we explore a deformable control point network (DCPNet) that flexibly parameterizes a global transformation function per color channel, applicable to photo retouching, tone mapping, and underwater image enhancement.
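The five-stage pipeline can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the pigment count `K` and all parameter shapes are assumptions, and simple linear maps stand in for the learned expansion, reprojection, blending, and reconstruction modules (which, in the actual method, are predicted per image by the visual encoder).

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # hypothetical number of pigments per pixel

def pigment_enhance(img, expand_w, reproj_w, blend_w, recon_w):
    """Toy sketch of the pigment pipeline on an (H, W, 3) float image.

    expand_w: (3, K)  pigment expansion (RGB -> K pigments)
    reproj_w: (K,)    per-pigment gains, a stand-in for the learned
                      pigment reprojection functions
    blend_w:  (K, K)  pigment blending (mixing across pigments)
    recon_w:  (K, 3)  RGB reconstruction
    """
    pigments = img @ expand_w        # pigment expansion
    pigments = pigments * reproj_w   # pigment reprojection (toy: gains)
    pigments = pigments @ blend_w    # pigment blending
    out = pigments @ recon_w         # RGB reconstruction
    return np.clip(out, 0.0, 1.0)

img = rng.random((4, 4, 3))
expand_w = rng.standard_normal((3, K)) * 0.1
reproj_w = 1.0 + 0.1 * rng.standard_normal(K)
blend_w = np.eye(K) + 0.05 * rng.standard_normal((K, K))
recon_w = rng.standard_normal((K, 3)) * 0.1

enhanced = pigment_enhance(img, expand_w, reproj_w, blend_w, recon_w)
print(enhanced.shape)  # (4, 4, 3)
```

The key design point the sketch illustrates is that enhancement happens in the K-dimensional pigment space rather than directly on RGB, so the overall RGB-to-RGB mapping can be far richer than a per-channel curve or a fixed-resolution LUT.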

[Image: qualitative comparison for photo retouching]
Fig. 3. Qualitative comparisons on the MIT-Adobe FiveK dataset for photo retouching: (a) shows GT (retouched by expert C) with its corresponding input image. (b), (c), and (d) show the resultant images and their corresponding error maps obtained by 4D LUT, CoTF, and the proposed method, respectively.
TABLE I. Quantitative comparisons on the MIT-Adobe FiveK dataset (480p) for photo retouching. Best results are in bold.

| Method   | PSNR ↑    | SSIM ↑    | ΔEab ↓   | # Params   | Runtime    |
|----------|-----------|-----------|----------|------------|------------|
| UPE      | 21.88     | 0.853     | 10.80    | 927.1K     | –          |
| DPE      | 23.75     | 0.908     | 9.34     | 3.4M       | –          |
| HDRNet   | 24.66     | 0.915     | 8.06     | 483.1K     | –          |
| CSRNet   | 25.17     | 0.921     | 7.75     | **36.4K**  | **0.71ms** |
| DeepLPF  | 24.73     | 0.916     | 7.99     | 1.7M       | 36.69ms    |
| 3D LUT   | 25.29     | 0.920     | 7.55     | 593.5K     | 0.80ms     |
| SepLUT   | 25.47     | 0.921     | 7.54     | 119.8K     | 6.20ms     |
| AdaInt   | 25.49     | 0.926     | 7.47     | 619.7K     | 1.89ms     |
| RSFNet   | 25.49     | 0.924     | 7.23     | 16.1M      | 7.28ms     |
| 4D LUT   | 25.50     | 0.931     | 7.27     | 924.4K     | 1.25ms     |
| HashLUT  | 25.50     | 0.926     | 7.46     | 114.0K     | –          |
| CoTF     | 25.54     | 0.938     | 7.46     | 310.0K     | 4.28ms     |
| Proposed | **25.82** | **0.939** | **7.15** | 765.0K     | 1.43ms     |
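Of the table's fidelity metrics, PSNR is the simplest to reproduce for any reference/result pair (ΔEab is the Euclidean distance between the two images in CIELAB space, which additionally requires an RGB-to-Lab conversion). Below is the standard PSNR definition, not code from the paper; the image values and `peak` are illustrative.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for float images in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((2, 2, 3))
out = np.full((2, 2, 3), 0.1)   # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, out), 2))  # 20.0
```

Higher PSNR and SSIM indicate better fidelity to the expert-retouched ground truth, while lower ΔEab indicates smaller perceptual color error, which is why the arrows in the table header point in different directions.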

Publications

  • Se-Ho Lee, Keunsoo Koh, and Seung-Wook Kim, “Image enhancement based on pigment representation,” IEEE Transactions on Multimedia, 2026. [DOI] [Project]
  • Se-Ho Lee and Seung-Wook Kim, “DCPNet: Deformable control point network for image enhancement,” Journal of Visual Communication and Image Representation, vol. 104, Art. no. 104308, Oct. 2024. [DOI]