MuLUT: Cooperating Multiple Look-Up Tables for Efficient Image Super-Resolution

1University of Science and Technology of China,

2Huawei Noah’s Ark Lab

*Equal contribution #Corresponding author

In ECCV 2022

TL;DR

A neural network with a restricted receptive field (RF) can be converted into a LUT, yielding a learnable and efficient solution for image super-resolution that avoids heavy computation on edge devices.

We propose cooperating multiple LUTs (MuLUT) to overcome the intrinsic limitation of a single LUT (i.e., its restricted RF).

Our method significantly outperforms the single-LUT solution, while preserving its efficiency.

Understanding MuLUT

Why MuLUT: SR-LUT and Its Limitation

An SR-LUT is obtained and deployed as follows.

1. Training an SR network on a paired LR-HR dataset.

2. Caching the SR network by traversing all possible LR inputs and saving the corresponding HR results, obtaining a list of index-value pairs, i.e., a LUT.

3. Retrieving values from the LUT by querying with the given LR inputs.
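The three steps above can be sketched in a few lines. Note that `toy_sr_net` below is a hypothetical stand-in: the real SR-LUT caches a small trained CNN, but an averaging function keeps the sketch self-contained. The sampling interval of 16 over 8-bit inputs (17 levels per pixel) follows the uniform-sampling strategy used by SR-LUT.

```python
import itertools
import numpy as np

SCALE = 2                      # upscaling factor
INTERVAL = 16                  # sampling interval over 8-bit inputs
LEVELS = 256 // INTERVAL + 1   # 17 sampled levels per pixel

def toy_sr_net(patch):
    """Hypothetical SR network with a 2x2 RF: maps 4 LR values
    to SCALE**2 HR values (here, by simple averaging)."""
    return np.full(SCALE * SCALE, patch.mean(), dtype=np.float32)

# Step 2: cache the network by traversing all sampled 2x2 inputs.
grid = np.arange(LEVELS) * INTERVAL
lut = np.zeros((LEVELS,) * 4 + (SCALE * SCALE,), dtype=np.float32)
for idx in itertools.product(range(LEVELS), repeat=4):
    lut[idx] = toy_sr_net(grid[list(idx)])

# Step 3: retrieve HR values by quantizing an LR patch into indices.
lr_patch = np.array([10, 20, 30, 40], dtype=np.float32)
idx = tuple(np.clip(np.round(lr_patch / INTERVAL), 0, LEVELS - 1).astype(int))
hr_values = lut[idx]
print(lut.size)  # 17**4 * 4 = 334084 cached values
```

At inference time, no network is run at all: upscaling reduces to quantization plus a table read (plus interpolation between sampled entries in the real method, omitted here).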

However, because the LUT must enumerate every possible input combination, its size grows exponentially with the dimension of its indexing entry.

Thus, its receptive field (RF) has to be limited, resulting in inferior performance.


What MuLUT Does: From Single LUT to Multiple LUTs

Our work, MuLUT, addresses this exponential growth by cooperating multiple LUTs. The total size of MuLUT grows linearly rather than exponentially, yielding a practical way to enlarge the RF.
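The saving follows from back-of-the-envelope arithmetic. The 3x3 configuration below is purely illustrative (see the paper for the actual indexing patterns), but it shows why one monolithic LUT is infeasible while several small cooperating LUTs are not:

```python
V = 17  # sampled levels per indexed pixel (interval of 16 over 8 bits)

# One monolithic LUT indexing all 9 pixels of a 3x3 RF:
single_lut = V ** 9       # ~1.2e11 entries -- infeasible to store

# Cooperating LUTs: e.g. three 4D LUTs whose indexing patterns
# jointly cover the same area -- size grows linearly with LUT count:
multi_lut = 3 * V ** 4    # 250563 entries

print(single_lut, multi_lut)
```

Adding another 4D LUT costs only another `V**4` entries, whereas adding one more pixel to a single LUT's index multiplies its size by `V`.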

MuLUT significantly outperforms SR-LUT while preserving its efficiency, achieving a better performance-efficiency trade-off.


How MuLUT Works: LUTs in a Neural Network Way

We construct multiple LUTs both in width and in depth via complementary indexing and hierarchical indexing, just like constructing a neural network.
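A highly simplified sketch of the two ideas, with randomly filled 4D LUTs standing in for cached sub-networks (the actual MuLUT indexing patterns and fusion are defined in the paper; the 1-D neighborhood and averaging below are illustrative assumptions):

```python
import numpy as np

LEVELS = 17
rng = np.random.default_rng(0)

# Hypothetical 4D LUTs with values in [0, 1], standing in for cached nets.
lut_a = rng.random((LEVELS,) * 4)     # first "layer", pattern A
lut_b = rng.random((LEVELS,) * 4)     # first "layer", complementary pattern B
lut_deep = rng.random((LEVELS,) * 4)  # second "layer" for hierarchical indexing

def quantize(x):
    """Map values in [0, 1] to LUT indices; intermediate outputs must be
    re-quantized before they can index the next-stage LUT."""
    return tuple(np.clip(np.round(x * (LEVELS - 1)), 0, LEVELS - 1).astype(int))

neighborhood = rng.random(8)  # toy 1-D stand-in for an LR neighborhood

# "In width" (complementary indexing): two LUTs index complementary pixel
# patterns of the same neighborhood, and their retrievals are fused.
wide = 0.5 * (lut_a[quantize(neighborhood[:4])] +
              lut_b[quantize(neighborhood[4:])])

# "In depth" (hierarchical indexing): stage-1 outputs are re-quantized and
# index a second LUT, cascading LUTs like stacked layers to enlarge the RF.
stage1 = np.array([lut_a[quantize(neighborhood[i:i + 4])] for i in range(4)])
deep = lut_deep[quantize(stage1)]
```

Width increases the pixels the final output depends on within one stage; depth lets a second-stage LUT see information aggregated from overlapping first-stage windows, so the effective RF compounds across stages.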


Learn More

Related Works

We generalize MuLUT to DNN-of-LUTs, showing its versatility in low-level vision tasks. Learn more at arXiv.

Our follow-up work, LeRF, further extends LUT-based methods to arbitrary-scale super-resolution, achieving continuous resampling and thereby replacing interpolation methods, which MuLUT cannot do. Please learn more at the LeRF project page.

BibTeX

@InProceedings{Li_2022_ECCV,
  author    = {Li, Jiacheng and Chen, Chang and Cheng, Zhen and Xiong, Zhiwei},
  title     = {MuLUT: Cooperating Multiple Look-Up Tables for Efficient Image Super-Resolution},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
}

@article{Li_2023_DNN_LUT,
  author  = {Li, Jiacheng and Chen, Chang and Cheng, Zhen and Xiong, Zhiwei},
  title   = {Toward {DNN} of {LUTs}: Learning Efficient Image Restoration with Multiple Look-Up Tables},
  journal = {arXiv preprint},
  year    = {2023},
}

@InProceedings{Li_2023_CVPR,
  author    = {Li, Jiacheng and Chen, Chang and Huang, Wei and Lang, Zhiqiang and Song, Fenglong and Yan, Youliang and Xiong, Zhiwei},
  title     = {Learning Steerable Function for Efficient Image Resampling},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
  pages     = {5866-5875}
}

Acknowledgement

We would like to thank Shiyu Deng, Bo Hu, Zeyu Xiao, and Xueyan Huang for benchmark testing, and Xihao Chen for paper revision.