The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
Richard Zhang1
Phillip Isola1,3
Alexei A. Efros1
Eli Shechtman2
Oliver Wang2
1UC Berkeley
2Adobe Research
Code [GitHub]
CVPR 2018 [preprint]


While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
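To make the idea concrete, a deep-feature perceptual distance compares two images by unit-normalizing their per-layer activations in the channel dimension, taking a (possibly channel-weighted) squared L2 difference, averaging spatially, and summing across layers. Below is a minimal NumPy sketch of that computation over precomputed activations; the function names and the uniform default weights are illustrative, and the actual metric uses activations from a pretrained network (with learned per-channel weights in the calibrated variant):

```python
import numpy as np

def unit_normalize(feats, eps=1e-10):
    # feats: (C, H, W). Normalize the channel vector at each spatial
    # position to unit length, as done before comparing activations.
    norm = np.sqrt((feats ** 2).sum(axis=0, keepdims=True)) + eps
    return feats / norm

def perceptual_distance(feats_x, feats_y, weights=None):
    """Sketch of a deep-feature distance over precomputed activations.

    feats_x, feats_y: lists of (C, H, W) arrays, one per network layer.
    weights: optional list of per-layer, per-channel weight vectors (C,);
             uniform weighting is used if None.
    """
    total = 0.0
    for l, (fx, fy) in enumerate(zip(feats_x, feats_y)):
        diff = unit_normalize(fx) - unit_normalize(fy)      # (C, H, W)
        if weights is not None:
            diff = weights[l][:, None, None] * diff         # channel weights
        # squared L2 over channels, averaged over spatial positions
        total += (diff ** 2).sum(axis=0).mean()
    return total
```

Identical activation stacks yield a distance of zero, and the squared form makes the distance symmetric in its two arguments.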

Try the Model / Download the Dataset



R. Zhang, P. Isola, A. A. Efros,
E. Shechtman, O. Wang.

The Unreasonable Effectiveness of
Deep Features as a Perceptual Metric.

In CVPR, 2018 (preprint).
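The reference above, written as a BibTeX entry (the citation key is a conventional choice, not prescribed by the source):

```
@inproceedings{zhang2018unreasonable,
  title     = {The Unreasonable Effectiveness of Deep Features as a Perceptual Metric},
  author    = {Zhang, Richard and Isola, Phillip and Efros, Alexei A. and Shechtman, Eli and Wang, Oliver},
  booktitle = {CVPR},
  year      = {2018}
}
```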



This research was supported, in part, by grants from Berkeley Deep Drive and NSF IIS-1633310, and by hardware donations from NVIDIA Corp. We thank members of the Berkeley AI Research Lab and the Adobe Creative Intelligence Lab for helpful discussions. We also thank Radu Timofte, Zhaowen Wang, Michael Waechter, Simon Niklaus, and Sergio Guadarrama for help preparing data. RZ is partially supported by an Adobe Research Fellowship, and much of this work was done while RZ was an intern at Adobe Research.