Real-Time User-Guided Image Colorization with Learned Deep Priors
Richard Zhang*
Jun-Yan Zhu*
Phillip Isola
Xinyang Geng
Angela S. Lin
Tianhe Yu
Alexei A. Efros
University of California, Berkeley
Code [GitHub] [Training]
SIGGRAPH 2017 [Paper]
Conference [Talk]
Slides [ppt]


We propose a deep learning approach for user-guided image colorization. The system directly maps a grayscale image, along with sparse, local user "hints", to an output colorization with a Convolutional Neural Network (CNN). Rather than using hand-defined rules, the network propagates user edits by fusing low-level cues with high-level semantic information learned from large-scale data. We train on a million images with simulated user inputs. To guide the user toward efficient input selection, the system recommends likely colors based on the input image and the current user inputs. Colorization is performed in a single feed-forward pass, enabling real-time use. Even with randomly simulated user inputs, the proposed system helps novice users quickly create realistic colorizations, with large improvements in colorization quality after just a minute of use. In addition, the framework can incorporate other user "hints" about the desired colorization; we demonstrate an application to global color histogram transfer.
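To make the input representation concrete, the sketch below is our own simplified reconstruction (not the authors' code; the actual network and preprocessing live in the repository linked above): each sparse user hint contributes its ab color values to two hint channels, plus a binary mask channel marking where a hint was placed, all stacked alongside the grayscale lightness channel.

```python
# Hypothetical sketch of encoding sparse user "hints" as extra input
# channels for a colorization CNN (simplified reconstruction; channel
# layout here is an illustrative assumption).

def build_network_input(gray, hints):
    """gray: HxW list of lightness values in [0, 1].
    hints: list of (row, col, a, b) user color hints in ab space.
    Returns four HxW channels: lightness, hint-a, hint-b, hint mask."""
    h, w = len(gray), len(gray[0])
    a_chan = [[0.0] * w for _ in range(h)]
    b_chan = [[0.0] * w for _ in range(h)]
    mask = [[0.0] * w for _ in range(h)]
    for r, c, a, b in hints:
        a_chan[r][c] = a      # hint color, ab components
        b_chan[r][c] = b
        mask[r][c] = 1.0      # marks "a hint exists at this pixel"
    return [gray, a_chan, b_chan, mask]

# Example: a 2x2 grayscale patch with a single hint at pixel (0, 1).
gray = [[0.5, 0.5], [0.2, 0.8]]
channels = build_network_input(gray, [(0, 1, 0.4, 0.3)])
print(channels[3])  # mask channel: [[0.0, 1.0], [0.0, 0.0]]
```

A feed-forward CNN consuming these channels can then predict the full ab color field in a single pass, which is what makes the interaction real-time.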

Demo Video


Results on Legacy Photos

We trained our system on 1.3M color photos, converted to grayscale "synthetically" by removing the color components. Here, we show some examples on legacy grayscale photographs. This is Figure 10 in our full paper.

Selected Legacy Photos

Additional Results

We show all of the results from our user study. Each of the 28 users was given minimal training (a short 2-minute explanation and a few questions) and 10 images to colorize, spending just 1 minute per image. We show all 280 examples in the link below. This is an extension of Figures 4 & 5 of our paper. Please see Section 4.2 of our paper for additional details.

Full User Study Results

We show additional examples of our network incorporating global histogram information. Please see Sections 3.3 and 4.4 of our paper for additional details. This is an extension of Figure 9 in our paper.

Random Histogram Transfer Results
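For readers curious how a global histogram hint might be computed from a reference image, here is a small illustrative sketch (our own simplification; the bin size and normalization below are assumptions, not the paper's exact setup, which is detailed in Section 3.3): the ab color plane is quantized into bins, and the normalized bin frequencies form a global color distribution that can be supplied to the network.

```python
# Illustrative sketch: build a normalized histogram over quantized ab
# colors from a reference image's pixels, usable as a global color "hint"
# (bin layout here is an assumption for illustration).

def ab_histogram(ab_pixels, bin_size=10, ab_range=110):
    """ab_pixels: list of (a, b) values, each roughly in [-110, 110].
    Returns a dict mapping quantized (a, b) bins to normalized frequency."""
    hist = {}
    for a, b in ab_pixels:
        key = (int((a + ab_range) // bin_size),
               int((b + ab_range) // bin_size))
        hist[key] = hist.get(key, 0) + 1
    total = len(ab_pixels)
    return {k: v / total for k, v in hist.items()}

# Example: three pixels, two of which fall in the same color bin.
hist = ab_histogram([(5.0, 5.0), (7.0, 8.0), (-50.0, 20.0)])
```

Transferring the histogram of a colorful reference image in this way steers the overall palette of the output without specifying any particular pixel's color.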

Try the network

GitHub [Official Repository] [PyTorch Training]


R. Zhang*, J.Y. Zhu*, P. Isola,
X. Geng, A. S. Lin, T. Yu, A. A. Efros.

Real-Time User-Guided Image Colorization with Learned Deep Priors.
In SIGGRAPH, 2017. (hosted on arXiv)


Related and Concurrent Work

Automatic Colorization with Deep Networks

R. Zhang, P. Isola, A. A. Efros. Colorful Image Colorization. In ECCV, 2016. [PDF] [Website] [Demo]

G. Larsson, M. Maire, and G. Shakhnarovich. Learning Representations for Automatic Colorization. In ECCV, 2016. [PDF] [Website]

S. Iizuka, E. Simo-Serra, and H. Ishikawa. Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification. In SIGGRAPH, 2016. [PDF] [Website]

User Interaction with Deep Networks

P. Isola, J.Y. Zhu, T. Zhou, A. A. Efros. Image-to-Image Translation with Conditional Adversarial Networks. In CVPR, 2017. [PDF] [Website] [Demo]

P. Sangkloy, J. Lu, C. Fang, F. Yu, J. Hays. Scribbler: Controlling Deep Image Synthesis with Sketch and Color. In CVPR, 2017. [PDF] [Website]

J.Y. Zhu, P. Krähenbühl, E. Shechtman, A. A. Efros. Generative Visual Manipulation on the Natural Image Manifold. In ECCV, 2016. [PDF] [Website]

K. Frans. Outline Colorization through Tandem Adversarial Networks. arXiv preprint, 2017. [PDF] [Demo]

Preferred Networks, Inc. PaintsChainer. [Demo]


We thank members of the Berkeley Artificial Intelligence Research Lab for helpful discussions. We also thank the participants in our user study, along with Aditya Deshpande and Gustav Larsson for providing images for comparison. This work has been supported, in part, by NSF SMA-1514512, a Google Grant, BAIR, and a hardware donation by NVIDIA.