I2S2: Image-To-Scene Sketch Translation Using Conditional Input and Adversarial Networks
Document Type
Conference Proceeding
Publication Date
11-1-2020
School
Computing Sciences and Computer Engineering
Abstract
Image generation from sketches is a popular and well-studied computer vision problem. The inverse problem of image-to-sketch (I2S) synthesis, however, remains open and challenging, let alone image-to-scene sketch (I2S2) synthesis, in which full-scene sketches are desired. In this paper, we propose a framework for generating full-scene sketch representations from natural scene images, aiming to produce outputs that approximate hand-drawn scene sketches. Specifically, we exploit generative adversarial models to generate full-scene sketches from arbitrary input images, which serve as conditions that guide the distribution mapping in the context of adversarial learning. To advance the use of such conditions, we further investigate edge detection solutions and propose using Holistically-nested Edge Detection (HED) maps to condition the generative model. We conduct extensive experiments to validate the proposed framework and provide detailed quantitative and qualitative evaluations demonstrating its effectiveness. We also demonstrate the flexibility of the framework under different conditional inputs, such as edge maps from the Canny detector.
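The conditioning pipeline described in the abstract can be illustrated with a minimal, hypothetical sketch: an edge map is extracted from the input image and fed as the conditional input to a pix2pix-style generator, whose output a discriminator would score against hand-drawn sketches during adversarial training. The Canny variant is shown because OpenCV provides it directly; the paper's primary HED conditions would instead come from a pretrained HED network. The names `canny_condition` and `EdgeConditionedGenerator` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of edge-conditioned sketch generation (not the authors' code).
import cv2
import numpy as np
import torch
import torch.nn as nn

def canny_condition(image_bgr: np.ndarray) -> torch.Tensor:
    """Turn a BGR image into a 1-channel edge-map tensor in [0, 1]."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)
    return torch.from_numpy(edges).float().div(255.0).unsqueeze(0)  # (1, H, W)

class EdgeConditionedGenerator(nn.Module):
    """Toy encoder-decoder standing in for a pix2pix-style U-Net generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, condition: torch.Tensor) -> torch.Tensor:
        # The edge map is the conditional input; the output is the scene sketch.
        return self.net(condition)

# Usage: extract the edge-map condition, then map it to a full-scene sketch.
image = cv2.resize(cv2.imread("scene.jpg"), (256, 256))
cond = canny_condition(image).unsqueeze(0)   # (1, 1, 256, 256) batch
sketch = EdgeConditionedGenerator()(cond)    # (1, 1, 256, 256) generated sketch
```

Swapping `canny_condition` for a pretrained HED forward pass would yield the HED-conditioned variant the paper favors, without changing the generator interface.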
Publication Title
Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
Volume
2020-November
First Page
773
Last Page
778
Recommended Citation
McGonigle, D., Wang, T., Yuan, J., He, K., & Li, B. (2020). I2S2: Image-To-Scene Sketch Translation Using Conditional Input and Adversarial Networks. Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI, 2020-November, 773-778.
Available at: https://aquila.usm.edu/fac_pubs/19079