Title: An Empirical Study of Language CNN for Image Captioning
Authors: Gu J.; Wang G.; Cai J.; Chen T.
Issue Date: 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Citation: Gu J., Wang G., Cai J., Chen T. (2017). An Empirical Study of Language CNN for Image Captioning. Proceedings of the IEEE International Conference on Computer Vision, 2017-October: 1231-1240. ScholarBank@NUS Repository.
Abstract: Language models based on recurrent neural networks have dominated recent image caption generation tasks. In this paper, we introduce a language CNN model which is suitable for statistical language modeling tasks and shows competitive performance in image captioning. In contrast to previous models, which predict the next word based only on the immediately preceding word and a hidden state, our language CNN is fed with all of the previous words and can model the long-range dependencies across the word history, which are critical for image captioning. The effectiveness of our approach is validated on two datasets: Flickr30K and MS COCO. Our extensive experimental results show that our method outperforms vanilla recurrent neural network language models and is competitive with the state-of-the-art methods.
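The core idea in the abstract, conditioning the next-word prediction on the entire word history via a temporal convolution rather than on a single previous word and a hidden state, can be sketched as follows. This is a minimal toy illustration, not the paper's actual architecture: all names, dimensions, and the single-layer convolution with max-over-time pooling are assumptions chosen for brevity, and the weights are random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical): vocab, embedding dim, conv window, num filters.
V, E, K, F = 50, 16, 3, 32

# Randomly initialized parameters, for illustration only.
embeddings = rng.normal(size=(V, E))
conv_w = 0.1 * rng.normal(size=(K, E, F))
out_w = 0.1 * rng.normal(size=(F, V))

def language_cnn_logits(history):
    """Score the next word given ALL previous words in the caption.

    Unlike one RNN step, which sees only the last word plus a hidden
    state, the convolution slides over the whole history, so every
    previous word can influence the prediction.
    """
    x = embeddings[history]                         # (T, E) word embeddings
    x = np.vstack([np.zeros((K - 1, E)), x])        # left-pad for full windows
    T = len(history)
    conv = np.empty((T, F))
    for t in range(T):
        window = x[t:t + K]                         # (K, E) local window
        # Contract window against filters over the (time, embed) axes.
        conv[t] = np.tensordot(window, conv_w, axes=([0, 1], [0, 1]))
    h = np.maximum(conv, 0.0).max(axis=0)           # ReLU + max-over-time pool
    return h @ out_w                                # (V,) next-word scores

# Example: score the next word after a 4-word history.
logits = language_cnn_logits([1, 7, 3, 12])
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                # softmax over the vocabulary
```

In the paper, the language CNN's representation is fused with image features and a recurrent component before prediction; the sketch above isolates only the convolution-over-history step.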
Source Title: Proceedings of the IEEE International Conference on Computer Vision
ISBN: 9781538610329
ISSN: 15505499
DOI: 10.1109/ICCV.2017.138
Appears in Collections: Staff Publications

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.