Please use this identifier to cite or link to this item: https://doi.org/10.1145/3343031.3350870
Title: Learning Using Privileged Information for Food Recognition
Authors: Lei Meng, Long Chen, Xun Yang, Dacheng Tao, Hanwang Zhang, Chunyan Miao
Keywords: Cross-modal fusion; Food recognition; Heterogeneous feature alignment; Learning using privileged information
Issue Date: 21-Oct-2019
Citation: Lei Meng, Long Chen, Xun Yang, Dacheng Tao, Hanwang Zhang, Chunyan Miao (2019-10-21). Learning Using Privileged Information for Food Recognition. ACM MM 2019: 557-565. ScholarBank@NUS Repository. https://doi.org/10.1145/3343031.3350870
Abstract: Food recognition for user-uploaded images is crucial in visual diet tracking, an emerging application linking the multimedia and healthcare domains. However, it is challenging due to the varied visual appearance of food images, caused by different conditions when the photos are taken, such as angles, distances, lighting, food containers, and background scenes. To alleviate this semantic gap, this paper presents a cross-modal alignment and transfer network (ATNet), motivated by the paradigm of learning using privileged information (LUPI). It additionally utilizes the ingredients in food images as an “intelligent teacher” during training to facilitate cross-modal information passing. Specifically, ATNet first uses a pair of synchronized autoencoders to build the base image and ingredient channels for information flow. Subsequently, information passing is enabled through a two-stage cross-modal interaction. The first stage adopts a two-step method, called partial heterogeneous transfer, to 1) alleviate the intrinsic heterogeneity between images and ingredients and 2) align them in a shared space so that the information they carry about food classes can interact. In the second stage, ATNet learns to map the visual embeddings of images to the ingredient channel for food recognition from the view of the “teacher”. This leads to a refined recognition via multi-view fusion. Experiments on two real-world datasets show that ATNet can be incorporated with any state-of-the-art CNN model to consistently improve its performance. © 2019 Association for Computing Machinery.
Source Title: ACM MM 2019
URI: https://scholarbank.nus.edu.sg/handle/10635/167714
ISBN: 9781450368896
DOI: 10.1145/3343031.3350870
Appears in Collections: Staff Publications; Elements
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
3343031.3350870.pdf | | 2.17 MB | Adobe PDF | OPEN | None
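The abstract above outlines ATNet's structure: paired autoencoders for the image and ingredient channels, a shared alignment space, a learned mapping from visual embeddings into the ingredient channel, and multi-view fusion of the two predictions. As a rough, non-authoritative sketch of how these pieces could fit together, here is a minimal PyTorch version; all module names, dimensions, loss terms, and the averaging fusion rule are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the ATNet idea described in the abstract (assumptions only).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """One channel (image or ingredient): encoder to a latent space plus decoder."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class ATNetSketch(nn.Module):
    # Dimensions are placeholders: e.g. 2048-d CNN features, a multi-hot
    # ingredient vector, and a food-class count typical of food datasets.
    def __init__(self, img_dim=2048, ing_dim=353, latent_dim=512, num_classes=101):
        super().__init__()
        # Base image and ingredient channels (the "synchronized autoencoders").
        self.img_ae = Autoencoder(img_dim, latent_dim)
        self.ing_ae = Autoencoder(ing_dim, latent_dim)
        # Stage 1: project both latents into a shared space for alignment.
        self.img_proj = nn.Linear(latent_dim, latent_dim)
        self.ing_proj = nn.Linear(latent_dim, latent_dim)
        # Stage 2: map visual embeddings into the ingredient channel ("teacher" view).
        self.img2ing = nn.Linear(latent_dim, latent_dim)
        # One classifier head per view; predictions are fused at the end.
        self.cls_img = nn.Linear(latent_dim, num_classes)
        self.cls_ing = nn.Linear(latent_dim, num_classes)

    def forward(self, img_feat, ing_vec=None):
        z_img, img_rec = self.img_ae(img_feat)
        logits_img = self.cls_img(self.img_proj(z_img))
        # Teacher view: visual embedding pushed through the ingredient channel.
        logits_ing = self.cls_ing(self.ing_proj(self.img2ing(z_img)))
        fused = (logits_img + logits_ing) / 2  # simple average fusion (assumption)
        aux = None
        if ing_vec is not None:  # privileged info is available only at training time
            z_ing, ing_rec = self.ing_ae(ing_vec)
            # Alignment target: pull projected image/ingredient latents together.
            align = nn.functional.mse_loss(self.img_proj(z_img), self.ing_proj(z_ing))
            aux = {"align": align, "img_rec": img_rec, "ing_rec": ing_rec}
        return fused, aux

# Hypothetical usage: backbone features plus ingredient vectors during training,
# image features alone at test time (ingredients are privileged information).
model = ATNetSketch()
logits, aux = model(torch.randn(4, 2048), torch.rand(4, 353))
```

Consistent with the LUPI paradigm, the ingredient branch here only contributes training signals (reconstruction and alignment terms in `aux`); at inference the model sees images alone, which matches the abstract's claim that ATNet can wrap any CNN backbone.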