Please use this identifier to cite or link to this item: https://doi.org/10.1109/ICME52920.2022.9859753
DC Field: Value
dc.title: Mix-Up Self-Supervised Learning for Contrast-Agnostic Applications
dc.contributor.author: Zhang, Y
dc.contributor.author: Yin, Y
dc.contributor.author: Zhang, Y
dc.contributor.author: Zimmermann, R
dc.date.accessioned: 2023-06-06T14:14:31Z
dc.date.available: 2023-06-06T14:14:31Z
dc.date.issued: 2022-01-01
dc.identifier.citation: Zhang, Y, Yin, Y, Zhang, Y, Zimmermann, R (2022-01-01). Mix-Up Self-Supervised Learning for Contrast-Agnostic Applications. 2022 IEEE International Conference on Multimedia and Expo (ICME), 2022-July. ScholarBank@NUS Repository. https://doi.org/10.1109/ICME52920.2022.9859753
dc.identifier.isbn: 9781665485630
dc.identifier.issn: 1945-7871
dc.identifier.issn: 1945-788X
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/241605
dc.description.abstract: Contrastive self-supervised learning has attracted significant research attention recently. It learns effective visual representations from unlabeled data by embedding augmented views of the same image close to each other while pushing away embeddings of different images. Despite its great success on ImageNet classification, COCO object detection, etc., its performance degrades on contrast-agnostic applications, e.g., medical image classification, where all images are visually similar to each other. This creates difficulties in optimizing the embedding space as the distance between images is rather small. To solve this issue, we present the first mix-up self-supervised learning framework for contrast-agnostic applications. We address the low variance across images based on cross-domain mix-up and build the pretext task based on two synergistic objectives: image reconstruction and transparency prediction. Experimental results on two benchmark datasets validate the effectiveness of our method, where an improvement of 2.5% to 7.4% in top-1 accuracy was obtained compared to existing self-supervised learning methods.
dc.publisher: IEEE
dc.source: Elements
dc.type: Conference Paper
dc.date.updated: 2023-06-05T23:37:52Z
dc.contributor.department: DEPARTMENT OF COMPUTER SCIENCE
dc.contributor.department: Institute of Data Science
dc.description.doi: 10.1109/ICME52920.2022.9859753
dc.description.sourcetitle: 2022 IEEE International Conference on Multimedia and Expo (ICME)
dc.description.volume: 2022-July
dc.published.state: Published
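As a rough illustration of the method the abstract describes, the mix-up view construction and its two pretext targets can be sketched as below. The function name `mixup_views` and the Beta-sampled transparency factor are illustrative assumptions, not the authors' implementation; in the paper, a network would predict the transparency factor and reconstruct the source images from the mixed view.

```python
import numpy as np

def mixup_views(x_a, x_b, alpha=1.0, rng=None):
    """Blend two images into one mixed view (illustrative sketch).

    Returns the mixed image and the transparency factor lam. A pretext
    head could be trained to regress lam (transparency prediction),
    while a decoder reconstructs x_a and x_b (image reconstruction).
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))  # transparency factor in [0, 1]
    mixed = lam * x_a + (1.0 - lam) * x_b
    return mixed, lam

# Toy usage with random "images" of shape (H, W, C).
rng = np.random.default_rng(0)
img_a = rng.random((32, 32, 3))
img_b = rng.random((32, 32, 3))
mixed, lam = mixup_views(img_a, img_b, rng=rng)
```

For cross-domain mix-up as motivated in the abstract, `img_a` and `img_b` would come from different datasets (e.g., a natural-image source and the contrast-agnostic target domain), which restores variance across training samples.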
Appears in Collections: Staff Publications
Elements

Files in This Item:
2022062137.pdf (Published version, 2.21 MB, Adobe PDF, Open access)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.