Please use this identifier to cite or link to this item: https://doi.org/10.1109/TIP.2022.3195642
DC Field	Value
dc.title	Adaptive Boosting for Domain Adaptation: Toward Robust Predictions in Scene Segmentation
dc.contributor.author	Zheng, Zhedong
dc.contributor.author	Yang, Yi
dc.date.accessioned	2023-11-09T05:28:14Z
dc.date.available	2023-11-09T05:28:14Z
dc.date.issued	2022
dc.identifier.citation	Zheng, Zhedong, Yang, Yi (2022). Adaptive Boosting for Domain Adaptation: Toward Robust Predictions in Scene Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING 31: 5371-5382. ScholarBank@NUS Repository. https://doi.org/10.1109/TIP.2022.3195642
dc.identifier.issn	1057-7149
dc.identifier.issn	1941-0042
dc.identifier.uri	https://scholarbank.nus.edu.sg/handle/10635/245850
dc.description.abstract	Domain adaptation aims to transfer the knowledge learned from the source domain to a new environment, i.e., the target domain. One common practice is to train the model on both labeled source-domain data and unlabeled target-domain data. Yet the learned models are usually biased due to the strong supervision of the source domain. Most researchers adopt an early-stopping strategy to prevent over-fitting, but when to stop training remains challenging given the lack of a target-domain validation set. In this paper, we propose an efficient bootstrapping method, called AdaBoost Student, which explicitly learns complementary models during training and frees users from empirical early stopping. AdaBoost Student combines deep model learning with a conventional training strategy, i.e., adaptive boosting, and enables interaction between the learned models and the data sampler. We adopt an adaptive data sampler to progressively facilitate learning on hard samples, and aggregate 'weak' models to prevent over-fitting. Extensive experiments show that (1) without the need to choose a stopping time, AdaBoost Student provides a robust solution through efficient complementary model learning during training, and (2) AdaBoost Student is orthogonal to most domain adaptation methods and can be combined with existing approaches to further improve state-of-the-art performance. We achieve competitive results on three widely used scene segmentation domain adaptation benchmarks.
dc.language.iso	en
dc.publisher	IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.source	Elements
dc.subject	Science & Technology
dc.subject	Technology
dc.subject	Computer Science, Artificial Intelligence
dc.subject	Engineering, Electrical & Electronic
dc.subject	Computer Science
dc.subject	Engineering
dc.subject	Adaptation models
dc.subject	Data models
dc.subject	Training
dc.subject	Predictive models
dc.subject	Computational modeling
dc.subject	Semantics
dc.subject	Benchmark testing
dc.subject	Domain adaptation
dc.subject	scene segmentation
dc.type	Article
dc.date.updated	2023-11-09T04:15:48Z
dc.contributor.department	DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi	10.1109/TIP.2022.3195642
dc.description.sourcetitle	IEEE TRANSACTIONS ON IMAGE PROCESSING
dc.description.volume	31
dc.description.page	5371-5382
dc.published.state	Published
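
Note on the method described in the abstract: it combines (1) an adaptive data sampler that emphasizes hard samples and (2) aggregation of "weak" model snapshots taken along training, so that no early-stopping point has to be chosen. The following is a minimal, hypothetical PyTorch sketch of that general idea only; it is not the authors' implementation, the actual weighting and aggregation schemes in the paper may differ, and all names (ToySegNet, make_sample_weights, ensemble_predict) are invented for illustration.

# Minimal sketch (illustrative, not the authors' code) of boosting-style
# sample reweighting plus snapshot aggregation for segmentation.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

class ToySegNet(nn.Module):
    """Toy stand-in for a segmentation network (1x1 conv classifier)."""
    def __init__(self, in_ch=3, n_classes=19):
        super().__init__()
        self.net = nn.Conv2d(in_ch, n_classes, kernel_size=1)
    def forward(self, x):
        return self.net(x)

def make_sample_weights(model, dataset, device="cpu"):
    """AdaBoost-style reweighting (illustrative): higher-loss samples get sampled more often."""
    model.eval()
    criterion = nn.CrossEntropyLoss()
    losses = []
    with torch.no_grad():
        for img, lbl in DataLoader(dataset, batch_size=1):
            pred = model(img.to(device))
            losses.append(criterion(pred, lbl.to(device)).item())
    # Softmax turns per-sample losses into sampling probabilities.
    return torch.softmax(torch.tensor(losses), dim=0)

def ensemble_predict(models, img):
    """Aggregate the 'weak' snapshots by averaging their softmax outputs."""
    probs = [torch.softmax(m(img), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)

# Toy data: 16 images with dense labels over 19 classes.
imgs = torch.randn(16, 3, 32, 32)
lbls = torch.randint(0, 19, (16, 32, 32))
dataset = TensorDataset(imgs, lbls)

model = ToySegNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
snapshots = []

for round_idx in range(3):  # boosting-style rounds
    weights = make_sample_weights(model, dataset)
    sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)
    loader = DataLoader(dataset, batch_size=4, sampler=sampler)
    model.train()
    for img, lbl in loader:
        optimizer.zero_grad()
        loss = criterion(model(img), lbl)
        loss.backward()
        optimizer.step()
    # Keep this round's "weak" model instead of picking one early-stopped checkpoint.
    snapshots.append(copy.deepcopy(model).eval())

with torch.no_grad():
    fused = ensemble_predict(snapshots, imgs[:1])
print(fused.shape)  # (1, 19, 32, 32)

In this sketch the final prediction averages the softmax outputs of all snapshots, which stands in for the ensemble behavior the abstract attributes to aggregating weak models; the averaging step is what removes the need to pick a single stopping time.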
Appears in Collections: Staff Publications
Elements

Files in This Item:
File	Description	Size	Format	Access Settings	Version
TIP_Adaboost.pdf	Accepted version	4.03 MB	Adobe PDF	OPEN	Post-print