Please use this identifier to cite or link to this item: https://doi.org/10.1109/CVPR.2018.00404
dc.title: Left-Right Comparative Recurrent Model for Stereo Matching
dc.contributor.author: Jie, Z
dc.contributor.author: Wang, P
dc.contributor.author: Ling, Y
dc.contributor.author: Zhao, B
dc.contributor.author: Wei, Y
dc.contributor.author: Feng, J
dc.contributor.author: Liu, W
dc.date.accessioned: 2020-05-14T01:43:19Z
dc.date.available: 2020-05-14T01:43:19Z
dc.date.issued: 2018-12-14
dc.identifier.citation: Jie, Z, Wang, P, Ling, Y, Zhao, B, Wei, Y, Feng, J, Liu, W (2018-12-14). Left-Right Comparative Recurrent Model for Stereo Matching. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition: 3838-3846. ScholarBank@NUS Repository. https://doi.org/10.1109/CVPR.2018.00404
dc.identifier.issn: 1063-6919
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/168132
dc.description.abstract: © 2018 IEEE. Leveraging disparity information from both the left and right views is crucial for stereo disparity estimation. The left-right consistency check is an effective way to enhance disparity estimation by referring to information from the opposite view; however, it is conventionally applied as an isolated, heavily hand-crafted post-processing step. This paper proposes a novel left-right comparative recurrent model that performs left-right consistency checking jointly with disparity estimation. At each recurrent step, the model produces disparity results for both views and then performs an online left-right comparison to identify mismatched regions that likely contain erroneously labeled pixels. A soft attention mechanism employs the learned error maps to guide the model to selectively focus on refining the unreliable regions at the next recurrent step. In this way, the proposed recurrent model progressively improves the generated disparity maps. Extensive evaluations on the KITTI 2015, Scene Flow, and Middlebury benchmarks validate the effectiveness of the model, demonstrating that it achieves state-of-the-art stereo disparity estimation results.
dc.publisher: IEEE
dc.source: Elements
dc.subject: cs.CV
dc.type: Article
dc.date.updated: 2020-05-13T08:20:41Z
dc.contributor.department: ELECTRICAL AND COMPUTER ENGINEERING
dc.contributor.department: TEMASEK LABORATORIES
dc.description.doi: 10.1109/CVPR.2018.00404
dc.description.sourcetitle: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.description.page: 3838-3846
dc.published.state: Published
Appears in Collections: Staff Publications; Elements
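
The recurrent left-right comparison and error-driven soft attention described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the warp-based consistency check, the sigmoid gating, and the `refine_net` / `lr_comparative_step` names are assumptions introduced here to keep the example self-contained, and the paper learns its error maps rather than hand-computing them.

```python
# Minimal sketch (not the authors' code) of one recurrent step of
# left-right comparison with soft attention over an error map.
import torch
import torch.nn.functional as F

def warp_by_disparity(src, disp):
    """Sample `src` (B,1,H,W) from the opposite view at x - disp(x),
    i.e. out(x, y) = src(x - disp(x, y), y)."""
    b, _, h, w = src.shape
    xs = torch.arange(w, device=src.device, dtype=src.dtype).view(1, 1, w).expand(b, h, w)
    ys = torch.arange(h, device=src.device, dtype=src.dtype).view(1, h, 1).expand(b, h, w)
    x_shifted = xs - disp.squeeze(1)
    # grid_sample expects (x, y) coordinates normalized to [-1, 1].
    grid = torch.stack((2.0 * x_shifted / (w - 1) - 1.0,
                        2.0 * ys / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(src, grid, mode='bilinear',
                         padding_mode='border', align_corners=True)

def lr_comparative_step(disp_l, disp_r, refine_net):
    """One recurrent refinement step for the left view (the right view
    is treated symmetrically in the paper)."""
    # Online left-right comparison: a consistent pixel satisfies
    # disp_l(x) == disp_r(x - disp_l(x)); large deviations flag mismatches.
    disp_r_warped = warp_by_disparity(disp_r, disp_l)
    error_map = (disp_l - disp_r_warped).abs()
    # Soft attention from the error map: higher error -> weight closer to 1,
    # steering refinement toward the unreliable regions.
    attention = torch.sigmoid(error_map)  # hypothetical gating choice
    residual = refine_net(torch.cat((disp_l, error_map), dim=1))
    return disp_l + attention * residual

# Toy usage: a single conv layer stands in for the refinement network.
refine_net = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)
disp_l = torch.rand(1, 1, 64, 128) * 32   # dummy left/right disparity maps
disp_r = torch.rand(1, 1, 64, 128) * 32
disp_l_next = lr_comparative_step(disp_l, disp_r, refine_net)
```

Stacking several such steps, each consuming the previous step's disparities, mirrors the progressive refinement the abstract describes.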

Files in This Item:
File: 1804.00796v1.pdf
Description: Published version
Size: 8.77 MB
Format: Adobe PDF
Access Settings: OPEN
Version: Post-print
