Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/141264
DC Field | Value
---|---
dc.title | EXTRACTION OF BUILDINGS FROM AERIAL IMAGES
dc.contributor.author | LU KANGKANG
dc.date.accessioned | 2018-04-30T18:01:50Z
dc.date.available | 2018-04-30T18:01:50Z
dc.date.issued | 2018-01-18
dc.identifier.citation | LU KANGKANG (2018-01-18). EXTRACTION OF BUILDINGS FROM AERIAL IMAGES. ScholarBank@NUS Repository.
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/141264
dc.description.abstract | Deep learning has been applied to segmenting buildings from high-resolution aerial images with promising results. However, problems remain that stem from training and testing on split patches and from class imbalance. To overcome these problems, we propose a dual-resolution U-Net that takes a pair of images as input to capture both high- and low-resolution features. We also use a soft Jaccard loss to place more emphasis on sparse, low-accuracy samples (see the sketch following this record). The images from different cities are further balanced according to the number of buildings in each city. With this architecture, we achieved state-of-the-art results on the INRIA aerial image labeling dataset at the time of submission, without any post-processing.
dc.language.iso | en
dc.subject | Deep learning, semantic segmentation, building extraction, remote sensing, U-Net, IoU loss
dc.type | Thesis
dc.contributor.department | ELECTRICAL & COMPUTER ENGINEERING
dc.contributor.supervisor | ONG SIM HENG
dc.description.degree | Master's
dc.description.degreeconferred | MASTER OF ENGINEERING
dc.identifier.orcid | 0000-0002-1872-5494

Appears in Collections: Master's Theses (Open)
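
The soft Jaccard loss named in the abstract and subject keywords is the standard differentiable relaxation of the intersection-over-union (IoU) metric: hard set membership is replaced by predicted foreground probabilities. A minimal PyTorch sketch follows; the function name, tensor shapes, and epsilon smoothing are illustrative assumptions rather than details drawn from the thesis.

```python
import torch

def soft_jaccard_loss(probs: torch.Tensor, targets: torch.Tensor,
                      eps: float = 1e-7) -> torch.Tensor:
    """Soft Jaccard (IoU) loss: 1 - |P ∩ T| / |P ∪ T|, with the
    intersection and union computed on probabilities in [0, 1].

    probs:   predicted building probabilities, shape (N, H, W)
    targets: binary ground-truth masks,        shape (N, H, W)
    """
    inter = (probs * targets).sum(dim=(1, 2))
    union = probs.sum(dim=(1, 2)) + targets.sum(dim=(1, 2)) - inter
    # eps keeps the ratio defined for patches containing no buildings
    return (1.0 - (inter + eps) / (union + eps)).mean()

# Usage: probs = torch.sigmoid(logits) from the network's final layer
# loss = soft_jaccard_loss(probs, masks.float())
```

Because the loss is normalized by the union, a patch with only a few building pixels contributes as strongly as a dense one, which matches the abstract's aim of emphasizing sparse, low-accuracy samples.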
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
Thesis - v2.pdf | | 12.76 MB | Adobe PDF | OPEN | None