Browsing by Author "Yan, Hailun"
Item, Open Access: Automated Building Extraction from Remote Sensing Imagery Using Deep Learning (2022-10-31)
Yan, Hailun; Wang, Ruisheng; Bayat, Sayeh; Hay, Geoffrey

Automatically extracting high-quality building-footprint polygons from satellite and aerial images is crucial for supporting various land use and land cover mapping applications. The conventional building polygon extraction process requires hand-crafted features and substantial human intervention, is time-consuming, and often has limited generalization capability. In recent years, deep learning-based methods have addressed the problem with higher levels of automation, segmentation quality, and generalization capability. These methods often involve two stages: first, building segmentations are predicted from remote sensing images using deep neural networks (DNNs); next, the irregularly shaped building segmentations are regularized into straight-edged, right-angle-cornered building polygons using conventional or deep learning-based methods. As a result, the extraction performance is strongly affected by the quality of the segmentation predictions. However, experiments show that currently widely used segmentation DNNs exhibit significant defects in their building segmentation results, especially for buildings whose edges are rotated relative to the image edges. Moreover, although DNN-based regularization methods have shown greater generalization potential than conventional regularization methods for regularizing buildings of various shapes, the quality of their regularization results is generally unsatisfactory.

This thesis proposes an end-to-end deep learning-based building extraction method built on PolygonCNN. The proposed model consists of a segmentation module that predicts building segmentations and a regularization module that regularizes the building contours traced from the segmentation results. First, an upgraded Mask R-CNN, integrating the rotatable bounding box technique, a Swin Transformer backbone network, and an FPN module, is adopted as the segmentation module to segment buildings at vastly different scales and orientations. Second, the Feature Pooling module and the BRegNet of the original PolygonCNN are modified to exploit the multi-scale feature maps of the FPN. As a result, the proposed model can effectively extract high-quality building polygons at various scales and orientations and shows promising performance compared to several other popular end-to-end deep learning-based building extraction models. In addition, the thesis provides supplemental architecture choices that offer a trade-off between the quality of the building extraction results and the memory consumption of the model.
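To make the two-stage flow concrete, below is a minimal Python/PyTorch-style sketch of the pipeline described in the abstract (segment, trace contours, regularize). It is an illustrative sketch only: the function name extract_building_polygons, the toy segmenter and regularizer stand-ins, and the use of OpenCV's findContours for contour tracing are assumptions for demonstration, not the thesis's actual implementation of the upgraded Mask R-CNN or the modified PolygonCNN.

import cv2
import numpy as np
import torch


def extract_building_polygons(image, segmenter, regularizer, threshold=0.5):
    """Illustrative two-stage flow: segment -> trace contours -> regularize.

    image:       (3, H, W) float tensor
    segmenter:   callable returning per-pixel building probabilities (N, 1, H, W)
    regularizer: callable refining one traced contour into a regularized polygon
    """
    with torch.no_grad():
        prob = segmenter(image.unsqueeze(0))           # (1, 1, H, W)
    mask = (prob[0, 0] > threshold).cpu().numpy()      # binary building mask

    # Trace irregular building outlines from the segmentation mask.
    contours, _ = cv2.findContours(
        mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    raw_polygons = [c.squeeze(1).astype(np.float32) for c in contours]

    # Regularize each traced contour into a straight-edged, right-angled polygon.
    return [regularizer(torch.from_numpy(p)) for p in raw_polygons]


# Toy stand-ins so the sketch runs end to end; the thesis replaces these with the
# upgraded Mask R-CNN segmenter and the modified PolygonCNN (BRegNet) regularizer.
def toy_segmenter(images):
    # Fake "probability map": sigmoid of the channel mean of the input batch.
    return torch.sigmoid(images.mean(dim=1, keepdim=True))


def toy_regularizer(polygon):
    # Identity placeholder; a real regularizer would refine the vertex positions.
    return polygon


polygons = extract_building_polygons(torch.rand(3, 256, 256), toy_segmenter, toy_regularizer)

The sketch only shows the data flow between the two modules; everything specific to the thesis (rotated bounding boxes, the Swin Transformer backbone, FPN feature pooling, and BRegNet) is hidden behind the segmenter and regularizer callables.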