Please use this identifier to cite or link to this item: https://doi.org/10.1109/TMM.2021.3060951
Title: GPS2Vec: Pre-trained Semantic Embeddings for Worldwide GPS Coordinates
Authors: Yin, Y
Zhang, Y
Liu, Z
Wang, S 
Shah, RR
Zimmermann, R
Issue Date: 1-Jan-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: Yin, Y, Zhang, Y, Liu, Z, Wang, S, Shah, RR, Zimmermann, R (2021-01-01). GPS2Vec: Pre-trained Semantic Embeddings for Worldwide GPS Coordinates. IEEE Transactions on Multimedia : 1-1. ScholarBank@NUS Repository. https://doi.org/10.1109/TMM.2021.3060951
Abstract: GPS coordinates are fine-grained location indicators that classifiers in geo-aware applications find difficult to utilize effectively. Previous GPS encoding methods concentrate on generating hand-crafted features for small areas of interest. However, many real-world applications require a machine learning model, analogous to the pre-trained ImageNet model for images, that can efficiently generate semantically enriched features for planet-scale GPS coordinates. To address this issue, we propose a novel two-level grid-based framework, termed GPS2Vec, which extracts geo-aware features in real time for locations worldwide. The Earth's surface is first discretized by the Universal Transverse Mercator (UTM) coordinate system. Each UTM zone is then treated as a local area of interest that is further divided into fine-grained cells to perform the initial GPS encoding. We train a neural network in each UTM zone to learn semantic embeddings from the initial GPS encoding. The training labels are derived automatically from large-scale geotagged documents, such as tweets, check-ins, and images, available from social sharing platforms. We conducted comprehensive experiments on three geo-aware applications: place semantic annotation, geotagged image classification, and next location prediction. Experimental results demonstrate the effectiveness of our approach: prediction accuracy improves significantly with a simple multi-feature early fusion strategy using deep neural networks, including both CNNs and RNNs.
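
The two-level discretization described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering of the first stage only: mapping a coordinate to its UTM zone and then to a fine-grained cell within that zone. The helper names (utm_zone, cell_index) and the cell granularity (cells_per_side) are illustrative assumptions, not the paper's exact encoding, which feeds the cell encoding into a per-zone neural network.

```python
# A minimal sketch of the two-level grid encoding, assuming a simple
# per-zone cell grid; the paper's actual encoding may differ.

def utm_zone(lat: float, lon: float) -> str:
    """Return a UTM zone identifier: 6-degree longitude zone + latitude band."""
    zone_number = int((lon + 180) / 6) % 60 + 1
    bands = "CDEFGHJKLMNPQRSTUVWX"  # 8-degree bands from -80 to 84, I and O skipped
    band_index = min(int((lat + 80) / 8), len(bands) - 1)  # clamp for band X (12 deg)
    return f"{zone_number}{bands[band_index]}"

def cell_index(lat: float, lon: float, cells_per_side: int = 100) -> int:
    """Map a coordinate to a fine-grained cell inside its zone's grid.

    cells_per_side is an assumed granularity parameter; the index could be
    one-hot encoded as the initial GPS encoding for the zone's network.
    """
    fx = ((lon + 180) % 6) / 6   # fractional position across the 6-degree zone
    fy = ((lat + 80) % 8) / 8    # fractional position across the 8-degree band
    col = min(int(fx * cells_per_side), cells_per_side - 1)
    row = min(int(fy * cells_per_side), cells_per_side - 1)
    return row * cells_per_side + col

# Example: a coordinate in Singapore falls in zone 48N
lat, lon = 1.3521, 103.8198
print(utm_zone(lat, lon), cell_index(lat, lon))
```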
Source Title: IEEE Transactions on Multimedia
URI: https://scholarbank.nus.edu.sg/handle/10635/200723
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2021.3060951
Appears in Collections: Staff Publications, Elements

Files in This Item:
File: final version.pdf
Size: 1.81 MB
Format: Adobe PDF
Access Settings: Open
Version: Published
