EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering
AAAI 2024

Abstract

(Figure: EarthVQA overview)

This is a follow-up to our LoveDA work (NeurIPS 2021).

Earth vision research typically focuses on extracting geospatial object locations and categories but neglects the exploration of relations between objects and comprehensive reasoning. Based on city planning needs, we develop a multi-modal multi-task VQA dataset (EarthVQA) to advance relational reasoning-based judging, counting, and comprehensive analysis. The EarthVQA dataset contains 6000 images, corresponding semantic masks, and 208,593 QA pairs with urban and rural governance requirements embedded. As objects are the basis for complex relational reasoning, we propose a Semantic OBject Awareness framework (SOBA) to advance VQA in an object-centric way. To preserve refined spatial locations and semantics, SOBA leverages a segmentation network for object semantics generation. The object-guided attention aggregates object interior features via pseudo masks, and bidirectional cross attention further models object external relations hierarchically. To optimize object counting, we propose a numerical difference loss that dynamically adds difference penalties, unifying the classification and regression tasks. Experimental results show that SOBA outperforms both advanced general and remote sensing methods. We believe this dataset and framework provide a strong benchmark for complex analysis in Earth vision.
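To make the object-centric pipeline above concrete, here is a minimal PyTorch-style sketch (not the official SOBA implementation; the function names, tensor shapes, and the lam weight are illustrative assumptions). It shows the two core ideas from the abstract: pooling visual features inside segmentation-derived pseudo masks to obtain per-object descriptors, and a counting loss that augments classification with a penalty that grows with the numeric gap between the predicted and true counts.

    import torch
    import torch.nn.functional as F

    def object_guided_pooling(features, pseudo_masks):
        # features:     (B, D, H, W) visual feature map
        # pseudo_masks: (B, K, H, W) soft masks for K object categories
        # returns:      (B, K, D) one descriptor per object category
        f = features.flatten(2)                              # (B, D, HW)
        m = pseudo_masks.flatten(2)                          # (B, K, HW)
        m = m / m.sum(dim=-1, keepdim=True).clamp_min(1e-6)  # normalize each mask
        # Weighted average of features inside each pseudo mask.
        return torch.einsum('bkn,bdn->bkd', m, f)

    def numerical_difference_loss(logits, target, lam=1.0):
        # logits: (B, C) scores over C count classes (class i <-> count i)
        # target: (B,) ground-truth count class indices
        ce = F.cross_entropy(logits, target)
        probs = logits.softmax(dim=-1)
        counts = torch.arange(logits.size(-1), device=logits.device,
                              dtype=probs.dtype)
        expected = (probs * counts).sum(dim=-1)              # soft predicted count
        # Regression-style penalty proportional to the numeric difference.
        diff = (expected - target.to(probs.dtype)).abs().mean()
        return ce + lam * diff

In this sketch, the per-object descriptors would feed the subsequent cross-attention stages, and lam trades off the classification term against the difference penalty; the paper's exact formulation may differ.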

Experiments on EarthVQA

(Figure: overview of experimental results on EarthVQA)

BibTeX

@article{wang2024earthvqa,
    title={EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering},
    author={Wang, Junjue and Zheng, Zhuo and Chen, Zihang and Ma, Ailong and Zhong, Yanfei},
    journal={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2024},
    month={Mar.},
    volume={38},
    number={6},
    pages={5481--5489},
    doi={10.1609/aaai.v38i6.28357},
    url={https://ojs.aaai.org/index.php/AAAI/article/view/28357}
}
@inproceedings{wang2021loveda,
    title={Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
    author={Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
    booktitle={Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
    editor={J. Vanschoren and S. Yeung},
    year={2021},
    volume={1},
    url={https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/4e732ced3463d06de0ca9a15b6153677-Paper-round2.pdf}
}
                

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant Nos. 42325105, 42071350, and 42171336.

The website template was borrowed from Bowen Cheng and Michaël Gharbi.