The 2nd International Workshop on

Geospatial Knowledge Graphs and GeoAI: Methods, Models, and Resources

12th September, 2023, Leeds, UK

Workshop Program (Tentative)

This workshop will be held on September 12, 2023; all times below are in British Summer Time (BST).

Session | Time | Speaker | Title
Opening Introduction | 9:00 - 9:10 | |
Paper Session 1 | 9:10 - 9:30 | Guiye Li | Bayesian Super-resolution using Deep Generative Models
Paper Session 1 | 9:30 - 9:50 | Joe Tuccillo | SONET++: A Knowledge Graph of Geographic Categories based on OSM Tag Representation
Coffee break | 9:50 - 10:20 | |
Paper Session 2 | 10:20 - 10:40 | Sergios Kefalidis | The Question Answering System GeoQA2
Paper Session 2 | 10:40 - 11:00 | Stefano De Sabbata | Learning urban form through unsupervised graph-convolutional neural networks
Keynote | 11:00 - 12:00 | Anthony G Cohn | Evaluating the Spatial Reasoning Capabilities of Large Language Models
Closing Remarks | 12:00 - 12:10 | |

Call for Papers

The rapid growth of multimodal data, advances in deep learning algorithms, and the availability of fast hardware have contributed to a renewed interest in Artificial Intelligence (AI). Despite many success stories in computer vision and natural language processing, challenges remain, such as large-scale neural-symbolic reasoning and automatic knowledge graph construction. Unsurprisingly, one of the most prominent topics in AI today is the combination of representation learning (connectionist AI) with symbolic representation and reasoning (symbolic AI). Combining these two paradigms makes it possible to develop machine learning models that are both scalable and explainable. This trend has become even more pressing as foundation models (e.g., GPT-4) have seen widespread adoption within just the past few months. While embracing their impressive performance, society also calls for a more explainable and responsible use of these foundation models.

From a geospatial point of view, GeoAI, an interdisciplinary field at the intersection of GIScience and AI, advocates the development of spatially explicit machine learning approaches and the use of novel AI techniques in geography and earth science. Graphs are at the core of GeoAI, as they have been shown to allow effective representation of semantics as well as spatial and temporal relationships. Geospatial knowledge graphs in particular, as symbolic representations of geospatial knowledge, facilitate many intelligent applications such as geospatial data integration and knowledge discovery. Nevertheless, many deep learning models treat geographic entities as ordinary entities, ignoring spatial characteristics such as spatial footprints or distance decay. This results in suboptimal performance in many geospatial tasks, including geospatial knowledge graph completion, geographic question answering, geographic entity alignment, and geographic knowledge graph summarization.
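To make the distance-decay point concrete, the following Python sketch is illustrative only: the function names, the use of cosine similarity, and the exponential decay rate are our own assumptions, not methods prescribed by the workshop. It contrasts treating entities as ordinary embedding vectors with a spatially explicit variant that damps similarity by geographic distance.

    import numpy as np

    def haversine_km(lon1, lat1, lon2, lat2):
        # Great-circle distance between two (lon, lat) points in kilometres.
        lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    def plain_similarity(e1, e2):
        # "Ordinary entity" view: cosine similarity of the embeddings alone.
        return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

    def spatial_similarity(e1, e2, c1, c2, beta=0.001):
        # Spatially explicit view: damp the embedding similarity with an
        # exponential distance-decay kernel (beta is a hypothetical decay rate).
        return plain_similarity(e1, e2) * np.exp(-beta * haversine_km(*c1, *c2))

    rng = np.random.default_rng(0)
    e1, e2 = rng.normal(size=(2, 16))
    leeds, york, sydney = (-1.55, 53.80), (-1.08, 53.96), (151.21, -33.87)  # (lon, lat)
    print(plain_similarity(e1, e2))                   # ignores geography entirely
    print(spatial_similarity(e1, e2, leeds, york))    # ~34 km apart: mild decay
    print(spatial_similarity(e1, e2, leeds, sydney))  # ~17,000 km apart: strong decay

The exponential kernel is only one common choice for distance decay; power-law kernels are equally standard, and which form is appropriate for a given task is itself the kind of research question this workshop welcomes.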

Following its success at GIScience 2021, this workshop at GIScience 2023 will continue to highlight the importance of geospatial information and principles in designing, developing, and utilizing geospatial knowledge graphs and other GeoAI techniques to discover knowledge in the geosciences. We invite researchers from diverse disciplines (e.g., environmental studies, GIScience, AI, cognition, supply chain, humanities) to submit papers in the following three formats. All submitted papers will be peer-reviewed by our Program Committee.

Paper formats

  • Full research paper: 8 pages
  • Short research paper or industry demo paper: 6 pages
  • Vision or statement paper: 4 pages

Topics include, but are not limited to:

  • Deep Learning and Reinforcement Learning on Geospatial Knowledge Graphs
    • Geographic Knowledge Graph Embeddings
    • Geographic Question Answering and Semantic Parsing based on Knowledge Graphs
    • Geospatial Knowledge Graph Summarization
  • Geo-Ontology Engineering and Geospatial Knowledge Graph Construction
    • Spatio-Temporal Scoping of Knowledge Graphs
    • Gazetteer Data Management
    • Coreference Resolution for Geographic Entities
    • Geographic Ontology Alignment
    • Geospatial Knowledge Graph Construction and Completion
    • Geographic Entity Similarity Measurement
  • Querying and Visualization on Geospatial Knowledge Graphs
    • GeoSPARQL and Spatial Query Evaluation
    • Knowledge Graph Visualization
    • Geo-Ontology Visualization
  • Geographic Information Retrieval and Geo-Text Analysis
    • Text Geoparsing, Toponym Recognition, and Toponym Resolution
    • Information Extraction from Location-Based Social Media
    • Searching and Indexing Texts based on Locations
    • Open Domain Geographic Question Answering
    • Human Experience Extraction from Place Descriptions
  • Spatially Explicit Machine Learning Methods and Models for GeoAI
    • Bridging GIScience Methods with Deep Learning
    • Model Invariance/Equivariance for Geospatial Applications (e.g., equivariance to changes in input scale or rotation)
  • GeoAI for Geospatial Image Analysis
    • Classification, Segmentation, and Object/Instance Recognition
    • Remote Sensing Images
    • Street View Images
    • Scanned Paper Maps and Historical Imagery
  • GeoAI Resources and Infrastructures
    • Data Augmentation Strategies and Dataset Generation
    • Development of Benchmark Datasets, Tools, and Platforms
    • Spatial Data Infrastructures Supporting GeoAI
  • Other GeoAI Topics and Applications
    • AI Applications in Human Geography
    • AI Applications in Physical Geography
    • Transportation Modeling and Trajectory Data Analysis
    • Spatial Optimization
    • Spatio-Temporal Data Fusion and Assimilation
    • Spatial Simulation (e.g., Learning Agents in Agent-Based Simulations)

Important Dates

  • Paper submission deadline: July 7th, 2023 (extended from June 30th, 2023)
  • Notification of paper acceptance: July 28th, 2023
  • Camera ready version: August 18th, 2023
  • Workshop date: September 12th, 2023

Keynote Speaker

Title: Evaluating the Spatial Reasoning Capabilities of Large Language Models

Abstract: In this talk I will present some initial results on evaluating the spatial reasoning capabilities of Large Language Models (LLMs). Whilst LLMs have shown remarkable apparent abilities in many areas of question answering, their ability to perform reasoning is less clear. I will present some results for several LLMs, focussing particularly on qualitative spatial representations and reasoning. One way to probe the limits of LLM capabilities is to conduct an extended conversation (which we call “dialectical evaluation”) rather than simply using a pre-designed benchmark, which usually has a fixed set of answer possibilities; I will show some results based on this evaluation method.
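For readers unfamiliar with the idea, the following Python sketch shows roughly what an extended-conversation probe might look like. It is purely illustrative: the model_reply() stub, the example questions, and the loop structure are our assumptions, not the speaker's actual protocol.

    # Minimal sketch of an extended-conversation ("dialectical") probe.
    def model_reply(history):
        # Hypothetical stand-in: a real evaluation would send the full
        # conversation history to an LLM API and return its answer.
        return "B is north of A."

    def dialectical_probe(questions):
        # In a genuine dialectical evaluation the next question would be
        # composed in light of the model's previous answer; a fixed list
        # of follow-ups stands in for that here.
        history = []
        for question in questions:
            history.append(("user", question))
            answer = model_reply(history)
            history.append(("model", answer))
            print(f"Q: {question}\nA: {answer}\n")
        return history

    dialectical_probe([
        "If region A is south of region B, where is B relative to A?",
        "Suppose B is also disconnected from C. Can A and C overlap?",
        "Does your previous answer change if A contains B?",
    ])

Unlike a fixed benchmark, such a probe can follow up on an answer to test whether the model's stated conclusions remain consistent under elaboration.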

Submission Guidance

Submissions should be uploaded to the workshop EasyChair page. All submissions will be peer-reviewed. Submissions should use the Lecture Notes in Computer Science template. Additionally, authors of accepted papers will be invited to submit an extended version to a special issue of the International Journal of Applied Earth Observation and Geoinformation on the topic of Spatially Explicit Machine Learning and Artificial Intelligence.

Organizers

Program Committee

  • Cogan Shimizu, Wright State University, United States of America
  • Kang Liu, Chinese Academy of Sciences, China
  • Johannes Scholz, Graz University of Technology, Austria
  • Bruno Martins, Instituto Superior Técnico of the University of Lisbon, Portugal
  • Stephan Law, University College London, United Kingdom
  • Grant McKenzie, McGill University, Canada
  • Martin Raubal, ETH Zurich, Switzerland
  • Weiming Huang, Nanyang Technological University, Singapore
  • Di Zhu, University of Minnesota, Twin Cities, United States of America
  • Haosheng Huang, Ghent University, Belgium
  • Filip Biljecki, National University of Singapore, Singapore
  • Wenwen Li, Arizona State University, United States of America
  • Dalia Varanka, U.S. Geological Survey, United States of America
  • Manolis Koubarakis, National and Kapodistrian University of Athens, Greece
  • Song Gao, University of Wisconsin, Madison, United States of America

Contact

For any further information, please contact Rui Zhu or Ling Cai.
