Spatial Finance 2.0: Combining Large Language Models (LLMs) with remote sensing to discover the most polluting economic assets

Recent advances in computer vision have enabled detection of polluting assets in satellite imagery at scale. Now Large Language Models step in to enrich those annotations, empowering the finance industry with more trustworthy insights to enable the green transition
Published in Sustainability

How did computer vision in the remote sensing domain enable the emergence of spatial finance and asset-level datasets?

Spatial finance, the intersection of geospatial data with financial decision-making, has experienced rapid growth in recent years. Ever since the term was coined in one of Oxford's most prominent labs, the Sustainable Finance Group, back in 2016, the major catalyst for this emergence has been the application of computer vision to satellite imagery. Satellites have long provided us with a bird’s-eye view of our planet, and their vast troves of imagery carry detailed information about land use, urban development, environmental changes, and much more. However, manually sifting through these massive datasets to derive actionable insights was historically a labor-intensive, slow, and often imprecise process. Computer vision (CV), a subfield of artificial intelligence, revolutionized the paradigm by transferring algorithms originally developed for natural photography (such as CNNs, U-Nets, LSTMs, or Mask R-CNNs) onto geolocated rasters of varying spectral and temporal resolution.

Computer vision algorithms are designed to interpret and act on visual data, in both supervised and unsupervised settings. When applied to satellite imagery, they can automatically detect changes in landscapes, pinpoint areas of deforestation, identify oil spills in oceans, or monitor urban development - as well as detect 'missing' industrial production facilities, which often remain unacknowledged in financial records or on major webmapping platforms. These algorithms enabled analyses of vast areas in a fraction of the time it would take human analysts, and often with much higher accuracy. As a result, real-time or near-real-time monitoring of assets and environmental changes on a global scale became feasible, which inevitably attracted interest from the proliferating field of transition finance, eager to fill the 'missing signals' gaps in its environmental data inventories. Asset managers, investors, and financial institutions could now get a clearer picture of the environmental risks or potential compliance issues associated with their investments. For instance, an investor looking to finance a sustainable forestry initiative could use computer vision-analyzed satellite imagery to verify whether a particular forest area is indeed being preserved or restored as claimed. Furthermore, with rising concerns over climate change, ESG (Environmental, Social, and Governance) factors have become paramount in investment decisions. Computer vision allowed for objective evaluation of the 'E' in ESG, making spatial finance a crucial tool for green and sustainable finance initiatives.
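The segmentation models named above learn change patterns from labeled data; as a deliberately simplified, hypothetical stand-in, the core idea of flagging changed pixels between two acquisitions can be sketched as a band difference with a threshold:

```python
import numpy as np

def detect_change(before: np.ndarray, after: np.ndarray,
                  threshold: float = 0.2) -> np.ndarray:
    """Flag pixels whose normalised reflectance changed by more than `threshold`.

    A naive stand-in for the learned CNN/U-Net models mentioned above,
    which infer the decision boundary from labels instead of a fixed threshold.
    """
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold

# Two toy single-band 3x3 "scenes": one pixel brightens sharply
# (e.g. a forest patch cleared between acquisitions).
before = np.full((3, 3), 0.1)
after = before.copy()
after[1, 1] = 0.9
mask = detect_change(before, after)
print(int(mask.sum()))  # → 1 changed pixel
```

Real pipelines operate on multispectral rasters and learned features, but the input/output shape is the same: two co-registered scenes in, a per-pixel change mask out.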

Nevertheless, while the integration of computer vision and satellite imagery has unquestionably augmented the capabilities of spatial finance, it has also become imperative to acknowledge that visual information alone may not suffice to discern financial asset-level data comprehensively. The intricate nature of financial markets necessitates the incorporation of exact, reliable data, especially pertaining to financial identifiers and attributes. Even though satellite imagery could provide compelling visual information about the geographical and physical characteristics of assets, it still fell short of affording precise and accurate financial details, like valuations, ownership, and transaction histories. Intrigued by this research gap, we decided to use the opportunity to verify how visual information about physical production facilities could be 'financially grounded' using an equally fast-proliferating deep learning technology in the natural language domain: Bidirectional Encoder Representations from Transformers (BERT).

Importance of accurate financial annotations and associations, and the growing role of LLMs

The accuracy of financial annotations is paramount for the industry, as they provide critical context, categorization, and clarification for complex financial data, ensuring that investors, analysts, and other stakeholders can make informed decisions. A minor oversight or inaccuracy can lead to misguided strategies, financial losses, or even regulatory penalties. As part of this collaborative research project we dedicated a substantial amount of initial time and effort to collecting high-quality training data across multiple GHG-emitting sectors, specifically cement, iron & steel, petrochemicals, pulp & paper, waste management, and meat packing.

For the pilot study covering a global inventory of cement facilities, we agreed on a number of hypotheses regarding the origination of the asset-level data attributes:

  • Geolocation: Satellite imagery excels at providing accurate geographical coordinates and locations of assets, hence it has been agreed to use centerpoints of the production sites as XY attributes (in WGS-84 projection, to ensure the global consistency of mapped assets).
  • Production Capacity: In some instances, production capacity can be inferred from satellite imagery, for example by observing the physical size of agricultural fields, the number of shipping containers in a port, or the extent of solar panels in a solar farm. However, such an approach usually provides only a rough estimate and may not account for internal efficiencies or technologies that impact actual capacity. Specifically for the cement industry, where sites are structurally intricate and output is highly dependent on production methods, driven by local supplies and/or imports, it was decided to deploy text transformers on companies' websites and disclosures - as well as on numerous secondary data sources, such as development institution reports, local news, litigation proceedings, and NGO databases.
  • Ownership: Detailed ownership information is typically outlined in company reports or on official websites since satellite imagery does not provide insights into the legal or financial status of assets.
  • Production Type: While satellite imagery can suggest the type of production (e.g., crop type, type of mining), detailed and accurate data about production types (e.g., technologies used, production processes) are more reliably sourced from company reports or websites.
  • Years Operations Started or Upgraded: This information would be archival and is not visually present in satellite images. It can usually be found in historical reports, official records, or company websites.
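The hypotheses above translate into a record that mixes imagery-derived and text-derived fields. A minimal sketch, with field names and values that are purely illustrative and not the published dataset schema, might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CementAsset:
    # Imagery-derived attributes: site centerpoint in WGS-84
    lat: float
    lon: float
    # Text-derived attributes (company disclosures and secondary sources)
    owner: Optional[str] = None
    capacity_mt_per_year: Optional[float] = None  # often a rough estimate
    production_type: Optional[str] = None         # e.g. "integrated", "grinding"
    year_started: Optional[int] = None

# Hypothetical example record combining both data origins
asset = CementAsset(lat=51.5, lon=-0.1, owner="Example Cement Ltd",
                    capacity_mt_per_year=1.2, production_type="integrated",
                    year_started=1998)
print(asset.owner, asset.year_started)  # → Example Cement Ltd 1998
```

The split mirrors the hypotheses: only the coordinates come from the satellite pipeline, while the remaining fields default to `None` until the text pipeline fills them in.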

At the time of the method development, BERT represented the pinnacle of the encoder architecture among Large Language Models (LLMs), which enabled comprehensive experiments with Named Entity Recognition (NER) tasks. NER in modern LLMs has been extended to more than 1,300 categories; at the time of work on the cement database we used five main ones, notably: (1) YEAR, (2) QUANTITY, (3) COMPANY_NAME, (4) ACTIVITY_TYPE, and (5) LOCATION. Utilizing satellite imagery and official reports or websites in tandem thus allowed for a more comprehensive understanding of asset-level data, with each source compensating for the limitations of the other - despite being processed independently.
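A BERT-style NER head emits token-level BIO tags, which are then grouped into entity spans. The post-processing step for the five categories above can be sketched as follows; the tagging scheme is standard BIO, but the example sentence and grouping code are illustrative, not taken from the published pipeline:

```python
# Group token-level BIO tags into (label, text) entity spans,
# keeping only the five categories used in the cement study.
LABELS = {"YEAR", "QUANTITY", "COMPANY_NAME", "ACTIVITY_TYPE", "LOCATION"}

def group_entities(tokens, tags):
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):                      # a new entity starts
            if current:
                entities.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(token)                  # entity continues
        else:                                         # "O" tag or broken span
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(label, " ".join(words)) for label, words in entities
            if label in LABELS]

tokens = ["Acme", "Cement", "opened", "its", "Lagos", "plant", "in", "1998"]
tags = ["B-COMPANY_NAME", "I-COMPANY_NAME", "O", "O",
        "B-LOCATION", "O", "O", "B-YEAR"]
print(group_entities(tokens, tags))
# → [('COMPANY_NAME', 'Acme Cement'), ('LOCATION', 'Lagos'), ('YEAR', '1998')]
```

Extracted spans like these are what get matched against the imagery-derived site records to 'financially ground' each facility.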

Multimodal insights: The future of spatial finance in the epoch of generative AI

In an era marked by the burgeoning capacities of generative AI, the future of spatial finance unfurls new horizons, promising an intricate interplay between deep learning's semantic capabilities and financial acumen. Spatial finance, which inherently intertwines financial data with geospatial information, has so far made significant strides in aligning investment strategies with sustainable and environmentally responsible goals. With the advent of generative AI, this nascent field is poised to enter a transformative phase, where the synthetic and predictive capacities of algorithms enable an even more nuanced understanding and utilization of geospatial data.

Generative AI, with its ability to create data and scenarios that do not merely extrapolate from existing data but generate novel datasets and patterns, introduces a radical shift in spatial finance’s predictive and analytic capabilities. It is predicted to facilitate the creation of synthetic environments and scenarios wherein financial models can be tested and refined, not just on scarce historical or existing data, but also by exploring a myriad of plausible future scenarios generated by the AI. This will open up opportunities for financial strategies to be assessed and stress-tested across a multitude of potential futures, both probable and improbable, equipping financial entities with foresight that is multi-dimensional and comprehensively explored.

Moreover, as spatial finance often grapples with varied and vast data sets - from satellite imagery to intricate financial details - generative AI also emerges as a tool that can synthetically augment these datasets, filling in gaps and offering predictive insights where actual data may be scarce or unavailable. This aids in forming more holistic and robust datasets for analysis, enhancing the precision and reliability of spatial finance strategies.

Importantly, the convergence of spatial finance and generative AI extends the ability to align financial endeavors with sustainability and climate goals, permitting financial institutions and investors to navigate through complex investment landscapes with enhanced clarity and foresight. It enables not only tracking, assessing, and mitigating environmental and sustainability risks across an investment portfolio but also generatively predicting and preparing for multi-faceted future scenarios, establishing a financial landscape that is intricately and inherently tied with sustainable futures.

In essence, the epoch of generative AI illuminates untapped potentials in spatial finance, unlocking a future where financial decision-making is not merely reactive and regulated by existing circumstances, but is proactively sculpted, taking into account a rich tapestry of generatively envisioned futures, and thereby steering investments towards a path that is sustainably intelligent and robustly prepared for myriad eventualities.

Concluding remarks

In this paper, the innovative amalgamation of semantic understanding from the BERT LLM with the geospatial precision of satellite imagery not only signified a pioneering stride in asset-level data analytics but also offered numerous opportunities for future exploration and innovation in the domain. This research has been published in Scientific Data and is available online (https://www.nature.com/articles/s41597-023-02599-w). The peer-reviewed dataset associated with the research is available at https://datadryad.org/stash/dataset/doi:10.5061/dryad.6t1g1jx4f.
