B. Lütjens et al., "Generating Physically-Consistent Satellite Imagery for Climate Visualizations," in IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-11, 2024, Art no. 4213311, doi: 10.1109/TGRS.2024.3493763.
Nov. 25, 2024
Deep generative vision models are now able to synthesize realistic-looking satellite imagery. However, the possibility of hallucinations prevents their adoption in risk-sensitive applications, such as generating materials for communicating climate change. To demonstrate this issue, we train a generative adversarial network (GAN, pix2pixHD) to create synthetic satellite imagery of future flooding and reforestation events. We find that a pure deep learning-based model can generate photorealistic flood visualizations but hallucinates floods at locations that are not susceptible to flooding. To address this issue, we propose to condition and evaluate generative vision models on segmentation maps of physics-based flood models. We show that our physics-conditioned model outperforms the pure deep learning-based model and a handcrafted baseline. We evaluate the generalization capability of our method to different remote sensing data and to a different climate-related event (reforestation). We publish our code and dataset, which includes the data for a third case study of melting Arctic sea ice and more than 30,000 labeled HD image triplets—or the equivalent of 5.5 million images at 128 × 128 pixels—for segmentation-guided image-to-image (im2im) translation in Earth observation. Code and data are available at github.com/blutjens/eie-earth-public.
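To illustrate the core idea of segmentation-guided im2im translation described above, the sketch below shows one common way a generator can be conditioned on a physics-derived flood mask: the pre-event image and the segmentation map are concatenated along the channel dimension, so the network is steered to render flooding only where the physical model indicates it. This is a minimal illustrative sketch in PyTorch, not the authors' pix2pixHD implementation; the class and variable names are hypothetical.

```python
# Minimal sketch of segmentation-conditioned image-to-image translation.
# NOT the paper's pix2pixHD model; a toy encoder-decoder for illustration only.
import torch
import torch.nn as nn

class TinyCondGenerator(nn.Module):
    """Maps (pre-event RGB image, flood segmentation mask) -> post-event RGB image."""
    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 mask channel are concatenated at the input.
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, pre_image, flood_mask):
        # flood_mask would come from a physics-based flood model (values in {0, 1});
        # concatenating it with the image constrains where water may appear.
        x = torch.cat([pre_image, flood_mask], dim=1)
        return self.net(x)

# Usage example with random tensors standing in for real satellite data.
pre_image = torch.rand(1, 3, 128, 128)                        # pre-flood RGB tile
flood_mask = torch.randint(0, 2, (1, 1, 128, 128)).float()    # flood-extent mask
generator = TinyCondGenerator()
post_image = generator(pre_image, flood_mask)                 # synthetic post-flood tile
print(post_image.shape)  # torch.Size([1, 3, 128, 128])
```

In a full pipeline, the generator would be trained adversarially against a discriminator on image triplets (pre-event image, segmentation map, post-event image), with the mask supplied both at training and at inference time so the output stays consistent with the physical model.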