Enhancing Semantic Segmentation of Cloud Images Captured with Horizon-Oriented Cameras
Research Article
Abstract
The segmentation of sky cloud images is a complex task essential for applications such as weather analysis. Compared to all-sky imagers, horizon-oriented cameras provide a more detailed view of clouds near the horizon. In our study, we evaluated three semantic segmentation models, HRNet48, PPLite, and SegFormerB3, with a variety of loss functions on a novel dataset of horizon cloud images. Throughout our experiments, we consistently observed segmentation leakage. To address this, we introduced machine-learning-based post-processing methods, including random forest and XGBoost, that leverage region-specific features to refine the segmentation. Our results showed notable improvements: applying XGBoost to SegFormerB3's output raised the Dice score of the Cumuliform class from 0.552 to 0.583 and the accuracy of the Stratiform class from 0.49 to 0.511. The study also revealed the relative contributions of the loss functions and the post-processing step.
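To make the post-processing idea concrete, below is a minimal sketch of region-based refinement of a segmentation mask with XGBoost. The feature set (region area, solidity, eccentricity, mean network confidence) and the function names are illustrative assumptions, not the exact pipeline described in the paper.

# Minimal sketch of region-based post-processing of a semantic segmentation
# mask with XGBoost. Feature choices are illustrative assumptions, not the
# paper's exact feature set. Assumes class ids are consecutive integers 0..K-1.
import numpy as np
from skimage.measure import label, regionprops
from xgboost import XGBClassifier

def region_features(pred_mask, confidence_map):
    """Extract per-region features from a predicted class mask."""
    feats, regions = [], []
    for cls in np.unique(pred_mask):
        labeled = label(pred_mask == cls)
        for region in regionprops(labeled, intensity_image=confidence_map):
            feats.append([
                cls,                    # class predicted by the network
                region.area,            # region size in pixels
                region.solidity,        # compactness of the region
                region.eccentricity,    # elongation of the region
                region.mean_intensity,  # mean network confidence inside it
            ])
            regions.append(region)
    return np.asarray(feats, dtype=np.float32), regions

def fit_postprocessor(pred_masks, confidence_maps, gt_masks):
    """Fit the refinement classifier; each region's target is its majority ground-truth class."""
    X, y = [], []
    for pred, conf, gt in zip(pred_masks, confidence_maps, gt_masks):
        feats, regions = region_features(pred, conf)
        X.append(feats)
        y.extend(np.bincount(gt[tuple(r.coords.T)]).argmax() for r in regions)
    clf = XGBClassifier(n_estimators=200, max_depth=4)
    clf.fit(np.vstack(X), np.asarray(y))
    return clf

def refine(pred_mask, confidence_map, clf):
    """Overwrite each connected region with the class the post-processor predicts."""
    feats, regions = region_features(pred_mask, confidence_map)
    refined = pred_mask.copy()
    for new_cls, region in zip(clf.predict(feats), regions):
        refined[tuple(region.coords.T)] = new_cls
    return refined

In this sketch, leaked regions (small or low-confidence fragments of one class inside another) can be re-labeled by the classifier because their region-level statistics differ from correctly segmented areas.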
Keywords: Remote Sensing; Segmentation; Sky Clouds; Deep Learning
License
Copyright (c) 2024 Allan Cerentini, Bruno Juncklaus Martins, Juliana Marian Arrais, Sylvio Luiz Mantelli Neto, Gilberto Perello Ricci Neto, Aldo von Wangenheim
This work is licensed under a Creative Commons Attribution 4.0 International License.