Water level prediction using soft computing techniques: A case study in the Malwathu Oya, Sri Lanka
Namal Rathnayake,
Upaka Rathnayake,
Tuan Linh Dang and
Yukinobu Hoshino
PLOS ONE, 2023, vol. 18, issue 4, 1-21
Abstract:
Hydrologic models to simulate river flows are computationally costly. In addition to precipitation and other meteorological time series, catchment characteristics, including soil data, land use, land cover, and roughness, are essential in most hydrologic models. The unavailability of these data series challenges the accuracy of simulations. However, recent advances in soft computing techniques offer better approaches and solutions at lower computational complexity. These techniques require a minimal amount of data, while reaching higher accuracies depending on the quality of the data sets. Gradient Boosting Algorithms and the Adaptive Network-based Fuzzy Inference System (ANFIS) are two such systems that can be used to simulate river flows based on catchment rainfall. In this paper, the computational capabilities of these two systems were tested in simulating river flows by developing prediction models for the Malwathu Oya in Sri Lanka. The simulated flows were then compared with ground-measured river flows for accuracy. The correlation coefficient (R), Percent Bias (bias), Nash-Sutcliffe model efficiency (NSE), Mean Absolute Relative Error (MARE), Kling-Gupta Efficiency (KGE), and Root Mean Square Error (RMSE) were used as the comparative indices between the Gradient Boosting Algorithms and ANFIS. Results of the study showed that both systems can simulate river flows as a function of catchment rainfall; however, the Categorical Boosting (CatBoost) algorithm has a computational edge over ANFIS. The CatBoost algorithm outperformed the other algorithms used in this study, with the best correlation score of 0.9934 on the testing dataset. The Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Ensemble models scored 0.9283, 0.9253, and 0.9109, respectively. However, more applications should be investigated before drawing sound conclusions.
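The abstract names six comparative indices for judging simulated against observed flows. As a minimal illustration (not the authors' code; the formulas follow the standard definitions, with KGE in its 2009 formulation), these scores can be computed from paired observed and simulated flow series as follows:

```python
import math

def _mean(xs):
    return sum(xs) / len(xs)

def _stdev(xs):
    # Population standard deviation.
    m = _mean(xs)
    return math.sqrt(_mean([(x - m) ** 2 for x in xs]))

def rmse(obs, sim):
    """Root Mean Square Error: 0 for a perfect fit."""
    return math.sqrt(_mean([(o - s) ** 2 for o, s in zip(obs, sim)]))

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 for a perfect fit."""
    m = _mean(obs)
    return 1 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                / sum((o - m) ** 2 for o in obs))

def pbias(obs, sim):
    """Percent Bias: 0 means no systematic over- or under-prediction."""
    return 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def mare(obs, sim):
    """Mean Absolute Relative Error (observations must be nonzero)."""
    return _mean([abs(o - s) / o for o, s in zip(obs, sim)])

def pearson_r(obs, sim):
    """Correlation coefficient R."""
    mo, ms = _mean(obs), _mean(sim)
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)

def kge(obs, sim):
    """Kling-Gupta Efficiency (2009 formulation): 1 for a perfect fit."""
    r = pearson_r(obs, sim)
    alpha = _stdev(sim) / _stdev(obs)  # variability ratio
    beta = _mean(sim) / _mean(obs)     # bias ratio
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

A correlation score of 0.9934, as reported for CatBoost, would correspond to `pearson_r` evaluated on the held-out testing flows.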
Date: 2023
Citations: View citations in EconPapers (2)
Downloads:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0282847 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 82847&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0282847
DOI: 10.1371/journal.pone.0282847
More articles in PLOS ONE from Public Library of Science
Bibliographic data for series maintained by plosone ().