EconPapers
Explainable Artificial Intelligence for Environmental Decision-Making: A Comparative Study of Machine Learning Approaches Using Tree Bioelectrical Responses to Geomagnetic Variability

Murali Palani, Atif Farid Mohammad and Malathy Muthu
Additional contact information
Murali Palani: Capitol Technology University, Laurel, Maryland, USA
Atif Farid Mohammad: Capitol Technology University, Laurel, Maryland, USA
Malathy Muthu: Capitol Technology University, Laurel, Maryland, USA

RAIS Conference Proceedings 2022-2026 from Research Association for Interdisciplinary Studies

Abstract: This study examines how explainable artificial intelligence can support responsible decision-making in socio-ecological systems by analyzing tree bioelectrical responses to geomagnetic variability as a global environmental case study. Using 309,660 hourly observations collected from 21 international monitoring stations between 2023 and 2024, we compare traditional machine learning and deep learning approaches to model bioelectrical circadian rhythms under varying geomagnetic and environmental conditions. Nine AI architectures were evaluated, including Random Forest, Gradient Boosting, XGBoost, LSTM networks, and Transformer models. Results indicate that traditional machine learning methods outperform deep learning approaches in both predictive accuracy and interpretability, with Random Forest achieving the highest performance (R² = 0.936), exceeding the best deep learning model by 18.7%. Geomagnetic storm conditions were associated with a 143.9% increase in signal amplitude and a three-hour phase delay in tree circadian rhythms, demonstrating measurable environmental sensitivity to electromagnetic variability. SHAP-based explainability analysis identified tree ground voltage as the dominant predictor, followed by key meteorological variables such as humidity, temperature, and wind speed. Beyond predictive performance, the findings highlight critical social and institutional implications of AI model selection. Traditional machine learning approaches offer greater transparency, lower computational barriers, and higher stakeholder interpretability, factors essential for environmental governance, policy compliance, and public trust in AI-driven monitoring systems.
By positioning explainable AI as a socio-technical tool rather than a purely computational solution, this research contributes to interdisciplinary discussions on responsible AI deployment, environmental decision support, and the role of transparent analytics in managing complex human–environment interactions.
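This listing only summarizes the paper's SHAP-based attribution; the underlying data and models are not reproduced here. As a rough, model-agnostic illustration of how feature attribution can rank predictors like those named in the abstract, the sketch below computes permutation importance (a simpler technique than SHAP) on synthetic data, with a least-squares linear model standing in for the paper's Random Forest. All feature names, coefficients, and data are hypothetical.

```python
import numpy as np

# Hypothetical stand-ins for the study's inputs: ground voltage plus
# three meteorological variables. Data and coefficients are synthetic.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
# Column 0 ("ground voltage") is made dominant by construction, loosely
# mirroring the predictor ordering reported in the abstract.
y = (3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + 0.2 * X[:, 3]
     + rng.normal(scale=0.1, size=n))

# Fit a simple linear model via least squares (NOT the paper's Random Forest).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ w

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

baseline = r2(y, predict(X))

def permutation_importance(X, y, predict, col, rng):
    """Drop in R^2 when one feature column is shuffled (model-agnostic)."""
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return baseline - r2(y, predict(Xp))

names = ["ground_voltage", "humidity", "temperature", "wind_speed"]
importances = {name: permutation_importance(X, y, predict, i, rng)
               for i, name in enumerate(names)}
for name, imp in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {imp:.3f}")
```

Because permutation importance only needs a `predict` function, the same loop would work unchanged with a tree ensemble; SHAP additionally decomposes individual predictions, which this global ranking does not attempt.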

Keywords: Machine Learning; Deep Learning; Explainable AI; Environmental Decision-Making; AI Governance; Tree Bioelectrical Activity; Geomagnetic Variations; Environmental Monitoring; Circadian Rhythms (search for similar items in EconPapers)
Pages: 6 pages
Date: 2026-03

Published in Proceedings of the 43rd International RAIS Conference on Social Sciences and Humanities, March 12-13, 2026, pages 93-99

Downloads: (external link)
https://rais.education/wp-content/uploads/0634.pdf Full text (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:smo:raiswp:0634


Bibliographic data for series maintained by Eduard David.

 
Page updated 2026-05-01
Handle: RePEc:smo:raiswp:0634