Deep Learning in Science: Is there a Reason for (Philosophical) Pessimism?
Martin Justin (University of Ljubljana, Faculty of Arts, Department of Philosophy, Ljubljana, Slovenia)
Interdisciplinary Description of Complex Systems - scientific journal, 2024, vol. 22, issue 1, 59-70
Abstract:
Despite the remarkable results achieved by deep learning models in various scientific fields, some philosophers worry that, because of their opacity, using these systems cannot improve our understanding of the phenomena studied. In this article, I review existing arguments for and against this philosophical pessimism about using deep learning models in science. First, some terminological and conceptual clarification is provided. Then, I present a case for optimism, arguing that using opaque models does not hinder the possibility of gaining new understanding. After that, I present a critique of this argument. Finally, I present a case for pessimism, concluding that there are reasons to be pessimistic about the ability of deep learning models to provide us with new understanding of the phenomena studied by scientists.
Keywords: deep learning; scientific understanding; explanation; black box problem; artificial neural networks
JEL-codes: O33
Date: 2024
Downloads: https://www.indecs.eu/2024/indecs2024-pp59-70.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:zna:indecs:v:22:y:2024:i:1:p:59-70
More articles in Interdisciplinary Description of Complex Systems - scientific journal from Croatian Interdisciplinary Society (http://indecs.eu).