Lies, Damned Lies, and the Orthogonality Thesis
Michael Timothy Bennett
No. zcfw6_v2, OSF Preprints from Center for Open Science
Abstract:
In AI safety, the orthogonality thesis holds that intelligence and goals are independent. Here I refute it with a rudimentary proof and an argument based on computational dualism. First, I show that intelligence is fundamentally tied to embodiment, illustrating with the universal artificial intelligence AIXI. AIXI's performance hinges upon a choice of Universal Turing Machine (UTM). This UTM is a form of embodiment: it interprets, and thus determines, everything AIXI does, meaning AIXI can be made to behave arbitrarily well or poorly by changing the UTM. This holds for all agents, not just AIXI. Next, I show that embodiment is not neutral but inherently goal-directed: a body is biased toward some goals over others. Just as every policy can be optimal if we choose the right body, every body can be optimal if we choose the right goal. This connects intelligence to embodiment, and embodiment to goals. They are not independent. The orthogonality thesis is a case of computational dualism.
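The abstract's central move — that the same policy can be made to perform arbitrarily well or poorly by changing the interpreter — can be sketched with a toy example. This is a hypothetical illustration, not the paper's formal construction: `interpreter_a` and `interpreter_b` stand in for two choices of UTM ("bodies"), and `score` for performance against a fixed goal.

```python
# Toy sketch (not AIXI itself): one fixed policy, two interpreters.
# The interpreter ("body") determines what the policy's symbols mean,
# so the same policy is optimal under one body and pessimal under another.

def interpreter_a(policy, observation):
    # Body A: reads each policy bit directly as the action.
    return policy[observation % len(policy)]

def interpreter_b(policy, observation):
    # Body B: same policy, but the body inverts every bit.
    return 1 - policy[observation % len(policy)]

def score(interpret, policy, goal, horizon=8):
    # Reward 1 at each step where the interpreted action matches the goal.
    return sum(interpret(policy, t) == goal(t) for t in range(horizon))

policy = [1, 0, 1, 1, 0, 1, 0, 0]
goal = lambda t: policy[t % len(policy)]  # a goal tailored to this policy under body A

print(score(interpreter_a, policy, goal))  # 8: optimal under body A
print(score(interpreter_b, policy, goal))  # 0: pessimal under body B
```

Nothing about the policy changed between the two runs; only the embodiment did. Dually, holding the body fixed and varying `goal` shows the abstract's converse point: any body can be made optimal by choosing the goal it is biased toward.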
Date: 2025-03-20
Download: https://osf.io/download/67db67c1e94974cf82c700b6/
Persistent link: https://EconPapers.repec.org/RePEc:osf:osfxxx:zcfw6_v2
DOI: 10.31219/osf.io/zcfw6_v2