A systematic study of key elements underlying molecular property prediction
Jianyuan Deng,
Zhibo Yang,
Hehe Wang,
Iwao Ojima,
Dimitris Samaras and
Fusheng Wang
Additional contact information
Jianyuan Deng: Stony Brook University, Department of Biomedical Informatics
Zhibo Yang: Stony Brook University, Department of Computer Science
Hehe Wang: Stony Brook University, Department of Chemistry
Iwao Ojima: Stony Brook University, Department of Chemistry
Dimitris Samaras: Stony Brook University, Department of Computer Science
Fusheng Wang: Stony Brook University, Department of Biomedical Informatics
Nature Communications, 2023, vol. 14, issue 1, 1-20
Abstract:
Artificial intelligence (AI) has been widely applied in drug discovery, with molecular property prediction as a major task. Despite booming techniques in molecular representation learning, key elements underlying molecular property prediction remain largely unexplored, which impedes further advancement of the field. Herein, we conduct an extensive evaluation of representative models using various representations on the MoleculeNet datasets, a suite of opioid-related datasets and two additional activity datasets from the literature. To investigate predictive power in low-data and high-data regimes, a series of descriptor datasets of varying sizes is also assembled to evaluate the models. In total, we have trained 62,820 models, including 50,220 models on fixed representations, 4,200 models on SMILES sequences and 8,400 models on molecular graphs. Based on extensive experimentation and rigorous comparison, we show that representation learning models exhibit limited performance in molecular property prediction on most datasets. In addition, multiple key elements underlying molecular property prediction can affect the evaluation results. Furthermore, we show that activity cliffs can significantly impact model prediction. Finally, we explore potential causes of why representation learning models fail and show that dataset size is essential for representation learning models to excel.
Date: 2023
Citations: 3 (in EconPapers)
Downloads: https://www.nature.com/articles/s41467-023-41948-6 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:14:y:2023:i:1:d:10.1038_s41467-023-41948-6
Ordering information: This journal article can be ordered from https://www.nature.com/ncomms/
DOI: 10.1038/s41467-023-41948-6