EconPapers
Learning representations for image-based profiling of perturbations

Nikita Moshkov, Michael Bornholdt, Santiago Benoit, Matthew Smith, Claire McQuin, Allen Goodman, Rebecca A. Senft, Yu Han, Mehrtash Babadi, Peter Horvath, Beth A. Cimini, Anne E. Carpenter, Shantanu Singh and Juan C. Caicedo
Additional contact information
Nikita Moshkov: HUN-REN Biological Research Centre
Michael Bornholdt: Broad Institute of MIT and Harvard
Santiago Benoit: Broad Institute of MIT and Harvard
Matthew Smith: Broad Institute of MIT and Harvard
Claire McQuin: Broad Institute of MIT and Harvard
Allen Goodman: Broad Institute of MIT and Harvard
Rebecca A. Senft: Broad Institute of MIT and Harvard
Yu Han: Broad Institute of MIT and Harvard
Mehrtash Babadi: Broad Institute of MIT and Harvard
Peter Horvath: HUN-REN Biological Research Centre
Beth A. Cimini: Broad Institute of MIT and Harvard
Anne E. Carpenter: Broad Institute of MIT and Harvard
Shantanu Singh: Broad Institute of MIT and Harvard
Juan C. Caicedo: Broad Institute of MIT and Harvard

Nature Communications, 2024, vol. 15, issue 1, 1-17

Abstract: Measuring the phenotypic effect of treatments on cells through imaging assays is an efficient and powerful way of studying cell biology, and requires computational methods for transforming images into quantitative data. Here, we present an improved strategy for learning representations of treatment effects from high-throughput imaging, following a causal interpretation. We use weakly supervised learning for modeling associations between images and treatments, and show that it encodes both confounding factors and phenotypic features in the learned representation. To facilitate their separation, we constructed a large training dataset with images from five different studies to maximize experimental diversity, following insights from our causal analysis. Training a model with this dataset successfully improves downstream performance, and produces a reusable convolutional network for image-based profiling, which we call Cell Painting CNN. We evaluated our strategy on three publicly available Cell Painting datasets, and observed that the Cell Painting CNN improves performance in downstream analysis up to 30% with respect to classical features, while also being more computationally efficient.
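The profiling workflow the abstract describes (embed single-cell images with a trained network, aggregate embeddings into per-treatment profiles, then compare profiles downstream) can be sketched as follows. This is an illustrative sketch, not the authors' code: the trained Cell Painting CNN is replaced by a stand-in random projection, and the function names (`embed_cells`, `aggregate_profiles`, `profile_similarity`) are hypothetical.

```python
import numpy as np

def embed_cells(images: np.ndarray) -> np.ndarray:
    """Stand-in for a trained CNN feature extractor (e.g. the Cell
    Painting CNN): maps each image to a fixed-length embedding.
    Here we simply flatten and apply a fixed random projection."""
    rng = np.random.default_rng(0)  # fixed seed -> deterministic projection
    flat = images.reshape(len(images), -1)
    proj = rng.standard_normal((flat.shape[1], 64))
    return flat @ proj

def aggregate_profiles(embeddings: np.ndarray, treatments: list):
    """Average single-cell embeddings per treatment -- the common
    'mean profile' aggregation step in image-based profiling."""
    labels = sorted(set(treatments))
    mask = np.array(treatments)
    profiles = np.stack([embeddings[mask == t].mean(axis=0) for t in labels])
    return labels, profiles

def profile_similarity(profiles: np.ndarray) -> np.ndarray:
    """Cosine similarity between treatment profiles, used downstream
    to find treatments with matching phenotypic effects."""
    normed = profiles / np.linalg.norm(profiles, axis=1, keepdims=True)
    return normed @ normed.T

# Toy data: 40 synthetic 5-channel 16x16 crops (Cell Painting uses
# 5 fluorescence channels), split across two treatments.
rng = np.random.default_rng(1)
images = rng.standard_normal((40, 5, 16, 16))
treatments = ["DMSO"] * 20 + ["compound_A"] * 20

emb = embed_cells(images)
labels, profiles = aggregate_profiles(emb, treatments)
sim = profile_similarity(profiles)
print(labels, profiles.shape, sim.shape)
```

In the paper's setting the embedding comes from a network trained with weak supervision (treatment labels) rather than a random projection; the aggregation and similarity steps are the generic downstream-analysis pattern.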

Date: 2024

Downloads: (external link)
https://www.nature.com/articles/s41467-024-45999-1 Abstract (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-45999-1

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-024-45999-1


Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-03-19
Handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-45999-1