Digital divide and artificial intelligence for health
Clara Jean,
Jean-Flavien Bussotti,
Grazia Cecere,
Nessrine Omrani and
Paolo Papotti
Additional contact information
Clara Jean: EESC-GEM Grenoble Ecole de Management
Jean-Flavien Bussotti: Eurecom [Sophia Antipolis]
Grazia Cecere: IMT-BS - DEFI - Département Data analytics, Économie et Finances, Institut Mines-Télécom Business School, Institut Mines-Télécom [Paris]; LITEM - Laboratoire en Innovation, Technologies, Économie et Management (EA 7363), Université d'Évry-Val-d'Essonne, Université Paris-Saclay
Nessrine Omrani: PSB - Paris School of Business, HESAM Université
Paolo Papotti: Eurecom [Sophia Antipolis]
Post-Print from HAL
Abstract:
Social media platforms have become key intermediaries for ad campaigns, but concerns persist regarding the veracity of information presented in ads. In the health sector, false or unsupported claims in ad content can have real-world public health consequences. On these platforms, the display of ads is managed by recommendation systems that match the content of the ad to the interests of the user. This paper investigates whether the use of AI algorithms to recommend ads on social media platforms may help progress toward the Sustainable Development Goals (SDGs). We collected ads across all US states on Meta and Instagram during a period marked by increased public health concerns. Using a fine-tuned deep learning model, we fact-checked the content of these ads. The results of the fact-check show that only 0.2% of the ads were classified as misinformation, and 15.41% were classified as ambiguous. Both types of ads are less likely to be recommended to users located in wealthier states, especially when health-related. Moreover, health-related ads classified as misinformation are more likely to be recommended to users in states with a high percentage of people without health insurance. We argue that the use of recommendation systems contributes to widening the digital divide, which can hinder the achievement of the SDGs.
Keywords: SDGs; Fact-checking; Inequality; Health; Digital divide
Date: 2026-03
Published in Technovation, 2026, 151, pp.103392. ⟨10.1016/j.technovation.2025.103392⟩
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-05357828
DOI: 10.1016/j.technovation.2025.103392
Bibliographic data for this series is maintained by CCSD.