Dissecting bias of ChatGPT in college major recommendations
Alex Zheng (Carnegie Mellon University)
Information Technology and Management, 2025, vol. 26, issue 4, No 11, 625-636
Abstract:
Large language models (LLMs) such as ChatGPT now play a crucial role in guiding critical decisions, such as choosing a college major. It is therefore essential to assess the limitations of these models' recommendations and to understand any biases that may mislead human decisions. In this study, I investigate bias in GPT-3.5 Turbo's college major recommendations for students with various profiles, examining demographic disparities in race, gender, and socioeconomic status, as well as educational disparities such as score percentiles. To conduct this analysis, I sourced public data for California seniors who took standardized tests such as the California Standard Test (CAST) in 2023. By constructing prompts for the ChatGPT API that ask the model to recommend majors based on high school student profiles, I evaluate bias using several metrics, including the Jaccard coefficient, the Wasserstein metric, and a STEM Disparity Score. The results reveal significant disparities in the sets of recommended college majors, irrespective of the bias metric applied. The most pronounced disparities are observed for students in minority categories, such as LGBTQ+, Hispanic, or socioeconomically disadvantaged students. Within these groups, ChatGPT is less likely to recommend STEM majors than in a baseline scenario where these attributes are unspecified. For example, under the STEM Disparity Score metric, an LGBTQ+ student scoring at the 50th percentile has a 50% lower chance of receiving a STEM major recommendation than a male student, with all other factors held constant. Additionally, an average Asian student is three times more likely to receive a STEM major recommendation than an African-American student.
Meanwhile, students facing socioeconomic disadvantage have a 30% lower chance of being recommended a STEM major than their more privileged counterparts. These findings highlight the pressing need to acknowledge and rectify biases within language models, especially when they shape personalized decisions. Addressing these disparities is essential to foster a more equitable educational and career environment for all students.
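Two of the set-comparison metrics named in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the paper's actual pipeline: the major lists, the STEM category set, and the profile names are hypothetical, and it shows only the Jaccard coefficient between two recommendation sets and a simple STEM-share gap relative to a baseline profile.

```python
def jaccard(a, b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two sets of majors."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical STEM category set for illustration.
STEM = {"Computer Science", "Mathematics", "Engineering", "Physics"}

def stem_share(recommendations):
    """Fraction of the recommended majors that fall in the STEM set."""
    return sum(m in STEM for m in recommendations) / len(recommendations)

# Hypothetical recommendation lists for two otherwise-identical profiles.
baseline  = ["Computer Science", "Mathematics", "Economics", "Biology"]
profile_a = ["Economics", "Sociology", "Biology", "Mathematics"]

overlap   = jaccard(baseline, profile_a)                  # 3 shared of 5 total = 0.6
disparity = stem_share(baseline) - stem_share(profile_a)  # 0.5 - 0.25 = 0.25
```

A lower overlap and a positive disparity for a given profile would indicate that the model's recommendations diverge from the baseline and skew away from STEM, which is the direction of the gaps the abstract reports.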
Keywords: Large language models (LLM); ChatGPT; Bias; Prompt engineering; College major recommendation
Date: 2025
Downloads: http://link.springer.com/10.1007/s10799-024-00430-5 (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:infotm:v:26:y:2025:i:4:d:10.1007_s10799-024-00430-5
Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10799
DOI: 10.1007/s10799-024-00430-5
Information Technology and Management is currently edited by Raymond Patterson and Erik Rolland
More articles in Information Technology and Management from Springer