Different Questions, Different Gender Gap: Can the Format of Questions Explain the Gender Gap in Mathematics?
2020 Papers from Job Market Papers
Standardized assessments are widely used to determine educational and economic opportunities. These assessments rely exclusively, or in large part, on multiple-choice questions, yet multiple-choice exams may not be adequate for comparing students' competencies across genders. In this paper, I show that female students receive lower marks when randomly assigned to exams with a larger proportion of multiple-choice questions. Specifically, a 10 percentage point increase in the proportion of multiple-choice questions widens the gender difference in mathematics performance by 0.026 standard deviations in favor of men, an effect that represents about 50% of the overall gender gap. Moreover, a higher proportion of multiple-choice questions has negative spillovers to the open-ended questions on the same exam. Female students exert less effort than males on tests that contain a larger proportion of multiple-choice questions. I provide suggestive evidence that these results are driven by women's lower confidence and by the stereotypes that women face in traditionally male domains.
JEL-codes: I21 I24 J24
New Economics Papers: this item is included in nep-edu, nep-exp and nep-gen
Citations: 3 (in EconPapers)
Persistent link: https://EconPapers.repec.org/RePEc:jmp:jm2020:pgr710
Bibliographic data for this series maintained by Christian Zimmermann.