What do telecommunications policy academics have to fear from GPT-3?
Bronwyn E. Howell and
Petrus H. Potgieter
32nd European Regional ITS Conference, Madrid 2023: Realising the digital decade in the European Union – Easier said than done? from International Telecommunications Society (ITS)
Abstract:
Artificial intelligence (AI) tools such as ChatGPT and GPT-3 have shot to prominence recently (Lin 2023), as dramatic advances have shown them to be capable of writing plausible output that is difficult to distinguish from human-authored content. Unsurprisingly, this has led to concerns about their use by students in tertiary education contexts (Swiecki et al. 2022), and to the tools being banned in some school districts in the United States (e.g. Rosenblatt 2023; Clarridge 2023) and by at least one top-ranking international university (e.g. Reuters 2023). There are legitimate grounds for such fears, as it is difficult to distinguish students' own written work presented for assessment from that produced by the AI tools. Successfully embedding these tools in educational contexts requires an understanding of what they are and what they can and cannot do. Despite their powerful modelling and descriptive capabilities, they have (at least currently) significant issues and limitations (Zhang & Li 2021). As telecommunications policy academics charged with research-led teaching and the supervision of both undergraduate and research students, we need to be confident that our graduates can understand the complexities of current issues in this incredibly dynamic field and apply what they have learned appropriately in industry and policy environments. We must also be reasonably certain that the grades we assign are based on the students' own work and understanding. To this end, we conducted an experiment with the current (Q1 of 2023) version of the AI tool to assess how well it coped with questions on a core and current topic in telecommunications policy education: the effects of access regulation (local loop unbundling) on broadband investment and uptake.
We found that while the outputs were well-written and appeared plausible, there were significant systematic errors which, once academics are aware of them, can be exploited to avoid the risk of AI use severely undermining the credibility of the assessments we make of students' written work, at least for the time being and in respect of the version of chatbot software we used.
Keywords: Artificial Intelligence (AI); ChatGPT; GPT-3; Academia
Date: 2023
New Economics Papers: this item is included in nep-ain, nep-big, nep-cmp and nep-ict
Citations: 2 (in EconPapers)
Download: https://www.econstor.eu/bitstream/10419/277972/1/Howell-Potgieter_GPT.pdf (PDF)
Persistent link: https://EconPapers.repec.org/RePEc:zbw:itse23:277972
Bibliographic data for this series is maintained by ZBW - Leibniz Information Centre for Economics.