Improving Teacher Effectiveness: Designing Better Assessment Tools in Learning Management Systems

Dov Kruger, Sarah Inman, Zhiyu Ding, Yijin Kang, Poornima Kuna, Yujie Liu, Xiakun Lu, Stephen Oro and Yingzhu Wang
Additional contact information
Dov Kruger, Zhiyu Ding, Yijin Kang, Poornima Kuna, Yujie Liu, Xiakun Lu, Stephen Oro and Yingzhu Wang: Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
Sarah Inman: Instructional Technology, Stevens Institute of Technology, Hoboken, NJ 07030, USA

Future Internet, 2015, vol. 7, issue 4, 484-499

Abstract: Current-generation assessment tools used in K-12 and post-secondary education are limited in the types of questions they support, and their assessment engines are difficult for instructors to navigate. Furthermore, the available question types tend to score low on Bloom’s taxonomy. Dedicated learning management systems (LMSs) such as Blackboard, Moodle and Canvas are somewhat better than informal tools in that they offer more question types and some randomization, but the question types in all the major LMS assessment engines remain limited. LMSs also place a heavy burden on teachers generating online assessments. In this study, we analyzed the top three LMS providers to identify inefficiencies; these inefficiencies in LMS design point to ways of asking better questions. Our findings show that teachers have not adopted current tools because the tools do not offer definitive improvements in productivity. We therefore developed LiquiZ, a design for a next-generation assessment engine that reduces user effort and provides more advanced question types, allowing teachers to ask questions that can currently be asked only in one-on-one demonstrations. The initial LiquiZ project targets STEM subjects, so its question types are particularly advantageous in math and science.
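To make the randomization the abstract mentions concrete, here is a minimal sketch of a parameterized question with automatic numeric grading, in the general spirit of what LMS assessment engines offer. The function names, the question template, the parameter ranges and the tolerance are illustrative assumptions, not details taken from the paper or from any particular LMS.

```python
import random

def make_question(seed=None):
    """Generate one randomized word problem and its numeric answer.

    Each student can be given a different (a, b) pair, so a single
    template yields many distinct but equivalent questions.
    """
    rng = random.Random(seed)
    a = rng.randint(2, 12)   # speed in km/h (illustrative range)
    b = rng.randint(2, 12)   # travel time in hours
    prompt = f"A car travels at {a} km/h for {b} hours. How far does it go, in km?"
    return prompt, a * b

def grade(response, answer, tolerance=0.0):
    """Mark a free-text numeric response, allowing an optional tolerance."""
    try:
        return abs(float(response) - answer) <= tolerance
    except ValueError:
        return False  # non-numeric input is simply wrong

if __name__ == "__main__":
    prompt, answer = make_question(seed=42)  # fixed seed for a reproducible variant
    print(prompt)
    print(grade(str(answer), answer))  # the exact answer is always accepted
```

Seeding per student reproduces each variant for regrading, and the tolerance parameter hints at why numeric question types suit STEM subjects: answers can be checked by value rather than by exact string match.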

Keywords: Learning Management Systems; web-based assessment; LMS; CMS; HOTS; Bloom’s taxonomy; STEM
JEL-codes: O3
Date: 2015

Downloads: (external link)
https://www.mdpi.com/1999-5903/7/4/484/pdf (application/pdf)
https://www.mdpi.com/1999-5903/7/4/484/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:7:y:2015:i:4:p:484-499:d:60884

Future Internet is currently edited by Ms. Grace You

More articles in Future Internet from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jftint:v:7:y:2015:i:4:p:484-499:d:60884