
File(s) under permanent embargo

Machine learning for the peer assessment credibility

conference contribution
posted on 2023-05-24, 22:33 authored by Lin, Y, Han, SC, Byeong Kang
The peer assessment approach is considered one of the best solutions for scaling both assessment and peer learning to global classrooms such as MOOCs. However, some academic staff hesitate to use peer assessment in their classes because of concerns about its credibility and reliability. The focus of our research is to detect the credibility level of each assessment performed by students during peer assessment. We identified three major dimensions for assessing the credibility level of evaluations: 1) Informativity, 2) Accuracy, and 3) Consistency. We collect the assessments, including comments and grades, provided by students during the peer assessment process, and each feedback-and-grade pair is then labeled with its credibility level by Mechanical Turk evaluators. We extract relevant features from each labeled assessment and use them to train a C5.0 decision tree classifier that automatically assesses the credibility level of an assessment. The evaluation results show that the model can be used to automatically classify peer assessments as credible or non-credible, with an accuracy of approximately 88%.
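
The abstract describes a pipeline of feature extraction followed by a C5.0 decision tree classifier, but the record itself contains no code. The sketch below is only an illustration of that pipeline under stated assumptions: the feature names (comment_length, grade_deviation, comment_grade_agreement), the synthetic data, and the labelling rule are hypothetical, and scikit-learn's DecisionTreeClassifier (CART) stands in for C5.0, which scikit-learn does not provide. In the paper, features come from real student feedback-and-grade pairs and labels from Mechanical Turk evaluators; the random data here only keeps the example self-contained and runnable.

    # Minimal sketch of the credibility-classification pipeline (assumptions noted above).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical per-assessment features, loosely mirroring the three
    # dimensions named in the abstract:
    #   Informativity -> comment length in words
    #   Accuracy      -> absolute deviation of the peer grade from a reference grade
    #   Consistency   -> agreement between the grade and the comment's tone
    comment_length = rng.integers(0, 120, size=n)
    grade_deviation = rng.uniform(0, 30, size=n)
    comment_grade_agreement = rng.uniform(0, 1, size=n)
    X = np.column_stack([comment_length, grade_deviation, comment_grade_agreement])

    # Toy credibility labels (1 = credible, 0 = non-credible); in the paper these
    # come from Mechanical Turk evaluators, not from a rule like this.
    y = ((comment_length > 20) & (grade_deviation < 15)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y
    )

    # Decision tree classifier standing in for the C5.0 model used in the paper.
    clf = DecisionTreeClassifier(max_depth=4, random_state=0)
    clf.fit(X_train, y_train)

    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))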

Funding

Asian Office of Aerospace Research & Development

History

Publication title

Proceedings from the International World Wide Web Conference

Pagination

117-118

ISBN

9781450356404

Department/School

School of Information and Communication Technology

Event title

International World Wide Web Conference

Event Venue

Lyon, France

Date of Event (Start Date)

2018-04-23

Date of Event (End Date)

2018-04-27

Repository Status

Restricted

Socio-economic Objectives

Information systems, technologies and services not elsewhere classified
