Completed TDG Projects
Assessment and Feedback
Assessing, Validating and Improving an L2 English Speaking Assessment Rubric
Abstract
Coming under the TDG priority area of ‘assessment tasks and assessment standards’, this project aimed to develop a procedure for assessing and improving the marking criteria of speaking assessments by investigating the speaking test rubric currently used in CAES1000 ‘Core University English’, a programme offered through HKU’s Centre for Applied English Studies (CAES). We quantified the variance in grade assignment between teacher examiners that may result from use of the existing speaking assessment rubric, and developed a method to reduce such variance systematically through a combination of quantitative and qualitative approaches, including factor analysis and think-aloud protocols. These approaches allowed us to determine teachers’ thought processes when assessing EAP speaking with the current rubric, and from this information we developed a sample L2 English speaking test rubric that is fairer and more transparent for the students and teachers involved in the assessment process.
Principal Investigator: Dr. P.R. Crosthwaite, Centre for Applied English Studies, Faculty of Arts
Project level: Programme-level project
Project completion: May 2016
Deliverables
- A new framework was developed for assessing inter-rater reliability on speaking test assessments for the largest course offered at HKU; the framework can also be applied to other HKU speaking tests beyond the CAES1000 course.
- A new sample assessment rubric for spoken L2 English tutorial discussions was developed that is comparable to internationally recognised standards such as the CEFR or IELTS, and that promotes the transparency and reliability of grades for the students and teachers involved in the rating process.
- A quality control system was implemented that analyses inter-rater reliability periodically, prompting additional improvements to the rubric if required.
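The report does not specify which agreement statistic the quality control system uses; for two examiners grading the same speaking tests, a common choice is Cohen's kappa, which corrects raw percentage agreement for agreement expected by chance. The sketch below is illustrative only (the function and the example grade lists are hypothetical, not taken from the project):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater assigned grades at their own base rates
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grades assigned to the same ten speaking tests by two examiners
a = ["B", "B", "A", "C", "B", "A", "B", "C", "B", "A"]
b = ["B", "C", "A", "C", "B", "A", "B", "B", "B", "A"]
print(round(cohens_kappa(a, b), 3))  # → 0.677
```

A periodic check of this kind could flag a rubric for review whenever kappa falls below an agreed threshold (values above roughly 0.6 are conventionally read as substantial agreement).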