Measuring the Effectiveness of Your Flipped Class

This blog post is part of the ‘Flipped Classroom Professional Development Series’.

There are multiple ways to assess the effectiveness of your flipped class. While there is no single perfect way to measure teaching effectiveness, practitioners from HKU have come up with a few useful methods and tips for evaluation, which they shared in the Flipped Classroom Learning Symposium – Sharing of Pedagogies and Practices. In general, adopting a mixed approach allows you to evaluate your class more comprehensively.

How Researchers Measured Effectiveness in the Literature

  1. Criteria of evaluation
    The effectiveness of the flipped classroom has been measured in multiple ways in the literature, most commonly by examining a course’s direct and indirect educational outcomes. A scoping review by O’Flaherty and Phillips (2015) summarized how educators evaluated the effectiveness of flipped classes by measuring these outcomes.

    While different researchers may define “educational outcomes” differently, direct educational outcomes usually refer to (i) students’ scores and grades in traditional summative assessment and (ii) attendance. In particular, students’ performance in tests, exams, group work and group presentations is often used for evaluation in research (Cheng, Lee, Chang & Yang, 2017; Cotta, Shah, Almgren, Macías-Moriarity & Mody, 2016; Gilboy, Heinerichs & Pazzaglia, 2015). In contrast, indirect educational outcomes include (i) students’ course experience; (ii) their attitudes, perceptions, and feelings towards the course; (iii) student engagement and learning behavior (measured by learning data); and (iv) student empowerment and development in the course, e.g., the development of higher-order thinking skills such as creativity, problem-solving and critical thinking.

    According to O’Flaherty and Phillips (2015), limited research had evaluated student learning outcomes in terms of the development of higher-order thinking skills; most researchers instead chose to evaluate (i) student satisfaction with the flipped class; (ii) student–teacher interactions; (iii) student engagement with e-learning tools such as mobile apps; and (iv) the opportunity for real-time, immediate feedback (Gilboy et al., 2015).

  2. Tools for data collection
    Apart from evaluating students’ performance in assignments and reports, various tools can be used to collect data. Examples include student evaluation surveys and interviews. Some researchers also supplement their findings with their own observations.

Strategies Used by Practitioners in HKU
In the Flipped Classroom Learning Symposium, practitioners from HKU shared with us how they evaluate the effectiveness of their flipped classes. In general, they tend to adopt a mixed approach, analyzing both direct and indirect educational outcomes instead of relying on a single instrument. This allows them to evaluate their courses more comprehensively.

Criteria of evaluation
When evaluating the effectiveness of their courses, the practitioners usually collect the following types of information:

  1. Students’ grades: For example, Mr. Mathew Pryor, course instructor of CCHU9001 Designs on the Future, considered grades as strong evidence of students’ improvement.
  2. Students’ comments on and perceptions of (i) the quality of teaching (in terms of clarity of delivery, clarity of goals and standards, opportunities for skill development, etc.); and (ii) assessment design and workload.
  3. Students’ behaviour in face-to-face interactions and online.

Methods of data collection
Students’ feedback can be obtained through formal and informal means.

  1. Formal feedback can be obtained through surveys and interviews.
    • At HKU, the Student Evaluation of Teaching and Learning (SETL) questionnaire is issued at the end of each course as an official way to evaluate course and instructor effectiveness. In Mr. Pryor’s case, SETL scores served as a useful reference for his own performance. Both the quantitative scores (direct ratings by students) and the qualitative responses (open-ended comments) provided vital information for improving his course. Using this questionnaire, he discovered that his student evaluations “go up by 10%” after flipping his class, concrete evidence of the effectiveness of the flipped class approach.
    • In 2014, Professor Rick Glofcheski collected students’ feedback on his Tort Law flipped class using a survey with TeLi’s support. The survey collected both quantitative and qualitative evidence of the effectiveness of his flipped classes. Below are some examples:
      Quantitative evidence:
      60% of students found the classes “useful”, and 34% “very useful”.
      (Image credit: Professor Rick Glofcheski)

      Qualitative evidence (anonymous comments from students):

      • “It helps me better understand and remember the consideration factors of duty of care.”
      • “It also is an opportunity to discuss with other classmates and get ideas and inspirations from them.”
      • “The class also acts as a useful preparation for future legal practices as it encourages students to articulately express themselves in both oral and written forms.”
      • “Very useful, made me understand the problems better and engage in debate with other students.”
    • Dr. Ng Ming Yen from the Department of Diagnostic Radiology also collected feedback from students in his tutorials on chest pain imaging using a questionnaire, as part of an experiment he conducted in 2016–2017 to examine the effectiveness of the flipped class approach. Sixty students first attended lectures and completed a questionnaire; six months later they attended flipped classes and completed the questionnaire again. The results showed that students generally appreciated the videos, and over 75% of them thought the flipped class was an improvement.
      Apart from quantitative data, Dr. Ng also collected qualitative comments from students. For example, some students asked for more cases and more time for discussion. These comments provide direction for improvement in the next cohort.
  2. Informal feedback can be quickly obtained by teachers in class and online. For example, a quick show of hands gives teachers a rough impression of whether students enjoy an activity. Teachers can also invite students to give anonymous feedback using discussion forums or online polling tools, such as Mentimeter.

    Mr. Pryor, for example, highly valued and respected students’ feedback. To understand how students perceived his teaching, he collected informal feedback by asking simple, straightforward questions such as “Which activity do you like or not like?” or even “Are you happy?” on discussion forums or with Mentimeter. This immediate feedback from students is pivotal in course planning and strategizing.

  3. Observation of students’ behaviour in face-to-face interactions: It is also important for teachers to observe students’ responses and behaviour in class, as their body language honestly reflects their level of engagement and satisfaction. Such observations provide alternative evidence to support findings from formal surveys. For example, Dr. Courtney Fung evaluated the effectiveness of her teaching by observing students’ behaviour and responses. In class, students assume the roles of different nations and simulate real-world political negotiations to resolve crises. Since this activity was student-led, Dr. Fung acted as a facilitator and an observer during the negotiations. She observed that students were not only engaged in class but even self-initiated further discussions over lunch afterwards. This high level of engagement in turn reflected the effectiveness of the class.


To sum up, it is best to evaluate a course along multiple dimensions, as different measures shed light on different aspects of a course. Direct and indirect educational outcomes, as well as students’ feedback, engagement and learning behavior, each reveal something different about how effective a flipped class is, depending on the nature of the course. Aligning your expected student learning outcomes with appropriate measures is crucial for effective evaluation.

Building a flipped class is a long process of development. From preparing online and pre-class elements, encouraging student participation, and designing in-class activities, to evaluating effectiveness, a lot of support and resources may be needed. It is our mission to support teachers in developing e-learning materials and flipping their classes. Contact us if you need help!

Next step
If you are interested in further exploring teaching and learning with us, don’t miss the Authentic Assessment Symposium: The Transformation of Learning in Higher Education on May 3!


Cheng, X., Lee, K. H., Chang, E. Y., & Yang, X. (2017). The “flipped classroom” approach: Stimulating positive learning attitudes and improving mastery of histology among medical students. Anatomical Sciences Education, 10(4), 317-327.

Cotta, K. I., Shah, S., Almgren, M. M., Macías-Moriarity, L. Z., & Mody, V. (2016). Effectiveness of flipped classroom instructional model in teaching pharmaceutical calculations. Currents in Pharmacy Teaching and Learning, 8(5), 646-653.

Gilboy, M. B., Heinerichs, S., & Pazzaglia, G. (2015). Enhancing student engagement using the flipped classroom. Journal of Nutrition Education and Behavior, 47(1), 109-114.

O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. Internet and Higher Education, 25, 85-95. doi:10.1016/j.iheduc.2015.02.002

The University of Hong Kong (2018). Educational aims and institutional learning outcomes. In Undergraduate Handbook.