How to evaluate?
Receiving the university evaluation report for your course can cause a certain amount of anxiety, particularly in relation to possible negative feedback from students about your teaching or your course.
Most of us will, at one time or another, receive negative feedback from students. This is unavoidable, since it is impossible to please everyone all of the time.
It is important not to dwell on negative feedback, except as an indicator that aspects of your course design or teaching need changing. Make the necessary changes, communicate them to your next class of students, and be ready to make further changes during the next evaluation cycle.
Key points to consider when making sense of the evaluation data:
Use multiple sources of data
University student evaluation data is just one source of data! Evaluation researchers caution that if one is serious about accurately evaluating and improving courses and teaching, more than one source of data about your course and teaching needs to be used (Miller, 1987). You can collect data from a range of sources, including yourself, students, peers, documents, colleagues, industry partners, and administrators (Cashin, 1989).
Use appropriate sources of data
All stakeholders will have a particular perspective on various aspects of your teaching and course. However, not all perspectives are appropriate for evaluation, particularly if the evaluation is summative. For example, students are not an appropriate source for evaluating the relevance of a course to industry, or the appropriateness of course content.
Students are an appropriate data source for evaluating issues such as the helpfulness of assignment feedback, workload, clarity of learning goals, interactions in classes, the facilities, and so on. These factors have a direct impact on student approaches to learning and ultimately their learning experiences and student outcomes (Biggs, 2003; Kek, Darmawan, & Chen, 2007; Prosser & Trigwell, 1999).
Use multiple evaluation methods
Evaluate your teaching and courses with multiple evaluation methods, as far as you practically can.
Integrate quantitative and qualitative data
Complement quantitative evaluation data with qualitative data. The quantitative data provides good information on the 'whats', but the qualitative comments provide you with a rich collection of the 'whys' - providing you with further insights into the 'whats' of quantitative data (Erzberger & Kelle, 2003).
Interpreting quantitative data
When interpreting quantitative data, consider the following:
Determine if your data sample is representative
To determine if your data is representative, check the overall response rate. It is indicated as "percentage of class responded" on the summary report. It refers to the proportion of students who have responded to the survey out of enrolled students. Usually the higher the response rate, the more representative the data.
Online surveys typically elicit lower response rates than paper-based, in-class surveys (Dommeyer, Baum, Hanna, & Chapman, 2004). However, students have been found to provide more information in online evaluation surveys (Hardy, 2003; McGhee & Lowell, 2003), and the ratings are not significantly different from those of paper-based, in-class surveys (Dommeyer et al., 2004; Hardy, 2003).
The most often-asked question is what constitutes an acceptable response rate for online surveys. At this point, there is no definitive percentage that qualifies as an 'acceptable response rate'. However, it is suggested that data obtained from a low response rate be treated with caution. Indeed, if the rate is below 10%, it is a good idea to supplement this data with qualitative student comments.
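The response-rate check described above can be sketched in a few lines of Python. This is a minimal illustration, not part of any official report: the function name, the example numbers, and the 10% warning threshold (taken from the guideline above) are all assumptions for demonstration.

```python
# Sketch: checking whether a survey sample is likely to be representative.
# The 10% threshold follows the guideline in the text above; the numbers
# and names here are illustrative only.

def response_rate(responses: int, enrolled: int) -> float:
    """Percentage of enrolled students who responded to the survey."""
    return 100 * responses / enrolled

rate = response_rate(responses=9, enrolled=120)
print(f"Response rate: {rate:.1f}%")  # prints "Response rate: 7.5%"
if rate < 10:
    print("Low response rate: supplement with qualitative student comments.")
```

A higher response rate generally means the data is more representative; the check above simply flags when extra caution (and qualitative supplementation) is advisable.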
It is a good idea to look at the spread of response scores, from highest to lowest, as well as the mean or the mid-point response score. This is because the spread of responses can indicate the strength of your mean score.
The mean is calculated by totalling the values of all response scores and dividing the total by the number of responses.
If the spread between response scores is very high, the mean is not particularly useful as a general indicator of how students actually feel about your course and/or teaching. Conversely, if the spread between student response scores is low, and they are all very close to the mean, then the mean is representative of student sentiment.
To determine whether the spread, or 'standard deviation', is appropriate, use the following approximate guide: a standard deviation below 0.50 is considered small, 0.51 – 0.90 is considered average, and above 0.90 is considered large.
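The mean, standard deviation, and spread categories described above can be computed with Python's standard library. This is a sketch: the function name and the example scores are invented, the spread thresholds follow the approximate guide above, and whether your institution's reports use the population or the sample standard deviation is an assumption worth checking against your own report.

```python
import statistics

def describe(scores):
    """Mean, (population) standard deviation, and rough spread category
    for a list of response scores, per the approximate guide in the text."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)  # population SD; some reports use sample SD
    if sd < 0.50:
        spread = "small"
    elif sd <= 0.90:
        spread = "average"
    else:
        spread = "large"
    return mean, sd, spread

# Illustrative 5-point-scale responses, clustered near 4
scores = [4, 4, 5, 4, 3, 4, 4, 5]
mean, sd, spread = describe(scores)
print(f"mean={mean:.2f}, sd={sd:.2f}, spread={spread}")
```

Looking at the spread alongside the mean, as the sketch does, tells you how much weight the mean can bear as a summary of student sentiment.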
- Example 1: Suppose you have a mean of 4.00 (considered "good") for SEC item 10 (The assessment tasks were appropriate to the aims of the course). The distribution of responses shows that 67% of students agree with this item (the percentage of positive responses, i.e. strongly agree plus agree). The standard deviation is 1.0, which indicates a very large spread of responses away from the mean.
Interpretation: Some students reported around the same value, while a sizeable group reported very differently. The data signals that you may like to consider why there was such a large difference in reactions to the item. In this case, using the mean alone is not useful because it is not representative of how students feel about the item.
- Example 2: Suppose you have a mean of 4.00 for item 10 and 81% of students agree with the item. The standard deviation is 0.6 (average), indicating a reasonable spread of responses away from the mean.
Interpretation: Most of the students reported around the same value, with a small group responding differently. The mean is useful information.
- Example 3: Suppose you have a mean of 4.00 for the item, and 100% of students agree with the item. The standard deviation is 0.0, indicating a very small spread of responses away from the mean.
Interpretation: All students responded similarly to the item. Here, you can be sure that the mean is representative of how the students feel about the item.
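The interpretation logic running through the three examples can be summarised in a small sketch: the standard deviation, not the mean, decides how much trust the mean deserves. The function name and wording below are invented for illustration; the thresholds follow the approximate guide given earlier.

```python
# Sketch encoding the three worked examples above: whether the mean is a
# trustworthy summary depends on the standard deviation, not the mean itself.

def interpret(mean: float, sd: float) -> str:
    """Rough reading of a mean score given its standard deviation."""
    if sd > 0.90:
        return "large spread: the mean alone is not representative"
    if sd > 0.50:
        return "average spread: the mean is useful information"
    return "small spread: the mean is representative of student sentiment"

for mean, sd in [(4.00, 1.0), (4.00, 0.6), (4.00, 0.0)]:
    print(f"mean={mean:.2f}, sd={sd:.1f} -> {interpret(mean, sd)}")
```

Note that all three examples share the same mean of 4.00; only the spread changes the interpretation.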
Look at the distribution or patterns of responses on each question item
- If all of the responses bunch together at a single rating, you can be sure that the mean or average score is representative of how the raters feel about your teaching and course.
- If you receive quite different responses from different groups, the mean or average score may not be too useful here. You may like to consider why these two groups have reacted so differently. You can reflect on your teaching and course context and consider what you can do to overcome this issue.
- If all the responses are distributed evenly across the range of ratings, do the same as above.
- Try not to ignore negative responses or ratings, even if they are a small percentage. Considering why these students might have responded negatively can also help you decide whether changes to the course and/or teaching are warranted.
Interpreting qualitative data
In interpreting and analysing qualitative comments, it may be practical to use a thematic approach, that is, to categorise or group comments into similar themes. You may not get the categories of themes right on the first try; you will have to conduct a few iterations before a 'picture' emerges that makes sense to you and relates to your context.
Create a matrix or frequency table
While you are categorising the comments into themes, record the number of times comments appear under each theme. This creates a matrix or frequency table of qualitative comments, which can further assist you to 'quantify' the qualitative, open-ended data.
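Once comments have been hand-coded into themes, building the frequency table is mechanical. The sketch below assumes the coding has already been done; the comments and theme labels are invented for illustration.

```python
from collections import Counter

# Sketch: tallying hand-coded student comments into a frequency table.
# Each pair is (comment, theme assigned during manual coding); all
# examples here are invented for illustration.
coded_comments = [
    ("Feedback was too slow", "assignment feedback"),
    ("More worked examples please", "course content"),
    ("Feedback on drafts really helped", "assignment feedback"),
    ("Lectures moved too fast", "pace"),
]

theme_counts = Counter(theme for _, theme in coded_comments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Sorting by frequency (`most_common`) surfaces the themes students raised most often, which is exactly the 'quantified' view of the open-ended data described above.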
For more information on other approaches to analysing qualitative data such as content analysis, grounded theory and narrative analysis, refer to:
-Woods, L., Priest, H., & Roberts, P. (2002). An overview of three different approaches to the interpretation of qualitative data. Part 2: practical illustrations. Nurse Researcher, 10 (1), 43-51.