Resource Support in Student Feedback Systems: Impact in Higher Education
Rationale
Student evaluation systems are widely used in higher education: universities employ them to collect feedback from students in order to understand their educational experience and to identify areas for improvement (Blair & Valdez Noel, 2014).
The widespread introduction of student evaluation systems in UK universities aims to improve both the quality of teaching and learning and student satisfaction (Nixon, Scullion, & Hearn, 2016). With the marketization of higher education, the growing prominence of students as consumers has made student satisfaction a key measure of the quality of university education. However, an ill-defined pursuit of student satisfaction may also marginalize institutions' pursuit of overall educational quality, undermining long-term educational outcomes (Mägi, Jaakson, Aidla, Kirss, & Reino, 2012).
As a student now studying at a UK university, I often question whether the university places real emphasis on our evaluations. This is why I chose "the impact of the student feedback system in higher education on students' subsequent educational experiences" as my research topic. Much research has explored the value of student feedback systems from different angles (Kember, Leung, & Kwan, 2002), such as whether the opinions students express through the feedback system influence course design (Aleamoni, 1999), or how teachers regard the feedback system and whether student feedback affects their subsequent teaching content and methods (Flodén, 2016). However, these studies treat feedback as a whole; there is little research on any specific component of it. My research on course material/resource support in student feedback systems will help address this gap.
Literature Review
With the wide application of student feedback systems, current research can be divided into four main strands: teachers' attitudes and responses to student feedback, the factors affecting the content of student feedback, the value of student feedback systems to higher education, and the effectiveness of student feedback systems in practice (Alderman, Towers, & Bannah, 2012).
From the teachers' perspective, research on responses to student feedback has focused on teachers' attitudes and on whether feedback affects their subsequent teaching arrangements. Some studies show that most teachers have positive, open attitudes toward student feedback (Wong & Moni, 2013), while others report the opposite (Simpson & Siguaw, 2000). Yet regardless of their attitudes, teachers are unlikely to make significant changes to the curriculum as a result of student influence (Flodén, 2016). This is why I chose to examine whether student feedback is taken into consideration from the perspective of course resource support rather than content adaptation: whether limited by HEI regulations or by teachers' reactions, student feedback by itself is unlikely to have a significant impact on teaching content in the short term.
There is a wealth of research on how different factors affect students' feedback. From a macro perspective, studies have examined how the technology and format of feedback systems affect students' enthusiasm for the corresponding surveys, such as whether reward mechanisms improve response rates and whether survey designs are reliable (Porter, Whitcomb, & Weitzer, 2004). At the level of individual differences, students' personal preferences and the types of courses they take have also been explored, including the influence of students' grades and learning habits on the content of their feedback (Aleamoni, 1999). This shows that a large number of variables need to be taken into account when studying student feedback.
Many studies have also questioned the value of student feedback systems (Pounder, 2007), including whether they improve the quality of subsequent education, for example whether student feedback is used to adjust existing support systems (Mertova & Nair, 2011). One study that inspired me collected student feedback on five courses in 2011 and again a year later, then compared the two rounds of feedback given by different students on the same courses. The results suggest that student feedback is informative, but that evidence of this feedback actually being translated into actions that improve the student learning experience is lacking (Blair & Valdez Noel, 2014). In other words, institutions may not prioritize practical improvement based on the results they receive (Zhao & Gallant, 2012).
In general, the practical value and effectiveness of student feedback have attracted wide attention. However, current research focuses mainly on the overall effect of student feedback; there is a lack of research on its specific components. I hope to fill this gap through research on the course resource support element of student feedback.
Methodology
This study aims to explore, through qualitative analysis, the practical value of a specific part of the student feedback system (course materials/resources support), that is, its impact on students' subsequent educational experience in higher education.
Research Question: How does feedback on course materials/resources affect students' perceptions of their subsequent educational experience in the context of IOE's mid-term student evaluations?
The course materials/resources section of the mid-term student feedback was selected because, in the second half of the semester, students can experience whether their feedback brought changes and how those changes contributed to their subsequent educational experience. This section was also chosen because, compared with other parts of the feedback, it can be acted on relatively easily in later course planning (Blair & Valdez Noel, 2014).
Questionnaire:
Questionnaires usually consist of structured questions, both open-ended and closed, and can be completed online, over the phone, or on paper. Because they are cheap in both money and time, questionnaires are often used in research requiring a large amount of data (Newby, 2010). This study will also use a questionnaire as the primary data collection tool. The first reason is that the sample size is relatively large (50 people), so having participants complete the questionnaire online saves cost and time. Secondly, since students' opinions on feedback are usually affected by various subjective factors (Aleamoni, 1999), a large body of feedback data collected through a questionnaire survey is more representative (Braun, Clarke, Boulton, Davey, & McEvoy, 2020). As discussed in the literature review, students' perceptions of course feedback can be affected by many variables, including each student's academic achievement, study habits, and personal preferences, and these unavoidable individual factors may affect the results of the investigation (Blair & Valdez Noel, 2014). If small-sample methods such as interviews or ethnography were used instead, the results would be more likely to under-represent the whole student group because of the individual characteristics of the participants. Collecting results from a large number of participants through a questionnaire can, to some extent, reduce the impact of individual variables on the overall results and improve their accuracy and generality (Braun et al., 2020). To reduce differences caused by diverse course contents and settings, the questionnaire will only ask participants to give feedback on the three compulsory first-year courses.
In qualitative analysis, a non-negligible disadvantage of questionnaires is that the collected data often lack depth. This study will try to compensate by adding open-ended questions to the questionnaire. However, studies show that respondents are often impatient with open-ended questions and rarely give in-depth, rich answers (Braun et al., 2020). If the same problem occurs in this study, a small number of participants will be selected for follow-up interviews to increase the richness and depth of the data.
Interview:
As a method for collecting participants' views and experiences, the interview has always played an important role in qualitative research (Heijnen, Stewart, & Espiner, 2021). Interviews are not used as the main method in this study because they are time-consuming, and with 50 participants their feasibility is too low. Another reason is that each student's view of the feedback system is highly subjective and shaped by personal preferences (Blair & Valdez Noel, 2014); relying only on interviews with a few participants could allow the individual views of a few students to stand in for the whole group (Braun et al., 2020).
Setting and respondents:
The sample for this study is 50 second- or third-year students studying for the BA Education at IOE. These year groups were chosen because they have completed at least one mid-term student feedback round and experienced its impact. Participants will be recruited through a promotional email sent to target candidates, and fifty students will be selected from the replies to complete the online questionnaire.
Data analysis plan:
Content analysis will be adopted to analyse the collected data. This method enables a rich and in-depth investigation of students' views and experiences of student feedback (Graneheim & Lundman, 2004). Potential themes in the collected data may include: "Feedback made resource support more effective/ineffective," "I felt my feedback was valued/ignored," and "There were some specific reforms that helped me after my feedback."
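To illustrate how coded themes could be tallied once open-ended responses are transcribed, the following is a minimal sketch in Python. It assumes a simple keyword-based codebook; the keywords, the example responses, and the code_response helper are all hypothetical placeholders, and in practice the codes would emerge from reading the data rather than being fixed in advance.

```python
from collections import Counter

# Hypothetical codebook mapping candidate themes (from the plan above)
# to indicative keywords. These keywords are illustrative only.
CODEBOOK = {
    "resource support improved": ["more readings", "updated slides", "new materials"],
    "feedback felt valued": ["listened", "valued", "acted on"],
    "feedback felt ignored": ["ignored", "no change", "nothing happened"],
}

def code_response(text: str) -> list[str]:
    """Return every theme whose keywords appear in an open-ended response."""
    lowered = text.lower()
    return [theme for theme, keywords in CODEBOOK.items()
            if any(keyword in lowered for keyword in keywords)]

# Hypothetical responses standing in for transcribed questionnaire data.
responses = [
    "After the mid-term survey we got updated slides and more readings.",
    "I felt my comments were ignored; there was no change to the module.",
]

# Count how often each theme occurs across all responses.
theme_counts = Counter(theme for r in responses for theme in code_response(r))
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Such automated tallying would only supplement, not replace, the manual reading and interpretation that content analysis requires.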
Evaluation and Limitations:
When designing questions for the questionnaire or follow-up interviews, it is difficult to completely eliminate personal presuppositions. This is especially true given that, as the designer of the study, I am also a university student, and my personal experience may influence the questions I design. Data collected with such potentially leading instruments may not be objective and may, to some extent, reinforce the presuppositions the researcher held when designing the study (Watson, 2008).
Another problem is that the study involves many variables, and the number of participants is not large enough to offset them and produce representative results (Flodén, 2016). Three modules are involved, and their different curriculum settings, as well as students' personal preferences and differing attitudes toward each module, may affect the results. Future work could reduce the variables, for example by collecting feedback from different groups of students on a single module.
Ethics and Reflexivity
Informed consent:
Before filling out the questionnaire, participants will sign an informed consent form, and the researcher will introduce the study to them in detail to ensure that all participants understand its content and purpose.
Privacy and anonymity:
To protect participants' privacy to the maximum extent, their personal email addresses and personal information will be kept confidential (Sng, Yip, & Han, 2016). In the questionnaire, participants are only asked to indicate whether they are in their second or third year, and the questions concern the compulsory courses that all first-year students take. This prevents anyone from inferring participants' identities from the questionnaire results.
Background and position:
My basic education was completed in China, where classrooms tend to overlook the importance of student feedback and regard it as an unnecessary component. This differs from the UK university environment, which values and encourages student feedback (Alfred, 2003). Compared with students educated in the UK since childhood, my unfamiliarity with student feedback systems may lead me to build my own presuppositions, based on personal experience, into the research, which might affect the objectivity of the results. At the same time, as an undergraduate also studying at IOE, my background is very similar to that of the participants, which may lead me to assume that the terms and concepts used in the questionnaire are familiar to everyone and to omit detailed explanations, leaving participants without a full understanding of the questionnaire content.
Significance and feasibility:
As explained in the rationale section, the marketization of higher education has made higher education institutions pay more attention to student satisfaction (Nixon, Scullion, & Hearn, 2016). The student feedback system is the most important tool for measuring and improving student satisfaction (Alderman, Towers, & Bannah, 2012). This study is intended to fill the gap concerning specific components of student feedback systems within research on their effectiveness and subsequent practice.
This study will complete participant recruitment and questionnaire distribution in February 2023, collect and analyse the data in March, and in April decide, based on the depth and richness of the data, whether to conduct follow-up interviews with a few participants.
References
Alderman, L., Towers, S., & Bannah, S. (2012). Student feedback systems in higher education: a focused literature review and environmental scan. Quality in Higher Education, 18(3), 261–280. https://doi.org/10.1080/13538322.2012.730714
Aleamoni, L. M. (1999). Student rating myths versus research facts from 1924 to 1998. Journal of Personnel Evaluation in Education, 13(2), 153–166. https://doi.org/10.1023/a:1008168421283
Alfred, M. V. (2003). Sociocultural Contexts and Learning: Anglophone Caribbean Immigrant Women in U.S. Postsecondary Education. Adult Education Quarterly, 53(4), 242–260. https://doi.org/10.1177/0741713603254028
Blair, E., & Valdez Noel, K. (2014). Improving higher education practice through student evaluation systems: is the student voice being heard? Assessment & Evaluation in Higher Education, 39(7), 879–894. https://doi.org/10.1080/02602938.2013.875984
Braun, V., Clarke, V., Boulton, E., Davey, L., & McEvoy, C. (2020). The online survey as a qualitative research tool. International Journal of Social Research Methodology, 24(6), 1–14. https://doi.org/10.1080/13645579.2020.1805550
Denovan, A., & Macaskill, A. (2016). Stress and Subjective Well-Being Among First Year UK Undergraduate Students. Journal of Happiness Studies, 18(2), 505–525. https://doi.org/10.1007/s10902-016-9736-y
Flodén, J. (2016). The impact of student feedback on teaching in higher education. Assessment & Evaluation in Higher Education, 42(7), 1054–1068. https://doi.org/10.1080/02602938.2016.1224997
Ginns, P., Prosser, M., & Barrie, S. (2007). Students’ perceptions of teaching quality in higher education: the perspective of currently enrolled students. Studies in Higher Education, 32(5), 603–615. https://doi.org/10.1080/03075070701573773
Graneheim, U. H., & Lundman, B. (2004). Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Education Today, 24(2), 105–112. https://doi.org/10.1016/j.nedt.2003.10.001
Heijnen, I., Stewart, E., & Espiner, S. (2021). On the move: the theory and practice of the walking interview method in outdoor education research. Annals of Leisure Research, 1–19. https://doi.org/10.1080/11745398.2021.1949734
Kember, D., Leung, D. Y. P., & Kwan, K. P. (2002). Does the Use of Student Feedback Questionnaires Improve the Overall Quality of Teaching? Assessment & Evaluation in Higher Education, 27(5), 411–425. https://doi.org/10.1080/0260293022000009294
Lewine, R., & Sommers, A. (2016). Unrealistic Optimism in the Pursuit of Academic Success. International Journal for the Scholarship of Teaching and Learning, 10(2). https://doi.org/10.20429/ijsotl.2016.100204
Newby, P. (2010). Research methods for education. Harlow, England; New York: Pearson Education Ltd.
Nixon, E., Scullion, R., & Hearn, R. (2016). Her majesty the student: marketised higher education and the narcissistic (dis)satisfactions of the student-consumer. Studies in Higher Education, 43(6), 927–943. https://doi.org/10.1080/03075079.2016.1196353
Mertova, P., & Nair, C. S. (2011). Student feedback: The cornerstone to an effective quality assurance system in higher education. Chandos Publishing.
Porter, S. R., Whitcomb, M. E., & Weitzer, W. H. (2004). Multiple surveys of students and survey fatigue. New Directions for Institutional Research, 2004(121), 63–73. https://doi.org/10.1002/ir.101
Pounder, J. S. (2007). Is student evaluation of teaching worthwhile? Quality Assurance in Education, 15(2), 178–191. https://doi.org/10.1108/09684880710748938
Simpson, P. M., & Siguaw, J. A. (2000). Student Evaluations of Teaching: An Exploratory Study of the Faculty Response. Journal of Marketing Education, 22(3), 199–213. https://doi.org/10.1177/0273475300223004
Sng, B., Yip, C., & Han, N.-L. (2016). Legal and ethical issues in research. Indian Journal of Anaesthesia, 60(9), 684–688. https://doi.org/10.4103/0019-5049.190627
Watson, R. (2008). Nursing research: Designs and methods. Edinburgh; New York: Churchill Livingstone/Elsevier.
Wong, W. Y., & Moni, K. (2013). Teachers’ perceptions of and responses to student evaluation of teaching: purposes and uses in clinical education. Assessment & Evaluation in Higher Education, 39(4), 397–411. https://doi.org/10.1080/02602938.2013.844222
Zhao, J., & Gallant, D. J. (2012). Student evaluation of instruction in higher education: exploring issues of validity and reliability. Assessment & Evaluation in Higher Education, 37(2), 227–235. https://doi.org/10.1080/0