Dr Andrew Youde

HudCRES

From personal experience, it is very difficult to design a valid and reliable questionnaire that will hold up to scrutiny from both statistical tests and critique from fellow academics. However, questionnaire development is also an area where the academic community can support one another by sharing questions, anonymised primary data, and the findings of validity and reliability tests.

As part of my doctoral research I explored effective tutors and tutoring within blended learning contexts (Youde, 2014). Blended learning typically involves significant online teaching, learning and support but includes some face-to-face contact. Further, the tutors under investigation were delivering modules for part-time students undertaking vocationally relevant degree courses. In order to evaluate tutor practices in this context, learner feedback was needed to explore the quality of tutoring they experienced.

Questionnaires are a useful research instrument when a breadth of opinion about an area is needed, and one was therefore appropriate for this piece of research.

The Course Experience Questionnaire (CEQ)

I could have designed a questionnaire from scratch; however, a number of published questionnaires already consider effective teaching and learning, albeit not in a blended context. To obtain general learner opinion about the quality of tutoring experienced on a module, a modified version of the Course Experience Questionnaire (CEQ) was used (Ramsden, 1991). The CEQ was designed as an indicator of teacher effectiveness on courses in higher education (HE) institutions and draws on learner perceptions of teaching, curriculum and assessment. It was originally designed for courses with traditional approaches to teaching and more regular tutor/learner contact than the context I was researching.

Kreber (2003) modified the CEQ to make it suitable for an individual tutor, rather than for a course team, which was the focus of the original version. In North America, where Kreber's study was based, a course was interpreted as 'a semester-long seminar or lecture usually comprising thirty-six hours of class time and taught by one instructor' (Kreber, 2003: 62), and is therefore similar to a module of study in UK higher education. Richardson and Woodley (2001) and Richardson (2009) made similar modifications to the CEQ for research within distance education contexts. These three published versions, in addition to the original, informed the development of the questionnaire for my research.

My version of the CEQ

Following statistical tests, the version of the CEQ that I developed was found to be largely valid and reliable. It was developed as part of a small-scale research study within an education context and needs further testing across a range of disciplinary areas. Only one section of the questionnaire was found to be problematic, and this was not included as part of the research. The modified CEQ items that I used were:

Good Teaching: communication

  • 2 The tutor of this module motivated me to do my best work
  • 13 The tutor was extremely good at explaining things
  • 15 The tutor worked hard on making the subject interesting

Good Teaching: feedback on, and concern for, student learning

  • 5 The tutor put a lot of time into commenting on my work
  • 10* Feedback on my work was usually given only in the form of marks or grades
  • 11 The tutor made a real effort to understand difficulties I might be having with my work
  • 12 The tutor normally gave me feedback on how I was doing

Clear Goals and Standards

  • 1 It was easy to know the standard of work expected in this module
  • 4 I usually had a clear idea of where I was going and what was expected of me in this module
  • 8* It was often hard to discover what was expected of me in this module
  • 18 The tutor made it clear right from the start what they expected from students

Appropriate Workload

  • 3* The workload in this module was too heavy
  • 9 I was generally given enough time to understand things I had to understand
  • 16* There was a lot of pressure on me to do well in this module
  • 17* The sheer volume of work to get through in this module was too heavy

Appropriate Assessment

  • 6* To do well in this module all you really needed was to rework the course notes
  • 7* The tutor seemed more interested in assessing learning outcomes than what I had understood
  • 14* The tutor asked me questions about facts

* = reverse-coded items.
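For readers who want to try this kind of reliability check on their own data, the sketch below illustrates, purely as an example, how reverse-coded items can be rescored and Cronbach's alpha calculated for one subscale. The item names, the 1–5 Likert range, and the responses are hypothetical assumptions for illustration; they are not data from, or the analysis used in, the study.

```python
# A minimal sketch (not the study's analysis): rescore reverse-coded items and
# compute Cronbach's alpha for one subscale. All values below are hypothetical.
import pandas as pd

LIKERT_MAX = 5  # assumed 1-5 Likert response scale

# Hypothetical responses to the "Appropriate Workload" items (3, 9, 16, 17)
responses = pd.DataFrame({
    "item_3":  [2, 1, 3, 2, 1],
    "item_9":  [4, 5, 3, 4, 5],
    "item_16": [2, 2, 3, 1, 2],
    "item_17": [1, 2, 3, 2, 1],
})

# Items marked * are reverse coded: a score of 1 becomes 5, 2 becomes 4, and so on.
for item in ("item_3", "item_16", "item_17"):
    responses[item] = (LIKERT_MAX + 1) - responses[item]

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print(f"Appropriate Workload alpha: {cronbach_alpha(responses):.2f}")
```

Alpha values of around 0.7 or above are conventionally taken to indicate acceptable internal consistency, although the appropriate threshold depends on the purpose of the scale.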


I summarised the development of this questionnaire in a paper presented at EduLearn19, the 11th International Conference on Education and New Learning Technologies, in Palma, Spain, in July 2019, where it was well received. The full paper is available to download here as part of the conference proceedings.

I would welcome future discussions with researchers exploring the use of the questionnaire, particularly in terms of its value in blended learning contexts.



Want more 'Ed Space?

Read more of the research blog of the Huddersfield Centre for Research in Education and Society (HudCRES).