I have had the opportunity to discuss this question during two recent online events, both hosted by the British Council. The first was a webinar - What do teachers want to know about assessment? (October 2020). The second was a Facebook Live Q&A event - What criteria can we use in assessments? (November 2020).
One question which featured prominently at both events focused on preventing cheating in online tests. Participants were concerned that students completing tests without the supervision of the teacher could not be trusted to rely only on their own efforts.
I agree with the participants that trying to replicate a traditional test experience online is not going to be satisfactory. My suggestion was, therefore, not to try to use these traditional tests. Rather, online assessment should make the most of the extra affordances offered by the new teaching and learning environment.
‘Open book’ tests reward students who can synthesise information and have good critical thinking skills. So, instead of trying to prevent students from searching for answers online, we should consider helping them to develop ‘21st Century Thinking Skills.’ We can use this break from tests delivered in traditional exam halls to consider whether such tests give us any information which teachers do not already have about their students.
The current circumstances could lead to a shift towards assessment which is skills-based and away from assessment which focusses on memorising and repeating information.
One way to consider impact is the number of people who attend an event. On the day, 311 people came to the webinar and 205 people came to the Q&A event. Recordings of both events are still available and continue to be viewed. For example, 6,000 people viewed the Q&A session in the week following the event.
Other research events hosted during the last term by my colleagues at HudCRES are also still available to watch:
Online events may have a longer shelf-life than face-to-face meetings as they are readily accessible to anyone interested at a time of their choosing.
The total numbers attending the two British Council events on the day were broadly similar. The Q&A attendees, however, tended to stay for a more limited length of time; some stayed for only three seconds. That may reflect how people generally interact with sites such as Facebook, or it might be that once their specific question was answered, they moved on from the event.
However, simply counting attendees does not tell us what impact the session has had on them. The question of impact is a very thorny one. It is often framed in terms of change enacted by practitioners following attendance at an event. That is to say, a teacher makes a change to how they teach based on an idea presented by an academic which was developed as part of their research.
89% of the webinar attendees who completed a post-webinar survey stated they would change their practices as a result.
The survey also asked participants to rate their level of satisfaction with the webinar on a scale of 1–10, with 10 representing the highest level of satisfaction. The average, among those who gave a rating, was 9.25.
As a researcher and presenter, I find both figures gratifying, as I can point to evidence which suggests my research can influence classroom practice. However, before I get too carried away with this idea, I need to remember that the responses indicated an intention to make a change. They are not evidence of actual change in teaching practices, nor of improvements resulting from such change. Nevertheless, the data suggests that the events presented research-based evidence which the participants considered to be useful and relevant to their practice.
I think I can say I have offered one possible answer to the first question. The shift to online assessment can represent a chance to modify assessment practices in line with our understandings of how languages are used.
The second question, relating to impact, has proven more difficult to answer. I would not dare to suggest that all the teachers who attended the webinar immediately changed their assessment practices. I would, however, claim that the participants judged the research-based ideas presented to them to be worthy of consideration, and that changes to practice may develop out of this thinking.