
Friday, May 1, 2015

eSTEeM conference 2015

eSTEeM is an organising group within the Open University which brings together people doing research into teaching and learning in the STEM disciplines: Science, Technology, Engineering and Maths. Naturally enough for the OU, a lot of that work revolves around educational technology. Once a year they hold a conference where people can share what they have been doing. I went along because I like to see what people have been doing with our VLE, and hence how we could make it work better for students and staff in the future.

It started promisingly enough, in a way. As I walked in to get my cup of coffee after registration, I was immediately grabbed by Elaine Moore from Chemistry, who had two Moodle Quiz issues. She wanted the Combined question type to use the HTML editor for multiple-choice choices (a good idea, we should put that on the backlog), and she had a problem with a Pattern-match question which we could not get to the bottom of over coffee.

But, on to the conference itself. I cannot possibly cover all the keynotes and parallel sessions so I will pick the highlights for me.

Assessment matters to students

The first was a graph from Linda Price’s keynote. Like most universities, at the end of every module we have a student satisfaction survey. The graph showed the students’ ratings in response to three of the questions:

  • Overall, I am satisfied with the quality of this module.
  • I had a clear understanding of what was required to complete the assessed activities.
  • The assessment activities supported my learning.

There was an extremely strong correlation between those. This is nothing very new. We know that assessment is important in determining the ‘hidden curriculum’, and hence we like to think that ‘authentic assessment’ is important. However, it is interesting to see how much this matters to students. Previously, I would not even have been sure that they could tell the difference.
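
As an aside, if you have raw survey responses in a database, checking for this sort of relationship is a one-line aggregate in PostgreSQL. The query below is only an illustration of the idea, not anything from the keynote: survey_responses and its rating columns are invented names, and each rating is assumed to be stored as an integer from 1 to 5.

-- Hypothetical query: how strongly do the three ratings move together, per module?
-- survey_responses(module, student_id, overall_satisfaction,
--                  assessment_clarity, assessment_supported_learning) is an invented table.
SELECT
    module,
    corr(overall_satisfaction, assessment_clarity) AS satisfaction_vs_clarity,
    corr(overall_satisfaction, assessment_supported_learning) AS satisfaction_vs_support
FROM survey_responses
GROUP BY module
ORDER BY module;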

The purpose of education

Into the parallel sessions. There was an interesting talk from the module team for TU100 My digital life, the first course in the computing and technology degrees. Some of the things they do in that module’s teaching are based around the importance of language, even in science. Learning a subject can be thought of as learning to construct the world of that subject through language, or as they put it, humanities-style thinking in technology education. Unsurprisingly, many students don’t like that: “I came to learn computing, not writing.” However, there is a strong correlation between students’ language use and their performance in assessments. By the end of the module some students do come to appreciate what the module is trying to do.

This talk triggered a link back to another part of Linda Price’s keynote. An important (if now rather clichéd) question for formal education is "What is education for, when everything is now available on the web?" (or, to put it more crudely, "Why should students pay thousands of pounds for one of our degrees?"). The answer that came to me during this talk was "To make them do things they don’t enjoy, because we know it will do them good." OK, so that is a joke, but I would like to think there is a nugget of truth in there.

Peer assessment

On to more specifically Moodle-related things. A number of modules have been trying out Moodle’s Workshop activity, which is a tool for peer review or peer assessment. The talk was from the SD815 Contemporary issues in brain and behaviour module team. Their activity involved students recording a presentation (PowerPoint + audio) that critically evaluated a research article. Then they had to upload their presentations to the Moodle Workshop, and review each other’s presentations as managed by the tool. Finally, they had to take their slide-cast, the feedback they had received, and a reflective note on the process and what they had learned from it, and hand it all in to be graded by their tutor.

Now, for OU students at least, collaborative activities, particularly those tied to assessments, are typically another of the things we make them do that they don’t enjoy. This activity added the complexities of PowerPoint and/or Open Office and recording audio. However, it seems to have worked remarkably well. Students appreciated all the things that are normally said about peer review: getting to see other approaches to the same task, and practising the skills of evaluating others’ work and giving constructive feedback. In this case the task was one that the students (healthcare workers studying at postgraduate level) could see was relevant to their vocation, which brings us back to visibly authentic assessment, and the student satisfaction graph from the opening keynote.

For me, however, the strongest message from this talk was what was not said. Very little was said about the Moodle Workshop tool itself, beyond a few screen-grabs to show what it looked like. It seems that this is a tool that does what you need it to do without getting in the way, which is normally what you want from educational technology.

Skipping briefly over

There are many more interesting things I could write about in detail, but to keep this post to a reasonable length I will just skim over the posters I browsed during lunch, and pick out a couple of the other talks:

  • a session on learning analytics, in this case using a neural net to try to identify early on those students (on TU100 again) who get through all the continuous assessment tasks with a passing grade, only to fail the end-of-module assessment, so that they could be targeted for extra support (there is a sketch of the data side of this after the list).
  • a whole morning on the second day, where we saw nine different approaches to remote experiments from around the world, for example the Open University's remote-control telescope PIRATE. It left me with the impression that this sort of thing is much more feasible and worthwhile than I had previously thought.
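
Before any neural net comes into it, the first step is simply pulling out that cohort from the assessment records. Here is a minimal sketch of the kind of query involved; tma_scores and ema_results are invented table names (not the real OU schema), passed is assumed to be a 0/1 flag, and 40 is assumed to be the pass mark.

-- Hypothetical query: students who passed every continuous assessment task (TMA)
-- but failed the end-of-module assessment (EMA).
-- tma_scores(student_id, module, tma_number, score) and
-- ema_results(student_id, module, passed) are invented tables.
SELECT t.student_id
FROM tma_scores t
JOIN ema_results e ON e.student_id = t.student_id AND e.module = t.module
WHERE e.module = 'TU100'
  AND e.passed = 0
GROUP BY t.student_id
HAVING MIN(t.score) >= 40  -- passed every TMA, assuming 40 is the pass mark
ORDER BY t.student_id;

The hard part, of course, is then finding signals early in the module that predict membership of that group, which is where the neural net came in.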

Our session on online quizzes

The only other session I will talk about in detail is the one I helped run. It was a ‘structured discussion’ about the OU’s use of iCMAs (which is what we call Moodle quizzes). I found this surprisingly nerve-wracking. I have given plenty of talks before, and you prepare them. You know what you are going to say, and you are fairly sure it is interesting. Therefore you are pretty sure what is going to happen. For this session, we just had three questions, and it was really up to the attendees how well it worked.

We did allow ourselves two five-minute presentations. We started with Frances Chetwynd showing some of the different ways quizzes are used in modules’ teaching and assessment strategies. This set up a 10-minute discussion of our first question: “How can iCMAs best be used as part of an assessment strategy?”. For this, delegates were seated around four tables, with four or five participants and a facilitator at each table. The tables were covered with flip-chart paper for people to write on.

We were using a World Café format, so after 10 minutes I rang my bell, and all the delegates moved to a new table while the facilitators stayed put. Then, in their new groups, they discussed the second question: "How can we engage students using iCMAs?" The facilitators were meant to build a brief bridge from what had been said by the previous group at their table, before moving on to the new question with the new group.

After 10 minutes on the second question, we had the other five-minute talk, from Sally Jordan, showing some examples of what we have previously learned through scholarship into how iCMAs work in practice. (If you are interested in that, come to my talk at either MoodleMoot IE UK 2015 or iMoot 2015.) This led nicely, after one more round of musical chairs, to the third question: "Where next for iCMAs? Where next for iCMA scholarship?" Finally we wrapped up with a brief plenary to capture the key answers to that last question from each table.

By the end, I really had no idea how well it had gone, although each time I rang my bell, I felt I was interrupting really good conversations. Subsequently, I have written up the notes from each table, and heard from some of the attendees that they had found it useful and interesting, so that is a relief. We had a great team of facilitators (Frances, Jon, Ingrid and Anna), which helped. I would certainly consider using the same format again. With a traditional presentation, you are always left with the worry that perhaps you got more out of preparing and delivering it than any of the audience did out of listening. In this case, I am sure the audience got much more out of it than I did, which is no bad thing.

Wednesday, July 3, 2013

Assessment in Higher Education conference 2013

Last week I attended the Assessment in Higher Education conference in Birmingham. This was the least technology-focused and most education-focused conference that I have been to, and it was interesting to learn about the bigger picture of assessment in universities. One reason for going was that Sally Jordan wanted my help running a 'masterclass' about producing good computer-marked assessment on the first morning. I may write more about that in a future post. I also presented a poster about all the different online assessment systems the OU uses; again, a possible future topic. For now I will summarise the other parts of the conference, the presentations I listened to.

One thing I was surprised to discover is how much the National Student Survey (NSS) is influencing what universities do. Clearly it is seen as something that prospective students pay attention to, and attracting students is important. However, as Margaret Price from Oxford Brookes University, the first keynote speaker, said, the kind of assessment that students like (and so rate highly in the NSS) is not necessarily the most effective educationally. That is, while student satisfaction is worth considering, students do not have all the knowledge needed to evaluate the teaching they receive. She also suggested that NSS ratings have made universities more risk-averse about trying innovative forms of assessment and teaching.

The opening keynote was about "assessment literacy", making the case that students need to be taught a bit about how assessment works, so that they can engage with it most effectively. That is, we want students to be familiar with the mechanics of what they are being asked to do in assessment, so that those mechanics don't get in the way of the learning; but more than that, we want students to learn the most from all the tasks we set them, and since assessment tasks are the ones students pay the most attention to, we should help students understand why they are being asked to do them. I dispute one thing Margaret Price said: that at the moment, if assessment literacy is developed at all, it only happens serendipitously. In my time as a student, there were plenty of occasions when it was covered (although not under that name) in talks about study skills and exam technique.

Another interesting realisation during the conference was that, at least in that company (assessment experts), the "Assessment for learning" agenda is taken as a given. It is used as the reason that some things are done, but there is no debate that it is the right thing to do.

Something that is a hot topic at the moment is more authentic assessment. I think it is partly driven by technology improvements making it possible to capture a wider range of media, and to submit eportfolios. It is also driven by a desire for better pedagogy, and assessments that by their design make plagiarism harder. If you are being asked to apply what you have learned to something in your life (for example in a practice-based subject like nursing) it is much harder to copy from someone else.

I ended up going to all three of the talks given by OU folks. Is it really necessary to go to Birmingham to find out what is going on in the OU? Well, it was a good opportunity to do so. The first of these was about an ongoing project to review the OU's assessment strategy across the board. So far a set of principles has been agreed (for example, affirming the assessment-for-learning approach, although that is nothing new at the OU) and these are about to be disseminated more widely. There was an interesting slide (which provoked some good discussion) pointing out that you need to balance top-down policy and strategy with bottom-up implementation that allows each faculty to use assessment that is effective for its particular discipline. There was another session, by people from Ulster and Liverpool Hope universities, that also talked about the top-down/bottom-up balance/conflict in policy changes.

In this OU talk, someone made a comment along the lines of, "Why is the OU re-thinking its assessment strategy? You are so far ahead of us already and we are still trying to catch up." I am familiar with hearing comments like that at educational technology conferences; it was interesting to learn that we are held in similarly high regard when it comes to policy. The same questioner also used the great phrase "the OU effectively has a sleeper-cell in every other university, in the associate lecturers you employ". That makes what the OU does sound far more excitingly aggressive than it really is.

In the second OU talk, Janet Haresnape described a collaborative online activity in a third level environmental science course. These are hard to get right. I say that having suffered one as a student some years ago. This one seems to have been more successful, at least in part because it was carefully structured. Also, it started with some very easy tasks (put your name next to a picture and count some things in it), and the students could see the relationship between the slightly artificial task and what would happen in real fieldwork. Janet has been surveying and interviewing students to discover their attitudes towards this activity. The most interesting finding is that weaker students comment more, and more favourably, on the collaboration than the better students. They have more to learn from their peers.

The third OU talk was Sally Jordan talking about the ongoing change in the science faculty from summative to formative continuous assessment. It is early days, but they are starting to get some data to analyse. Nothing I can easily summarise here.

The closing keynote was about oral assessment. In some practice-based subjects, like law and veterinary medicine, it is an authentic activity. Also, a viva is a dialogue, which allows the extent of the student's knowledge to be probed more deeply than a written exam can. With an exam script, you can only mark what is there; if something the student has written is not clear, there is no way to probe further. That reminded me of what we do in the Moodle quiz. For example, in the STACK question type, if the student has made a syntax error in the equation they typed, we ask them to fix it before we try to grade it. Similarly, in Pattern-match questions, we spell-check the student's answer and let them fix any errors before we try to grade it. Also, with all our interactive questions, if the student's first answer is wrong, we give them some feedback and then let them try again. If they can correct their mistake themselves, they get some partial credit. Of course, computer-marked testing is typically used to assess basic knowledge and concepts, whereas an oral exam is a good way to test higher-order knowledge and understanding, but the parallel of enabling two-way dialogue between student and assessor appealed to me.
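
To give a feel for the arithmetic of that partial credit, here is a rough worked example. It assumes the default penalty of one third of the marks for each extra try, which question authors can change, so treat the numbers as illustrative rather than definitive:

    3-mark question, right first time:        3 × (1 − 0 × 1/3) = 3 marks
    3-mark question, right on the second try: 3 × (1 − 1 × 1/3) = 2 marks
    3-mark question, right on the third try:  3 × (1 − 2 × 1/3) = 1 mark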

This post is getting ridiculously long, but I have to mention two other talks. Calum Delaney from Cardiff Metropolitan University reported on some very interesting work trying to understand what academics think about as they mark essays. Some essays are easy to grade, and an experienced marker will rapidly decide on the grade. Others, particularly those that are partly right and partly wrong, take a lot longer, weighing up the conflicting evidence. Overall, though, the whole marking process struck me, a relative outsider, as scarily subjective.

John Kleeman, chair of QuestionMark, UK, summarised some psychology research which shows that the best way to learn something so that you can remember it later is to test yourself on it, rather than just reading it. That is, if you want to be able to remember something, then practise remembering it. It sounds obvious when you put it that way, but the point is that there is strong evidence to back up that statement. So, clearly you should all now go and create Moodle (or QuestionMark) quizzes for your students. Also, in writing this long, rambling blog post I have been practising recalling all the interesting things I learned at the conference, so I should remember them better in future. If you have read this far, thank you, and I hope you got something out of it too.

Thursday, March 25, 2010

When do students submit their online tests?

I am currently studying an Open University course (M888 Databases in Enterprise systems). There was an assignment due today, and like many students, I submitted only an hour before the deadline.

That got me thinking, are all students really like that? Well, I don't have access to our assessment submission system, but I do work on our Moodle-based VLE, so I can give you the data from there.

This graph shows how many hours before the deadline students submit their Moodle quizzes (iCMAs in OU-speak):



That is not exactly what I was expecting. Certainly, there is a bit of a peak in the last few hours, but there is another peak almost exactly 24 hours before that, with lesser peaks two and three days before.

Note that all our deadlines are at noon (it used to be midnight, but that changed a few months ago). The graph above is consistent with our general pattern of usage. The following graph shows what time of day students submitted their quiz attempts. It is the same shape as our general load graph for most OU online systems.



I don't know what, if anything, this means, but I thought it was interesting enough to share.

By the way, if you want to compute these graphs for your own Moodle, here are the database queries I used:

-- Number of quiz submissions by hour before deadline
SELECT
    (quiz.timeclose - qa.timefinish) / 3600 AS hoursbefore,
    COUNT(1) AS numsubmissions
FROM mdl_quiz_attempts qa
JOIN mdl_quiz quiz ON quiz.id = qa.quiz
WHERE qa.preview = 0
  AND quiz.timeclose <> 0
  AND qa.timefinish <> 0
GROUP BY (quiz.timeclose - qa.timefinish) / 3600
HAVING (quiz.timeclose - qa.timefinish) / 3600 < 24 * 7
ORDER BY hoursbefore;

-- Number of quiz submissions by hour of day
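-- (This uses PostgreSQL date functions; on MySQL, for example, you could use
-- HOUR(FROM_UNIXTIME(timefinish)) instead.)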
SELECT
    DATE_PART('hour', TIMESTAMP WITH TIME ZONE 'epoch' + qa.timefinish * INTERVAL '1 second') AS hour,
    COUNT(1) AS numsubmissions
FROM mdl_quiz_attempts qa
WHERE qa.preview = 0
  AND qa.timefinish <> 0
GROUP BY DATE_PART('hour', TIMESTAMP WITH TIME ZONE 'epoch' + qa.timefinish * INTERVAL '1 second')
ORDER BY hour;