I have been working hard at converting the new Moodle question engine to work in Moodle 2.0, aiming at a deadline this Friday (25th February). On Friday we should have a first OU version of Moodle 2.0 with all the key features, so that we can start testing, even though students won't get onto the new system before July. I have basically finished the question engine, give or take a few features that are not needed for testing, and this week I am just doing some final tidying up of the code.
Hopefully, next week I can start the process of getting it reviewed for inclusion in Moodle 2.1. As I say, there are some gaps in the functionality that will need to be filled in before it can actually be committed, but there is a lot of code to be reviewed (lucky Eloy!) and so I hope we can kick off the process.
So, my excuse for not blogging about the new question engine recently is that I have been too busy working on it to write about it. In the last few days, however, I encountered a couple of nice ideas that would be easy to implement using the flexibility the new question engine gives, and I want to describe them. First, I need to remind you of one key point about the new system:
Question behaviours
As I explained before, a key idea in the new question engine is that of question behaviours. Whereas a question type determines whether you have a multiple-choice, a drag-and-drop, or a short-answer question, a behaviour controls how the student interacts with questions of whatever type. For example, the student may have to type in answers to each question in the quiz, then submit everything, and only then are the questions marked. This is known as the "Deferred feedback" behaviour. Alternatively, the student may answer one question and have their answer marked immediately. If they are wrong, they get a hint and can then immediately have another go. If they get it right on the second or third try, they get fewer marks. This is called the "Interactive with multiple tries" behaviour.
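To make that concrete, here is a rough sketch in PHP of the sort of mark adjustment that "Interactive with multiple tries" performs. This is just an illustration of the logic, not the real behaviour plugin API; the function name, signature, and penalty handling are my own illustrative assumptions.

```php
<?php
// Sketch only, not the real Moodle behaviour API: roughly how
// "Interactive with multiple tries" reduces the mark for each failed try.
function interactive_mark(bool $correct, int $tries, float $penalty): float {
    if (!$correct) {
        return 0.0;
    }
    // One penalty is deducted for every try after the first, never below zero.
    return max(0.0, 1.0 - $penalty * ($tries - 1));
}

// A student who gets it right on the second try, with a penalty of 1/3
// per extra try, keeps about two thirds of the marks.
echo interactive_mark(true, 2, 1 / 3); // 0.6667
```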
When I was first working on this, I did wonder whether it was perhaps overkill to make behaviours fully-fledged Moodle plugins. It seemed to me that I had already implemented all the types of behaviour anyone was likely to want. It turns out I was wrong. Here are three ideas for new behaviours that I have come across since I had that naive thought.
Explain your thinking behaviour
The concept here is that, in addition to presenting the question to the student for them to answer in the usual way, you also give them a text area with the prompt "Explain your answer". When they submit, the question is graded as usual. Moodle does not do anything with the explanation, other than store it and re-display it later when the student or their teacher reviews the attempt. The point is that the student should reflect upon and articulate their thought processes, and the teacher can then see what they wrote, which might be useful for diagnosing what problems the students are having.
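In behaviour terms, all this requires is treating the explanation as an extra, ungraded field in the response. A minimal sketch, again with made-up names rather than the real question engine API:

```php
<?php
// Illustrative sketch only: grade the answer exactly as the question type
// would, and store the explanation without grading it.
function process_submission(array $response, callable $gradeanswer): array {
    return [
        'mark' => $gradeanswer($response['answer']),      // graded as usual
        'explanation' => $response['explanation'] ?? '',  // stored for review only
    ];
}

$result = process_submission(
    ['answer' => '42', 'explanation' => 'I used the hint from part (a).'],
    fn($answer) => $answer === '42' ? 1.0 : 0.0
);
```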
I'm not sure that this would really work. Would the students really bother to write thoughtful comments if there were no marks to be had? However, this would be relatively easy to implement, so we should build it and see what happens in practice. The teacher could always manually adjust the marks based on the quality of the reflection, if that was necessary to incentivise students.
I'm afraid I cannot remember who suggested this idea. It was a post in the Moodle quiz forum some time ago, just after I had implemented the behaviour concept and was thinking that my initial list of behaviours was all anyone could possibly want.
gnikram desab-ytniatreC
This idea I only came across yesterday evening, in a blog post from people in the OU's technology faculty. It is a slightly strange twist on certainty-based marking.
With classic CBM, the student answers the question, and also says how certain they are that they got it right (for example, on a three-point scale). The student will only get full marks if they get the question right and are very certain that they were right. If, however, they express high certainty and are wrong, they are penalised heavily with a big negative mark. To maximise their score, the student must accurately gauge their level of knowledge. This hopefully promotes reflection, and the student's self-awareness of their level of knowledge.
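To make the scoring concrete, here is one widely used scheme (Gardner-Medwin's three-point scale), sketched in PHP. The exact numbers vary between implementations; these are just one common choice:

```php
<?php
// One widely used CBM scheme (Gardner-Medwin's three-point scale):
// higher certainty earns more when you are right, but costs much more
// when you are wrong.
function cbm_score(bool $correct, int $certainty): int {
    $whenright = [1 => 1, 2 => 2, 3 => 3];
    $whenwrong = [1 => 0, 2 => -2, 3 => -6];
    return $correct ? $whenright[$certainty] : $whenwrong[$certainty];
}

echo cbm_score(true, 3);  // 3: right, and sure of it
echo cbm_score(false, 3); // -6: confidently wrong is punished hardest
```

The asymmetry is the whole point: guessing with high certainty is a losing strategy, so the honest move is to report how sure you really are.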
The idea from the OU technology faculty is to do this backwards, for multiple-choice questions. Rather than getting the student to answer the question and then select a certainty, you first show them just the question stem without the choices, and get them to express a certainty. Only then do you show them the choices and let them choose what they think is the right answer.
Again, I am not sure whether this would work, but it is sufficiently easy to do by creating a new behaviour plug-in (plus a small change to the multiple-choice question type so that you can output just the question, without the choices) that it has to be worth a try.
Free text responses with a chance to act on feedback
This last idea I only heard about this morning. There was a session of the OU's "eLearning community" all about eAssessment, which naturally I attended. This is a monthly gathering with a number of different presentations on some eLearning topic. The first three talks were about specific courses that have recently adopted eAssessment: how students had engaged with it, what the effect had been on retention and pass rates, and so on. That was interesting, but not what I want to talk about here. The final talk was by Denise Whitelock from the OU's Institute of Educational Technology, who has just completed a review of recent research into technology-enhanced assessment for the HEA that should be published soon. Here, I just want to pick up on one specific idea from her talk.
I'm afraid that again, I don't recall who deserves credit for this idea. (Once Denise's review is published, I will have a proper reference, but I did not take notes this morning.) It was another UK university that had done this, in the context of language teaching. The student had to type a sentence in answer to the question, then the computer graded that attempt and gave some feedback. Then, the student was immediately allowed to revise their sentence in light of the feedback, and get it re-marked. The final grade for the question is then a weighted sum of the first mark and the second mark. You need to get the weights right: the weight for the first try has to be big enough that the student tries hard to get the question right on their own before seeing the feedback, and the weight for the second try, though smaller, also has to be big enough that the student bothers to revise their response.
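In other words, the behaviour's final grading step is just a weighted average. A sketch, with placeholder weights since I do not know the exact values that were used:

```php
<?php
// Sketch of the weighted-sum grading described above. The default 0.7/0.3
// weights are illustrative assumptions, not the values from the talk.
function two_try_grade(float $first, float $second,
        float $w1 = 0.7, float $w2 = 0.3): float {
    return $w1 * $first + $w2 * $second;
}

// A student who fails the first try but fixes their sentence after
// feedback still earns the second-try share of the marks.
echo two_try_grade(0.0, 1.0); // 0.3
```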
Now, the OU is currently creating a Moodle question type that can automatically grade sentence-length answers, using an algorithm that my colleague Phil Butcher implemented the first version of in 1978! (When I say we are creating this, what I actually mean is that we have contracted Jamie Pratt, a freelance Moodle developer, to implement it to our specification.) Anyway, once you have that, the idea of allowing two tries, with feedback after the first try, and a final grade that is a weighted sum of the marks for the two tries, is just another behaviour.
So, my initial thought that people would not have many ideas for interesting new behaviours seems to have been wrong. The flexibility I built into the system is worth having.
Hi Tim,
The "gnikram desab-ytniatreC" (great name, btw ;-)) reminded me of something that you might be interested in (another type of question behaviour maybe? or another type of question? I don't know)
In Hot Potatoes (in JQuiz), apart from the usual short-answer (SA) and multi-choice (MC) question types, there's a so-called hybrid question, which is a hybrid of SA and MC. In a hybrid question, the student is presented with a standard SA question and is given a specific number of tries at this question. If s/he doesn't manage to get the answer right in the n tries, the question automagically turns into a MC question - the same question stem, but instead of the box to write the answer in, the student gets a few choices to select from.
I think it would be great to have this question type (or behaviour?) in Moodle.
Cheers,
Przemek
Great idea, Przemyslaw,
I have created and modified question types in the past and would love to contribute in coding this one under Tim's guidance.
There is another area of functionality that would do well to become a plugin - the question-selection scheme - of which random questions seem to be a patchy implementation.
Hello Tim,
I have asked Remote-Learner to set up a sandbox for me using Moodle 2.0.2 with your assessment upgrades. I want to test out the CBM function but cannot seem to locate it in the question behavior section. Does it need to be activated somewhere, or is it not complete?
Tim Florian
tpflorian@dcsdk12.org
To get the code, you just need to clone from https://github.com/timhunt/Moodle-Question-Engine-2 and run the install. (Upgrade code not done yet.) Further information on http://docs.moodle.org/en/Development:Question_Engine_2.