Wednesday, February 23, 2011

Etiquette for questions

I have been working hard at converting the new Moodle question engine to work in Moodle 2.0, aiming at a deadline this Friday (25th February). On Friday we should have a first OU version of Moodle 2.0 with all the key features, so that we can start testing, even though students won't get onto the new system before July. I have basically finished the question engine, give or take a few features that are not needed for testing, and this week I am just doing some final tidying up of the code.

Hopefully, next week I can start the process of getting it reviewed for inclusion in Moodle 2.1. As I say, there are some gaps in the functionality that will need to be filled in before it can actually be committed, but there is a lot of code to be reviewed (lucky Eloy!), so I hope we can kick off the process soon.

So, my excuse for not blogging about the new question engine recently is that I have been too busy working on it to write about it. In the last few days, however, I encountered a few nice ideas that would be easy to implement using the flexibility the new question engine gives, and I want to describe them. First, I need to remind you of one key point about the new system:

Question behaviours

As I explained before, a key idea in the new question engine is that of question behaviours. Whereas a question type determines whether you have a multiple-choice, a drag-and-drop, or a short-answer question, a behaviour controls how the student interacts with the questions, whatever their type. For example, the student may have to type in answers to each question in the quiz, then submit everything, and only then are the questions marked. This is known as the "Deferred feedback" behaviour. Alternatively, the student may answer one question and have their answer marked immediately. If they are wrong, they get a hint and can then immediately have another go. If they get it right on the second or third try, they get fewer marks. This is called the "Interactive with multiple tries" behaviour.
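
For anyone who has not looked at the code, here is a hugely simplified sketch of where that dividing line falls. This is just my illustration of the concept, not the real question engine API (the real base class, question_behaviour, and its subclasses carry far more machinery than this):

    <?php
    // Illustrative sketch only: the real Moodle question engine API is richer.

    // A question type knows what a question looks like and how to grade it...
    interface qtype_sketch {
        /** @return float a grade between 0.0 and 1.0 */
        public function grade_response(array $response): float;
    }

    // ...while a behaviour controls the flow of interaction, whatever the type.
    class deferred_feedback_sketch {
        private array $lastresponse = [];

        // During the attempt, just remember the latest response...
        public function process_save(array $response): void {
            $this->lastresponse = $response;
        }

        // ...because nothing is marked until the whole quiz is submitted.
        public function finish(qtype_sketch $question): float {
            return $question->grade_response($this->lastresponse);
        }
    }

    class interactive_sketch {
        private int $triesused = 0;

        // Each submission is marked at once; wrong answers earn a hint, and
        // each try after the first costs a flat penalty of 1/3 of the marks.
        public function process_submit(array $response, qtype_sketch $question): float {
            $this->triesused++;
            $fraction = $question->grade_response($response);
            return max(0.0, $fraction - ($this->triesused - 1) / 3);
        }
    }

The same two question classes work unchanged with either behaviour; that separation is the whole point.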

When I was first working on this, I did wonder whether it was perhaps overkill to make behaviours fully-fledged Moodle plugins. It seemed to me that I had already implemented all the types of behaviour anyone was likely to want. It turns out I was wrong. Here are three ideas for new behaviours that I have come across since I had that naive thought.

Explain your thinking behaviour

The concept here is that, in addition to presenting the question to the student for them to answer in the usual way, you also give them a text area with the prompt "Explain your answer". When they submit, the question is graded as usual. Moodle does not do anything with the explanation, other than store it and re-display it later when the student or their teacher reviews the attempt. The point is that the student should reflect upon and articulate their thought processes, and the teacher can then see what they wrote, which might be useful for diagnosing the problems students are having.

I'm not sure that this would really work. Would the students really bother to write thoughtful comments if there were no marks to be had? However, this would be relatively easy to implement, so we should build it and see what happens in practice. The teacher could always manually adjust the marks based on the quality of the reflection, if that was necessary to incentivise students.
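
To show how little machinery this needs, here is a hypothetical sketch of the grading step such a behaviour might perform. This is not Moodle code; the point is simply that the explanation field bypasses the question type entirely:

    <?php
    // Hypothetical sketch: grade the answer fields as usual, but only store
    // the explanation so it can be re-displayed when the attempt is reviewed.
    function process_explained_submission(array $data, callable $grade): array {
        $explanation = $data['explanation'] ?? '';  // never marked
        unset($data['explanation']);                // hide it from the grader
        return [
            'fraction'    => $grade($data),         // normal marking
            'explanation' => $explanation,          // kept for review
        ];
    }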

I'm afraid I cannot remember who suggested this idea. It was a post in the Moodle quiz forum some time ago, just after I had implemented the behaviour concept and was thinking that my initial list of behaviours was all anyone could possibly want.

gnikram desab-ytniatreC

This idea I only came across yesterday evening, in a blog post from people in the OU's technology faculty. It is a slightly strange twist on certainty-based marking.

With classic CBM, the student answers the question, and also says how certain they are that they got it right (for example, on a three-point scale). The student only gets full marks if they get the question right and are very certain that they were right. If, however, they express high certainty and are wrong, they are penalised heavily with a big negative mark. To maximise their score, the student must accurately gauge their level of knowledge. This hopefully promotes reflection, and the student's self-awareness of their level of knowledge.
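
For concreteness, here is one well-known mark scheme for classic CBM, Tony Gardner-Medwin's three-point scale. I give it as an illustration; other schemes vary the exact numbers:

    <?php
    // One well-known CBM mark scheme (Gardner-Medwin's three-point scale),
    // shown for illustration; other schemes use different numbers.
    function cbm_mark(bool $correct, int $certainty): int {
        $scheme = [
            1 => ['right' => 1, 'wrong' => 0],   // low certainty: nothing to lose
            2 => ['right' => 2, 'wrong' => -2],  // medium certainty
            3 => ['right' => 3, 'wrong' => -6],  // high certainty: a big gamble
        ];
        return $scheme[$certainty][$correct ? 'right' : 'wrong'];
    }

The asymmetry at the top of the scale is the whole point: claiming certainty you do not have is the worst possible strategy.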

The idea from the OU technology faculty is to do this backwards, for multiple-choice questions. Rather than getting the student to answer the question and then select a certainty, you first show them just the question stem, without the choices, and get them to express a certainty. Only then do you show them the choices and let them choose what they think is the right answer.

Again, I am not sure whether this would work, but it is sufficiently easy to do by creating a new behaviour plug-in (plus a small change to the multiple-choice question type so that you can output just the question stem, without the choices) that it has to be worth a try.
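
In behaviour terms, all that changes is the order of the steps. A hypothetical sketch of the sequence such a plug-in would drive:

    <?php
    // Hypothetical sketch of the reversed-CBM flow: the same ingredients as
    // classic CBM, but stepped through in a different order.
    const REVERSE_CBM_STEPS = [
        'show_stem',         // display the question stem only, no choices
        'record_certainty',  // student commits to a certainty level
        'show_choices',      // only now reveal the multiple-choice options
        'record_answer',     // student picks an option
        'grade',             // apply the usual CBM mark scheme
    ];

    // Classic CBM simply runs 'record_certainty' after 'record_answer' instead.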

Free text responses with a chance to act on feedback

This last idea I only heard about this morning. There was a session of the OU's "eLearning community" all about eAssessment, which naturally I attended. This is a monthly gathering with a number of different presentations on some eLearning topic. The first three talks were about specific courses that have recently adopted eAssessment: how students had engaged with it, what the effect had been on retention and pass rates, and so on. That was interesting, but not what I want to talk about here. The final talk was by Denise Whitelock from the OU's Institute of Educational Technology, who has just completed a review of recent research into technology-enhanced assessment for the HEA that should be published soon. Here, I just want to pick up on one specific idea from her talk.

I'm afraid that, again, I don't recall who deserves credit for this idea. (Once Denise's review is published, I will have a proper reference, but I did not take notes this morning.) It was another UK university that had done this, in the context of language teaching. The student had to type a sentence in answer to the question; the computer graded that attempt and gave some feedback; then the student was immediately allowed to revise their sentence in the light of the feedback and get it re-marked. The final grade for the question is then a weighted sum of the first mark and the second mark. You need to get the weights right: the weight for the first try has to be big enough that the student tries hard to get the question right on their own before seeing the hints, while the weight for the second try, though smaller, also has to be big enough that the student bothers to revise their response.
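
As a sketch of the arithmetic, with weights I have made up for illustration (finding the right values would be part of the experiment):

    <?php
    // Sketch of the final-grade calculation, with hypothetical weights.
    function weighted_grade(float $firsttry, float $secondtry,
            float $w1 = 0.7, float $w2 = 0.3): float {
        // Both marks are fractions between 0.0 and 1.0, and $w1 + $w2 == 1.0.
        return $w1 * $firsttry + $w2 * $secondtry;
    }

    // Example: a half-right first try (0.5), corrected after feedback (1.0),
    // scores 0.7 * 0.5 + 0.3 * 1.0 = 0.65, so the first try still dominates.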

Now, the OU is currently creating a Moodle question type that can automatically grade sentence-length answers, using an algorithm whose first version my colleague Phil Butcher implemented in 1978! (When I say we are creating this, what I actually mean is that we have contracted Jamie Pratt, a freelance Moodle developer, to implement it to our specification.) Anyway, once you have that, the idea of allowing two tries, with feedback after the first try and a final grade that is a weighted sum of the marks for the two tries, is just another behaviour.

So, my initial thought that people would not have many ideas for interesting new behaviours seems to have been wrong. The flexibility I built into the system is worth having.

Wednesday, February 9, 2011

Should you listen to futurologists?

Educause just published their annual survey describing "six areas of emerging technology that will have significant impact on higher education and creative expression over the next one to five years".

This got circulated round the team at work and I rather cynically asked "so, what did they predict last year then?" My colleague Pete Mitton took that question and ran with it to produce the following analysis:
OK, as I have a full set of Horizon Reports on my hard disk, here's a summary of their predictions for the years 2004-11.

I've pushed some titles together where the wording is different but the intent is the same (for example they've used mobile computing/mobiles/mobile phones in the past with the same meaning).

The numbers after each technology are the time-to-adoption horizons, in years, from each successive report (2004-11) in which it appeared:

User-created content: 1, 1, 1
Social Networking: 4-5, 1, 1
Mobiles: 2-3, 2-3, 1, 1, 1
Virtual Worlds: 2-3
New Scholarship and Emerging forms of publication: 4-5
Massively Multiplayer Educational Gaming: 4-5
Collaboration Webs: 1
Mobile Broadband: 2-3
Data Mashups: 2-3
Collective Intelligence: 4-5
Social Operating Systems: 4-5
Cloud Computing: 1
The Personal Web: 2-3
Semantic-Aware Applications: 4-5
Smart Objects: 4-5
Open Content: 1
Electronic Books: 2-3, 1
Simple Augmented Reality: 4-5, 4-5, 2-3, 2-3
Gesture-based computing: 4-5, 4-5
Visual Data Analysis: 4-5
Game-based learning: 2-3, 2-3, 2-3
Learning analytics: 4-5
Learning Objects: 1
Scalable Vector Graphics: 1
Rapid Prototyping: 2-3
Multimodal Interfaces: 2-3
Context Aware Computing aka Geostuff: 4-5, 4-5, 2-3
Knowledge Webs: 4-5
Extended Learning: 1
Ubiquitous Wireless: 1
Intelligent Searching: 2-3

Of course, the purpose of a report like this is not to accurately predict the future. The aim is rather to stimulate informed debate about the technologies that are coming up. Within our team, at least, they seem to have succeeded.

I thought, however, that this analysis was interesting enough to share. It provides some context for this year's predictions. More generally, it shows how difficult it is to predict future technology trends.