Wednesday, April 23, 2014

The four types of thing a Moodle developer needs to know

In order to write code for Moodle, there is an awful lot you need to know. Quite how much was driven home for me when I taught a Moodle developers' workshop at the German MoodleMaharaMoot in February. When preparing for that workshop, I thought you could group that knowledge into three categories, but I have since added a fourth to my thinking.

1. The different types of Moodle plugin

The normal way you add functionality to Moodle is to create a plug-in. There are many different types of plug-in, depending on what you want to add (for example, a report, an activity module or a question type). Therefore, the first thing to learn is what the different types of plug-in are and when you should use them. Then, once you know which type of plug-in to create, you need to know how to make that sort of plug-in. For example, what exactly do you need to do to create a new question type?
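
Whatever the type, every plugin starts the same way: a folder in the right place in the code tree, containing a version.php file that declares what the plugin is. Here is a minimal sketch (the plugin name qtype_myexample is made up for illustration):

<?php
// Minimal version.php for a hypothetical question type plugin.
// The component name encodes the plugin type ('qtype') and its name.
defined('MOODLE_INTERNAL') || die();

$plugin->component = 'qtype_myexample'; // Made-up name for this sketch.
$plugin->version   = 2014042300;        // Date-based version number: YYYYMMDDXX.
$plugin->requires  = 2013111800;        // The Moodle release required (2.6).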

2. How to make Moodle code do things

Irrespective of what sort of plug-in you are creating, you also need to know how to make your code do certain things. Moodle is written in PHP, so generic PHP skills are a prerequisite, but Moodle has its own libraries for many common tasks, like getting input from the user, loading and saving data from the database, displaying output, and so on. A developer needs to know many of these APIs.
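
To give a flavour, here is a hedged sketch of a typical page script (not from any real plugin; the file path and names are made up). It shows three of those common tasks: cleaning user input, reading from the database, and producing output:

<?php
// A sketch of typical Moodle API usage; names and paths are illustrative.
require_once(dirname(__FILE__) . '/../../config.php');

// Getting input from the user: the value is cleaned as an integer.
$courseid = required_param('id', PARAM_INT);

// Loading data from the database via the $DB global.
$course = $DB->get_record('course', array('id' => $courseid), '*', MUST_EXIST);
require_login($course);

// Displaying output via the $PAGE and $OUTPUT globals.
$PAGE->set_url('/local/myexample/index.php', array('id' => $courseid));
echo $OUTPUT->header();
echo $OUTPUT->heading(format_string($course->fullname));
echo $OUTPUT->footer();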

3. How to get things done in the Moodle community

If you just want to write Moodle code for your own use, then the two types of know-how above are enough, but if you want to take full advantage of Moodle's open source nature, then you need to learn how to interact with the rest of the Moodle development community. For example: how to ask for help, how to report a bug, how to submit the changes for a bug you have fixed or a feature you have implemented, how to get another developer to review your proposed code change, how to update your customised Moodle site using git, and so on.

4. Something about education

Those three points were what I thought of when trying to work out what I needed to teach during the developer workshop I ran. Since then (while listening to one of the presentations at the UK MoodleMoot, as it happens), I realised that there is a fourth category of knowledge required to be a good Moodle developer. It matters that we are making software to help people teach and learn. I am struggling to think of specific concepts here, with URLs for where you can learn about them, as I gave in the previous sections, but there is a whole body of knowledge about what makes for effective learning and teaching, and it is useful to have some appreciation of that. You also need some awareness of how educational institutions operate. If you hang around the Moodle community for any length of time, you will also discover that the educational culture is different in different countries. For example, in the southern hemisphere the long summer holiday is also the Christmas holiday, and in America they expect to award grades of more than 100%.

Summary

Does this subdivision into categories actually help you learn to be a Moodle developer? I am not sure, but it was certainly useful when planning my workshop. The workshop was structured around creating three plugins on the first day: a Filter, a Block and then a Local plug-in. However, those exercises were structured so that while moving through different types of category-one knowledge, we also covered key topics from categories two and three in a sensible order. So it helped me, and I thought it was an interesting enough thought to share.

Thursday, February 27, 2014

Reflections on listening to conference presentations in German

I am at the MoodleMaharaMoot in Leipzig listening to people talk about Moodle.

First, the good news is that about half the words in English come from the same roots as German, so there are a fair number of words you can recognise, at least if you have time to read them from the screen. For words that seem really key, there is Google Translate. Also, the Germans seem to like using English phrases for eLearning-related things, like Learning Analytics, or Multiple Choice.

However, I don't think I was understanding even 10% of the words. What really makes a difference to intelligibility is what is on the screen. If the speaker just has PowerPoint slides with textual bullet points, that does not help. If the speaker uses the screen to show you what they are talking about - screen grabs or live demos - that is much better. Of course, this is just: show, don't tell.

It also makes a big difference whether you already know a little bit about what is being said. I talked to some people from the University of Vienna two years ago when they started building their offline quiz activity, so I already knew what it was supposed to do. I followed that presentation (which contained many screen-grabs) better than most. What they have done looks really slick, by the way.

Regarding my presentation, I feel vindicated in my plan to spend almost all of the presentation doing a live demonstration of the question types I was talking about. Of course, I am sure that almost everyone in the audience has better English than I have German. Also, I apologise that I talked for the whole time, and did not leave an opportunity for questions.

Finally, I have been speculating (without reaching any conclusions) about whether the experience of sitting there, failing to understand almost everything that is being said, and just picking up some scraps from the slides, gives me any empathy for people with severe disabilities who need major accessibility support to use software. As I say, these thoughts are inconclusive. What does anyone else think?

By the way, Germans applaud by rapping on the table with their knuckles. Your trivia fact for the day.

Wednesday, January 29, 2014

Moving the OU Moodle code to Moodle 2.6.1

I spent today upgrading our Moodle codebase from Moodle 2.5.4 to Moodle 2.6.1. This is the start of work towards our June release of the VLE. We have a March release based on Moodle 2.5.4 to get on the live servers first, and testing that will overlap with the development of the 2.6.1-based version.

Doing the merge

The first stage of the process is to merge in the new code. This is non-trivial because even if you just do

git checkout -b temp v2.5.4
git merge v2.6.1

then you will get a lot of merge conflicts. That is a product of how the Moodle project manages its stable branches: the same fixes are applied to each supported branch as separate commits, so as far as git is concerned the branches have diverged. If your own code changes also lead to other merge conflicts, then sorting out the two is a real mess.

Fortunately, there is a better way, because we know how we want to resolve any conflicts between 2.5.4 and 2.6.1. We want to end up with 2.6.1. Using git merge strategies, you can do that:

git checkout -b merge_helper_branch v2.6.1
git merge --strategy=ours v2.5.4

That gives you a commit that is a descendant of both v2.5.4 and v2.6.1, and which contains code that is identical to v2.6.1. You can verify that using git diff v2.6.1 merge_helper_branch. That should produce no output.

Having built that helper branch, you can then proceed to upgrade your version of the code. Our version of Moodle lives on a branch called ouvle, which we originally branched off Moodle 2.1.2 in October 2011. Since then, we have made lots of changes, including adding many custom plugins, and merging in many Moodle releases. Continuing from the above, we do

git checkout ouvle
git merge --strategy-option=patience merge_helper_branch

That gave a lot of merge conflicts, but they were all to do with our changes. Most of them were due to MDL-38189, which sam marshall developed for Moodle 2.6, and which we had back-ported into our 2.5 code. That back-port made a big mess, but fortunately most of the files affected did not have any other ou-specific changes, so I could just overwrite them with the latest versions from v2.6.1.

git checkout --theirs lang/en backup lib/filestorage admin/settings/development.php lib/form/form.js
git add lang/en backup lib/filestorage admin/settings/development.php lib/form/form.js

Similarly, we had backported MDL-35053, which led to more conflicts that were easy to resolve. Another case was the Single activity course format, which we had used as an add-on to Moodle 2.5. That is now part of the standard Moodle release. The change caused merge conflicts, but again there was a simple solution: take the latest from 2.6.1.

After all that, there were only about five files that needed more detailed attention. They were mostly where a change had been made to standard Moodle code right next to a place where we had made a change. (Silly rules about full stops at the ends of comments!) They were easy to fix manually. The one tricky file was lib/moodlelib.php, where about 400 lines of code had been moved to lib/classes/useragent.php. There were two ou-specific changes in the middle of that, which I had to re-do in the new version of that code.

Verifying the merge

Having resolved all the conflicts, it was then time to try to convince myself that I had not screwed anything up. The main check was to compare our ouvle code with the standard 2.6.1 code. Just doing git diff v2.6.1 ouvle does not work well, because it shows the full contents of all the new files we have added. You need to read the git documentation and work out the incantation

git diff --patience --diff-filter=CDMRTUXB v2.6.1 ouvle

That tells git to show only changes to existing files - the ones that are part of standard Moodle 2.6.1 - by listing every change status except A for added. That is a manageable amount of output to review. We have a strict policy that any change to core Moodle code is marked up like this:

// ou-specific begins #2381 MDL-28567
/*
        $select = new single_select(new moodle_url(CALENDAR_URL.'set.php',
                array('return' => base64_encode($returnurl->out(false)),
                        'var' => 'setcourse', 'sesskey'=>sesskey())),
                'id', $courseoptions, $selected, null);
*/
        $select = new single_select(new moodle_url(CALENDAR_URL.'view.php',
                array('return' => $returnurl, 'view' => 'month')),
                'course', $courseoptions, $selected, null);
// ou-specific ends #2381 MDL-28567

That is, the original Moodle code is still there, but commented out, alongside our modified version, and the whole thing is wrapped in paired begin and end markers that refer to a ticket id in our issues database and, if applicable, a Moodle tracker issue. In this case I can check that MDL-28567 has still not been resolved, so we still need this ou-specific change. What I am doing when looking at the diff output is verifying that every change is marked up like that, and that any issues mentioned are still relevant.

The other check is to search the whole codebase for ou-specific, and again review all the issue numbers mentioned (see the command below). These combined checks find a few ou-specific changes that are no longer needed, which is a good thing.
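
For the search itself, a plain git grep is one way to do it (any code search tool works just as well):

git grep -n "ou-specific"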

What happens next

Now that I think the code seems right, it is time to test it, so I upgrade my development install. It mostly works, except that our custom memcache session handler no longer works (the session code seems to have changed a lot, including an official memcached session handler in core). For now I just switch back to default Moodle sessions, and make a note to investigate this later.

Apart from that, the upgrade goes smoothly, and, apart from thousands of debugging warnings about use of deprecated code, I have a working Moodle site, so I push the code to our git server, and warn the rest of the team that they can upgrade if they feel brave.

The next thing, which will take place over the next few weeks, is to check every single one of our custom plugins to verify that it still works properly in Moodle 2.6. To manage that we use a Google Docs spreadsheet that we can all edit, listing all the add-ons, with who is going to be responsible for checking each one, and whether they have done so yet.

The state of OU Moodle customisation

Our regular code-merges are a good moment to take stock of the extent to which we have customised Moodle. Here are some headline numbers:

  • 212 custom plug-ins: Of those, 10 are ones we have taken from the community, including Questionnaire, Certificate, Code-checker and STACK (we helped create those last two). Of our own plugins, 58 (over a quarter) are shared with the community, though the counting is odd because ForumNG contains 20 sub-plugins.
  • 17 ou-specific issues: That is, reasons we made a change to core code that could not be done as an add-on.
  • Due to those 17 reasons, there are 42 pairs of // ou-specific begins/ends comments in the code.

So, we continue to be disciplined about not changing core code unless we really have to, but the number of plugins is getting a bit crazy. A lot of the plugins are, however, very small. They just do one thing. Also, we run a range of very different sites, including OpenLearn, OpenLearn Works, The Open Science Lab and our exams server. A significant number of our plugins were designed to be used on just one of those sites.

Here are the numbers of custom plugins broken down by type (and ignoring sub-plugins of our custom plugins).

Plugin type               Number
Activity module           25
Admin tools               8
Authentication methods    2
Blocks                    30
Course formats            3
Editors                   1
Enrolment methods         1
Filters                   6
Gradebook reports         1
Local plugins             44
Message outputs           2
Portfolio outputs         1
Question behaviours       4
Question types            14
Quiz reports              6
Quiz access rules         2
Reports                   19
Repositories              3
Themes                    9
TinyMCE plugins           1

Thursday, November 28, 2013

Bug fixing as knowledge creation

There are lots of ways you can think about bug-fixing: it is just a job that developers do; it is problem solving; etc. Here I want to take one particular viewpoint, that it is generating new knowledge about a software system.

One way to think about software is that it is the embodiment of a set of requirements, of how something should work. For example, Moodle can be thought of as a lot of knowledge about what software is required to teach online, and how that software should be designed. Finding and fixing bugs increases that pool of knowledge by identifying errors or omissions and then correcting them.

The bug fixing process

We can break down the process of discovering and fixing a bug into the following steps. This is really trying to break the process down as finely as possible. As you read this list, please think about what new knowledge is generated during each step.

  1. Something's wrong: We start from a state of blissful ignorance. We think our software works exactly as it should, and then some blighter comes along and tells us "Did you know that sometimes ... happens?" Not what you want to hear, but just knowing that there is a problem is vital. In fact the key moment is not when we are told about the problem, but when the user encountered it. Good users report the problems they encounter with an appropriate amount of detail.
  2. Steps to reproduce: Knowing the problem exists is vital, but not a great place to start investigating. What you need to know is something like "Using Internet Explorer 9, if you are logged in as a student, are on this page, and then click that link, then on the next page press that button, then you get this error", where all the details given are relevant. This is called steps to reproduce. For some bugs they are trivial. For bugs that initially appear to be random, identifying the critical factors can be a major undertaking.
  3. Which code is broken: Once the developer can reliably trigger the bug, then it is possible to investigate. The first thing to work out is which bit of code is failing. That is, which lines in which file.
  4. What is going wrong: As well as locating the problem code, you also have to understand why it is misbehaving. Is it making some assumption that is not true? Is it misusing another bit of code? Is it mishandling certain unusual input values? ...
  5. How should it be fixed: Once the problem is understood, then you can plan the general approach to solving it. This may be obvious given the problem, but in some cases there is a choice of different ways you could fix it, and the best approach must be selected.
  6. Fix the code: Once you know how you will fix the bug, you need to write the specific code that embodies that fix. This is probably the bit that most people think of when you say bug-fixing, but it is just a tiny part.
  7. No unintended consequences: This could well be the hardest step. You have made a change which fixed the specific symptoms that were reported, but have you changed anything else? Sometimes a bug fix in one place will break other things, which must be avoided. This is a place where peer review, getting another developer to look at your proposed changes, is most likely to spot something you missed.
  8. How to test this change: Given the changes you made, what should be done to verify that the issue is fixed, and that nothing else has broken? You can start with the steps to reproduce. If you work through those, there should no longer be an error. Given the previous point, however, other parts of the system may also need to be tested, and those need to be identified.
  9. Verifying the fix works: Given the fixed software, and the information about what needs to be tested, then you actually need to perform those tests, and verify that everything works.

Some examples

In many cases you hardly notice some of the steps. For example, if the software always fails in a certain place with an informative error message, then that might jump you straight to Step 4. To give a recent example: MDL-42863 was reported to me with this error message:

Error reading from database

Debug info: ERROR: relation "mdl_questions" does not exist

LINE 1: ...ECT count(1) FROM mdl_qtype_combined t1 LEFT JOIN mdl_questi...

SELECT count(1) FROM mdl_qtype_combined t1 LEFT JOIN mdl_questions t2 ON t1.questionid = t2.id WHERE t1.questionid <> $1 AND t2.id IS NULL

[array (0 => '0',]

Error code: dmlreadexception

Stack trace:

  • line 423 of /lib/dml/moodle_database.php: dml_read_exception thrown
  • line 248 of /lib/dml/pgsql_native_moodle_database.php: call to moodle_database->query_end()
  • line 764 of /lib/dml/pgsql_native_moodle_database.php: call to pgsql_native_moodle_database->query_end()
  • line 1397 of /lib/dml/moodle_database.php: call to pgsql_native_moodle_database->get_records_sql()
  • line 1470 of /lib/dml/moodle_database.php: call to moodle_database->get_record_sql()
  • line 1641 of /lib/dml/moodle_database.php: call to moodle_database->get_field_sql()
  • line 105 of /admin/tool/xmldb/actions/check_foreign_keys/check_foreign_keys.class.php: call to moodle_database->count_records_sql()
  • line 159 of /admin/tool/xmldb/actions/XMLDBCheckAction.class.php: call to check_foreign_keys->check_table()
  • line 69 of /admin/tool/xmldb/index.php: call to XMLDBCheckAction->invoke()

I have emboldened the key bit that says where the error is. Well, there are really two errors here. One is that the Combined question type add-on refers to mdl_questions when it should be mdl_question. The other is that the XMLDB check should not die with a fatal error if presented with bad input like this. The point is, this was all immediately obvious to me from the error message.

Another recent example at the other extreme is MDL-42880. There was no error message in this case, but presumably someone noticed that some of their quiz settings had changed unexpectedly (Step 1). Then John Hoopes, who reported the bug, had to do some careful investigation to work out what was going on (Step 2). I am glad he did, because it was a pretty subtle thing, so in this case Step 2 was probably a lot of work. From there, it was obvious which bit of the code was broken (Step 3).

Note that Step 3 is not always obvious even when you have an error message. Sometimes things only blow up later, as a consequence of something that went wrong before. To use an extreme example: someone fills your kettle with petrol instead of water, and then you turn it on to make some tea and it blows up. The error is not with turning the kettle on to make tea, but with filling it with petrol. If all you have is shrapnel, finding out how the petrol ended up in the kettle might be quite hard. (I have no idea why I dreamt up that particular analogy!)

MDL-42880 also shows the difference between the conceptual Steps 4 and 5, and the code-related Steps 3 and 6. I thought the problem was with a certain variable becoming un-set at a certain time, so I coded a fix to ensure the value was never lost. That led to complex code that required a paragraph-long comment to try to explain it. Then I had a chat with Sam Marshall, who suggested that in fact the problem was that another bit of code was relying on the value of that variable, when actually the value was irrelevant. That led to a simpler (hence better) fix: stop depending on the irrelevant value.

What does this mean for software?

There are a few obvious consequences that I want to mention here, although they are well known good practice. I am sure there are other more subtle ones.

First, you want the error messages output by your software to be as clear and informative as possible. They should lead you to where the problem actually occurred, rather than having symptoms only manifesting later. We don't want exploding kettles. There are some good examples of this in Moodle.

Second, because Step 7, ensuring that you have not broken anything else, is hard, it really pays to structure your software well. If your software is made up of separate modules that are each responsible for doing one thing, and which communicate in defined ways, then it is easier to know what the effect of changing a bit of one component is. If your software is a big tangle, who knows the effect of pulling one string?

Third, it really pays to engage with your users and get them to report bugs. Of course, you would like to find and fix all the bugs before you release the software, but that is impossible. For example, we are working towards a new release of the OU's Moodle platform at the start of December. We have had two professional testers testing it for a month, and a few select users doing various bits of ad-hoc testing. That adds up to less than 100 person days. On the day the software is released, probably 50,000 different users will log in. 50,000 user days, even by non-expert testers, are quite likely to find something that no-one else noticed.

What does this mean for users?

The more important consequences are for users, particularly of open-source software.

  • Reporting bugs (Step 1) is a valuable contribution. You are adding to the collective knowledge of the project.

There are, however, some caveats that follow from the fact that in many projects, the number of developers available to fix bugs is smaller than the number of users reporting bugs.

  • If you report a bug that was already reported, then someone will have to find the duplicate and link the two. Rather than being a useful contribution, this just wastes resources, so try hard to find any existing bug report, and add your information there, before creating a new one.
  • You can contribute more by reporting good steps to reproduce (Step 2). It does not require a developer to work those out, and if you can do it, then there is more chance that someone else will do the remaining work to fix the bug. On the other hand, there is something of a knack to working out and testing which factors are, or are not, significant in triggering a bug. The chances are that an experienced developer or tester can work out the steps to reproduce quicker than you could. If, however, all the experienced developers are busy, then waiting for them to have time to investigate is probably slower than investigating yourself. If you are interested, you can develop your own diagnosis skills.
  • If you have an error message, then copy and paste it exactly. It may be all the information needed to get straight to Step 3 or 4. In Moodle you can get a really detailed error message by setting 'debugging' to 'DEVELOPER' level, then triggering the bug again (see the sketch after this list). (One of the craziest mis-features in Windows is that most error pop-ups do not let you copy-and-paste the message. Paraphrased error messages can be worse than useless.)
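
For reference, here is a sketch of the usual lines for turning on developer debugging in config.php (based on the hints in Moodle's config-dist.php; you can also set it through the admin settings UI):

// Show developer-level debugging output. Not for production servers!
@error_reporting(E_ALL | E_STRICT);
@ini_set('display_errors', '1');
$CFG->debug = (E_ALL | E_STRICT); // The same value as DEBUG_DEVELOPER.
$CFG->debugdisplay = 1;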

Finally, it is worth pointing out that Step 9 is another thing that can be done by the user, not a developer. For developers, it is really motivating when the person who reported the bug bothers to try it out and confirm that it works. This can be vital when the problem only occurs in an environment that the developer cannot easily replicate (for example an Oracle-specific bug in Moodle).

Conclusion

Thinking about bug finding and fixing as knowledge creation puts a more positive spin on the whole process than is normally the case. This shows that lots of people, not just developers and testers, have something useful to contribute. This is something that open source projects are particularly good at harnessing.

It also shows why it makes sense for an organisation like the Open University to participate in an open source community like Moodle: Bugs may be discovered before they harm our users. Other people may help diagnose the problem, and there is a large community of developers with whom we can discuss different possible solutions. Other people will help test our fixes, and can help us verify that they do not have unintended consequences.

Wednesday, July 3, 2013

Assessment in Higher Education conference 2013

Last week I attended the Assessment in Higher Education conference in Birmingham. This was the least technology-focused and most education-focused conference that I have been to. It was interesting to learn about the bigger picture of assessment in universities. One reason for going was that Sally Jordan wanted my help running a 'masterclass' about producing good computer-marked assessment on the first morning. I may write more about that in a future post. Also, I presented a poster about all the different online assessment systems the OU uses. Again, a possible future topic. For now I will summarise the other parts of the conference, the presentations I listened to.

One thing I was surprised to discover is how much the National Student Survey (NSS) is influencing what universities do. Clearly it is seen as something that prospective students pay attention to, and attracting students is important. However, as Margaret Price from Oxford Brookes University, the first keynote speaker, said, the kind of assessment that students like (and so rate highly in the NSS) is not necessarily the most effective educationally. That is, while student satisfaction is something worth considering, students don't have all the knowledge needed to evaluate the teaching they receive. Also, she suggested that the NSS ratings have made universities more risk-averse in trying innovative forms of assessment and teaching.

The opening keynote was about "Assessment literacy", making the case that students need to be taught a bit about how assessment works, so they can engage with it most effectively. That is, we want the students to be familiar with the mechanics of what they are being asked to do in assessment, so those mechanics don't get in the way of the learning; but more than that, we want the students to learn the most from all the tasks we set them, and assessment tasks are the ones students pay the most attention to, so we should help the students understand why they are being asked to do them. I dispute one thing that Margaret Price said. She said that at the moment, if assessment literacy is developed at all, that only happens serendipitously. However, in my time as a student, there were plenty of times when it was covered (although not by that name) in talks about study skills and exam technique.

Another interesting realisation during the conference was that, at least in that company (assessment experts), the "Assessment for learning" agenda is taken as a given. It is used as the reason that some things are done, but there is no debate that it is the right thing to do.

Something that is a hot topic at the moment is more authentic assessment. I think it is partly driven by technology improvements making it possible to capture a wider range of media, and to submit eportfolios. It is also driven by a desire for better pedagogy, and assessments that by their design make plagiarism harder. If you are being asked to apply what you have learned to something in your life (for example in a practice-based subject like nursing) it is much harder to copy from someone else.

I ended up going to all three of the talks given by OU folks. Is it really necessary to go to Birmingham to find out what is going on in the OU? Well, it was a good opportunity to do so. The first of these was about an on-going project to review the OU's assessment strategy across the board. So far a set of principles has been agreed (for example, affirming the assessment for learning approach, although that is nothing new at the OU) and they are about to be disseminated more widely. There was an interesting slide (which provoked some good discussion) pointing out that you need to balance top-down policy and strategy with bottom-up implementation that allows each faculty to use assessment that is effective for their particular discipline. There was another session, by people from Ulster and Liverpool Hope universities, that also talked about the top-down/bottom-up balance/conflict in policy changes.

In this OU talk, someone made a comment along the lines of, "why is the OU re-thinking its assessment strategy? You are so far ahead of us already and we are still trying to catch up." I am familiar with hearing comments like that at education technology conferences. It was interesting to learn that we are also held in similarly high regard for policy. The same questioner also used the great phrase "the OU effectively has a sleeper-cell in every other university, in the associate lecturer you employ". That makes what the OU does sound far more excitingly aggressive than it really is.

In the second OU talk, Janet Haresnape described a collaborative online activity in a third level environmental science course. These are hard to get right. I say that having suffered one as a student some years ago. This one seems to have been more successful, at least in part because it was carefully structured. Also, it started with some very easy tasks (put your name next to a picture and count some things in it), and the students could see the relationship between the slightly artificial task and what would happen in real fieldwork. Janet has been surveying and interviewing students to discover their attitudes towards this activity. The most interesting finding is that weaker students comment more, and more favourably, on the collaboration than the better students. They have more to learn from their peers.

The third OU talk was Sally Jordan talking about the ongoing change in the science faculty from summative to formative continuous assessment. It is early days, but they are starting to get some data to analyse. Nothing I can easily summarise here.

The closing keynote was about oral assessment. In some practice-based subjects like law and veterinary medicine it is an authentic activity. Also, a viva is a dialogue, which allows the extent of the student's knowledge to be probed more deeply than a written exam. With an exam script, you can only mark what is there. If something the student has written is not clear, then there is no way to probe that further. That reminded me of what we do in the Moodle quiz. For example in the STACK question type, if the student has made a syntax error in the equation they typed, we ask them to fix it before we try to grade it. Similarly, in Pattern-match questions, we spell check the student's answer and let them fix any errors before we try to grade it. Also, with all our interactive questions, if the student's first answer is wrong, we give them some feedback then let them try again. If they can correct their mistake themselves, then they get some partial credit. Of course computer-marked testing is typically used to assess basic knowledge and concepts, whereas an oral exam is a good way to test higher-order knowledge and understanding, but the parallel of enabling two-way dialogue between student and assessor appealed to me.

This post is getting ridiculously long, but I have to mention two other talks. Calum Delaney from Cardiff Metropolitan University reported on some very interesting work trying to understand what academics think about as they mark essays. Some essays are easy to grade, and an experienced marker will rapidly decide on the grade. Others, particularly those that are partly right and partly wrong, take a lot longer, weighing up the conflicting evidence. Overall, though, the whole marking process struck me, a relative outsider, as scarily subjective.

John Kleeman, chair of QuestionMark, UK, summarised some psychology research that shows that the best way to learn something so that you can remember it again is to test yourself on it, rather than just reading it. That is, if you want to be able to remember something, then practice remembering it. It sounds obvious when you put it that way, but the point is that there is strong evidence to back up that statement. So, clearly you should all now go and create Moodle (or QuestionMark) quizzes for your students. Also, in writing this long rambling blog post I have been practising recalling all the interesting things I learned at the conference, so I should remember them better in future. If you read this far, thank you, and I hope you got something out of it too.

Monday, July 1, 2013

Open University question types ready for Moodle 2.5

This is just a brief note to say that Colin Chambers has now updated all the OU question types to work with Moodle 2.5. Note that we are not yet running this code ourselves on our live servers, since we are on Moodle 2.4 until the autumn, but Phil Butcher has tested them all and he is very thorough.

You can download all these question types (and others) from the Moodle add-ons database.

Thanks to Dan Poltawski's Github repository plugin, that is easier than it used to be. Still, updating 10 plugins is pretty dull, so I feel like I have contributed a bit. I also reviewed most of the changes and fixed the unit tests.

I hope you enjoy our add-ons. I am wondering whether we should add the drag-and-drop question types to the standard Moodle release. What do you think? If that seems like a good idea to you, I suggest posting something enthusiastic in the Moodle quiz forum. It will be easier to justify adding these question types to standard Moodle if lots of non-OU Moodlers ask for it.

Friday, June 21, 2013

Book review: Computer Aided Assessment of Mathematics by Chris Sangwin

Chris is the brains behind the STACK online assessment system for maths, and he has been thinking about how best to use computers in maths teaching for well over ten years. This book is the distillation of what he has learned about the subject.

While the book focusses specifically on online maths assessment, it takes a very broad view of that topic. Chris starts by asking what we are really trying to achieve when teaching and assessing maths, before considering how computers can help with that. There are broadly two areas of mathematics: solving problems and proving theorems. Computer assessment tools can cope with the former, where the student performs a calculation that the computer can check. Getting computers to teach the student to prove theorems is an outstanding research problem, which is touched on briefly at the end of the book.

So the bulk of the book is about how computers can help students master the parts of maths that are about performing calculations. As Chris says, learning and practising these routine techniques is the un-sexy part of maths education. It does not get talked about very much, but it is important for students to master these skills. Doing this requires several problems to be addressed. We want randomly generated questions, so we have to ask what it means for two maths questions to be basically the same, and equally difficult. We have to solve the problem of how students can type maths into the computer, since traditional mathematics notation is two-dimensional, but it is easier to type a single line of characters. Chris precedes this with a fascinating digression into where modern maths notation came from, something I had not previously considered. It is more recent than you probably think.

Example of how STACK handles maths input

If we are going to get the computer to automatically assess mathematics, we have to work out what it is we are looking for in students' work. We also need to think about the outcomes we want, namely feedback for the student to help them learn; numerical grades to get a measure of how much the student has learned; and diagnostic output for the teacher, identifying which types of mistakes the students made, which may inform subsequent teaching decisions. Having discussed all the issues, Chris then brings them together by describing STACK. This is an opportune moment for me to add the disclaimer that I worked with Chris for much of 2012 to re-write STACK as a Moodle question type. That was one of the most enjoyable projects I have ever worked on, so I am probably biased. If you are interested, you can try out a demo of STACK here.

Chris rounds off the book with a review of other computer-assisted assessment systems for maths that have notable features.

In summary, this is a fascinating book for anyone who is interested in this topic. Computers will never replace teachers. They can only automate some of the more routine things that teachers do. (They can also be more available than teachers, making feedback on their work available to students even when the teacher is not around.) To automate anything via a computer, you really have to understand that thing. Hence this book about computer-assisted assessment gives a range of great insights into maths education. Highly recommended. Buy it here!