
Thursday, June 25, 2015

The Assessment in Higher Education conference 2015

I am writing this on a sunny evening, sitting in a pub overlooking Old Turn Junction, part of the Birmingham Canal Navigations, with a well-earned beer after two fascinating and exhausting days at the Assessment in Higher Education conference.

It was a lovely conference. The organising committee had set out to try to make it friendly and welcoming and they succeeded. There was a huge range of interesting talks and since I could not clone myself I was not able to go to them all. I am not going to describe individual talks in detail, but rather draw out what seemed to me to be the common themes.

A. It is all just assessment

The first keynote speaker (Maddalena Taras) said this directly, and there were a couple of other things along the same lines: the split between formative and summative assessment is a false dichotomy. If an assessment does not actually evaluate the students (give them a grade, hence summative) then it misses the main function of an assessment. This is not the same as saying that every assessment must be high stakes. Conversely, in the words of a quote Sally reminded me of:

“As I have noted, summative assessment is itself ‘formative’. It cannot help but be formative. That is not an issue. At issue is whether that formative potential of summative assessment is lethal or emancipatory. Does formative assessment exert its power to discipline and control, a power so possibly lethal that the student may be wounded for life? … Or, to the contrary, does summative assessment allow itself to be conquered by the student, who takes up a positive, even belligerent stance towards it, determined to extract every human possibility that it affords?” (Boud & Falchikov (2007) Rethinking Assessment in Higher Education: Learning for the Longer Term)

The first keynote was a critique of Assessment for Learning (AfL). Not that assessment should not help students learn. Of course it should. Rather, the speaker questioned some of the specific recommendations from the AfL literature in a thought-provoking way.

The 'couple of other things' were a talk from Jill Barber of the School of Pharmacy at Birmingham, about giving students quite detailed feedback after their end-of-year exams; and Sally Jordan’s talk (which I did not go to, since I have heard it internally at the OU) about the OU Science faculty's semantic wranglings over whether all their assessment gets called “summative” or “formative”, and hence how the marks for the separate assignments are added up, without changing what the assessed tasks actually are.

B. Do students actually attend to feedback?

The second main theme came out many times. On the one hand, students say they like feedback and demand more of it. On the other hand, there is quite a lot of evidence that many students don’t spend much time reading it, or that when they do, it does not necessarily help them to improve. So, there were various approaches suggested for getting students to engage more with feedback, for example by

  • giving feedback via a screen-cast video, talking them through their essay while highlighting with the mouse (David Wright & Damian Kell, Manchester Metropolitan University). Would students spend 19 minutes reading and digesting written feedback on an essay? Well, they got a 19-minute (on average) video - one of the few cases where some students thought it was too much feedback!
  • making feedback a dialogue. That is, encouraging students to write questions on the cover sheet when they hand the work in, for their tutor to answer as part of the feedback. That was what Rebecca Westrup from the University of East Anglia was doing.
  • Stefanie Sinclair from the OU religious studies department talked about work she had done with John Butcher & Anactoria Clarke assessing reflection in an access module (a module designed to help students with limited prior education to develop the skills they need to study at Level 1). Again, this was to encourage students to engage in a dialogue with their tutor about their learning.
  • Using peer and self assessment, so that students spend more time engaging with the assessment criteria by applying them to their own and others’ work. Maddalena Taras also suggested initially giving students’ work back without the marks or feedback (but after a couple of weeks of marking) so that they read it with fresh eyes before they get the feedback (first) and then the marks.
  • There was another peer assessment talk, by Blazenka Divjak of the University of Zagreb, using the Moodle Workshop tool. The results were along the same lines as other similar talks I have seen (for example at the OU, where we are also experimenting with the same tool). Peer assessment activities do help students understand the assessment criteria. They also help students appreciate more what teachers do. Students’ grading of their peers, particularly in aggregate, is reliable and comparable to the teacher’s grade.
  • A case of automated marking (in this case of programming exercises) where students clearly did engage with the feedback because they were allowed to submit repeatedly until they got it right. In computer programming this is authentic. It is what I do when doing Moodle development. (Stephen Nutbrown, Su Beesley, Colin Higgins, University of Nottingham and Nottingham Trent University.)
  • It was also something Sally touched on in her part of our talk. With the OU's computer-marked questions with multiple tries, students say the feedback helps them learn and that they like it. However, if you look at the data or usability lab observations, you see that in some cases some students are clearly paying no attention to the feedback they get.

C. The extent to which transparency in assessment is desirable

This was the main theme of the closing keynote by Jo-Anne Baird from the Oxford University Centre for Educational Assessment. The proposition is that if assessment is not transparent enough, it is unfair because students don’t really understand what is expected of them. A lot of university assessment is probably towards this end of the spectrum.

Conversely, if assessment is too transparent it encourages pathological teaching to the test. This is probably where most school assessment is right now, and it is exacerbated by the excessive ways school exams are made high stakes, for the student, the teacher and the school. Too much transparency (and risk averseness) in setting assessment can lead to exams that are too predictable, hence students can get a good mark by studying just those things that are likely to be on the exam. This damages validity, and more importantly damages education.

Between these extremes there is a desirable balance where students are given enough information about what is required of them to enable them to develop as knowledgeable and independent learners, without causing pathological behaviour. That, at least, is the hope.

While this was the focus of the last keynote, it resonated with several of the talks I listed in the previous section.

D. The NSS & other acronyms

The National Student Survey (NSS) is clearly a driver for change initiatives at a lot of other universities (as it was two years ago). It is, or at least is perceived to be, a big deal. Therefore it can be used as a catalyst or lever to get people to review and change their assessment practices, since feedback and assessment is something that students often give low ratings for. This struck me as odd, since I am not aware of this happening at the OU. I assume that is because the OU has so far scored highly in the NSS.

The other acronym floating around a lot was TESTA. This seems to be a framework for reviewing the assessment practice of a whole department or degree programme. In one case, however (a talk by Jessica Evans & Simon Bromley of the OU faculty of Social Science), their review was done before TESTA was invented, though along similar lines.

Finally

A big thank-you to Sue Bloxham and the rest of the organising team for putting together a great conference. Roll on 2017.

Friday, May 1, 2015

eSTEeM conference 2015

eSTEeM is an organising group within the Open University which brings together people doing research into teaching and learning in the STEM disciplines: Science, Technology, Engineering and Maths. Naturally enough for the OU, a lot of that work revolves around educational technology. They hold an annual conference for people to share what they have been doing. I went along because I like to see what people have been doing with our VLE, and hence how we could make it work better for students and staff in the future.

It started promisingly enough, in a way. As I walked in to get my cup of coffee after registration, I was immediately grabbed by Elaine Moore from Chemistry, who had two Moodle Quiz issues. She wanted the Combined question type to use the HTML editor for multiple-choice choices (good idea, we should put that on the backlog), and she had a problem with a Pattern-match question which we could not get to the bottom of over coffee.

But, on to the conference itself. I cannot possibly cover all the keynotes and parallel sessions so I will pick the highlights for me.

Assessment matters to students

The first was a graph from Linda Price’s keynote. Like most universities, at the end of every module we have a student satisfaction survey. The graph showed students' ratings in response to three of the questions:

  • Overall, I am satisfied with the quality of this module.
  • I had a clear understanding of what was required to complete the assessed activities.
  • The assessment activities supported my learning.

There was an extremely strong correlation between those. This is nothing very new. We know that assessment is important in determining the ‘hidden curriculum’, and hence we like to think that ‘authentic assessment’ is important. However, it is interesting to see how much this matters to students. Previously, I would not even have been sure that they could tell the difference.

The purpose of education

Into the parallel sessions. There was an interesting talk from the module team for TU100 My digital life, the first course in the computing and technology degrees. Some of the things they do in that module’s teaching are based around the importance of language, even in science. Learning a subject can be thought of as learning to construct the world of that subject through language, or as they put it, humanities-style thinking in technology education. Unsurprisingly, many students don’t like that: “I came to learn computing, not writing.” However, there is a strong correlation between students' language use and their performance in assessments. By the end of the module some students do come to appreciate what the module is trying to do.

This talk triggered a link back to another part of Linda Price’s keynote. An important (if now rather clichéd) question for formal education is “What is education for, when everything is now available on the web?” (or, to put it more crudely, “Why should students pay thousands of pounds for one of our degrees?”). The answer that came to me during this talk was “To make them do things they don’t enjoy, because we know it will do them good.” OK, so that is a joke, but I would like to think there is a nugget of truth in there.

Peer assessment

On to more specifically Moodle-related things. A number of modules have been trying out Moodle’s Workshop activity, a tool for peer review or peer assessment. The talk was from the SD815 Contemporary issues in brain and behaviour module team. Their activity involved students recording a presentation (PowerPoint + audio) that critically evaluated a research article. Then they had to upload it to the Moodle Workshop, and review each other's presentations as managed by the tool. Finally, they had to take their slide-cast, the feedback they had received, and a reflective note on the process and what they had learned from it, and hand it all in to be graded by their tutor.

Now, for OU students (at least), collaborative activities, particularly those tied to assessments, are typically another thing we make them do that they don’t enjoy. This activity added the complexities of PowerPoint and/or Open Office and recording audio. However, it seems to have worked remarkably well. Students appreciated all the things that are normally said about peer review: getting to see other approaches to the same task; practising the skills of evaluating others’ work and giving constructive feedback. In this case the task was one that the students (healthcare workers studying at postgraduate level) could see was relevant to their vocation, which brings us back to visibly authentic assessment, and the student satisfaction graph from the opening keynote.

For me the strongest message from this talk, however, is what was not said. There was very little said about the Moodle workshop tool, beyond a few screen-grabs to show what it looked like. It seems that this is a tool that does what you need it to do without getting in the way, which is normally what you want from educational technology.

Skipping briefly over

There are many more interesting things I could write about in detail, but to keep this post to a reasonable length I will just skim over the posters at lunch and pick out a couple of the other talks:

  • a session on learning analytics, in this case with a neural net, to try to identify early on those students (on TU100 again) who get through all the continuous assessment tasks with a passing grade, only to fail the end of module assessment, so that they could be targeted for extra support.
  • a whole morning on the second day, where we saw nine different approaches to remote experiments from around the world, for example the Open University's remote-control telescope, PIRATE. It left me with the impression that this sort of thing is much more feasible and worthwhile than I had previously thought.

Our session on online Quizzes

The only other session I will talk about in detail is the one I helped run. It was a ‘structured discussion’ about the OU’s use of iCMAs (interactive computer-marked assessments, which is what we call Moodle quizzes). I found this surprisingly nerve-wracking. I have given plenty of talks before, and you prepare them. You know what you are going to say, and you are fairly sure it is interesting. Therefore you are pretty sure what is going to happen. For this session, we just had three questions, and it was really up to the attendees how well it worked.

We did allow ourselves two five-minute presentations. We started with Frances Chetwynd showing some of the different ways quizzes are used in modules’ teaching and assessment strategies. This set up a 10-minute discussion of our first question: “How are iCMAs best used as part of an assessment strategy?”. For this, delegates were seated around four tables, with four or five participants and a facilitator at each table. The tables were covered with flip-chart paper for people to write on.

We were using a World Café format, so after 10 minutes I rang my bell, and all the delegates moved to a new table while the facilitators stayed put. Then, in new groups, they discussed the second question: "How can we engage students using iCMAs?" The facilitators were meant to provide a brief bridge from what had been said by the previous group at their table, before moving on to the new question with the new group.

After 10 minutes on the second question, we had the other five-minute talk, from Sally Jordan, showing some examples of what we have previously learned through scholarship into how iCMAs work in practice. (If you are interested in that, come to my talk at either MoodleMoot IE UK 2015 or iMoot 2015.) This led nicely, after one more round of musical chairs, to the third question: "Where next for iCMAs? Where next for iCMA scholarship?". Finally we wrapped up with a brief plenary to capture the key answers to that last question from each table.

By the end, I really had no idea how well it had gone, although each time I rang my bell, I felt I was interrupting really good conversations. Subsequently, I have written up the notes from each table, and heard from some of the attendees that they had found it useful and interesting, so that is a relief. We had a great team of facilitators (Frances, Jon, Ingrid, Anna), which helped. I would certainly consider using the same format again. With a traditional presentation, you are always left with the worry that perhaps you got more out of preparing and delivering the presentation than any of the audience did out of listening. In this case, I am sure the audience got much more out of it than me, which is no bad thing.

Monday, September 29, 2014

What makes something a horrible hack?

Over in Moodle bug MDL-42974, Derek Chirnside asked "What is it about a hack that makes it 'horrible'??". I had described the code sam wrote to fix that issue in those terms, while at the same time submitting it for integration. It was a fair enough comment. I had helped sam create the code, and it was the kind of code you only write to make things work in Internet Explorer 8.

Although "Horrible hack" is clearly an aesthetic judgement, and therefore rather subjective, I think I can give a definition. However, it is easier to start by defining the opposite term. What is "Good code"? Good code should have properties like this:

  1. It works: It does what it is supposed to.
  2. It is readable: Another developer can read it and see what it is supposed to do.
  3. It is logical: It does what it is supposed to do in a way that makes sense. It is not just that a developer can puzzle out what it does, but it is clear that it does just that and nothing else.
  4. It is succinct: It achieves what it is supposed to do without unnecessary code. This is a companion to point 3).
  5. It is maintainable: It is clear that the code will go on working in the future, or, if circumstances do change, it is clear how the code could be modified to adapt.

Note that property 1) is really just a starting point. It is not enough on its own.

A horrible hack is code that manages little more than property 1. I think sam's patch on MDL-42974 scores a full set:

  1. It is not at all obvious what the added code is for. Sam tried to mitigate that by adding a long comment to explain, but that is just a workaround for the hackiness of the code.
  2. There is no logical reason why the given change makes things work in IE <= 8. We were just fiddling around in Firebug to try to find out how IE was going wrong. Changing the display property on one div appeared to solve the display problem, so we turned that into code. We still don't really understand why. Another sign of the illogicality is the two setTimeout calls. Why do we need those two delays to make it work? No idea, but they are necessary.
  3. The whole chunk of added code should be unnecessary. Without the addition, it works in any other browser. We are adding some code that should be redundant just to make things work in IE.
  4. We don't understand why this code works, so we cannot understand if it will go on working. In this case, lack of maintainability is not too serious. The code only executes on IE8 or below. In due course we know we can just delete it.

Normally, you would wish to avoid code like this, but in this case it is OK because:

  • The hack is confined in one small area of the code.
  • There is a comment to explain what is going on.
  • It is clear that we will be able to remove this code in future, once usage of IE8 has dropped to a low enough level.

At least, we hope that the Moodle integration team agree that this code is acceptable for now. Otherwise, we wasted our time writing it.

Friday, April 25, 2014

Load-testing Moodle 2.6.2 at the OU

At the start of June we will upgrade the OU Moodle sites to Moodle 2.6. Before then we need to know that it will still perform well when subjected to our typical peak load of 100,000 page-views per hour. This time, I got 'volunteered' to do the testing.

The testing servers

To do the testing, we have a set of 10 servers that are roughly similar to our live servers. That is, six web servers for handling normal requests, and one web server that handles 'administrative' requests, that is, any URL starting /admin, /report or /backup. Those pages are often big, long-running processes, rather than quick page views, so it is better to put them on a different server that is tuned differently. There is one 'web' server just for running the cron batch processes. Finally we have a database server and a file server.

In order to be able to make easy comparisons, we make two copies of our live site onto these servers. That is, we have two different www_root folders, which correspond to the different URLs lrn2-perf-cur and lrn2-perf-upg. In due course we will upgrade one of the copies to the new release while leaving the other one running the current version of the code. This makes it easy to switch back and forth when comparing the two.

In addition to the servers running Moodle, we have 6 virtual machines to generate the simulated load.

The testing procedure

We test using JMeter. In order to test Moodle, you need to send lots of requests for different pages, many of which include numeric ids in the URLs. Therefore, the JMeter script needs to be written specifically for the site being tested. Fortunately, our former colleague James Brisland made a script that automatically generates the necessary JMeter script. We shared that script with the community, and you can find a copy here. However, we shared it a long time ago, and since then our version has probably diverged from the community version a bit. Oops!

I say this tool automatically generates the necessary JMeter script, but sadly that is an oversimplification. It fails in certain cases, for example if a forum is set to separate groups mode. So, having generated the JMeter script, you need to run it and check that it actually works. If not, you have to go into the courses and activities being tested and modify the settings. We really ought to automate that, but no one has had the time. Anyway, eventually (and this took ages) you have a working test script.

Tuning the test script

Once the test script works, in that it simulates users performing various actions without error, one at a time, then you have to start running it at high load. That is, simulating lots of users doing lots of things simultaneously. After it has settled down, you let it run for 15 or 20 minutes, and then look at what sort of load you are generating. The goal is to get about the same number of requests per second for each type of page (course view, forum view, post to forum, view resource, ...) in the test run as in real use on the live system. If not, you tweak the time delays, or the number of threads, and then run again. It took about four runs to get to a simulated load that was close to (actually slightly higher than) the target request rates we had taken from the live server logs.
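
As a rough sanity check on the arithmetic (my own rule of thumb, not anything the tool does for you): 100,000 page views per hour is only about 28 requests per second in total, and in a closed-loop JMeter test each thread contributes roughly 1 / (think time + response time) requests per second. So, for example, 300 threads pausing for about ten seconds between actions lands in the right region overall, and the per-page-type rates are then adjusted by tweaking the individual delays.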

All that creation and tuning of the test scripts is done on the lrn2-perf-cur copy of the site. Once that is OK, you run the same script against lrn2-perf-upg. That should give exactly the same performance, and before proceeding we want to verify that that is the case. It turned out at first that it was slightly different: I had to find the few admin settings that differed between the two servers. Once the configuration was the same, the performance was the same, and we were finally in a position to start comparing the old and new systems.

Upgrade to the new version of Moodle

The next step is to upgrade lrn2-perf-upg to the new code. This code is still work-in-progress. Final testing of the code before release happens next month, but we try to keep our code in a releasable state, so it should be OK for load-testing. However, this is the first time we have run the upgrade on a copy of all our data. Unsurprisingly, we found some bugs. Fortunately they were easily fixed, and better to find them now than later.

Also, a new version of Moodle comes with a lot of new configuration options. This is the moment to consider what we should set them to. Luckily, most of the default values were right, so there was not a lot to do. Moodle prompts you for most of the new settings you need to make as part of the upgrade. However, it does not prompt you to configure any new caches, so you have to remember to go and do that.

Compare performance

At long last (about one and a half weeks into the process) you are finally ready to run the test you want. How does 2.6 performance compare to 2.5? Here is a screen-grab of today's testing:

Good news: Moodle 2.6 is mostly a bit faster (5-10%) than Moodle 2.5. Bad news: every 15 minutes, it suddenly goes slow for about 15 seconds. What?!

Problem solving

Actually, there is a logical explanation. We have cron set to run every 15 minutes, so surely the problem is caused by cron, right? No. Wrong! We stopped cron running, and the spikes remained. We tried various things to see what it might be, and could not make any sense of it. One thing we discovered was that the spikes were about as large as the spikes you get by clicking the 'Purge all caches' button. OK, so something is purging caches, but what?

To cut a long story short, you need to remember that our two test sites lrn2-perf-cur and lrn2-perf-upg are sharing the same servers. Therefore they are sharing the same memcache storage. It appears that something in cron in Moodle 2.5 purges at least some of the caches. When we stopped cron on our Moodle 2.5 site the spikes went away on our 2.6 site. I am afraid we did not try to work out why Moodle 2.5 cron was purging caches, but there is probably a bug there. It turns out that purging caches does not cause a measurable slow-down in Moodle 2.5, at least not for us, which is worth knowing.

Why does Purge caches cause a slow-down in 2.6 but not in 2.5? I am pretty sure the reason is MDL-41436. When things slowed down, it was the course page that slowed down the most, and that is the one most dependent on the modinfo cache.
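
To make that dependency concrete, here is a minimal sketch of my reading of the code (not something we traced in the profiler): course-page rendering goes through get_fast_modinfo(), which is backed by the coursemodinfo cache in MUC, so the first view of each course after a purge has to rebuild that structure from the database.

$modinfo = get_fast_modinfo($course);   // $course is the course record; cheap on a cache hit, a DB rebuild just after a purge.
foreach ($modinfo->get_cms() as $cm) {
    // Everything displayed for each activity on the course page comes out of this cached structure.
}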

Summary

  • Moodle 2.6 is about 5-10% faster than 2.5, at least on our servers, which are RHEL5 + Postgres + memcache cluster store. (MDL-42071 - why has that not been integrated yet?)
  • In Moodle 2.5, doing Purge caches when your system is running at high load seems to cause remarkably little slow-down.
  • In Moodle 2.6, doing Purge caches does slow things down a lot, but only very briefly. Performance recovered within about 15 seconds in our test, but then the test was only using a few courses.
  • In Moodle 2.6, clicking Clear theme caches (at the top of Admin -> Appearance -> Themes -> Theme selector) causes no noticeable slow-down.

The bit about what happens when you clear the caches is important because sometimes, when you patch the system with a bug fix, you need to purge one or more caches to make the fix take effect. In the past, we did not know what effect that had. We were cautious and had people waiting up until after midnight to click the button at a time of low system load. It now turns out that is probably not necessary. We can clear caches during the working day, when staff are in the office to pick up the pieces if anything does go wrong.
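
For what it is worth, the purge does not even have to be a button click. Here is a minimal sketch of scripting it, assuming the file sits in your Moodle dirroot; purge_all_caches() in lib/moodlelib.php is the function behind the 'Purge all caches' button:

<?php
define('CLI_SCRIPT', true);         // Standard marker for a Moodle command-line script.
require(__DIR__ . '/config.php');   // Bootstraps Moodle; adjust the path if the script lives elsewhere.
purge_all_caches();                 // Same effect as clicking the button in the admin UI.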

Wednesday, April 23, 2014

The four types of thing a Moodle developer needs to know

In order to write code for Moodle, there is an awful lot you need to know. Quite how much was driven home for me when I taught a Moodle developers' workshop at the German MoodleMaharaMoot in February. When preparing for that workshop, I thought you could group that knowledge into three categories, but I have since added a fourth to my thinking.

1. The different types of Moodle plugin

The normal way you add functionality to Moodle is to create a plug-in. There are many different types of plug-in, depending on what you want to add (for example, a report, an activity module or a question type). Therefore, the first thing to learn is what the different types of plug-in are and when you should use them. Then, once you know which type of plug-in to create, you need to know how to make that sort of plug-in. For example, what exactly do you need to do to create a new question type?
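
To give a flavour of this first category, here is a minimal sketch of the version.php file that every plug-in must have. The component name qtype_myexample is made up; the prefix depends on the type of plug-in you are creating:

<?php
defined('MOODLE_INTERNAL') || die();

$plugin->component = 'qtype_myexample'; // Frankenstyle name: plug-in type prefix plus plug-in name.
$plugin->version   = 2014042300;        // This plug-in's version number, by convention YYYYMMDDXX.
$plugin->requires  = 2013111800;        // The minimum Moodle version needed (2.6 in this sketch).
$plugin->maturity  = MATURITY_ALPHA;    // Optional: how stable you claim the code is.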

2. How to make Moodle code do things

Irrespective of what sort of plug-in you are creating, you also need to know how to make your code do certain things. Moodle is written in PHP, so generic PHP skills are a prerequisite, but Moodle has its own libraries for many common tasks, like getting input from the user, loading and saving data from the database, displaying output, and so on. A developer needs to know many of these APIs.
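
As a taste of this second category, here is a sketch of the sort of thing a simple page in a hypothetical local plug-in does with the parameter, database, login and output APIs:

<?php
require(__DIR__ . '/../../config.php'); // Bootstrap Moodle; the path assumes local/myexample/index.php.

$courseid = required_param('id', PARAM_INT);  // Cleaned input from the URL.
$course = $DB->get_record('course', array('id' => $courseid), '*', MUST_EXIST); // Data access API.
require_login($course);                       // Access control.

$PAGE->set_url(new moodle_url('/local/myexample/index.php', array('id' => $courseid)));
$PAGE->set_title(format_string($course->fullname));

echo $OUTPUT->header();                       // Output API.
echo $OUTPUT->heading(format_string($course->fullname));
echo $OUTPUT->footer();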

3. How to get things done in the Moodle community

If you just want to write Moodle code for your own use, then the two types of know-how above are enough, but if you want to take full advantage of Moodle's open-source nature, then you need to learn how to interact with the rest of the Moodle development community: for example, how to ask for help, how to report a bug, how to submit the changes for a bug you have fixed or a feature you have implemented, how to get another developer to review your proposed code change, how to update your customised Moodle site using git, and so on.

4. Something about education

Those three points were what I thought of when trying to work out what I needed to teach during the developer workshop I ran. Since then, while listening to one of the presentations at the UK MoodleMoot as it happens, I realised that there was a fourth category of knowledge required to be a good Moodle developer. It matters that we are making software to help people teach and learn. I am struggling to think of specific concepts here, with URLs for where you can learn about them, as I gave in the previous sections, but there is a whole body of knowledge about what makes for effective learning and teaching and it is useful to have some appreciation of that. You also need some awareness of how educational institutions operate. If you hang around the Moodle community for any length of time you will also discover the educational culture is different in different countries. For example in the southern hemisphere the long summer holiday is also the Christmas holiday, and in America, they expect to award grades of more than 100%.

Summary

Does this subdivision into categories actually help you learn to be a Moodle developer? I am not sure, but it was certainly useful when planning my workshop. The workshop was structured around creating three plugins on the first day: a Filter, a Block and then a Local plug-in. However, those exercises were structured so that, while moving through different types of category-one knowledge, we also covered key topics from categories two and three in a sensible order. So it helped me, and I thought it was an interesting enough thought to share.

Thursday, February 27, 2014

Reflections on listening to conference presentations in German

I am at the MoodleMaharaMoot in Leipzig listening to people talk about Moodle.

First, the good news is that about half the words in English come from the same roots as German, so there are a fair number of words you can recognise, at least if you have time to read them from the screen. For words that seem really key, there is Google Translate. Also, the Germans seem to like using English phrases for eLearning-related things, like Learning Analytics or Multiple Choice.

However, I don’t think I was understanding even 10% of the words. What really makes a difference to intelligibility is what is on the screen. If the speaker just has PowerPoint slides with textual bullet points, that does not help. If the speaker uses the screen to show you what they are talking about - screen grabs or live demos - that is much better. Of course, this is just: show, don’t tell.

It also makes a big difference whether you already know a little bit about what is being said. I talked to some people from the University of Vienna two years ago when they started building their offline quiz activity, so I already knew what it was supposed to do. I followed that presentation (which contained many screen-grabs) better than most. What they have done looks really slick, by the way.

Regarding my own presentation, I feel vindicated in my plan to spend almost all of it doing a live demonstration of the question types I was talking about. Of course, I am sure that almost everyone in the audience has better English than I have German. Also, I apologise that I talked for the whole time, and did not leave an opportunity for questions.

Finally, I have been speculating (without reaching any conclusions) about whether the experience of sitting there, failing to understand almost everything that is being said, and just picking up some scraps from the slides, is giving me any empathy for people with severe disabilities who need major accessibility support to use software. As I say, these thoughts are inconclusive. What does anyone else think?

By the way, Germans applaud by rapping on the table with their knuckles. Your trivia fact for the day.

Wednesday, January 29, 2014

Moving the OU Moodle code to Moodle 2.6.1

I spent today upgrading our Moodle codebase from Moodle 2.5.4 to Moodle 2.6.1. This is the start of work towards our June release of the VLE. We have a March release based on Moodle 2.5.4 to get on the live servers first, and testing that will overlap with the development of the 2.6.1-based version.

Doing the merge

The first stage of the process is to merge in the new code. This is non-trivial because even if you just do

git checkout -b temp v2.5.4
git merge v2.6.1

then you will get a lot of merge conflicts. That is a product of how the Moodle project manages its stable branches. If your own code changes also lead to other merge conflicts, then sorting out the two is a real mess.

Fortunately, there is a better way, because we know how we want to resolve any conflicts between 2.5.4 and 2.6.1. We want to end up with 2.6.1. Using git merge strategies, you can do that:

git checkout -b merge_helper_branch v2.6.1
git merge --strategy=ours v2.5.4

That gives you a commit that is upstream of both v2.5.4 and v2.6.1, and which contains code that is identical to v2.6.1. You can verify that using git diff v2.6.1 merge_helper_branch. That should produce no output.

Having built that helper branch, you can then proceed to upgrade your version of the code. Our version of Moodle lives on a branch called ouvle, which we originally branched off Moodle 2.1.2 in October 2011. Since then, we have made lots of changes, including adding many custom plugins, and merging in many Moodle releases. Continuing from the above, we do

git checkout ouvle
git merge --strategy-option=patience merge_helper_branch

That gave a lot of merge conflicts, but they were all to do with our changes. Most of them were due to MDL-38189, which sam marshall developed for Moodle 2.6, and which we had back-ported into our 2.5 code. That back-port made a big mess, but fortunately most of the files affected did not have any other ou-specific changes, so I could just overwrite them with the latest versions from v2.6.1.

git checkout --theirs lang/en backup lib/filestorage admin/settings/development.php lib/form/form.js
git add lang/en backup lib/filestorage admin/settings/development.php lib/form/form.js

Similarly, we had backported MDL-35053, which led to more conflicts that were easy to resolve. Another case was the Single activity course format, which we had used as an add-on to Moodle 2.5. That is now part of the standard Moodle release. The change caused merge conflicts, but again there was a simple solution: take the latest from 2.6.1.

After all that, there were only about 5 files that needed more detailed attention. They were mostly where a change had been made to standard Moodle code right next to a place where we had made a change. (Silly rules about full stops at the ends of comments!) They were easy to fix manually. The one tricky file was lib/moodlelib.php, where about 400 lines of code had been moved to lib/classes/useragent.php. There were two ou-specific changes in the middle of that, which I had to re-do in the new version of that code.

Verifying the merge

Having resolved all the conflicts, it was then time to try to convince myself that I had not screwed anything up. The main check was to compare our ouvle code with the standard 2.6.1 code. Just doing git diff v2.6.1 ouvle does not work well because it shows the full contents of all the new files we have added. You need to read the git documentation and work out the incantation

git diff --patience --diff-filter=CDMRTUXB v2.6.1 ouvle

That tells git to just show changes to existing files - the ones that are part of standard Moodle 2.6.1 - because the filter letters cover every change type except A (added). That is a manageable amount of output to review. We have a strict policy that any change to core Moodle code is marked up like this:

// ou-specific begins #2381 MDL-28567
/*
        $select = new single_select(new moodle_url(CALENDAR_URL.'set.php',
                array('return' => base64_encode($returnurl->out(false)),
                        'var' => 'setcourse', 'sesskey'=>sesskey())),
                'id', $courseoptions, $selected, null);
*/
        $select = new single_select(new moodle_url(CALENDAR_URL.'view.php',
                array('return' => $returnurl, 'view' => 'month')),
                'course', $courseoptions, $selected, null);
// ou-specific ends #2381 MDL-28567

That is, the original Moodle code is still there, but commented out, alongside our modified version, and the whole thing is wrapped in paired begin and end markers that refer to a ticket id in our issues database and, if applicable, a Moodle tracker issue. In this case I can check that MDL-28567 has still not been resolved, so we still need this ou-specific change. What I am doing when looking at the diff output is verifying that every change is marked up like that, and that any issues mentioned are still relevant.

The other check is to search the whole codebase for ou-specific and again review all the issue numbers mentioned. These combined checks find a few ou-specific changes that are no longer needed, which is a good thing.

What happens next

Now that the code seems right, it is time to test it, so I upgrade my development install. It mostly works, except that our custom memcache session handler no longer works (the session code seems to have changed a lot, including an official memcached session handler in core). For now I just switch back to default Moodle sessions, and make a note to investigate this later.
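
For the record, this is roughly what the new core handler looks like in config.php, as I remember it from config-dist.php in 2.6. The server names are placeholders, and the setting names are worth double-checking before we rely on them:

$CFG->session_handler_class = '\core\session\memcached';
$CFG->session_memcached_save_path = 'sessionserver1:11211, sessionserver2:11211'; // Placeholder servers.
$CFG->session_memcached_prefix = 'memc.sess.key.'; // Keeps session keys apart from our other memcached data.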

Apart from that, the upgrade goes smoothly, and, apart from thousands of debugging warnings about the use of deprecated code, I have a working Moodle site, so I push the code to our git server and warn the rest of the team that they can upgrade if they feel brave.

The next thing, which will take place over the next few weeks is to check every single one of our custom plugins to verify that it still works properly in Moodle 2.6. To manage that we use a Google Docs spreadsheet that we can all edit that lists all the add-ons, with who is going to be responsible for checking it, and whether they have done so yet. Here is a small section.

The state of OU Moodle customisation

Our regular code-merges are a good moment to take stock of the extent to which we have customised Moodle. Here are some headline numbers:

  • 212 custom plug-ins: Of those, 10 are ones we have taken from the community, including Questionnaire, Certificate, Code-checker and STACK (we helped create those last two). Of our own plugins, 58 (over a quarter) are shared with the community, though the counting is odd because ForumNG contains 20 sub-plugins.
  • 17 ou-specific issues: That is, reasons we made a change to core code that could not be an add-on.
  • Due to those 17 reasons, there are 42 pairs of // ou-specific begins/ends comments in the code.

So, we continue to be disciplined about not changing core code unless we really have to, but the number of plugins is getting a bit crazy. A lot of the plugins are, however, very small: they just do one thing. Also, we run a range of very different sites, including OpenLearn, OpenLearn Works, The Open Science Lab and our exams server. A significant number of our plugins were designed to be used on just one of those sites.

Here are the numbers of custom plugins broken down by type (and ignoring sub-plugins of our custom plugins).

Plugin type              Number
Activity module          25
Admin tools              8
Authentication methods   2
Blocks                   30
Course formats           3
Editors                  1
Enrolment methods        1
Filters                  6
Gradebook reports        1
Local plugins            44
Message outputs          2
Portfolio outputs        1
Question behaviours      4
Question types           14
Quiz reports             6
Quiz access rules        2
Reports                  19
Repositories             3
Themes                   9
TinyMCE plugins          1

Thursday, November 28, 2013

Bug fixing as knowledge creation

There are lots of ways you can think about bug-fixing: it is just a job that developers do; it is problem solving; etc. Here I want to take one particular viewpoint, that it is generating new knowledge about a software system.

One way to think about software is that it is the embodiment of a set of requirements, of how something should work. For example, Moodle can be thought of as a lot of knowledge about what software is required to teach online, and how that software should be designed. Finding and fixing bugs increases that pool of knowledge by identifying errors or omissions and then correcting them.

The bug fixing process

We can break down the process of discovering and fixing a bug into the following steps. This is really trying to break the process down as finely as possible. As you read this list, please think about what new knowledge is generated during each step.

  1. Something's wrong: We start from a state of blissful ignorance. We think our software works exactly as it should, and then some blighter comes along and tells us "Did you know that sometimes ... happens?" Not what you want to hear, but just knowing that there is a problem is vital. In fact the key moment is not when we are told about the problem, but when the user encountered it. Good users report the problems they encounter with an appropriate amount of detail.
  2. Steps to reproduce: Knowing the problem exists is vital, but not a great place to start investigating. What you need to know is something like "Using Internet Explorer 9, if you are logged in as a student, are on this page, and then click that link then on the next page press that button, then you get this error." and that all the details there are relevant. This is called steps to reproduce. For some bugs they are trivial. For bugs that initially appear to be random, identifying the critical factors can be a major undertaking.
  3. Which code is broken: Once the developer can reliably trigger the bug, then it is possible to investigate. The first thing to work out is which bit of code is failing. That is, which lines in which file.
  4. What is going wrong: As well as locating the problem code, you also have to understand why it is misbehaving. Is it making some assumption that is not true? Is it misusing another bit of code? Is it mishandling certain unusual input values? ...
  5. How should it be fixed: Once the problem is understood, then you can plan the general approach to solving it. This may be obvious given the problem, but in some cases there is a choice of different ways you could fix it, and the best approach must be selected.
  6. Fix the code: Once you know how you will fix the bug, you need to write the specific code that embodies that fix. This is probably the bit that most people think of when you say bug-fixing, but it is just a tiny part.
  7. No unintended consequences: This could well be the hardest step. You have made a change which fixed the specific symptoms that were reported, but have you changed anything else? Sometimes a bug fix in one place will break other things, which must be avoided. This is a place where peer review, getting another developer to look at your proposed changes, is most likely to spot something you missed.
  8. How to test this change: Given the changes you made, what should be done to verify that the issue is fixed, and that nothing else has broken? You can start with the steps to reproduce. If you work through those, there should no longer be an error. Given the previous point, however, other parts of the system may also need to be tested, and those need to be identified.
  9. Verifying the fix works: Given the fixed software, and the information about what needs to be tested, then you actually need to perform those tests, and verify that everything works.

Some examples

In many cases you hardly notice some of the steps. For example, if the software always fails in a certain place with an informative error message, then that might jump you to step 4. To give a recent example: MDL-42863 was reported to me with this error message:

Error reading from database

Debug info: ERROR: relation "mdl_questions" does not exist

LINE 1: ...ECT count(1) FROM mdl_qtype_combined t1 LEFT JOIN mdl_questi...

SELECT count(1) FROM mdl_qtype_combined t1 LEFT JOIN mdl_questions t2 ON t1.questionid = t2.id WHERE t1.questionid <> $1 AND t2.id IS NULL

[array (0 => '0',]

Error code: dmlreadexception

Stack trace:

  • line 423 of /lib/dml/moodle_database.php: dml_read_exception thrown
  • line 248 of /lib/dml/pgsql_native_moodle_database.php: call to moodle_database->query_end()
  • line 764 of /lib/dml/pgsql_native_moodle_database.php: call to pgsql_native_moodle_database->query_end()
  • line 1397 of /lib/dml/moodle_database.php: call to pgsql_native_moodle_database->get_records_sql()
  • line 1470 of /lib/dml/moodle_database.php: call to moodle_database->get_record_sql()
  • line 1641 of /lib/dml/moodle_database.php: call to moodle_database->get_field_sql()
  • line 105 of /admin/tool/xmldb/actions/check_foreign_keys/check_foreign_keys.class.php: call to moodle_database->count_records_sql()
  • line 159 of /admin/tool/xmldb/actions/XMLDBCheckAction.class.php: call to check_foreign_keys->check_table()
  • line 69 of /admin/tool/xmldb/index.php: call to XMLDBCheckAction->invoke()

I have emboldened the key bit that says where the error is. Well, there are really two errors here. One is that the Combined question type add-on refers to mdl_questions when it should be mdl_question. The other is that the XMLDB check should not die with a fatal error if presented with bad input like this. The point is, this was all immediately obvious to me from the error message.

Another recent example at the other extreme is MDL-42880. There was no error message in this case, but presumably someone noticed that some of their quiz settings had changed unexpectedly (Step 1). Then John Hoopes, who reported the bug, had to do some careful investigation to work out what was going on (Step 2). I am glad he did, because it was a pretty subtle thing, so in this case Step 2 was probably a lot of work. From there, it was obvious which bit of the code was broken (Step 3).

Note that Step 3 is not always obvious even when you have an error message. Sometimes things only blow up later as a consequence of something that went wrong before. To use an extreme example: if someone fills your kettle with petrol instead of water, then when you turn it on to make some tea it blows up. The error is not with turning the kettle on to make tea, but with filling it with petrol. If all you have is shrapnel, finding out how the petrol ended up in the kettle might be quite hard. (I have no idea why I dreamt up that particular analogy!)

MDL-42880 also shows the difference between the conceptual Steps 4 and 5, and the code-related Steps 3 and 6. I thought the problem was with a certain variable becoming un-set at a certain time, so I coded a fix to ensure the value was never lost. That led to complex code that required a paragraph-long comment to try to explain it. Then I had a chat with Sam Marshall, who suggested that in fact the problem was that another bit of code was relying on the value of that variable, when actually the value was irrelevant. That led to a simpler (hence better) fix: stop depending on the irrelevant value.

What does this mean for software?

There are a few obvious consequences that I want to mention here, although they are well known good practice. I am sure there are other more subtle ones.

First, you want the error messages output by your software to be as clear and informative as possible. They should lead you to where the problem actually occurred, rather than having symptoms only manifesting later. We don't want exploding kettles. There are some good examples of this in Moodle.

Second, because Step 7, ensuring that you have not broken anything else, is hard, it really pays to structure your software well. If your software is made up of separate modules that are each responsible for doing one thing, and which communicate in defined ways, then it is easier to know what the effect of changing a bit of one component is. If your software is a big tangle, who knows the effect of pulling one string?

Third, it really pays to engage with your users and get them to report bugs. Of course, you would like to find and fix all the bugs before you release the software, but that is impossible. For example, we are working towards a new release of the OU's Moodle platform at the start of December. We have had two professional testers testing it for a month, and a few select users doing various bits of ad-hoc testing. That adds up to less than 100 person days. On the day the software is released, probably 50,000 different users will log in. 50,000 user days, even by non-expert testers, are quite likely to find something that no-one else noticed.

What does this mean for users?

The more important consequences are for users, particularly of open-source software.

  • Reporting bugs (Step 1) is a valuable contribution. You are adding to the collective knowledge of the project.

There are, however, some caveats that follow from the fact that in many projects, the number of developers available to fix bugs is smaller than the number of users reporting bugs.

  • If you report a bug that was already reported, then someone will have to find the duplicate and link the two. Rather than being a useful contribution, this just wastes resources, so try hard to find any existing bug report, and add your information there, before creating a new one.
  • You can contribute more by reporting good steps to reproduce (Step 2). It does not require a developer to work those out, and if you can do it, then there is more chance that someone else will do the remaining work to fix the bug. On the other hand, there is something of a knack to working out and testing which factors are, or are not, significant in triggering a bug. The chances are that an experienced developer or tester can work out the steps to reproduce quicker than you could. If, however, all the experienced developers are busy, then waiting for them to have time to investigate is probably slower than investigating yourself. If you are interested, you can develop your own diagnosis skills.
  • If you have an error message then copy and paste it exactly. It may be all the information needed to get straight to Step 3 or 4. In Moodle you can get a really detailed error message by setting 'debugging' to 'DEVELOPER' level, then triggering the bug again (see the sketch after this list). (One of the craziest mis-features in Windows is that most error pop-ups do not let you copy-and-paste the message. Paraphrased error messages can be worse than useless.)
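
Here is the sketch promised above: forcing DEVELOPER-level debugging from config.php rather than the admin UI, following the standard advice in the Moodle debugging documentation (handy when the error appears before you can even log in):

@error_reporting(E_ALL | E_STRICT); // Make PHP itself report everything.
@ini_set('display_errors', '1');
$CFG->debug = (E_ALL | E_STRICT);   // The same value as the DEVELOPER debugging level.
$CFG->debugdisplay = 1;             // Show the messages on the page, not just in the logs.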

Finally, it is worth pointing out that Step 9 is another thing that can be done by the user, not a developer. For developers, it is really motivating when the person who reported the bug bothers to try it out and confirm that it works. This can be vital when the problem only occurs in an environment that the developer cannot easily replicate (for example an Oracle-specific bug in Moodle).

Conclusion

Thinking about bug finding and fixing as knowledge creation puts a more positive spin on the whole process than is normally the case. This shows that lots of people, not just developers and testers, have something useful to contribute. This is something that open source projects are particularly good at harnessing.

It also shows why it makes sense for an organisation like the Open University to participate in an open source community like Moodle: Bugs may be discovered before they harm our users. Other people may help diagnose the problem, and there is a large community of developers with whom we can discuss different possible solutions. Other people will help test our fixes, and can help us verify that they do not have unintended consequences.

Monday, July 1, 2013

Open University question types ready for Moodle 2.5

This is just a brief note to say that Colin Chambers has now updated all the OU question types to work with Moodle 2.5. Note that we are not yet running this code ourselves on our live servers, since we are on Moodle 2.4 until the autumn, but Phil Butcher has tested them all and he is very thorough.

You can download all these question types (and others) from the Moodle add-ons database.

Thanks to Dan Poltawski's Github repository plugin, that is easier than it used to be. Still, updating 10 plugins is pretty dull, so I feel like I have contributed a bit. I also reviewed most of the changes and fixed the unit tests.

I hope you enjoy our add-ons. I am wondering whether we should add the drag-and-drop questions types to the standard Moodle release. What do you think? If that seems like a good idea to you, I suggest posting something enthusiastic in the Moodle quiz forum. It will be easier to justify adding these question types to standard Moodle if lots of non-OU Moodlers ask for it.

Friday, June 21, 2013

Book review: Computer Aided Assessment of Mathematics by Chris Sangwin

Chris is the brains behind the STACK online assessment system for maths, and he has been thinking about how best to use computers in maths teaching for well over ten years. This book is the distillation of what he has learned about the subject.

While the book focusses specifically on online maths assessment, it takes a very broad view of that topic. Chris starts by asking what we are really trying to achieve when teaching and assessing maths, before considering how computers can help with that. There are broadly two areas of mathematics: solving problems and proving theorems. Computer assessment tools can cope with the former, where the student performs a calculation that the computer can check. Getting computers to teach the student to prove theorems is an outstanding research problem, which is touched on briefly at the end of the book.

So the bulk of the book is about how computers can help students master the parts of maths that are about performing calculations. As Chris says, learning and practising these routine techniques is the un-sexy part of maths education. It does not get talked about very much, but it is important for students to master these skills. Doing this requires several problems to be addressed. We want randomly generated questions, so we have to ask what it means for two maths questions to be basically the same, and equally difficult. We have to solve the problem of how students can type maths into the computer, since traditional mathematics notation is two dimensional, but it is easier to type a single line of characters. Chris precedes this with a fascinating digression into where modern maths notation came from, something I had not previously considered. It is more recent than you probably think.

Example of how STACK handles maths input

If we are going to get the computer to automatically assess mathematics, we have to work out what it is we are looking for in students' work. We also need to think about the outcomes we want, namely feedback for the student to help them learn; numerical grades to get a measure of how much the student has learned; and diagnostic output for the teacher, identifying which types of mistakes the students made, which may inform subsequent teaching decisions. Having discussed all the issues, Chris then brings them together by describing STACK. This is an opportune moment for me to add the disclaimer that I worked with Chris for much of 2012 to re-write STACK as a Moodle question type. That was one of the most enjoyable projects I have ever worked on, so I am probably biased. If you are interested, you can try out a demo of STACK here.

Chris rounds off the book with a review of other computer-assisted assessment systems for maths that have notable features.

In summary, this is a fascinating book for anyone who is interested in this topic. Computers will never replace teachers. They can only automate some of the more routine things that teachers do. (They can also be more available than teachers, making feedback on their work available to students even when the teacher is not around.) To automate anything via a computer you really have to understand that thing. Hence this book about computer-assisted assessment gives a range of great insights into maths education. Highly recommended. Buy it here!

Thursday, May 2, 2013

Performance-testing Moodle

Background

The Open University is moving from Moodle 2.3.x to Moodle 2.4.3 in June. As is usual with a major upgrade, we (that is Rod and Derek) did some load testing to see if it still runs fast enough on our servers.

The first results were spectacularly bad! Moodle 2.4 was ten times slower, when we had been expecting it to be faster than 2.3. Fortunately, the first fix was easy.

Performance advice: if you are running Moodle 2.4 with load-balanced web servers, don't use the default caching option that stores the data in moodledata on a shared network drive. Use memcache instead.

Take 2 was a lot better. Moodle 2.4 was now only about 1.5 times slower. Still not good enough, but in the right ball park. This blog post is about what we did next, which was to use the tools Moodle provides to work out what was slow and fix it.

Moodle's profiling tool

When your software is too slow, you need measurements to tell you which are the slow bits. Tools that do that are called profilers. One of the better profiling tools for PHP is called XHProf. The good news is that it has already been integrated into Moodle, and there is documentation about getting it working. Basically, you just need to install a PHP extension and turn on some options under Admin -> Development -> Profiling.

Since we already had the necessary PHP extension on our servers, that was really easy. The option I chose was to profile a page when &PROFILEME was added to the end of the URL, but there are several ways to control it.
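
As an aside, my understanding is that the same switches can be set in config.php instead of through the admin screens. The setting names below are from my memory of config-dist.php, so treat this as a sketch and double-check them against the config-dist.php that ships with your Moodle version:

// Added to config.php (names from memory - please verify against config-dist.php):
$CFG->profilingenabled = true;    // master switch for the XHProf integration
$CFG->profilingallowme = true;    // allow &PROFILEME / &DONTPROFILEME on URLs
$CFG->profilingincluded = '';     // optionally, a list of scripts to profile automatically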

Profiling output

Once you have profiled a page, the results appear under Admin -> Development -> Profiling runs.

This just lists the runs you have done. You need to click through to see the details of one run. That looks like a big table of all the functions that were called as part of rendering the page, how many times each one was called, and how much time each function was responsible for.

Inclusive time is the amount of time taken by that function, and all the other functions it called. Exclusive time is the time taken by that function itself. Some people, like sam, seem to like that tabular view. I am a more visual person, so I tend to click on the [View full callgraph] link. That produces a crazily big image, showing graphically which functions call which other functions, and how much time is spent in each one. Here is the image for the run we are looking at:

You can click for the full-sized image. The yellow and red highlighting is applied automatically to try to highlight places where a lot of time is being spent. Sometimes it is helpful. Sometimes not. The red box in the bottom right is where we do database queries. No surprise there. We know calling the database is one of the slowest things you can do in Moodle code. The other red box is fetching data from memcache, which also involves connecting to another server.

What you have to look for is somewhere on the diagram that makes you go "What! We are spending how much time doing that?! That's surely not necessary." In this case, my eye was drawn to the far right of the yellow chain. When viewing this small course, we are fetching the course format object 134 times, and doing that is accounting for about 9% of the page-load time. There is no way we need to do that.

Fixing the problem

Once you have identified what appears to be a gross inefficiency, then you have to fix it. Mostly that follows the normal Moodle bug-fixing mechanics, but it is worth saying a bit about the different approaches you could take to changing the code:

  1. You might work out that what is being done is unnecessary. Then you can just remove it. For example MDL-39452 or MDL-39449. This is the best case. We have both improved performance and simplified the code.
  2. The next option is to take an overview of the code, and re-organise it to be more sensible. For example, in the course format case, we should probably just get the course format object once, and then use it. However, that would be a big risky change, which I did not want to do at this time (just before the Moodle 2.5 release). This approach does, however, also have the potential to simplify the code while improving performance.
  3. The next option is some other sort of refactoring. For example get_plugin_list was getting called a lot, and it in turn was calling the generic function clean_param to validate something. clean_param is quite fast, but not when you call it a thousand times. Therefore, it was worth extracting a simpler is_valid_plugin_name function. Doing that (MDL-39445) reduced the page load time by about 2%, but did make the code slightly more complex. Still, that is a worthwhile trade-off.
  4. The last option is to add caching. If you are doing the same thing repeatedly, and it is slow, and you can't avoid doing it repeatedly, then remember the answer the first time you compute it, and reuse it later. This should be the option of last resort because caches definitely increase the code complexity, and if you forget to clear them when necessary you introduce bugs. However, as in the course format example we are looking at, they can make a big difference. This fix reduced page-load times by 8%. (There is a generic sketch of the caching pattern just after this list.)
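
To make option 4 concrete, here is the general shape of the static-cache pattern in PHP. This is a deliberately generic sketch, not the actual course format or MUC code, and expensive_lookup() is a made-up stand-in for whatever slow work is being repeated:

<?php
// Generic static-cache pattern (illustration only, not real Moodle code).
function expensive_lookup($id) {
    // Stand-in for a database query, file-system scan, or similar.
    return "thing $id";
}

function get_thing($id) {
    static $cache = array();                  // survives for the rest of this request
    if (!array_key_exists($id, $cache)) {
        $cache[$id] = expensive_lookup($id);  // only computed the first time per $id
    }
    return $cache[$id];
}

echo get_thing(42), "\n";   // slow: computes and stores the value
echo get_thing(42), "\n";   // fast: returned straight from the cache

// The trade-off from point 4: if the underlying data can change during the
// request, you also need a way to reset $cache, or you have just written a new bug.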

So far, we have found nine speed-ups we can make to Moodle 2.4 in the core Moodle code, and about the same in OU plugins. That is probably a 10-20% speed-up on most pages. Some of those are new problems introduced in Moodle 2.4. Others have been there since Moodle 2.0. We could really benefit from more people looking at Moodle profiling output often, and that is why I wrote this article.

Monday, April 8, 2013

Do different media affect the effectiveness of teaching and learning?

Here is some thirty-year-old research that still seems relevant today:

Richard E. Clark, 1983, "Reconsidering Research on Learning from Media", Review of Educational Research, Vol. 53, No. 4 (Winter, 1983), pp. 445-459.

This paper reviews the seemingly endless research asking whether teaching using Media X is inherently more effective than the same instruction in Media Y. Given the age of the paper, you will not be surprised to learn that the research cited covers media like radio for education (a hot research topic in the 1950s), television (1960s) and early computer-assisted assessment (1970s). Clark's earliest citation, however, is "since Thorndike (1912) recommended pictures as a labor saving device in instruction." Images as novel educational technology! Well, they were once. The point is that basically the same research was done for each new medium to come along, and it was all equally inconclusive.

Here are some choice quotes that nicely summarise the article:

Based on this consistent evidence, it seems reasonable to advise strongly against future media comparison research. Five decades of research suggest that there are no learning benefits to be gained from employing different media in instruction, regardless of their obviously attractive features or advertised superiority.

Where learning benefits are at issue, therefore, it is the method, aptitude, and task variables of  instruction that should be investigated.

The best current evidence is that media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition.

Clark does not miss the fact that the effectiveness of learning is not the only problem in education:

Of course there are instructional problems other than learning that may be influenced by media (e.g., costs, distribution, the adequacy of different vehicles to carry different symbol systems, equity of access to instruction).

Since this paper is a thorough review of a lot of the available literature, it contains a number of other gems. For example:

Ksobiech (1976) told 60 undergraduates that televised and textual lessons were to be (a) evaluated, (b) entertainment, or (c) the subject of a test. The test group performed best on a subsequent test with the evaluation group scoring next best and the entertainment group demonstrating the poorest performance.

Hess and Tenezakis (1973) ... Among a number of interesting findings was an unanticipated attribution of more fairness to the computer than to the teacher.

I wonder how much later research fell into the trap outlined in this paper. I am not familiar enough with the literature, but presumably there were lots of papers about the world-wide web, VLEs, social media, mobiles and tablets for education. I wonder how novel they really were?

Today, computers and the internet have made media cheaper to produce and more readily accessible than ever before. This removes many constraints on the instructional techniques available, but what this old paper is reminding us is that when it comes to teaching, it is not the media that matters, but the instructional design.

Wednesday, August 15, 2012

Automating git

This is a long-overdue follow-up to my previous post about using git to fix Moodle bugs. Thanks to Andrew Nichols of LUNS for nudging me in to writing this.

Git has an efficient command-line interface, but even so, there are some sequences of commands that you find yourself typing repeatedly. Git provides a mechanism called aliases which can be used to reduce this repetitive typing. This post explains how I use it in my Moodle development.

Basic usage

Let us start with the simplest possible example. I get bored typing git cherry-pick in full all the time. The solution is to edit the file .gitconfig in my home directory, and add

[alias]
        cp     = cherry-pick

Then git cp … is equivalent to git cherry-pick …. That saves 9 characters every time I have to copy a bug fix to a stable branch.

Simple aliases like this can also be used to supply options. Another one I have set up is

        ff     = merge --ff-only

I use that when I need to update one of my local branches to match a remote branch. Suppose I think I am on the master branch, and I want to update that to the latest moodle/master. Normally one would just type git merge moodle/master and it would look like this:

timslaptop:moodle_head tim$ git merge moodle/master
Updating ddd84e9..b658200
Fast-forward

Suppose, however, that I had made a mistake, and I was actually on some other branch. Then git would try to do a merge between master and that branch, which is not what I want. The --ff-only option tells git not to do that. Instead it will stop with an error if it can't do a fast forward. So, to prevent mistakes, I normally use that option, and I do it frequently enough I found it worthwhile to create the alias.

Getting more ambitious

Sometimes the repeated operation you want to automate is a sequence of git commands. For example, when a new weekly build of Moodle comes out, I need to type a sequence of commands like this:

git checkout master
git fetch moodle
git merge --ff-only moodle/master
git push origin master

That updates my local copy of the master branch with the latest from moodle.org and then copies that to my area on github. To automate this sort of thing, you have to start using the power of Unix shell scripting. (If you are on Windows, don't worry, because you typically get the bash shell when you install git.)

Fortunately, you don't need to know much scripting, and you can probably just copy these examples blindly. The first thing to know is that you can put two commands on one line if you separate them using a semi-colon (just like in PHP). The previous sequence of commands could be typed on one line as

git checkout master; git fetch moodle; git merge --ff-only moodle/master; git push origin master

(Note that these lines of code are getting quite long, and will probably line-wrap. It should, however, be a single line of code.)

Doing it this way turns out to be a bad idea. What happens if one of the commands gives an error? Well, the system will just move on to the next command, even though the error from the previous command probably left things in an unknown state. Dangerous! Fortunately there is a better way. If you use && instead of ; then any error will cause everything to stop immediately. If you are familiar with PHP, then just imagine that every command is a function that returns true or false to say whether it succeeded or not. That is not so far from the truth. So, the right way to join the commands together looks like this:

git checkout master && git fetch moodle && git merge --ff-only moodle/master && git push origin master
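
If the PHP analogy helps, here it is spelled out as a runnable toy (nothing to do with git, and the function names are made up):

<?php
// Toy PHP analogy: && short-circuits, so when an early step "fails"
// (returns false), the later steps never run - just like && in the shell.
function checkout_master() { echo "checkout\n"; return true;  }
function fetch_moodle()    { echo "fetch\n";    return false; }  // pretend this one fails
function push_origin()     { echo "push\n";     return true;  }

$ok = checkout_master() && fetch_moodle() && push_origin();
// Prints "checkout" and "fetch" only; push_origin() is never called, and $ok is false.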

Now we know what we want to automate, we need to teach this to git. It is a bit tricky because we don't just want to convert one single git command into another single git command. Instead we want to convert one git command into a sequence of shell commands. Fortunately this is supported; you just need to know the right syntax:

        updatemaster = !sh -c 'git checkout master && git fetch moodle && git merge --ff-only moodle/master && git push origin master' -

Now I just have to type git updatemaster to run that sequence of commands.

Parameterising your aliases

That is all very well for master, but what about all the stable branches? Do I have to create lots of separate aliases like update23, update22, update21, …? Of course not. Git was created by and for computer programmers. Shell scripts can take parameters, and the solution is an alias that looks like

        update = !sh -c 'git checkout MOODLE_$1_STABLE && git fetch moodle && git merge --ff-only moodle/MOODLE_$1_STABLE && git push origin MOODLE_$1_STABLE' -

With that alias, git update 23 will update my MOODLE_23_STABLE branch, git update 22 will update my MOODLE_22_STABLE, and so on.

You can use any number of parameters. If you remember my previous blog post, typically I will create the bug fix on a branch with a name like MDL-12345 that starts from master, and then I will want to copy that to a branch called MDL-12345_23 branching off MOODLE_23_STABLE. With the following alias, I just have to type git cpfix MDL-12345 23 in my Moodle 2.3 stable check-out:

        cpfix = !sh -c 'git fetch -p origin && git checkout -b $1_$2 MOODLE_$2_STABLE && git cherry-pick origin/master..origin/$1 && git push origin $1_$2' -

One final example that belongs in this section:

        killbranch = !sh -c 'git branch -d $1 && git push origin :$1' -

That deletes a branch both in the local repository and also from my area on github. That is useful once one of my bug fixes has been integrated. I then no longer need the MDL-12345 branch and can eliminate it with git killbranch MDL-12345.

To boldly go …

Of course, all this automation comes with some risk. If you are going to screw up, automation lets you screw up more things quicker. I feel obliged to emphasise that at this point. If you are going to shoot yourself in the foot, a machine gun gives the most spectacular results, and we are about to build one, at least metaphorically.

We just saw the killbranch command that can be used to clean up branches that have been integrated. What happens if I submitted lots of branches for integration last week? Then I have to delete lots of branches. Can that be automated? Using git I can at least get a list of those branches:

timslaptop:moodle_head tim$ git checkout master
Already on 'master'
timslaptop:moodle_head tim$ git branch --merged
  MDL-12345
  MDL-23456
* master

Those are the branches that are included in master, and so are presumably ones that have already been integrated. It is a bit irritating that the master branch itself is included in the list, but I can get rid of it using the standard command grep:

timslaptop:moodle_head tim$ git branch --merged | grep -v master
  MDL-12345
  MDL-23456

I have a list of branches to delete, but how can I actually delete them? I need to execute a command for each of those branch names. Once again, we find that shell scripting was developed by hackers, for hackers. The command xargs does exactly that: it executes a given command once for each line of input it receives. Feeding in the list of branches, and getting it to execute the killbranch command looks like this:

git branch --merged | grep -v master | xargs -I "{}" git killbranch "{}"

Now to make that into an alias:

        killmerged = !sh -c 'git checkout $1 && git branch --merged | grep -v $1 | xargs -I "{}" git killbranch "{}"' -

With that in place, git killmerged master will delete all my branches that have been integrated into master. Note that you can use one alias (killbranch) inside another (killmerged). That makes it easier to build more complex aliases.

Once I have deleted all the things that were integrated, I am left with the branches I have in progress that have not been integrated yet. Those all need to be rebased, and that can be automated too:

        updatefix = !sh -c 'git checkout $1 && git rebase $2 && git checkout $2 && git push origin -f $1' -
        updatefixes = !sh -c 'git checkout $1 && git branch | grep -v $1 | xargs -I "{}" git updatefix "{}" $1' -

With those in place, I just type git updatefixes master, and that will rebase all my branches, both locally and on github. Use at your own risk!

That's all, folks

To summarise, here is the whole of the alias section of my .gitconfig file:

[alias]
        cp     = cherry-pick
        ff     = merge --ff-only
        cpfix  = !sh -c 'git fetch -p origin && git checkout -b $1_$2 MOODLE_$2_STABLE && git cherry-pick origin/master..origin/$1 && git push origin $1_$2' -
        update = !sh -c 'git checkout MOODLE_$1_STABLE && git fetch moodle && git merge --ff-only moodle/MOODLE_$1_STABLE && git push origin MOODLE_$1_STABLE' -
        killbranch = !sh -c 'git branch -d $1 && git push origin :$1' -
        killmerged = !sh -c 'git checkout $1 && git branch --merged | grep -v $1 | xargs -I "{}" git killbranch "{}"' -
        updatefix = !sh -c 'git checkout $1 && git rebase $2 && git checkout $2 && git push origin -f $1' -
        updatefixes = !sh -c 'git checkout $1 && git branch | grep -v $1 | xargs -I "{}" git updatefix "{}" $1' -

There is limited documentation for this on the git config man page. There is more on the git wiki.

Thursday, August 2, 2012

Standards

Standardisation efforts are odd things. Most successful standards seem to have come out of one or a few brilliant individuals, and the standardisation committees only took over after the thing in question became widely adopted. Think of C, C++, Java, HTML, HTTP, JavaScript, SQL… Of course, it is only with hindsight that we know those were successful things, that the people who created them were brilliant, and that it was worth investing effort in a standardisation committee to get different implementations to be interoperable. There are many fewer examples of successful standards that started with a committee. I am sure there are some, but I am failing to think of any right now.

Even when there are standards, that does not magically solve all your problems. Ask any developer about the problems of getting their web site to work on all browsers, particularly Internet Explorer, despite the existence of the HTML, CSS and JS standards; or look at the work Moodle has to do to work with the four databases we support, even though SQL is supposed to be a standardised language.

In theory a standard makes sense. If you have n different systems you want to move data between, then

  • If you go directly from system to system, you would have to write ½n*(n-1) different importers.
  • Given a common standard, you only need to write n different importers.

In practice, different systems have slightly different features, so you cannot perfectly copy data from one system to another. An importer from X to Y is not a perfect thing; it has to fudge some details. Now compare the two ways of handling import:

  • An importer for System Y that directly imports the files output by System X can know all about the details of System X, so it can do the best possible job of dealing with the incompatible features.
  • Using Standard S, System X has to save its data in format S dealing with any incompatibilities between what X supports and what S supports. Then System Y has to take file S and import it, dealing with any incompatibilities between what S supports and what Y supports, and it has to do that without the benefit of knowing that the data originally came from System X.

Therefore, going for direct import is likely to give better results, although at the cost of more work.

The particular case I am thinking about is, of course, moving questions between different eAssessment systems. The only standard that exists is IMS QTI, which has always struck me as the worst sort of product of a committee. It is not widely adopted and it is horribly complicated. Also, if we wanted to make Moodle support QTI, we would have to completely rewrite Moodle to work the way QTI specifies. That is sort-of fair enough. If you want to display HTML like a web browser, you basically have to start from scratch and write your code from the ground up to work the way the HTML, CSS and JavaScript standards say. These standards are not designed to make content interoperate between different existing systems. You need only look at the horrible mess you get when you do Save as… -> HTML in MS Word, or even just copy-and-paste from Word to Moodle, to be convinced of that.

So, QTI is trying to solve the wrong problem. It is trying to be a full-featured standard that you can only support by basing your whole software around what the standard says. We don't want to rewrite the whole Moodle question engine just to support some standard that hardly anyone else uses yet. We just want to be able to import 99%+ of the questions, from other systems and from publishers, that teachers can get access to. The kind of standard we want is more like CSV files: a nice, simple standard for transferring data between spreadsheets and other applications.

In the past, it has always been easier to write separate importers for each other system Moodle wants to import from, rather than trying to deal with one very complex generic standard like QTI. See the screen-grab of Moodle's import UI to the right. To write a new importer, you just need some example files in the format you want to support, containing a few questions of each type. Then it is easy to write the code to parse that sort of file, and convert the data to the format Moodle expects.
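
If you have never looked inside one of these importers, the skeleton is roughly as follows. This is a from-memory sketch of the question/format plugin structure, with made-up class and helper names, so check a real importer (the Examview one, say) for the exact details:

<?php
// Rough sketch of a Moodle question import format plugin (illustration only).
// The real base class is qformat_default in question/format.php; the method
// names here are from memory, so verify them against an existing format plugin.
class qformat_myformat extends qformat_default {

    public function provide_import() {
        return true;                               // this plugin can import
    }

    public function readquestions($lines) {
        // $lines is the uploaded file, already split into an array of lines.
        $questions = array();
        foreach ($this->split_into_chunks($lines) as $chunk) {
            $question = $this->defaultquestion();  // start from sensible defaults
            $question->qtype = 'shortanswer';
            $question->name = $chunk['name'];
            $question->questiontext = $chunk['text'];
            // ... fill in the answers, grades and feedback here ...
            $questions[] = $question;
        }
        return $questions;
    }

    protected function split_into_chunks($lines) {
        // Made-up helper: this is where you parse the vendor's file format.
        return array();
    }
}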

Having said that, the current situation is not perfect. The problem is that most of these other file formats are output by commercial software. Therefore, many developers cannot easily get sample files in those formats to use for developing and testing code. As a result, some of the importers are buggy. We have to rely on people in the community who care enough, and who have access to the software, to create example files for us. There was a good example of that recently: Someone called Rick Jerz from Le Claire, Iowa produced a good example Examview file, and long-time quiz contributor Jean-Michel Vedrine from St Etienne, France used that to fix the bugs in the Examview importer.

On the standardisation front, there is a glimmer of hope. IMS Common Cartridge is a standard for moving learning content from one VLE to another. It uses a tiny, and hence manageable, subset of QTI that tries to solve the “transfer 99%+ of the questions teachers use” problem. It should be possible to get Moodle to read and write that QTI subset. We just need someone with the time and inclination to do the work. It is even possible that the OU's OpenLearn project will be that someone, but QTI import/export is just one of many things on their to-do list.

Wednesday, June 20, 2012

Interesting workshop about self-assessment tools

About 10 days ago, I took part in a very interesting workshop about the use of assessment tools to promote learning:

Self-assessment: strategies and software to stimulate learning

The day was organised by Sally Jordan from the OU, and Tony Gardner-Medwin from UCL, and supported by the HEA, so thanks to all of them for making it happen.

People talked about different assessment tools (not all Moodle), how they were getting students to use them, and in some cases what evidence there was for whether that was effective.

Parts of the event were recorded, and you can now access the recordings at http://stadium.open.ac.uk/stadia/preview.php?whichevent=1955&s=1. There is a total of 3.5 hours of video there, so you may not want to watch it all. My presentation is in Part 3, which also includes the final discussion, all in 30 minutes, and provides a reasonable summary of the day.

Despite having spent the whole day at the event discussing various aspects of self-assessment, I don't think we reached a single definition of what it is. Actually, I think it is clear that it is not one thing, but rather a useful way of looking at many different things, from the point of view of what most helps students learn.

One of the tools discussed during the day was PeerWise. If you have not come across that yet, then you should take a look, because it looks like a very interesting tool. There is a good introduction on YouTube:


Thursday, March 29, 2012

Fixing a bug in Moodle core: the mechanics

Several people at work have asked me about this, so I thought I would write it out as a blog post. In this post, I want to focus on the mechanics of using git and the Moodle tracker to prepare a bug-fix and submit it. Therefore I need a really simple bug. Fortunately one was reported recently: MDL-32039. If you go and look at that, you can see that the mistake is that I failed to write coherent English when working on the code to upgrade from Moodle 2.0 to 2.1.

What we need to do

This bug was introduced in Moodle 2.1, and affects every version since then. Since it is a bug, it needs to be fixed in all supported versions, which means on the 2.1 and 2.2 stable branches, and on the master branch.

My development set-up

I need to fix and then test code on three branches. The way I handle this is to have a separate install of Moodle for each stable branch, and one for master.

I use Eclipse as my IDE, so these three copies of Moodle are separate projects in my Eclipse workspace .../workspace/moodle_head, .../workspace/moodle_22, and .../workspace/moodle_21. Each of these folders is a git repository. In each repository, I have two remotes set up, for example:

timslaptop:moodle_head tim$ git remote -v
moodle git://git.moodle.org/moodle (fetch)
moodle git://git.moodle.org/moodle (push)
origin git@github.com:timhunt/moodle.git (fetch)
origin git@github.com:timhunt/moodle.git (push)

One is called moodle and points to the master copy of the code on moodle.org. The other is called origin and points to my area on github, where I publish my changes.

In order to test the code, I have three different Moodle installs, each pointing at one of these copies of the code. Each install uses a different database prefix in config.php, so they can share one database.
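
For what it is worth, the only interesting differences between the three config.php files are the prefix, wwwroot and dataroot; the database connection details are shared. Something like this, with the values invented for illustration:

// Extract from moodle_head/config.php (values invented for this example):
$CFG->dbtype   = 'pgsql';
$CFG->dbhost   = 'localhost';
$CFG->dbname   = 'moodledev';        // one shared database ...
$CFG->dbuser   = 'moodledev';
$CFG->dbpass   = 'not-my-real-password';
$CFG->prefix   = 'mdlhead_';         // ... with a different table prefix per install
$CFG->wwwroot  = 'http://localhost/moodle_head';
$CFG->dataroot = '/path/to/moodledata_head';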

Getting ready to fix the bug on the master branch

The way I normally work is to fix the bug on the master branch first, and then transfer the fix to previous branches.

The first thing I need to do is to make sure my master branch is up to date, which I do using git:

timslaptop:workspace tim$ cd moodle_head
timslaptop:moodle_head tim$ git fetch moodle
remote: Counting objects: 1442, done.
remote: Compressing objects: 100% (213/213), done.
remote: Total 817 (delta 624), reused 790 (delta 602)
Receiving objects: 100% (817/817), 197.21 KiB | 111 KiB/s, done.
Resolving deltas: 100% (624/624), completed with 225 local objects.
From git://git.moodle.org/moodle
   8925a12..09f011a  MOODLE_19_STABLE -> moodle/MOODLE_19_STABLE
   1cd62bf..a280d40  MOODLE_20_STABLE -> moodle/MOODLE_20_STABLE
   a7899ca..c54172b  MOODLE_21_STABLE -> moodle/MOODLE_21_STABLE
   a81e8c4..58db57a  MOODLE_22_STABLE -> moodle/MOODLE_22_STABLE
   c856a1f..a280078  master     -> moodle/master
timslaptop:moodle_head tim$ git checkout master
Switched to branch 'master'
timslaptop:moodle_head tim$ git merge --ff-only moodle/master
Updating c856a1f..a280078
Fast-forward
 ... lots of diff --stat output ...
timslaptop:moodle_head tim$ git push origin master
Everything up-to-date

(Why is everything already up-to-date on github? because I was fixing bugs at work today, and so had already updated my github space from there.)

Since there were updates, I now need to go to http://localhost/moodle_head/admin/index.php and let Moodle upgrade itself.

Fixing the bug on the master branch

First, I want to create a new branch for this bug fix, starting from where the master branch currently is. My convention is to use the issue id as the branch name, so:

timslaptop:moodle_head tim$ git checkout -b MDL-32039 master
Switched to a new branch 'MDL-32039'

Other people use other conventions. For example they might call the branch MDL-32039_qeupgradehelper_typos. That is a much better name. It helps you see immediately what that branch is about, but I am too lazy to type long names like that.

To fix this bug it is just a matter of going into admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php and editing the two strings that were wrong.

Except that, if I screwed up those two strings, it is quite likely that I made other mistakes nearby. I therefore spent a bit of time proof-reading all the strings in that language file (it is not very long). That was worthwhile. I found and fixed two extra typos. This sort of thing is always worth doing. When you see one bug report, spend a bit of time thinking about and checking whether other similar things are also broken.

OK, so here is the bug fix:

timslaptop:moodle_head tim$ git diff -U1
diff --git a/admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php b/admin
index 3010666..7bd7c13 100644
--- a/admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php
+++ b/admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php
@@ -50,3 +50,3 @@ $string['gotoresetlink'] = 'Go to the list of quizzes that can
 $string['includedintheupgrade'] = 'Included in the upgrade?';
-$string['invalidquizid'] = 'Invaid quiz id. Either the quiz does not exist, or 
+$string['invalidquizid'] = 'Invalid quiz id. Either the quiz does not exist, or
 $string['listpreupgrade'] = 'List quizzes and attempts';
@@ -57,5 +57,5 @@ $string['listtodo_desc'] = 'This will show a report of all the
 $string['listtodointro'] = 'These are all the quizzes with attempt data that st
-$string['listupgraded'] = 'List already upgrade quizzes that can be reset';
+$string['listupgraded'] = 'List already upgraded quizzes that can be reset';
 $string['listupgraded_desc'] = 'This will show a report of all the quizzes on t
-$string['listupgradedintro'] = 'These are all the quizzes that have attempts th
+$string['listupgradedintro'] = 'These are all the quizzes that have attempts th
 $string['noquizattempts'] = 'Your site does not have any quiz attempts at all!'
@@ -82,2 +82,2 @@ $string['upgradedsitedetected'] = 'This appears to be a site t
 $string['upgradedsiterequired'] = 'This script can only work after the site has
-$string['veryoldattemtps'] = 'Your site has {$a} quiz attempts that were never 
+$string['veryoldattemtps'] = 'Your site has {$a} quiz attempts that were never

(Note that:

  • I would not normally use the -U1 option. That just makes the output smaller, for the benefit of this blog post.
  • The diff is chopped off at 80 characters wide, which is the size of my terminal window.)

Now I need to test that the fix actually works. I go to Site administration ▶ Question engine upgrade helper in my web browser, and verify that the strings now look OK.

OK, so I have a good bug-fix and I need to commit it:

timslaptop:moodle_head tim$ git add admin/tool
timslaptop:moodle_head tim$ git status
# On branch MDL-32039
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#
# modified:   admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php
#
timslaptop:moodle_head tim$ git commit -m "MDL-32039 qeupgradehelper: fix typos in the lang strings"
[MDL-32039 9e45982] MDL-32039 qeupgradehelper: fix typos in the lang strings
 1 files changed, 4 insertions(+), 4 deletions(-)

Notice that I followed the approved style for Moodle commit comments. First the issue id, then a brief indication of which part of the code is affected, then a colon, then a brief summary of what the fix was. This first line of the commit comment is meant to be less than about 70 characters long, which can be a challenge!

If this had been a more complex fix, I would probably have added some additional paragraphs to the commit comment to explain things (and so I would have typed the comment in my editor, rather than giving it on the command-line with the -m option). In this case, however, the one line commit comment says enough.

Now I need to publish this change to github so others can see it:

timslaptop:moodle_head tim$ git push origin MDL-32039
Counting objects: 15, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (8/8), 653 bytes, done.
Total 8 (delta 5), reused 0 (delta 0)
To git@github.com:timhunt/moodle.git
 * [new branch]      MDL-32039 -> MDL-32039

Now I can go to a URL like https://github.com/timhunt/moodle/compare/master...MDL-32039, and see the bug-fix through the github web interface.

More to the point, I can go to the tracker issue, click Request peer review, and fill in the details of this git branch, including that compare URL.

If this was a complex fix, I would then wait for someone else to review the changes and confirm that they are OK. In this case, however, the fix is simple and I will just carry on without waiting for a review.

Transferring the fix to the 2.2 stable branch

So, now I want to apply the same fix to my moodle_22 code. First I need to update that install. Since this is similar to what we did above to update master, I will not show the output of these commands, just what I typed:

timslaptop:moodle_head tim$ cd ../moodle_22
timslaptop:moodle_22 tim$ git fetch moodle
timslaptop:moodle_22 tim$ git checkout MOODLE_22_STABLE
timslaptop:moodle_22 tim$ git merge --ff-only moodle/MOODLE_22_STABLE
timslaptop:moodle_22 tim$ git push origin MOODLE_22_STABLE

Then I visit http://localhost/moodle_22/admin/index.php to complete the upgrade.

(This may seem a bit laborious, but look out for a future blog post where I intend to talk about how I automate some of this. I only typed out the commands in full this time because I was writing this blog post.)

I want to apply the bug-fix I did on master on top of MOODLE_22_STABLE, and fortunately the command git cherry-pick is designed to do exactly that. (Since we are back in new territory, I will start showing the output of commands again.)

timslaptop:moodle_22 tim$ git fetch -p origin
remote: Counting objects: 1138, done.
remote: Compressing objects: 100% (118/118), done.
remote: Total 586 (delta 451), reused 569 (delta 434)
Receiving objects: 100% (586/586), 189.65 KiB, done.
Resolving deltas: 100% (451/451), completed with 176 local objects.
From github.com:timhunt/moodle
 * [new branch]      MDL-32039  -> origin/MDL-32039
   2117dcb..a280078  master     -> origin/master
timslaptop:moodle_22 tim$ git checkout -b MDL-32039_22 MOODLE_22_STABLE
Switched to a new branch 'MDL-32039_22'
timslaptop:moodle_22 tim$ git cherry-pick origin/MDL-32039
[MDL-32039_22 2c92dc7] MDL-32039 qeupgradehelper: fix typos in the lang strings
 1 files changed, 4 insertions(+), 4 deletions(-)
timslaptop:moodle_22 tim$ git push origin MDL-32039_22
Counting objects: 15, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 662 bytes, done.
Total 8 (delta 5), reused 3 (delta 1)
To git@github.com:timhunt/moodle.git
 * [new branch]      MDL-32039_22 -> MDL-32039_22

Notice that the convention I use is to append _22 to the branch name to distinguish the branch for Moodle 2.2 stable from the branch for master. Other people use different conventions, but this one is simple and works for me.

Of course, in the middle of that, I checked that the fix actually worked in Moodle 2.2. In this case, there is not much to worry about, but with more complex changes, you really have to check. For example the fix you did on the master branch might have used a new API that is not available in Moodle 2.2. In that case, you would have had to redo the fix to work on the stable branch.

Transferring the fix to the 2.1 stable branch

Now I rinse and repeat for the 2.1 branch. (I will suppress the command output again, until the last command, when something interesting happens.)

timslaptop:moodle_22 tim$ cd ../moodle_21
timslaptop:moodle_21 tim$ git fetch moodle
timslaptop:moodle_21 tim$ git checkout MOODLE_21_STABLE
timslaptop:moodle_21 tim$ git merge --ff-only moodle/MOODLE_21_STABLE
timslaptop:moodle_21 tim$ git push origin MOODLE_21_STABLE
timslaptop:moodle_21 tim$ git fetch -p origin
timslaptop:moodle_21 tim$ git checkout -b MDL-32039_21 MOODLE_21_STABLE
timslaptop:moodle_21 tim$ git cherry-pick origin/MDL-32039
error: could not apply 9e45982... MDL-32039 qeupgradehelper: fix typos in the lang strings
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'

So, git cherry-pick could not automatically apply the bug fix. To see what is going on, I use git status to get more information:

timslaptop:moodle_21 tim$ git status
# On branch MDL-32039_21
# Unmerged paths:
#   (use "git add/rm ..." as appropriate to mark resolution)
#
# deleted by us: admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php
#
no changes added to commit (use "git add" and/or "git commit -a")

That may or may not make things clear. Fortunately, I know the history behind this. What is going on here is that in Moodle 2.1, this code was in local/qeupgradehelper, and in Moodle 2.2 it moved to admin/tool/qeupgradehelper, and this confuses git. Therefore, I will have to sort things out myself.

In this case, we can just move the altered file to the right place:

timslaptop:moodle_21 tim$ mv admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php local/qeupgradehelper/lang/en/local_qeupgradehelper.php

Then use git diff to check the changes are just what we expect:

timslaptop:moodle_21 tim$ git diff -U1
diff --git a/local/qeupgradehelper/lang/en/local_qeupgradehelper.php b/local/qeu
index ac883b5..7bd7c13 100644
--- a/local/qeupgradehelper/lang/en/local_qeupgradehelper.php
+++ b/local/qeupgradehelper/lang/en/local_qeupgradehelper.php
@@ -19,3 +19,3 @@
  *
- * @package    local
+ * @package    tool
  * @subpackage qeupgradehelper
@@ -50,3 +50,3 @@ $string['gotoresetlink'] = 'Go to the list of quizzes that can
 $string['includedintheupgrade'] = 'Included in the upgrade?';
-$string['invalidquizid'] = 'Invaid quiz id. Either the quiz does not exist, or 
+$string['invalidquizid'] = 'Invalid quiz id. Either the quiz does not exist, or
 $string['listpreupgrade'] = 'List quizzes and attempts';
@@ -57,5 +57,5 @@ $string['listtodo_desc'] = 'This will show a report of all the
 $string['listtodointro'] = 'These are all the quizzes with attempt data that st
-$string['listupgraded'] = 'List already upgrade quizzes that can be reset';
+$string['listupgraded'] = 'List already upgraded quizzes that can be reset';
 $string['listupgraded_desc'] = 'This will show a report of all the quizzes on t
-$string['listupgradedintro'] = 'These are all the quizzes that have attempts th
+$string['listupgradedintro'] = 'These are all the quizzes that have attempts th
 $string['noquizattempts'] = 'Your site does not have any quiz attempts at all!'
@@ -82,2 +82,2 @@ $string['upgradedsitedetected'] = 'This appears to be a site t
 $string['upgradedsiterequired'] = 'This script can only work after the site has
-$string['veryoldattemtps'] = 'Your site has {$a} quiz attempts that were never 
+$string['veryoldattemtps'] = 'Your site has {$a} quiz attempts that were never 

Actually, you can see that there is one wrong change there (the change to @package), so I need to undo that. The easy way to undo that would be to edit the file in Eclipse, but I want to show off another git trick:

timslaptop:moodle_21 tim$ git checkout -p local
diff --git a/local/qeupgradehelper/lang/en/local_qeupgradehelper.php b/local/qeupgradehelper/lang/en/local_qeupgradehelper.php
index ac883b5..7bd7c13 100644
--- a/local/qeupgradehelper/lang/en/local_qeupgradehelper.php
+++ b/local/qeupgradehelper/lang/en/local_qeupgradehelper.php
@@ -17,7 +17,7 @@
 /**
  * Question engine upgrade helper langauge strings.
  *
- * @package    local
+ * @package    tool
  * @subpackage qeupgradehelper
  * @copyright  2010 The Open University
  * @license    http://www.gnu.org/copyleft/gpl.html GNU GPL v3 or later
Discard this hunk from worktree [y,n,q,a,d,/,j,J,g,e,?]? y
@@ -48,16 +48,16 @@
 ... lots more diff output here ...
Discard this hunk from worktree [y,n,q,a,d,/,K,j,J,g,s,e,?]? q

Now, git diff will confirm that the change is just what we want, so we can test the change and then finish up (output suppressed again):

timslaptop:moodle_21 tim$ git add local
timslaptop:moodle_21 tim$ git commit
timslaptop:moodle_21 tim$ git push origin MDL-32039_21

Submitting the fix for integration

Now I have tested versions of the fix on all three of the branches where the bug needed to be fixed. So, I can go back to the Tracker issue and submit it for integration.

When I get to the bug, I see that Jim Tittsler, who reported the bug, has seen my request for peer review, and added a comment:

Fantastic! It really makes a difference when people follow up on the bugs they report, and supply extra information, or just say thank you. In this case, although I intended to carry on without a peer review, I got one.

Now I press the Submit for integration... button, and fill in the fix version, the details of the other branches, and, most importantly, some testing instructions, so that someone else can test the fix next Wednesday as part of the integration process.

Finally, we are done. Now we sit back and wait for next Monday, when the weekly integration cycle starts. Our change will be reviewed, tested, and, all being well, included in the next weekly build of Moodle.

Reflection

Is this an overly laborious process? Well, it is if you try to describe every detail in a blog post! In normal circumstances, however, it really does not take long. In normal circumstances I could probably have done this fix in ten to fifteen minutes.

What usually takes the time is thinking about the problem, and writing and testing the code. This takes much longer than typing the git commands and completing the tracker issue. Writing the testing instructions can be laborious, particularly if it is a complex issue, but that is normally time well spent. It forces you to think carefully about the changes you have made, and what needs to be done to verify that they fix the bug that was reported without breaking anything else. As I said in my previous blog post, I think testing instructions are a really good discipline.

I hope this rather long blog post was interesting, or at least useful to somebody.