<h2>Tim's blog</h2>
<p>Occasional musings about Moodle, Music, Go and whatever else comes to mind. By Tim Hunt.</p>
<h3>The online assessment Turing test (2017-03-02)</h3>
<p>In the back-channel for <a href="http://transformingassessment.com/events_1_march_2017.php">yesterday's Transforming Assessment webinar</a> (which I would recommend) Geoff Crisp asked me:</p>
<blockquote><p>"Tim - what about the Turing test - what if a student could not tell the difference between a computer giving them feedback and the teacher?"</p></blockquote>
<p>I think this is a really nice question. Food for some quite wide-ranging thoughts about what online assessment should be.</p>
<p>On the whole, I stand by the snap answer I gave at the time. Computers and human markers (at least currently) have different strengths. The computer (having been set up by a human teacher) can be there at any time the student wants, able to give immediate feedback on a range of more or less basic practice activities. A human teacher is only available at certain times, but is able to give feedback in a more holistic way. They may know the student, and have some concept of how their subject is best learned, and on that basis give the student some really meaningful advice about how best to improve.</p>
<p>I know there is adaptive learning hype about computers being able to know the students and therefore offer contextual advice, but I will believe that when (if) I see it. </p>
<p>If you are thinking about designing a course today, you are much better off understanding the strengths and weaknesses of both computer-marked and conventional assessment, and using each for where they work best. There is currently nothing to be gained by trying to hide where you are using computer marking.</p>
<p>I think a reasonable analogy is with searching for information. You might do a Google search, which will give you the kind of results that a computer can give. Alternatively, you could ask a friend who knows more about the subject, and they will give you a different sort of advice about what to read. Neither is necessarily better. In some cases one of the two approaches might be clearly more appropriate. In other cases either would do. If you really want to understand something in depth, you probably want to use both approaches, and it is an advantage that each will give different results that help in different ways.</p>
<p>If we are trying to create self-regulating learners, then it can be a merit that a computer only gives basic templated feedback, which could be as little as just right/wrong. The learner needs to do more work themselves to get from the feedback to an action to take to improve. This is not always a benefit, but it could be.</p>
<p>So, while the idea of an assessment Turing test is usefully thought provoking, I don't think it is educationally useful, at least not for the foreseeable future. Having said that, the nicest thing anyone said about an online assessment system I helped build is still</p>
<blockquote><p>"It's like having a tutor at your elbow."</p></blockquote>
<p>The key word there is "like", which is not the same as "indistinguishable from".</p>
<h3>The Assessment in Higher Education conference 2015 (2015-06-25)</h3>
<p>I am writing this on a sunny evening, sitting in <a href="http://www.taylor-walker.co.uk/pub/malt-house-birmingham/p0937/">a pub</a> overlooking <a href="https://en.wikipedia.org/wiki/Old_Turn_Junction">Old Turn Junction</a>, part of the <a href="https://canalrivertrust.org.uk/canals-and-rivers/birmingham-canal-navigations">Birmingham Canal Navigations</a>, with a well-earned beer after two fascinating and exhausting days at the <a href="http://aheconference.com">Assessment in Higher Education conference</a>.</p>
<p>It was a lovely conference. The organising committee had set out to try to make it friendly and welcoming and they succeeded. There was a huge range of interesting talks and since I could not clone myself I was not able to go to them all. I am not going to describe individual talks in detail, but rather draw out what seemed to me to be the common themes.</p>
<h4>A. It is all just assessment</h4>
<p>The first keynote speaker (<a href="http://www.sunderland.ac.uk/research/researchstaff/facultyofeducationsociety/education/drmaddalenataras/">Maddalena Taras</a>) said this directly, and there were a couple of other things along the same lines: the split between formative and summative assessment is a false dichotomy. If an assessment does not actually evaluate the students (give them a grade, hence summative) then it misses the main function of an assessment. This is not the same as saying that every assessment must be high stakes. Conversely, in the words of a quote Sally reminded me of:</p>
<blockquote>
<p>“As I have noted, summative assessment is itself ‘formative’. It cannot help but be formative. That is not an issue. At issue is whether that formative potential of summative assessment is lethal or emancipatory. Does formative assessment exert its power to discipline and control, a power so possibly lethal that the student may be wounded for life? … Or, to the contrary, does summative assessment allow itself to be conquered by the student, who takes up a positive, even belligerent stance towards it, determined to extract every human possibility that it affords?” (Boud & Falchikov (2007) <i>Rethinking Assessment in Higher Education: Learning for the Longer Term</i>)</p></blockquote>
<p>The first keynote was a critique of <a href="https://en.wikipedia.org/wiki/Assessment_for_learning">Assessment for Learning</a> (AfL). Not that assessment should not help students learn. Of course it should. Rather, the speaker questioned some of the specific recommendations from the AfL literature in a thought-provoking way.</p>
<p>The 'couple of other things' were a talk from Jill Barber of the School of Pharmacy at Birmingham, about giving students quite detailed feedback after their end-of-year exams; and Sally Jordan’s talk (which I did not go to, since I have heard it internally at the OU) about the OU Science faculty's semantic wranglings about whether all their assessment gets called “summative” or “formative”, and hence how the marks for the separate assignments are added up, without changing what the assessed tasks actually are.</p>
<h4>B. Do students actually attend to feedback?</h4>
<p>The second main theme came out many times. On the one hand, students say they like feedback and demand more of it. On the other hand, there is quite a lot of evidence that many students don’t spend much time reading it, or that when they do, it does not necessarily help them to improve. So, there were various approaches suggested for getting students to engage more with feedback, for example by</p>
<ul>
<li>giving feedback via a screen-cast video, talking the student through their essay while highlighting with the mouse (David Wright & Damian Kell, Manchester Metropolitan University). Would students spend 19 minutes reading and digesting written feedback on an essay? Well, they got a 19-minute (on average) video - one of the few cases where some students thought it was too much feedback!</li>
<li>making feedback a dialogue. That is, encouraging students to write questions on the cover sheet when they hand the work in, for their tutor to answer as part of the feedback. That was what Rebecca Westrup from the University of East Anglia was doing.</li>
<li>Stefanie Sinclair from the OU religious studies department talked about work she had done with John Butcher & Anactoria Clarke assessing reflection in an access module (a module designed to help students with limited prior education to develop the skills they need to study at Level 1). Again, this was to encourage students to engage in a dialogue with their tutor about their learning.</li>
<li>Using peer and self assessment, so that students spend more time engaging with the assessment criteria by applying them to their own and others’ work. Also, Maddalena Taras suggested that initially you give the students’ work back without the marks or feedback (but after a couple of weeks of marking) so that they re-read it with fresh eyes; only then do they get the feedback, and after that the marks.</li>
<li>There was another peer assessment talk, by Blazenka Divjak of the University of Zagreb, using the Moodle Workshop tool. The results were along the same lines as other similar talks I have seen (for example at the OU where we are also experimenting with the same tool). Peer assessment activities do help students understand the assessment criteria. It helps them appreciate what teachers do more. Students’ grading of their peers, particularly in aggregate, is reliable, and comparable to the teacher’s grade.</li>
<li>A case of automated marking (in this case of programming exercises) where students clearly did engage with the feedback because they were allowed to submit repeatedly until they got it right. In computer programming this is authentic. It is what I do when doing Moodle development. (Stephen Nutbrown, Su Beesley, Colin Higgins, University of Nottingham and Nottingham Trent University.)</li>
<li>It was also something Sally touched on in her part of <a href="http://www.slideshare.net/tjh1000/2015-06-ahe-hunt-and-jordan">our talk</a>. With the OU's computer-marked questions with multiple tries, students say the feedback helps them learn and that they like it. However, if you look at the data or usability-lab observations, you see that in some cases students are clearly paying no attention to the feedback they get.</li>
</ul>
<h4>C. The extent to which transparency in assessment is desirable</h4>
<p>This was the main theme of the closing keynote by <a href="http://www.education.ox.ac.uk/about-us/directory/professor-jo-anne-baird/">Jo-Anne Baird</a> from the Oxford University Centre for Educational Assessment. The proposition is that if assessment is not transparent enough, it is unfair because students don’t really understand what is expected of them. A lot of university assessment is probably towards this end of the spectrum.</p>
<p>Conversely, if assessment is too transparent it encourages pathological teaching to the test. This is probably where most school assessment is right now, and it is exacerbated by the way school exams are made excessively high stakes for the student, the teacher and the school. Too much transparency (and risk aversion) in setting assessment can lead to exams that are too predictable, so students can get a good mark by studying just those things that are likely to be on the exam. This damages validity, and more importantly damages education.</p>
<p>Between these extremes there is a desirable balance where students are given enough information about what is required of them to enable them to develop as knowledgeable and independent learners, without causing pathological behaviour. That, at least, is the hope.</p>
<p>While this was the focus of the last keynote, it resonated with several of the talks I listed in the previous section.</p>
<h4>D. The NSS & other acronyms</h4>
<p>The <a href="http://www.thestudentsurvey.com">National Student Survey</a> (NSS) is clearly a driver for change initiatives at a lot of other universities (<a href="http://tjhunt.blogspot.co.uk/2013/07/assessment-in-higher-education.html">as it was two years ago</a>). It is, or at least is perceived to be, a big deal. Therefore it can be used as a catalyst or lever to get people to review and change their assessment practices, since feedback and assessment are something that students often give low ratings for. This struck me as odd, since I am not aware of this happening at the OU. I assume that is because the OU has so far scored highly in the NSS.</p>
<p>The other acronym floating around a lot was <a href="http://www.testa.ac.uk">TESTA</a>. This seems to be a framework for reviewing the assessment practice of a whole department or degree programme. In one case, however (a talk by Jessica Evans & Simon Bromley of the OU faculty of Social Science) their review was done before TESTA was invented, though along similar lines.</p>
<h4>Finally</h4>
<p>A big thank-you to Sue Bloxham and the rest of the organising team for putting together a great conference. Roll on 2017.</p>
<h3>eSTEeM conference 2015 (2015-05-01)</h3>
<p><a href="http://www.open.ac.uk/about/teaching-and-learning/esteem/">eSTEeM</a> is an organising group
within the Open University which brings together people doing research into teaching and learning in the STEM disciplines: Science, Technology, Engineering and Maths. Naturally enough for the OU, a lot of that work revolves around educational technology. Once a year they hold a conference for people to share what they have been doing. I went along because I like to see what people have been doing with our VLE, and hence how we could make it work better for students and staff in the future.</p>
<p>It started promisingly enough. As I walked in to get my cup of coffee after registration, I was immediately grabbed by Elaine Moore from Chemistry, who had two Moodle Quiz issues. She wanted <a href="https://moodle.org/plugins/view/qtype_combined">the Combined question type</a> to use the HTML editor for multiple-choice choices (a good idea; we should put that on the backlog), and she had a problem with a <a href="https://moodle.org/plugins/view/qtype_pmatch">Pattern-match question</a> which we could not get to the bottom of over coffee.</p>
<p>But, on to the conference itself. I cannot possibly cover all the keynotes and parallel sessions so I will pick the highlights for me.</p>
<h4>Assessment matters to students</h4>
<p>The first was a graph from <a href="http://iet.open.ac.uk/people/l.price">Linda Price</a>’s keynote. Like most universities, we have a student satisfaction survey at the end of every module. The graph showed students' ratings in response to three of the questions:</p>
<ul>
<li>Overall, I am satisfied with the quality of this module.</li>
<li>I had a clear understanding of what was required to complete the assessed activities.</li>
<li>The assessment activities supported my learning.</li>
</ul>
<p>There was an extremely strong correlation between those. This is nothing very new. We know that assessment is important in determining the ‘<a href="http://www.open.ac.uk/blogs/SallyJordan/?p=1354">hidden curriculum</a>’, and hence we like to think that ‘authentic assessment’ is important. However, it is interesting to see how much this matters to students. Previously, I would not even have been sure that they could tell the difference.</p>
<h4>The purpose of education</h4>
<p>Into the parallel sessions. There was an interesting talk from the module team for <a href="http://www.open.ac.uk/courses/modules/tu100">TU100 my digital life</a>, the first course in the computing and technology degrees. Some of the things they do in that module’s teaching are based around the importance of language, even in science. Learning a subject can be thought of as learning to construct the world of that subject through language, or as they put it, humanities-style thinking in technology education. Unsurprisingly, many students don’t like that: “I came to learn computing, not writing.” However, there is a strong correlation between students’ language use and their performance in assessments. By the end of the module some students do come to appreciate what the module is trying to do.</p>
<p>This talk triggered a link back to another part of Linda Price’s keynote. An important (if now rather clichéd) question for formal education is “What is education for when everything is now available on the web?” (or one might put that more crudely as “Why should students pay thousands of pounds for one of our degrees?”). The answer that came to me during this talk was “To make them do things they don’t enjoy, because we know it will do them good.” OK, so that is a joke, but I would like to think there is a nugget of truth in there.</p>
<h4>Peer assessment</h4>
<p>On to more specifically Moodle-related things. A number of modules have been trying out Moodle’s <a href="https://docs.moodle.org/28/en/Workshop">Workshop activity</a>. That is a tool for peer review or peer assessment. The talk was from the <a href="http://www.open.ac.uk/postgraduate/modules/sd815">SD815 Contemporary issues in brain and behaviour</a> module team. Their activity involved students recording a presentation (PowerPoint + audio) that critically evaluated a research article. Then they had to upload it to the Moodle Workshop, and review each other's presentations as managed by the tool. Finally, they had to take their slide-cast, the feedback they had received, and a reflective note on the process and what they had learned from it, and hand it all in to be graded by their tutor.</p>
<p>Now, for OU students at least, collaborative activities, particularly those tied to assessments, are typically another thing we make them do that they don’t enjoy. This activity added the complexities of PowerPoint and/or Open Office and recording audio. However, it seems to have worked remarkably well. Students appreciated all the things that are normally said about peer review: getting to see other approaches to the same task; practising the skills of evaluating others’ work and giving constructive feedback. In this case the task was one that the students (healthcare workers studying at postgraduate level) could see was relevant to their vocation, which brings us back to visibly authentic assessment, and the student satisfaction graph from the opening keynote.</p>
<p>For me the strongest message from this talk, however, is what was not said. There was very little said about the Moodle workshop tool, beyond a few screen-grabs to show what it looked like. It seems that this is a tool that does what you need it to do without getting in the way, which is normally what you want from educational technology.</p>
<h4>Skipping briefly over</h4>
<p>There are many more interesting things I could write about in detail, but to keep this post to a reasonable length I will just skim over the posters from the lunch session. For example,</p>
<ul>
<li>The <a href="http://www.open.ac.uk/science/main/studying-science/s207-the-physical-world-0">S207 The physical world</a> module team giving more information about <a href="http://www.open.ac.uk/blogs/SallyJordan/?p=1654">the worrying gender differences in achievement in level 2 physics</a>. It is still early days as they try to work out what is going on there, and what they might be able to do about it, but it is being taken very seriously.</li>
<li>An effort to analyse all the 2000+ figures and diagrams in <a href="http://www.open.ac.uk/courses/modules/s215">S215 Chemistry: essential concepts</a> to try to work out how best to make Chemistry modules accessible to the visually impaired.</li>
<li>A study that captured and analysed all the emails we sent to a group of new students.</li>
</ul>
<p>And some of the other talks:</p>
<ul>
<li>a session on learning analytics, in this case with a neural net, to try to identify early on those students (on TU100 again) who get through all the continuous assessment tasks with a passing grade, only to fail the end of module assessment, so that they could be targeted for extra support.</li>
<li>a whole morning on the second day, where we saw nine different approaches to remote experiments from around the world. For example, <a href="http://pirate.open.ac.uk">the Open University's remote control telescope PIRATE</a>. It left me with the impression that this sort of thing is much more feasible and worthwhile than I had previously thought.</li>
</ul>
<h4>Our session on online Quizzes</h4>
<p>The only other session I will talk about in detail is the one I helped run. It was a ‘structured discussion’ about the OU’s use of iCMAs (which is what we call Moodle quizzes). I found this surprisingly nerve-wracking. I have given plenty of talks before, and you prepare them. You know what you are going to say, and you are fairly sure it is interesting. Therefore you are pretty sure what is going to happen. For this session, we just had three questions, and it was really up to the attendees how well it worked.</p>
<p>We did allow ourselves two five-minute presentations. We started with Frances Chetwynd showing some of the different ways quizzes are used in modules’ teaching and assessment strategies. This set up a 10-minute discussion of our first question: “How are iCMAs best used as part of an assessment strategy?”. For this, delegates were seated around four tables, with four or five participants and a facilitator at each table. The tables were covered with flip-chart paper for people to write on.</p>
<p>We were using a <a href="http://en.wikipedia.org/wiki/World_Café_(conversational_process)">World Café format</a>, so after 10 minutes I rang my bell, and all the delegates moved to a new table while the facilitators stayed put. Then, in new groups, they discussed the second question: "How can we engage students using iCMAs?" The facilitators were meant to make a brief bridge between what had been said in the previous group at their table, before moving on to the new question with the new group.</p>
<p>After 10 minutes on the second question, we had the other five-minute talk from Sally Jordan, showing some examples of what we have previously learned through scholarship into how iCMAs work in practice. (If you are interested in that, come to my talk at either <a href="https://mootieuk15.moodlemoot.org">MoodleMoot IE UK 2015</a> or <a href="http://2015.imoot.org">iMoot 2015</a>). This led nicely, after one more round of musical chairs, to the third question: "Where next for iCMAs? Where next for iCMA scholarship?". Finally we wrapped up with a brief plenary to capture the key answers to that last question from each table.</p>
<p>By the end, I really had no idea how well it had gone, although each time I rang my bell, I felt I was interrupting really good conversations. Subsequently, I have written up the notes from each table, and heard from some of the attendees that they found it useful and interesting, so that is a relief. We had a great team of facilitators (Frances, Jon, Ingrid, Anna) which helped. I would certainly consider using the same format again. With a traditional presentation, you are always left with the worry that perhaps you got more out of preparing and delivering the presentation than any of the audience did out of listening. In this case, I am sure the audience got much more out of it than me, which is no bad thing.</p>
<h3>Is learning design like UML? (2014-12-09)</h3>
<p>A couple of weeks ago, I attended the <a href="http://design4learning.org.uk/">#design4learning</a> conference, which was conveniently on my doorstep at the Open University. <a href="http://ltsdevmusings.wordpress.com/2014/12/05/design4learning/">Jenny Gray has already written her summary of the conference</a> (and she thought she was a bit late writing it up!)</p><p>I would like to highlight the point the organisers made with the conference name. Calling learning design "learning design" is a misnomer. You cannot design learning. Learning is something that goes on inside the student's head, perhaps most effectively under the support and guidance of a teacher. Therefore, you can only "design for learning", whatever it is you are designing: a course, an activity, a learning community, …. I think this is more than just semantic pedantry. We should all remember this, particularly when thinking about educational technology. There is no magic bullet that guarantees learning will occur.
There are just things that are more or less likely to encourage students to learn. (Having said this, I am going to just write "learning design" in the rest of this post, since it is so much easier!)</p><p>The main thought I wanted to share here is, however, something else. After two interesting days at a conference all about learning design, I cannot recall a single diagram shown by any speaker where I thought, "that is a graphical representation of the design of a bit of learning." Was I right to expect to see that? I don't know, but I have seen other presentations about tools like <a href="http://compendiumld.open.ac.uk/">CompendiumLD</a> in the past so I know it can be done. Pondering this as I cycled home, I got to thinking about the type of design I do know about: design of software, and thought of an interesting comparison.</p><p><a href="https://commons.wikimedia.org/wiki/File%3AActivity_conducting.svg" style="float: right;" title="spanish Wikipedia user Gwaur [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons"><img alt="Activity conducting" src="//upload.wikimedia.org/wikipedia/commons/thumb/e/e7/Activity_conducting.svg/512px-Activity_conducting.svg.png" width="256" /></a>Software developers have a well-established way to draw the design of their software, called <a href="http://www.uml.org/">UML</a> (<a href="http://en.wikipedia.org/wiki/Unified_Modeling_Language">better description on Wikipedia</a>). Let me say immediately that I am not trying to suggest UML as a way to represent learning designs. Rather, I think it is interesting to think about how developers do (or more often don't) use UML to help their work. Can that tell us anything about how and whether teachers might engage with learning design?</p><p>There are two different ways to use UML. 
There is the quick-and-dirty, back-of-the-envelope way, where you draw part of the system to help explain or communicate a particular aspect of your design. This is the way I use UML <a href="https://docs.moodle.org/dev/Overview_of_the_Moodle_question_engine">as can be seen, for example, in this documentation I wrote</a>. You include the details that are relevant to making your point, and leave out anything that does not help.</p><p>The other way to use UML is much more elaborate. It is called "<a href="http://en.wikipedia.org/wiki/Model-driven_architecture">Model Driven Architecture</a>" which I studied as part of <a href="http://www.open.ac.uk/postgraduate/modules/m885">OU module M885</a>. In MDA, you try to draw complete diagrams of the design of your system using a very precise dialect of UML, dotting all the 'i's and crossing all the 't's. Then, using a software tool (that you probably had to buy at great expense) you press the magic button, and it creates all your classes and interfaces for you. Then you just need to fill in all the implementations. At least, that is the promise. As I say, I studied this as part of a postgraduate computing course. It was of some academic interest, but I have never seen anyone write software this way (though some people do, if the references in the course are to be believed). I expect more people have bought expensive MDA tools than have actually used them. In a previous generation, the same was true of <a href="http://en.wikipedia.org/wiki/Computer-aided_software_engineering">CASE</a> tools that also failed to live up to their promises.</p><p>So what, if anything, can this tell us about learning design? Well, I can see exactly the same split happening. There will be hype about magic systems where you input your learning outcomes, and sketch your learning design, press a magic button, and hey, presto, there is your Moodle course. 
It won't work outside of research labs, but some vendors will try to commercialise it, and some institutions will fall for it and end up with expensive white elephants.</p><p>On the other hand, it would be good to see a common notation emerge to represent learning designs. This would help teachers communicate with each other, and perhaps with students, about how their teaching is supposed to work. A good feature of UML is that it is really very natural. Most developers can understand most of a UML diagram without having to be taught a lot of rules. There are several types of diagram to represent different things, but they are the kinds of things people drew anyway before UML was invented. The creators of UML just picked one particular way of drawing each sort of diagram, and endorsed it, in an attempt to get everyone talking (drawing) a common language. If you want to draw highly detailed UML diagrams, you need to learn a lot of rules, but you can get a long way just by copying what you see other people do, which is a sign of an effective language. It would be nice to see such a language for communicating about learning.</p>
<h3>What makes something a horrible hack? (2014-09-29)</h3>
<p>Over in <a href="https://tracker.moodle.org/browse/MDL-42974">Moodle bug MDL-42974</a>, Derek Chirnside asked "What is it about a hack that makes it 'horrible'??". I had described the code sam wrote to fix that issue in those terms, while at the same time submitting it for integration. It was a fair enough comment. I had helped sam create the code, and it was the kind of code you only write to make things work in Internet Explorer 8.</p>
<p>Although "Horrible hack" is clearly an aesthetic judgement, and therefore rather subjective, I think I can give a definition. However, it is easier to start by defining the opposite term. What is "Good code"? Good code should have properties like this:</p>
<ol>
<li>It works: It does what it is supposed to.</li>
<li>It is readable: Another developer can read it and see what it is supposed to do.</li>
<li>It is logical: It does what it is supposed to do in a way that makes sense. It is not just that a developer can puzzle out what it does, but it is clear that it does just that and nothing else.</li>
<li>It is succinct: it achieves its purpose in no more code than necessary. This is a companion to point 3).</li>
<li>It is maintainable: It is clear that the code will go on working in the future, or if circumstances do change, it is clear how the code could be modified to adapt.</li>
</ol>
<p>Note that property 1) is really just a starting point. It is not enough on its own.</p>
<p>A horrible hack is code that manages little more than property 1. I think sam's patch on MDL-42974 fails on all the other counts:</p>
<ol start="2">
<li>It is not at all obvious what the added code is for. Sam tried to mitigate that by adding a long comment to explain, but that is just a workaround for the hackiness of the code.</li>
<li>There is no logical reason why the given change makes things work in IE <= 8. We were just fiddling around in Firebug to try to find out how IE was going wrong. Changing the display property on one div appeared to solve the display problem, so we turned that into code. We still don't really understand why. Another sign of the illogicality is the two setTimeout calls. Why do we need those two delays to make it work? No idea, but they are necessary.</li>
<li>The whole chunk of added code should be unnecessary. Without the addition, it works in any other browser. We are adding some code that should be redundant just to make things work in IE.</li>
<li>We don't understand why this code works, so we cannot be sure it will go on working. In this case, lack of maintainability is not too serious. The code only executes on IE8 or below. In due course we know we can just delete it.</li>
</ol>
<p>Normally, you would wish to avoid code like this, but in this case it is OK because:</p>
<ul>
<li>The hack is confined in one small area of the code.</li>
<li>There is a comment to explain what is going on.</li>
<li>It is clear that we will be able to remove this code in future, once usage of IE8 has dropped to a low enough level.</li>
</ul>
<p>At least, we hope that the Moodle integration team agree that this code is acceptable for now. Otherwise, we wasted our time writing it.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com0tag:blogger.com,1999:blog-2247246257923129702.post-48811867080007553392014-04-25T16:54:00.002+01:002014-04-25T16:54:54.656+01:00Load-testing Moodle 2.6.2 at the OU<p>At the start of June we will upgrade the OU Moodle sites to Moodle 2.6. Before then we need to know that it will still perform well when subjected to our typical peak load of 100,000 page-views per hour. This time, I got 'volunteered' to do the testing.</p>
<h4>The testing servers</h4>
<p>To do the testing, we have a set of 10 servers that are roughly similar to our live servers. That is, six web servers for handling normal requests, and one web server that handles 'administrative' requests: any URL starting /admin, /report or /backup. Those pages are often big, long-running processes rather than quick page views, so it is better to put them on a different server that is tuned differently. There is one 'web' server that is just for running the cron batch processes. Finally we have a database server and a file server.</p>
<p>In order to be able to make easy comparisons, we make two copies of our live site onto these servers. That is, we have two different www_root folders, which correspond to the different URLs lrn2-perf-cur and lrn2-perf-upg. In due course we will upgrade one of the copies to the new release while leaving the other running the current version of the code. This makes it easy to switch back and forth when comparing the two.</p>
<p>In addition to the servers running Moodle, we have 6 virtual machines to generate the simulated load.</p>
<h4>The testing procedure</h4>
<p>We test using <a href="http://jmeter.apache.org/">JMeter</a>. In order to test Moodle, you need to send lots of requests for different pages, many of which include numeric ids in the URLs. Therefore, the JMeter script needs to be written specifically for the site being tested. Fortunately, our former colleague James Brisland made a script that automatically generates the necessary JMeter script. We <a href="https://github.com/lucisgit/moodle-jmeter-script-generator">shared that script with the community, and you can find a copy here</a>. However, we shared it a long time ago, and since then our version has probably diverged from the community version a bit. Oops!</p>
<p>I say this tool automatically generates the necessary JMeter script, but sadly that is an oversimplification. It fails in certain cases like if a forum is set to separate groups mode. So, having generated the JMeter script, you need to run it and check that it actually works. If not, you have to go into the courses and activities being tested and modify the settings. We really ought to automate that, but no one has had the time. Anyway, eventually (and this took ages) you have a working test script.</p>
<h4>Tuning the test script</h4>
<p>Once the test script works, in that it simulates users performing various actions without error, one at a time, you then have to start running it at high load. That is, simulating lots of users doing lots of things simultaneously. After it has settled down, you let it run for 15 or 20 minutes, and then look at what sort of load you are generating. The goal is to get about the same number of requests per second for each type of page (course view, forum view, post to forum, view resource, ...) in the test run as in real use on the live system. If not, you tweak the time delays or the number of threads, and then run again. It took about four runs to get to a simulated load that was close to (actually slightly higher than) the target request rates we had taken from the live server logs.</p>
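<p>Those target rates have to come from somewhere. As a rough illustration of extracting per-page-type request counts from an access log (the log format, file name and URLs here are invented for the example), a little awk goes a long way:</p>

```shell
#!/bin/sh
# Hypothetical sketch: count requests per page type in an access log, to set
# target rates for the JMeter script. Log format and file name are invented.
cat > sample.log <<'EOF'
10.0.0.1 - - [25/Apr/2014:10:00:01] "GET /course/view.php?id=2 HTTP/1.1" 200
10.0.0.2 - - [25/Apr/2014:10:00:02] "GET /mod/forum/view.php?id=7 HTTP/1.1" 200
10.0.0.3 - - [25/Apr/2014:10:00:03] "GET /course/view.php?id=3 HTTP/1.1" 200
EOF
# Field 6 is the requested URL; strip the query string, then count each script.
awk '{ split($6, p, "?"); counts[p[1]]++ }
     END { for (u in counts) print counts[u], u }' sample.log | sort -rn
# Prints:
# 2 /course/view.php
# 1 /mod/forum/view.php
rm sample.log
```

<p>Divide those counts by the time window the log covers to get the requests per second to aim for in each part of the test plan.</p>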
<p>All that creation and tuning of the test scripts is done on the lrn2-perf-cur copy of the site. Once that is OK, you run the same script against lrn2-perf-upg. That should give exactly the same performance, and before proceeding we want to verify that is the case. It turned out at first that it was slightly different: I had to find the few admin settings that were different between the two servers. Once the configuration was the same, the performance was the same, and we were finally in a position to start comparing the old and new systems.</p>
<h4>Upgrade to the new version of Moodle</h4>
<p>The next step is to upgrade lrn2-perf-upg to the new code. This code is still work-in-progress. Final testing of the code before release happens next month, but we try to keep our code in a releasable state, so it should be OK for load-testing. However, this is the first time we have run the upgrade on a copy of all our data. Unsurprisingly, we found some bugs. Fortunately they were easily fixed, and better to find them now than later.</p>
<p>Also, a new version of Moodle comes with a lot of new configuration options. This is the moment to consider what we should set them to. Luckily, most of the default values were right, so there was not a lot to do. Moodle prompts you for most of the new settings you need to make as part of the upgrade. However, it does not prompt you to configure any new caches, so you have to remember to go and do that.</p>
<h4>Compare performance</h4>
<p>At long last (about one and a half weeks into the process) you are finally ready to run the test you want. How does 2.6 performance compare to 2.5? Here is a screen-grab of today's testing:</p>
<p><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJ9v_8GKGeDBVdPV05hFf7fEHed9JeXeyjOyvNcIytQ6TMi8pQiFQOcmAiJUY0XXgnlasnWlhWGBPIuZoRCF0fjB5yC4V0U9QizinCd8cNZqan6XO5wyzba6ZhiLrcJm00_k7AsZOwnn8/s1600/Results+summary.png" /></p>
<p>Good news: Moodle 2.6 is mostly a bit faster (5-10%) than Moodle 2.5. Bad news: every 15 minutes, it suddenly goes slow for about 15 seconds. What?!</p>
<h4>Problem solving</h4>
<p>Actually, there is a logical explanation. We have cron set to run every 15 minutes, so surely the problem is caused by cron, right? No. Wrong! We stopped cron running, and the spikes remained. We tried various things to see what it might be, and could not make any sense of it. One thing we discovered was that the spikes were about as large as the spikes you get by clicking the 'Purge all caches' button. OK, so something is purging caches, but what?</p>
<p>To cut a long story short, you need to remember that our two test sites lrn2-perf-cur and lrn2-perf-upg are sharing the same servers. Therefore they are sharing the same memcache storage. It appears that something in cron in Moodle 2.5 purges at least some of the caches. When we stopped cron on our Moodle 2.5 site, the spikes went away on our 2.6 site. I am afraid we did not try to work out why Moodle 2.5 cron was purging caches, but there is probably a bug there. It turns out that purging caches does not cause a measurable slow-down in Moodle 2.5, at least not for us, which is worth knowing.</p>
<p>Why does Purge caches cause a slow-down in 2.6 but not in 2.5? I am pretty sure the reason is <a href="https://tracker.moodle.org/browse/MDL-41436">MDL-41436</a>. When things slowed down, it was the course page that slowed down the most, and that is the one most dependent on the modinfo cache.</p>
<h4>Summary</h4>
<ul>
<li>Moodle 2.6 is about 5-10% faster than 2.5, at least on our servers, which are RHEL5 + Postgres + memcache cluster store. (<a href="https://tracker.moodle.org/browse/MDL-42071">MDL-42071</a> - why has that not been integrated yet?)</li>
<li>In Moodle 2.5, doing Purge caches when your system is running at high load seems to cause remarkably little slow-down.</li>
<li>In Moodle 2.6, doing Purge caches does slow things down a lot, but only very briefly. Performance recovered within about 15 seconds in our test, but then the test was only using a few courses.</li>
<li>In Moodle 2.6, clicking Clear theme caches (at the top of Admin -> Appearance -> Themes -> Theme selector) causes no noticeable slow-down.</li>
</ul>
<p>The bit about what happens when you clear the caches is important because sometimes, when you patch the system with a bug fix, you need to purge one or more caches to make the fix take effect. In the past, we did not know what effect that had. We were cautious and had people waiting up until after midnight to click the button at a time of low system load. It turns out now that is probably not necessary. We can clear caches during the working day, when staff are in the office to pick up the pieces if anything does go wrong.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com9tag:blogger.com,1999:blog-2247246257923129702.post-78731284882917046372014-04-23T16:47:00.001+01:002014-04-28T18:55:35.163+01:00The four types of thing a Moodle developer needs to know<p>In order to write code for Moodle, there is an awful lot you need to know. Quite how much was driven home for me when I taught a Moodle developers' workshop at the <a href="http://moodlemoot.moodle.de/">German MoodleMaharaMoot in February</a>. When preparing for that workshop, I though you could group that knowledge into three categories, but I have since added a fourth to my thinking.</p>
<h4>1. The different types of Moodle plugin</h4>
<p>The normal way you add functionality to Moodle is to create a plug-in. There are <a href="http://docs.moodle.org/dev/Plugins">many different types of plug-in</a>, depending on what you want to add (for example, a report, an activity module or a question type). Therefore, the first thing to learn is what the different types of plug-in are and when you should use them. Then, once you know which type of plug-in to create, you need to know how to make that sort of plug-in. For example, what exactly do you need to do <a href="http://docs.moodle.org/dev/Question_types">to create a new question type</a>?</p>
<h4>2. How to make Moodle code do things</h4>
<p>Irrespective of what sort of plug-in you are creating, you also need to know how to make your code do certain things. Moodle is written in PHP, so generic PHP skills are a prerequisite, but Moodle has <a href="http://docs.moodle.org/dev/Core_APIs">its own libraries for many common tasks</a>, like <a href="http://docs.moodle.org/dev/Form_API">getting input from the user</a>, <a href="http://docs.moodle.org/dev/Data_manipulation_API">loading and saving data from the database</a>, <a href="http://docs.moodle.org/dev/Output_API">displaying output</a>, and so on. A developer needs to know many of these APIs.</p>
<h4>3. How to get things done in the Moodle community</h4>
<p>If you just want to write Moodle code for your own use, then the two types of know-how above are enough, but if you want to take full advantage of Moodle's open source nature, then you need to learn how to interact with the rest of the Moodle development community. For example, <a href="https://moodle.org/mod/forum/view.php?id=55">how to ask for help</a>, <a href="http://docs.moodle.org/dev/Tracker_introduction">how to report a bug</a>, <a href="http://docs.moodle.org/dev/Process">how to submit the changes for a bug you have fixed or a feature you have implemented</a>, <a href="http://en.wikipedia.org/wiki/Flying_pig">how to get another developer to review your proposed code change</a>, <a href="http://tjhunt.blogspot.co.uk/2014/01/moving-ou-moodle-code-to-moodle-261.html">how to update your customised Moodle site using git</a>, and so on.</p>
<h4>4. Something about education</h4>
<p>Those three points were what I thought of when trying to work out what I needed to teach during the developer workshop I ran. Since then, while listening to one of the presentations at the <a href="http://moodlemoot.ie/category/moodlemoot-2014/">UK MoodleMoot</a> as it happens, I realised that there was a fourth category of knowledge required to be a good Moodle developer. It matters that we are making software to help people teach and learn. I am struggling to think of specific concepts here, with URLs for where you can learn about them, as I gave in the previous sections, but there is a whole body of knowledge about what makes for effective learning and teaching and it is useful to have some appreciation of that. You also need some awareness of how educational institutions operate. If you hang around the Moodle community for any length of time you will also discover the educational culture is different in different countries. For example in the southern hemisphere the long summer holiday is also the Christmas holiday, and in America, <a href="https://tracker.moodle.org/browse/MDL-12380">they expect to award grades of more than 100%</a>.</p>
<h4>Summary</h4>
<p>Does this subdivision into categories actually help you learn to be a Moodle developer? I am not sure, but it was certainly useful when planning my workshop. The workshop was structured around creating three plugins on the first day, a Filter, a Block and then a Local plug-in. However, those exercises were structured so that while moving through different types of category-one knowledge, we also covered key topics from categories two and three in a sensible order. So it helped me, and I thought it was an interesting enough thought to share.</p>
Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com3tag:blogger.com,1999:blog-2247246257923129702.post-17069694387235475712014-02-27T17:48:00.000+00:002014-02-27T17:48:36.854+00:00Reflections on listening to conference presentations in German<p>I am at <a href="http://moodlemoot.moodle.de">the MoodleMaharaMoot in Leipzig</a> listening to people talk about Moodle.</p><p>First, the good news is that about half the words in English came from the same roots as German, so there are a fair number of words you can recognise, at least if you have time to read them from the screen. For words that seem really key, there is Google translate. Also, the Germans seem to like using English phrases for eLearning-related things, like Learning Analytics, or Multiple Choice.
</p><p>However, I don’t think I was understanding even 10% of the words. What really makes a difference to intelligibility is what is on the screen. If the speaker just has PowerPoint slides with textual bullet points, that does not help. If the speaker uses the screen to show you what they are talking about - screen grabs or live demos - that is much better. Of course, this is just: show, don’t tell.
</p><p>It also makes a big difference whether you already know a little bit about what is being said. I talked to some people from University of Vienna two years ago when they started building their <a href="http://www.academic-moodle-cooperation.org/module/offlinequiz/">offline quiz activity</a>, so I already knew what it was supposed to do. I followed that presentation (which contained many screen-grabs) better than most. What they have done looks really slick, by the way.
</p><p>Regarding my presentation, I feel vindicated in my plan to spend almost all of the presentation doing a live demonstration of the question types I was talking about. Of course, I am sure that almost everyone in the audience has better English than I have German. Also, I apologise that I talked for the whole time, and did not leave an opportunity for questions.
</p><p>Finally, I have been speculating (without reaching any conclusions) about whether the experience of sitting there, failing to understand almost everything that is being said, and just picking some scraps from the slides, is giving me any empathy for people with severe disabilities who need major accessibility support to use software. As I say, these thoughts are inconclusive. What does anyone else think?
</p><p>By the way, Germans applaud by rapping on the table with their knuckles. Your trivia fact for the day.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com0tag:blogger.com,1999:blog-2247246257923129702.post-49906256425765814582014-01-29T19:12:00.000+00:002014-01-29T19:35:44.447+00:00Moving the OU Moodle code to Moodle 2.6.1<p>I spent today upgrading our Moodle codebase from Moodle 2.5.4 to Moodle 2.6.1. This is the start of work towards our June release of the VLE. We have a March release based on Moodle 2.5.4 to get on the live servers first, and testing that will overlap with the development of the 2.6.1-based version.</p>
<h4>Doing the merge</h4>
<p>The first stage of the process is to merge in the new code. This is non-trivial because even if you just do</p>
<pre>
git checkout -b temp v2.5.4
git merge v2.6.1
</pre>
<p>Then you will get a lot of merge conflicts. That is a product of how the Moodle project manages its stable branches. If your own code changes also lead to other merge conflicts, then sorting out the two is a real mess.</p>
<p>Fortunately, there is a better way, because we know how we want to resolve any conflicts between 2.5.4 and 2.6.1. We want to end up with 2.6.1. Using git merge strategies, you can do that:</p>
<pre>
git checkout -b merge_helper_branch v2.6.1
git merge --strategy=ours v2.5.4
</pre>
<p>That gives you a commit that is upstream of both v2.5.4 and v2.6.1, and which contains code that is identical to v2.6.1. You can verify that using <tt>git diff v2.6.1 merge_helper_branch</tt>. That should produce no output.</p>
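<p>The whole trick can be demonstrated end-to-end on a tiny throwaway repository. In this sketch, tags v1 and v2 stand in for v2.5.4 and v2.6.1, and the repository contents are invented:</p>

```shell
#!/bin/sh
# Demo of the --strategy=ours merge on a throwaway repository.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo base > version.txt
git add version.txt && git commit -qm "common ancestor"
base=$(git rev-parse HEAD)
echo "release 1" > version.txt
git commit -qam "release 1"
git tag v1
git checkout -q "$base"
git checkout -qb r2
echo "release 2" > version.txt
git commit -qam "release 2"
git tag v2
# The helper branch: upstream of both tags, identical in content to v2.
git checkout -qb merge_helper_branch v2
git merge -q --strategy=ours -m "merge v1, keeping v2 content" v1
test -z "$(git diff v2 merge_helper_branch)" && echo "identical to v2"
cd / && rm -rf "$dir"
```

<p>The final check is the key property: after the ours-merge, the helper branch differs from v2 in nothing at all, yet git now considers v1 merged.</p>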
<p>Having built that helper branch, you can then proceed to upgrade your version of the code. Our version of Moodle lives on a branch called <tt>ouvle</tt> which we originally branched off Moodle 2.1.2 in October 2011. Since then, we have made lots of changes, including adding many custom plugins, and merging in many Moodle releases. Continuing from the above, we do</p>
<pre>
git checkout ouvle
git merge --strategy-option=patience merge_helper_branch
</pre>
<p>That gave a lot of merge conflicts, but they were all to do with our changes. Most of them were due to <a href="https://tracker.moodle.org/browse/MDL-38189">MDL-38189</a>, which sam marshall developed for Moodle 2.6, and which we had back-ported into our 2.5 code. That back-port made a big mess, but fortunately most of the files affected did not have any other ou-specific changes, so I could just overwrite them with the latest versions from v2.6.1.</p>
<pre>
git checkout --theirs lang/en backup lib/filestorage admin/settings/development.php lib/form/form.js
git add lang/en backup lib/filestorage admin/settings/development.php lib/form/form.js
</pre>
<p>Similarly, we had backported <a href="https://tracker.moodle.org/browse/MDL-35053">MDL-35053</a>, which led to more conflicts that were easy to resolve. Another case was the <a href="https://moodle.org/plugins/view.php?plugin=format_singleactivity">Single activity course format</a>, which we had used as an add-on to Moodle 2.5. That is now part of the standard Moodle release. The change caused merge conflicts, but again there was a simple solution: take the latest from 2.6.1.</p>
<p>After all that, there were only about 5 files that needed more detailed attention. They were mostly where a change had been made to standard Moodle code right next to a place where we had made a change. (Silly rules about full stops at the ends of comments!) They were easy to fix manually. The one tricky file was <tt>lib/moodlelib.php</tt>, where about 400 lines of code had been moved to <tt>lib/classes/useragent.php</tt>. There were two ou-specific changes in the middle of that, which I had to re-do in the new version of that code.</p>
<h4>Verifying the merge</h4>
<p>Having resolved all the conflicts, it was then time to try to convince myself that I had not screwed anything up. The main check was to compare our <tt>ouvle</tt> code with the standard 2.6.1 code. Just doing <tt>git diff v2.6.1 ouvle</tt> does not work well because it shows all the contents of all the new files we have added. You need to read the <a href="https://www.kernel.org/pub/software/scm/git/docs/git-diff.html">git documentation</a> and work out the incantation</p>
<pre>
git diff --patience --diff-filter=CDMRTUXB v2.6.1 ouvle
</pre>
<p>That tells git to just show changes to existing files - the ones that are part of standard Moodle 2.6.1. That is a manageable amount of output to review. We have a strict policy that any change to core Moodle code is marked up like this:</p>
<pre>
// ou-specific begins #2381 MDL-28567
/*
$select = new single_select(new moodle_url(CALENDAR_URL.'set.php',
array('return' => base64_encode($returnurl->out(false)),
'var' => 'setcourse', 'sesskey'=>sesskey())),
'id', $courseoptions, $selected, null);
*/
$select = new single_select(new moodle_url(CALENDAR_URL.'view.php',
array('return' => $returnurl, 'view' => 'month')),
'course', $courseoptions, $selected, null);
// ou-specific ends #2381 MDL-28567
</pre>
<p>That is, the original Moodle code is still there, but commented out, alongside our modified version, and the whole thing is wrapped in paired begin and end markers that refer to a ticket id in our issues database and if applicable a Moodle tracker issue. In this case I can check that <a href="https://tracker.moodle.org/browse/MDL-28567">MDL-28567</a> has still not been resolved, so we still need this ou-specific change. What I am doing looking at the diff output is verifying that every change is marked up like that, and that any issues mentioned are things that are still relevant.</p>
<p>The other check is to search the whole codebase for <tt>ou-specific</tt> and again review all the issue numbers mentioned. These combined checks find a few ou-specific changes that are no longer needed, which is a good thing.</p>
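<p>That search is easy to script. Here is a minimal sketch, run from the root of the checkout (only the marker text comes from our convention; everything else is generic), which also checks that the begin/end markers are correctly paired:</p>

```shell
#!/bin/sh
# List every ou-specific marker for review, then check that each
# "ou-specific begins" has a matching "ou-specific ends".
grep -rn "ou-specific" --include='*.php' . | sort

begins=$(grep -r "ou-specific begins" --include='*.php' . | wc -l | tr -d ' ')
ends=$(grep -r "ou-specific ends" --include='*.php' . | wc -l | tr -d ' ')
if [ "$begins" -ne "$ends" ]; then
    echo "Mismatch: $begins begins vs $ends ends" >&2
    exit 1
fi
echo "OK: $begins marker pair(s)"
```

<p>An unpaired marker almost always means a merge went wrong somewhere, so a non-zero exit here is worth investigating before going any further.</p>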
<h4>What happens next</h4>
<p>Now that I think the code seems right, it is time to test it, so I upgrade my development install. It mostly works, except that our custom memcache session handler no longer works (the session code seems to have changed a lot, including an official memcached session handler in core). For now I just switch back to default Moodle sessions, and make a note to investigate this later.</p>
<p>Apart from that, the upgrade goes smoothly, and, apart from thousands of debugging warnings about use of deprecated code, I have a working Moodle site, so I push the code to our git server, and warn the rest of the team that they can upgrade if they feel brave.</p>
<p>The next thing, which will take place over the next few weeks is to check every single one of our custom plugins to verify that it still works properly in Moodle 2.6. To manage that we use a Google Docs spreadsheet that we can all edit that lists all the add-ons, with who is going to be responsible for checking it, and whether they have done so yet. Here is a small section.</p>
<p><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWji7RY8AULmLk3GaxUQZKbmeHOIpWve48YpnBoFJmkMRP1nS1b2HkWhd1oHlpKJBSdrZalc48APEd0cgQhbQXxXMnD1Woe07c6_AnKWnoukEqr-yLVJb8Z8RHkhiV2PJPd_I3sIBVdzQ/s1600/plugins.png" /></p>
<h4>The state of OU Moodle customisation</h4>
<p>Our regular code-merges are a good moment to take stock of the extent to which we have customised Moodle. Here are some headline numbers:</p>
<ul>
<li><b>212 custom plug-ins</b>: Of those, 10 are ones we have taken from the community, including <a href="https://moodle.org/plugins/view.php?plugin=mod_questionnaire">Questionnaire</a>, <a href="https://moodle.org/plugins/view.php?plugin=mod_certificate">Certificate</a>, <a href="https://moodle.org/plugins/view.php?plugin=local_codechecker">Code-checker</a> and <a href="https://moodle.org/plugins/view.php?plugin=qtype_stack">STACK</a> (we helped create those last two). Of our own plugins, 58 (over a quarter) are shared with the community, though the counting is odd because <a href="https://moodle.org/plugins/view.php?plugin=mod_forumng">ForumNG</a> contains 20 sub-plugins.</li>
<li><b>17 ou-specific issues</b>: That is, reasons we made a change to core code that could not be an add-on.</li>
<li>Due to those 17 reasons, there are <b>42 pairs of <tt>// ou-specific begins/ends</tt> comments</b> in the code.</li>
</ul>
<p>So, we continue to be disciplined about not changing core code unless we really have to, but the number of plugins is getting a bit crazy. A lot of the plugins are, however, very small. They just do one thing. Also, we run a range of very different sites, including <a href="http://www.open.edu/openlearn/">OpenLearn</a>, <a href="http://www.open.edu/openlearnworks/" class="_blanktarget">OpenLearn works</a>, <a href="https://learn5.open.ac.uk/" class="_blanktarget">The Open Science Lab</a> and our exams server. A significant number of our plugins were just designed to be used on one of those sites.</p>
<p>Here are the numbers of custom plugins broken down by type (and ignoring sub-plugins of our custom plugins).</p>
<table>
<thead>
<tr><th>Plugin type</th><th>Number</th></tr>
</thead>
<tbody>
<tr><td>Activity module</td><td>25</td></tr>
<tr><td>Admin tools</td><td>8</td></tr>
<tr><td>Authentication methods</td><td>2</td></tr>
<tr><td>Blocks</td><td>30</td></tr>
<tr><td>Course formats</td><td>3</td></tr>
<tr><td>Editors</td><td>1</td></tr>
<tr><td>Enrolment methods</td><td>1</td></tr>
<tr><td>Filters</td><td>6</td></tr>
<tr><td>Gradebook reports</td><td>1</td></tr>
<tr><td>Local plugins</td><td>44</td></tr>
<tr><td>Message outputs</td><td>2</td></tr>
<tr><td>Portfolio outputs</td><td>1</td></tr>
<tr><td>Question behaviours</td><td>4</td></tr>
<tr><td>Question types</td><td>14</td></tr>
<tr><td>Quiz reports</td><td>6</td></tr>
<tr><td>Quiz access rules</td><td>2</td></tr>
<tr><td>Reports</td><td>19</td></tr>
<tr><td>Repositories</td><td>3</td></tr>
<tr><td>Themes</td><td>9</td></tr>
<tr><td>TinyMCE plugins</td><td>1</td></tr>
</tbody>
</table>
Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com0tag:blogger.com,1999:blog-2247246257923129702.post-27185716779169040802013-11-28T18:11:00.001+00:002013-11-28T18:14:13.579+00:00Bug fixing as knowledge creation<p>There are lots of ways you can think about bug-fixing: it is just a job that developers do; it is problem solving; etc. Here I want to take one particular viewpoint, that it is generating new knowledge about a software system.</p><p>
One way to think about software is that it is the embodiment of a set of requirements, of how something should work. For example, <a href="https://moodle.org/">Moodle</a> can be thought of as a lot of knowledge about what software is required to teach online, and how that software should be designed. Finding and fixing bugs increases that pool of knowledge by identifying errors or omissions and then correcting them.</p>
<h4>The bug fixing process</h4>
<p>We can break down the process of discovering and fixing a bug into the following steps. This is really trying to break the process down as finely as possible. As you read this list, please think about what new knowledge is generated during each step.</p>
<ol>
<li><b>Something's wrong</b>: We start from a state of blissful ignorance. We think our software works exactly as it should, and then some blighter comes along and tells us "Did you know that sometimes ... happens?" Not what you want to hear, but just knowing that there is a problem is vital. In fact the key moment is not when we are told about the problem, but when the user encountered it. Good users report the problems they encounter with an appropriate amount of detail.</li>
<li><b>Steps to reproduce</b>: Knowing the problem exists is vital, but not a great place to start investigating. What you need to know is something like "Using Internet Explorer 9, if you are logged in as a student, are on this page, and then click that link then on the next page press that button, then you get this error." and that all the details there are relevant. This is called steps to reproduce. For some bugs they are trivial. For bugs that initially appear to be random, identifying the critical factors can be a major undertaking.</li>
<li><b>Which code is broken</b>: Once the developer can reliably trigger the bug, then it is possible to investigate. The first thing to work out is which bit of code is failing. That is, which lines in which file.</li>
<li><b>What is going wrong</b>: As well as locating the problem code, you also have to understand why it is misbehaving. Is it making some assumption that is not true? Is it misusing another bit of code? Is it mishandling certain unusual input values? ...</li>
<li><b>How should it be fixed</b>: Once the problem is understood, then you can plan the general approach to solving it. This may be obvious given the problem, but in some cases there is a choice of different ways you could fix it, and the best approach must be selected.</li>
<li><b>Fix the code</b>: Once you know how you will fix the bug, you need to write the specific code that embodies that fix. This is probably the bit that most people think of when you say bug-fixing, but it is just a tiny part.</li>
<li><b>No unintended consequences</b>: This could well be the hardest step. You have made a change which fixed the specific symptoms that were reported, but have you changed anything else? Sometimes a bug fix in one place will break other things, which must be avoided. This is a place where peer review, getting another developer to look at your proposed changes, is most likely to spot something you missed.</li>
<li><b>How to test this change</b>: Given the changes you made, what should be done to verify that the issue is fixed, and that nothing else has broken? You can start with the steps to reproduce. If you work through those, there should no longer be an error. Given the previous point, however, other parts of the system may also need to be tested, and those need to be identified.</li>
<li><b>Verifying the fix works</b>: Given the fixed software, and the information about what needs to be tested, then you actually need to perform those tests, and verify that everything works.</li>
</ol>
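<p>As an aside on Step 3: when the bug is a regression that appeared somewhere between a known-good and a known-bad version, git bisect can automate much of the search. This self-contained demo builds a throwaway repository with an invented "bug" (app.txt losing the line "ok") purely to show the mechanism:</p>

```shell
#!/bin/sh
# git bisect demo: find the commit that broke something automatically.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo ok > app.txt
git add app.txt && git commit -qm "good commit 1"
git commit -qm "good commit 2" --allow-empty
echo broken > app.txt
git commit -qam "introduces the bug"
git commit -qm "a later commit" --allow-empty
# HEAD is known bad, HEAD~3 is known good; bisect runs our check at each step.
git bisect start HEAD HEAD~3
git bisect run grep -qx ok app.txt   # exit 0 = good commit, non-zero = bad
git bisect reset
cd / && rm -rf "$dir"
```

<p>The run ends by printing that "introduces the bug" is the first bad commit, having tested only a logarithmic number of commits in between, which matters when the range contains hundreds of them.</p>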
<h4>Some examples </h4>
<p>In many cases you hardly notice some of the steps. For example, if the software always fails in a certain place with an informative error message, then that might jump you to step 4. To give a recent example: <a href="https://tracker.moodle.org/browse/MDL-42865">MDL-42863</a> was reported to me with this error message:</p>
<blockquote>
<p>Error reading from database</p>
<p style="font-size: x-small;">Debug info: ERROR: relation "mdl_questions" does not exist</p><p style="font-size: x-small;">LINE 1: ...ECT count(1) FROM mdl_qtype_combined t1 LEFT JOIN mdl_questi...</p><p style="font-size: x-small;">SELECT count(1) FROM mdl_qtype_combined t1 LEFT JOIN mdl_questions t2 ON t1.questionid = t2.id WHERE t1.questionid <> $1 AND t2.id IS NULL</p><p style="font-size: x-small;">[array (0 => '0',]</p><p style="font-size: x-small;">Error code: dmlreadexception</p><p style="font-size: x-small;">Stack trace:</p>
<ul style="font-size: x-small;">
<li>line 423 of /lib/dml/moodle_database.php: dml_read_exception thrown</li>
<li>line 248 of /lib/dml/pgsql_native_moodle_database.php: call to moodle_database->query_end()</li>
<li>line 764 of /lib/dml/pgsql_native_moodle_database.php: call to pgsql_native_moodle_database->query_end()</li>
<li>line 1397 of /lib/dml/moodle_database.php: call to pgsql_native_moodle_database->get_records_sql()</li>
<li>line 1470 of /lib/dml/moodle_database.php: call to moodle_database->get_record_sql()</li>
<li>line 1641 of /lib/dml/moodle_database.php: call to moodle_database->get_field_sql()</li>
<li><b>line 105 of /admin/tool/xmldb/actions/check_foreign_keys/check_foreign_keys.class.php</b>: call to moodle_database->count_records_sql()</li>
<li>line 159 of /admin/tool/xmldb/actions/XMLDBCheckAction.class.php: call to check_foreign_keys->check_table()</li>
<li>line 69 of /admin/tool/xmldb/index.php: call to XMLDBCheckAction->invoke()</li>
</ul>
</blockquote>
<p>I have emboldened the key bit that says where the error is. Well, there are really two errors here. One is that the Combined question type add-on refers to <tt>mdl_questions</tt> when it should be <tt>mdl_question</tt>. The other is that the XMLDB check should not die with a fatal error if presented with bad input like this. The point is, this was all immediately obvious to me from the error message.</p><p>
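The fix for the first bug is then a one-word change to the table name. With the corrected name, and the <tt>$1</tt> placeholder inlined as the value <tt>'0'</tt> shown in the error output, the check query would read:</p>
<pre>SELECT count(1)
  FROM mdl_qtype_combined t1
  LEFT JOIN mdl_question t2 ON t1.questionid = t2.id
 WHERE t1.questionid &lt;&gt; '0' AND t2.id IS NULL</pre>
<p>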
Another recent example at the other extreme is <a href="https://tracker.moodle.org/browse/MDL-42880">MDL-42880</a>. There was no error message in this case, but presumably someone noticed that some of their quiz settings had changed unexpectedly (Step 1). Then John Hoopes, who reported the bug, had to do some careful investigation to work out what was going on (Step 2). I am glad he did, because it was a pretty subtle thing, so in this case Step 2 was probably a lot of work. From there, it was obvious which bit of the code was broken (Step 3).</p><p>
Note that Step 3 is not always obvious even when you have an error message. Sometimes things only blow up later as a consequence of something that went wrong before. To use an extreme example: someone fills your kettle with petrol instead of water, and then it blows up when you turn it on to make some tea. The error is not with turning the kettle on to make tea, but with filling it with petrol. If all you have is shrapnel, finding out how the petrol ended up in the kettle might be quite hard. (I have no idea why I dreamt up that particular analogy!)</p><p>
MDL-42880 also shows the difference between the conceptual Steps 4 and 5, and the code-related Steps 3 and 6. I thought the problem was with a certain variable becoming unset at a certain time, so I coded a fix to ensure the value was never lost. That led to complex code that required a paragraph-long comment to try to explain it. Then I had a chat with <a href="http://learn1.open.ac.uk/mod/oublog/view.php?user=13">Sam Marshall</a> who suggested that in fact the problem was that another bit of code was relying on the value of that variable, when actually the value was irrelevant. That led to a simpler (hence better) fix: stop depending on the irrelevant value.</p>
<h4>What does this mean for software?</h4>
<p>There are a few obvious consequences that I want to mention here, although they are well-known good practice. I am sure there are other more subtle ones.</p><p>
First, you want the error messages output by your software to be as clear and informative as possible. They should lead you to where the problem actually occurred, rather than having symptoms only manifesting later. We don't want exploding kettles. There are some good examples of this in Moodle.</p>
<ul>
<li><a href="https://tracker.moodle.org/browse/MDL-42540">Moodle will tell you if you forgot to call setType when defining a form</a>. If you forget, then you might create a security hole when users submit the form. (Because of a belt-and-braces approach in Moodle, MDL-42540 does not open any security holes, which is why I can safely link to it.)</li>
<li><a href="https://moodle.org/mod/forum/discuss.php?d=243079">Moodle will tell you if the first column in a result set is not unique when it needs to be</a>. In the forum post I just linked to, the developer had not set up their Moodle to display the helpful warnings, and so they encountered what appeared to be a mystifying bug. That is why I <a href="https://github.com/moodle/moodle/commit/ade79a47685918c94aa499f7a147c8c5ee3e9fcd">added the warning</a> following a <a href="https://tracker.moodle.org/browse/MDL-12438">trying debugging session six years ago</a>.</li>
</ul>
<p>Second, because Step 7, ensuring that you have not broken anything else, is hard, it really pays to structure your software well. If your software is made up of separate modules that are each responsible for doing one thing, and which communicate in defined ways, then it is easier to know what the effect of changing a bit of one component is. If your software is a big tangle, who knows the effect of pulling one string?</p><p>
Third, it really pays to engage with your users and get them to report bugs. Of course, you would like to find and fix all the bugs before you release the software, but that is impossible. For example, we are working towards a new release of the OU's Moodle platform at the start of December. We have had two professional testers testing it for a month, and a few select users doing various bits of ad-hoc testing. That adds up to less than 100 person days. On the day the software is released, probably 50,000 different users will log in. 50,000 user days, even by non-expert testers, are quite likely to find something that no-one else noticed.</p>
<h4>What does this mean for users?</h4>
<p>The more important consequences are for users, particularly of open-source software.</p>
<ul>
<li>Reporting bugs (Step 1) is a valuable contribution. You are adding to the collective knowledge of the project.</li>
</ul><p>
There are, however, some caveats that follow from the fact that in many projects, the number of developers available to fix bugs is smaller than the number of users reporting bugs.</p>
<ul>
<li>If you report a bug that was already reported, then someone will have to find the duplicate and link the two. Rather than being a useful contribution, this just wastes resources, so try hard to find any existing bug report, and add your information there, before creating a new one.</li>
<li>You can contribute more by reporting good steps to reproduce (Step 2). It does not require a developer to work those out, and if you can do it, then there is more chance that someone else will do the remaining work to fix the bug. On the other hand, there is something of a knack to working out and testing which factors are, or are not, significant in triggering a bug. The chances are that an experienced developer or tester can work out the steps to reproduce quicker than you could. If, however, all the experienced developers are busy, then waiting for them to have time to investigate is probably slower than investigating yourself. If you are interested, you can develop your own diagnosis skills.</li>
<li>If you have an error message then copy and paste it exactly. It may be all the information you need to give to get straight to Step 3 or 4. In Moodle you can get a really detailed error message by setting '<a href="http://docs.moodle.org/25/en/Debugging">debugging</a>' to 'DEVELOPER' level, then triggering the bug again. (One of the craziest mis-features in Windows is that most error pop-ups do not let you copy-and-paste the message. Paraphrased error messages can be worse than useless.)</li>
</ul>
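<p>If you administer the server yourself, the same DEVELOPER debugging level can be forced in <tt>config.php</tt>. This is a sketch based on the comments in <tt>config-dist.php</tt>; the same settings are also available through the admin interface under Development -&gt; Debugging:</p>
<pre>$CFG->debug = (E_ALL | E_STRICT); // DEVELOPER level: report everything
$CFG->debugdisplay = 1;           // show the messages on the page, not just in the logs</pre>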
<p>Finally, it is worth pointing out that Step 9 is another thing that can be done by the user, not a developer. For developers, it is really motivating when the person who reported the bug bothers to try it out and confirm that it works. This can be vital when the problem only occurs in an environment that the developer cannot easily replicate (for example an Oracle-specific bug in Moodle).</p>
<h4>Conclusion</h4>
<p>Thinking about bug finding and fixing as knowledge creation puts a more positive spin on the whole process than is normally the case. This shows that lots of people, not just developers and testers, have something useful to contribute. This is something that open source projects are particularly good at harnessing.</p><p>
It also shows why it makes sense for an organisation like the Open University to participate in an open source community like Moodle: Bugs may be discovered before they harm our users. Other people may help diagnose the problem, and there is a large community of developers with whom we can discuss different possible solutions. Other people will help test our fixes, and can help us verify that they do not have unintended consequences.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com2tag:blogger.com,1999:blog-2247246257923129702.post-84286343703267791432013-07-03T23:30:00.000+01:002013-07-03T23:30:56.744+01:00Assessment in Higher Education conference 2013<p>Last week I attended the <a href="http://aheconference.com/">Assessment in Higher Education conference</a> in Birmingham. This was the least technology and most education conference that I have been to. It was interesting to learn about the bigger picture of assessment in universities. One reason for going was that <a href="http://www.open.ac.uk/blogs/SallyJordan/">Sally Jordan</a> wanted my help running a 'masterclass' about producing good computer-marked assessment on the first morning. I may write more about that in a future post. Also I presented a poster about all the different online assessment systems the OU uses. Again a possible future topic. For now I will summarise the other parts of the conference, the presentations I listened to.</p>
<p>One thing I was surprised to discover is how much the National Student Survey (NSS) is influencing what universities do. Clearly it is seen as something that prospective students pay attention to, and attracting students is important. However, as Margaret Price from Oxford Brookes University, the first keynote speaker, said, the kind of assessment that students like (and so rate highly in the NSS) is not necessarily the most effective educationally. That is, while student satisfaction is something worth considering, students don't have all the knowledge to evaluate the teaching they receive. Also, she suggested that the NSS ratings have made universities more risk-averse in trying innovative forms of assessment and teaching.</p>
<p>The opening keynote was about "Assessment literacy", making the case that students need to be taught a bit about how assessment works, so they can engage with it most effectively. That is, we want the students to be familiar with the mechanics of what they are being asked to do in assessment, so those mechanics don't get in the way of the learning; but more than that, we want the students to learn the most from all the tasks we set them, and assessment tasks are the ones students pay the most attention to, so we should help the students understand why they are being asked to do them. I dispute one thing that Margaret Price said. She said that at the moment, if assessment literacy is developed at all, that only happens serendipitously. However, in my time as a student, there were plenty of times when it was covered (although not by that name) in talks about study skills and exam technique.</p>
<p>Another interesting realisation during the conference was that, at least in that company (assessment experts), the "<a href="http://en.wikipedia.org/wiki/Assessment_for_learning">Assessment for learning</a>" agenda is taken as a given. It is used as the reason that some things are done, but there is no debate that it is the right thing to do.</p>
<p>Something that is a hot topic at the moment is more authentic assessment. I think it is partly driven by technology improvements making it possible to capture a wider range of media, and to submit eportfolios. It is also driven by a desire for better pedagogy, and assessments that by their design make plagiarism harder. If you are being asked to apply what you have learned to something in your life (for example in a practice-based subject like nursing) it is much harder to copy from someone else.</p>
<p>I ended up going to all three of the talks given by OU folks. Is it really necessary to go to Birmingham to find out what is going on in the OU? Well, it was a good opportunity to do so. The first of these was about an on-going project to review the OU's assessment strategy across the board. So far a set of principles have been agreed (for example affirming the assessment for learning approach, although that is nothing new at the OU) and they are about to be disseminated more widely. There was an interesting slide (which provoked some good discussion) pointing out that you need to balance top-down policy and strategy with bottom-up implementation that allows each faculty to use assessment that is effective for their particular discipline. There was another session by people from Ulster and Liverpool Hope universities that also talked about the top-down/bottom-up balance/conflict in policy changes.</p>
<p>In this OU talk, someone made a comment along the lines of, "why is the OU re-thinking its assessment strategy? You are so far ahead of us already and we are still trying to catch up." I am familiar with hearing comments like that at education technology conferences. It was interesting to learn that we are also held in similarly high regard for policy. The same questioner also used the great phrase "the OU effectively has a sleeper-cell in every other university, in the associate lecturers you employ". That makes what the OU does sound far more excitingly aggressive than it really is.</p>
<p>In the second OU talk, Janet Haresnape described a collaborative online activity in a third level environmental science course. These are hard to get right. I say that having suffered one as a student some years ago. This one seems to have been more successful, at least in part because it was carefully structured. Also, it started with some very easy tasks (put your name next to a picture and count some things in it), and the students could see the relationship between the slightly artificial task and what would happen in real fieldwork. Janet has been surveying and interviewing students to discover their attitudes towards this activity. The most interesting finding is that weaker students comment more, and more favourably, on the collaboration than the better students. They have more to learn from their peers.</p>
<p>The third OU talk was Sally Jordan talking about the ongoing change in the science faculty from summative to formative continuous assessment. It is early days, but they are starting to <a href="http://www.open.ac.uk/blogs/SallyJordan/?p=1420">get</a> <a href="http://www.open.ac.uk/blogs/SallyJordan/?p=1425">some</a> <a href="http://www.open.ac.uk/blogs/SallyJordan/?p=1439">data</a> to analyse. Nothing I can easily summarise here.</p>
<p>The closing keynote was about oral assessment. In some practice-based subjects like law and veterinary medicine it is an authentic activity. Also, a viva is a dialogue, which allows the extent of the student's knowledge to be probed more deeply than a written exam. With an exam script, you can only mark what is there. If something the student has written is not clear, then there is no way to probe that further. That reminded me of what we do in the Moodle quiz. For example in the <a href="http://stack.bham.ac.uk/">STACK</a> question type, if the student has made a syntax error in the equation they typed, we ask them to fix it before we try to grade it. Similarly, in <a href="http://moodle.org/plugins/view.php?plugin=qtype_pmatch">Pattern-match</a> questions, we spell check the student's answer and let them fix any errors before we try to grade it. Also, with all our interactive questions, if the student's first answer is wrong, we give them some feedback then let them try again. If they can correct their mistake themselves, then they get some partial credit. Of course computer-marked testing is typically used to assess basic knowledge and concepts, whereas an oral exam is a good way to test higher-order knowledge and understanding, but the parallel of enabling two-way dialogue between student and assessor appealed to me.</p>
<p>This post is getting ridiculously long, but I have to mention two other talks. Calum Delaney from Cardiff Metropolitan University reported on some very interesting work trying to understand what academics think about as they mark essays. Some essays are easy to grade, and an experienced marker will rapidly decide on the grade. Others, particularly those that are partly right and partly wrong, take a lot longer weighing up the conflicting evidence. Overall though, the whole marking process struck me, a relative outsider, as scarily subjective.</p>
<p>John Kleeman, chair of QuestionMark, UK, summarised some psychology research that shows that the best way to learn something so that you can remember it again is to test yourself on it, rather than just reading it. That is, if you want to be able to remember something, then practice remembering it. It sounds obvious when you put it that way, but the point is that there is strong evidence to back up that statement. So, clearly you should all now go and create Moodle (or QuestionMark) quizzes for your students. Also, in writing this long rambling blog post I have been practising recalling all the interesting things I learned at the conference, so I should remember them better in future. If you read this far, thank you, and I hope you got something out of it too.</p>
Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com6tag:blogger.com,1999:blog-2247246257923129702.post-57314577494041689502013-07-01T18:56:00.000+01:002013-07-01T18:56:11.050+01:00Open University question types ready for Moodle 2.5<p>This is just a brief note to say that <a href="http://cellyoursole.blogspot.co.uk/">Colin Chambers</a> has now updated all the OU question types to work with <a href="http://docs.moodle.org/dev/Moodle_2.5_release_notes">Moodle 2.5</a>. Note that we are not yet running this code ourselves on our live servers, since we are on Moodle 2.4 until the autumn, but Phil Butcher has tested them all and he is very thorough.</p>
<p>You can download <a href="https://moodle.org/plugins/browse.php?list=category&id=29">all these question types</a> (and others) from <a href="https://moodle.org/plugins/">the Moodle add-ons database</a>.</p>
<p>Thanks to <a href="http://blog.danpoltawski.co.uk/">Dan Poltawski</a>'s Github repository plugin, that is easier than it used to be. Still, updating 10 plugins is pretty dull, so I feel like I have contributed a bit. I also reviewed most of the changes and fixed the unit tests.</p>
<p>I hope you enjoy our add-ons. I am wondering whether we should add the drag-and-drop questions types to the standard Moodle release. What do you think? If that seems like a good idea to you, I suggest posting something enthusiastic in the <a href="https://moodle.org/mod/forum/view.php?id=737">Moodle quiz forum</a>. It will be easier to justify adding these question types to standard Moodle if lots of non-OU Moodlers ask for it.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com1tag:blogger.com,1999:blog-2247246257923129702.post-14830779287010336452013-06-21T17:30:00.001+01:002013-06-21T18:20:24.279+01:00Book review: Computer Aided Assessment of Mathematics by Chris Sangwin<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7EPOIHIr02Z9JqZJcGTH4PLSCUvAPolwrsO7CSaf1R9gRzyTPVQXNUhnMVOZYZpGKUTpYmzUo8vBmhcahtlQroRicTFQrjxiuLwQWjTCqreDoMDH1c2Z5Grg7MrIte4C9-0iHi06gaz0/s1600/CAAmathscover.jpg" alt="The book cover" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"/>Chris is the brains behind the STACK online assessment system for maths, and he has been thinking about how best to use computers in maths teaching for well over ten years. This book is the distillation what he has learned about the subject.</p>
<p>While the book focusses specifically on online maths assessment, it takes a very broad view of that topic. Chris starts by asking what we are really trying to achieve when teaching and assessing maths, before considering how computers can help with that. There are broadly two areas of mathematics: solving problems and proving theorems. Computer assessment tools can cope with the former, where the student performs a calculation that the computer can check. Getting computers to teach the student to prove theorems is an outstanding research problem, which is touched on briefly at the end of the book.</p>
<p>So the bulk of the book is about how computers can help students master the parts of maths that are about performing calculations. As Chris says, learning and practising these routine techniques is the un-sexy part of maths education. It does not get talked about very much, but it is important for students to master these skills. Doing this requires several problems to be addressed. We want randomly generated questions, so we have to ask what it means for two maths questions to be basically the same, and equally difficult. We have to solve the problem of how students can type maths into the computer, since traditional mathematics notation is two-dimensional, but it is easier to type a single line of characters. Chris prefaces this with a fascinating digression into where modern maths notation came from, something I had not previously considered. It is more recent than you probably think.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8HGe4fqFjrurYMaqOmZHSAIzjx4ZxDMTAJeBrgHhEY0FBylHAk50pPWXUA7vW5tCNKSPQbcRlO3IoL7ZaGnOLOtvXexaqhGuDKYYy4PFpwca8OOoBKupLf3fJ_1zQoHOJPF-4wkEKeIg/s1600/stack_input.png" alt="Example of how STACK handles maths input" /></p>
<p>If we are going to get the computer to automatically assess mathematics, we have to work out what it is we are looking for in students' work. We also need to think about the outcomes we want, namely feedback for the student to help them learn; numerical grades to get a measure of how much the student has learned; and diagnostic output for the teacher, identifying which types of mistakes the students made, which may inform subsequent teaching decisions. Having discussed all the issues, Chris then brings them together by describing STACK. This is an opportune moment for me to add the disclaimer that I worked with Chris for much of 2012 to re-write STACK as a Moodle question type. That was one of the most enjoyable projects I have ever worked on, so I am probably biased. If you are interested, you can <a href="http://stack.bham.ac.uk/">try out a demo of STACK here</a>.</p>
<p>Chris rounds off the book with a review of other computer-assisted assessment systems for maths that have notable features.</p>
<p>In summary, this is a fascinating book for anyone who is interested in this topic. Computers will never replace teachers. They can only automate some of the more routine things that teachers do. (They can also be more available than teachers, making feedback on their work available to students even when the teacher is not around.) To automate anything via a computer you really have to understand that thing. Hence this book about computer-assisted assessment gives a range of great insights into maths education. Highly recommended. <a href="http://ukcatalogue.oup.com/product/9780199660353.do">Buy it here!</a></p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com0tag:blogger.com,1999:blog-2247246257923129702.post-41752800694559635102013-05-02T20:32:00.002+01:002013-05-02T22:45:37.471+01:00Performance-testing Moodle<h3>Background</h3>
<p>The <a href="https://www.open.ac.uk/">Open University</a> is moving from <a href="https://moodle.org/">Moodle</a> 2.3.x to Moodle 2.4.3 in June. As is usual with a major upgrade, we (that is Rod and Derek) did some load testing to see if it still runs fast enough on our servers.</p>
<p>The first results were spectacularly bad! Moodle 2.4 was ten times slower. We were expecting <a href="http://docs.moodle.org/dev/Moodle_2.4_release_notes#Performance_improvements">Moodle 2.4 to be faster than 2.3</a>. The first step was easy.</p>
<blockquote><p>Performance advice: if you are running Moodle 2.4 with load-balanced web servers, don't use the default caching option that stores the data in moodledata on a shared network drive. Use memcache instead.</p></blockquote>
<p>Take 2 was a lot better. Moodle 2.4 was now only about 1.5 times slower. Still not good enough, but in the right ball park. This blog post is about what we did next, which was to use the tools Moodle provides to work out what was slow and fix it.</p>
<h3>Moodle's profiling tool</h3>
<p>When your software is too slow, you need measurements to tell you which are the slow bits. Tools that do that are called profilers. One of the better profiling tools for PHP is called <a href="http://php.net/manual/en/book.xhprof.php">XHProf</a>. The good news is that it has already been integrated into Moodle, and there is <a href="http://docs.moodle.org/dev/Setting_up_xhprof_on_Moodle">documentation about getting it working</a>. Basically, you just need to install a PHP extension and turn on some options under Admin -> Development -> Profiling.</p>
<p>Since we already had the necessary PHP extension on our servers, that was really easy. The option I chose was to profile a page when <tt>&amp;PROFILEME</tt> was added to the end of the URL, but there are several ways to control it.</p>
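<p>For the record, the equivalent <tt>config.php</tt> settings look roughly like this. I am quoting the option names from memory, so treat this as a sketch and check Admin -> Development -> Profiling for the authoritative list:</p>
<pre>$CFG->profilingenabled = true; // master switch for XHProf profiling
$CFG->profilingallowme = true; // profile a page when &amp;PROFILEME is added to its URL</pre>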
<h3>Profiling output</h3>
<p>Once you have profiled a page, the results appear under Admin -> Development -> Profiling runs.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2ZR_N22DoyysxFrhGfdFQM3bFDPkc0AYU5hhg7dOrIaWjM7BayPsdG4AxsozZFhGEschb_HtO9Q8FhOUbAJovKrd19mlOITxs-I-wN-vG10F8JkJpHrO2l0G0Z15O9FvL87c29s_Pe-o/s1600/overview.png" /></p>
<p>This just lists the runs you have done. You need to click through to see the details of one run. That looks like a big table of all the functions that were called as part of rendering the page, how many times each one was called, and how much time each function was responsible for.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkwLVQVsaX1qF24Su8BPaOe_ZhS6-gj-LiI5iIdlgbaUmIVaV78BUB1NJHGDqKIOR1wRBtopkxBN5JWitMLfQrTFvW-vGFUuzy2zME04tGhliSJ5tbVMv3tKFLNBipeESZJwq6Khc1E7A/s1600/detail.png" /></p>
<p>Inclusive time is the amount of time taken by that function, and all the other functions it called. Exclusive time is the time taken by that function itself. Some people, like <a href="http://learn.open.ac.uk/mod/oublog/view.php?user=11">sam</a>, seem to like that tabular view. I am a more visual person, so I tend to click on the [View full callgraph] link. That produces a crazily big image, showing graphically which functions call which other functions, and how much time is spent in each one. Here is the image for the run we are looking at:</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVhbGxJChxDCzuXsJId5qzGflSmRaBZCzLqrJZv9ZMv3cM0nV8_APNkVqCBlK9yK5OpaCDOC7XNDMtJBitTzL0XIn868m35JzhqZm7OQgDrBTNWb0uKU5i3k5ORoqcMJhw0olSx-2xmAg/s1600/callgraph.png" imageanchor="1"><img border="0" height="510" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVhbGxJChxDCzuXsJId5qzGflSmRaBZCzLqrJZv9ZMv3cM0nV8_APNkVqCBlK9yK5OpaCDOC7XNDMtJBitTzL0XIn868m35JzhqZm7OQgDrBTNWb0uKU5i3k5ORoqcMJhw0olSx-2xmAg/s640/callgraph.png" width="640" /></a></p>
<p>You can click for the full-sized image. The yellow and red highlighting is applied automatically to try to highlight places where a lot of time is being spent. Sometimes it is helpful. Sometimes not. The red box in the bottom right is where we do database queries. No surprise there. We know calling the database is one of the slowest things you can do in Moodle code. The other red box is fetching data from memcache, which also involves connecting to another server.</p>
<p><img style="clear: right; float: right;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqV8Wv4JcizqDFn1UZsFiIXNhmLD8wF9G3cewHaFWB3vvbQ86-kf5oSWit1-9OJ1yGuDJ4p7iBOm-MuAiEUw-hvrlZGzdS5LhfVSWSBBO0Vw5XOTMiz-N_BsTg_S4ssDEq_WuBxDWikqk/s1600/graphdetail.png" />What you have to look for is somewhere on the diagram that makes you go "What! We are spending how much time doing that?! That's surely not necessary." In this case, my eye was drawn to the far right of the yellow chain. When viewing this small course, we are fetching the course format object 134 times, and doing that is accounting for about 9% of the page-load time. There is no way we need to do that.</p>
<h3>Fixing the problem</h3>
<p>Once you have identified what appears to be a gross inefficiency, then you have to fix it. Mostly that follows <a href="http://tjhunt.blogspot.co.uk/2012/03/fixing-bug-in-moodle-core-mechanics.html">the normal Moodle bug-fixing mechanics</a>, but it is worth saying a bit about the different approaches you could take to changing the code:</p>
<ol>
<li>You might work out that what is being done is unnecessary. Then you can just remove it. For example <a href="https://tracker.moodle.org/browse/MDL-39452">MDL-39452</a> or <a href="https://tracker.moodle.org/browse/MDL-39449">MDL-39449</a>. This is the best case. We have both improved performance and simplified the code.</li>
<li>The next option is to take an overview of the code, and re-organise it to be more sensible. For example, in the course format case, we should probably just get the course format object once, and then use it. However, that would be a big risky change, which I did not want to do at this time (just before the Moodle 2.5 release). This approach does, however, also have the potential to simplify the code while improving performance.</li>
<li>The next option is some other sort of refactoring. For example <tt>get_plugin_list</tt> was getting called a lot, and it in turn was calling the generic function <tt>clean_param</tt> to validate something. <tt>clean_param</tt> is quite fast, but not when you call it a thousand times. Therefore, it was worth extracting a simpler <tt>is_valid_plugin_name</tt> function. Doing that (<a href="https://tracker.moodle.org/browse/MDL-39445">MDL-39445</a>) reduced the page load time by about 2%, but did make the code slightly more complex. Still, that is a worthwhile trade-off.</li>
<li>The last option is to add caching. If you are doing the same thing repeatedly, and it is slow, and you can't avoid doing it repeatedly, then remember the answer the first time you compute it, and reuse it later. This should be the option of last resort because caches definitely increase the code complexity, and if you forget to clear them when necessary you introduce <a href="https://github.com/maths/moodle-qtype_stack/commit/2e1ed6d6a70534679e8c89ca38b4a738e700f51a">bugs</a>. However, as in <a href="https://tracker.moodle.org/browse/MDL-39450">the course formats example we are looking at</a> they can make a big difference. This fix reduced page-load times by 8%.</li>
</ol>
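<p>To illustrate that last option, the general shape of such a cache is only a few lines of PHP. This is a generic sketch, not the actual <a href="https://tracker.moodle.org/browse/MDL-39450">MDL-39450</a> patch, and <tt>expensive_lookup</tt> is a made-up stand-in for whatever slow call is being repeated:</p>
<pre>function get_format_object($courseid) {
    static $cache = array();
    if (!isset($cache[$courseid])) {
        // Pay the cost once per course per request, then reuse the answer.
        $cache[$courseid] = expensive_lookup($courseid);
    }
    return $cache[$courseid];
}</pre>
<p>Even this tiny example shows the trade-off: the static cache is never cleared within a request, which is exactly the sort of thing that introduces subtle bugs if the underlying data can change while the page is being built.</p>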
<p>So far, we have found <a href="https://tracker.moodle.org/browse/MDL-39443">nine speed-ups we can make to Moodle 2.4 in the core Moodle code</a>, and about the same in OU plugins. That is probably a 10-20% speed-up on most pages. Some of those are new problems introduced in Moodle 2.4. Others have been there since Moodle 2.0. We could really benefit from more people looking at Moodle profiling output often, and that is why I wrote this article.</p>
Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com20tag:blogger.com,1999:blog-2247246257923129702.post-60931102536741012402013-04-08T13:57:00.000+01:002013-04-15T08:31:55.471+01:00Do different media affect the effectiveness of teaching and learning<p>Here is some thirty-year-old research that still seems relevant today:</p><p>Richard E. Clark, 1983, "<a href="http://www.jstor.org/stable/1170217">Reconsidering Research on Learning from Media</a>", <i>Review of Educational Research</i>, Vol. 53, No. 4 (Winter, 1983), pp. 445-459.</p><p>This paper reviews the seemingly endless research asking whether teaching using Media X is inherently more effective than the same instruction in Media Y. Given the age of the paper, you will not be surprised to learn that the research cited covers media like Radio for education (hot research topic in the 1950s), Television (1960s) and early computer-assisted assessment (1970s). Clark's earliest citation, however, is "since Thorndike (1912) recommended pictures as a labor saving device in instruction." Images as novel educational technology! Well, they were once. The point is that basically the same research was done for each new medium to come along, and it was all equally inconclusive.</p>
<p>Here are some choice quotes that nicely summarise the article:</p>
<blockquote><p>Based on this consistent evidence, it seems reasonable to advise strongly against future media comparison research. Five decades of research suggest that there are no learning benefits to be gained from employing different media in instruction, regardless of their obviously attractive features or advertised superiority.</p></blockquote>
<blockquote><p>Where learning benefits are at issue, therefore, it is the method, aptitude, and task variables of instruction that should be investigated.</p></blockquote>
<blockquote><p>The best current evidence is that media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition</p></blockquote>
<p>Clark does not miss the fact that the effectiveness of learning is not the only problem in education:</p>
<blockquote><p>Of course there are instructional problems other than learning that may be influenced by media (e.g., costs, distribution, the adequacy of different vehicles to carry different symbol systems, equity of access to instruction).</p></blockquote>
<p>Since this paper is a thorough review of a lot of the available literature, it contains a number of other gems. For example:</p>
<blockquote><p>Ksobiech (1976) told 60 undergraduates that televised and textual lessons were to be (a) evaluated, (b) entertainment, or (c) the subject of a test. The test group performed best on a subsequent test with the evaluation group scoring next best and the entertainment group demonstrating the poorest performance.</p></blockquote>
<blockquote><p>Hess and Tenezakis (1973) ... Among a number of interesting findings was an unanticipated attribution of more fairness to the computer than to the teacher.</p></blockquote>
<p>I wonder how much later research fell into the trap outlined in this paper. I am not familiar enough with the literature, but presumably there were lots of papers about the world-wide web, VLEs, social media, mobiles and tablets for education. I wonder how novel they really were?</p>
<p>Today, computers and the internet have made media cheaper to produce and more readily accessible than ever before. This removes many constraints on the instructional techniques available, but what this old paper reminds us is that when it comes to teaching, it is not the media that matters, but the instructional design.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com1tag:blogger.com,1999:blog-2247246257923129702.post-23888624049807097352012-08-15T22:10:00.000+01:002012-08-15T22:10:12.109+01:00Automating git<p>This is a long-overdue follow-up to my <a href="http://tjhunt.blogspot.co.uk/2012/03/fixing-bug-in-moodle-core-mechanics.html">previous post about using git to fix Moodle bugs</a>. Thanks to Andrew Nichols of <a href="http://www.luns.net.uk/services/virtual-learning-environments/">LUNS</a> for nudging me into writing this.</p>
<p>Git has an efficient command-line interface, but even so, there are some sequences of commands that you find yourself typing repeatedly. Git provides a mechanism called aliases which can be used to reduce this repetitive typing. This post explains how I use it in my Moodle development.</p>
<h4>Basic usage</h4>
<p>Let us start with the simplest possible example. I get bored typing <tt>git cherry-pick</tt> in full all the time. The solution is to edit the file <tt>.gitconfig</tt> in my home directory, and add</p>
<pre>
[alias]
cp = cherry-pick
</pre>
<p>Then <tt>git cp …</tt> is equivalent to <tt>git cherry-pick …</tt>. That saves 9 characters every time I have to copy a bug fix to a stable branch.</p>
<p>Simple aliases like this can also be used to supply options. Another one I have set up is</p>
<pre>
ff = merge --ff-only
</pre>
<p>I use that when I need to update one of my local branches to match a remote branch. Suppose I think I am on the <tt>master</tt> branch, and I want to update it to the latest <tt>moodle/master</tt>. Normally one would just type <tt>git merge moodle/master</tt> and it would look like this:</p>
<pre>
timslaptop:moodle_head tim$ git merge moodle/master
Updating ddd84e9..b658200
Fast-forward
</pre>
<p>Suppose, however, that I had made a mistake, and I was actually on some other branch. Then git would try to do a merge between <tt>master</tt> and that branch, which is not what I want. The <tt>--ff-only</tt> option tells git not to do that. Instead it will stop with an error if it cannot do a fast-forward. So, to prevent mistakes, I normally use that option, and I use it frequently enough that I found it worthwhile to create the alias.</p>
<h4>Getting more ambitious</h4>
<p>Sometimes the repeated operation you want to automate is a sequence of git commands. For example, when a new weekly build of Moodle comes out, I need to type a sequence of commands like this:</p>
<pre>
git checkout master
git fetch moodle
git merge --ff-only moodle/master
git push origin master
</pre>
<p>That updates my local copy of the <tt>master</tt> branch with the latest from <a href="http://git.moodle.org/gw?p=moodle.git;a=summary">moodle.org</a> and then copies that to <a href="https://github.com/timhunt/moodle/">my area on github</a>. To automate this sort of thing, you have to start using the power of Unix shell scripting. (If you are on Windows, don't worry, because you typically get the <a href="http://en.wikipedia.org/wiki/Bash_(Unix_shell)">bash</a> shell when you install git.)</p>
<p>Fortunately, you don't need to know much scripting, and you can probably just copy these examples blindly. The first thing to know is that you can put two commands on one line if you separate them using a semi-colon (just like in PHP). The previous sequence of commands could be typed on one line as</p>
<pre>
git checkout master; git fetch moodle; git merge --ff-only moodle/master; git push origin master
</pre>
<p>(Note that this line of code is getting quite long, and will probably line-wrap. It should, however, be a single line.)</p>
<p>Doing it this way turns out to be a bad idea. What happens if one of the commands gives an error? Well, the system will just move on to the next command, even though the error from the previous command probably left things in an unknown state. Dangerous! Fortunately there is a better way. If you use <tt>&&</tt> instead of <tt>;</tt> then any error will cause everything to stop immediately. If you are familiar with PHP, then just imagine that every command is a function that returns true or false to say whether it succeeded or not. That is not so far from the truth. So, the right way to join the commands together looks like this:</p>
<pre>
git checkout master && git fetch moodle && git merge --ff-only moodle/master && git push origin master
</pre>
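<p>The difference between <tt>;</tt> and <tt>&&</tt> is easy to demonstrate with a toy example (plain shell, nothing git-specific):</p>

```shell
set +e   # make sure failures do not abort this demo script

# 'false' is a command that always fails (returns a non-zero status).
after_semicolon=no
false ; after_semicolon=yes     # runs regardless of the failure

after_and=no
false && after_and=yes          # skipped because false failed

echo "semicolon: $after_semicolon, and: $after_and"   # prints "semicolon: yes, and: no"
```

<p>So a long <tt>&&</tt> chain stops at the first thing that goes wrong, which is exactly what you want when the later commands assume the earlier ones succeeded.</p>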
<p>Now we know what we want to automate, we need to teach this to git. It is a bit tricky because we don't just want to convert one single git command into another single git command. Instead we want to convert one git command into a sequence of shell commands. Fortunately this is supported; you just need to know the right syntax:</p>
<pre>
updatemaster = !sh -c 'git checkout master && git fetch moodle && git merge --ff-only moodle/master && git push origin master' -
</pre>
<p>Now I just have to type <tt>git updatemaster</tt> to run that sequence of commands.</p>
<h4>Parameterising your aliases</h4>
<p>That is all very well for <tt>master</tt>, but what about all the stable branches? Do I have to create lots of separate aliases like <tt>update23</tt>, <tt>update22</tt>, <tt>update21</tt>, …? Of course not. Git was created by and for computer programmers. Shell scripts can take parameters, and the solution is an alias that looks like</p>
<pre>
update = !sh -c 'git checkout MOODLE_$1_STABLE && git fetch moodle && git merge --ff-only moodle/MOODLE_$1_STABLE && git push origin MOODLE_$1_STABLE' -
</pre>
<p>With that alias, <tt>git update 23</tt> will update my <tt>MOODLE_23_STABLE</tt> branch, <tt>git update 22</tt> will update my <tt>MOODLE_22_STABLE</tt>, and so on.</p>
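<p>If you are wondering how <tt>23</tt> finds its way into <tt>$1</tt>: git appends whatever you type after the alias name to the command it runs, so the whole thing is just <tt>sh -c '…' - <i>args</i></tt>, where the lone <tt>-</tt> soaks up the <tt>$0</tt> slot. You can see the mechanism with plain <tt>sh</tt>, no git required:</p>

```shell
# The '-' becomes $0, so the real arguments land in $1, $2, ...
demo=$(sh -c 'echo "branch: MOODLE_$1_STABLE, fix: $2"' - 23 MDL-12345)
echo "$demo"   # prints "branch: MOODLE_23_STABLE, fix: MDL-12345"
```

<p>That is all the parameter plumbing there is; the rest is ordinary shell.</p>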
<p>You can use any number of parameters. If you remember my previous blog post, typically I will create the bug fix on a branch with a name like <tt>MDL-12345</tt> that starts from <tt>master</tt>, and then I will want to copy that to a branch called <tt>MDL-12345_23</tt> branching off <tt>MOODLE_23_STABLE</tt>. With the following alias, I just have to type <tt>git cpfix MDL-12345 23</tt> in my Moodle 2.3 stable check-out:</p>
<pre>
cpfix = !sh -c 'git fetch -p origin && git checkout -b $1_$2 MOODLE_$2_STABLE && git cherry-pick origin/master..origin/$1 && git push origin $1_$2' -
</pre>
<p>One final example that belongs in this section:</p>
<pre>
killbranch = !sh -c 'git branch -d $1 && git push origin :$1' -
</pre>
<p>That deletes a branch both in the local repository and also from my area on github. That is useful once one of my bug fixes has been integrated. I then no longer need the MDL-12345 branch and can eliminate it with <tt>git killbranch MDL-12345</tt>.</p>
<h4>To boldly go …</h4>
<p>Of course, all this automation comes with some risk. If you are going to screw up, automation lets you screw up more things, faster. I feel obliged to emphasise that at this point. If you are going to shoot yourself in the foot, a machine gun gives the most spectacular results, and we are about to build one, at least metaphorically.</p>
<p>We just saw the <tt>killbranch</tt> command that can be used to clean up branches that have been integrated. What happens if I submitted lots of branches for integration last week? Then I have to delete lots of branches. Can that be automated? Using git I can at least get a list of those branches:</p>
<pre>
timslaptop:moodle_head tim$ git checkout master
Already on 'master'
timslaptop:moodle_head tim$ git branch --merged
MDL-12345
MDL-23456
* master
</pre>
<p>Those are the branches that are included in <tt>master</tt>, and so are presumably ones that have already been integrated. It is a bit irritating that the <tt>master</tt> branch itself is included in the list, but I can get rid of it using the standard command <tt>grep</tt>:</p>
<pre>
timslaptop:moodle_head tim$ git branch --merged | grep -v master
MDL-12345
MDL-23456
</pre>
<p>I have a list of branches to delete, but how can I actually delete them? I need to execute a command for each of those branch names. Once again, we find that shell scripting was developed by hackers, for hackers. The command <a href="http://pubs.opengroup.org/onlinepubs/9699919799/utilities/xargs.html"><tt>xargs</tt></a> does exactly that. <tt>xargs</tt> executes a given command once for each line of input it receives. Feeding in the list of branches, and getting it to execute the <tt>killbranch</tt> command looks like this:</p>
<pre>
git branch --merged | grep -v master | xargs -I "{}" git killbranch "{}"
</pre>
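<p>Before wiring a destructive command into a pipeline like this, it is worth doing a dry run, with <tt>echo</tt> standing in for the dangerous part. The plumbing can be tested without git at all:</p>

```shell
# Simulate 'git branch --merged' output; echo replaces the destructive
# command, so nothing is actually deleted.
dry_run=$(printf 'MDL-12345\nMDL-23456\nmaster\n' | grep -v master |
    xargs -I "{}" echo "would delete {}")
echo "$dry_run"
```

<p>That prints one "would delete" line per branch, and confirms that <tt>grep -v</tt> really does keep <tt>master</tt> out of the firing line before you swap the <tt>echo</tt> for the real command.</p>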
<p>Now to make that into an alias</p>
<pre>
killmerged = !sh -c 'git checkout $1 && git branch --merged | grep -v $1 | xargs -I "{}" git killbranch "{}"' -
</pre>
<p>With that in place, <tt>git killmerged master</tt> will delete all my branches that have been integrated into <tt>master</tt>. Note that you can use one alias (<tt>killbranch</tt>) inside another (<tt>killmerged</tt>). That makes it easier to build more complex aliases.</p>
<p>Once I have deleted all the things that were integrated, I am left with the branches I have in progress that have not been integrated yet. Those all need to be rebased, and that can be automated too:</p>
<pre>
updatefix = !sh -c 'git checkout $1 && git rebase $2 && git checkout $2 && git push origin -f $1' -
updatefixes = !sh -c 'git checkout $1 && git branch | grep -v $1 | xargs -I "{}" git updatefix "{}" $1' -
</pre>
<p>With those in place, I just type <tt>git updatefixes master</tt>, and that will rebase all my branches, both locally and on github. Use at your own risk!</p>
<h4>That's all, folks</h4>
<p>To summarise, here is the whole of the <tt>alias</tt> section of my <tt>.gitconfig</tt> file:</p>
<pre>
[alias]
cp = cherry-pick
ff = merge --ff-only
cpfix = !sh -c 'git fetch -p origin && git checkout -b $1_$2 MOODLE_$2_STABLE && git cherry-pick origin/master..origin/$1 && git push origin $1_$2' -
update = !sh -c 'git checkout MOODLE_$1_STABLE && git fetch moodle && git merge --ff-only moodle/MOODLE_$1_STABLE && git push origin MOODLE_$1_STABLE' -
killbranch = !sh -c 'git branch -d $1 && git push origin :$1' -
killmerged = !sh -c 'git checkout $1 && git branch --merged | grep -v $1 | xargs -I "{}" git killbranch "{}"' -
updatefix = !sh -c 'git checkout $1 && git rebase $2 && git checkout $2 && git push origin -f $1' -
updatefixes = !sh -c 'git checkout $1 && git branch | grep -v $1 | xargs -I "{}" git updatefix "{}" $1' -
</pre>
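<p>Incidentally, a quick way to sanity-check what git actually parsed from that file is <tt>git config --get-regexp '^alias\.'</tt>, which lists every alias it can see. Here is a sketch against a throwaway config file (the <tt>--file</tt> option just points git somewhere other than your real <tt>~/.gitconfig</tt>):</p>

```shell
# Write a couple of aliases to a temporary config file and read them back.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[alias]
    cp = cherry-pick
    ff = merge --ff-only
EOF

git config --file "$cfg" --get-regexp '^alias\.'
# Expected listing:
#   alias.cp cherry-pick
#   alias.ff merge --ff-only
```

<p>Run against your real config (no <tt>--file</tt> option) it is a handy way to spot a typo in an alias before it bites.</p>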
<p>There is limited documentation for this on the <a href="http://www.kernel.org/pub/software/scm/git/docs/git-config.html">git config man page</a>. There is more on the <a href="https://git.wiki.kernel.org/index.php/Aliases">git wiki</a>.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com6tag:blogger.com,1999:blog-2247246257923129702.post-6134733061252869312012-08-02T13:17:00.001+01:002012-08-02T13:17:52.624+01:00Standards<p>Standardisation efforts are odd things. Most successful standards seem to have come out of one or a few brilliant individuals, and the standardisation committees only took over after the thing in question became widely adopted. Think of C, C++, Java, HTML, HTTP, JavaScript, SQL… Of course, it is only with hindsight that we know those were successful things, that the people who created them were brilliant, and that it was worth investing effort in a standardisation committee to get different implementations to be interoperable. There are many fewer examples of successful standards that started with a committee. I am sure there are some, but I am failing to think of any right now.</p>
<p>Even when there are standards, that does not magically solve all your problems. Ask any developer about the problems of getting their web site to work on all browsers, particularly Internet Explorer, despite the existence of the HTML, CSS and JS standards; or look at the work Moodle has to do to work with the four databases we support, even though SQL is supposed to be a standardised language.</p>
<p>In theory a standard makes sense. If you have <i>n</i> different systems you want to move data between, then</p>
<ul>
<li>If you go directly from system to system, you would have to write ½<i>n</i>*(<i>n</i>-1) different importers.</li>
<li>Given a common standard, you only need to write <i>n</i> different importers.</li>
</ul>
<p>In practice, different systems have slightly different features, so you cannot perfectly copy data from one system to another. An importer from X to Y is never a perfect thing; it has to fudge some details. Now compare the two ways of handling import:</p>
<ul>
<li>An importer for System Y that directly imports the files output by System X can know all about the details of System X, so it can do the best possible job of dealing with the incompatible features.</li>
<li>Using Standard S, System X has to save its data in format S dealing with any incompatibilities between what X supports and what S supports. Then System Y has to take file S and import it, dealing with any incompatibilities between what S supports and what Y supports, and it has to do that without the benefit of knowing that the data originally came from System X.</li>
</ul>
<p>Therefore, going for direct import is likely to give better results, although at the cost of more work.</p>
<p>The particular case I am thinking about is, of course, moving questions between different eAssessment systems. The only standard that exists is IMS QTI, which has always struck me as the worst sort of product of a committee. It is not widely adopted and it is horribly complicated. Also, if we wanted to make Moodle support QTI, we would have to completely rewrite Moodle to work the way QTI specifies. That is sort-of fair enough. If you want to display HTML like a web browser, you basically have to start from scratch and write your code from the ground up to work the way the HTML, CSS and JavaScript standards say. These standards are not designed to make content interoperate between different existing systems. You need only look at the horrible mess you get when you do Save as… -> HTML in MS Word, or even just copy-and-paste from Word to Moodle, to be convinced of that.</p>
<p>So, QTI is trying to solve the wrong problem. It is trying to be a full-featured standard that you can only support by basing your whole software around what the standard says. We don’t want to rewrite the whole Moodle question engine just to support some standard that hardly anyone else uses yet. We just want to be able to import 99%+ of questions from other systems, and from publishers, that teachers can get access to. The kind of standard we want is more like CSV files. CSV is a nice simple standard for transferring data between spreadsheets and other applications.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8bkAumD3hOZ3qK4p34YF-yUUmDFyw5CSwiYm39U20losiSn0IE7ycSEI7VsymnUCL-HIxQCUSyHnx8tCpVBV7eg8zt0u0vwCDcSfUGqPkkauLLE5TINgeVwNKngwWVuiVddSi5YRN6cU/s1600/Import+formats.png" style="float: right; margin-left: 0.5em;"/>In the past, it has always been easier to write separate importers for each other system Moodle wants to import from, rather than trying to deal with one very complex generic standard like QTI. See the screen-grab of Moodle's import UI to the right. To write a new importer, you just need some example files in the format you want to support, containing a few questions of each type. Then it is easy to write the code to parse that sort of file, and convert the data to the format Moodle expects.</p>
<p>Having said that, the current situation is not perfect. The problem is that most of these other file formats are output by commercial software. Therefore, many developers cannot easily get sample files in those formats to use for developing and testing code. As a result, some of the importers are buggy. We have to rely on people in the community who care enough, and who have access to the software, to create example files for us. There was a good example of that recently: Someone called Rick Jerz from Le Claire, Iowa produced a good example Examview file, and long-time quiz contributor Jean-Michel Vedrine from St Etienne, France used that to <a href="http://tracker.moodle.org/browse/MDL-34483">fix the bugs in the Examview importer</a>.</p>
<p>On the standardisation front, there is a glimmer of hope. IMS Common Cartridge is a standard for moving learning content from one VLE to another. It uses a tiny, and hence manageable, subset of QTI that tries to solve the “transfer 99%+ of the questions teachers use” problem. It should be possible to get Moodle to read and write that QTI subset. We just need someone with the time and inclination to do the work. It is even possible that the OU's <a href="https://openlearn.open.ac.uk/">OpenLearn</a> project will be that someone, but QTI import/export is just one of many things on their to-do list.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com0tag:blogger.com,1999:blog-2247246257923129702.post-54667340434842916942012-06-20T22:24:00.000+01:002012-06-20T22:24:35.928+01:00Interesting workshop about self-assessment tools<p>About 10 days ago, I took part in a very interesting workshop about the use of assessment tools to promote learning:</p>
<p><a href="http://www.heacademy.ac.uk/events/detail/2012/seminars/themes/tw037_ou">Self-assessment: strategies and software to stimulate learning</a></p>
<p>The day was organised by Sally Jordan from the OU, and Tony Gardner-Medwin from UCL, and supported by the HEA, so thanks to all of them for making it happen.</p>
<p>People talked about different assessment tools (not all Moodle), how they were getting students to use them, and in some cases what evidence there was for whether that was effective.</p>
<p>Parts of the event were recorded, and you can now access the recordings at <a class="_blanktarget" href="http://stadium.open.ac.uk/stadia/preview.php?whichevent=1955&s=1">http://stadium.open.ac.uk/stadia/preview.php?whichevent=1955&s=1</a>. There is a total of 3.5 hours of video there, so you may not want to watch it all. My presentation is in Part 3, which also includes the final discussion, all in 30 minutes, and provides a reasonable summary of the day.</p>
<p>Despite having spent the whole day at the event discussing various aspects of self-assessment, I don't think we reached a single definition of what self-assessment is. Actually, I think it is clear that it is not one thing, but a useful way of looking at many different things, from the point of view of what will most help students learn.</p>
<p>One of the tools discussed during the day was <a href="http://peerwise.cs.auckland.ac.nz/">PeerWise</a>. If you have not come across that yet, then you should take a look, because it looks like a very interesting tool. There is a good introduction on Youtube:</p>
<p><iframe allowfullscreen="" frameborder="0" height="300" src="http://www.youtube.com/embed/j1tN006KEWo?rel=0&wmode=transparent" title="YouTube" width="400"></iframe>.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com2tag:blogger.com,1999:blog-2247246257923129702.post-42927083211250565572012-03-29T23:37:00.000+01:002012-03-30T00:07:27.516+01:00Fixing a bug in Moodle core: the mechanics<p>
Several people at work have asked me about this, so I thought I would write it out as a blog post. In this post, I want to focus on the mechanics of using <a href="http://git-scm.com/">git</a> and the <a href="http://tracker.moodle.org/">Moodle tracker</a> to prepare a bug-fix and submit it. Therefore I need a really simple bug. Fortunately one was reported recently: <a href="http://tracker.moodle.org/browse/MDL-32039">MDL-32039</a>. If you go and look at that, you can see that the mistake is that I failed to write coherent English when working on the code to upgrade from Moodle 2.0 to 2.1.</p>
<h4>
What we need to do</h4>
<p>
This bug was introduced in Moodle 2.1, and affects every version since then. Since it is a bug, it needs to be fixed in all supported versions, which means on the 2.1 and 2.2 stable branches, and on the master branch.</p>
<h4>
My development set-up</h4>
<p>
I need to fix then test code on three branches. The way I handle this is to have a separate install of Moodle for each stable branch, and one for master.</p>
<p>
I use <a href="http://www.eclipse.org/">Eclipse</a> as my IDE, so these three copies of Moodle are separate projects in my Eclipse workspace <tt>.../workspace/moodle_head</tt>, <tt>.../workspace/moodle_22</tt>, and <tt>.../workspace/moodle_21</tt>. Each of these folders is a git repository. In each repository, I have two remotes set up, for example:</p>
<pre>timslaptop:moodle_head tim$ <b>git remote -v</b>
moodle git://git.moodle.org/moodle (fetch)
moodle git://git.moodle.org/moodle (push)
origin git@github.com:timhunt/moodle.git (fetch)
origin git@github.com:timhunt/moodle.git (push)</pre>
<p>
One is called <tt>moodle</tt> and points to <a href="http://git.moodle.org/gw?p=moodle.git">the master copy of the code on moodle.org</a>. The other is called <tt>origin</tt> and points to <a href="https://github.com/timhunt/">my area on github</a>, where I publish my changes.</p>
<p>
In order to test the code, I have three different Moodle installs, each pointing at one of these copies of the code. Each install uses a different database prefix in <tt>config.php</tt>, so they can share one database.</p>
<h4>
Getting ready to fix the bug on the master branch</h4>
<p>
The way I normally work is to fix the bug on the master branch first, and then transfer the fix to previous branches.</p>
<p>
The first thing I need to do is to make sure my master branch is up to date, which I do using git:</p>
<pre>timslaptop:workspace tim$ <b>cd moodle_head</b>
timslaptop:moodle_head tim$ <b>git fetch moodle</b>
remote: Counting objects: 1442, done.
remote: Compressing objects: 100% (213/213), done.
remote: Total 817 (delta 624), reused 790 (delta 602)
Receiving objects: 100% (817/817), 197.21 KiB | 111 KiB/s, done.
Resolving deltas: 100% (624/624), completed with 225 local objects.
From git://git.moodle.org/moodle
8925a12..09f011a MOODLE_19_STABLE -> moodle/MOODLE_19_STABLE
1cd62bf..a280d40 MOODLE_20_STABLE -> moodle/MOODLE_20_STABLE
a7899ca..c54172b MOODLE_21_STABLE -> moodle/MOODLE_21_STABLE
a81e8c4..58db57a MOODLE_22_STABLE -> moodle/MOODLE_22_STABLE
c856a1f..a280078 master -> moodle/master
timslaptop:moodle_head tim$ <b>git checkout master</b>
Switched to branch 'master'
timslaptop:moodle_head tim$ <b>git merge --ff-only moodle/master</b>
Updating c856a1f..a280078
Fast-forward
<i>... lots of diff --stat output ...</i>
timslaptop:moodle_head tim$ <b>git push origin master</b>
Everything up-to-date</pre>
<p>
(Why is everything already up-to-date on github? Because I was fixing bugs at work today, and so had already updated my github space from there.)</p>
<p>
Since there were updates, I now need to go to <tt>http://localhost/moodle_head/admin/index.php</tt> and let Moodle upgrade itself.</p>
<h4>
Fixing the bug on the master branch</h4>
<p>
First, I want to create a new branch for this bug fix, starting from where the master branch currently is. My convention is to use the issue id as the branch name, so:</p>
<pre>timslaptop:moodle_head tim$ <b>git checkout -b MDL-32039 master</b>
Switched to a new branch 'MDL-32039'</pre>
<p>
Other people use other conventions. For example they might call the branch <tt>MDL-32039_qeupgradehelper_typos</tt>. That is a much better name. It helps you see immediately what that branch is about, but I am too lazy to type long names like that.</p>
<p>
To fix this bug it is just a matter of going into <tt>admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php</tt> and editing the two strings that were wrong.</p>
<p>
Except that, if I screwed up those two strings, it is quite likely that I made other mistakes nearby. I therefore spent a bit of time proof-reading all the strings in that language file (it is not very long). That was worthwhile. I found and fixed two extra typos. This sort of thing is always worth doing. When you see one bug report, spend a bit of time thinking about and checking whether other similar things are also broken.</p>
<p>
OK, so here is the bug fix:</p>
<pre>timslaptop:moodle_head tim$ <b>git diff -U1</b>
<b>diff --git a/admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php b/admin
index 3010666..7bd7c13 100644
--- a/admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php
+++ b/admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php</b>
<span class="lines">@@ -50,3 +50,3 @@</span> $string['gotoresetlink'] = 'Go to the list of quizzes that can
$string['includedintheupgrade'] = 'Included in the upgrade?';
<span class="remove">-$string['invalidquizid'] = 'Invaid quiz id. Either the quiz does not exist, or </span>
<span class="add">+$string['invalidquizid'] = 'Invalid quiz id. Either the quiz does not exist, or</span>
$string['listpreupgrade'] = 'List quizzes and attempts';
<span class="lines">@@ -57,5 +57,5 @@</span> $string['listtodo_desc'] = 'This will show a report of all the
$string['listtodointro'] = 'These are all the quizzes with attempt data that st
<span class="remove">-$string['listupgraded'] = 'List already upgrade quizzes that can be reset';</span>
<span class="add">+$string['listupgraded'] = 'List already upgraded quizzes that can be reset';</span>
$string['listupgraded_desc'] = 'This will show a report of all the quizzes on t
<span class="remove">-$string['listupgradedintro'] = 'These are all the quizzes that have attempts th</span>
<span class="add">+$string['listupgradedintro'] = 'These are all the quizzes that have attempts th</span>
$string['noquizattempts'] = 'Your site does not have any quiz attempts at all!'
<span class="lines">@@ -82,2 +82,2 @@</span> $string['upgradedsitedetected'] = 'This appears to be a site t
$string['upgradedsiterequired'] = 'This script can only work after the site has
<span class="remove">-$string['veryoldattemtps'] = 'Your site has {$a} quiz attempts that were never </span>
<span class="add">+$string['veryoldattemtps'] = 'Your site has {$a} quiz attempts that were never</span></pre>
<p>
Note that:</p>
<ul>
<li>I would not normally use the <tt>-U1</tt> option. That just makes the output smaller, for the benefit of this blog post.</li>
<li>The diff is chopped off at 80 characters wide, which is the width of my terminal window.</li>
</ul>
<p>
Now I need to test that the fix actually works. I go to Site administration ▶ Question engine upgrade helper in my web browser, and verify that the strings now look OK.</p>
<p>
OK, so I have a good bug-fix and I need to commit it:</p>
<pre>timslaptop:moodle_head tim$ <b>git add admin/tool</b>
timslaptop:moodle_head tim$ <b>git status</b>
# On branch MDL-32039
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# <span class="add">modified: admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php</span>
#
timslaptop:moodle_head tim$ <b>git commit -m "MDL-32039 qeupgradehelper: fix typos in the lang strings"</b>
[MDL-32039 9e45982] MDL-32039 qeupgradehelper: fix typos in the lang strings
1 files changed, 4 insertions(+), 4 deletions(-)</pre>
<p>
Notice that I followed the approved style for Moodle commit comments. First the issue id, then a brief indication of which part of the code is affected, then a colon, then a brief summary of what the fix was. This first line of the commit comment is meant to be less than about 70 characters long, which can be a challenge!</p>
<p>
If this had been a more complex fix, I would probably have added some additional paragraphs to the commit comment to explain things (and so I would have typed the comment in my editor, rather than giving it on the command-line with the <tt>-m</tt> option). In this case, however, the one line commit comment says enough.</p>
<p>
Now I need to publish this change to github so others can see it:</p>
<pre>timslaptop:moodle_head tim$ <b>git push origin MDL-32039</b>
Counting objects: 15, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (8/8), 653 bytes, done.
Total 8 (delta 5), reused 0 (delta 0)
To git@github.com:timhunt/moodle.git
* [new branch] MDL-32039 -> MDL-32039</pre>
<p>
Now I can go to a URL like <a href="https://github.com/timhunt/moodle/compare/master...MDL-32039">https://github.com/timhunt/moodle/compare/master...MDL-32039</a>, and see the bug-fix through the github web interface.</p>
<p>
More to the point, I can go to <a href="http://tracker.moodle.org/browse/MDL-32039">the tracker issue</a>, click <b>Request peer review</b>, and fill in the details of this git branch, including that compare URL.</p>
<p>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvaKtSoxiwtHoAec9z2GHfz01ox7v_T1NPx8Jeh0OjoGN0lgnleJsxZRbPmpcCFu-laCoVobFvbbMF6W7tjrS_zeAOT75nySdddDm5ab04ofC0i9GeDE6qNRPJiYNEvycwpPQe6REtssk/s1600/requestpeerreview.png" /></p>
<p>
If this was a complex fix, I would then wait for someone else to review the changes and confirm that they are OK. In this case, however, the fix is simple and I will just carry on without waiting for a review.</p>
<h4>
Transferring the fix to the 2.2 stable branch</h4>
<p>
So, now I want to apply the same fix to my <tt>moodle_22</tt> code. First I need to update that install. Since this is similar to what we did above to update <tt>master</tt>, I will not show the output of these commands, just what I typed:</p>
<pre>timslaptop:moodle_head tim$ <b>cd ../moodle_22</b>
timslaptop:moodle_22 tim$ <b>git fetch moodle</b>
timslaptop:moodle_22 tim$ <b>git checkout MOODLE_22_STABLE</b>
timslaptop:moodle_22 tim$ <b>git merge --ff-only moodle/MOODLE_22_STABLE</b>
timslaptop:moodle_22 tim$ <b>git push origin MOODLE_22_STABLE</b></pre>
<p>
Then I visit <tt>http://localhost/moodle_22/admin/index.php</tt> to complete the upgrade.</p>
<p>
(This may seem a bit laborious, but look out for a future blog post where I intend to talk about how I automate some of this. I only typed out the commands in full this time because I was writing this blog post.)</p>
<p>
I want to apply the bug-fix I did on <tt>master</tt> on top of <tt>MOODLE_22_STABLE</tt>, and fortunately the command <tt>git cherry-pick</tt> is designed to do exactly that. (Since we are back in new territory, I will start showing the output of commands again.)</p>
<pre>timslaptop:moodle_22 tim$ <b>git fetch -p origin</b>
remote: Counting objects: 1138, done.
remote: Compressing objects: 100% (118/118), done.
remote: Total 586 (delta 451), reused 569 (delta 434)
Receiving objects: 100% (586/586), 189.65 KiB, done.
Resolving deltas: 100% (451/451), completed with 176 local objects.
From github.com:timhunt/moodle
* [new branch] MDL-32039 -> origin/MDL-32039
2117dcb..a280078 master -> origin/master
timslaptop:moodle_22 tim$ <b>git checkout -b MDL-32039_22 MOODLE_22_STABLE</b>
Switched to a new branch 'MDL-32039_22'
timslaptop:moodle_22 tim$ <b>git cherry-pick origin/MDL-32039</b>
[MDL-32039_22 2c92dc7] MDL-32039 qeupgradehelper: fix typos in the lang strings
1 files changed, 4 insertions(+), 4 deletions(-)
timslaptop:moodle_22 tim$ <b>git push origin MDL-32039_22</b>
Counting objects: 15, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 662 bytes, done.
Total 8 (delta 5), reused 3 (delta 1)
To git@github.com:timhunt/moodle.git
* [new branch] MDL-32039_22 -> MDL-32039_22</pre>
<p>
Notice that the convention I use is to append <tt>_22</tt> to the branch name to distinguish the branch for Moodle 2.2 stable from the branch for <tt>master</tt>. Other people use different conventions, but this one is simple and works for me.</p>
<p>
Of course, in the middle of that, I checked that the fix actually worked in Moodle 2.2. In this case, there is not much to worry about, but with more complex changes, you really have to check. For example, the fix you did on the master branch might have used a new API that is not available in Moodle 2.2. In that case, you would have had to redo the fix to work on the stable branch.</p>
<h4>
Transferring the fix to the 2.1 stable branch</h4>
<p>
Now I rinse and repeat for the 2.1 branch. (I will suppress the command output again, until the last command, when something interesting happens.)</p>
<pre>timslaptop:moodle_22 tim$ <b>cd ../moodle_21</b>
timslaptop:moodle_21 tim$ <b>git fetch moodle</b>
timslaptop:moodle_21 tim$ <b>git checkout MOODLE_21_STABLE</b>
timslaptop:moodle_21 tim$ <b>git merge --ff-only moodle/MOODLE_21_STABLE</b>
timslaptop:moodle_21 tim$ <b>git push origin MOODLE_21_STABLE</b></pre>
<pre>timslaptop:moodle_21 tim$ <b>git fetch -p origin</b>
timslaptop:moodle_21 tim$ <b>git checkout -b MDL-32039_21 MOODLE_21_STABLE</b>
timslaptop:moodle_21 tim$ <b>git cherry-pick origin/MDL-32039</b>
error: could not apply 9e45982... MDL-32039 qeupgradehelper: fix typos in the lang strings
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'</pre>
<p>
So, <tt>git cherry-pick</tt> could not automatically apply the bug fix. To see what is going on, I use <tt>git status</tt> to get more information:</p>
<pre>timslaptop:moodle_21 tim$ <b>git status</b>
# On branch MDL-32039_21
# Unmerged paths:
# (use "git add/rm <file>..." as appropriate to mark resolution)
#
# <span class="remove">deleted by us: admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php</span>
#
no changes added to commit (use "git add" and/or "git commit -a")</pre>
<p>
That may or may not make things clear. Fortunately, I know the history behind this. What is going on here is that in Moodle 2.1, this code was in <tt>local/qeupgradehelper</tt>, and in Moodle 2.2 it moved to <tt>admin/tool/qeupgradehelper</tt>, and this confuses git. Therefore, I will have to sort things out myself.</p>
<p>
In this case, we can just move the altered file to the right place:</p>
<pre>timslaptop:moodle_21 tim$ <b>mv admin/tool/qeupgradehelper/lang/en/tool_qeupgradehelper.php local/qeupgradehelper/lang/en/local_qeupgradehelper.php</b></pre>
<p>
Then use <tt>git diff</tt> to check the changes are just what we expect:</p>
<pre>timslaptop:moodle_21 tim$ <b>git diff -U1</b>
<b>diff --git a/local/qeupgradehelper/lang/en/local_qeupgradehelper.php b/local/qeu
index ac883b5..7bd7c13 100644
--- a/local/qeupgradehelper/lang/en/local_qeupgradehelper.php
+++ b/local/qeupgradehelper/lang/en/local_qeupgradehelper.php</b>
<span class="lines">@@ -19,3 +19,3 @@</span>
*
<span class="remove">- * @package local</span>
<span class="add">+ * @package tool</span>
* @subpackage qeupgradehelper
<span class="lines">@@ -50,3 +50,3 @@</span> $string['gotoresetlink'] = 'Go to the list of quizzes that can
$string['includedintheupgrade'] = 'Included in the upgrade?';
<span class="remove">-$string['invalidquizid'] = 'Invaid quiz id. Either the quiz does not exist, or </span>
<span class="add">+$string['invalidquizid'] = 'Invalid quiz id. Either the quiz does not exist, or</span>
$string['listpreupgrade'] = 'List quizzes and attempts';
<span class="lines">@@ -57,5 +57,5 @@</span> $string['listtodo_desc'] = 'This will show a report of all the
$string['listtodointro'] = 'These are all the quizzes with attempt data that st
<span class="remove">-$string['listupgraded'] = 'List already upgrade quizzes that can be reset';</span>
<span class="add">+$string['listupgraded'] = 'List already upgraded quizzes that can be reset';</span>
$string['listupgraded_desc'] = 'This will show a report of all the quizzes on t
<span class="remove">-$string['listupgradedintro'] = 'These are all the quizzes that have attempts th</span>
<span class="add">+$string['listupgradedintro'] = 'These are all the quizzes that have attempts th</span>
$string['noquizattempts'] = 'Your site does not have any quiz attempts at all!'
<span class="lines">@@ -82,2 +82,2 @@</span> $string['upgradedsitedetected'] = 'This appears to be a site t
$string['upgradedsiterequired'] = 'This script can only work after the site has
<span class="remove">-$string['veryoldattemtps'] = 'Your site has {$a} quiz attempts that were never </span>
<span class="add">+$string['veryoldattemtps'] = 'Your site has {$a} quiz attempts that were never </span></pre>
<p>
Actually, you can see that there is one wrong change there (the change to <tt>@package</tt>), so I need to undo it. The easy way would be to edit the file in Eclipse, but I want to show off another git trick:</p>
<pre>timslaptop:moodle_21 tim$ <b>git checkout -p local</b>
<b>diff --git a/local/qeupgradehelper/lang/en/local_qeupgradehelper.php b/local/qeupgradehelper/lang/en/local_qeupgradehelper.php
index ac883b5..7bd7c13 100644
--- a/local/qeupgradehelper/lang/en/local_qeupgradehelper.php
+++ b/local/qeupgradehelper/lang/en/local_qeupgradehelper.php</b>
<span class="lines">@@ -17,7 +17,7 @@</span>
/**
* Question engine upgrade helper langauge strings.
*
<span class="remove">- * @package local</span>
<span class="add">+ * @package tool</span>
* @subpackage qeupgradehelper
* @copyright 2010 The Open University
* @license http://www.gnu.org/copyleft/gpl.html GNU GPL v3 or later
Discard this hunk from worktree [y,n,q,a,d,/,j,J,g,e,?]? <b>y</b>
<span class="lines">@@ -48,16 +48,16 @@</span>
<i>... lots more diff output here ...</i>
Discard this hunk from worktree [y,n,q,a,d,/,K,j,J,g,s,e,?]? <b>q</b></pre>
<p>
Now, <tt>git diff</tt> will confirm that the change is just what we want, so we can test the change, and then finish up (output suppressed again):</p>
<pre>timslaptop:moodle_21 tim$ <b>git add local</b>
timslaptop:moodle_21 tim$ <b>git commit</b>
timslaptop:moodle_21 tim$ <b>git push origin MDL-32039_21</b></pre>
<h4>
Submitting the fix for integration</h4>
<p>
Now I have tested versions of the fix on all three of the branches where the bug needed to be fixed. So, I can go back to the Tracker issue and submit it for integration.</p>
<p>
When I get to <a href="http://tracker.moodle.org/browse/MDL-32039">the bug</a>, I see that Jim Tittsler, who reported the bug, has seen my request for peer review, and added a comment:</p>
<p>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP9LTOiHHJUJrdzNoFcdpvXO5U1qq2tgpgcLkNsSpjAVyWKQR4b7nvnW8QKfvmhMp8ap07r1tXDU8xHgx9vBszL0LP3CjKibCJtPjAq8oTEtyqS9N6nzaZJlzeFIL_xhSgeWfoLH_ilYM/s1600/issuecomment.png" /></p>
<p>
Fantastic! It really makes a difference when people follow up on the bugs they report, and supply extra information, or just say thank you. In this case, although I intended to carry on without a peer review, I got one.</p>
<p>
Now I press the <b>Submit for integration...</b> button, and fill in the fix version, the details of the other branches, and, most importantly, write some testing instructions, so that someone else can test the bug next Wednesday as part of the <a href="http://docs.moodle.org/dev/Process">integration process</a>.</p>
<p>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0kAfmFAyxkz8D3wrwjxgJvuL19Xr2TF4Y6mW_QfV-P7BptjoM73eyROQGSOeHpKBlW54q5ZrguiC_SvnepQeMd4F9Pd5WD3-MaFpAphf6clD7sCknxqH4RWWixqE09xCc9jshrGvw18Y/s1600/sumitforintegration.png" /></p>
<p>
Finally, we are done. Now we sit back and wait for next Monday, when the weekly integration cycle starts. Our change will be reviewed, tested, and, all being well, included in the next weekly build of Moodle.</p>
<h4>
Reflection</h4>
<p>
Is this an overly laborious process? Well, it is if you try to describe every detail in a blog post! In normal circumstances, however, it really does not take long: I could probably have done this fix in ten to fifteen minutes.</p>
<p>
What usually takes the time is thinking about the problem, and writing and testing the code. This takes much longer than typing the git commands and completing the tracker issue. Writing the testing instructions can be laborious, particularly if it is a complex issue, but that is normally time well spent. It forces you to think carefully about the changes you have made, and what needs to be done to verify that they fix the bug that was reported without breaking anything else. As I said in my <a href="http://tjhunt.blogspot.co.uk/2012/02/1-2-12-testing-testing.html">previous blog post</a>, I think testing instructions are a really good discipline.</p>
<p>
I hope this rather long blog post was interesting, or at least useful to somebody.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com10tag:blogger.com,1999:blog-2247246257923129702.post-65671385546165770772012-02-01T23:49:00.000+00:002012-02-01T23:49:42.759+00:001 - 2 - 12, testing, testing<p>Apologies for the title, but it was irresistible today, and provides a good opportunity to talk about one of the fields in the <a href="http://tracker.moodle.org/">Moodle issue tracker</a>. If you go and look at any issue there, for example <a href="http://tracker.moodle.org/browse/MDL-30854">MDL-30854</a>, you can see it along side all the more common fields that most bug tracking systems have to describe issues. It is a good field, so I hope it will also become common in other bug-tracking systems. Allow me to explain why.</p><p>I don't think it is a good field all the time. When I just want to do a simple quick bug-fix, it is irritating to have another field to fill in before submitting the patch for integration. You have to think things like</p><ul><li>How can someone else verify that my patch fixes what was wrong?</li>
<li>What else might this change have broken? How can someone quickly check that there are no obvious regressions?</li>
</ul><p>and that is the whole point! Anything that makes you stop and think about the important questions is good. Having to write it down in a few words, while taking a bit of time, forces you to think clearly. After all, "<a href="http://docs.moodle.org/22/en/Pedagogy#Social_Constructionism_as_a_Referent">We learn particularly well from the act of creating or expressing something for others to see</a>."</p><p>The testing instructions can also help us think about other important questions. You have to describe how to test things through the Moodle interface, so<br />
<ul><li>How does this feature look in the user-interface?</li>
<li>How do users interact with it?</li>
</ul>These are important questions that it is easy to forget when worrying about the technicalities of the code.</p><p>Writing the testing instructions is also a salutary reminder that you might want to actually test the code yourself before submitting it for review. I encountered that phenomenon today, writing the testing instructions for <a href="http://tracker.moodle.org/browse/MDL-31445">MDL-31445</a>. I was thinking about what might go wrong, and realised that the HTML might be invalid as a result of the change. So I went and validated, and found not only that there was a problem with the first version of the patch, which I fixed, but I also found and fixed <a href="http://tracker.moodle.org/browse/MDL-31469">MDL-31469</a>.</p><p>So: The Testing instructions field. Easily worth the hassle of filling it in. Consider adding it to your own bug-tracking systems - assuming you have a process where someone independent tests every bug fix. If you don't have that ...</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com3tag:blogger.com,1999:blog-2247246257923129702.post-796516316711866352011-09-27T00:10:00.000+01:002011-09-27T00:10:05.612+01:00What I want to build next<p>Earlier this summer I finally finished the new Moodle question engine, which was released as part of Moodle 2.1. As you might expect with such a large change, a number of minor bugs were not spotted until after the release, but I (and others) have fixed quite a lot of them, and we will continue to fix more. I want to say "thank you" to everyone who has taken the time to report the problems they encountered. Pleasingly, some people, including Henning Bostelmann, Tony Levi, Pierre Pichet, Jamie Pratt, Joseph Rézeau and Jean-Michel Vedrine have not only been sending in bug reports, but also submitting bug fixes. I would like to thank them in particular. 
I don't know whether this means that the new <a href="http://docs.moodle.org/dev/Process">Moodle development processes</a> are working well and encouraging more contributors, or that I released the new question engine full of trivial bugs.</p>
<p>At the moment, apart from fixing bugs, we are about two months away from the end of the OU's one-year project to move from Moodle 1.9 to 2.x and implement a lot of new features at the same time. In the eAssessment area, we had about 30 work-packages to do, of which finishing the question engine was by far the biggest, and we have about 6 left to go. Most of the remaining tasks are at least started, but finishing them is what I, and the developers on my team, will be doing in the near future.</p>
<p>I have, however, been thinking ahead a bit, and I have an idea for what I would like to build, should I be given the opportunity. Honesty compels me to say these are not my ideas. I stole them from other people, and there are proper acknowledgements at the end of this post. I wanted to post about this because: 1. in my experience, if you post about your half-baked ideas, people will be able to suggest ways to make them better; and 2. I am hoping that at least one course-team at the OU will see this and say "we would love to use this in our teaching" because that might persuade the powers that be to let me build this.</p>
<h4>Rationale</h4>
<p><img border="0" width="320" style="margin-left: 1em; float: right; clear: right;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWg1ZpStoHb5xv87tIYKWViKorAIHer1SfGs3fSGHOjYwCauYiHA6Qckuu3TauV0hu0z_tr4HT26i1Fj570VC6tLQW84RANN84ppn3NJDpg5O51lU5MQxOl9PNcKqTGst1E29X_KK2GOs/s1600/view.png" />
The Moodle quiz is a highly structured, teacher-controlled tool for building activities where students attempt questions. What I want to create is a more open activity where students can take charge of their learning using a bank of questions to practice some skill where the computer can mark their efforts and give feedback. For the sake of argument, I have been calling this the "Question practice" activity module.</p>
<h4>The entry page</h4>
<p>When a student goes into a Question practice activity, they see a front screen that lists all the categories in the question bank for this activity.</p>
<p>Next to each category, there are statistics for how the student has performed on that category so far. For example, it might say "recently you scored 19/21 (90%); all time you scored 66/77 (86%)." The categories are nested, and there is a subtotal for each category.</p>
<p>At the bottom of the page is an <b>Attempt some questions…</b> button. This takes the student to the …</p>
<h4>Start a session form</h4>
<p><img border="0" width="320" style="margin-left: 1em; float: right; clear: right;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRhIrrQ2B-cl3Q-uZ0IaNCpmLbPAdcnCHo98iTrQ8wPhpRQdQnGv8IQIEpPjU45DQzu5yjJj0UX9zf1qIZZRqik_7UkNgnuw_6vu6bLvd_1gisJCkzE6xcSEfk8nBshS_Yv60YCfznwrM/s1600/start.png" />
… where they set up what practice they would like to do. Students can select which categories they want to attempt questions from. They may also be able to choose how many questions they want. For example "Give me 10 questions", "As many as possible in 20 minutes", or "Keep going until I say stop". The teacher will probably be able to constrain the range of options available here.</p>
<p>Once they are satisfied, they click the "Start session" button. This takes them to the …</p>
<h4>Attempt page</h4>
<p><img border="0" width="320" style="margin-left: 1em; float: right; clear: right;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeOrrS_3sGSa2EwcJSoQTDqIrnv2GbB14WAX6nwmnpg_ITo6JXRX6ON5bbU4-3w5Kip5hd2WRIa5KXm8qBGU9_g5uhMrgBUuxmWwRkJr2es4_hwWvZsbSP5Iwa8366dFMtQM5Ex7xmJDw/s1600/attempt.png" />
… which shows the student the first question, chosen according to the criteria they set. There will probably be a display of running statistics "In this session you have attempted 0 questions so far". The question will contain the controls necessary for attempting the question. There will also probably be a "Please stop, I'm bored" button, so the student can leave at any time.</p>
<p>When they get back to the front page, the statistics will have been updated.</p>
<p>If the student crashes out of a session, then when they go back in, the front page will have a "Continue current session" button.</p>
<h4>Overall activity log</h4>
<p>One batch of attempting questions will be called a 'practice session'. The system will keep track of all the sessions that the student has done, and what they achieved during each session.</p>
<p>The front page will have a link to a page that lists all of the student's sessions, showing what they achieved in each. This provides more detail than is visible on the front page.</p>
<h4>Possible extensions</h4>
<p>That is the key idea. Here are some further things that could be added to the basic concept.</p>
<h4>Milestones</h4>
<p>The system could recognise targets, goals, or achievements (I'm not sure of the best name). That would be something like "Attempt more than 10 questions from the Hard category, and score more than 90%". If the student achieves that target at any time, the system would notice, and the achievement would be recorded on the front page and in the session log in an ego-boosting way (e.g. a medal icon).</p>
<p>The whole point of this activity is to be as student-driven as possible, so should students be able to define their own targets or goals? Should students be able to set goals for each other?</p>
<h4>Locks / Conditional access</h4>
<p>The activity could also have locks, so that the student cannot access the questions in the Multiplication category until after they have scored more than 80% in the Hard addition category. Of course, unlocking a new category could be an achievement. We appear to be flirting with the gamification buzz-word here, so I will stop.</p>
<h4>Performance comparison</h4>
<p>Should there be any way for students to compare their performance, or achievements, with their peers? We are definitely getting to features that should be left until version 2.0. Let's get a basic system working first, but make sure it is extensible.</p>
<h4>How hard would this be to build?</h4>
<p>I think this would not require too much work because a lot of the necessary building blocks already exist in Moodle. The question bank already handles questions organised into categories, and we would just use that. Similarly, the attempt page and practice sessions are very easy to manage with the new question engine.</p>
<p>The real work is in two places. First, building the start attempt form, and then writing the code that randomly selects questions based on the options chosen. Second, deciding what statistics to compute, and then writing the code to compute them.</p>
<p>Of course, before we can start writing any code, there are still a lot of details of the design to decide. Also one must not forget things like backup and restore, creating the database, and all the usual Moodle plumbing.</p>
<p>Overall, I think it would take a few months work to get a really useful activity built.</p>
<h4>Credit where credit is due</h4>
<p>I said earlier that I got most of these ideas from other people. To start with, things like this have been mooted in the <a href="http://moodle.org/mod/forum/view.php?id=737">Moodle quiz forum</a> over the years. The discussions there usually start from Computerised Adaptive Testing, whereas this idea is about student-driven use of questions. I think the latter is more interesting. (As a mathematician, I think CAT is an interesting concept. I just don't think it would make a useful Moodle activity.)</p>
<p>The real inspiration for this came at a meeting in London at the start of 2011. That meeting was at UCL with Tony Gardiner-Medwin who has already <a href="http://www.ucl.ac.uk/lapt/">built a system something like this</a>, but stand-alone, not in Moodle; and David Emmett from University of Queensland, Brisbane (who was <a href="http://blogs.ucl.ac.uk/ltss/tag/david-emmett/">giving a seminar</a>). David had been hoping to get a grant to build something like this proposal (in Moodle) but that did not pan out. We did, however, have a very interesting discussion, and that is where I got the key idea that this sort of question practice was most interesting if you could give the student control of their own learning as much as possible.</p>
<p>We have also discussed ideas like this on-and-off for a long time at the OU. There have, however, been a lot of other things we needed to deal with first. We had to do a lot of work getting the quiz system working to our satisfaction (a strand of work that eventually led to the new question engine). We had to sort out the reporting of grades, including working with <a href="http://moodle.com/hq/">Moodle HQ</a> on the new gradebook in Moodle 1.9, and integrating Moodle with our student information system. We had to make new question types that our users wanted. Only now can we start to think seriously about the last piece of the jigsaw: more activities that use all the question infrastructure we have built. I hope this post is a useful starting point for discussing what one of those activities might be.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com9tag:blogger.com,1999:blog-2247246257923129702.post-88060635519486269312011-08-06T14:37:00.001+01:002011-08-08T17:06:55.035+01:00The Good, the Bad and the Ugly<p>It was the best of times, it was the worst of times, ... It has certainly been a mixed week.</p>
<h4>The good</h4>
<p>... was that I helped three OU developers submit their first bug fix through the new <a href="http://docs.moodle.org/dev/Process">Moodle development process</a>: <a href="http://tracker.moodle.org/browse/MDL-27631">MDL-27631</a>, <a href="http://tracker.moodle.org/browse/MDL-28517">MDL-28517</a> and <a href="http://tracker.moodle.org/browse/MDL-28620">MDL-28620</a>. Hopefully those fixes all get through integration review next week.</p>
<h4>The Bad</h4>
<p>... was that the time had finally come to deal with a hot potato that we had been tossing around for some months; and, to mix metaphors, when the buck stopped, I was in the wrong place at the wrong time.</p>
<p>As part of some new question types we are developing, we want students to be able to type responses that include superscripts and subscripts. For example <b>3×10<sup>8</sup> ms<sup>-1</sup></b> or <b>SO<sub>4</sub><sup>2-</sup></b>. We have an old implementation of this, done six years ago for OpenMark (for example <a href="http://www.open.ac.uk/openmarkexamples/p2_4.shtml">this</a> or <a href="http://www.open.ac.uk/openmarkexamples/p3_3.shtml">this</a>), but that never worked in Safari, and is a bit dodgy generally. We want a new, reliable implementation that works in IE, Firefox, Chrome and Safari.</p>
<p><b>Plan A</b>: back near the start of spring, I quickly knocked up a partial solution using the <a href="http://developer.yahoo.com/yui/2/">YUI 2</a> Rich Text Editor library. It mostly worked, but there were issues. It did not work consistently across browsers, and it let you nest superscripts inside subscripts inside superscripts, which just gets confusing, so we wanted to prevent that.</p>
<p>I had a sneaking suspicion how hard it would be to get from my quick partial solution to a robust implementation. Therefore I moved on to other things, and tried to unload this job onto three other people in turn. There were plenty of other more urgent tasks on our todo list.</p>
<p>Time passed, and many of the other things got done, so at the start of the week I realised that creating this input widget could not be put off any longer. I also felt it was unfair to expect other developers to deal with a crappy job that I was not prepared to do myself, so I decided to have another go.</p>
<p>The other thing that had changed was that, while attempting to implement this, <a href="http://colchambers.blogspot.com/">Colin</a> and Wale had both eliminated some blind alleys, and suggested some promising ideas. Therefore, I was continuing from a far better place than where I left off. Even so, it was a long week.</p>
<h4>The Ugly</h4>
<p><b>Plan B</b>: Although we had a partial implementation in YUI 2, I did not want to continue with that. Moodle is trying to move away from YUI 2 to YUI 3 as soon as possible. So, my first attempt was to use the <a href="http://developer.yahoo.com/yui/3/editor/">YUI 3(.3) Rich Text Editor</a>. As the docs make clear, that is not finished yet. It is also not terribly well documented. With hindsight, I now realise that it provides only a very thin wrapper around the native editing facilities provided by web browsers. Therefore it does not really help with browser inconsistencies.</p>
<p><b>Plan C</b>: Since the Rich Text Editor is only beta, I decided to have a look at what was new in YUI 3.4, which is due for release soon. The answer is that they have made quite a lot of progress - in the sense that if you are trying to walk from London to Edinburgh, you have made quite a lot of progress <a href="http://maps.google.com/maps?q=from:London+to:Milton+Keynes+to:Leeds+to:Brampton,+UK+to:Edinburgh&saddr=London&daddr=Milton+Keynes+to:Leeds+to:Brampton,+UK+to:Edinburgh&hl=en&ll=53.566414,-1.647949&spn=7.39018,14.282227&sll=54.908988,-2.647018&sspn=0.213552,0.44632&geocode=FXjUEQMd5BL-_yl13iGvC6DYRzGZKtXdWjqWUg%3BFdgZGgMdRD_1_ymFhiOC-Ex2SDEUfrlM950aFg%3BFUZ9NQMdG4np_ymZvWTaSj55SDGp3BMC_bqtUQ%3BFZFZRgMd8TfW_yltrGVMSAF9SDE1WU8_nC9tSw%3BFWC7VQMdsFzP_ykjJpilALiHSDEnF-d8exTyZA&dirflg=w&doflg=ptk&t=h&z=6">by the time you reach Milton Keynes</a>. Compounded with the fact that it is hard to find any documentation for pre-release versions of YUI, this approach also failed.</p>
<p>At this point, I decided to do a bit of reading. I found <a href="http://dev.opera.com/articles/view/rich-html-editing-in-the-browser-part-1/">two excellent</a> <a href="http://dev.opera.com/articles/view/rich-html-editing-in-the-browser-part-2/">articles from Opera</a> that explained exactly how <tt>contentEditable</tt> works in web browsers. I also found a good <a href="http://www.quirksmode.org/dom/execCommand.html">cross-browser compatibility table</a>. That made me realise that YUI was hardly doing anything to help with cross-browser differences.</p>
<p><b>Plan D</b>: Now that I knew roughly what the browsers were doing, I briefly toyed with the idea of implementing the widget entirely myself in plain JavaScript. Once again, getting something basic working was not too hard, but I had not even started to tackle the cross-browser differences.</p>
<p>Then I realised that <a href="http://www.tinymce.com/">TinyMCE</a>, which Moodle uses, tends to work really well across browsers. It is a bit slow to load, because it is a huge mass of code, but perhaps all that code is there for a reason. A quick play with superscript and subscript in the Moodle HTML editor in various browsers confirmed that TinyMCE must be working around most of the problems. So I dived into the TinyMCE code with the original idea of stealing the bits I needed to make Plan D work. It did not take much looking for me to develop a new-found respect for how hard TinyMCE is working to keep different web browsers in line. I did not want to have to replicate all that.</p>
<p><b>Plan E</b>: So I finally concluded I should just use TinyMCE directly. That is using a very large sledge-hammer to crack a nut, but at least it should work. Indeed, it was mostly just a matter of setting the right configuration options. What made it particularly good is that there is <a href="http://www.tinymce.com/wiki.php/Configuration:valid_children">an option to limit which tags can be nested inside other tags</a>. That robustly prevents people from nesting superscript inside subscript, etc.</p>
<p>I was very nearly there, but there were two more requirements. We did not want pressing enter to insert a line-break, and because we were only dealing with a single line of input, we wanted to use the up and down arrow keys as shortcuts for superscript and subscript. The only way I could find to do that was to write a simple TinyMCE plugin. Fortunately, <a href="http://www.tinymce.com/wiki.php/Creating_a_plugin">that is well documented</a>.</p>
<h4>The end</h4>
<p>I got there eventually. <a href="https://github.com/timhunt/moodle-editor_supsub">The code</a> needs to be cleaned up, tested some more, and integrated into the question types we are building, but I don't foresee any problems doing that.</p>
<p>I would like to thank <a href="http://www.moxiecode.com/">Moxie Code</a>, who make TinyMCE, even though they have completely ignored <a href="http://www.tinymce.com/develop/bugtracker_view.php?id=3789">the patch I sent them</a> some time ago in relation to <a href="http://tracker.moodle.org/browse/MDL-27890">MDL-27890</a>. I would also like to thank Olav Junker Kjær, who wrote the Opera articles, which were the most useful thing I read. Also, the team behind <a href="http://getfirebug.com/">Firebug</a>. I can't imagine doing JavaScript development without that debugging tool. Finally, Colin, Wale and Jamie, who I tried to dump this on, and who in return gave me helpful ideas.</p>
<p>Exactly how many core changes we had made in 1.9 seems to depend on who you ask. It was something of the order of one or two thousand, depending on how you count. As a result, every time there is a new Moodle 1.9.x release, someone (Derek) has to do a couple of days' painstaking merging to upgrade to the new version.</p>
<p>Moodle 2.1 was released on Friday. On Monday afternoon we decided to try upgrading our development branch to it. The merge (literally <tt>git merge MOODLE_21_STABLE</tt>) only took a few hours, and that was mostly a matter of thinking before typing <tt>git checkout --theirs</tt> to resolve most of the conflicts in favour of the Moodle 2.1 code. Then we had to test install, upgrade, and basic functionality before pushing the merge to our central repository.</p>
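<p>For anyone unfamiliar with that conflict-resolution trick, here is a sketch of it in a throwaway repository. The file name and file contents are invented for illustration; only <tt>git merge MOODLE_21_STABLE</tt> and <tt>git checkout --theirs</tt> come from the workflow described above.</p>

```shell
# Demonstrate resolving a merge conflict in favour of the incoming
# branch with `git checkout --theirs`. All names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

echo 'original code' > lib.php
git add lib.php
git commit -qm 'base'
branch=$(git symbolic-ref --short HEAD)

# The upstream branch changes the file...
git checkout -qb MOODLE_21_STABLE
echo 'moodle 2.1 code' > lib.php
git commit -qam 'upstream changes'

# ...and so does our local branch.
git checkout -q "$branch"
echo 'ou-modified code' > lib.php
git commit -qam 'OU-specific change'

# The merge conflicts; take the incoming (Moodle 2.1) version.
git merge MOODLE_21_STABLE || true
git checkout --theirs lib.php
git add lib.php
git commit -qm 'Merge MOODLE_21_STABLE'
cat lib.php
```

<p>After the final commit, <tt>lib.php</tt> contains the Moodle 2.1 version of the file, which is exactly what resolving "in favour of the Moodle 2.1 code" means.</p>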
<p>But, how many OU-specific changes do we have in core code right now? Well, the answer appears to be eight. Let me explain that number.</p>
<p>To control the core code changes, we use a simple approval procedure. Each change must be proposed by one of the leading developers. They do this by opening a special sort of ticket in our issue tracking system. That serves as a permanent record of the change, and is also a place to log any discussion. The other leading developers then review the proposal. For the change to be approved, at least one other leading developer must endorse it with a +1 vote, and there must not be any -1 votes. Votes are normally accompanied by an explanation of why that developer is voting that way.</p>
<p>After a suitable time for votes, the issue is declared either accepted or rejected. OU-specific changes can be rejected for two reasons: we may decide that it is not acceptable to change core to implement this feature, so we drop the feature; or we think of some devious way to achieve the feature without changing core code.</p>
<p>If a change is approved, then the code is written. Well, in some cases the code will already have been written, because you can have a much more informed debate about whether a certain change is a good idea if you can see exactly what the proposed change is. Once the code is written and approved, it is committed to our git repository and the issue moves into state 'Code committed'. Finally, we may find a way to get rid of the OU-specific change in future. The most common way that happens is if we contribute the change upstream to moodle.org. For example the new Moodle question engine was an OU-specific change as long as we were using it in Moodle 2.0, but now we have upgraded to Moodle 2.1, it is standard code. Therefore, that issue has now changed status to 'No longer required'.</p>
<p>Overall, we have 22 OU-specific change issues in our bug tracker. The break-down is:</p>
<p>Rejected: 2<br />
New (under discussion): 4<br />
Approved (but not yet implemented): 1<br />
Code committed: 8<br />
No longer required: 7</p>
<p>Most of the 'Code committed' changes are pretty boring. For example, three of them are bug-fixes to the questionnaire module that we have submitted upstream, but which have not been reviewed and accepted by the questionnaire maintainers yet. Therefore, those three will almost certainly end up as 'No longer required' in due course. Another example is that we want to customise the "Database connection failed / It is possible that the database is overloaded or otherwise not running properly" page that you get when Moodle fails to connect to the database. If Moodle can't connect to the database, then it cannot load the configuration, and so cannot determine which theme to use to display the error. Therefore, the only way to customise that page is to edit lib/setup.php.</p>
<p>The one 'serious' OU-specific change we have is some hacking around in course/lib.php to support one of our custom modules called 'subpage' (not released yet, but we hope to share it eventually). Given more time, we might be able to find a more elegant way to handle these changes, but we don't have that sort of time at the moment.</p>
<p>While we have controlled the core code changes, we have written a lot of custom plugins. Those range from big things like forumng and ouwiki, to small things like a local plugin that just implements a single web-service function. I'm afraid I don't have a complete list, but we must have more than 50 plugins by now. As far as I am aware, the upgrade from 2.0 to 2.1 did not break any of them.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com1tag:blogger.com,1999:blog-2247246257923129702.post-74703105912284425032011-04-12T09:25:00.001+01:002011-04-12T09:25:47.827+01:00Performance and Scalability<p>When you set up a web application, often you will start small with everything running on one server. Everything, in this case, typically means the application and the database and data files. That is nice and simple. It has the advantage that everything is fast because it is all on the one server.</p>
<p>The capacity is, however, limited. Suppose the load on your application increases. You can get some way just by upgrading the one server, adding more memory and faster processors, but that will only get you so far.</p>
<p>Eventually, you will have to scale out. You will get a number of separate web servers, with a load-balancer to distribute the incoming requests between them. All the web servers will connect to a shared database server, or cluster of database servers. The files will probably go on a separate file server.</p>
<p>While this increases the total load that the whole system can support, it means, paradoxically, that processing a single request is slower. For example, if you switch from one server to three servers (application server, database and files) your site will not support three times as many users. The scalability will not be linear. That is because every database query or file access now has to travel over the network. Accessing something across a network tends to be an order of magnitude slower than accessing something on the same server.</p>
<p>The above is all standard knowledge about scaling web applications. I have been thinking about it recently because it explains the way my working life has been evolving. Just over six months ago I was working essentially on my own, <a href="http://docs.moodle.org/en/Development:Question_Engine_2">re-developing the Moodle question engine</a>. I had been working away like that for a year, and I had got a lot done.</p>
<p>Since then, things have changed, and I am now managing a team including three other developers, and two out-sourced development contracts. It has been particularly 'bad' this last couple of weeks as one development period of the project came to an end and I had to review a lot of code, and then I had to sort out everything we were supposed to be doing for the next three months. I am starting to wonder if I will ever get any of my own development work done at all!</p>
<p>That is, however, just some exasperation showing. I know really that this has just been a brief spell with an excessive amount of administration. Overall I am happy that the OU is investing so much in its eAssessment systems (and the other parts of its VLE); as a team we are achieving more than I could on my own; but right now my inner geek would really like to go and hide in a cave for a while and just write code undisturbed.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com3tag:blogger.com,1999:blog-2247246257923129702.post-50613825807832889852011-03-09T23:26:00.000+00:002011-03-09T23:26:18.739+00:00Moodle bug tracker<p>Today, between fixing bugs and reviewing code, I spent a bit of time tinkering with my dashboard in the Moodle bug tracker. I was trying to make it as clear as possible which issues need my attention. I am quite pleased with the result:</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheVHiNzXiXKbT3N1aDYa3M7PZZ_d8wQz2os7kRcVtLHxaNuyEZe40YhTZWtRkGh2iTD3ZYj7fCLq6wAjmvSPV3QJxgyziT3_8M6ifWzSZMpWNXut8qZPZBSSAq5MLqvBf6l8QogvdF5go/s1600/tracker.png" alt="tracker screen grab"/></p>
<p>The issue statistics widget does not just show you the pretty graphs, it also makes it easy to get at those issues. For example, if I click on <b>1.9.12</b> in the <b>My targetted issues</b> box, then I am taken to a list of those 11 issues. That particular widget I have used for a while; the new parts are the boxes just under there.</p>
<p><b>My: Ongoing pull requests</b> I added to make it easy to find the things I have submitted for inclusion in next week's weekly build (hopefully). Thanks to Eloy, that filter is now available to everyone in the jira-developers group.</p>
<p>The next two boxes let me quickly get to issues with patches attached. There is an emerging convention of adding the label <b>patch</b> to such issues, where the attached code needs to be reviewed. This makes finding such issues very much easier. The whole point of the new <a href="http://docs.moodle.org/en/Development:Process">development processes</a> is to encourage more people to contribute patches, and then ensure those patches get looked at, rather than just sitting there for years. (Here is an example I found yesterday of what used to happen: <a href="http://tracker.moodle.org/browse/MDL-13983">MDL-13983</a>). Therefore, as quiz maintainer, I need to be able to see easily if anyone has submitted any relevant patches. I also want easy access to bugs with patches that I created or commented on.</p>
<p>Having brought it up, can I say that I am quite happy with how the new processes are working so far. My impression is that since they were introduced, I have received more usable bug fixes for the quiz than in the past. I am not sure how much causality one can claim there, however, since as well as the new processes, we also had the Moodle 2.0 release. Moodle 2.0 has plenty of minor bugs that are ripe for fixing. So, it may just be that we are seeing lots of bug fixes because there are lots of bugs.</p>
<p>At the other end, it has made it a bit easier to get my code reviewed. Well, finished code where I have created a PULL request certainly gets reviewed. It is still sometimes a problem to get comments on work-in-progress, because everyone is so busy.</p>Tim Hunthttp://www.blogger.com/profile/01349724348368316287noreply@blogger.com0