Thursday, March 2, 2017

The online assessment Turing test

In the back-channel for yesterday's Transforming Assessment webinar (which I would recommend) Geoff Crisp asked me:

"Tim - what about the Turing test - what if a student could not tell the difference between a computer giving them feedback and the teacher?"

I think this is a really nice question. Food for some quite wide-ranging thoughts about what online assessment should be.

On the whole, I stand by the snap answer I gave at the time. Computers and human markers (at least currently) have different strengths. The computer (having been set up by a human teacher) can be there at any time the student wants, able to give immediate feedback on a range of more or less basic practice activities. A human teacher is only available at certain times, but is able to give feedback in a more holistic way. They may know the student, and have some concept of how their subject is best learned, and on that basis give the student some really meaningful advice about how best to improve.

I know there is adaptive learning hype about computers being able to know the students and therefore offer contextual advice, but I will believe that when (if) I see it.

If you are thinking about designing a course today, you are much better off understanding the strengths and weaknesses of both computer-marked and conventional assessment, and using each for where they work best. There is currently nothing to be gained by trying to hide where you are using computer marking.

I think a reasonable analogy is with searching for information. You might do a Google search, which will give you the kind of results that a computer can give. Alternatively, you could ask a friend who knows more about the subject, and they will give you a different sort of advice about what to read. Neither is necessarily better. In some cases one of the two approaches might be clearly more appropriate. In other cases either would do. If you really want to understand something in depth, you probably want to use both approaches, and it is an advantage that each will give different results that help in different ways.

If we are trying to create self-regulating learners, then it can be a merit that a computer only gives basic templated feedback, which could be as little as just right/wrong. The learner needs to do more work themselves to get from the feedback to an action they can take to improve. This is not always a benefit, but it could be.

So, while the idea of an assessment Turing test is usefully thought-provoking, I don't think it is educationally useful, at least not for the foreseeable future. Having said that, the nicest thing anyone said about an online assessment system I helped build is still

"It's like having a tutor at your elbow."

The key word there is "like", which is not the same as "indistinguishable from".

Thursday, June 25, 2015

The Assessment in Higher Education conference 2015

I am writing this on a sunny evening, sitting in a pub overlooking Old Turn Junction, part of the Birmingham Canal Navigations, with a well-earned beer after two fascinating and exhausting days at the Assessment in Higher Education conference.

It was a lovely conference. The organising committee had set out to try to make it friendly and welcoming and they succeeded. There was a huge range of interesting talks and since I could not clone myself I was not able to go to them all. I am not going to describe individual talks in detail, but rather draw out what seemed to me to be the common themes.

A. It is all just assessment

The first keynote speaker (Maddalena Taras) said this directly, and there were a couple of other things along the same lines: the split between formative and summative assessment is a false dichotomy. If an assessment does not actually evaluate the students (give them a grade, hence summative) then it misses the main function of an assessment. This is not the same as saying that every assessment must be high stakes. Conversely, in the words of a quote Sally reminded me of:

“As I have noted, summative assessment is itself ‘formative’. It cannot help but be formative. That is not an issue. At issue is whether that formative potential of summative assessment is lethal or emancipatory. Does formative assessment exert its power to discipline and control, a power so possibly lethal that the student may be wounded for life? … Or, to the contrary, does summative assessment allow itself to be conquered by the student, who takes up a positive, even belligerent stance towards it, determined to extract every human possibility that it affords?” (Boud & Falchikov (2007) Rethinking Assessment in Higher Education: Learning for the Longer Term)

The first keynote was a critique of Assessment for Learning (AfL). Not that assessment should not help students learn. Of course it should. Rather, the speaker questioned some of the specific recommendations from the AfL literature in a thought-provoking way.

The 'couple of other things' were a talk from Jill Barber of the School of Pharmacy at Birmingham, about giving students quite detailed feedback after their end of year exams; and Sally Jordan's talk (which I did not go to, since I have heard it internally at the OU) about the OU Science faculty's semantic wranglings over whether all their assessment gets called "summative" or "formative", and hence how the marks for the separate assignments are added up, without changing what the assessed tasks actually are.

B. Do students actually attend to feedback?

The second main theme came out many times. On the one hand, students say they like feedback and demand more of it. On the other hand, there is quite a lot of evidence that many students don’t spend much time reading it, or that when they do, it does not necessarily help them to improve. So, there were various approaches suggested for getting students to engage more with feedback, for example by

  • giving feedback via a screen-cast video, talking them through their essay, highlighting with the mouse (David Wright & Damian Kell, Manchester Metropolitan University). Would students spend 19 minutes reading and digesting written feedback on an essay? Well, they got a 19-minute (on average) video - one of the few cases where some students thought it was too much feedback!
  • making feedback a dialogue. That is, encouraging students to write questions on the cover sheet when they hand the work in, for their tutor to answer as part of the feedback. That was what Rebecca Westrup from the University of East Anglia was doing.
  • Stefanie Sinclair from the OU religious studies department talked about work she had done with John Butcher & Anactoria Clarke assessing reflection in an access module (a module designed to help students with limited prior education to develop the skills they need to study at Level 1). Again, this was to encourage students to engage in a dialogue with their tutor about their learning.
  • Using peer and self assessment, so that students spend more time engaging with the assessment criteria by applying them to their own and others' work. Also, Maddalena Taras suggested that you initially give students' work back without the marks or feedback (though after a couple of weeks of marking), so that they read it again with fresh eyes before they get the feedback first, then the marks.
  • There was another peer assessment talk, by Blazenka Divjak of the University of Zagreb, using the Moodle Workshop tool. The results were along the same lines as other similar talks I have seen (for example at the OU, where we are also experimenting with the same tool). Peer assessment activities do help students understand the assessment criteria. They also help them appreciate what teachers do. Students' grading of their peers, particularly in aggregate, is reliable, and comparable to the teacher's grade.
  • A case of automated marking (in this case of programming exercises) where students clearly did engage with the feedback because they were allowed to submit repeatedly until they got it right. In computer programming this is authentic. It is what I do when doing Moodle development. (Stephen Nutbrown, Su Beesley, Colin Higgins, University of Nottingham and Nottingham Trent University.)
  • It was also something Sally touched on in her part of our talk. With the OU's computer-marked questions with multiple tries, students say the feedback helps them learn and that they like it. However, if you look at the data or usability lab observations, you see that in some cases some students are clearly not paying attention to the feedback they get.

C. The extent to which transparency in assessment is desirable

This was the main theme of the closing keynote by Jo-Anne Baird from the Oxford University Centre for Educational Assessment. The proposition is that if assessment is not transparent enough, it is unfair because students don’t really understand what is expected of them. A lot of university assessment is probably towards this end of the spectrum.

Conversely, if assessment is too transparent it encourages pathological teaching to the test. This is probably where most school assessment is right now, and it is exacerbated by the excessive ways school exams are made high stakes for the student, the teacher and the school. Too much transparency (and risk aversion) in setting assessment can lead to exams that are too predictable, so students can get a good mark by studying just those things that are likely to be on the exam. This damages validity, and more importantly damages education.

Between these extremes there is a desirable balance where students are given enough information about what is required of them to enable them to develop as knowledgeable and independent learners, without causing pathological behaviour. That, at least, is the hope.

While this was the focus of the last keynote, it resonated with several of the talks I listed in the previous section.

D. The NSS & other acronyms

The National Student Survey (NSS) is clearly a driver for change initiatives at a lot of other universities (as it was two years ago). It is, or at least is perceived to be, a big deal. Therefore it can be used as a catalyst or lever to get people to review and change their assessment practices, since feedback and assessment are things that students often give low ratings for. This struck me as odd, since I am not aware of this happening at the OU. I assume that is because the OU has so far scored highly in the NSS.

The other acronym floating around a lot was TESTA. This seems to be a framework for reviewing the assessment practice of a whole department or degree programme. In one case, however (a talk by Jessica Evans & Simon Bromley of the OU faculty of Social Science), the review was done before TESTA was invented, though along similar lines.

Finally

A big thank-you to Sue Bloxham and the rest of the organising team for putting together a great conference. Roll on 2017.

Friday, May 1, 2015

eSTEeM conference 2015

eSTEeM is an organising group within the Open University which brings together people doing research into teaching and learning in the STEM disciplines: Science, Technology, Engineering and Maths. Naturally enough for the OU, a lot of that work revolves around educational technology. They have an annual conference for people to share what they have been doing. I went along because I like to see what people have been doing with our VLE, and hence how we could make it work better for students and staff in the future.

It started promisingly enough, in a way. As I walked in to get my cup of coffee after registration, I was immediately grabbed by Elaine Moore from Chemistry, who had two Moodle Quiz issues. She wanted the Combined question type to use the HTML editor for multiple-choice choices (a good idea, we should put that on the backlog), and she had a problem with a Pattern-match question which we could not get to the bottom of over coffee.

But, on to the conference itself. I cannot possibly cover all the keynotes and parallel sessions so I will pick the highlights for me.

Assessment matters to students

The first highlight was a graph from Linda Price's keynote. Like most universities, at the end of every module we give students a satisfaction survey. The graph showed students' ratings in response to three of the questions:

  • Overall, I am satisfied with the quality of this module.
  • I had a clear understanding of what was required to complete the assessed activities.
  • The assessment activities supported my learning.

There was an extremely strong correlation between those. This is nothing very new. We know that assessment is important in determining the 'hidden curriculum', and hence we like to think that 'authentic assessment' is important. However, it is interesting to see how much this matters to students. Previously, I would not even have been sure that they could tell the difference.

The purpose of education

Into the parallel sessions. There was an interesting talk from the module team for TU100 my digital life, the first course in the computing and technology degrees. Some of the things they do in that module's teaching are based around the importance of language, even in science. Learning a subject can be thought of as learning to construct the world of that subject through language, or as they put it, humanities-style thinking in technology education. Unsurprisingly, many students don't like that: "I came to learn computing, not writing." However, there is a strong correlation between students' language use and their performance in assessments. By the end of the module some students do come to appreciate what the module is trying to do.

This talk triggered a link back to another part of Linda Price's keynote. An important (if now rather clichéd) question for formal education is "What is education for when everything is now available on the web?" (or one might put that more crudely as "Why should students pay thousands of pounds for one of our degrees?"). The answer that came to me during this talk was "To make them do things they don't enjoy, because we know it will do them good." OK, so that is a joke, but I would like to think there is a nugget of truth in there.

Peer assessment

On to more specifically Moodle-related things. A number of modules have been trying out Moodle's Workshop activity. That is a tool for peer review or peer assessment. The talk was from the SD815 Contemporary issues in brain and behaviour module team. Their activity involved students recording a presentation (PowerPoint + audio) that critically evaluated a research article. Then they had to upload these to the Moodle Workshop, and review each other's presentations, as managed by the tool. Finally, they had to take their slide-cast, the feedback they had received, and a reflective note on the process and what they had learned from it, and hand it all in to be graded by their tutor.

Now for OU students (at least) collaborative activities, particularly those tied to assessments, are typically another thing we make them do that they don’t enjoy. This activity added the complexities of PowerPoint and/or Open Office and recording audio. However, it seems to have worked remarkably well. Students appreciated all the things that are normally said about peer review: getting to see other approaches to the same task; practising the skills of evaluating others’ work and giving constructive feedback. In this case the task was one that the students (healthcare workers studying at postgraduate level) could see was relevant to their vocation, which brings us back to visibly authentic assessment, and the student satisfaction graph from the opening keynote.

For me the strongest message from this talk, however, is what was not said. There was very little said about the Moodle workshop tool, beyond a few screen-grabs to show what it looked like. It seems that this is a tool that does what you need it to do without getting in the way, which is normally what you want from educational technology.

Skipping briefly over

There are many more interesting things I could write about in detail, but to keep this post to a reasonable length I will just skim over the posters at lunch.

And some of the other talks:

  • a session on learning analytics, in this case with a neural net, to try to identify early on those students (on TU100 again) who get through all the continuous assessment tasks with a passing grade, only to fail the end of module assessment, so that they could be targeted for extra support.
  • a whole morning on the second day, where we saw nine different approaches to remote experiments from around the world, for example the Open University's remote control telescope PIRATE. It left me with the impression that this sort of thing is much more feasible and worthwhile than I had previously thought.

Our session on online Quizzes

The only other session I will talk about in detail is the one I helped run. It was a ‘structured discussion’ about the OU’s use of iCMAs (which is what we call Moodle quizzes). I found this surprisingly nerve-wracking. I have given plenty of talks before, and you prepare them. You know what you are going to say, and you are fairly sure it is interesting. Therefore you are pretty sure what is going to happen. For this session, we just had three questions, and it was really up to the attendees how well it worked.

We did allow ourselves two five-minute presentations. We started with Frances Chetwynd showing some of the different ways quizzes are used in modules' teaching and assessment strategies. This set up a 10-minute discussion of our first question: "How can iCMAs best be used as part of an assessment strategy?". For this, delegates were seated around four tables, with four or five participants and a facilitator at each table. The tables were covered with flip-chart paper for people to write on.

We were using a World Café format, so after 10 minutes I rang my bell, and all the delegates moved to a new table while the facilitators stayed put. Then, in new groups, they discussed the second question: "How can we engage students using iCMAs?" The facilitators were meant to make a brief bridge between what had been said by the previous group at their table, before moving on to the new question with the new group.

After 10 minutes on the second question, we had the other five-minute talk from Sally Jordan, showing some examples of what we have previously learned through scholarship into how iCMAs work in practice. (If you are interested in that, come to my talk at either MoodleMoot IE UK 2015 or iMoot 2015). This led nicely, after one more round of musical chairs, to the third question: "Where next for iCMAs? Where next for iCMA scholarship?". Finally we wrapped up with a brief plenary to capture the key answers to that last question from each table.

By the end, I really had no idea how well it had gone, although each time I rang my bell, I felt I was interrupting really good conversations. Subsequently, I have written up the notes from each table, and heard from some of the attendees that they found it useful and interesting, so that is a relief. We had a great team of facilitators (Frances, Jon, Ingrid, Anna), which helped. I would certainly consider using the same format again. With a traditional presentation, you are always left with the worry that perhaps you got more out of preparing and delivering the presentation than any of the audience did out of listening. In this case, I am sure the audience got much more out of it than me, which is no bad thing.

Tuesday, December 9, 2014

Is learning design like UML?

A couple of weeks ago, I attended the #design4learning conference, which was conveniently on my doorstep at the Open University. Jenny Gray has already written her summary of the conference (and she thought she was a bit late writing it up!)

I would like to highlight the point the organisers made with the conference name. Calling learning design "learning design" is a misnomer. You cannot design learning. Learning is something that goes on inside the student's head, perhaps most effectively under the support and guidance of a teacher. Therefore, you can only "design for learning", whatever it is you are designing: a course, an activity, a learning community, …. I think this is more than just semantic pedantry. We should all remember this, particularly when thinking about educational technology. There is no magic bullet that guarantees learning will occur. Just things that are more or less likely to encourage students to learn. (Having said this, I am going to just write "learning design" in the rest of this post, since it is so much easier!)

The main thought I wanted to share here is, however, something else. After two interesting days at a conference all about learning design, I cannot recall a single diagram shown by any speaker where I thought, "that is a graphical representation of the design of a bit of learning." Was I right to expect to see that? I don't know, but I have seen other presentations about tools like CompendiumLD in the past, so I know it can be done. Pondering this as I cycled home, I got to thinking about the type of design I do know about: the design of software, and thought of an interesting comparison.

Software developers have a well-established way to draw the design of their software, called UML (better description on Wikipedia). Let me say immediately that I am not trying to suggest UML as a way to represent learning designs. Rather, I think it is interesting to think about how developers do (or more often don't) use UML to help their work. Can that tell us anything about how and whether teachers might engage with learning design?

There are two different ways to use UML. There is the quick-and-dirty, back-of-the-envelope way, where you draw a diagram of part of the system to help explain or communicate a particular aspect of your design. This is the way I use UML, as can be seen, for example, in this documentation I wrote. You include the details that are relevant to making your point, and leave out anything that does not help.

The other way to use UML is much more elaborate. It is called "Model Driven Architecture", which I studied as part of OU module M885. In MDA, you try to draw complete diagrams of the design of your system using a very precise dialect of UML, dotting all the 'i's and crossing all the 't's. Then, using a software tool (that you probably had to buy at great expense) you press the magic button, and it creates all your classes and interfaces for you. Then you just need to fill in all the implementations. At least, that is the promise. As I say, I studied this as part of a postgraduate computing course. It was of some academic interest, but I have never seen anyone write software this way (though some people do, if the references in the course are to be believed). I expect more people have bought expensive MDA tools than have actually used them. In a previous generation, the same was true of CASE tools that also failed to live up to their promises.

So what, if anything, can this tell us about learning design? Well, I can see exactly the same split happening. There will be hype about magic systems where you input your learning outcomes, and sketch your learning design, press a magic button, and hey, presto, there is your Moodle course. It won't work outside of research labs, but some vendors will try to commercialise it, and some institutions will fall for it and end up with expensive white elephants.

On the other hand, it would be good to see a common notation emerge to represent learning designs. This would help teachers communicate with each other, and perhaps with students, about how their teaching is supposed to work. A good feature of UML is that it is really very natural. Most developers can understand most of a UML diagram without having to be taught a lot of rules. There are several types of diagram to represent different things, but they are the kinds of things people drew anyway before UML was invented. The creators of UML just picked one particular way of drawing each sort of diagram, and endorsed it, in an attempt to get everyone talking (drawing) a common language. If you want to draw highly detailed UML diagrams, you need to learn a lot of rules, but you can get a long way just by copying what you see other people do, which is a sign of an effective language. It would be nice to see such a language for communicating about learning.

Monday, September 29, 2014

What makes something a horrible hack?

Over in Moodle bug MDL-42974, Derek Chirnside asked "What is it about a hack that makes it 'horrible'??". I had described the code sam wrote to fix that issue in those terms, while at the same time submitting it for integration. It was a fair enough comment. I had helped sam create the code, and it was the kind of code you only write to make things work in Internet Explorer 8.

Although "Horrible hack" is clearly an aesthetic judgement, and therefore rather subjective, I think I can give a definition. However, it is easier to start by defining the opposite term. What is "Good code"? Good code should have properties like this:

  1. It works: It does what it is supposed to.
  2. It is readable: Another developer can read it and see what it is supposed to do.
  3. It is logical: It does what it is supposed to do in a way that makes sense. It is not just that a developer can puzzle out what it does, but it is clear that it does just that and nothing else.
  4. It is succinct: It does not use more code than necessary to do the job. This is a companion to point 3).
  5. It is maintainable: It is clear that the code will go on working in the future, or, if circumstances do change, it is clear how the code could be modified to adapt to them.

Note that property 1) is really just a starting point. It is not enough on its own.
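
To make points 3 and 4 a bit more concrete, here is a purely illustrative PHP fragment, made up for this post and not taken from Moodle. Both versions work, but only the second is logical and succinct:

    <?php
    // Convoluted: it works, but a reader has to puzzle out the intent.
    function is_overdue_verbose($duedate, $now) {
        $result = false;
        if ($duedate != 0) {
            if ($now > $duedate) {
                $result = true;
            } else {
                $result = false;
            }
        }
        return $result;
    }

    // Logical and succinct: the intent is visible at a glance.
    function is_overdue($duedate, $now) {
        return $duedate != 0 && $now > $duedate;
    }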

A horrible hack is code that manages little more than property 1. I think sam's patch on MDL-42974 scores a full set:

  1. It is not at all obvious what the added code is for. Sam tried to mitigate that by adding a long comment to explain, but that is just a workaround for the hackiness of the code.
  2. There is no logical reason why the given change makes things work in IE <= 8. We were just fiddling around in Firebug to try to find out how IE was going wrong. Changing the display property on one div appeared to solve the display problem, so we turned that into code. We still don't really understand why. Another sign of the illogicality is the two setTimeout calls. Why do we need those two delays to make it work? No idea, but they are necessary.
  3. The whole chunk of added code should be unnecessary. Without the addition, it works in any other browser. We are adding some code that should be redundant just to make things work in IE.
  4. We don't understand why this code works, so we cannot understand if it will go on working. In this case, lack of maintainability is not too serious. The code only executes on IE8 or below. In due course we know we can just delete it.

Normally, you would wish to avoid code like this, but in this case it is OK because:

  • The hack is confined in one small area of the code.
  • There is a comment to explain what is going on.
  • It is clear that we will be able to remove this code in future, once usage of IE8 has dropped to a low enough level.

At least, we hope that the Moodle integration team agree that this code is acceptable for now. Otherwise, we wasted our time writing it.

Friday, April 25, 2014

Load-testing Moodle 2.6.2 at the OU

At the start of June we will upgrade the OU Moodle sites to Moodle 2.6. Before then we need to know that it will still perform well when subjected to our typical peak load of 100,000 page-views per hour. This time, I got 'volunteered' to do the testing.

The testing servers

To do the testing, we have a set of 10 servers that are roughly similar to our live servers. That is, six web servers for handling normal requests, and one web server that handles 'administrative' requests: any URL starting /admin, /report or /backup. Those pages are often big, long-running processes rather than quick page views, so it is better to put them on a different server that is tuned differently. There is one 'web' server just for running the cron batch processes. Finally, we have a database server and a file server.

In order to be able to make easy comparisons, we make two copies of our live site on these servers. That is, we have two different www_root folders, which correspond to the different URLs lrn2-perf-cur and lrn2-perf-upg. In due course we will upgrade one of the copies to the new release while leaving the other running the current version of the code. This makes it easy to switch back and forth when comparing the two.
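
To illustrate what differs between the two copies (the host names, paths and credentials below are made up, not our real ones), each copy has its own Moodle config.php, and everything else is shared:

    <?php  // config.php for the lrn2-perf-cur copy; the -upg copy differs only in these values.
    unset($CFG);
    global $CFG;
    $CFG = new stdClass();

    $CFG->dbtype   = 'pgsql';
    $CFG->dbhost   = 'perf-db';        // Hypothetical database server name.
    $CFG->dbname   = 'lrn2_perf_cur';  // Hypothetical; the -upg copy has its own database.
    $CFG->dbuser   = 'moodle';
    $CFG->dbpass   = '...';
    $CFG->prefix   = 'mdl_';

    $CFG->wwwroot  = 'https://lrn2-perf-cur.example.open.ac.uk';  // Hypothetical URL.
    $CFG->dataroot = '/data/lrn2-perf-cur';                       // Hypothetical path.

    require_once(dirname(__FILE__) . '/lib/setup.php');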

In addition to the servers running Moodle, we have 6 virtual machines to generate the simulated load.

The testing procedure

We test using JMeter. In order to test Moodle, you need to send lots of requests for different pages, many of which include numeric ids in the URLs. Therefore, the JMeter script needs to be written specifically for the site being tested. Fortunately, our former colleague James Brisland made a script that automatically generates the necessary JMeter script. We shared that script with the community, and you can find a copy here. However, we shared it a long time ago, and since then our version has probably diverged from the community version a bit. Oops!

I say this tool automatically generates the necessary JMeter script, but sadly that is an oversimplification. It fails in certain cases, for example if a forum is set to separate groups mode. So, having generated the JMeter script, you need to run it and check that it actually works. If not, you have to go into the courses and activities being tested and modify the settings. We really ought to automate that, but no one has had the time. Anyway, eventually (and this took ages) you have a working test script.

Tuning the test script

Once the test script works, in that it simulates users performing various actions without error, one at a time, you then have to start running it at high load. That is, simulating lots of users doing lots of things simultaneously. After it has settled down, you let it run for 15 or 20 minutes, and then look at what sort of load you are generating. The goal is to get about the same number of requests per second for each type of page (course view, forum view, post to forum, view resource, ...) in the test run as in real use on the live system. If not, you tweak the time delays or the number of threads, and then run again. It took about four runs to get to a simulated load that was close to (actually slightly higher than) the target request rates we had taken from the live server logs.
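
As a rough illustration of the arithmetic behind that tuning (the think time and response time below are assumptions for the example, not our measured values), Little's law gives a starting point for the number of JMeter threads:

    <?php
    // Back-of-the-envelope sizing for the JMeter thread count. Illustrative only:
    // the target comes from the post above, the other numbers are assumptions.
    $targetpageviewsperhour  = 100000;                          // Our typical peak load.
    $targetrequestspersecond = $targetpageviewsperhour / 3600;  // About 27.8 requests/second.

    $thinktime    = 15; // Seconds a simulated user waits between page requests (assumed).
    $responsetime = 1;  // Rough average server response time in seconds (assumed).

    // Little's law: concurrency = throughput x time per request cycle.
    $threads = (int) ceil($targetrequestspersecond * ($thinktime + $responsetime));

    echo 'About ' . $threads . ' threads are needed to generate ' .
            round($targetrequestspersecond, 1) . " requests per second.\n";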

All that creation and tuning of the test script is done on the lrn2-perf-cur copy of the site. Once that is OK, you run the same script against lrn2-perf-upg. That should give exactly the same performance, and before proceeding we want to verify that is the case. It turned out at first that it was slightly different: I had to find the few admin settings that were different between the two servers. Once the configuration was the same, the performance was the same, and we were finally in a position to start comparing the old and new systems.

Upgrade to the new version of Moodle

The next step is to upgrade lrn2-perf-upg to the new code. This code is still work-in-progress. Final testing of the code before release happens next month, but we try to keep our code in a releasable state, so it should be OK for load-testing. However, this is the first time we have run the upgrade on a copy of all our data. Unsurprisingly, we found some bugs. Fortunately they were easily fixed, and better to find them now than later.

Also, a new version of Moodle comes with a lot of new configuration options. This is the moment to consider what we should set them to. Luckily, most of the default values were right, so there was not a lot to do. Moodle prompts you for most of the new settings you need to make as part of the upgrade. However, it does not prompt you to configure any new caches, so you have to remember to go and do that.

Compare performance

At long last (about one and a half weeks into the process) you are finally ready to run the test you want. How does 2.6 performance compare to 2.5? Here is a screen-grab of today's testing:

Good news: Moodle 2.6 is mostly a bit faster (5-10%) than Moodle 2.5. Bad news: every 15 minutes, it suddenly goes slow for about 15 seconds. What?!

Problem solving

Actually, there is a logical explanation. We have cron set to run every 15 minutes, so surely the problem is caused by cron, right? No. Wrong! We stopped cron running, and the spikes remained. We tried various things to see what it might be, and could not make any sense of it. One thing we discovered was that the spikes were about as large as the spikes you get by clicking the 'Purge all caches' button. OK, so something is purging caches, but what?

To cut a long story short, you need to remember that our two test sites lrn2-perf-cur and lrn2-perf-upg are sharing the same servers. Therefore they are sharing the same memcache storage. It appears that something in cron in Moodle 2.5 purges at least some of the caches. When we stopped cron on our Moodle 2.5 site the spikes went away on our 2.6 site. I am afraid we did not try to work out why Moodle 2.5 cron was purging caches, but there is probably a bug there. It turns out that purge caches does not cause a measurable slow-down in Moodle 2.5, at least not for us, which is worth knowing.

Why does Purge caches cause a slow-down in 2.6 but not in 2.5? I am pretty sure the reason is MDL-41436. When things slowed down, it was the course page that slowed down the most, and that is the one most dependent on the modinfo cache.

Summary

  • Moodle 2.6 is about 5-10% faster than 2.5, at least on our servers, which are RHEL5 + Postgres + memcache cluster store. (MDL-42071 - why has that not been integrated yet?)
  • In Moodle 2.5, doing Purge caches when your system is running at high load seems to cause remarkably little slow-down.
  • In Moodle 2.6, doing Purge caches does slow things down a lot, but only very briefly. Performance recovered within about 15 seconds in our test, but then the test was only using a few courses.
  • In Moodle 2.6, clicking Clear theme caches (at the top of Admin -> Appearance -> Themes -> Theme selector) causes no noticeable slow-down.

The bit about what happens when you clear the caches is important because sometimes, when you patch the system with a bug fix, you need to purge one or more caches to make the fix take effect. In the past, we did not know what effect that had. We were cautious and had people waiting up until after midnight to click the button at a time of low system load. It turns out now that is probably not necessary. We can clear caches during the working day, when staff are in the office to pick up the pieces if anything does go wrong.
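
For what it is worth, the two buttons mentioned above just call core functions, so they can also be run from a small CLI script. This is only a sketch of my own, not one of our standard scripts, and the path local/perftest/cli/purge.php is hypothetical:

    <?php
    // Hypothetical helper: purge caches from the command line instead of the admin UI.
    define('CLI_SCRIPT', true);
    require(dirname(__FILE__) . '/../../../config.php');

    purge_all_caches();       // What the 'Purge all caches' button does.
    theme_reset_all_caches(); // What 'Clear theme caches' does.

    mtrace('Caches purged at ' . date('H:i:s'));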

Wednesday, April 23, 2014

The four types of thing a Moodle developer needs to know

In order to write code for Moodle, there is an awful lot you need to know. Quite how much was driven home to me when I taught a Moodle developers' workshop at the German MoodleMaharaMoot in February. When preparing for that workshop, I thought you could group that knowledge into three categories, but I have since added a fourth to my thinking.

1. The different types of Moodle plugin

The normal way you add functionality to Moodle is to create a plug-in. There are many different types of plug-in, depending on what you want to add (for example, a report, an activity module or a question type). Therefore, the first thing to learn is what the different types of plug-in are and when you should use them. Then, once you know which type of plug-in to create, you need to know how to make that sort of plug-in. For example, what exactly do you need to do to create a new question type?
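
For instance, every plug-in, whatever its type, starts with a version.php declaring what it is. Here is a minimal sketch for an imaginary question type (the name qtype_myexample is made up for illustration):

    <?php
    // question/type/myexample/version.php - hypothetical example plug-in.
    defined('MOODLE_INTERNAL') || die();

    $plugin->component = 'qtype_myexample'; // Plug-in type prefix + folder name.
    $plugin->version   = 2014042300;        // This plug-in's version: YYYYMMDDXX.
    $plugin->requires  = 2013111800;        // Needs at least Moodle 2.6.
    $plugin->maturity  = MATURITY_ALPHA;
    $plugin->release   = '0.1';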

2. How to make Moodle code do things

Irrespective of what sort of plug-in you are creating, you also need to know how to make your code do certain things. Moodle is written in PHP, so generic PHP skills are a prerequisite, but Moodle has its own libraries for many common tasks, like getting input from the user, loading and saving data from the database, displaying output, and so on. A developer needs to know many of these APIs.
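
To give a flavour, here is a sketch of a simple page script that pulls several of those APIs together (the plug-in name local_myexample and its language strings are made up; the API calls themselves are the real ones):

    <?php
    // local/myexample/view.php - hypothetical page showing some core Moodle APIs.
    require_once(dirname(__FILE__) . '/../../config.php');

    $courseid = required_param('courseid', PARAM_INT);  // Clean input from the user.
    $course = $DB->get_record('course', array('id' => $courseid), '*', MUST_EXIST); // Database API.

    require_login($course);                              // Access control.

    $PAGE->set_url(new moodle_url('/local/myexample/view.php', array('courseid' => $courseid)));
    $PAGE->set_title(format_string($course->fullname));

    echo $OUTPUT->header();                              // Output API.
    echo $OUTPUT->heading(get_string('hello', 'local_myexample')); // Language strings API.
    echo $OUTPUT->footer();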

3. How to get things done in the Moodle community

If you just want to write Moodle code for your own use, then the two types of know-how above are enough, but if you want to take full advantage of Moodle's open source nature, then you need to learn how to interact with the rest of the Moodle development community. For example, how to ask for help, how to report a bug, how to submit the changes for a bug you have fixed or a feature you have implemented, how to get another developer to review your proposed code change, how to update your customised Moodle site using git, and so on.

4. Something about education

Those three points were what I thought of when trying to work out what I needed to teach during the developer workshop I ran. Since then, while listening to one of the presentations at the UK MoodleMoot as it happens, I realised that there was a fourth category of knowledge required to be a good Moodle developer. It matters that we are making software to help people teach and learn. I am struggling to think of specific concepts here, with URLs for where you can learn about them, as I gave in the previous sections, but there is a whole body of knowledge about what makes for effective learning and teaching, and it is useful to have some appreciation of that. You also need some awareness of how educational institutions operate. If you hang around the Moodle community for any length of time, you will also discover that the educational culture is different in different countries. For example, in the southern hemisphere the long summer holiday is also the Christmas holiday, and in America they expect to award grades of more than 100%.

Summary

Does this subdivision into categories actually help you learn to be a Moodle developer? I am not sure, but it was certainly useful when planning my workshop. The workshop was structured around creating three plugins on the first day: a Filter, a Block and then a Local plug-in. However, those exercises were structured so that, while moving through different types of category-one knowledge, we also covered key topics from categories two and three in a sensible order. So it helped me, and I thought it was an interesting enough thought to share.