In my last blog post I promised more details of the new question engine "in a week or so". Unfortunately, things like work (mainly fixing minor bugs in the aforementioned question engine); rehearsing for a rather good concert that will take place this Friday; and buying curtains for my new flat, have been rather getting in the way. Now is the time to remedy that.
Last time I explained roughly what a question engine was, and that I had made big changes to the one in Moodle. I now want to say more about what the question engine has to do, and how the new one does it.
The key processes
There are three key code-paths:
To display a page of the quiz:
- *Load the outline data about the student's attempt.*
- *Hence work out which questions are on this page.*
- Load the details of this student's attempt at those questions within this quiz attempt.
- *Get the display settings to use (should the marks, feedback, and so on be visible to this user).*
- Taking note of the state of each question (from Step 3), update the display options. For example, if the student has not answered the question yet, we don't want to display the feedback now, even if we will later.
- Using the details of the current state of each question, and the relevant display options, output the HTML for the question.
The bits in italic are the bits done by the quiz. The rest is done by the question engine.
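To make that more concrete, here is a toy sketch of the display code-path in PHP. I should stress that all the class names, function names and states here are my own inventions for this post, drastically simplified, and not the real question engine API:

```php
<?php
// A toy sketch of displaying one page. NOT the real Moodle code.

class display_options {
    public $feedback = true;
    public $marks = true;
}

class question_attempt {
    private $state;
    private $html;

    public function __construct($state, $html) {
        $this->state = $state;
        $this->html = $html;
    }

    // Step 5: tweak the options according to this question's state, e.g.
    // never show feedback for a question the student has not answered yet.
    public function adjust_display_options(display_options $options) {
        if ($this->state === 'todo') {
            $options->feedback = false;
        }
    }

    // Step 6: output the HTML for the question.
    public function render(display_options $options) {
        return $this->html . ($options->feedback ? ' [feedback shown]' : '');
    }
}

// Steps 1, 2 and 4 (the quiz's job): the attempt outline, the page layout
// and the display settings, all faked here.
$questionsonpage = [
    new question_attempt('todo', '<p>Question 1</p>'),
    new question_attempt('complete', '<p>Question 2</p>'),
];
$options = new display_options();

// Steps 3, 5 and 6 (the question engine's job).
foreach ($questionsonpage as $qa) {
    $qaoptions = clone $options;
    $qa->adjust_display_options($qaoptions);
    echo $qa->render($qaoptions), "\n";
}
```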
To start a new attempt at the quiz (sketched in code after the list):
- *From the list of questions in the quiz, work out the layout for this attempt. (This is only really interesting if you are shuffling the order of the questions, or selecting questions randomly.)*
- Create an initial state for each question in the quiz attempt. (This is when things like the order of multiple choice options are randomised.)
- Write the initial states to the database.
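Again, a toy PHP version of those three steps, with invented names, and with the database write reduced to printing the states out:

```php
<?php
// Toy sketch of starting an attempt. Invented names, not the real code.

$questionids = [11, 12, 13, 14];

// Step 1 (quiz): work out the layout, shuffling if the settings say so.
$layout = $questionids;
shuffle($layout);

// Step 2 (question engine): create an initial state for each question.
// For a multiple-choice question, this is where the choice order is fixed.
$initialstates = [];
foreach ($layout as $slot => $questionid) {
    $choiceorder = [0, 1, 2, 3];
    shuffle($choiceorder);
    $initialstates[$slot] = [
        'questionid'  => $questionid,
        'state'       => 'todo',
        'choiceorder' => $choiceorder,
    ];
}

// Step 3 (question engine): write the initial states to the database,
// reduced here to printing them out.
echo json_encode($initialstates), "\n";
```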
To process a student's responses to a page of the quiz (again, sketched below):
- *From the submitted data, work out which attempt and questions are affected.*
- Load the details of the current state of those question attempts.
- Sort all the submitted data into the bits belonging to each question. (The bits of data have names like 'q188:1_answer'. The prefix before the '_' identifies this as data belonging to the first question in attempt 188, and the bit after the '_' identifies this as the answer to that question.)
- For each question, process its data to work out whether the state has changed and, if so, what the new state is. This is really the most important procedure, and I will talk more about it in the next section.
- Write any updated states to the database.
- *Update the overall score for the attempt, if appropriate, and store it in the database.*
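The data-sorting step is perhaps the easiest to show in code. Here is a sketch of how names like 'q188:1_answer' can be split up. This illustrates the naming scheme only, not the real parsing code, which is more careful; I am also assuming all the data belongs to one attempt:

```php
<?php
// Toy sketch of step 3: sorting submitted data into per-question bundles
// using the 'q{attempt}:{slot}_{field}' naming scheme described above.

$submitted = [
    'q188:1_answer'  => 'B',
    'q188:2_answer'  => 'Paris',
    'q188:2_comment' => 'Fairly sure about this one.',
];

$byquestion = [];
foreach ($submitted as $name => $value) {
    // Split 'q188:1_answer' at the first '_' into 'q188:1' and 'answer'.
    list($prefix, $field) = explode('_', $name, 2);
    // Then split the prefix into the attempt id and the question number.
    list($attemptid, $slot) = explode(':', substr($prefix, 1));
    $byquestion[$slot][$field] = $value;
}

// Each question's data can now be handed to the right behaviour and
// question type to work out whether its state has changed (step 4).
foreach ($byquestion as $slot => $data) {
    echo "Question $slot: ", json_encode($data), "\n";
}
```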
These outlines are, of course, oversimplifications. If you really want to know what happens, you will have to read the code.
There are other processes which I will not cover in detail. These include finishing a quiz attempt, re-grading an attempt, fetching data in bulk for the quiz reports, and deleting attempts.
The most important procedure
This is the step where we take the data submitted by the student for one particular question and use it to update the state of that question.
Moodle has long had the concept of different question types. It can handle short-answer questions, multiple-choice questions, matching questions, and so on. Naturally, what happens when updating the state of the question depends on the question type. That is true for both the old and new code.
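In code terms, you can picture a question type as a class that knows how to grade its own sort of response. The two classes below are invented for illustration; the real question type API is, of course, much richer:

```php
<?php
// Toy illustration of question types grading responses differently.

interface question_type {
    /** Return a mark fraction between 0 and 1 for this response. */
    public function grade_response($response);
}

class shortanswer_qtype implements question_type {
    private $rightanswer;
    public function __construct($rightanswer) {
        $this->rightanswer = $rightanswer;
    }
    public function grade_response($response) {
        // Short answer: compare the text, ignoring case and surrounding spaces.
        return strcasecmp(trim($response), $this->rightanswer) === 0 ? 1.0 : 0.0;
    }
}

class multichoice_qtype implements question_type {
    private $rightchoice;
    public function __construct($rightchoice) {
        $this->rightchoice = $rightchoice;
    }
    public function grade_response($response) {
        // Multiple choice: compare the selected choice.
        return $response === $this->rightchoice ? 1.0 : 0.0;
    }
}

$sa = new shortanswer_qtype('Paris');
echo $sa->grade_response(' paris '), "\n"; // 1
```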
Now, however, there is a new concept in addition to question types: the concept of 'question behaviours'.
In previous versions of Moodle, there was a rather cryptic setting for the quiz: Adaptive mode, Yes/No. That affected what happened as the student attempted the quiz. When adaptive mode was off, the student would go through the quiz entering their response to each question. Those responses were saved. At the end, they would submit the quiz and everything would be marked, all at once. Then the student could review their attempt (either immediately, or later, depending on the quiz settings) to see their marks and the feedback. When adaptive mode was on, the student could submit each question individually during the attempt. If they were right first time, they got full marks. If they were wrong, they got some feedback and could try again for reduced marks.
The problem with the previous version of Moodle was the way this was implemented. There was a single process_responses function that was full of code like "if adaptive mode, do this, else do that". It was a real tangle. It was very common to change the code to fix a bug in adaptive mode (for example), only to find that you had broken non-adaptive mode. Another problem was the essay question type, which has to be graded manually by the teacher. It did not really follow either adaptive or non-adaptive mode, but was still processed by the same code. That led to bugs.
A very important realisation in the design of the new question engine was identifying this concept of a 'question behaviour' as something that could be isolated. There is now a behaviour called 'Deferred feedback' that works like the old non-adaptive mode; there is an 'Adaptive' behaviour; and there is a 'Manually graded' behaviour specially for the essay question type. Since these are now separate, you can alter one without risking breaking the others. Of course, the separate behaviours still share common functions like 'save responses' or 'grade responses'. We now also have a clean way to add new behaviours. I made a 'certainty-based marking' behaviour, and a behaviour called 'Interactive', which is a bit like the old Adaptive mode but modified to work exactly how the Open University wants.
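To give a flavour of that separation, here is a toy sketch of two behaviours as separate classes sharing a common base. The names, and the flat 0.1 penalty per try, are made up for this example; the real behaviour classes are rather more sophisticated:

```php
<?php
// Toy sketch of two behaviours as separate classes with shared code.

abstract class behaviour {
    protected $response = null;
    protected $fraction = null;
    protected $grader;

    public function __construct(callable $grader) {
        // Grading is still the question type's job; here it is a callback.
        $this->grader = $grader;
    }

    // Shared by all behaviours.
    public function save_response($response) {
        $this->response = $response;
    }

    // Each behaviour handles a submission in its own way.
    abstract public function process_submit($response);

    public function fraction() {
        return $this->fraction;
    }
}

class deferred_feedback extends behaviour {
    // Just save; grading happens when the whole attempt is submitted.
    public function process_submit($response) {
        $this->save_response($response);
    }
    public function finish() {
        $this->fraction = call_user_func($this->grader, $this->response);
    }
}

class adaptive extends behaviour {
    private $tries = 0;
    // Grade at once, with a penalty for each extra try.
    public function process_submit($response) {
        $this->save_response($response);
        $this->tries++;
        $mark = call_user_func($this->grader, $response);
        $this->fraction = max(0.0, $mark - 0.1 * ($this->tries - 1));
    }
}

$grader = function ($response) { return $response === 'B' ? 1.0 : 0.0; };
$b = new adaptive($grader);
$b->process_submit('C'); // wrong first time
$b->process_submit('B'); // right second time
echo $b->fraction(), "\n"; // 0.9
```

The point is that the adaptive logic lives entirely in its own class: you can rewrite it without touching deferred feedback at all.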
It takes two to tango, and three to process a question
In order to do anything, there now has to be a three-way dance between the core of the question engine, the behaviour and the question type. Does this just replace the old tangle with a new tangle (of feet)? Fortunately there is a consistent logic. The request arrives at the question engine. The question engine inspects it, and passes it on to the appropriate behaviour. The behaviour inspects it in more detail, to work out exactly what needs to be done. For example, is the student just saving a response, or are they submitting something for grading? The behaviour then asks the question type to do that specific thing. All this leads to a new state that is passed back to the question engine.
So, the flow of control is question engine -> behaviour -> question type except, critically, in one place. When we start a new attempt, we have to choose which behaviour to use for each question. At this point the question engine directly asks the question type to decide. Normally, the question type will just say "use whatever behaviour the quiz settings ask for", but certain question types, like the essay, can instead say "I don't care about the quiz settings, I demand the manual grading behaviour."
If you like software design patterns, you can think of this as a double application of the strategy pattern. The question engine uses a behaviour strategy, which uses a question type strategy (with the subtlety that the choice of behaviour strategy is made by the question type).
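Here is the shape of that double strategy in code, once more with invented names standing in for the real classes:

```php
<?php
// Toy sketch of the double strategy: engine -> behaviour -> question type,
// with the question type choosing the behaviour at the start of the attempt.

interface qtype {
    public function preferred_behaviour($quizdefault);
    public function grade($response);
}

class shortanswer implements qtype {
    public function preferred_behaviour($quizdefault) {
        return $quizdefault;      // happy with whatever the quiz asks for
    }
    public function grade($response) {
        return strcasecmp($response, 'Paris') === 0 ? 1.0 : 0.0;
    }
}

class essay implements qtype {
    public function preferred_behaviour($quizdefault) {
        return 'manualgraded';    // ignore the settings, demand manual grading
    }
    public function grade($response) {
        return null;              // a computer cannot grade an essay
    }
}

// A behaviour is a strategy that, in turn, uses a question type strategy.
class deferred_feedback_behaviour {
    private $qtype;
    public function __construct(qtype $qtype) {
        $this->qtype = $qtype;
    }
    public function finish($response) {
        return $this->qtype->grade($response); // behaviour -> question type
    }
}

// The engine asking each question type which behaviour to use.
$quizdefault = 'deferredfeedback';
foreach (['shortanswer' => new shortanswer(), 'essay' => new essay()] as $name => $q) {
    echo $name, ' uses ', $q->preferred_behaviour($quizdefault), "\n";
}
// shortanswer uses deferredfeedback
// essay uses manualgraded

$b = new deferred_feedback_behaviour(new shortanswer());
echo $b->finish('paris'), "\n"; // 1
```

Note how the behaviour never grades anything itself: it always delegates that to the question type, which is exactly the division of labour described above.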
Summary
So that is roughly how it works: a clear separation of responsibility between three separate components, each focussing on doing one aspect of the processing accurately. That makes the system highly extensible, robust and maintainable. Of course, everyone says that about the software they design, but based on my experiences over the last year, first of building all the parts of the system, and then of fixing the bugs that were found during testing, I say it with some confidence.