Not An Easier Week

I thought it was going to be an easier week. Buuut…MSc project grading (done!), three exams to write (whoa, those are hugely not done), class prep (Not Done sob), and a programming contest (somewhat advanced)…it’s a bit much.

Sometimes, I can harness this pressure, but usually closer to the wire. Which I want to avoid!

Oh well. Next week is the last week of period 1. Now, I have a class in period 2 to panic over. Yay.

Grading Postmortem

I just finished the follow-up to grading a programming/software engineering assignment with a mostly automated toolkit. The goal was “next class” turnaround. It wasn’t quite same day, but it was definitely within 20 hours for the bulk of people. Some of the problem was that Blackboard is terrible. Just terrible. It refused to upload marks and feedback for people who had had multiple submissions and then sent me into a hellscape trying to enter them manually. So there were some upload errors (2 people didn’t get any feedback and 1 had the wrong feedback due to a cut-and-paste fail). Out of 49 submissions, I had 17 people report a problem or request a regrade. Of those, 5 resulted in a change of mark, for a total of 19 marks added (the total possible was 10 per assignment, so 490 overall; 170 were originally given, thus roughly 10% of the “rightful” marks, 19 of 189, went missing and needed a manual update; one of these was due to a rogue extra submission after the deadline that was the wrong one to grade, costing 3 points, so 8.5 missing marks were due to grader bugs).

Now, the people with wrong marks generally got a “0”, often when it was obvious that they shouldn’t have. This was because their program would either crash in a new way or return a really unexpected result. In the latter case, since we try to parse the program output, we’d throw an unhandled exception on that odd output. In both scenarios, the unexpected behaviour would crash the grader before it wrote any feedback. Missing feedback was inferred to be an “upload” problem, so those students got a 0 and an unhelpful error message.
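
In retrospect, the fix is mostly boring defensive programming: wrap the run-and-parse step so that any crash or weird output still produces a score and some feedback, and flag those cases for manual review. Here’s a minimal sketch of that idea in Python; the function and hook names (grade_submission, parse_output, write_feedback) are invented for illustration, not our actual grader.

    # Hypothetical sketch, not our real grader: make sure feedback always gets
    # written, even when the student program or the output parser blows up.
    import subprocess

    def grade_submission(cmd, parse_output, write_feedback):
        """cmd runs the student program; parse_output and write_feedback are made-up hooks."""
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
            score, notes = parse_output(result.stdout)
        except Exception as exc:          # crash, timeout, or unparseable output
            score = 0
            notes = "Grader could not interpret your submission: %r" % (exc,)
        write_feedback(score, notes)      # always runs, so no silent zeros
        return score                      # zeros can then be queued for manual review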

These were stupidly hard bugs to track down! But they point to a couple of holes in our robustness and test isolation approach (we’re generally pretty good on that). In general, I’d like to review the 0s before uploading to confirm them, but the tight time frame was just too much. It was a tradeoff between the real anxiety, pain, and confusion some students would feel at getting an erroneous 0, and delaying feedback. It’d have been great if I could have turned around the corrections more quickly, but I have only so much time and energy. All students who filed an issue got a resolution by the subsequent Monday evening at the latest. So, two full days with correct feedback before the next assignment. Obviously, quicker is always better, but this isn’t unreasonable.

At least two people were misled by the feedback, which basically said “You are missing this file” when it should have said “You are missing at least one of this file or that directory.” Oops! That was more work for me than anything else.

In the lab that same day, the students did an over-the-shoulder code review of each other’s first assignment. I wish I had gathered stats on problems found. I told everyone who wanted to file an issue to send me an email after their code review discovered no problems and they had some simple test cases passing. In many of those cases, there were very obvious problems that a simple sanity test would have revealed, and oddities in the code which leapt out (to me).

I feel this justifies my decision not to return granular feedback or explicit tests. The program is very small and they have an oracle to test against (they are reverse engineering a small unix utility). The points awarded are few: 2 come from basically not messing up the submission format, and 1 comes from following the spec requirement to use tabs as an output separator.

But the goal of these assignments is to get people thinking about software engineering, not programming per se. They need to reflect on their testing and release processes and try to improve them. I had several students ask for detailed feedback so they would lose fewer marks on the next assignment, and that’s precisely what I don’t want to give. The learning I’m trying to invoke isn’t “getting this program to pass my tests” but “becoming better at software engineering, especially testing and problem solving and spec reading and…”.

It’s difficult, of course, for students to care about the real goals instead of the proxy rewards. That’s just being a person! All I can do is try to set up the proxy rewards and the rest of my teaching so as to promote the real goal as much as possible.

Giving students low marks drives a lot of anxiety and upset on my part. I hate it. I hate it because of their obvious suffering. I hate it because it can provoke angry reactions against me. I hate it because I love seeing people succeed.

But it seems necessary to achieve real learning. At least, I don’t see other ways that are as broadly effective.

Same Day Grading

Or near same day.

One reason I started adding MCQ quizzes to my coursework is the fast turnaround. This year I aimed to get all my Software Engineering coursework back by the end of the class where it was due. Part of that is handled by “automated” grading, but part was organizing my TAs to grade the short essays quickly.

They rose to the occasion. All 4 graded this set rather than the normal 2, so they had 12-15 essays each. They sat together for cross-validation. They used only a rubric (no manual comments) but were at the late lab for discussion.

The students had a morning lab where they tried to apply the rubric to their own essay and a partner’s.

This seems nigh perfect. They got a lot of help before the next essay. There was no “wasted” grading effort (e.g., ignored feedback). And the TAs are free! No grading hanging over their heads.

I’m happy! Wins all around.

All About A*

Amit Patel’s multi-part discussion of A* (especially in games) is a very nice read. Even if you just read the first page, you’ll get a clear picture of various pathfinding algorithms and their trade-offs. Some later bits aren’t quite as nice to follow (e.g., the trade-offs between terms in the heuristic could be fruitfully indicated visually), but overall, it’s great.

It’s part of a series teaching math and computer science via games. Not necessarily wildly original in concept (cf AI: A Modern Approach), but here execution is everything. Check out the discussion of pathfinding for a tower defence game.
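
If you want the flavour before clicking through, here’s a bare-bones A* on a 4-connected grid in Python. This is my own sketch, not Amit’s code, and the passable/heuristic hooks are placeholders.

    # A minimal A* sketch (mine, not Amit Patel's), on a 4-connected grid.
    import heapq

    def a_star(start, goal, passable, heuristic):
        """passable(cell) -> bool; heuristic(cell, goal) -> estimated remaining cost."""
        frontier = [(heuristic(start, goal), 0, start)]   # (priority, cost so far, cell)
        came_from = {start: None}
        cost_so_far = {start: 0}
        while frontier:
            _, cost, current = heapq.heappop(frontier)
            if current == goal:
                break
            x, y = current
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if not passable(nxt):
                    continue
                new_cost = cost + 1
                if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                    cost_so_far[nxt] = new_cost
                    came_from[nxt] = current
                    heapq.heappush(frontier, (new_cost + heuristic(nxt, goal), new_cost, nxt))
        return came_from, cost_so_far

    # Manhattan distance is the usual admissible heuristic on such a grid; weighting it
    # more heavily trades path optimality for speed, one of the trade-offs Amit visualises.
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])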

A Survey of Online Coding Tutorials

The paper, “A Pedagogical Analysis of Online Coding Tutorials”, provides an analytical review of a sample of online coding tutorials. One of my project students did something similar (they should have published!). The analytical framework is useful but not surprising: they have a set of types (interactive tutorial, MOOCs, web references, etc.) and “nine groups of 24 [analytical] dimensions” including content, organisation, and context. It all seems sensible, though I’m a bit leery. It seems almost too sensible. There’s no empirical work on actual effects (completion, satisfaction, and learning). It’s super tempting to think we can extrapolate from this beautiful set of features to those effects. Consider their key conclusion:

Our results suggest that most online coding tutorials are still immature and do not yet achieve many key principles in learning sciences. Future research and commercial development needs to better emphasize personalized support and precise, contextualized feedback and explore ways of explaining to learners why and when to use particular coding concepts. Based on our sampled tutorials, we recommend that teachers be very selective in their use of materials, focusing on the more evidence-based tutorials, particularly the educational games. All educational games in the list provide hierarchical structure, immediate feedback, and opportunities that learners actively write code and use subsequent knowledge for coding throughout the tutorial.

But they’re games. What kind of learners are responding to them? Are students responding to them? No coding game that I know of has bubbled up in the popular consciousness or trade press the way, say, Khan Academy or MOOCs in general have. That doesn’t mean they aren’t educationally superior, but it needs some explanation.

Overall, however, it seems like a solid, worthwhile paper and a good and necessary starting point. Someone needs to do this sort of work, and we need more of it.

It’s also the sort of work that needs a dynamic, ongoing database that’s kept up to date, with periodic snapshot papers. One-off papers get stale quickly! But it’s eminently replicable, so…have at it!

Worksheets to Support Active Learning

I do a lot of lecturing. I like to lecture and am good at it. It also is pretty much what’s expected. For MSc classes we have all-day classes, which kinda suck for lectures, though that’s mostly what we do. For my software engineering class, I break up the day with two labs, which helps, but I’m always on the lookout for ways to make the lectures more effective. This generally means trying to get the students to do active learning, that is, not to passively “receive” information…this is nearly worthless…but to anticipate, question, puzzle over, and generally be engaged with the material I’m presenting. Student response systems seem to help some with this, as does asking questions of the class (though this often fails). Our students are of wide-ranging abilities and backgrounds. For many, just working in English is a big strain.

One thing I’ve had what I think of as some success with is providing little, simple activities on sheets of paper, e.g., to draw various complexity curves. It’s a variant of clicker questions, but trying to get them to do something.

I came across a Software Carpentry blog post on Git worksheets which intrigued me. The idea there is to “draw along” with the instructor instead of doing some task. It seems a nice way to provoke and support the good sort of note-taking. I have a lot of “process style” slides which I could perhaps turn into worksheets, or pair with anticipatory worksheets. I draw a lot on the board spontaneously, but perhaps using the opaque projector with a worksheet might be more effective (though it would take a lot more prep).

Making Principled Unprincipled Choices

I like principled decision making. Indeed, few things inspired me as much as this quote from Leibniz:

if controversies were to arise, there would be no more need of disputation between two philosophers than between two calculators. For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate.

Alas, there’s no decision making situation where this vision holds, even in principle. But still, I like my decisions to conform to some articulable rationale, preferably in the form of some set of general rules.

But some of my rules are meta-rules which focus on resource use. Obviously, one goal of decision-making rules is to maximise the chances of making the “right” choice. But for any metric of rightness (let’s say, an appliance with the best value for money) there’s a cost in the effort to assure the maximum (e.g., research, testing, comparing…lots of shopping). That cost can be quite large and interact with subsequent satisfaction in a variety of ways. I’m prone to this and, indeed, often end up in decision paralysis.

In response to this, one of my meta-rules is “don’t over-sweat it”. So, for small stuff, this reduces to “don’t sweat the small stuff”. But, because of my anxiety structures, I tend to see certain classes of small stuff as big stuff. So, I dedicate some effort to seeing small stuff as small. Sometimes, this means making it invisible to me. Poor Zoe often has to make the actual purchase after I’ve done the research, or even make the decision itself. For various classes of minor, irrevocable, sub-optimal decisions, I prefer not to know about them. I will obsess, and that doesn’t help anyone.

When the decision is essentially arbitrary (because all choices are incommensurable in toto, or their value is unknowable at the moment), I try to make myself flip a coin (metaphorically, at least). What I try to avoid is building a fake rationale (except when that enables the choosing or makes me happier with the arbitrary choice).

Technical (or teaching) decisions are often best treated as arbitrary, but we have tons of incentives to treat them as requiring a ton of analysis to make the “right” choice. At the moment, I’m evaluating which Python testing framework to use and teach in my software engineering class. I currently use doctest and unittest and have a pretty decent lesson plan around them. doctest is funky and unittest is bog standard. I’d consider dropping doctest because I need room, and we don’t do enough xUnit-style testing for them to really grasp it. Both are built into the standard library, though.
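
For concreteness, here’s roughly what the two built-in styles look like; the tab_line function and the file names are invented for illustration, not taken from the course materials.

    # tabline.py -- hypothetical helper; the docstring example doubles as a doctest
    def tab_line(fields):
        """Join fields with tabs.

        >>> tab_line(["a", "b", "c"])
        'a\\tb\\tc'
        """
        return "\t".join(str(f) for f in fields)

    if __name__ == "__main__":
        import doctest
        doctest.testmod()   # runs the example embedded in the docstring

    # test_tabline.py -- the same check in bog-standard xUnit style
    import unittest
    from tabline import tab_line

    class TestTabLine(unittest.TestCase):
        def test_joins_with_tabs(self):
            self.assertEqual(tab_line(["a", "b", "c"]), "a\tb\tc")

    if __name__ == "__main__":
        unittest.main()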

But then there’s pytest, which seems fairly popular. It has some technical advantages, including a slew of plugins (including ones for regression testing and BDD-style testing). It scales in complexity nicely…you can just write a test function and you’re done.
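
With pytest, the same check really is just a bare function and an assert (again, the names are my invented example, and this assumes pytest is installed):

    # test_tabline_pytest.py -- same check; pytest discovers test_* functions automatically
    from tabline import tab_line

    def test_joins_with_tabs():
        assert tab_line(["a", "b", "c"]) == "a\tb\tc"

    # run with:  pytest test_tabline_pytest.py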

But, of course, it’s a third-party thing and needs to be installed. Any plugins would have to be installed too. Is it “better enough” to ignore the built-in libraries? Or should I add it on top of the built-in libraries? AND THERE MIGHT BE SOMETHING YET BETTER OUT THERE OH NOES!!!!

No. The key principle here is a meta-principle: Don’t invest too much more effort. Make a decision and stick with it. In the end, any of the choices will do, and one big determiner will be “does it spark my interest now?” while another will be “how much extra work is that?”

And that’s fine.