A Survey of Online Coding Tutorials

The paper, “A Pedagogical Analysis of Online Coding Tutorials”, provides an analytical review of a sample of online coding tutorials. One of my project students did something similar (they should have published!). The analytical framework is useful but not surprising: they have a set of types (interactive tutorial, MOOC, web reference, etc.) and “nine groups of 24 [analytical] dimensions” including content, organisation, and context. It all seems sensible, though I’m a bit leery; it seems almost too sensible. There’s no empirical work on actual effects (completion, satisfaction, and learning), yet it’s super tempting to think we can extrapolate from this beautiful set of features to those effects. Consider their key conclusion:

Our results suggest that most online coding tutorials are still immature and do not yet achieve many key principles in learning sciences. Future research and commercial development needs to better emphasize personalized support and precise, contextualized feedback and explore ways of explaining to learners why and when to use particular coding concepts. Based on our sampled tutorials, we recommend that teachers be very selective in their use of materials, focusing on the more evidence-based tutorials, particularly the educational games. All educational games in the list provide hierarchical structure, immediate feedback, and opportunities that learners actively write code and use subsequent knowledge for coding throughout the tutorial.

But they’re games. What kind of learners are responding to them? Are students responding to them? No coding game that I know of has bubbled up into the popular consciousness or trade press the way, say, Khan Academy or MOOCs in general have. That doesn’t mean they aren’t educationally superior, but it needs some explanation.

Overall, however, it seems like a solid, worthwhile paper and a good and necessary starting point. Someone needs to do this sort of work, and we need more of it.

It’s also the sort of work that needs a dynamic, ongoing database that’s kept up to date, with periodic snapshot papers. One-off papers get stale quickly! But it’s eminently replicable, so…have at it!


Degrees of Belief

I’m not sure why this paper showed up and lingered in my tabs, but it did. I vaguely recall thinking “oh, that sounds interesting!” and then being disappointed.

It starts with an argument for why the topic (the metaphysical status of beliefs) is worth exploring. But the arguments seem pretty…weird. One is to spare formal epistemologists from having to say “all-out belief” or “binary belief” instead of just “belief”, and from talking about degrees of confidence rather than degrees of belief. I guess I’m losing some aspect of being a philosopher, because that sounds like a really dumb reason to write a paper.

We then see one rebuttal of a supposedly common argument:

Assumption 1: The property of having confidence that p is identical to the property of having belief that p.
Assumption 2: ‘Belief’ and ‘confidence’ pick out the same thing.

They then infer that since the property of having confidence, or the thing picked out by ‘confidence’, comes in degrees, it follows that belief comes in degrees.

However, no reasons are given for Assumptions 1 and 2. They seem to just be assumed. Now, on the face of things, belief and confidence do seem to be similar sorts of mental entities; perhaps they are identical. On the other hand, our having formed different words for them is some evidence that they are distinct. So, as it stands, I see no convincing argument here that beliefs come in degrees. We will have to look elsewhere for better arguments.

Now I want to say, “are you kidding me?” First, I want to know how common this argument is. Next, I want to know what problems this eliding causes, if any. Finally, I want to know whether the author has even seen a thesaurus. Having multiple words for the same thing happens all the time.

But it gets worse:

Consider (i). One can talk of much hope, little confidence, much desire, and so on. For any paradigm propositional attitude that comes in degrees, higher or lower degrees of that attitude can be attributed to a person by way of an occurrence of a mass noun. This is inductive evidence for (i).

Consider (ii). One cannot ascribe higher or lower degrees of belief to a person with ‘belief’. (5) does ascribe belief by way of a mass noun, but this only ascribes a number of single beliefs to a population, not a degree of belief to a single individual. Whenever belief is ascribed to a single person by way of a noun, it is by the occurrence of a count noun and not a mass noun. That is why (3) and (4) do not make sense. From (i) and (ii), it follows that beliefs do not come in degrees.

Say what? We easily say that I have a strong or weak belief, or that this belief is stronger than that one. And language is quirky! Consider temperature! It canonically comes in degrees! But I can’t say that I have much or little temperature!

Just no.

And, you know, people ask “Ok you believe P, but how much do you believe it?” “100%!”

“Do you believe it more or less than you believe the earth is round?” “Oh much less.”

So I remember now why I gave up in irritation. If you are going to argue from natural language to metaphysics (which I find weird in this day and age), and even if we accept confining yourself to English (which is bad), a minimal constraint should be a systematic linguistic analysis! Not a couple of cherry-picked examples and some blather about mass vs. count terms!

(Note that I don’t believe my examples prove that belief does have degrees, because I am not a silly person. I recognize that people might well talk about things in funny ways!)

In any case, I would have thought a metaphysics paper would have explored the, you know, metaphysics. E.g., looked at the ontological aspects of beliefs. One might explore whether neuroscience dictates some aspect of the metaphysics of belief. (If beliefs supervene on excitation dispositions, they have a natural degree aspect in us, independent of evidential strength.)

I’m so grouchy.

Constant Time Code

Timing attacks on crypto are on the rise. (They are one class of side channel attacks. In general, side channel attacks are very tricky.)

Most software has multiple execution paths, and the time (or other resources) it takes to follow different paths can vary considerably. Indeed, one key aspect of efficient programs is handling potential “fast paths” in an actually fast way. But even if you aren’t breaking out some paths as optimisations, normal program structuring leaves you with programs whose performance is sensitive in specific ways to input (of otherwise comparable size). This can allow attackers to infer things, including sensitive information.
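To make the shape of the problem concrete, here’s a minimal C sketch (my own illustration, not taken from any of the papers discussed here) of the classic early-exit comparison. The loop returns as soon as it finds a mismatch, so the running time leaks how many leading bytes of the guess were correct, which an attacker who can measure response times can exploit byte by byte.

```c
#include <stddef.h>

/* Hypothetical, illustrative only: a naive secret comparison that bails out
 * at the first mismatching byte. Its running time depends on where the first
 * mismatch is, so response times reveal how much of `guess` is correct. */
int check_token_leaky(const unsigned char *secret,
                      const unsigned char *guess, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (secret[i] != guess[i])
            return 0;   /* early exit: timing depends on the mismatch position */
    }
    return 1;
}
```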

One way to combat this is to make all execution paths take (essentially) the same amount of time. For example, suppose I have a short-circuiting Boolean operation “shortOp OR longOp”. Since my OR will only execute “longOp” if “shortOp” fails, I have two rather different execution paths. If I replace it with a non-short-circuiting “OR”, i.e. one that always evaluates all its operands, then I’ve made this test constant time…well, only if my operands are themselves constant time and there are no surprises from the compiler when optimising. To a first approximation, there are always surprises. At the very least, I need to check the output of my compiler.
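In the spirit of that non-short-circuiting “OR”, here’s the same check rewritten as a sketch (again my own, with made-up names): it inspects every byte regardless of where mismatches occur and combines the results with bitwise operations rather than branches. The caveat above still applies: an optimising compiler may reintroduce branches, so the generated assembly needs inspecting.

```c
#include <stddef.h>

/* Sketch of a data-independent comparison: every byte is examined, and
 * differences are accumulated with bitwise OR instead of an early return.
 * (An optimising compiler can still undo this, so check the emitted code.) */
int check_token_ct(const unsigned char *secret,
                   const unsigned char *guess, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= secret[i] ^ guess[i];   /* non-zero iff any byte differs */
    }
    return diff == 0;   /* single comparison at the end, after all the work */
}
```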

The paper “FaCT: A Flexible, Constant-Time Programming Language” presents a new programming language and toolchain designed to support the implementation of constant-time, and thus timing-attack-resistant, functions.

It’s worth reading for Part II alone, which is a tour of the vulnerabilities of normal C code and some standard mitigating tricks. Their solution seems interesting:

FaCT is designed to: (1) allow developers to easily write idiomatic code that runs in constant time, (2) be flexible enough to express real-world crypto code, (3) interoperate with C code, (4) produce fast assembly code, and (5) be verified to be resilient against timing attacks.

The DSL looks pretty neat, and the use of verifiers and solvers at key points is fun. There’s no empirical evaluation, so whether it actually helps is still an open question.

It’d be interesting to try to embed this DSL in a language like Rust. I’d think you’d have to do it at the language level rather than via the macro facilities, but I’m not sure. You definitely need to perform verification checks late in the compilation process, and that might not be easily accessible from the language level.

Addendum: While cleaning up tabs, I found an interesting blog post on writing a “branchless” UTF-8 decoder. The goal there was performance, by helping pipelining. This would also avoid speculative execution for the decoder, I’m guessing.
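For flavour, here’s a tiny sketch of the branchless idea (my own toy example, not the decoder from that post): the length of a UTF-8 sequence is read off the leading byte via a table lookup rather than a chain of if/else tests, so there is no data-dependent branch to mispredict.

```c
#include <stdint.h>

/* Toy illustration: map the top four bits of a UTF-8 leading byte to the
 * sequence length with a lookup table instead of branches.
 * 0 marks continuation bytes, which are invalid as leading bytes. */
static const uint8_t utf8_len[16] = {
    1, 1, 1, 1, 1, 1, 1, 1,   /* 0xxxxxxx : ASCII              */
    0, 0, 0, 0,               /* 10xxxxxx : continuation byte  */
    2, 2,                     /* 110xxxxx : 2-byte sequence    */
    3,                        /* 1110xxxx : 3-byte sequence    */
    4                         /* 11110xxx : 4-byte sequence    */
};

static inline int sequence_length(uint8_t leading_byte)
{
    return utf8_len[leading_byte >> 4];   /* no data-dependent branch */
}
```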