Keypad Research

Just some tab cleanup!

Phone dialers are a fascinating part of our lives. I remember moving from rotary dialers to a keypad and it was amazing. It was so much faster and easier on the hand! But the real game changer was the way it engaged muscle memory. (This is all fading as contact lists make remembering or even dialing numbers increasingly less necessary.) Most of us have only experienced rotary or the dialpad, so it may not be clear just how designed the dialpad is. But Bell Labs experimented with a ton of layouts:

And Sajid Saiyed is revisiting this design space today! Dialing on the touch screen of a handheld, modern smartphone is very thumb oriented, but screen sizes make the pads less thumb friendly, especially with the “dial” button pushing everything up a row. You can participate in the study (which looks for issues across demographic groups, which is interesting…I didn’t see a cross-tab on screen size, though) using an iPhone app.

The current leading concept is very simple and obvious…move the seldom-used buttons (* and #) up top and slot the call button beside the 0. This brings everything down and makes the call button easier to hit. Neat!

Brutalist Web Design

This is a fun article:

▪ Only hyperlinks and buttons respond to clicks.

▪ Hyperlinks are underlined and buttons look like buttons.

I nodded fiercely with each bullet. Then I thought, “Well, pop-up menus are handy.”

That’s when I knew I was part of the problem.

Making Principled Unprincipled Choices

I like principled decision making. Indeed, few things inspired me as much as this quote from Leibniz:

if controversies were to arise, there would be no more need of disputation between two philosophers than between two calculators. For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate.

Alas, there’s no decision-making situation where this vision holds, even in principle. But still, I like my decisions to conform to some articulable rationale, preferably in the form of some set of general rules.

But some of my rules are meta-rules which focus on resource use. Obviously, one goal of decision-making rules is to maximise the chances of making the “right” choice. But for any metric of rightness (let’s say, an appliance with the best value for money) there’s a cost in the effort to assure the maximum (e.g., research, testing, comparing…lots of shopping). That cost can be quite large and can interact with subsequent satisfaction in a variety of ways. I’m prone to this and, indeed, end up in decision paralysis.

In response to this, one of my meta-rules is “don’t over-sweat it”. So, for small stuff, this reduces to “don’t sweat the small stuff”. But, because of my anxiety structures, I tend to see certain classes of small stuff as big stuff. So, I dedicate some effort to seeing small stuff as small. Sometimes, this means making it invisible to me. Poor Zoe often has to make the actual purchase after I’ve done the research, or even make the decision itself. For various classes of minor, irrevocable, sub-optimal decisions, I prefer not to know about them. I will obsess, and that doesn’t help anyone.

When the decision is essentially arbitrary (because all choices are incommensurable in toto, or their value is unknowable at the moment), I try to make myself flip a coin (metaphorically, at least). What I try to avoid is building a fake rationale (except when that enables the choosing or makes me happier with the arbitrary choice).

Technical (or teaching) decisions are often best treated as arbitrary, but we have tons of incentives to treat them as requiring a ton of analysis to make the “right” choice. At the moment, I’m evaluating which Python testing framework to use and teach in my software engineering class. I currently use doctest and unittest and have a pretty decent lesson plan around them. doctest is funky and unittest is bog standard. I’d consider dropping doctest because I need room, and anyway we don’t do enough xUnit-style testing for the students to really grasp unittest. Both are built into the standard library, too.
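
Just to make the contrast concrete, here’s the same trivial test in both styles (a minimal sketch; add is a made-up example function):

    import unittest

    def add(x, y):
        """Add two numbers.

        doctest picks up the example in this docstring:
        >>> add(2, 3)
        5
        """
        return x + y

    class TestAdd(unittest.TestCase):
        # bog standard xUnit style: a TestCase class with assert* methods
        def test_add(self):
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # runs the docstring examples
        unittest.main()    # runs the TestCase classes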

But then there’s pytest, which seems fairly popular. It has some technical advantages, including a slew of plugins (including ones for regression testing and BDD-style testing). It scales nicely in complexity…you can just write a test function and you’re done.
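
To be fair, the whole pytest version of that same test is just (again a sketch, with the same made-up add):

    # test_add.py -- pytest collects files and functions named test_*

    def add(x, y):
        return x + y

    def test_add():
        # a plain assert; pytest rewrites it to report the actual values on failure
        assert add(2, 3) == 5

Run pytest in that directory and it finds and runs the test by naming convention alone.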

But, of course, it’s a third-party thing and needs to be installed, as would any plugins. Is it “better enough” to ignore the built-in libraries? Or should I add it alongside the built-in libraries? AND THERE MIGHT BE SOMETHING YET BETTER OUT THERE OH NOES!!!!

No. The key principle here is a meta-principle: don’t invest too much more effort. Make a decision and stick with it. In the end, any of the choices will do; one big determiner will be “does it spark my interest now?” and the other will be “how much extra work is that?”

And that’s fine.


The Cyber Security Body Of Knowledge

This effort looks cool. Providing a codified overview of What We Should Know about cyber security could be very helpful, especially for teaching. I just read the Software Security Knowledge Area and it wasn’t a bad read, though it felt a little “listy” without a good cognitive map. In particular, the cop-out on the completeness of their taxonomy of faults bugged me. I don’t blame them for not claiming to be comprehensive, but just from reading it I can’t tell whether they cover the bulk of vulnerabilities or the important ones. I should come away with a better sense of what I don’t know!

Then there was this thing that bugged me:

• A detection technique is sound for a given category of vulnerabilities if it can correctly conclude that a given program has no vulnerabilities of that category. An unsound detection technique on the other hand may have false negatives, i.e., actual vulnerabilities that the detection technique fails to find.

• A detection technique is complete for a given category of vulnerabilities, if any vulnerability it finds is an actual vulnerability. An incomplete detection technique on the other hand may have false positives, i.e. it may detect issues that do not turn out to be actual vulnerabilities.

Oy! This reverses the ordinary (i.e., mathematical logic) notions of soundness and completeness…sorta. They didn’t quite flip the meanings; instead they focused on a weird entailment class. Take soundness. The entailment they pick is, “Program P, for vulnerability class V, has no Vs.” It’s that “no” that messes it up. In order to conclude that there are no Vs, it has to be the case that if there were a V, the technique would find it. I.e., that it is complete with respect to “P has a V”. And I mean, the last sentence makes it clear that they are thinking at the “P has a V” level. And, of course, their (bogus) completeness also focuses on the “P has a V” level, so they just screwed up. Sigh.

It would be much more straightforward to define a detection technique for V as a procedure which takes P as input and returns a list of “P has a V” statements (with specific Vs). Then the technique is sound if it produces no false positives and complete if it produces no false negatives. A sound and complete technique that returns the empty list allows us to conclude that the software is secure wrt V.
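
In symbols (my gloss, not theirs): write D_V(P) for the set of “P has a v” claims the technique returns on program P, and T_V(P) for the set of such claims that are actually true. Then:

\[
\begin{aligned}
\text{sound:} \quad & D_V(P) \subseteq T_V(P) && \text{(no false positives)}\\
\text{complete:} \quad & T_V(P) \subseteq D_V(P) && \text{(no false negatives)}
\end{aligned}
\]

A sound and complete technique returning D_V(P) = ∅ gives T_V(P) = ∅, which is exactly the “no Vs” conclusion they wanted soundness to deliver.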

Then there’s this:

It is important to note, however, that some detection techniques are heuristic in nature, and hence the notions of soundness and completeness are not precisely defined for them. For instance, heuristic techniques that detect violations of secure coding practices as described in 2.3 are checking compliance with informally defined rules and recommendations, and it is not always possible to unambiguously define what false positives or false negatives are. Moreover, these approaches might highlight ’vulnerabilities’ that are maybe not exploitable at this point in time, but should be fixed nonetheless because they are ’near misses’, i.e., might become easily exploitable by future maintenance mistakes.

Sigh. Detection techniques that are “heuristic” are generally unsound or incomplete. What they seem to be talking about is problems (or maybe just infelicities) with the definition of some category of vulnerabilities.

Still! It’s in development and even as such, I’d point a student at it. These things aren’t supposed to substitute for textbooks, but they can be helpful as a quick orientation and sanity check.

C++ Header Analysis

I’ve been running a third-year project on “analysing the Python ecosystem.” It was intended to be pretty flexible (e.g., analysing available teaching books for coverage). I’ve had various nibbles and bites at both the third-year and MSc level, but people find it confusing. I keep thinking it should be easy given how much stuff is easily available, but people don’t seem to know what to ask.

Here’s a pretty simple analysis of C++ headers. As an example, it’s ok. But it does suffer from the standard problem that it stops before any interesting questions get answered. What I do find interesting is the way that C++ code ends up including vector all over the place (since it’s super useful but not built into the language). But it’s not a very surprising or deep result.
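
The mechanics of this sort of analysis are tiny, which is part of why the project should be approachable. A minimal sketch (in Python, since that’s my ecosystem; you point it at whatever corpus of C++ sources you have lying around):

    import re
    import sys
    from collections import Counter
    from pathlib import Path

    # matches lines like:  #include <vector>  or  #include "foo.h"
    INCLUDE = re.compile(r'^\s*#\s*include\s*[<"]([^>"]+)[>"]')

    def count_includes(root):
        """Tally how often each header is #included under root."""
        counts = Counter()
        for path in Path(root).rglob("*"):
            if path.suffix in {".h", ".hpp", ".cc", ".cpp", ".cxx"}:
                for line in path.read_text(errors="ignore").splitlines():
                    m = INCLUDE.match(line)
                    if m:
                        counts[m.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for header, n in count_includes(sys.argv[1]).most_common(20):
            print(f"{n:6d}  {header}")

The interesting questions start after this: why those headers, how the distribution shifts between application and library code, and so on.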

Electronic Kanban Boards

I’m a fan of Trello.

Up to a point. I’ve made many a board and found it super useful until my efforts to use it inevitably peter out. But that’s probably me.

Kanban boards seem worth teaching in a software engineering course if only because they are so common. They are also fairly easy to grasp so you can have a pretty good discussion about organisation and management. There’s a lot of intellectual baggage associated with them but just getting students to think about writing down todo lists is a win.

I doubt I can force a class to sign up for Trello. Requiring students to use third-party services has a lot of implications that were dodgy even before the new data protection laws.

This article discusses five open source alternatives but, frankly, even though there are some attractive feature lists, they don’t really appeal. Worse, the idea that I would either host such a thing or get some other part of the uni to host it seems impossible. I guess the students could run the one with Docker images available…but that seems even more impossible.

We can go back to Post-its or index cards…but we don’t have a shared environment.

So…maybe it’s better to focus on issue lists and bug reports? And put off these larger management issues until the next class?