The Tabulator JavaScript Library

I have to build a website. It’s a browsable repository of modestly complex structured documents largely represented as graphs. There are lots of possible entry points, summaries, and views.

Fine. We know this will be sorta easy on the one hand and brutally suck on the other.

This is exemplified by the excellent table library I’ve been using, Tabulator.

I mean, out of the box (or in the demos at least) the tables look nice. They can sort. You can resize columns. You feed it some JSON and Bob is all avuncular toward you.

Except.

Whoof, cutting and pasting the example code is an exercise in weirdness. It took me a lot of dork age to realise that the HTML5 doctype was essential, entirely essential, to moderately normal functioning.

And then there are the filters. Seems really nice…add a bit of search without any server mucking. Well, we jump from sorta declarative specs to a rat’s nest of bizarre (to these naive eyes) JavaScript barfing. Cutting and pasting the example code yields a disappeared table with no clue as to what’s going on.

Plus, the documentation says “source code” and then gives you something which is, at best, partial snippets, not working source code for the example.

Don’t get me wrong. It’s very cool, and maybe if I were a wired-in front-end developer the documentation would make perfect sense.

But oy! It’s some brutal, empirical try and mostly fail to do anything.

As with so much web dev stuff, some select simple things aren’t so horrible, and then it fucking drives off a cliff of doom. It’s all so unforgiving and weird. Why isn’t there a simple “filter” flag? Am I supposed to make the buttons and fields, or do they magically show up? If the form shows up, why do I have to call all the filter functions? Including “clear filters”?

Maybe there’s some point where I’ll get it. But adding tree data was adding a flag and structuring the data appropriately. Adding filters is some mass of coding. It feels uneven to my untutored mind.

Apple Store Suckery

Zoe’s phone died. It was doing the wonky battery thing. This was one of the recalled iPhone 6Ss. Last year I had the battery replaced under the program. It didn’t quite fix the problem but did seriously mitigate it.

Ok but now it won’t charge. As the Apple Store guy pointed out…it rattled.

Yikes! I guess they did a crap job last year.

Nope! It was a knock off battery poorly installed.

Now of course they didn’t “accuse us” of anything…they just thought that they’d fixed it properly and that sometime in the past year, probably the past month, someone had meddled with it, and so no fix for us.

This phone has literally never been out of our possession. We had no motive to have any other repair or battery repair…we thought it was still under warranty!

And don’t fucking tell me that you aren’t accusing me, because you did it by implication. That just sucks. Just say “we can’t comment on the mechanism”; since we can’t prove it’s their fault, we’re shit out of luck.

Of course this doesn’t just mean they won’t repair it for free, it means they won’t repair it.

There was an iSmash in the mall that repaired it in under 15 minutes while we watched. Apple drove us to what they forbade.

We’ve dumped serious cash into that store. I don’t have a good feeling about them.

Paraglare is Doubleplusgood

Every now and again I look for a Python parser framework. I tend to want something highly declarative but also as easy to use and debug as PetitParser (which inclines me toward PEG grammar systems). I’d really like multi-language targeting so people can reuse my grammars. (But then I want the actions to be specifiable as well.)

Usually I’m disappointed. I read about a bazillion systems and nothing happens.

I did this again the other day because I wanted to write my reasoner tests in Jewel and 1) I didn’t want to find, much less get working, my Smalltalk Jewel parser and 2) hooking that up to Python tests seemed even more awful.

So, I did my usual cursory look and stumbled on Paraglare. The feature list looked promising…esp. the idea of good error recovery…so I installed it…

…and it just worked! What a treat! Here’s my grammar:

ONT: AX*;

CE: CE '&' CE  {left, 1}
 | '<' name '>' CE  {left, 1}
 | name;

AX: CAX | RAX;
CAX: CE '=>' CE end;
RAX: name '==>' name end;

terminals
name: /[A-Za-z][A-Za-z0-9]*/;
end: /\./;

And here are my actions:

actions = {
    "CAX": [lambda _, nodes: SubClassOf(nodes[0], nodes[2])],
    "RAX": [lambda _, nodes: SubPropertyOf(Property(nodes[0]), Property(nodes[2]))],
    "CE": [lambda _, nodes: And(nodes[0], nodes[2]),
           lambda _, nodes: Some(Property(nodes[1]), nodes[3]),
           lambda _, nodes: Class(nodes[0])]}
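
For context, wiring those two pieces together is only a few lines with the library’s Python API (the PyPI package is spelled parglare). A rough sketch, assuming the grammar above lives in a string called grammar_text and actions is the dict above:

from parglare import Grammar, Parser

# Sketch, not verbatim from my project: build the parser and hand it the
# actions; the actions turn recognised productions into axiom objects.
grammar = Grammar.from_string(grammar_text)
parser = Parser(grammar, actions=actions)

# e.g. two class axioms; the result is whatever the actions built
axioms = parser.parse("A => B. B => C.")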

Daaaaamn that was easy! It’s not a complex grammar but still. It’s parsing my tests no problem!

Good stuff.

Test test test

My little reasoner project hit a milestone today: it terminated in reasonable time (2 minutes or so) on one of the EL variants of Galen.

Yay! No more unboundedly growing to-do lists!

And it wasn’t anything clever! No change in the balance of SQL and Python!

Nope, first I refactored so I would be able to more systematically explore variants, and then I finally wrote some super basic tests. E.g.,

A=>B. B=>C.
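
In code, the shape of such a test is roughly the sketch below; classify is a hypothetical stand-in for my reasoner’s actual entry point (not shown in this post), and the axiom constructors are the ones the parser actions build:

# Sketch only: `classify` is a made-up name for the reasoner's entry point;
# SubClassOf/Class are the constructors used in the parser actions earlier.
def test_subclass_chain():
    axioms = [SubClassOf(Class("A"), Class("B")),
              SubClassOf(Class("B"), Class("C"))]
    inferred = classify(axioms)
    # the chain should entail A => C
    assert SubClassOf(Class("A"), Class("C")) in inferred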

I did this because I should and because I suspect some subtle unsoundness was causing the unbounded growth.

It wasn’t subtle problems but blatant, serious ones: trouble with retrieving chains and reasoning with conjunctions.

Fixing them fixed the leak. I don’t know if the Galen classification is correct (yet) but I have a hell of a lot more confidence. And it terminates in reasonable time!!

So simple testing pays off again.

Students Surprise You

I have a group of pretty good students. They are very nice and likable and bright…all good things.

They went from most of them saying “I don’t have the faintest idea what I’m doing” to “here’s a pretty full featured first version” in three weeks.

It wasn’t totally unexpected given their progress last week but overall it still was a fun surprise.

Non linear progress is one of the frustrations and pleasures of teaching.

Redesigning a Course That May Disappear

Making a new course is a ton of work. This is true no matter what, but it is harder in this day and age, where you have to produce a ton of detailed material. When I was a philosophy instructor, there were classes where I had to pick texts, think about what I was going to discuss, and design a midterm, a final, and two paper topics. In principle, I didn’t need to work out my lectures in detail in advance. (I mean, I experimented with such, including writing them out verbatim.) I can easily give very coherent, extemporaneous lectures (I do it all the time…indeed, a visitor once asked me if what they’d just seen was a prepared lecture). I’d guess most academics can do this, though we’d prefer not to.

I like a loose structure when I’m hoping for lots of interaction. Well, when I *expect* lots of interaction (I’m always hoping).

These days I have to produce slides. Lots of slides. Assignments need a metric ton of vetting plus rubrics, submission procedures, etc.

All things being equal, if we’re going to redo a course, we’d like it to last for a while. Major reworks get us 8 hrs of duties credit per contact hour. That’s a *lot*.

The other dirty secret is that while I think the class gets better…it’s not always clear.

BUT if I can improve a course even for a year, the temptation is to go for it. After all, for *those students* it’s the only version of the course they’ll get.

I’m thinking about how to get more efficient at major changes.

Nuitka Markdown Performance

Mistune has a…not terrific…benchmark script that tests a bunch of different parsers. There’s a bit of discussion in a blog post by Hsiaoming Yang (the author of Mistune), though that’s more a reflective review of Python-accessible Markdown parsers. The script keeps everything in one process and just parses the same document over and over. But hey! It’s there, and why not use it to get a broader view of how well Nuitka does at optimising string-heavy applications.
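
The core of it is nothing fancy; per parser, it boils down to something like the sketch below (I’m using the 0.x-era mistune.markdown call here; the script’s exact calls differ per parser and per Mistune version):

import time
import mistune

# Sketch of the benchmark's shape, not the actual script: render the same
# Markdown Syntax document over and over and report elapsed wall-clock time.
def bench_mistune(text, times=1000):
    start = time.perf_counter()
    for _ in range(times):
        mistune.markdown(text)
    return time.perf_counter() - start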

Some of the parsers didn’t work after pip installing them, so I dropped them. First, let’s look at the results from the blog post:

Parsing the Markdown Syntax document 1000 times...
Mistune: 12.7255s
Mistune (with Cython): 9.74075s
Misaka: 0.550502s
Markdown: 46.4342s
Markdown2: 78.2267s

Ok, I won’t have the Cython version, but fine! Here’s regular Python followed by the Nuitka-compiled binary:

$ python bench.py 
Parsing the Markdown Syntax document 1000 times...
mistune: 14.963354
misaka: 0.43727499999999964
markdown: 48.857309
markdown2: 311.745707
hoep: 0.5286040000000298
$ ./bench.bin 
Parsing the Markdown Syntax document 1000 times...
mistune: 12.439452999999999
misaka: 0.39941600000000044
markdown: 37.808417999999996
markdown2: 631.708383
hoep: 0.4311249999999518

Ok wackiness. First, I’d guess my machine is roughly similar to HY’s, just looking at the times and his reporting that he ran on a MacBook Air (mine’s pretty old).

However, markdown2 is way out of whack, being much, much slower on my machine. I’d guess it’s a different version?

The second wackiness is that compiling with Nuitka doubles the time to complete the benchmark (markdown2 goes from roughly 312 seconds to 632). I did read that this can happen (in a bug report thread). It seems to be an issue with dynamic libraries.

So Mistune with Cython takes 76% of the time of plain Python, while Nuitka-compiled Mistune takes 83%. That’s pretty good, esp. as I didn’t have to touch a line of code with Nuitka. While Cython claims to compile normal Python code (getting better speedups as you add typing info), just looking at the tutorial suggests that, even so, Cython wants you to do a lot of stuff to get it to work, e.g., .pyx files and setup.py shenanigans. Maybe for another day, esp. as Mistune doesn’t seem to have Cython support anymore.
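
For contrast, the minimal Cython route needs a .pyx file plus a build script along these lines (illustrative only; the module name here is made up, and the real speedups come from adding type annotations to the hot paths):

# setup.py: a minimal Cython build, vs. the zero-config Nuitka route.
# "fast_markdown.pyx" is a hypothetical module name.
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("fast_markdown.pyx"))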

One clear message: always test and compare. The markdown2 slowdown is a bit of a surprise. I didn’t do a PyInstaller version of the bench.py script to see how the sizes compare (bench.bin is about 3.6 megs). This seems like a great test bed for a good N-Version testing framework.

I have to wonder how hard it would be to have Nuitka either compile Cython or compile to Cython, esp. as it gets its type inference going.