Floating Point Explained Visually?

Tab cleanup time!

I’ve had this article on “explaining” floating point numbers open in my phone’s browser for eons. Basically, my first glance made me go “huh,” I didn’t have the time or energy to figure it out at that moment, and that’s what stuck with me.

But I need a post today and killing this tab suddenly was motivating.

One annoying thing about the article is that it doesn’t say what about floating point it’s explaining, or why floating point works the way it does. It seems to be more about explaining how to convert from floating point notation into standard decimal notation. Which…fair enough. I’m not sure the “scary formula” is such a problem, or that the window metaphor is all that useful. In the end, standard floating point is just scientific notation in base 2 with a fixed width (thus a “moving” radix point). If standard decimal scientific notation is confusing in the same way, then maybe this visual/windowing explanation might help. But I’d start with scientific notation, as most people will be at least a little familiar with it.
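To make the “scientific notation in base 2 with a fixed width” point concrete, here’s a minimal Python sketch (the helper name is mine, purely illustrative) that pulls the sign, exponent, and fraction fields out of an IEEE 754 double:

```python
import struct

def decompose(x: float) -> tuple[int, int, int]:
    """Split an IEEE 754 double into its sign, exponent, and fraction bits."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))  # raw 64-bit pattern
    sign = bits >> 63                   # 1 bit
    exponent = (bits >> 52) & 0x7FF     # 11 bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)   # 52 bits: the fixed-width part
    return sign, exponent, fraction

# 6.5 is 1.101 (binary) times 2**2, i.e. base-2 scientific notation:
# sign = 0, exponent = 1025 (2 plus the 1023 bias), fraction encodes .101
print(decompose(6.5))  # (0, 1025, 2814749767106560)
```

The exponent “moves” the radix point; the 52 fraction bits are the fixed width everything has to squeeze into.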

The big thing not quite explained is how (and why) various numbers get approximated, and thus how floating point departs from various standard features of the rationals and reals. That seems to be the deep part of floating point, especially the bits that come from features of the base rather than features of the width.

Any rational number can be represented exactly as a pair (i.e., a ratio!) of integers. The size of your integers constrains which rationals you can represent (the largest rational is maxint/1 and the smallest positive rational is 1/maxint). But in positional notations with a radix point (like standard decimal notation) we typically can’t capture all rationals, because some rationals will have an infinite expansion: 1/3 in decimal, for example. (Though the expansion will be repeating.) Binary has a different set of numbers it can’t easily represent.
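Python makes both halves of this easy to see: exact rational arithmetic with pairs of integers just works, while 1/10, which has an infinite repeating expansion in binary, gets silently approximated:

```python
from fractions import Fraction

# Any rational is exact as a pair of integers:
third = Fraction(1, 3)
print(third + third + third == 1)  # True: exact arithmetic, no rounding

# But 1/10 repeats forever in binary, so the nearest 64-bit float
# is only an approximation:
print(0.1 + 0.2 == 0.3)  # False
print(Fraction(0.1))     # the exact rational the float actually stores
```

That last line prints 3602879701896397/36028797018963968, which is close to, but not, one tenth.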

Ok, the point of this isn’t to fully explain floating point but to get back to the idea that it has a lot of pieces, so there’s a lot to understand. If you’re only going to explain a small piece of it, say that! Say why this piece is useful to understand! Without this, you’re probably making things worse.


Fuzzy on Fuzzy Logic

Sometimes, you just have to go “oy” about an article on rice cookers:

But it’s the math this one runs on, not the adorable music, that makes it so special. The rice cooker of my adulthood is built on fuzzy logic, a field of computing that tries to make rational decisions in a world of imprecision. By mimicking our gray matter’s ability to reconcile gray information, this frivolous gadget has become one of the most essential items in my kitchen.

This isn’t going to end well. This modern rice cooker is compared with her old one:

The Aristotle-inspired rice cooker I had in college would heat until the temperature of the rice rose above 212 degrees Fahrenheit, at which point all of the water would have been absorbed. As the temperature rose past this point, a magnet was activated by a thermostat and the machine would shut off. The appliance was either on or off, and it did but one thing while it was on.

And then we have the modern one:

In my current fuzzy-logic cooker, however, I tell the machine what kind of rice I’m using and how long it has been soaking. It takes that information and decides what temperature it should reach, and for how long. Generally using what are essentially if/then statements, it can fine-tune the process. For example, it can take into account the surrounding air temperature and turn the heating element up or down to compensate. The rice isn’t cooked or uncooked; the fuzzy-logic machine wants it to be cooked correctly.

The second machine might be better, but if this is what it is doing, it’s not using fuzzy logic. It seems to be using perfectly crisp logic. The machine has more than two states, but the logic needn’t have more than two truth values to capture those multiple states.

Now, I suppose it could be using some sort of fuzzy thresholds to determine when to switch, but I don’t see why it would bother. It’s going to determine temperature plus time. Both of these are going to be crisp. It’s going to be at a particular temperature for a given time (modulo heating/cooling cycles of the element under control of the thermostat…but it won’t represent the heat that way!). Then it might switch to another temperature for a different time. It might change its program on the fly depending on sensor input.

But none of this is fuzzy logic in any sense.
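For the record, here’s a hedged little sketch (thresholds invented, purely illustrative) of the distinction. The first controller has several output states, but every condition it tests is simply true or false; the second assigns a statement a degree of truth between 0 and 1, which is what fuzzy logic actually involves:

```python
def crisp_mode(temp_f: float) -> str:
    # Crisp, two-valued logic: each test is plainly true or false,
    # even though the controller has more than two output states.
    if temp_f < 212.0:
        return "heat"
    elif temp_f < 220.0:
        return "hold"
    else:
        return "off"

def fuzzy_hot(temp_f: float) -> float:
    # Fuzzy logic: "the rice is hot" gets a degree of truth in [0, 1],
    # and rules would fire proportionally to that degree.
    if temp_f <= 200.0:
        return 0.0
    if temp_f >= 212.0:
        return 1.0
    return (temp_f - 200.0) / 12.0

print(crisp_mode(208.0))  # "heat": plain if/then, nothing fuzzy about it
print(fuzzy_hot(208.0))   # 0.666...: a truth value strictly between 0 and 1
```

Lots of states, sensors, and if/then fine-tuning gets you the first function, not the second.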

To add a bit of pain:

Fuzzy logic was first proposed in 1965 by Lotfi Zadeh, a computer scientist who is now retired from the University of California, Berkeley.

Except that multi-valued and, in particular, infinitely-valued logics had been investigated well before then (cf. Łukasiewicz–Tarski logic).

I doubt a correction will be forthcoming.

Is it irrational to think Plantinga is rational?

I’m hardly the first to respond to the Plantinga interview.

A few caveats: interviews are a terrible basis on which to judge a thinker or line of argument. The compression alone means that things are going to look enthymematic at best, especially if you are unfamiliar with the work or hostile to the conclusions. Both of these are true for me wrt Plantinga, so my title is perhaps unfair.

OTOH, he put it out there, and the title there is equi-offensive/silly, so why not?!

I want to address two parts, one in a general way and one in specific detail.

To the first: I think any argument that some sort of supernatural being is necessary to fill explanatory gaps has to deal with the fact that every supernatural explanation to date has failed miserably. Our successful predictions and manipulations of the world have come entirely from naturalistic science. And it’s not even close. Given the failures to date, there should be a strong presumption against supernatural explanations going forward.

And note that if it doesn’t give us predictions or manipulations, then we really have to ask whether it is an explanation at all.

Until some reason for thinking this time is different is forthcoming, I think fine tuning arguments should be regarded as nonsense.

Second, let’s look at the cognitive suicide argument. (It actually relates to the above argument.)

AP: Evolution will have resulted in our having beliefs that are adaptive; that is, beliefs that cause adaptive actions. But as we’ve seen, if materialism is true, the belief does not cause the adaptive action by way of its content: It causes that action by way of its neurophysiological properties. Hence it doesn’t matter what the content of the belief is, and it doesn’t matter whether that content is true or false. All that’s required is that the belief have the right neurophysiological properties. If it’s also true, that’s fine; but if false, that’s equally fine.

There are two immediate oddities: 1) why worry about true content rather than content at all and 2) why are true and false content equally fine?

The first point suggests that the argument proves too much, or, at least, one might wonder why we are focusing on true content when content itself is threatened.

The second point, I think, is based on our belief content being closed under negation (in principle) and on bivalence. If P is false then not-P is true and, in some sense, the fact that I got a belief that P instead of that not-P was a coin flip. Or maybe, given bivalence, half of all possible beliefs are false (since half of all propositions are false), and since selection doesn’t select for truth (indeed, it seems that selection is wildly indifferent to truth), we are in no better position with respect to our beliefs than a uniform sampling of content. (Probably within size limits…although, why even that? If content is unmoored from its material instantiation, then why can’t we have actually infinite content! I hope it’s clear from such considerations how bonkers this line is!)

But now we’re just into brute skepticism, aren’t we? If naturalism and evolution require the idea that content is completely unmoored from our evolution and material basis, then why assume we aren’t zombies (ok, we perceive that we aren’t…but *how do we know?!?!?*) or generally quite wrong?

Evolution will select for belief-producing processes that produce beliefs with adaptive neurophysiological properties, but not for belief-producing processes that produce true beliefs. Given materialism and evolution, any particular belief is as likely to be false as true.

Again, why does it produce an equal likelihood? If I select sentences for an arbitrary non-truth-tracking property, how likely am I to end up with roughly half of the sentences being true (even given an underlying distribution of half true/half false)? Well, consider the property of “sounding like something written by Shakespeare”. My guess is that Shakespeare wrote a lot more affirmative sentences than denying sentences, so I doubt we’re going to get an equal number of negated and un-negated sentences. So I don’t see that we’re going to end up with sets that are half P and half not-P. Indeed, given that a big chunk of the folks he wrote about are fictional, it seems that, regardless, I’m going to end up with a lot of false sentences. (It’s really a big deal, as Pigliucci emphasises, that the example stipulates independent beliefs and defines reliability as “being true”.)

GG: So your claim is that if materialism is true, evolution doesn’t lead to most of our beliefs being true.

AP: Right. In fact, given materialism and evolution, it follows that our belief-producing faculties are not reliable.

Here’s why. If a belief is as likely to be false as to be true, we’d have to say the probability that any particular belief is true is about 50 percent. Now suppose we had a total of 100 independent beliefs (of course, we have many more). Remember that the probability that all of a group of beliefs are true is the multiplication of all their individual probabilities. Even if we set a fairly low bar for reliability — say, that at least two-thirds (67 percent) of our beliefs are true — our overall reliability, given materialism and evolution, is exceedingly low: something like .0004. So if you accept both materialism and evolution, you have good reason to believe that your belief-producing faculties are not reliable.

Ok, I keep screwing this up (e.g., on Facebook). It’s true that if getting true beliefs is like getting heads by flipping a fair coin, then the probability of getting 67 out of 100 (or even 67 or more out of 100) is very low. Of course, getting very few true beliefs (say, 33 or fewer) is also wildly improbable. But so what, exactly? Let us even grant the skepticism, i.e., that any given belief (knowing nothing else about it!) is as likely to be true as false. This doesn’t tell us anything about whether we are, indeed, in that situation. In fact, there’s plenty of reason to think that we are in that situation! (Consider John Ioannidis’ famous paper, “Why Most Published Research Findings Are False”.)
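The arithmetic itself is easy to check. A quick Python sketch of the fair-coin model (the helper is mine) reproduces Plantinga’s ~.0004 figure, and shows the other tail is equally improbable:

```python
from math import comb

def prob_between(n: int, k_min: int, k_max: int, p: float = 0.5) -> float:
    """P(between k_min and k_max beliefs true) for n independent coin-flip beliefs."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, k_max + 1))

n = 100
print(prob_between(n, 67, n))  # P(67 or more true): about 0.0004
print(prob_between(n, 0, 33))  # P(33 or fewer true): also about 0.0004
```

On that model you’re overwhelmingly likely to land near 50 true beliefs, which is exactly why the “as likely false as true” premise does all the work in the argument.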

But to believe that is to fall into a total skepticism, which leaves you with no reason to accept any of your beliefs (including your beliefs in materialism and evolution!). The only sensible course is to give up the claim leading to this conclusion: that both materialism and evolution are true. Maybe you can hold one or the other, but not both.

Given that we know lots of our faculties and belief-forming processes are wildly unreliable, why is believing that a fall into total skepticism? The not-entirely-hidden premise is that our belief-forming processes are reliable in this way (i.e., generate far more truths than falsities). But is that true? Can it fail to be true without total skepticism? Well, I don’t feel like a total skeptic, and I’m pretty convinced by Ioannidis. Of course, a key feature of Ioannidis is having a more sophisticated analysis of belief, such that beliefs are not all of a piece. Within naturalistic science, evolution and materialism have a tremendous weight of evidence. Essentially all the evidence points that way, at every level. And we can distinguish between people who are essentially indifferent to evidence and those who are not. Astrologers do a crappy job of making concrete predictions. Creationists have produced no scientific discoveries. Etc., etc. Theologians have made predictions about the world (and about the sort of explanation we will find) and have been wrong, wrong, and wrong again. Why would this metaconsideration make us feel otherwise? Isn’t it more likely, ex ante, that they’re wrong yet again?

(This needs tightening. Grr.)

(Update: PZ Myers has, I think, a similar set of considerations. Unsurprisingly, he’s not as charitable to Plantinga’s argument. But the core bit is similar: Our brains and senses are unreliable! That’s why we put so much effort into knowing! This doesn’t defeat Plantinga’s argument in detail, but one does have to wonder why we need to do that.)

Another cautionary tale

(Doing some distracting writing before the writing I need, personally, to do.)

This time, I’ve reason to believe the cautionee isn’t a PhD student but has already graduated. I’ve no definitive evidence, but there were some web pages wherein their name is prefixed with “Dr.”, which is pretty reasonable evidence.

Yucong Duan posted a message to a very old thread that started out unpromisingly (“I think that there is usually a misunderstanding on the meaning of CWA vs. OWA”…starting with such a broadside puts me, personally, on my guard for kookdom) and descended into what I called “gibberish” (though in the “nice” way).

Probably the highlight of the malpractice was the accusation against me of having been inconsistent, and the generalization from the alleged sample of one to my whole corpus. (Note, I totally understand the latter move: I’m sorely tempted to dismiss their publications without reading them because I cannot see how someone so confused could produce anything reasonable…but, of course, I can so see: they might be more careful in print, they might have helpful coauthors, they might be ok in their own field, etc. Hence, no comment on the rest without reading them. Which is totally not worth my time.) Slightly reformatted for clarity:

Firstly please check piceces of your reply which i have copied as below:

(1)”…My mind reading capabilities failed to detect that you are a student(***)…”

(2)”…you mobilized was used in any standard or reasonable sense. (E.g., “notation”, “CWA”, “OWA”, “semantics”, “ontological”, “negation”). This is characteristic of naive students(***) …”

Can you see the contradiction in your expression?! I am not imagine how many similar cases could be counterred in those more than 100 papers published by you in the past five years?!

Obviously, there’s not even a prima facie contradiction (they had to quote selectively to even get as far as they did). I pointed this out (with some snark) and they doubled down.

Doubling down here was clearly a really bad move. While I think one can make a case against my use of the term “gibberish” (see the discussion on Feminist Philosophers) as being provocative, I will point out that they did not express agitation until I refuted them again and they started getting just as straightforwardly refuted by others. They spiralled into insinuations against me and my competence (e.g., challenging my authority, as if I had made a claim to authority to buttress my points; however, I do have a pretty good claim to authority on these issues, which makes the whole thing weird).

In the end, as far as I can tell, Yucong Duan left the conversation still not knowing how much they didn’t understand, but largely happy with the result. This was fairly predictable from the start, alas.

Look, I could be a grump, or nuts in spite of my expertise, or just wrong on some point. It’s good to challenge the basics on occasion. But there are often important signals in what you might perceive as noise. It’s important to know which criticisms to dismiss, but it’s also important to recognize which criticisms not to dismiss.

That being said, per usual, and per the Feminist Philosophers thread, I have to reflect again on my style. Snark, bluntness, and teasing can be effective and even fun for all, but they obviously have downsides. I sometimes worry that my clinging to them (however tempered over the years) is like someone clinging to racist or sexist jokes and language. I can hear the similarities, which worry me.

A Cautionary Tale

It’s hard being a PhD student.

Having been one for quite a long time, I can speak quite passionately about it. Being a passionate person entails that I probably will at the drop of a hat.

Of course, lots of the difficulties with being a PhD student are simply a matter of life. I take a special interest because it was a defining condition of so much of my life, and mentoring PhD students is and will be such a condition for the rest of my life. So when I see a massive failure by a PhD student, I’m inclined to overreflect on it.

Kindred Winecoff posted quite a silly critique of Paul Krugman, which was picked up by Henry Farrell. Now, Daniel Drezner has a similar, somewhat more nuanced view expressed with rather less vitriol and hyperbole. They share the same basic flaw: a hugely uncharitable misreading of Krugman as saying that the public bears absolutely no responsibility for the massively disastrous Bush and Bush-era policies since it had no influence on them. (I’m risking similar problems by not doing a very close exegesis of any of the articles. Furthermore, my generally pro-Krugman bent generates similar risks to Winecoff’s anti-Krugman bent.)

(The big error in this reading, AFAICT, is to miss the dialectic at several levels. The line Krugman is pushing back against is the one which justifies austerity measures with a massive negative effect on the poor and powerless, along with irresponsible giveaways to the rich and powerful. While there are piles of crap justifications, the key one here is that the public is irresponsible and the elites are relatively helpless in the face of massive public irresponsibility. (Think Santelli.) Whatever responsibility the public bears, I trust it’s pretty obvious that this line is total nonsense, and that’s Krugman’s core point. And, frankly, it’s the interesting point.)

Winecoff is now in a trap of their own making (yes, like Jane Austen, I use the 3rd person plural as a neutral 3rd person singular). They gave a junky critique based on a junky reading and littered it with junky hyperbole, e.g.,

If Greenspan’s “with notably rare exceptions” deserves internet infamy, and it does, then surely Krugman’s less notable exceptions should too.

(Even if the junky reading were correct, these are not remotely comparable. If the junky reading were correct, Krugman would be wrong (this is what Drezner tries, rather crappily AFAICT, to show). Greenspan is engaged in a kind of amazing and disgusting chutzpah in the service of some rather dangerous hackery.)

When appropriately (and gently!) chastised by Farrell, Winecoff fails to do the sensible thing that many commentators urged him to do: take a moment, reflect, and back down. Instead, Winecoff doubles- and trebles-down on the silliness. The silliness is at every level, including a classic “I’m leaving the thread now” followed almost immediately by several more comments.

All this is relatively minor in the grand scheme of things: in the midst of an event like this, it’s really hard to turn oneself around. But given the systematic failures exhibited, I wonder if Winecoff is going to learn from it. If I were his supervisor (US: advisor), I would print all these out and go through them carefully. I’d probably focus more on the dialectical issues (e.g., problems with burden of proof, charity, self-awareness, tactics, strategy, etc.). For example, it’s very unclear what Winecoff hopes to get out of the exchange. I’m afraid that bashing Krugman is the core, which is really a worthless goal, esp. in this context. An easy win would have been to say, “Ok, let’s put my reading of Krugman aside (I’m not ready to give up on it, but maybe that’s because I really can’t stand him; I have to let that rest for awhile) and focus on the more interesting question of how to apportion responsibility for policy.”

This only wins if making the point is more important than making the bash. Which is why it’s a good move regardless of your goal if you are in hostile territory. It sidelines bashback for a while in favor of counterpoint. Given enough point and counterpoint, you might find your own goal moving from bashing to pointmaking. (This is not to say that bashing is worthless. Sometimes it’s very worthwhile indeed. But it needs to work, at the very least.)

As I said, Winecoff isn’t irrecoverable. I had a similar (more heated) exchange with a random PhD student on the web, and they turned out just fine and we’re reasonable colleagues (I’m still a bit wary of them, though). Of course, I had another similar (even more heated) exchange which did not resolve favorably. If you find yourself in this circumstance, get as much reality checking as you can. Reflect. Talk to other (possibly critical) people. Don’t necessarily seek out supportive people, but people who will tell you when you’re off the rails. If you determine you have gone off the rails, apologize, retract, and learn from the experience. In particular, learn something about your own strengths, weaknesses, and reactions.

Update: You don’t have to be a student to have a major-level fail, as the Synthese scandal shows. The solution to such fails is the same.

However, the action Frances recommends (apologizing first) works best in good-faith circumstances. If there’s bad faith or bad blood, admitting fault early can really, really screw you. Asking for time to think about it, or putting up similar disclaimers, can be useful. It really is the case that we fallible people sometimes can’t see the obvious. If you aren’t seeing it, then ask for some time to see it. “Hey folks, I’m seeing a lot of heat from people I generally respect, but I’m not getting it. Can we hold things for a bit while I figure out for sure what’s going on?” is a reasonable move.

“…only a convention”

Welcome to the first post in my “logic malpractice” category. In this category, I will document various actions and words by nominally (and seriously) qualified logicians that are so whacked out as to constitute malpractice (if anyone comes to harm). I’ll usually document such words when they could cause confusion among the logic laity. Our first entry comes from a post by Pat Hayes:

Most conventional logical notations segregate relations from the things (“individuals”) they are relations on. OWL-DL and the OBO Foundry logics follow this ‘segregated’ convention. However, this is only a convention, and there is no fundamental logical requirement why this must be done: OWL-Full, RDF and Common Logic all do not make any strong distinction between relations and other entities.

Full disclosure: I am not speaking with Pat, since he publicly accused me and some others of falsifying an academic paper in order to “destroy RDF”. Note that Pat was wrong on the technical point (check the rest of the thread) and has not retracted his appalling remark.

Sad, isn’t it? That doesn’t mean I can’t document his malpractices, of course!

So what’s wrong with this quote? Critically, it presents the standard sorting of first order logic as a mere convention which has little to no serious implications for the logic (“there is no fundamental logical requirement why this must be done”). This is well known to be utterly false. While you can lift the sorting of the user-defined vocabulary, there are several different ways to do it and they have radically different effects. If you also lift the sort distinction between the user-defined vocabulary and the logical vocabulary, things get even messier (see OWL Full).

Of course, it depends on how you do it. In OWL 2, we allow “punning”, which breaks the sorting without any semantic implications at all. That is, we allow the sort of an occurrence of a term to be determined by its syntactic position. Thus, by simple syntactic analysis, we can rewrite each name in the logical theory to incorporate its sort (e.g., C_a_unary_predicate vs. C_a_logical_constant) and then separate the terms in the normal way. Going beyond this is tricky. For a nice analysis of some key liftings, see Boris Motik’s paper on metamodeling in OWL. (He includes a discussion of punning.)
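To see how mechanical the punning rewrite is, here’s a toy Python sketch (the atom format and sort suffixes are mine, purely illustrative): the sort of each occurrence is read off its syntactic position, so a name used as both predicate and constant splits into two distinct names before standard sorted semantics takes over:

```python
def pun_atom(predicate: str, args: list[str]) -> str:
    """Rewrite an atom so each name carries the sort of its position."""
    head = f"{predicate}_as_predicate"
    rest = ", ".join(f"{a}_as_constant" for a in args)
    return f"{head}({rest})"

# C occurs as a unary predicate in one atom and as a constant in another,
# but after the rewrite they are simply different names:
print(pun_atom("C", ["a"]))  # C_as_predicate(a_as_constant)
print(pun_atom("D", ["C"]))  # D_as_predicate(C_as_constant)
```

Nothing semantic happens here, which is exactly why punning is the easy case; the liftings Motik analyzes are where the real trouble starts.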

To suggest otherwise is serious malpractice. To phrase your malpractice in weasel words (i.e., “no fundamental logical requirement”) lifts it from culpable negligence to, at best, criminal negligence with a strong suggestion of malice. (The weasel words are there, IMHO, so that when challenged on this point, Hayes can backpedal with “What I said is technically TRUUUUUEEEE!!!!!! The problems aren’t fundamental since you CAN work around them and still have an “essentially” first order logic. Thus they aren’t a REQUIREMENT either!?111.” But this is malpractice. Even expert logicians, steeped in the field, might well find Pat’s suggestion surprising. But they are in a position to check and evaluate the various ways of breaking the basic sorting. The general public isn’t. The connotation of Hayes’ line is that the sorting is piddling and rather unimportant. That’s so not true.) The suggestion to the logical novice that this is an easy-peasy thing that has been relaxed many times without problem (and what ho! see OWL Full and Common Logic) is really off the charts. At the moment, it’s still unknown whether OWL Full is consistent. That’s not a trivial matter.