Saturday, January 30, 2010

desperately seeking ideal speech situation

Sorry for not following anyone. I'm still trying to lear [sic] how to use this tool. JH

Yes, that JH is Jürgen Habermas.

From the indispensable Marginal Revolution. The rest here.

[Later]

Jonathan Stray reports:

Over the last several days there has been considerable hubbub around the notion that pioneering media theorist Jürgen Habermas might have signed up for Twitter as @JHabermas. This would be “important if true”, as Jay Rosen put it. Intrigued, I tracked him down through the University of Frankfurt. I succeeded in getting him on the phone at his home in Starnberg, and asked him if he was on Twitter. He said,

No, no, no. This is somebody else. This is a mis-use of my name.

He added that “my email address is not publicly available,” which suggests that perhaps he didn’t quite understand what I was getting at.

The rest here.


Sunday, January 3, 2010

Avanti

2010. Time to move over to WordPress. Yes.

But before we go, here's Cosma Shalizi on the Neyman-Pearson lemma and William James:

When last we saw the Neyman-Pearson lemma, we were looking at how to tell whether a data set x was signal or noise, assuming that we know the statistical distributions of noise (call it p) and the distribution of signals (q). There are two kinds of mistake we can make here: a false alarm, saying "signal" when x is really noise, and a miss, saying "noise" when x is really signal. What Neyman and Pearson showed is that if we fix on a false alarm rate we can live with (a probability of mistaking noise for signal; the "significance level"), there is a unique optimal test which minimizes the probability of misses --- which maximizes the power to detect signal when it is present. This is the likelihood ratio test, where we say "signal" if and only if q(x)/p(x) exceeds a certain threshold picked to control the false alarm rate.
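The likelihood ratio test Shalizi describes can be sketched in a few lines. This is a minimal illustration, not from the post: the noise and signal distributions are assumed to be N(0,1) and N(1,1) Gaussians, and the names (`lrt_decision`, `gaussian_density`) are mine.

```python
import math

# Assumed densities for illustration: noise ~ N(0,1), signal ~ N(1,1).
def gaussian_density(mu):
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

p = gaussian_density(0.0)  # noise distribution
q = gaussian_density(1.0)  # signal distribution

def lrt_decision(x, p, q, threshold):
    """Neyman-Pearson rule: say "signal" iff the likelihood ratio
    q(x)/p(x) exceeds a threshold picked to fix the false-alarm rate."""
    return "signal" if q(x) / p(x) > threshold else "noise"

# For these Gaussians q(x)/p(x) = exp(x - 1/2), which is monotone in x,
# so the ratio threshold exp(1.645 - 0.5) is equivalent to "x > 1.645",
# pinning the false-alarm rate at 5% under the noise distribution.
t = math.exp(1.645 - 0.5)
print(lrt_decision(2.0, p, q, t))  # ratio ≈ 4.48 > t ≈ 3.14 → "signal"
print(lrt_decision(0.3, p, q, t))  # ratio ≈ 0.82 < t → "noise"
```

The monotonicity is what makes the lemma practical here: any choice of false-alarm rate translates directly into a cutoff on x itself.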


CRS goes on to elaborate, then gets to William James and the will to believe:

Let's step back a little bit to consider the broader picture here. We have a question about what the world is like --- which of several conceivable hypotheses is true. Some hypotheses are ruled out on a priori grounds, others because they are incompatible with evidence, but that still leaves more than one admissible hypothesis, and the evidence we have does not conclusively favor any of them. Nonetheless, we must choose one hypothesis for purposes of action; at the very least we will act as though one of them is true. But we may err just as much through rejecting a truth as through accepting a falsehood. The two errors are symmetric, but they are not the same error. In this situation, we are advised to pick a hypothesis based, in part, on which error has graver consequences.
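The "two errors, not the same error" point can be made numerical. A small sketch, again using the assumed N(0,1) noise and N(1,1) signal distributions (my example, not Shalizi's): as the decision threshold moves, the false-alarm rate and the miss rate trade off against each other, and only the gravity of the consequences tells you where to sit on the curve.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Assumed setup: noise ~ N(0,1), signal ~ N(1,1); declare "signal" when x > c.
def false_alarm(c):
    """Type I error: noise mistaken for signal."""
    return 1.0 - Phi(c)

def miss(c):
    """Type II error: signal mistaken for noise."""
    return Phi(c - 1.0)

# Raising c suppresses false alarms but inflates misses, and vice versa.
for c in (0.5, 1.0, 1.645):
    print(f"c={c:5.3f}  false alarm={false_alarm(c):.3f}  miss={miss(c):.3f}")
```

At the conventional 5% significance level (c ≈ 1.645) this particular test misses roughly three signals in four: avoiding one error hardly guarantees avoiding the other.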

This is precisely the set-up of William James's "The Will to Believe". (It's easily accessible online, as are summaries and interpretations; for instance, an application to current controversies by Jessa Crispin.) In particular, James lays great stress on the fact that what statisticians now call Type I and Type II errors are both errors:

There are two ways of looking at our duty in the matter of opinion, — ways entirely different, and yet ways about whose difference the theory of knowledge seems hitherto to have shown very little concern. We must know the truth; and we must avoid error, — these are our first and great commandments as would-be knowers; but they are not two ways of stating an identical commandment, they are two separable laws. Although it may indeed happen that when we believe the truth A, we escape as an incidental consequence from believing the falsehood B, it hardly ever happens that by merely disbelieving B we necessarily believe A. We may in escaping B fall into believing other falsehoods, C or D, just as bad as B; or we may escape B by not believing anything at all, not even A.

Know the truth! Shun error! 2010, Excelsior!

The whole thing here.