I majored in experimental psychology, and was taught that the 'frequentist' model was the only model. Large sample size, random assignment, double-blind controls, tests for significance: these were the only conceivable means to discover the truth, or something close to it. Nobody said boo about Bayes. At some point along the line, probably within the last 10 years, I realized something was missing. First of all, peer-reviewed, random-assignment, frequentist studies are often wrong. How often? Probably 15% of the time even for the best of them, according to THE ECONOMIST (subscription required):
THEODORE STURGEON, an American science-fiction writer, once observed that “95% of everything is crap”. John Ioannidis, a Greek epidemiologist, would not go that far. His benchmark is 50%. But that figure, he thinks, is a fair estimate of the proportion of scientific papers that eventually turn out to be wrong. Dr Ioannidis, who works at the University of Ioannina, in northern Greece, makes his claim in PLoS Medicine, an online journal published by the Public Library of Science.

His thesis that many scientific papers come to false conclusions is not new. Science is a Darwinian process that proceeds as much by refutation as by publication. But until recently no one has tried to quantify the matter.

Dr Ioannidis began by looking at specific studies, in a paper published in the Journal of the American Medical Association in July. He examined 49 research articles printed in widely read medical journals between 1990 and 2003. Each of these articles had been cited by other scientists in their own papers 1,000 times or more. However, 14 of them—almost a third—were later refuted by other work. Some of the refuted studies looked into whether hormone-replacement therapy was safe for women (it was, then it wasn't), whether vitamin E increased coronary health (it did, then it didn't), and whether stents are more effective than balloon angioplasty for coronary-artery disease (they are, but not nearly as much as was thought).

[snip]

...he concluded that even a large, well-designed study with little researcher bias has only an 85% chance of being right. An underpowered, poorly performed drug trial with researcher bias has but a 17% chance of producing true conclusions. Overall, more than half of all published research is probably wrong.
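The arithmetic behind figures like these is straightforward Bayesian bookkeeping. Here is a minimal sketch in Python of the standard positive-predictive-value calculation; the parameter values are my own illustrative assumptions, not Dr Ioannidis' exact inputs.

```python
def ppv(prior_odds, power, alpha):
    """Positive predictive value: the probability that a 'significant'
    finding is actually true, given the pre-study odds that the tested
    hypothesis is true (prior_odds), the study's statistical power,
    and its significance threshold (alpha)."""
    true_positives = power * prior_odds   # real effects correctly detected
    false_positives = alpha * 1.0         # null effects wrongly 'detected'
    return true_positives / (true_positives + false_positives)

# A well-powered study of a plausible hypothesis (illustrative values):
print(ppv(prior_odds=1.0, power=0.8, alpha=0.05))   # ~0.94

# A long-shot hypothesis with a weak, underpowered design:
print(ppv(prior_odds=0.1, power=0.2, alpha=0.05))   # ~0.29
```

Researcher bias pushes both numbers lower still. The point survives the toy numbers: a significant p-value alone doesn't tell you the probability a finding is true — the prior odds on the hypothesis matter.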
Jakob Nielsen says to use bullets, so I'm using bullets

- What are the odds of any given study being right?
med school

Apparently, Dr. Ioannidis' exercise has been a tradition in med schools for some time. Two physicians, who attended different medical schools, have told me that when they started med school their professors said that half of the articles published in JAMA that year would prove to be wrong by the time they graduated. These professors had never conducted a study, so how did they come up with a figure of 50-50? I'd say they used Bayesian reasoning. This is an example of the human mind using Bayesian analysis to arrive at a correct conclusion -- the same conclusion a frequentist study like Dr. Ioannidis' will reach (assuming his study is correct, of course).
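The professors' 50-50 guess is the kind of conclusion a conjugate Bayesian update reaches from very little data. A sketch, using the 14-of-49 refutation count from the article quoted above; the flat Beta(1, 1) prior is an illustrative assumption:

```python
def beta_update(alpha, beta, wrong, total):
    """Conjugate Beta-Binomial update: start from a Beta(alpha, beta)
    belief about the fraction of findings that are wrong, observe `wrong`
    refuted findings out of `total`, and return the updated parameters
    plus the posterior mean."""
    alpha += wrong
    beta += total - wrong
    return alpha, beta, alpha / (alpha + beta)

# Flat prior, then observe 14 refuted out of 49 highly cited papers:
a, b, mean = beta_update(1, 1, wrong=14, total=49)
print(round(mean, 3))   # posterior mean ≈ 0.294
```

Fifty papers' worth of evidence is a tiny sample by frequentist standards, but with a sensible prior it's enough to move a belief a long way.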
when you don't need a large sample

Carolyn linked to an ECONOMIST article on research showing the human mind probably uses Bayesian reasoning.
...the Bayesian capacity to draw strong inferences from sparse data could be crucial to the way the mind perceives the world, plans actions, comprehends and learns language, reasons from correlation to causation, and even understands the goals and beliefs of other minds.

[snip]

The key to successful Bayesian reasoning is not in having an extensive, unbiased sample, which is the eternal worry of frequentists, but rather in having an appropriate “prior”, as it is known to the cognoscenti. This prior is an assumption about the way the world works—in essence, a hypothesis about reality—that can be expressed as a mathematical probability distribution of the frequency with which events of a particular magnitude happen.

The best known of these probability distributions is the “normal”, or Gaussian distribution. This has a curve similar to the cross-section of a bell, with events of middling magnitude being common, and those of small and large magnitude rare, so it is sometimes known by a third name, the bell-curve distribution. But there are also the Poisson distribution, the Erlang distribution, the power-law distribution and many even weirder ones that are not the consequence of simple mathematical equations (or, at least, of equations that mathematicians regard as simple).

With the correct prior, even a single piece of data can be used to make meaningful Bayesian predictions. By contrast frequentists, though they deal with the same probability distributions as Bayesians, make fewer prior assumptions about the distribution that applies in any particular situation. Frequentism is thus a more robust approach, but one that is not well suited to making decisions on the basis of limited information—which is something that people have to do all the time.
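The "correct prior plus a single piece of data" claim can be sketched with the textbook conjugate update for a Gaussian prior and Gaussian measurement noise; the numbers below are made up for illustration:

```python
def normal_update(prior_mean, prior_var, obs, obs_var):
    """Posterior for an unknown quantity with a Gaussian prior, after one
    noisy Gaussian observation: a precision-weighted average of the prior
    mean and the observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Prior belief centered at 0.0, one observation of 2.0 with equal noise:
print(normal_update(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=1.0))
# -> (1.0, 0.5): a single data point moves the estimate halfway toward
#    the observation, and the posterior is already less uncertain.
```

One observation would tell a frequentist almost nothing; with a prior, it yields a usable estimate and an honest measure of the remaining uncertainty.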
the cognitive unconscious knows what it's talking about

I believe it. As I was saying, at some point I realized that:

a) published, peer-reviewed research is frequently wrong, and
b) personal opinions, gut feelings, and intuition are frequently right.
At least, my own personal opinions & gut feelings have proved correct often enough that I never dismiss personal opinion & gut feeling — my own or other people's — out of hand. But until I read this article, I didn't know why, when, or how. I would have a 'feeling' about something, or an idea, and I would have no clue whether it was likely to be right. Then, after a while, I accumulated so much experience in certain realms that I began to trust my judgment. For example, after a few years working with medication for Jimmy, I began to have a sense of what we ought to try with him. Often, I was right.

I had meant to write a post about this back when we were talking about 'partnering' with teachers. I've had numerous partnerships with Jimmy's doctors. I would read a piece of research that made sense, bring it in to our doctor, and our doctor would either instantly agree that it made sense, or would pursue it further. Often he or she decided to try the medication I thought should be tried.

There are no medications approved for autism; all prescribing is done off-label. When we began working with meds, the standing belief was that medication 'did not treat autism.' The most you could hope for was to ameliorate a couple of symptoms, like hyperactivity and insomnia, and these symptoms were considered not to be 'core.' I rejected that line of reasoning years before the profession did, and I was right.

Now Ed has developed tremendous 'Bayesian' expertise with meds. He's been supervising medication for the past 10 years, since the twins were born, and he knows what he's doing. We're working with one of the best psychiatrists in the world (IMO), and Ed can frequently predict what Dr. Hollander will do next. That's the cognitive unconscious at work. Research on the cognitive unconscious, which Arthur Reber surveys in his book Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious, shows that it is startlingly accurate.
Since reading Reber, I know that the cognitive unconscious — my own or others' — knows what it's talking about at least some of the time. My problem has been figuring out when. There's probably a simple answer to that. According to Robin Hogarth, who wrote Educating Intuition, intuition — the cognitive unconscious — is likely to be right in realms that offer feedback.

A weatherman gets feedback. On Monday he predicts rain. On Tuesday, either it rains or it doesn't. That's feedback. An experienced weatherman is going to develop good intuition.

A constructivist teacher who's not using formative assessment is getting very little feedback. In September he predicts that kids in the TRAILBLAZERS curriculum will learn their math facts without drill. In May he assumes they have. That's not feedback. This is why I don't listen to the casual observations and assertions of constructivists. They haven't had enough feedback to develop good intuition. In my experience, at least, a constructivist talking education is often talking belief, not experience.
rule of thumb

That last sentence gave me a new rule of thumb: I tend to trust people who sound as if they're speaking from direct experience. I don't trust people who sound as if they're restating educational philosophy. This is the glaring difference between the writings of an Engelmann or a John Taylor Gatto and a generic constructivist. Engelmann's work is filled with experience. I don't have to perform a post-hoc analysis of the statistical techniques used by Project Follow Through to conclude that Engelmann knows what he's talking about. He's got a Bayesian brain, I've got a Bayesian brain, and 95% of the time he's talking about his experience, not his philosophy.
blookis for Bayes

Which brings me to Kitchen Table Math. A blooki is the perfect venue for Bayesian analysis. I remember, back in the first couple of months, someone left a personal narrative and then apologized for having done so, saying that his experience was just one example, nothing more. I answered that a major reason I started writing Kitchen Table Math in the first place was that I wanted to learn about other people's personal experiences. I'm not looking for the 'large sample' of a frequentist study. I'm looking for the personal experience & observations of people with good priors. That's what I've been getting ever since we started!
Bayes statistics & false positives
does human mind use Bayesian reasoning?
Bayesian reasoning, intuition, & the cognitive unconscious
most bell curves have thick tails
ECONOMIST explanation Bayesian statistics
Bayesian certainty scale
-- CatherineJohnson - 09 Jan 2006