
# Entries from StatisticsTeaching

FalsePositives 12 Sep 2005 - 03:15 CatherineJohnson

A couple of days ago, Carolyn explained the difference between frequentist statistics and Bayesian. She's a Bayesian, she said.

Well, that explained a lot, because it turns out I'm a Bayesian, too. I just didn't know it. Obviously, that's why Carolyn and I constantly find ourselves traveling the exact same thought path, even though we've never met, and didn't know each other until a year ago.

Of course, a real Bayesian (that would be a Bayesian who knew what she was doing, which would not be me) would probably not conclude that the reason she likes a person well enough to start a vast time-gobbling math-ed web site with her is that you both subscribe to the same school of statistical thought. I'll have to ask Carolyn.

I'm a Bayesian aspirant.

I'm having quite a little midlife run of Self-Discovery here, I must say. First I find out I'm Scots-Irish; next I'm hearing I'm a Bayesian.

I just hope no one's gonna tell me I'm adopted.

### I have a question

My question concerns a passage in a terrific book called What the Numbers Say: A Field Guide to Mastering Our Numerical World by Derrick Niederman & David Boyum. Boyum, it turns out, majored in applied mathematics at Harvard--I didn't know there was such a thing as a major in applied mathematics at Harvard!

Or anywhere else, for that matter.

I wish I'd known that when I was 17.

'Bayes Watch' is Niederman & Boyum's title for this passage:

Years ago a study asked the following question of students and doctors at Harvard Medical School:

If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 percent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?

Ed and I both understand the answer now (neither of us got it right), but we still have a question about the precise calculations. (Don't hit this link unless you want to see the answer.)

### update

I've just checked Niederman & Boyum. They do not specify a zero rate for false negatives. They say nothing about false negatives one way or the other. (Neither does John Kay in false positives, part 2, assuming I'm understanding him correctly).
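With the usual (usually unstated) assumption of a zero false-negative rate, the arithmetic works out like this. This is my own sketch of the standard Bayes' rule calculation, not anything from Niederman & Boyum:

```python
# Bayes' rule for the Harvard Med School question, assuming
# the test never misses a true case (false-negative rate = 0).
prevalence = 1 / 1000          # 1 person in 1,000 has the disease
false_positive_rate = 0.05     # 5% of healthy people test positive
sensitivity = 1.0              # assumed: no false negatives

p_positive = (prevalence * sensitivity
              + (1 - prevalence) * false_positive_rate)
p_disease_given_positive = prevalence * sensitivity / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.3f}")  # ~0.02
```

In other words, a positive result means roughly a 2 percent chance of disease, not 95 percent. The famous finding of the study was that most of the Harvard respondents said 95 percent.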

### Bayes & God

I actually bought this book a couple of years ago, though I haven't read it yet:

I believe it's intended to be a Bayesian proof of the existence of God, although I don't know how the word 'proof' is used either in the book or in the context of Bayesian statistics.

low birth weight paradox (& Monty Hall)
Monty Hall, part 2
Monty Hall, part 3
false positives
false positives, part 2
Doug Sundseth on Monty Hall
John Kay: We are likely to get probability wrong (subscription only)
Monty Hall diagram from Curious Incident
Bayes & the human mind
Bayesian reasoning, intuition, & the cognitive unconscious
most bell curves have thick tails
ECONOMIST explanation Bayesian statistics
Bayesian certainty scale
probability question from Saxon 8/7


FalsePositivesPart2 21 Dec 2005 - 15:31 CatherineJohnson

Another version of the False Positives challenge. This one ran in John Kay's column in the Financial Times yesterday. (Probably only available to subscribers.)

...intuition does not correspond to the mathematics of probability. One person in a 1,000 suffers from a rare disease. A friend has just tested positive for this illness and the test gives a correct diagnosis in 99 per cent of cases. How likely is it that your friend has the disease? Not at all likely. In random groups of 1,000 people an average of 10 would display false positives and only one would be correctly diagnosed with the disease. But most people, including most doctors, think otherwise. “The human mind,” said science writer Stephen Jay Gould, “did not evolve to deal with probabilities.”

Hmmm. Let's see. This problem does give us false negatives, right???

OK, let me think.

[pause]

Good grief. Not only can the human mind not intuit Bayesian probability; apparently the human mind equally cannot produce consistently lucid prose. (Nothing wrong with Mr. Kay's lucidity on a normal day.)

Kay's example, too, appears to assume a false negative rate of 0.

As far as I can tell.
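Reading Kay the same way, with a 99-percent-accurate test and no false negatives, his "ten false positives, one true positive" numbers fall out directly. A sketch of mine, not Kay's:

```python
# John Kay's version: prevalence 1/1000, test correct 99% of the time,
# false-negative rate assumed to be 0 (as in the Niederman & Boyum version).
group = 1000
true_positives = group * (1 / 1000) * 1.0        # 1 sick person, always caught
false_positives = group * (999 / 1000) * 0.01    # ~10 healthy people flagged

p_disease = true_positives / (true_positives + false_positives)
print(f"true positives: {true_positives:.0f}, false positives: {false_positives:.1f}")
print(f"P(disease | positive) = {p_disease:.2f}")  # ~0.09
```

One true positive out of roughly eleven positive results: "not at all likely," just as Kay says.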

### update

This is funny. I was skimming Amazon reviews of Stephen Jay Gould's Mismeasure of Man, and I found this:

As Oxford academician Richard Dawkins says (see Bryson, "A Short History of Nearly Everything", pp. 330-332) "If only Stephen Gould could think as clearly as he writes!"

It's a Core Principle in the Writing Biz (& definitely in the Writing Instruction Biz) that you can't write clearly without thinking clearly. (True in my experience; that's for sure.)


MontyHallPart2 17 Aug 2005 - 21:49 CatherineJohnson

Here is Kay on the Monty Hall problem:

The Monty Hall problem is named after the host of a 1970s quiz show, Let’s Make a Deal. The successful contestant chooses from three closed boxes. One contains the keys to a car and the other two a picture of a goat. The choice made, Monty opens one of the other doors to reveal – a goat. He taunts the guest to change the decision. Should the guest switch to the other closed box?

When the solution was published in an American magazine, thousands of readers – including professors of statistics – alleged an error. Paul Erdős, the great mathematician, reputedly died still musing on the Monty Hall problem. But the answer is, indeed, yes: you should change.

I'm happy to hear that Paul Erdos stumbled over Monty Hall, seeing as how I still don't understand it.


DougSundsethOnMontyHall 18 Aug 2005 - 19:52 CatherineJohnson

Doug Sundseth (welcome, Doug!) posted an explanation of the Monty Hall problem, which I have never been able to understand:

It took me a long time to understand it, too.

The model that finally worked for me was something like this:

You have a 1/3 chance of being right to start with, and a 2/3 chance of being wrong. If you guessed wrong originally, Monte's pick will unambiguously determine the correct choice (he never picks the good door).

There are nine pairs of (your pick):(correct pick), A:A, A:B, A:C, B:A, B:B, B:C, C:A, C:B, and C:C. In three of those, you picked correctly, Monte's information isn't useful, and you shouldn't switch. In the other six, you picked incorrectly and Monte told you which of the other picks was correct; thus you should switch.

If you never switch, you have three chances in nine of being correct. If you always switch, you have six chances in nine of being correct and three chances in nine of switching off the correct choice.

Note that the latter possibility (choosing correctly at random then switching to an incorrect choice) may be more psychologically painful than just guessing wrong and not switching. This may have an undue effect on the choices of contestants.

Doug, thank you!

OK, I've just sat down and quickly thought this through.

[pause]

On my initial reading, I think it makes sense to me. What's particularly useful, for me, is the information that, yes, you could already have chosen the correct door, in which case, if you change your choice, you have moved to the incorrect door.

I think people who haven't studied probability get hung up on the 'what if I'm already right' issue.....and then, when math-savvy people try to explain Monty Hall without addressing, as Doug has, the issue foremost in their minds, the explanation doesn't 'take.'
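Doug's nine-pair enumeration can also be checked by brute force. Here's a minimal simulation (my own sketch, not Doug's; the door labels are arbitrary):

```python
import random

def play(switch, trials=100_000):
    """Play Monty Hall many times; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Monty opens a door that is neither your pick nor the car
        monty = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # switch to the one door that is neither your pick nor Monty's
            pick = next(d for d in doors if d != pick and d != monty)
        wins += (pick == car)
    return wins / trials

print(f"never switch:  {play(switch=False):.2f}")  # ~0.33 (3 chances in 9)
print(f"always switch: {play(switch=True):.2f}")   # ~0.67 (6 chances in 9)
```

The simulated fractions land on Doug's three-in-nine and six-in-nine figures.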

### metacognition again

I mentioned awhile back that metacognition is a huge issue amongst constructivists, both the radical kind and the peer-reviewed, cognitive-science kind found in psychology departments.

One of the main reasons for thinking about metacognition as you teach is that students may very well bring quite wrong ideas to the classroom, which they then 'build upon' as they acquire new knowledge. There's a lovely example of this in the National Research Council's book on learning. Many children, when told that the earth is round, picture it as a disk, not a sphere. (more t/k--I need to go take a look at these pages.)

In any case, Doug has addressed an aspect of metacognition that I haven't seen mentioned, which is to tell a student what it is they already know that's right, but incomplete.

I was having the same experience yesterday, puzzling through the 'false positives' problem. The objection both Ed and I were bumping our heads against--if it's 1 in 1000 and 50 in 1000, how can you ever have 1000???--was right; we just weren't seeing what to do about it.

I wonder how often it's the case that an incomplete right answer is the problem, as opposed to a Total Crackpot Misconception that has to be stomped out, obliterated, and disappeared without a trace before a person can learn Thing One about math? (And does this wording give you a feel for the challenge involved in attempting to re-learn elementary math in midlife?)

Or, as Steve H says, A little knowledge goes way too far.

### a new question

This sentence confuses me:

You have a 1/3 chance of being right to start with, and a 2/3 chance of being wrong. If you guessed wrong originally, Monte's pick will unambiguously determine the correct choice (he never picks the good door).

[pause]

hmmm. Interesting. Reading this again, it makes sense.

I'm going to take a paper and pencil break, and see what I come up with working through Doug's explanation myself.

I love it!

### back again

OK, paper and pencil session complete.

I do understand this explanation, with one question: the funky, counterintuitive odds are created by the fact that Monty always opens the wrong door, correct?

That's why you shouldn't go with the 50-50 answer everyone automatically does go with--yes?

Carolyn was telling me the other day that a lot of Bayesian statistical results are counterintuitive (hey! just like the Bayesian proof of the existence of God!).

That's for sure.

### other explanations

Here's a strictly mathematical explanation that will work for some people (and actually works OK for me....although frankly Doug's list helps move me a bit towards 'getting' the Monty Hall problem at a more intuitive level...):

After you pick but before you open any doors, there's a 1/3 chance that you've picked correctly, and a 2/3 chance that you've picked wrong. Assuming that the host can open doors, but can not move prizes, nothing that the host does will change the probabilities described above.

Now the host opens one of the doors, and there's nothing behind it. There's still a 1/3 chance that you've picked correctly, and a 2/3 chance that you've picked wrong. This means that the remaining door has a 2/3 chance of being correct.

This explanation helps me formulate exactly what it is that goes wrong for people: the chance to change your pick seems like a second event, with a second set of probabilities attached.

question: So how often does this happen in life?

How often do we perceive second events where we ought to perceive a continuation of the first?

### update: an intuitive approach to Monty Hall that might work

I'm going to have to live with the Monty Hall problem for awhile....

But here's an interesting approach to rendering the answer intuitively correct:

It was a while ago that I accepted the idea that switching doors was the correct play every time because it improves your chances of winning, but I had trouble convincing my friends that it was the correct answer. However, a friend of mine just came up with this explanation that I think should really make it obvious.

Let's say that you choose your door (out of 3, of course). Then, without showing what's behind any of the doors, Monty says you can stick with your first choice or you can have both of the two other doors. I think most everyone would then take the two doors collectively.

Unfortunately, I don't think this works for me...

### update: Keith Devlin's better version

OK, I think what the person above was trying to say was this:

...one last attempt at an explanation. Back to the three door version now. When Monty has opened one of the three doors and shown you there is no prize behind, and then offers you the opportunity to switch, he is in effect offering you a TWO-FOR-ONE switch. You originally picked door A. He is now saying "Would you like to swap door A for TWO doors, B and C ... Oh, and by the way, before you make this two-for-one swap I'll open one of those two doors for you (one without a prize behind it)."

I agree. Anyone told at the outset that he can pick one door or he can pick two doors would pick the two.

### I give up

from Keith Devlin:

... suppose you are playing a seven door version of the game. You choose three doors. Monty now opens three of the remaining doors to show you that there is no prize behind it. He then says, "Would you like to stick with the three doors you have chosen, or would you prefer to swap them for the one other door I have not opened?" What do you do? Do you stick with your three doors or do you make the 3 for 1 swap he is offering?

OK, I'm switching doors.

But I'm doing so purely on the basis of 4/7 being greater than 3/7. Nothing common sense about it.

Of course, given that my family motto is no common sense-y, it's easy to dump my first pick and jump to Door Number Seven!
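Devlin's seven-door game simulates just as easily. This is my own sketch, assuming Monty always opens empty doors from outside your three picks:

```python
import random

def seven_door(switch, trials=100_000):
    """Devlin's game: pick 3 of 7 doors; Monty opens 3 empty doors from
    the other 4; you may then swap your 3 doors for the last closed one."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(7)
        picks = random.sample(range(7), 3)
        outside = [d for d in range(7) if d not in picks]
        empties = [d for d in outside if d != prize]
        opened = random.sample(empties, 3)           # Monty opens 3 empty doors
        closed = next(d for d in outside if d not in opened)
        final = [closed] if switch else picks
        wins += (prize in final)
    return wins / trials

print(f"stick with your 3 doors: {seven_door(False):.2f}")  # ~0.43 (3/7)
print(f"swap for the 1 door:     {seven_door(True):.2f}")   # ~0.57 (4/7)
```

Three-for-one sounds like a terrible trade, and it's still the right one: 4/7 beats 3/7.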


AlanGreenspanOnRisingInequality 21 Aug 2005 - 02:25 CatherineJohnson

I'm going to start posting this email from NYC Math Forum at NYC HOLD once a month:

In the matter of preaching to the choir, C-Span has a video of Alan Greenspan's testimony to the House Joint Economic Committee. There is a fascinating exchange between Greenspan and Senator Reed about the divergence in income between skilled/supervisory workers and unskilled workers. They agree this is a very serious problem. At one point, Reed asks what short term policies can be implemented to "enhance the incomes of most of the workers of America."

I transcribed about two minutes of testimony which you can hear for yourselves, starting around minute 34:00 of the video clip.

Greenspan:

Well, Senator, I don't think there are short term policies, other than the ones we typically use to assuage those who fall into unemployment or policies in the tax area in which we endeavor to redistribute income.

The basic problem, as we have discussed previously, as best I can judge, goes back to the education system. We do not seem to be pushing through our schools our student body at a sufficiently quick rate to create a sufficient supply of skilled workers to meet the ever-rising demand for skilled workers which means that wage rates are accelerating. But the very people who have not been able to move up into the education categories where they become skilled overload the lesser skills market and cause wages to be moving up well below average.

The consequence, of course, is an increased concentration of income. And, as I have often said, this is not the type of thing which a capitalist democratic society can really accept without addressing. And as far as I am concerned, the cause is very largely education.

It is not the children because at the 4th grade they are above the world average. Whatever it is we do between the 4th grade and the 12th grade is obviously not as good as what our competitors abroad do because our children fall below, well below, the median in the world, which suggests that we have to do something to prevent that from happening and I suspect, were we able to do that, we will indeed move children through high school, into college, and beyond in adequate numbers. As indeed we did in the early post WW II period, such that we do not get the divergence in income which is so pronounced in the data we currently looked at.

Rising inequality has been Topic A for months now (make it years), with the WALL STREET JOURNAL & the NEW YORK TIMES both running major several-part series on the subject. Rising inequality, along with declining social mobility.

Well, what is the reason for rising inequality and declining social mobility?

Is it just that the rich get richer? (Which seems to be the thesis of everything I read, but don't go by me.)

I'm with Alan Greenspan. It's basic supply and demand. If you don't have enough highly educated people to fill jobs requiring highly educated people, those wages go up.

If you have too many highly uneducated people to fill jobs where advanced education isn't a requirement, those wages go down.

Now I'm going to indulge in some psychologizing, which generally speaking I don't approve of.

I think the reason journalists don't bring up this possibility is that journalists, being highly educated, and NOT being highly educated when it comes to math & economics (I speak from experience), just naturally tend to assume that of course the wage gap between them and the custodial staff is widening; what journalists do is lots more valuable. (I'm only dinging journalists here because I'm talking about journalism. I'll hazard a guess that just about every highly educated person other than Alan Greenspan thinks the same thing.)

Alan Greenspan on rising inequality
rising inequality, part 2
rising inequality, part 3
median income families UCSC students
another statistics question
channeling the Wall Street Journal
Financial Times on US college costs
Economist on US higher ed
The Economist on rising inequality in universities

StatisticsHelpPart2 22 Aug 2005 - 17:48 CatherineJohnson

It's obvious I'm going to have to take courses in statistics & in economics after I finish re-learning elementary & high school math. Otherwise I'm going to end up like Cliff Clavin.

But until then, I'm going to depend on you guys.

Here is my question.

Assuming I've pulled the relevant data on rising median household incomes for selective public universities, what is the proper way to compare these figures?

Here are the figures I'm using (dollars inflation-adjusted to 2002):

1990
median household income: \$40,000
median household income selective public universities: \$75,800

2002
median household income = \$42,000
median household income selective public universities: \$82,500

(mortifying confession: I used an inflation calculator for prices, not wages. At the moment I can't work my way through whether that does or does not make a difference.)

I've done the simple comparison, dividing \$75,800 by \$40,000 and \$82,500 by \$42,000.

That shows that the median family at selective public universities in 2002 is more affluent than the median family at selective public universities in 1990....but how would a statistician show this?
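For what it's worth, here is the arithmetic both ways: the simple ratio comparison, plus the growth-rate comparison a statistician might start with. This is a sketch of one reasonable framing, not an authoritative method:

```python
# Median household incomes, all figures inflation-adjusted to 2002 dollars.
overall = {1990: 40_000, 2002: 42_000}
selective = {1990: 75_800, 2002: 82_500}

# Ratio of selective-university families to all families, each year.
for year in (1990, 2002):
    ratio = selective[year] / overall[year]
    print(f"{year}: selective / overall = {ratio:.2f}")
# The ratio rises from roughly 1.9 to roughly 1.96: the gap widened.

# Real growth over the period for each group.
growth_overall = (overall[2002] / overall[1990] - 1) * 100
growth_selective = (selective[2002] / selective[1990] - 1) * 100
print(f"overall grew {growth_overall:.1f}%, selective grew {growth_selective:.1f}%")
```

Real median income grew about 5% overall but almost 9% for families at selective public universities, which says the same thing the ratios do: affluent families pulled further ahead.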

MontyHallPart3 28 Aug 2005 - 02:39 CatherineJohnson

A ktm reader (I'm sorry--I've forgotten who it was) mentioned that Mark Haddon has a nice illustration of the Monty Hall problem in his novel The Curious Incident of the Dog in the Night-Time.

He does, and it's terrific:

### update

Wow. Carolyn's search engines are fantastic.

I searched Comments & discovered that it was Greta Frohbieter who left the tip about Curious Incident.

Thanks, Greta!

### update update

One of the things I like about this chart is that you can see that you are 'still in the same event' from start to finish.

The reason people think the odds change from 1 in 3 to 1 in 2 is that they see the second choice (stick or change) as a second event, with a second set of odds.

This visual representation makes you feel that the event is ongoing. You haven't changed odds because you haven't changed events.

It's the Unbearable Seamlessness of Being.


BlookisForBayes 09 Jan 2006 - 23:29 CatherineJohnson

I majored in experimental psychology, and was taught that the 'frequentist' model was the only model.

Large sample size, random assignment, double-blind controls, tests for significance: these were the only conceivable means to discover the truth or something close to it.

At some point along the line, probably within the last 10 years, I realized something was missing.

First of all, peer-reviewed, random-assignment, frequentist studies are often wrong.

How often?

Probably 15% of the time: (subscription required)

THEODORE STURGEON, an American science-fiction writer, once observed that “95% of everything is crap”. John Ioannidis, a Greek epidemiologist, would not go that far. His benchmark is 50%. But that figure, he thinks, is a fair estimate of the proportion of scientific papers that eventually turn out to be wrong.

Dr Ioannidis, who works at the University of Ioannina, in northern Greece, makes his claim in PLoS Medicine, an online journal published by the Public Library of Science. His thesis that many scientific papers come to false conclusions is not new. Science is a Darwinian process that proceeds as much by refutation as by publication. But until recently no one has tried to quantify the matter.

Dr Ioannidis began by looking at specific studies, in a paper published in the Journal of the American Medical Association in July. He examined 49 research articles printed in widely read medical journals between 1990 and 2003. Each of these articles had been cited by other scientists in their own papers 1,000 times or more. However, 14 of them—almost a third—were later refuted by other work. Some of the refuted studies looked into whether hormone-replacement therapy was safe for women (it was, then it wasn't), whether vitamin E increased coronary health (it did, then it didn't), and whether stents are more effective than balloon angioplasty for coronary-artery disease (they are, but not nearly as much as was thought).

[snip]

...he concluded that even a large, well-designed study with little researcher bias has only an 85% chance of being right. An underpowered, poorly performed drug trial with researcher bias has but a 17% chance of producing true conclusions. Overall, more than half of all published research is probably wrong.

Jakob Nielsen says to use bullets, so I'm using bullets

What are the odds of any given study being right?

• large, well-designed study with little researcher bias: 85% chance of being right

• underpowered, poorly performed drug trial with researcher bias: 17% chance of being right

• all published research, taken as a whole: 50% chance of being right

med school

Apparently, Dr. Ioannidis' exercise has been a tradition in med schools for some time.

Two physicians, who attended different medical schools, have told me that when they started med school their professors said that half of the articles published in JAMA that year would prove to be wrong by the time they graduated.

These professors had never conducted a study.

So how did they come up with a figure of 50-50?

I'd say they used Bayesian reasoning.

This is an example of the human mind using Bayesian analysis to arrive at a correct conclusion -- the same conclusion a frequentist study like Ioannidis' will reach (assuming his study is correct, of course).

when you don't need a large sample

Carolyn linked to an ECONOMIST article on research showing the human mind probably uses Bayesian reasoning.

...the Bayesian capacity to draw strong inferences from sparse data could be crucial to the way the mind perceives the world, plans actions, comprehends and learns language, reasons from correlation to causation, and even understands the goals and beliefs of other minds. [snip]

The key to successful Bayesian reasoning is not in having an extensive, unbiased sample, which is the eternal worry of frequentists, but rather in having an appropriate “prior”, as it is known to the cognoscenti. This prior is an assumption about the way the world works—in essence, a hypothesis about reality—that can be expressed as a mathematical probability distribution of the frequency with which events of a particular magnitude happen.

The best known of these probability distributions is the “normal”, or Gaussian distribution. This has a curve similar to the cross-section of a bell, with events of middling magnitude being common, and those of small and large magnitude rare, so it is sometimes known by a third name, the bell-curve distribution. But there are also the Poisson distribution, the Erlang distribution, the power-law distribution and many even weirder ones that are not the consequence of simple mathematical equations (or, at least, of equations that mathematicians regard as simple).

With the correct prior, even a single piece of data can be used to make meaningful Bayesian predictions. By contrast frequentists, though they deal with the same probability distributions as Bayesians, make fewer prior assumptions about the distribution that applies in any particular situation. Frequentism is thus a more robust approach, but one that is not well suited to making decisions on the basis of limited information—which is something that people have to do all the time.

more bullets!

• Bayesian reasoning draws strong — and accurate — inferences from 'sparse data'

• all you need for Bayesian reasoning to work is an 'appropriate prior' — an accurate hypothesis about the way the world works

• if you have a good hypothesis about the way the world works, you don't need a huge sample

• real people in real life have to make decisions based on limited data all the time; hence we probably developed Bayesian analytic abilities
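The "appropriate prior" idea can be made concrete with a toy Beta-Binomial example. This is my own illustration, not the Economist's: a strong, accurate prior means a single new data point barely moves your estimate, which is exactly why you don't need a huge sample.

```python
# A Beta(a, b) prior over an unknown success rate; observing s successes
# and f failures updates it to Beta(a + s, b + f).
def posterior_mean(a, b, successes, failures):
    return (a + successes) / (a + b + successes + failures)

# One observed success, under a weak prior vs. a strong 50/50 prior.
weak = posterior_mean(1, 1, 1, 0)      # uniform prior: estimate jumps to 2/3
strong = posterior_mean(50, 50, 1, 0)  # confident prior: barely moves from 1/2
print(f"weak prior:   {weak:.2f}")     # 0.67
print(f"strong prior: {strong:.2f}")   # ~0.50
```

With a good prior, one piece of data yields a sensible prediction; with a flat prior, one piece of data yanks the estimate all over the place.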

the cognitive unconscious knows what it's talking about

I believe it.

As I was saying, at some point I realized that:

a) published, peer-reviewed research is frequently wrong

and

b) personal opinions, gut feelings, and intuition are frequently right

At least, my own personal opinions & gut feelings have proved correct often enough that I never dismiss personal opinion & gut feeling — my own or other people's — out of hand.

I would have a 'feeling' about something, or an idea, and I would have no clue whether this was or was not likely to be right.

Then, after awhile, I accumulated so much experience in certain realms that I began to trust my judgment.

For example, after a few years working with medication for Jimmy, I began to have a sense of what we ought to try with him. Often, I was right.

I had meant to write a post about this back when we were talking about 'partnering' with teachers.....I've had numerous partnerships with Jimmy's doctors. I would read a piece of research that made sense, bring it in to our doctor, and our doctor would either instantly agree that it made sense, or would pursue it further.

Often he or she decided to try the medication I thought should be tried.

There are no medications approved for autism; all prescribing is done off-label. When we began working with meds, the standing belief was that medication 'did not treat autism.' The most you could hope for was to ameliorate a couple of symptoms, like hyperactivity and insomnia, and these symptoms were considered not to be 'core.' I rejected that line of reasoning years before the profession did, and I was right.

Now Ed has developed tremendous 'Bayesian' expertise with meds. He's been supervising medication for the past 10 years, since the twins were born, and he knows what he's doing. We're working with one of the best psychiatrists in the world (IMO) and Ed can frequently predict what Dr. Hollander will do next.

That's the cognitive unconscious at work. Research on the cognitive unconscious, which Arthur Reber surveys in his book Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious, shows that it is startlingly accurate.

Since reading Reber, I know that the cognitive unconscious — my own or others' — knows what it's talking about at least some of the time.

My problem has been figuring out when.

There's probably a simple answer to that.

According to Robin Hogarth, who wrote Educating Intuition, intuition — the cognitive unconscious — is likely to be right in realms that offer feedback.

A weatherman gets feedback. On Monday he predicts rain. On Tuesday, either it rains or it doesn't.

That's feedback. An experienced weatherman is going to develop good intuition.

A constructivist teacher who's not using formative assessment is getting very little feedback. In September he predicts that kids in the TRAILBLAZERS curriculum will learn their math facts without drill. In May he assumes they have.

That's not feedback.

This is why I don't listen to the casual observations and assertions of constructivists.

They haven't had enough feedback to develop good intuition.

In my experience, at least, a constructivist talking education is often talking belief, not experience.

rule of thumb

That last sentence gave me a new rule of thumb:

I tend to trust people who sound as if they're speaking from direct experience.

I don't trust people who sound as if they are restating educational philosophy.

This is the glaring difference between the writings of an Engelmann or a John Taylor Gatto and a generic constructivist.

Engelmann's work is filled with experience. I don't have to perform a post-hoc analysis of the statistical techniques used by Project Follow-Through to conclude that Engelmann knows what he's talking about.

He's got a Bayesian brain, I've got a Bayesian brain, and 95% of the time he's talking about his experience, not his philosophy.

blookis for Bayes

Which brings me to Kitchen Table Math.

A blooki is the perfect venue for Bayesian analysis.

I remember back in the first couple of months, someone left a personal narrative & then apologized for having done so, saying that his experience was just one example, nothing more.

I answered that a major reason I started writing Kitchen Table Math in the first place was that I wanted to learn about other people's personal experiences.

I'm not looking for the 'large sample' of a frequentist study.

I'm looking for the personal experience & observations of people with good priors.

That's what I've been getting ever since we started!

Bayes statistics & false positives
does human mind use Bayesian reasoning?
Bayesian reasoning, intuition, & the cognitive unconscious
most bell curves have thick tails
ECONOMIST explanation Bayesian statistics
Bayesian certainty scale


DataWarehousing 07 Oct 2006 - 22:10 CatherineJohnson

Our school district is now using 'data warehousing.'

The couple who came to dinner Friday night — both employed in math-related fields — were highly unenthusiastic about this development.

The Friday-night-couple said data warehousing is the same thing as data mining.....which I think I favor.

Is that wrong?

I'm certain they're right, though, that data mining will allow the district to flummox parents with whatever statistics they decide to pull out.

Although.....so far district efforts to flummox parents, namely me, have been unimpressive to say the least. These efforts consist of the Assistant Superintendent sending me one letter and one email telling me 'scores have gone up' since we purchased TRAILBLAZERS.

I pointed out that scores went up all over the state and that, furthermore, 'scores went up' is raw data, and we left it at that.

Color me Not flummoxed.

Then they shut down my Singapore Math course.

not flummoxed now & don't plan to be in the future

What do I need to start learning in order to not get flummoxed down the line?

Apart from real knowledge, comprehension, & procedural skills, I could use some lingo, just so I sound like I know what I'm talking about.

If the District is going to blow smoke-with-data, I need to be able to blow my own smoke, which I can do just through language. (Have I mentioned how ruthless I am lately?)

whose data is it, anyway?

What I fear — because we've hit this brick wall many, many times in special ed — is that parents won't get to see data because parents seeing data will represent an invasion of other parents' privacy.

Maybe things won't go that way, but seeing as how they've always gone that way for us in the past, and seeing as how Bush & co. had to pass a huge, major, revolutionary law just to get schools to disaggregate their data and publish it some place where parents could find it, my Bayesian mind tells me to count on it.

So maybe I should be familiarizing myself with the FOIA, right?

Wal-Mart has a warehouse for data, too

No idea whether this book would be useful or not.

-- CatherineJohnson - 16 Jan 2006

BayesianBrainRunAmok 22 Jan 2006 - 15:43 CatherineJohnson

-- CatherineJohnson - 20 Jan 2006

BayesAndTheBellCurves 25 Jan 2006 - 01:19 CatherineJohnson

I'm still cruising Edge's Annual Question, 2006.

I can't possibly form an educated opinion of Bart Kosko's 'dangerous idea.'

And yet, after reading the opening paragraphs, I'm convinced he's right. Bayes strikes again. (Have I mentioned I'm an early adopter?)

I'm going to be needing some Bayesian Rules Of Thumb pretty soon here.

When is it OK to trust your priors, and when is it a really bad idea?

Most bell curves have thick tails

Any challenge to the normal probability bell curve can have far-reaching consequences because a great deal of modern science and engineering rests on this special bell curve. Most of the standard hypothesis tests in statistics rely on the normal bell curve either directly or indirectly. These tests permeate the social and medical sciences and underlie the poll results in the media. Related tests and assumptions underlie the decision algorithms in radar and cell phones that decide whether the incoming energy blip is a 0 or a 1. Management gurus exhort manufacturers to follow the "six sigma" creed of reducing the variance in products to only two or three defective products per million in accord with "sigmas" or standard deviations from the mean of a normal bell curve. Models for trading stock and bond derivatives assume an underlying normal bell-curve structure. Even quantum and signal-processing uncertainty principles or inequalities involve the normal bell curve as the equality condition for minimum uncertainty. Deviating even slightly from the normal bell curve can sometimes produce qualitatively different results.

The proposed dangerous idea stems from two facts about the normal bell curve.

First: The normal bell curve is not the only bell curve. There are at least as many different bell curves as there are real numbers. This simple mathematical fact poses at once a grammatical challenge to the title of Charles Murray's IQ book The Bell Curve. Murray should have used the indefinite article "A" instead of the definite article "The." This is but one of many examples that suggest that most scientists simply equate the entire infinite set of probability bell curves with the normal bell curve of textbooks. Nature need not share the same practice. Human and non-human behavior can be far more diverse than the classical normal bell curve allows.

Second: The normal bell curve is a skinny bell curve. It puts most of its probability mass in the main lobe or bell while the tails quickly taper off exponentially. So "tail events" appear rare simply as an artifact of this bell curve's mathematical structure. This limitation may be fine for approximate descriptions of "normal" behavior near the center of the distribution. But it largely rules out or marginalizes the wide range of phenomena that take place in the tails.

Again most bell curves have thick tails. Rare events are not so rare if the bell curve has thicker tails than the normal bell curve has. Telephone interrupts are more frequent. Lightning flashes are more frequent and more energetic. Stock market fluctuations or crashes are more frequent. How much more frequent they are depends on how thick the tail is — and that is always an empirical question of fact. Neither logic nor assume-the-normal-curve habit can answer the question. Instead scientists need to carry their evidentiary burden a step further and apply one of the many available statistical tests to determine and distinguish the bell-curve thickness.

[ed.: this is where I fall off the cliff] One response to this call for tail-thickness sensitivity is that logic alone can decide the matter because of the so-called central limit theorem of classical probability theory. This important "central" result states that some suitably normalized sums of random terms will converge to a standard normal random variable and thus have a normal bell curve in the limit. So Gauss and a lot of other long-dead mathematicians got it right after all and thus we can continue to assume normal bell curves with impunity.

That argument fails in general for two reasons.

etc.
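Kosko's tail-thickness point is easy to make concrete with a little arithmetic. Here's a small sketch (my example, not Kosko's) comparing the standard normal bell curve with the Cauchy distribution, a classic bell-shaped curve with thick tails:

```python
import math

# How much probability lives more than 4 "widths" from the center?

# Standard normal bell curve: P(|Z| > 4).
normal_tail = math.erfc(4 / math.sqrt(2))

# Cauchy distribution: bell-shaped, but with thick tails. P(|X| > 4).
cauchy_tail = 1 - 2 * math.atan(4) / math.pi

print(f"normal tail beyond 4: {normal_tail:.1e}")   # about 6e-05
print(f"cauchy tail beyond 4: {cauchy_tail:.3f}")   # about 0.156
print(f"the thick tail is roughly {cauchy_tail / normal_tail:,.0f} times heavier")
```

With the skinny normal bell, a "4-sigma" event is a once-in-sixteen-thousand rarity; with the thick-tailed bell it happens about one time in six. That's the whole argument about rare events not being so rare, in two lines of arithmetic.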

I should probably use this article as a baseline for Progress in Understanding Statistics, once I actually take a course in statistics.

What courses would I have to take — what would I have to know — to Read The Whole Thing?

on the other hand

Asking a bunch of Big Brains what their 'dangerous idea' is is a dangerous idea, as far as I'm concerned. This exercise reminds me of all the 800-lb. gorillas in Hollywood — movie directors mostly — whose work invariably collapsed the instant they were so powerful they could do what they wanted, instead of answering to the studios giving them the money to do it.

There's a lot of claptrap in this year's WORLD QUESTION CENTER......there's so much claptrap, I'm thinking maybe I should revise my flash-judgment that Wow! Yes! The standard bell curve has a thicker tail than we think! Cool!

The tails over at the WORLD QUESTION CENTER aren't seeming too thick at the moment.

But maybe I'm wrong.

"Competing bell curves"

(website may not always respond)

these don't look like wide tails to me

I'm confused

Bayes statistics & false positives
does human mind use Bayesian reasoning?
Bayesian reasoning, intuition, & the cognitive unconscious
most bell curves have thick tails
ECONOMIST explanation Bayesian statistics
Bayesian certainty scale

Bayesianprobability

-- CatherineJohnson - 23 Jan 2006

TheFuture 23 Jan 2006 - 17:25 CatherineJohnson

Tracy left this link to Principles of Forecasting:

The Forecasting Principles site seeks to summarize all useful knowledge about forecasting so that it can be used by researchers, practitioners, and educators. This knowledge is provided as principles (guidelines, prescription, rules, conditions, action statements, or advice about what to do in given situations). The evidence-based principles apply to

• management
• operations research, and
• social sciences.

This site is designed to be used in conjunction with the Principles of Forecasting book.

-- CatherineJohnson - 23 Jan 2006

InPraiseOfBayes 04 Feb 2006 - 16:34 CatherineJohnson

Carolyn wrote a post about THE ECONOMIST's recent article on Bayes & the human mind (\$).

Here are excerpts from the article on Bayesian statistics they ran in the September 20, 2000 issue, In Praise of Bayes (\$):

IT IS not often that a man born 300 years ago suddenly springs back to life. But that is what has happened to the Reverend Thomas Bayes, an 18th-century Presbyterian minister and mathematician—in spirit, at least, if not in body. Over the past decade the value of a statistical method outlined by Bayes in a paper first published in 1763 has become increasingly apparent and has resulted in a blossoming of “Bayesian” methods in scientific fields ranging from archaeology to computing. Bayes’s fans have restored his tomb and posted pictures of it on the Internet, and a celebratory bash is planned for next year to mark the 300th anniversary of his birth. There is even a Bayes songbook—though, since Bayesians are an academic bunch, it is available only in the obscure file formats that are used for scientific papers.

Proponents of the Bayesian approach argue that it has many advantages over traditional, “frequentist” statistical methods. Expressing scientific results in Bayesian terms, they suggest, makes them easier to understand and makes borderline or inconclusive results less prone to misinterpretation. Bayesians claim that their methods could make clinical trials of drugs faster and fairer, and computers easier to use. There are even suggestions that Bayes’s ideas could prompt a re-evaluation of fundamental scientific concepts of evidence and causality....

The essence of the Bayesian approach is to provide a mathematical rule explaining how you should change your existing beliefs in the light of new evidence. In other words, it allows scientists to combine new data with their existing knowledge or expertise.

The canonical example is to imagine that a precocious newborn observes his first sunset, and wonders whether the sun will rise again or not. He assigns equal prior probabilities to both possible outcomes, and represents this by placing one white and one black marble into a bag. The following day, when the sun rises, the child places another white marble in the bag. The probability that a marble plucked randomly from the bag will be white (ie, the child’s degree of belief in future sunrises) has thus gone from a half to two-thirds. After sunrise the next day, the child adds another white marble, and the probability (and thus the degree of belief) goes from two-thirds to three-quarters. And so on. Gradually, the initial belief that the sun is just as likely as not to rise each morning is modified to become a near-certainty that the sun will always rise. In a Bayesian analysis, in other words, a set of observations should be seen as something that changes opinion, rather than as a means of determining ultimate truth. In the case of a drug trial, for example, it is possible to evaluate and compare the degree to which a sceptic and an enthusiast would be convinced by a particular set of results. Only if the sceptic can be convinced should a drug be licensed for use.

This is far more subtle than the traditional way of presenting results, in which an outcome is deemed statistically significant only if there is a better than 95% chance that it could not have occurred by chance. The problem, according to Robert Matthews, a mathematician at Aston University in Birmingham, is that medical researchers have failed to understand that subtlety. In a paper to be published shortly in the Journal of Statistical Planning and Inference, he sets out to demystify the Bayesian approach, and explains how to apply it after the event to existing data.

Patients in clinical trials will soon benefit. Bayesian methods offer the possibility of modifying a trial while it is being conducted, something that is impossible with traditional statistics. Andy Grieve and his colleagues at Pfizer, a drug firm, are intending to do just that.

Traditionally, dose-allocation trials—in which the aim is to establish the most effective dose of a new drug—involve giving different groups of patients different doses and evaluating the results once the trial has finished. This is fine from a statistical point of view, but unfair on those patients who turn out to have been given non-optimal doses. Rather than analysing the results at the end of a trial, Dr Grieve’s method will evaluate patients’ responses during it, and adjust the doses accordingly.

[snip]

Pfizer is intending to conduct a trial using this new method, and the plan is to re-analyse the data once it is completed in ways that will satisfy both Bayesians and non-Bayesians.

[snip]

Bayesian methods can also be used to decide between several competing hypotheses, by seeing which is most consistent with the available data.

[snip]

Bayes is still, however, the focus of much controversy.

[snip]

Perhaps the grandest claims made for Bayesian methods are those of Judea Pearl, a computer scientist at the University of California, Los Angeles. Dr Pearl has suggested that by analysing scientific data using a Bayesian approach it may be possible to distinguish between correlation (in which two phenomena, such as smoking and lung cancer, occur together) and causation (in which one actually causes the other).

This is why I would like to see more educational research focused on good teachers.

It's easy enough to pick out the good teachers in a school — not for me, probably, but for other teachers & administrators in the school.

I'd like to know what they're doing.

In the past, the only kind of research one could do on an individual teacher was.....Geertzian thick description or qualitative analysis of some kind.

I'd like to see lots more thick description & qualitative analysis; I'm not Frequentist-with-a-capital-F.

But Bayesian statistics strike me as being, potentially, incredibly useful for empirical research on Individual Great Teachers.

spaced repetition

In a Bayesian analysis, in other words, a set of observations should be seen as something that changes opinion, rather than as a means of determining ultimate truth.
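The newborn's marble bookkeeping from the excerpt is easy to sketch out; it's in fact Laplace's classic 'rule of succession.' A minimal version:

```python
# The newborn's bag of marbles, as described in the excerpt: one white
# (the sun rises) and one black (it doesn't), adding a white marble
# after every observed sunrise.

white, black = 1, 1
degrees_of_belief = []
for day in range(3):
    degrees_of_belief.append(white / (white + black))
    white += 1  # the sun rose again

print(degrees_of_belief)  # one half, then two thirds, then three quarters
```

Each new sunrise nudges the degree of belief upward, and no finite number of sunrises ever pushes it all the way to certainty: observations change opinion rather than determine ultimate truth, exactly as the article says.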

Bayesian statistics & false positives
Bayes & the human mind
Bayesian reasoning, intuition, & the cognitive unconscious
most bell curves have thick tails
ECONOMIST explanation Bayesian statistics
Bayesian certainty scale

Bayesianprobability

-- CatherineJohnson - 29 Jan 2006

BayesAndMedicalBreakthroughs 16 Mar 2006 - 20:09 CatherineJohnson

Ed spotted this op-ed in the Financial Times today:

A dose of realism exposes the heart of the matter (\$)
By Robert Matthews

A study of patients with heart disease has found that high doses of a cholesterol-lowering drug known as a statin can break down the potentially fatty deposits lining the arteries. Over three-quarters of the patients showed some improvement, with the most severe cases showing the biggest reductions.

Announced at a meeting of the American College of Cardiology this week, the results have been hailed as a big breakthrough in the fight against this killer disease.

[snip]

AstraZeneca’s share price duly rose a couple of per cent, and may benefit again when the results of the trial are published in a leading medical journal next month.

[snip]

But there is ... tell-tale sign of DSS – damp squib syndrome – and one routinely found in supposed breakthroughs in many other fields. Ironically, it centres on the level of surprise that leads to such findings making the media spotlight in the first place.

In the case of Crestor, even the researchers themselves admit to being stunned by the results of the trial. While statins were already known to be capable of slowing the build-up of arterial deposits, few expected them to produce a reduction. The increase in “healthy” cholesterol levels was also a surprise. The research team leader summed up the results as “shockingly positive”.

In other words, the results fly in the face of previous experience with these drugs. Considering there is no shortage of that, and that the new results come from a small trial, the smart money is on this “breakthrough” falling victim to DSS.

That may sound glib, but it has its basis in sophisticated techniques for making sense of new findings. Known collectively as Bayesian methods, they allow new findings to be assessed in the light of extant knowledge – with often salutary consequences.

The op-ed's concluding paragraphs are terrifically well-put:

Take the case of anistreplase, a clot-busting drug hailed in the early 1990s as a breakthrough in the treatment of heart attacks. A small trial conducted in Scotland suggested that early administration of the drug could cut death-rates by an astonishing 50 per cent. This again flew in the face of experience, which had suggested a much more modest level of benefit. Using Bayesian methods to combine that extant knowledge with the trial results, statisticians predicted the real improvement would be about 20 per cent. This has now been confirmed by much larger studies.

Despite their obvious value in making sense of new claims, Bayesian methods are still regarded by some as esoteric. The blame for this must lie with statisticians, who have done a dismal job of making these powerful techniques accessible to a much wider audience – including the business community. After all, even the least mathematical can appreciate the central message: extraordinary claims demand extraordinary evidence.

We can thank the Reverend Bayes for giving us a statistical method to demonstrate that if something seems too good to be true, that's because it is.
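The anistreplase story in the op-ed is the textbook case of combining extant knowledge with a surprising new result. Here's a toy version of that kind of calculation. Every number below is invented for illustration (the op-ed doesn't give the actual figures), and the normal-normal update is the simplest possible stand-in for whatever the statisticians actually did:

```python
# Toy normal-normal Bayesian update. ALL NUMBERS ARE INVENTED.
# Prior (extant knowledge): a modest ~15% mortality reduction, sd 10.
# New evidence (a small trial): a stunning 50%, but with a big sd of 20,
# because small trials are noisy.

prior_mean, prior_sd = 15.0, 10.0
trial_mean, trial_sd = 50.0, 20.0

# Posterior mean = precision-weighted average of the two sources.
w_prior = 1 / prior_sd ** 2
w_trial = 1 / trial_sd ** 2
posterior = (w_prior * prior_mean + w_trial * trial_mean) / (w_prior + w_trial)

print(f"posterior estimate: {posterior:.0f}%")  # pulled well back toward the prior
```

The noisy, surprising trial gets shrunk toward what experience already suggested, which is the op-ed's point in miniature: extraordinary claims demand extraordinary evidence.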

Bayesian statistics & false positives
Bayes & the human mind
Bayesian reasoning, intuition, & the cognitive unconscious
most bell curves have thick tails
ECONOMIST explanation Bayesian statistics
Bayesian certainty scale
Bayes and medical breakthroughs

-- CatherineJohnson - 15 Mar 2006

WorldMapper 05 Apr 2006 - 01:51 CatherineJohnson

I should just forget about writing Kitchen Table Math and send everyone over to Marginal Revolution for good.

population-weighted map of the world, circa 1500:

projected world population map, circa 2050:

source:
Worldmapper

I've decided I want Christopher to go to George Mason University. Their basketball win is responsible for about 16% of my feelings (seriously!) The rest is the Slate article and the four blogs.

update: speaking of college

Ed says Washington University, in St. Louis, is one of the hot schools now. Something like 19 seniors at Chappaqua High School have applied there this year.

Washington University was my back-up school. The last place on earth I wanted to go was St. Louis. These days, of course, I like St. Louis. We often fly in there to go see my brother & my dad in Springfield, IL.

Which reminds me: I MUST MAKE HOTEL RESERVATIONS TODAY. PERIOD.

I want to go to the State Fair again this year; we missed it last year because we didn't make reservations in time.

Why don't I go do that now?

-- CatherineJohnson - 01 Apr 2006

ProbabilityQuestionSaxon87 10 Apr 2006 - 15:23 CatherineJohnson

update: take a look at Ken's & Rudbeckia Hirta's discussion in the Comments thread

Robert was asked to select and hold three cards from a normal deck of cards. If the first two cards selected were aces, what is the chance that the third card he selects will be one of the two remaining aces?

source:
Saxon Math Homeschool 8/7
Lesson 119
Mixed Practice
#1

At first I didn't know how to answer this problem.

Choosing 3 aces in a row is a series of dependent events, and the given answer, 4%, comes straight from that dependence: once two aces are held, 2 of the 50 remaining cards are aces, and 2/50 = 4%.

So this isn't like tossing coins after all.

Why don't I get that from the wording?

Another thing: this question reminds me of the Monty Hall problem, but I don't know why.

Why do I think that?
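The given answer can be checked both exactly and by brute force. A quick sketch in Python (the deck setup and trial counts are mine):

```python
import random

# Exact answer: with two aces already held, 2 of the 50 remaining
# cards are aces.
print(2 / 50)  # 0.04, i.e. Saxon's 4%

# Monte Carlo check: deal 3 cards, keep only the deals whose first two
# cards are aces, and see how often the third is also an ace.
random.seed(1)
deck = ["A"] * 4 + ["x"] * 48
hits = kept = 0
while kept < 2_000:
    hand = random.sample(deck, 3)
    if hand[0] == "A" and hand[1] == "A":
        kept += 1
        hits += hand[2] == "A"
print(hits / kept)  # lands near 0.04
```

The simulation throws away every deal that doesn't start with two aces, which is exactly what "given that the first two cards were aces" means.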

low birth weight paradox (& Monty Hall)
Monty Hall, part 2
Monty Hall, part 3
false positives
false positives, part 2
Doug Sundseth on Monty Hall
John Kay: We are likely to get probability wrong (subscription only)
Monty Hall diagram from Curious Incident
probability question from Saxon 8/7

-- CatherineJohnson - 08 Apr 2006

QuestionAboutLearningProbability 09 Apr 2006 - 21:24 CatherineJohnson

I learned almost nothing about probability as a kid; I'm almost a blank slate.

So I'll try to remember to take notes on my learning process as I go along — which explanations and lessons work well for me, which ones aren't as effective, and so on. (Carolyn's much better at this kind of thing than I am, so we'll see. I've never kept a journal in my life!)

Looking at Rudbeckia Hirta's explanation, which I have yet to read, sparked an observation and a question.

So far, I think I've done best with visual, 'branching tree' examples like hers —

Have other people experienced this?

Another helpful approach: Saxon 8/7 has an Activity Sheet that gives you a sample space for the various outcomes when you roll two dice, then has you construct a bar graph of the results.

That was wonderful.

It didn't cause me to understand probability better, I don't think.

But it was incredibly compelling and 'useful' (can't explain 'useful') to see the shape and order of the bar graph. I assume this is related to the 'bar model' phenomenon. I can't explain why bar models should be so powerful and 'ordering' for me, but they are.
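The two-dice sample space and its bar graph can be built in a few lines (a text-mode sketch, not the actual Activity Sheet):

```python
from collections import Counter

# All 36 equally likely outcomes of rolling two dice, tallied by sum.
sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))

# A bar graph of the results, like the one the Activity Sheet has you draw.
for total in range(2, 13):
    print(f"{total:2d} {'#' * sums[total]}")
```

The bars rise from 2 up to a peak at 7 (six ways to make it) and fall back down to 12, which is the shape and order that makes the exercise so compelling.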

One more thing.

I find branching trees much more comprehensible than 'sample spaces' and grids of all kinds. The multiplication grid never makes much sense to me, though I haven't sat down and really focused on the thing.

The fact that I haven't done so is a clue, however.

I do feel motivated to study a branching tree.

I don't feel motivated to study a grid. When I see a grid, I think "I'll get to that later."

So far, for me, grids seem to erase distinctions and patterns, rather than to highlight and reveal them.

Each square on the grid is the same size, and the numbers inside the grid seem subordinate to the ordered squares.

I don't necessarily think it has to be this way. The '100s grid' I use with Andrew now makes all kinds of sense to me. When I look at it, the numbers 'pop.' (I think they may 'pop' for Andrew, too. Tens go below tens, 1s below 1s, etc. He seems to see this. I'll post some KUMON sheets so you know what I mean.) So it's possible that a grid requires more textual support than I've been given with some of the grids I've encountered in textbooks.

I wonder whether branching trees work better off the bat because a branching tree contains an implicit 'narrative element' — a first, middle & last sequence....(can you tell I went to film school?)

I'm going to have to finally get around to reading Daniel Willingham's article on narrative.

probability question from Saxon 8/7

-- CatherineJohnson - 09 Apr 2006

WhatDoesAGoodSchoolLookLike 12 Apr 2006 - 20:30 CatherineJohnson

I've mentioned getting back in touch with an old friend whose kids are in what is probably one of the best private schools in the country.

I've been debriefing her, which naturally has had the effect of bringing into even sharper focus all of my frustrations with our school.

Two things stood out:

1.
Her school performs huge quantities of formative assessment, or assessment for learning.

They use the ERB,* which is the test private schools use. They can't use any of the norm-referenced tests public schools use, because the kids are all "in the 99th percentile."

They give the ERB four times a year. Then a team of learning specialists meets with the parents and goes over the results.

In one of her recent meetings, the learning specialists told her that her child's performance on unit conversions wasn't where it should be. They attributed this to the fact that the school hadn't taught the subject well.

They also found that her child's performance on prepositional phrases should be better than it was. In that case, they don't see a deficit in their teaching but a need for further practice. They'll provide her child with a packet of practice materials for the summer, and when he returns to school he'll have the skill mastered. (They're confident of this, because they have specific information and data on all of their worksheets and textbooks.)

Now that is assessment for learning!

The goal is to find out what the child knows, connect what the child knows with the school's teaching practices and curriculum, and quickly remediate any gaps in the child's learning.

A couple of years ago, after reading a letter from a mom who discovered her child was two years behind public school kids on the placement test for private school, I was going to give Christopher the private school entrance exam. But then I found out it costs $400 or some such.

2.
The school does not assign letter grades. (I'll find out whether they do in middle school and high school.) They give the kids tons of tests and assignments, all of which are graded on a percent basis, which, my friend says, is the equivalent of a letter grade.

But they don't write letter grades on papers, or send home report cards with letter grades.

how far behind are public school kids?

The salient portion of the letter I mentioned above:

Most telling [after the introduction of TERC in NYC schools] was the progressive drop in students performing at Level 4 (the highest level) on the city-wide tests, culminating in a plunge of over 50% the year my son took the 4th-grade test.

Considering the amounts of taxpayer's money spent on these studies, it is criminal that this huge drop in higher achieving children's scores has not even been mentioned or noted in any NSF-funded research I have seen. In fact, it is now difficult to find any figures for NYC test scores that do not show Levels 3 and 4 combined, in effect, hiding this alarming fact.

In the fall of 2000, my child took the ISEE test (a test used for application to private schools). A public school, Level 4 "top 2 or 3 in his class of 130 children" math student placed mid-range. I was told by private school directors that this was consistent with what they were seeing with District 2 children.

Please remember that the TERC program moves very slowly as the children are all "finding their own solutions" without the benefit of using time-proven methods. When I looked at the ISEE test prep materials, I realized my child was almost 2 years behind ISEE standards for grade level and helped him catch up as much as possible, otherwise his score would have been even lower.

The Student Guide (pdf file) for the ISEE has 4 pages of sample questions towards the end of the guide. I'm going to see how Christopher does on them.

St. Anne's

Apparently, Jay Matthews says that St. Anne's, in Brooklyn, is the best private school in the country. (Haven't been able to track the column down, but I trust my source.)

I know two people who have kids in the school, so I'm going to start asking them what the school is like.

oh my gosh!

St. Anne's has a Monty Hall simulation!

OK, no doubt in my mind this is THE BEST PRIVATE SCHOOL IN THE COUNTRY.

* news flash: there is no test called 'the ERB'...

-- CatherineJohnson - 12 Apr 2006

SundemTierneyUnifiedCelebrityTheory 19 Sep 2006 - 15:59 CatherineJohnson

Sometime in my youth, in high school I think, I came up with my first writer idea.

I wanted to write a Dear Abby column with numbers.

The plan was to do a Math Trailblazers-like counting job on social pain.

Basically, my plan was to figure out how long it took to get over things.

How long did it take to get over being dumped?

How long did it take to get over someone dying? (Two years, I figured.)*

etc.

Then people could write in, tell me what bad thing had just happened to them, and I could write back telling them how long before they felt OK again.**

At the time, I hadn't (really) heard of probability & statistics — or, rather, I'd heard of statistics and probability, but I had no idea how it worked.

Geek Logik

Today I learn from John Tierney (\$) that a fellow named Garth Sundem has actually gone out and done a geek version of my high school kid concept:

I wish no ill to Brangelina, Tom and Katie, or Pamela Anderson and Kid Rock. Like any mortal, I revere the romances on Olympus. I thrilled to hear of Pam’s secret wedding and agonized at reports of Angelina’s reluctance to marry (or is Brad dragging his feet?). When I finished poring over Vanity Fair’s photo spread of Tom Cruise and Katie Holmes with their daughter, my only bitter thought was: Why just 22 pages?

But we inquiring minds must be realistic. Remember your crazy joy at past celebrity marriages — Jessica and Nick, Julia and Lyle, Uma and Ethan?

[snip]

[Y]ou were sure this one was for the ages — until the day their publicist put out the statement about an “amicable” decision to pursue “separate lives.” Amicable! How could the couple of the century bear to be apart? You felt deceived, used, discarded. You stared at their photo and thought: I don’t even know you anymore.

I can’t bear any more of these breakups, so I have turned to science to steel my heart. I went to Garth Sundem, the wickedly ingenious author of “Geek Logik,” a new book of mathematical formulas for deciding questions like whether you should sleep with a co-worker, whether you should join a gym or see a therapist, and whether you can wear a Speedo without frightening small children.

Sundem's formula predicting the likelihood that a celebrity marriage will last:

Sundem's odds

Geek Logik

* Amazingly enough, two years turned out to be a pretty good estimate. At least, it's a good estimate for me.

**This would be your Midwest farmer's concept of self-help.

-- CatherineJohnson - 19 Sep 2006

ThreeHundredMillion 17 Oct 2006 - 01:04 CatherineJohnson

U.S. Population Clock Projection

COMPONENT SETTINGS FOR OCTOBER 2006

One birth every.................................. 7 seconds
One death every.................................. 13 seconds
One international migrant (net) every............ 31 seconds
Net gain of one person every..................... 11 seconds

I think 300,000,000 is a nice, friendly number.
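The component settings can be cross-checked against each other with a bit of arithmetic (a rough consistency check, nothing more; rounding in the published figures accounts for the small gap):

```python
# Births minus deaths plus net migration should reproduce the
# "net gain" line in the Census Bureau table above.
births_per_sec   = 1 / 7
deaths_per_sec   = 1 / 13
migrants_per_sec = 1 / 31

net = births_per_sec - deaths_per_sec + migrants_per_sec
print(f"net gain of one person every {1 / net:.1f} seconds")  # about 10.2
```

That comes out to roughly one person every 10.2 seconds, close to the 11 seconds the table quotes once you allow for the rounded component figures.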

-- CatherineJohnson - 17 Oct 2006
