The most disturbing thing is the breakdown of the causes of death. Over half the deaths -- 56 percent -- are due to gunshot wounds, but 13 percent are due to air strikes. No terrorists carry out air strikes. Neither do Iraqi government forces, because they have no combat aircraft. Air strikes are done by "coalition forces" (i.e. Americans and British), and air strikes in Iraq have killed over 75,000 people since the invasion. I have a dim memory that "carelessness" in the context of military operations in a civilian environment can be considered a war crime. I believe that's where Dyer is going with this.
Oscar Wilde once observed that "to lose one parent...may be regarded as a misfortune; to lose both looks like carelessness." To lose 75,000 Iraqis to air strikes looks like carelessness, too.
Maybe Kissinger can replace Rumsfeld and the circle will be complete ...
Anyone who displays their publications on the web as .txt files is, by today's standards, eccentric.
A couple of things regarding the Lancet study and this Dyer piece. Before I say anything, though, let me be clear that civilian casualties in Iraq are not unimportant: I am not arguing that the number of civilians killed is small, nor that the number doesn't matter.
Gwynne Dyer says, "To reject it, you must either reject the whole discipline of statistics, or you must question the professional integrity of those doing the survey." Well, that is more than a bit hyperbolic. To be skeptical of it and to ask appropriate questions, you need do neither.
To make my point, I need to use some concepts from cluster sample survey design and statistics. I am not an expert, but I do have some background in the area.
The study was a cluster sample survey. Cluster sample surveys require larger samples than simple random sample surveys, and the reason is fairly intuitive: each additional interview within a cluster gives you less information about the population at large than another purely random interview would, because individuals within a cluster are more likely to answer certain questions the same way (this is called intraclass correlation).
The larger the clusters, the larger the sample you need; the same goes for high intraclass correlation. Generally, the intraclass correlation is not known and must be estimated. If you underestimate the correlation and have a large cluster size, you can easily underestimate the needed sample size. The authors estimated their cluster size at ~240 people, but they lost 3 clusters and their actual cluster size came out nearly 13% larger (~273 people). Their sample was only ~7% (801/12000) larger than planned, so it was probably a bit too small given the larger clusters. I'm not sure how they calculated their confidence intervals, but this undersizing of the sample may not be reflected in their already wide (80% of the estimate) confidence intervals.
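The sample-size inflation at issue here can be sketched with the standard design-effect formula, DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC the intraclass correlation. A minimal sketch (the ICC of 0.01 and the simple-random-sample baseline of 3,000 are hypothetical round numbers for illustration, not figures from the study):

```python
def design_effect(cluster_size, icc):
    # DEFF = 1 + (m - 1) * ICC: variance inflation of a cluster
    # sample relative to a simple random sample of the same size
    return 1 + (cluster_size - 1) * icc

def required_sample(srs_n, cluster_size, icc):
    # sample size needed under clustering to match the precision
    # of a simple random sample of srs_n people
    return srs_n * design_effect(cluster_size, icc)

# hypothetical ICC of 0.01, SRS-equivalent target of 3000 people
planned = required_sample(3000, 240, 0.01)  # planned cluster size
actual = required_sample(3000, 273, 0.01)   # realized cluster size
print(planned, actual)
```

Even at this modest hypothetical ICC, clusters of 273 instead of 240 raise the required sample by roughly 10%, more than the ~7% by which the actual sample exceeded the plan.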
Importantly, that sample size calculation was done to catch a doubling of the mortality rate 80% of the time with 95% certainty. It was explicitly NOT powered to determine the number of deaths by airstrike. Given the nature of airstrikes, cluster sampling seems like a particularly bad way to estimate airstrike deaths: the intraclass correlation for that outcome is likely to be higher than the one they estimated for the overall mortality rate, so their sample size was likely far too small for it. Basing accusations of negligence or war crimes on this characteristic of the sample population is dubious at best.
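For a sense of what "powered to catch a doubling of the mortality rate" means, here is a rough two-rate (Poisson) sample-size sketch. The baseline rate of 5 deaths per 1,000 person-years and the design effect of 2.0 are hypothetical round numbers for illustration, not the study's actual inputs:

```python
from statistics import NormalDist

def person_years_needed(rate0, rate1, alpha=0.05, power=0.80, deff=1.0):
    # crude two-rate Poisson comparison: person-years of observation
    # needed to detect rate1 vs rate0, inflated by the design effect
    za = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    zb = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    n = (za + zb) ** 2 * (rate0 + rate1) / (rate1 - rate0) ** 2
    return n * deff

# hypothetical: baseline 5/1000 person-years doubling to 10/1000,
# with a cluster design effect of 2.0
print(person_years_needed(0.005, 0.010, deff=2.0))
```

Under these made-up inputs this lands around 9,000-10,000 person-years, which is why a survey of roughly this size can detect a doubling of overall mortality yet remain far too small for a rarer sub-outcome like airstrike deaths.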
Finally, something curious about their confidence intervals. Gwynne Dyer says that the paper estimated "between 426,369 and 793,663 excess deaths." Actually, that's not quite right: that is the estimated number of deaths due to violence. The estimated number of excess deaths from any cause was "654965 (392979–942636)." Does it strike you as odd that they're 95% confident there were more than 426,369 excess VIOLENT deaths, but only 95% confident there were more than 392,979 TOTAL excess deaths? Shouldn't those two be reversed? How do they get a tighter CI (~60% of the estimate vs. ~80% of the estimate for all deaths) for a subset of their sample population? And how can their lower bound for violent deaths be HIGHER than the lower bound for TOTAL deaths?
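The width comparison is simple arithmetic. A quick check using only the figures quoted in this thread (for the violent-death interval, the midpoint of Dyer's quoted range stands in for the point estimate, which is not given here):

```python
def ci_width_ratio(estimate, lo, hi):
    # width of the 95% CI as a fraction of the point estimate
    return (hi - lo) / estimate

# violent deaths: Dyer's range, with its midpoint as a stand-in estimate
violent = ci_width_ratio((426369 + 793663) / 2, 426369, 793663)
# all excess deaths: 654965 (392979-942636), as quoted in the thread
total = ci_width_ratio(654965, 392979, 942636)
print(round(violent, 2), round(total, 2))  # roughly 0.60 vs 0.84
# and, indeed, 426369 > 392979: the violent-death lower bound
# sits above the all-cause lower bound
```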
But my main point is that using this data to estimate the number of deaths by airstrike is a bad idea, especially if it's going to be the basis for accusations of war crimes.
P.S. I generally try to engage the argument rather than the person, but I think reasonable people can question the objectivity of the primary author of the paper and the editor in chief of the Lancet with respect to this subject. The first author ran for Congress in New York as an anti-war Democrat. The editor of the Lancet has a foaming-at-the-mouth rant on YouTube and is widely on the record about his opposition to the Iraq war. Neither screams scientific objectivity. At the very least, the fact that the results neatly support their political points of view should prompt consumers to apply an extra measure of caution.
It will be interesting to read the letters page. Have you written a letter to Lancet?
I'd like to see better data, but I doubt the US is going to fund a follow-up study. I would not be at all surprised if the US military, behind closed doors, finds the numbers believable.
I think we'll hear more about the casualties associated with the quiet air war ...