[–] [email protected] 2 points 11 months ago (1 children)

This easily veers into the problems of statistical analysis.

The 2.5 million DGU figure hinges on what counts as a defensive gun use. According to "The Reload", on which your link is based:

GVA uses the most conservative criteria for what constitutes a defensive gun use. Instead of attempting to capture any time a person legally uses a gun to defend themselves or others, it only counts incidents that make it into media reports or police reports (though it’s unclear how many police reports they have access to). The site’s methodology takes a strikingly dismissive tone towards any other potential defensive gun uses.

But what's Gary Kleck's methodology, the means by which he estimated the 2.5 million DGUs?

By pure coincidence, The Reload doesn't cover that explicitly. It merely alludes to the fact that he extrapolated that amount.

So, doing their research for them since people can't seem to do it themselves (also, thank god for AI...it really makes this process go way faster), here's an analysis of Kleck and Gertz's work by David Hemenway, a professor of health policy at Harvard.

I'm going to quote the entire "The Kleck-Gertz Survey" section of that paper:

In 1992, Kleck and Gertz conducted a national random-digit-dial survey of five thousand dwelling units, asking detailed questions about self-defense gun use. Their estimates of civilian self-defense gun use range from 1 million to 2.5 million times per year. The 2.5 million figure is the one they believe to be most accurate and the one Kleck has publicized, so that figure will be discussed in this paper.

K-G derive their 2.5 million estimate from the fact that 1.33% of the individuals surveyed reported that they themselves used a gun in self-defense during the past year; in other words, about 66 people out of 5000 reported such a use. Extrapolating the 1.33% figure to the entire population of almost 200 million adults gives 2.5 million uses.

Many problems exist with the survey conducted by Kleck and Gertz. A deficiency in their article is that they do not provide detailed information about their survey methodology or discuss its many limitations. For example, the survey was conducted by a small firm run by Professor Gertz. The interviewers presumably knew both the purpose of the survey and the staked-out position of the principal investigator regarding the expected results.

The article states that when a person answered, the interview was completed 61% of the time. But what happened when there was a busy signal, an answering machine, or no answer? If no one was interviewed at a high percentage of the initially selected homes, the survey cannot be relied on to yield results representative of the population. Interviewers do not appear to have questioned a random individual at a given telephone number, but rather asked to speak to the male head of the household. If that man was not at home, the caller interviewed the adult who answered the phone. Although this approach is sometimes used in telephone surveys to reduce expense, it does not yield a representative sample of the population.

The 2.5 million estimate is based on individuals rather than households. But the survey is randomized by dwelling unit rather than by the individual, so the findings cannot simply be extrapolated to the national population. Respondents who are the only adults in a household will receive too much weight.

K-G oversampled males and individuals from the South and West. The reader is presented with weighted rather than actual data, yet the authors do not explain their weighting technique. K-G claim their weighted data provide representative information for the entire country, but they appear to have obtained various anomalous results. For example, they find that only 38% of households in the nation possess a gun, which is low, outside the range of all other national surveys. They find that only 8.9% of the adult population is black, when 1992 Census data indicate that 12.5% of individuals were black.

The above limitations are serious. However, it is two other aspects of the survey that, when combined together, lead to an enormous overestimation of self-defense gun use: the fact that K-G are trying (1) to measure a very low probability event which (2) has positive social desirability response bias. The problem is one of misclassification.
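
To make (1) and (2) concrete, here's a rough back-of-the-envelope sketch in Python. Only the 66-out-of-5,000 sample result comes from the quote above; the population size, the assumed "true" DGU rate, and the false-positive rate are numbers I'm plugging in purely for illustration, not figures from Hemenway's paper.

```python
# Rough sketch (not Kleck's or Hemenway's code) showing two things:
# 1) how the 2.5 million figure falls out of simple extrapolation, and
# 2) how a tiny false-positive rate swamps a rare-event survey estimate.
# All rates below marked "assume" are illustrative assumptions.

ADULT_POPULATION = 190_000_000   # roughly the early-1990s US adult population
SAMPLE_SIZE = 5_000              # K-G survey size
REPORTED_DGU = 66                # respondents reporting a DGU in the past year

# (1) The K-G extrapolation: sample proportion times the adult population.
reported_rate = REPORTED_DGU / SAMPLE_SIZE              # ~1.33%
kg_estimate = reported_rate * ADULT_POPULATION
print(f"K-G style estimate: {kg_estimate:,.0f} DGUs/year")   # ~2.5 million

# (2) Misclassification: suppose the *true* DGU rate is far lower, but a small
# fraction of non-users falsely report one (social desirability, telescoping,
# misremembering). Assumed numbers, for illustration only.
true_rate = 0.001            # assume only 0.1% of adults truly had a DGU
false_positive_rate = 0.013  # assume 1.3% of the rest report one anyway
false_negative_rate = 0.0    # be generous: assume no true users are missed

measured_rate = (true_rate * (1 - false_negative_rate)
                 + (1 - true_rate) * false_positive_rate)
inflated_estimate = measured_rate * ADULT_POPULATION
true_count = true_rate * ADULT_POPULATION
print(f"True DGUs:      {true_count:,.0f}")
print(f"Survey implies: {inflated_estimate:,.0f} "
      f"({inflated_estimate / true_count:.0f}x too high)")
```

Under those assumed numbers, a false-positive rate of barely over 1%, applied to ~190 million adults, produces an estimate roughly 14 times the true count. When the event you're measuring is rare, even a small misclassification rate dominates the result, which is exactly the problem Hemenway is describing.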

Conducting a survey the way Kleck did would be like polling /c/conservative on Trump support, taking the proportion who said they support him, and multiplying it by the total number of accounts in the Fediverse. Do you really think that would be representative of support for Trump across the Fediverse? If you do, you're just wrong. If you don't, then you shouldn't accept Kleck's haphazardly generated 2.5 million figure either.
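
To put numbers on that analogy (all of them made up, purely to illustrate the point):

```python
# Illustration of the analogy above, with invented numbers.
# Extrapolating from a community that isn't representative of the
# population you project onto produces a wildly wrong total,
# even though the arithmetic itself is correct.

FEDIVERSE_ACCOUNTS = 2_000_000   # assumed total accounts (illustrative)
true_support_rate = 0.05         # assume 5% of all accounts support Trump

# Poll only /c/conservative, where support is (unsurprisingly) much higher.
sample_size = 500
supporters_in_sample = 400       # 80% of a self-selected community

biased_estimate = (supporters_in_sample / sample_size) * FEDIVERSE_ACCOUNTS
true_total = true_support_rate * FEDIVERSE_ACCOUNTS

print(f"Biased extrapolation: {biased_estimate:,.0f} supporters")  # 1,600,000
print(f"Actual:               {true_total:,.0f} supporters")       # 100,000
```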

The 2.5 million DGU figure isn't a political issue, though gun rights activists make it out to be. It's a statistical one, and statisticians say the methodology behind it is trash. No matter what you want to believe, no matter how hard, the 2.5 million DGU figure is far, far more likely to be false than true.

[–] [email protected] 3 points 11 months ago

Great response! I'll further add that the OP article does the exact same thing. It redefines the metric to be more favorable to its narrative, even though that isn't the metric by which any agency in the country measures these numbers, and then it fails to explain its own methodology or why that methodology would be more accurate. It's dishonestly redefining its terms while ignoring all the issues inherent in this data in the first place.