
Do Studies Show Gun Control Works? No.


After reaching historic lows in the mid-2010s, gun violence rates in America have gone up in recent years, and they remain higher than in most other developed countries. There are hundreds of laws and regulations at the federal and state level that restrict Americans’ access to guns, yet according to some advocates, social science research shows that a few more “simple, commonsense” laws could significantly reduce the number of injuries and deaths attributed to firearms.

There has been a massive research effort going back decades to determine whether gun control measures work. A 2020 analysis by the RAND Corporation, a nonprofit research organization, parsed the results of 27,900 research publications on the effectiveness of gun control laws. From this vast body of work, the RAND authors found only 123 studies, or 0.4 percent, that tested the effects rigorously. Some of the other 27,777 studies may have been useful for non-empirical discussions, but many others were deeply flawed. 

We took a look at the significance of the 123 rigorous empirical studies and what they actually say about the efficacy of gun control laws. 

The answer: nothing. The 123 studies that met RAND’s criteria may have been the best of the 27,900 that were analyzed, but they still had serious statistical defects, such as a lack of controls, too many parameters or hypotheses for the data, undisclosed data, erroneous data, misspecified models, and other problems. 

And these glaring methodological flaws are not specific to gun control research; they are typical of how the academic publishing industry responds to demands from political partisans for scientific evidence that does not exist.

Not only is the social science literature on gun control broadly useless, but it provides endless fodder for advocates who say that “studies prove” that a particular favored policy would have beneficial outcomes. This matters because gun laws, even if they don’t accomplish their goals, have large costs. They can turn otherwise law-abiding citizens into criminals, they increase prosecutorial power and incarceration, and they exacerbate the racial and socioeconomic inequities in the criminal justice system. 

The 123 papers identified by RAND tested 722 separate hypotheses about the impact of gun control policies for “statistical significance.” Peer-reviewed journals generally accept a result as statistically significant if a result that large would arise by random chance alone no more than one time in 20. So if researchers run 100 tests on the relationship between two things that obviously have no connection to each other at all—say, milk consumption and car crashes—by pure chance, they can be expected to get five statistically significant results that are entirely coincidental, such as that milk drinkers get into more accidents.

In terms of the gun control studies deemed rigorous by RAND, this means that even if there were no relationship between gun laws and violence—much like the relationship between drinking milk and getting into car accidents—we’d nevertheless expect about five percent of the studies’ 722 tests, or 36 results, to show that gun regulations had a significant impact. But the actual papers found positive results for only 18 combinations of gun control measure and outcome (such as waiting periods and gun suicides). That’s not directly comparable to the 36 expected false positives, since some combinations had the support of multiple studies. But it’s not out of line with what we would expect if gun control measures made no difference.
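To make that arithmetic concrete, here is a minimal simulation sketch in Python. The test count and significance threshold come from the discussion above; everything else is simulated noise, not data from the actual studies.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_tests = 722  # hypotheses tested across the 123 studies
alpha = 0.05   # the conventional one-in-20 significance threshold

# Simulate a world in which no gun law has any effect: every test
# compares unrelated quantities, so every p-value is uniform on (0, 1).
p_values = rng.uniform(0.0, 1.0, size=n_tests)
false_positives = int(np.sum(p_values < alpha))

print(f"Expected false positives: {alpha * n_tests:.0f}")  # about 36
print(f"Found in this simulated run: {false_positives}")
```

Run repeatedly, the count hovers around 36, which is why a few dozen scattered “significant” findings are not, by themselves, evidence of anything.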

Also concerning is the fact that there was only one negative result in which the researchers reported that a gun control measure seemed to lead to an increase in bad outcomes or more violence. Given the large number of studies done on this topic, there’s a high statistical likelihood that researchers would have come up with more such findings just by random chance. This indicates that researchers may have suppressed results that suggest gun control measures are not working as intended. 
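A rough way to put a number on that suspicion, under the illustrative assumption (ours, not RAND’s) that a chance “significant” finding is equally likely to point in either direction:

```python
from math import comb

# Illustrative assumption: under the null, each chance "significant"
# result is a coin flip between favoring and disfavoring gun control.
n_sig = 19  # e.g., the 18 pro-gun-control results plus the 1 negative
p = 0.5     # chance a random finding points against gun control

# Probability that at most 1 of the 19 points against gun control
prob = sum(comb(n_sig, k) * p**k * (1 - p)**(n_sig - k) for k in range(2))
print(f"P(at most 1 of {n_sig} points against): {prob:.1e}")  # ~3.8e-05
```

Odds that lopsided are hard to square with chance alone, and easier to square with selective reporting.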

Most inconclusive studies also never get published, and many inconclusive results are omitted from published studies, so the rarity of pro-gun-control results and the near-total absence of anti-gun-control results together make a strong argument that, based on the existing social science, we know nothing about the effects of gun control.

The reasons that we have no good causal evidence about the effectiveness of gun control are fundamental and unlikely to be overcome in the foreseeable future. The data on gun violence are simply too imprecise, and violent events too rare, for any researcher to separate the signal from the noise, or, in other words, to determine if changes in gun violence rates have anything to do with a particular policy. 

One common research approach is to compare homicide rates in a state in the years before and after gun control legislation was passed. But such legislation can take months or years to be fully implemented and enforced, if ever. Most modern gun control measures affect only a minority of gun sales, and new gun sales are a small proportion of all firearms owned. Very few of the people who would be prevented from buying guns by the legislation were going to kill anyone, and many of the people who were going to kill someone would do it anyway, with another weapon or by getting a gun some other way.

Therefore, the most optimistic projection of the first-year effect of most laws on gun homicides would be a reduction of a fraction of a percent. But gun homicide rates in a state change by an average of six percent in years with no legislative changes, based on FBI Uniform Crime Reporting (UCR) data going back to 1990. As a statistician’s rule of thumb, this kind of before-and-after study can only pick up effects about three times the size of the average year-to-year change, meaning that such studies can’t say anything about the impact of a gun law unless it leads to an 18 percent change or greater in the gun murder rate in a single year. That’s at least an order of magnitude larger than any likely effect of the legislation.
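A hedged sketch of that rule of thumb, treating the six percent figure from the UCR data as the scale of policy-free year-to-year noise (the normality assumption is ours, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Assume a state's gun homicide rate moves about 6% a year on its own,
# with no legislative changes at all.
noise_sd = 0.06
null_changes = rng.normal(0.0, noise_sd, size=100_000)

# How often does noise alone produce a one-year drop of a given size?
for drop in (0.06, 0.12, 0.18):
    share = np.mean(null_changes <= -drop)
    print(f"{drop:.0%} one-year drop from noise alone: {share:.2%} of years")
```

Only around the 18 percent mark (three times the typical change) do chance drops become rare enough to register as “significant,” and no plausible law moves the needle that far in a single year.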

One way to try to get around these limitations is to use what statisticians call “controls,” which are mathematical tools that allow them to compare two things that are different in many ways, and isolate just the effect they’re looking for. In this case, gun control studies often compare the violence rates in two or more states that have stricter versus more lax gun laws, and they try to control for all the differences between the states except for these policies. 

Another option for researchers is to compare violence rates in a single state with national averages. The idea is that factors that change homicide rates other than the legislation will affect both state and national numbers in the same way. Comparing changes in the state rate to changes in the national rate supposedly controls for other factors that are affecting rates of violence, such as a nationwide crime wave or an overall decline in shootings.

The problem here is that national violence rates actually don’t track well with individual states’ violence rates. Based on the FBI’s UCR data, annual changes within states have only about a 0.4 correlation with national rates when there is no change in legislation. That means the difference between any individual state’s rate and the national rate is more volatile than the change in the state’s rate on its own. The control adds noise to the study rather than filtering noise out. The same problem exists if you try to compare the state to similar or neighboring states. We just don’t have good controls for state homicide rates.
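The variance arithmetic behind that claim is worth spelling out. Assuming state and national year-to-year changes have roughly equal volatility along with the 0.4 correlation described above (the equal-volatility simplification is ours):

```python
# Variance bookkeeping for "controlling" a state's rate with the
# national rate, in normalized units.
sigma = 1.0  # volatility of a state's year-to-year change
rho = 0.4    # state-vs-national correlation seen in the UCR data

var_state_alone = sigma**2
# Var(state - national) = Var(state) + Var(national) - 2 * rho * sigma^2
var_difference = sigma**2 + sigma**2 - 2 * rho * sigma * sigma

print(f"Variance of raw state changes: {var_state_alone:.2f}")  # 1.00
print(f"Variance after 'controlling':  {var_difference:.2f}")   # 1.20
```

Subtracting a weakly correlated national series adds noise instead of removing it, exactly as the raw numbers suggest.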

To find an effect large enough to be measured, gun control researchers sometimes group together dozens or hundreds of state legislative initiatives and then look for changes in homicide rates. But states with strong gun control regulations are different from states with weaker gun control regulations: they’re generally richer, more liberal, more urban, and they have lower murder and suicide rates. The cultural differences are too big and there’s just too much uncertainty in the data to say anything about what would happen if we enforced Greenwich, Connecticut, laws in Festus, Missouri.

Researchers try to avoid the pitfalls of before-and-after studies or interstate comparisons by using longer periods of time—say, by studying the change in gun homicide rates in the 10 years after legislation was enacted compared to the 10 years before. Averaging over a decade smooths out single-year noise, so a smaller effect might plausibly be detectable; but the comparison now has to contend with everything else that changed over a 20-year span, so the effect must be distinguished from 20 years’ worth of noise rather than one year’s.

Another limitation on the usefulness of all gun control studies is that the underlying data are incomplete and unreliable. Estimates of the number of working firearms in the U.S. differ by more than a factor of two—from around 250 million to 600 million—and most uses of firearms go unreported unless someone is killed or injured. We have some information on gun sales, but only from licensed dealers. We have records of gun crimes, but not all crimes are reported to the police, not all police departments report to the FBI, some reported incidents turn out not to be crimes, and reported crimes often have missing or erroneous details.

Even if you could somehow assemble convincing statistical evidence that gun violence declined after the passage of gun control legislation, there are always many other things that happened around the same time that could plausibly explain the change.

The solution is more basic research on crime and violence, rather than more specific studies on gun control legislation. Better understanding can lead to precise experimentation and measurement to detect changes too small to find in aggregate statistical analyses. 

By way of comparison, take the contribution of cigarette smoking to cancer. For years, smoking was alleged to cause cancer on the basis of aggregate statistics, and the studies were deeply flawed. Eventually, however, medical researchers—not statisticians or policy analysts—figured out how cigarette smoking affected cells in the lungs, and how that developed into cancer. Certain types of cancer and other lung problems were identified that were virtually only found in smokers. With this more precise understanding, it was possible to find overwhelming statistical evidence for each link in the chain.

In terms of studies on gun violence, suppose someday psychological researchers can demonstrate empirically the effect that being abused as a child has on the probability a person will commit a gun homicide. This more specific understanding of why violent crime occurs would allow precise, focused studies on the effect of gun control legislation. Instead of comparing large populations of diverse individuals, researchers could focus on specific groups with high propensities for gun violence.

Only when we know much more about why people kill themselves and each other, and how the presence or absence of guns affects rape, assault, robbery and other crimes, can we hope to tease out the effect of gun control measures.

It’s not just gun control. Nearly all similar research into the effects of specific legislation suffers from the same sort of problem: too much complexity for the available data. Political partisans yearn for statistical backing for their views, but scientists can’t deliver it. Yet researchers flock to favored fields because there is plenty of funding and interest in results, and they peer review each other’s papers without applying the sort of rigor required to draw actual policy conclusions.

This doesn’t mean that gun control legislation is necessarily ineffective. But absent legitimate scientific evidence, belief in the efficacy of additional gun control laws is, and will remain, a matter of faith, not reason.

Tellingly, the studies that have gotten the most media or legislative attention aren’t among the 123 that met RAND’s approval. The best studies made claims that were too mild, tenuous, and qualified to satisfy partisans and sensationalist media outlets. It was the worst studies, with the most outrageous claims, that made headlines.

One prominent study, which was touted from the debate stage by Sen. Cory Booker (D–N.J.) when he was running for president in the 2020 election, made the astounding claim that a permit requirement for handgun purchases in Connecticut reduced the state’s gun murder rate by 40 percent. It is true that the state’s gun murder rate fell rapidly after that law was passed in 1995, but so did gun murder rates throughout the country. The study’s 40 percent claim is the actual murder rate in Connecticut compared to something the researchers call “synthetic Connecticut,” which they constructed for the purpose of their study—a combination of mostly Rhode Island, but also Maryland, California, Nevada, and New Hampshire.
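For readers unfamiliar with the technique, here is a toy sketch of how a synthetic control is typically built. The numbers are invented for illustration and bear no relation to the study’s actual data or model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=2)

# Toy data: 10 pre-law years of gun homicide rates for 5 donor states.
donors = rng.normal(4.0, 1.0, size=(10, 5))
true_mix = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
treated = donors @ true_mix  # the treated state's pre-law rates

# Find nonnegative donor weights summing to 1 that best reproduce the
# treated state's pre-law history; the weighted mix is the "synthetic"
# state used as the post-law comparison.
def pre_law_error(w):
    return float(np.sum((treated - donors @ w) ** 2))

res = minimize(
    pre_law_error,
    x0=np.full(5, 0.2),
    bounds=[(0.0, 1.0)] * 5,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
)
print("Recovered donor weights:", np.round(res.x, 2))
```

When one donor dominates the weights, as Rhode Island did for synthetic Connecticut, the comparison inherits that single state’s quirks.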

As it turns out, the authors’ entire claimed effect (the 40 percent reduction they reported) was due to the fact that Rhode Island experienced a temporary spate of about 20 extra murders between 1999 and 2003, and synthetic Connecticut was more than 72 percent Rhode Island.

Even compared to synthetic Connecticut, the decline the authors found didn’t last. Although the law remained on the books, by 2006 the gun murder rate in real Connecticut had surpassed that of synthetic Connecticut, and it continued to increase to the point where it was 46 percent higher. The authors, despite publishing in 2015, elected to ignore data from 2006 onward.

This study is typical of the field: strong claims based on complex models and uncertain data. Worse, researchers often cherry-pick outcome measures, time periods, and locations to get their preferred results.

For example, take the studies that look at whether bans on assault-style weapons and large-capacity magazines, which are often passed together, have reduced the frequency or deadliness of mass shootings. Researchers define basic terms like “assault weapons” and “mass shootings” differently. They limit their data by time, place, or other factors, such as classifying an event as an act of terror or gang violence and therefore not considering it a mass shooting.

These studies suffer from even greater data issues than other gun violence research. Mass shootings are extremely rare relative to other forms of gun violence, and most of them don’t involve assault weapons. Though estimates vary depending on the definitions used, mass shootings involving assault weapons constitute a small fraction of one percent of all gun homicides. 

The U.S. federal ban on assault weapons and large capacity magazines, which was the subject of numerous studies that reached widely varying and often contradictory conclusions about its efficacy, was in place for 10 years, from 1994 to 2004. Before, during, and after the time the law was in effect, many societal factors caused crime rates to vary widely, making it impossible to draw useful conclusions about the effect of the ban on anything, and in particular on something as rare as mass shootings. But with all the noise in the data, it is easy for researchers to find weak results that support any conclusion they hope to reach.

Moreover, states and countries with bans define assault weapons and other key elements of laws differently. Combined with the data problems inherent in comparing different populations of people over different periods of time, comparisons between states and countries are almost meaningless.

Another RAND Corporation meta-analysis, updated in 2020, found the evidence that bans on assault weapons and large-capacity magazines have any effect on mass shootings or violent crime to be inconclusive.

But how about the more straightforward question of whether owning a gun makes you more or less safe? One widely influential study that has constantly resurfaced in headlines since it was published in the New England Journal of Medicine in 1993 concluded that, “rather than confer protection, guns kept in the home are associated with an increase in the risk of homicide by a family member or intimate acquaintance.” 

There are major problems with this study. First of all, the researchers concluded that keeping a gun at home increases a person’s risk of being killed, but nearly half the murders they included in their analysis were not committed with a firearm. And among gun owners who were killed with a gun, the authors didn’t establish whether the weapon used was the victim’s own gun or if it belonged to another person. 

This points to another explanation for why research on this topic is so often inconclusive: individual differences can’t easily be controlled for in social science research. A gun expert with a gun safe in a high-crime neighborhood may well be safer with a gun, whereas a careless alcoholic living in a low-crime area who keeps loaded guns in his kids’ closet is certainly going to be less safe. People want a simple overall answer to whether guns make you less safe or more safe in order to inform legislation, but social science cannot deliver that.

Population averages can be useful when one rule has to be applied to everyone—for example, estimating how many lives would be saved by a pollution control regulation, or how many dental cavities are prevented by fluoridating the water supply. But with guns and personal safety, the relevant question is not whether guns make the average gun owner safer, but which people guns make safer and which people guns make less safe.

Anyone basing a gun control position on scientific evidence of any kind is building on sand. We have no useful empirical data on the subject, no body of work that rises above the level we would expect based on random chance, either for or against gun control measures. And the claim that there are “simple, commonsense” laws we could pass that would significantly reduce gun violence, if only we had the political will to go through with them, is simply false.

These are complex issues that require rigorous scientific investigation to come to any kind of useful conclusion, and they depend far more on individual variation and broad social and cultural factors than on any regulation. We should not look to pass laws that sweep up innocent victims while potentially doing more harm than good, all with the alleged backing of science that can’t possibly tell us what we need to know.

Produced and edited by Justin Monticello. Written by Monticello and Aaron Brown. Graphics by Isaac Reese. Audio production by Ian Keyser.

Music: Aerial Cliff by Michele Nobler, Land of the Lion by C.K. Martin, The Plan’s Working by Cooper Cannell, Thoughts by ANBR, Flight of the Inner Bird by Sivan Talmor and Yehezkel Raz, and Run by Tristan Barton.

Photos: Hollandse-Hoogte/ZUMA Press/Newscom; Robin Rayne/ZUMAPRESS/Newscom; Ted Soqui/Sipa USA/Newscom; YES Market Media/Yaroslav Sabitov/YES Market Media/Newscom; Chuck Liddy/TNS/Newscom; YES Market Media/Yaroslav Sabitov/YES Market Media/Newscom; Brett Coomer/Rapport Press/Newscom; Martha Asencio-Rhine/ZUMAPRESS/Newscom; Jebb Harris/ZUMA Press/Newscom; John Gastaldo/ZUMA Press/Newscom; Greg Smith/ZUMA Press/Newscom; Richard Ellis/ZUMA Press/Newscom; Matthew McDermott/Polaris/Newscom; KEVIN DIETSCH/UPI/Newscom; Bill Clark/CQ Roll Call/Newscom; Michael Brochstein/ZUMAPRESS/Newscom; Sandy Macys/UPI Photo Service/Newscom; E. Jason Wambsgans/TNS/Newscom; Eye Ubiquitous/Newscom; Matthew McDermott/Polaris/Newscom


