The state of gun violence research is poor, writes Maggie Koerth-Baker. Right now, whatever your beliefs on guns are, it’s incredibly difficult to back them up with any solid science at all.

“Our daughter lives about a mile from us, in a rural area. One night, while her son and husband were away, she comes over to visit. She’s over 40 now, but still, when she leaves, I say, ‘give me a call when you get home’.”

Charles Wellford is a professor of criminology and criminal justice at the University of Maryland. He’s a scientist who studies how social systems work, an expert in the process of homicide investigations. He knows far more about crime than the average American, but that doesn’t stop him from being scared in a very normal, average way.

On this particular night, Wellford waited the 10 minutes he thought it would take his daughter to get home, but he didn’t hear anything. At first, he figured it was no big deal. He rationalized that she must have gotten delayed. But then 20 minutes went by and the phone was still silent. At 30 minutes, Wellford tried calling his daughter. Nobody answered. He waited a few minutes and dialed the number a second time. The phone rang. The voicemail picked up. This was the point where Charles Wellford really started to worry.

“So I went upstairs and I got a revolver and got in my car and drove out there,” he told me. “I pull up, and her car is there and all the lights are on everywhere. Now I’m convinced – somebody was in the house. Someone else was there when she got home. I get the gun and I start walking towards the house. And that’s when my daughter comes out of the barn,” he said.

“She’d just started doing chores and she’d forgotten to call.”

This is more than just a story about a jumpy father, worried for his child’s safety. It’s a story that illustrates how complicated and flawed the science on gun use and gun violence is in the United States.

If you were studying gun use, and you wanted to know how often guns were used in self-defense, how would you categorize Charles Wellford’s experience?

If you look at real-world research, Wellford said, the answer is far from consistent. Some research papers would classify his story as an example of defensive gun use. Others wouldn’t. And that difference in definitions is part of why we don’t have solid answers to the big questions about gun violence, gun ownership, and the effects of gun laws.

Wellford doesn’t study guns, himself. But in 2004, he served as the chairman of a National Academy of Sciences panel that reviewed a huge amount of gun violence research and presented a sort of “state-of-the-field” report summarizing what we know, what we don’t know, and why.

The results were less than glowing. In the executive summary, the committee wrote that, despite lots of research, it was still impossible to answer some of the most pressing questions surrounding gun violence. The paper does its best to praise researchers for the good work they have produced – this isn’t a situation where we know absolutely nothing about gun use, gun ownership, and the impact of gun laws. But the committee members I spoke with were also critical of the field and said that the confidence politicians, lobbyists, and activists put in this research is seriously premature. Gun violence research suffers from a lack of consistently recorded data and, for that matter, a lack of data in general. As John Pepper, an associate professor of economics at the University of Virginia and the study director on the 2004 report, put it, “The data are just terrible.”

Worse, critics say the methods used to analyze that data are also deeply flawed in many cases. What you end up with, researchers told me, is a field where key pieces of the puzzle are missing entirely and where multiple scientists are reaching wildly different conclusions from the exact same data sets. For instance, because of those differences in the definition of “defensive gun use”, some researchers will tell you that Americans use a gun to defend themselves something like 1.5 million times every year. Others say it happens maybe 200,000 times annually.
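
To see how a definitional gap balloons into a million-use gap, here is a minimal back-of-the-envelope sketch. The survey size and response counts below are invented for illustration, and the adult-population figure is only a rough approximation; the point is the arithmetic of extrapolation, not the specific numbers.

```python
# Back-of-the-envelope sketch: how the definition of "defensive gun use"
# changes a national estimate. The survey numbers below are invented for
# illustration; only the rough size of the U.S. adult population is real.

US_ADULTS = 235_000_000  # approximate U.S. adult population, circa 2010

def national_estimate(reported_uses, survey_size, population=US_ADULTS):
    """Extrapolate a survey's rate of defensive gun use to the whole country."""
    return (reported_uses / survey_size) * population

# Hypothetical survey of 5,000 adults.
# A broad definition -- any time a gun came out because of a perceived threat,
# which might well include Wellford's drive to the barn -- catches 32 people:
print(f"Broad definition:  ~{national_estimate(32, 5_000):,.0f} uses per year")

# A narrow definition -- the gun was actually brandished or fired at an
# identified attacker during a crime -- catches only 4 of those same people:
print(f"Narrow definition: ~{national_estimate(4, 5_000):,.0f} uses per year")
```

Same respondents, same survey; all that changed is which stories count as “defensive gun use”, and the national figure moved by more than a million.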

That kind of variability does not create an environment where it is easy to craft evidence-based policy, and the situation has not improved since 2004, Wellford said.

A couple of months ago, I wrote a short piece here at BoingBoing, briefly addressing these issues. That piece was written quickly, mostly by reading a few review analyses. Because gun violence – and how to deal with it – continues to be a major issue in our society, I wanted to come back to these questions and dig a little deeper. We know that gun violence research is deeply flawed. We know that it cannot currently answer the questions we need it to answer. But why? What, specifically, is missing? What about this field is broken? And how do we fix it?

According to scientists who do gun research, scientists who were involved in the National Academies review, and scientists who study the way other scientists do research, there are two key problems. First is the issue of missing and poorly matched data. Second, there are serious problems with the mathematical models scientists use to analyze that data, and with the type of conclusions they attempt to draw from it. In this first of a two-part series, I’m going to focus on the data.

About 11,000 Americans died at the end of a gun in 2010. We know that because the basic, Clue-esque information on who is killed, where, and with what gets documented by local law enforcement agencies – all of which is, in turn, compiled by the FBI into the Uniform Crime Report. This system has been around since 1930.

The other primary source of this kind of information is the CDC’s National Violent Death Reporting System. It’s been around since 2002 and collects more-detailed information than the Uniform Crime Report. For one thing, it includes suicides. When I say that guns killed 11,000 people in 2010, I’m only talking about deaths that were classified as homicides. Another thing the CDC records do is link deaths to other pieces of information – like previous domestic violence calls – that can help researchers understand what led up to the death. Unfortunately, only 18 states participate in that system.

In 1989, the FBI also started collecting more-detailed reports of crimes – including crimes that might involve a gun, but not be homicides – as part of the National Incident Based Reporting System. But that system is still used by only a small minority of law enforcement agencies.

Taken all together, these reporting systems give scientists a place to start. But it’s just that. A place to start. It’s a nice diagram of your street. It’s not a road map showing you the way to your cousin’s house in Cleveland.

One of the big problems is something that you’ve already seen here – definitions. How one person collecting data classifies a type of crime can be different from how somebody else does it, and neither of those might really capture the details of specific cases.

Mark Hoekstra is an associate professor of economics at Texas A&M University. He’s been studying the effects of stand-your-ground laws – legislation that changes the way the law expects people to act when they feel threatened. Historically (and this dates back to English common law), you were expected to remove yourself from a threatening situation, rather than attacking the person you felt threatened by … unless the situation happened within your own home. Stand-your-ground laws basically expand the places and situations where it’s legally acceptable to go straight to “fight” without first attempting “flight”.

So what happens when a state institutes a stand-your-ground law? A good way to study this, as you might guess, is to start by looking at the rates of justifiable homicides and the rates of criminal homicides and see how each changes after the law takes effect. The good news is that the FBI has a standardized definition of what “justifiable homicide” means.

The problem: The FBI definition doesn’t necessarily capture the full story of what’s going on. The FBI calls justifiable homicide “the killing of a felon during commission of a felony”, Hoekstra said. There are only about 200-300 of those reported annually in the entire country, he said. But nobody knows whether that is because justifiable homicide is actually rare, or whether it’s more common, but not captured by the reporting system. Remember, what’s happening here is that somebody puts another tick mark under one category or another. The details of how specific shootings happened and why don’t usually make it into the record.

It’s easy to imagine lots of situations that wouldn’t fit neatly into the FBI definitions. “Like one guy breaks a beer bottle and hits the other guy with it, and the guy who got hit shoots and kills the first guy,” Hoekstra said. “According to the FBI handbook, that’s not legally justifiable. But you don’t know the specific details of the case. In reality, you can imagine a situation where that scenario was deemed justifiable. You can also imagine a situation where it would be criminal and the guy would go to prison.”

That makes it difficult for people like Hoekstra to study justifiable homicide, and it makes it difficult for lay people, like you and me, to understand what’s going on when we hear about stuff like this in the news or see statistics repeated on a Facebook JPEG. There’s a lot of room for people and organizations to take a concept – what happens when states institute stand-your-ground laws, say – and fiddle with different ways of counting until they end up being able to make the statement they want to make. What’s more, those folks can all probably make a decent case for why they chose to tally up the numbers the way they did. It’s not really as simple as someone lying to you and someone not. At least, not always. When data and definitions don’t capture the full story, it leaves room for reasonable (and unreasonable) people to group the numbers in different ways.

Whether you think it’s the guns or the people that kill people, you’re bound to agree that homicide isn’t the only kind of violence guns end up involved in. Guns are part of burglaries. They’re used as a threat in some kinds of rape. They’re used to harass and intimidate victims of domestic violence. Sometimes, people who are shot with guns don’t die. Sometimes, people shoot themselves, whether accidentally or intentionally.

All of those things are, presumably, affected in some way by the availability of guns and by the regulations that we place on guns. This isn’t just about people killing one another. But research on gun violence tends to focus on homicide. And there’s a very good reason for that.

“Start with deaths and go down from there to shooting yourself in the hand,” Charles Wellford explained. “As you go down that continuum, the comprehensiveness and quality of the data decreases.”

There’s a lot we just don’t know when it comes to how guns are used and misused in a whole range of violent events. The simple explanation is that a dead body is hard to hide. Murders get reported to police. The police generally follow up on those cases and report them to the FBI. Other crimes are much more of a patchwork, said John Donohue, professor of law at Stanford Law School. People may or may not call the cops to report domestic violence or an assault by someone they know. If the cops are called, the situation may or may not be taken seriously enough that it’s logged in any meaningful way. And if the violent incident in question isn’t technically a crime – shooting yourself in the foot, for instance, or drunkenly blowing a hole in your mother-in-law’s garage on the 4th of July – there’s no reason why that information would be reported to the FBI’s Uniform Crime Report, to begin with.

All those things matter very much to the people who are trying to figure out how guns are used in our society and how gun use changes over time. But there’s not really a solid, nationwide, uniform way of tracking any of it. So what we say we know about gun violence is almost always just a synonym for what we know about gun murders.

And that’s not the only information that is just flat-out missing.

Think about right-to-carry laws, which allow licensed individuals to pack heat in a holster or handbag, or even just slung over their shoulder at a JC Penney. Scientists like Donohue and Hoekstra study the effects of those laws by analyzing data on crime statistics – murders, and whatever else happens to be available in the states they’re researching. That information can help them get an idea of what’s going on. But to really understand how specific concealed-carry laws affect those crime statistics you would need to know what people are actually doing with their newfound rights. How many people were carrying guns last year? How about this year? How often do they carry them? Where do they take them? That data simply doesn’t exist, Wellford told me.

Another thing we don’t have is reliable, long-term data on where the guns that are actually used in crimes come from. One of the ways we legislate gun use is through registration programs and systems that limit who can buy a gun legally. But if we don’t know whether guns used in crimes are purchased legally, illegally, or purchased legally and then sold or given illegally to a third party, we have no idea how to craft those laws or even if they make any difference at all.

Finally, consider the question of whether more guns in the hands of law-abiding citizens serves as a deterrent to criminals. That’s a pretty basic argument that many people make, and scientists try to answer that question using lots of different methods. (For the record, the National Academies report came to the conclusion that the research is currently inconclusive on this. Right now, we don’t know whether having more guns means less crime, or more crime, or whether it has any effect at all. The research is all over the place and nobody has made a strong enough case to be conclusive.)

But here’s one thing nobody has ever done: Find out what the criminals think. That same issue also came up when John Pepper was involved in a National Academies panel considering research on the death penalty. “If you think about whether it has a deterrent effect, we know almost nothing, because we know almost nothing about how offenders perceive the risk of execution,” he told me. And the same is true of the risk of being shot by a potential victim.

Sixty years ago, nobody really knew how America had sex. Sure, scientists could guess sex was happening, based on the basic population numbers collected in the census. But who was doing it, when, with whom … that was all lost in the mists of incredibly awkward conversations that nobody wanted to have. Figuring out ways to collect and compile that data was a daunting task. And, in fact, a lot of people likely would have thought it was pretty invasive for scientists and government entities like the CDC to even want to know the answers to those questions.

But here we are, in 2013, and even if we don’t know exactly what people get up to between 9:35 and 9:37 on a Wednesday night, we do know a lot more about American sex habits. More importantly, we know how those sex habits affect other parts of people’s lives, and we know a lot more about how public policy affects both sex and quality of life. That matters. It’s uncomfortable, potentially invasive research that actually makes us aware of rapes and sexual assaults that go unreported in crime statistics. It’s that research that helps us track STD rates, and makes sure we notice when those patterns change for the better or worse. Research on sex means that we know more about teen sex, teen pregnancy rates, and how to reduce the latter.

“We made progress,” Charles Wellford told me. “There are lots of examples of difficult measurement issues and we didn’t just throw our hands up and walk away from them.”

We can solve the problems with gun violence data, scientists say, but it’s going to take funding and it’s going to take political willpower. There are a few key solutions that the researchers I spoke with suggested.

First, we need to expand the crime reporting systems that track a broader range of incidents and collect more detailed accounts of what actually happened in those incidents. That means expanding the CDC’s National Violent Death Reporting System from 18 states to 50. And it means getting more local law enforcement agencies to use the FBI’s National Incident Based Reporting System. The basic Uniform Crime Report has been useful, they say, but it’s time to bring this kind of reporting into the 21st century.

The harder task is going to be finding ways to collect a new kind of data. Wellford calls them the “left side variables”. If you think about the relationship between crime and guns as an equation, he said, all we really have right now is the information on the right-hand side of that equation. We have data on the occurrence of gun violence. What we’re missing is all the stuff that connects people to those guns.

“None of the surveys used to study other crimes, where you could include information about guns and then link that up to other things we care about like crime, labor markets, schooling outcomes … we just don’t have the data,” John Pepper said. “Take a simple question about correlation between gun ownership and crime, or gun ownership and suicide. We can’t even answer that.”

There are two ways to study questions like those. If you had a survey or some reports that could tell you how many gun owners in the state of Virginia had committed suicide, then you could compare that to suicides among people in Virginia who didn’t own guns. Alternately, you could take broadly aggregated data about how many suicides happen in the state of Virginia and broadly aggregated data about gun ownership rates in the state of Virginia, and compare those statistics to other states. You can easily tell that the former method is going to produce a much more accurate estimate of the relationship between gun ownership and suicide than the latter. But right now, we have no way to do the former.
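
To make that contrast concrete, here is a minimal sketch of the two designs with entirely invented numbers; none of the values below are real data, and the states and rates are placeholders.

```python
# Two ways to ask "are gun ownership and suicide related?", on made-up data.

# Design 1: individual-level records (the kind of linked data we don't have).
# Each tuple is one hypothetical person: (owns_gun, died_by_suicide).
people = [
    (True, 1), (True, 0), (True, 0), (True, 0),
    (False, 0), (False, 1), (False, 0), (False, 0), (False, 0), (False, 0),
]
owner_outcomes = [died for owns, died in people if owns]
non_owner_outcomes = [died for owns, died in people if not owns]
print(f"Owners:     {sum(owner_outcomes) / len(owner_outcomes):.0%}")
print(f"Non-owners: {sum(non_owner_outcomes) / len(non_owner_outcomes):.0%}")

# Design 2: broadly aggregated state-level statistics (roughly what exists).
# ownership_rate is a fraction of households; suicide_rate is per 100,000.
states = {
    "State A": {"ownership_rate": 0.45, "suicide_rate": 14.1},
    "State B": {"ownership_rate": 0.20, "suicide_rate": 9.8},
    "State C": {"ownership_rate": 0.35, "suicide_rate": 12.3},
}
for name, s in states.items():
    print(f"{name}: ownership {s['ownership_rate']:.0%}, "
          f"suicides {s['suicide_rate']}/100k")

# The aggregates may show a state-level correlation, but they can never tell
# you whether the people dying are the people who own the guns -- the classic
# ecological-inference problem.
```

The first design answers the question directly; the second only lets you infer it from the outside, which is why the missing individual-level data matters so much.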

Creating a system that allows scientists to gather that data might be objectionable to some people who own guns. But think of it this way. Right now, whatever your beliefs on guns happen to be, it’s incredibly difficult to back them up with solid science. If you want to be able to make any kind of statement about gun ownership and the effects thereof – and have anybody who doesn’t agree with you 100% actually take you seriously – then you should support better data. This should be the first step. Because right now, we don’t know enough to know definitively what effects guns have, or what effects gun policies have.

Better data would help that. But, unfortunately, it’s not the only thing that needs fixing. In my next post on gun violence research, I’ll focus on the way scientists analyze data. To avoid misleading conclusions, we need good mathematical models. But some experts say we don’t have those. So what does that mean for the research scientists are publishing? And what does it tell us about the usefulness of evidence-based policy making, in general? Stay tuned.

 

Playing politics

This is a story about science, a peek behind the scenes at some of the factors that make it difficult for experts to come to definitive conclusions about how gun ownership and gun laws affect crime, violence, and self-defense. But, in a lot of ways, it’s impossible to separate that from politics. Data and methodology are the gooey filling. Politics is the crust. It’s all one pie.

In particular, it’s important to acknowledge that politics is a big part of why some of the missing data discussed here is actually missing. This issue goes far beyond a simple lack of funding for the expansion of improved crime reporting systems.

For instance, in this piece, I mentioned that social scientists don’t have a good way to track where a gun used in a crime came from. And that problem isn’t unique to science – local law enforcement runs into the same roadblocks. From a practical perspective, this is an easy problem to solve. From a political perspective, it’s not – the Bureau of Alcohol, Tobacco, and Firearms is prohibited by law from setting up a national, centralized gun tracking system. While the agency does collect data on gun sales and background checks, it’s forced to regularly destroy some of that information. And it can’t share the information it does retain with any member of the public. This was a key complaint voiced by Charles Wellford and other scientists, who count as members of the public.

Another key problem, at least in the eyes of the researchers I spoke with, was the existence of Public Law 104-208 and Public Law 112-74. Passed in 1996 and 2011, respectively, these pieces of legislation included provisions that prevented first the Centers for Disease Control and Prevention, and then all Department of Health and Human Services agencies, from using federal funding to advocate or promote gun control. The language used in this legislation was both broad and vague. According to researchers I spoke with, those laws have had a chilling effect on government-funded scientists, who worried any research they did could be construed as gun-control advocacy if legislators didn’t like the results.

Charles Wellford described a meeting he recently chaired, aimed at formulating research goals and figuring out how to fill in some of the most important blanks in gun violence research. Of the 15 people at the meeting, one was from the CDC. Throughout the meeting, the man began nearly every statement he made by first hedging, explaining that he didn’t want ideas attributed to him and that the CDC would never consider research directions he might personally recommend. “[That legislation] doesn’t say we can’t do research on, say, whether someone has a gun in their home, but careers were damaged and people lost jobs and that has a lasting effect,” Wellford said.

Not all research is done the same way or with an equal level of quality. John Pepper, for instance, was very critical of the gun violence research coming out of the public health sector, which would include work being done by the CDC. This kind of research often matches populations of people who have been subjected to violence to populations that have not and looks for commonalities and differences between them. But that perspective is based on modeling the spread of disease, not on modeling complex behaviors and decisions, Pepper said. He didn’t think it did a good job of dealing with questions of correlation vs. causation. “A gun isn’t a virus,” he said.

But, even so, Pepper thought the legal restrictions were unreasonable, and he thought they had a detrimental effect on gun violence research, in general, not just on the research coming out of the public health tradition. For instance, some of the important work the CDC used to do, he said, included adding questions about gun ownership and gun violence to national surveys that tracked a wide variety of health issues and outcomes – data that would have been useful to social scientists and public health experts, alike.

Some of this may change in the future. On January 16th, President Obama issued an executive action authorizing (and, in fact, mandating) the CDC to do gun violence research and collect better crime data. But the pesky money problem still exists. The action called for $10 million in funding for research and another $20 million in funding to expand the National Violent Death Reporting System. That cash, however, has to come from Congress. And if there’s one thing we can all agree on, it’s that this is probably not the ideal time to push large spending bills through Capitol Hill.

For further reading on the political side of gun research:

• A New York Times story by Erica Goode and Sheryl Gay Stolberg that digs into the restrictions placed on the ATF.

• Justin George of The Baltimore Sun writes about the ATF restrictions and the problems this presents for policy making.

• A CBS Evening News report by Mark Strassman on the legislation that blocked gun research at the CDC.

READ PART 2