*Q: What should one do with papers that largely consist of non-findings? Is it just a waste of time trying to publish a study without significant effects? Or should I simply aim for lower-quality journals with such papers?*

Author: Bill Carbonaro, University of Notre Dame

This is an important question that we must all confront as researchers. Not every study works out as we anticipated. No one wants to waste time on a project that has little chance of success. However, I think it is a myth that good journals **only** publish papers with statistically significant results. Yes, there is a bias in favor of publishing papers with statistically significant findings, but there are many great examples of papers published in top journals that are built around “non-findings.” Here are the lessons that I have learned from these papers:

**Characteristics of successful papers with “non-findings”**

*1. Have a good theory.* For starters, it is important to have a well-grounded theoretical explanation for why one would either expect, or not expect, to find a relationship between the variables in your analysis. If your paper doesn’t find a relationship between X and Y, and the reader doesn’t have a clear idea of why we would **expect** that X should (or should not) covary with Y, then your paper is doomed. Ainsworth-Darnell & Downey (AD&D) (*ASR*, 1998) relied upon Ogbu’s much-cited “oppositional culture” framework to generate several hypotheses about black-white differences in attitudes, behaviors, and school performance. They didn’t find any support for Ogbu’s hypotheses, but that’s what makes the paper so interesting (and one reason the paper is so often cited!).

*2. Cover new ground.* The best-case scenario is that one has a theory that has not been tested, and there is a “research vacuum” to exploit. AD&D covered new ground by being the first paper to systematically evaluate Ogbu’s claims with a multivariate analysis of nationally representative data. Their “non-findings” filled an important void in the field. In separate studies, Hallinan & Kubitschek and I examined sector differences in learning among elementary school students, and found either no effects or positive public school effects. This went against the grain of prior research on the Catholic school advantage in high school (see point 3 below), but it also exploited the “research vacuum” in this area.

*3. Re-visit old ground.* Non-findings are especially interesting when they go against the grain of prior research. If prior research consistently finds that X matters for Y, and you find that X doesn’t matter for Y, that’s interesting! The big question, of course, is WHY you didn’t find that X matters for Y. If you used different data than other scholars, you need to show why your data are superior in quality to prior research. If your data are more recent, then you have a very good story to tell: X used to matter for Y, but it no longer does. Of course, you need a good explanation for *why* X no longer matters for Y. Think of Wilson’s *The Declining Significance of Race* and *The Truly Disadvantaged*. Both are much-debated works, but they are compelling and provocative narratives! Finally, along the lines of points 4 and 5 below, you have an interesting paper if you can show that, in contrast with prior research, your use of higher quality measures and/or more rigorous methods indicates that X doesn’t matter for Y.

*4. Have impeccable measures.* If your analysis has poor measures of your key concepts, it shouldn’t be surprising that you have insignificant findings in your analyses. In order for a “non-findings” paper to be compelling, the reader should be convinced that the measures are credible, with high validity and reliability. This is a common problem in “non-findings” papers that I reject as a reviewer. AD&D used multiple measures of “oppositional culture” in their study, and while one might prefer better measures, their measures were credible enough to make a significant contribution to the field.

*5. Have a rigorous (and possibly novel) research design*. Happily, the bar of what counts as “rigorous research” is constantly being raised. “Non-findings” are especially important when one can make a convincing argument that prior research findings are in fact “artifacts” of weak research design/methodology. For example, Guo and VanWey (*ASR*, 1999) used fixed effects models (rather than cross-sectional, between family models) to examine sibling effects on achievement, and found no effects of siblings. This study has been much debated, but it was compelling and innovative enough to merit publication in a top journal. Mouw (*ASR*, 2003) re-examined research on network ties and job searches using substantially more rigorous empirical models than prior research, and he found that job contacts did not affect job search outcomes. In each of these cases, reviewers were obviously convinced enough by the results (based on the rigor of the design and analysis) to deem the paper publishable.

To summarize, don’t immediately despair if your paper is filled with “non-findings.” Some “non-findings” papers should be abandoned, but typically, that is because they are not very good papers to begin with! Under the right circumstances, “non-findings” are actually “FINDINGS” that can be published in top journals, and may have an important impact in your subfield.

Related to points 4 and 5, I would add that a worthwhile “non-finding” requires a very precise point estimate of zero. An imprecise point estimate of zero implies a confidence interval consistent with “X has an effect anywhere from moderately negative to moderately positive,” which is not informative. In other words, you can’t get credit for a non-finding if you’ve just overloaded your model with controls and inflated your standard errors.

There is some asymmetry here: an imprecise estimate of a non-zero effect can still be interesting if the point estimate is large enough. The confidence interval will at least tell us that the likely effect size ranges from moderately large to very large. So from one perspective, a non-finding is harder to demonstrate rigorously than a large finding.
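This asymmetry can be made concrete with a few hypothetical numbers. The sketch below (illustrative values only, standard 95% normal-approximation intervals) contrasts a precise zero, an imprecise zero, and an imprecise but large estimate:

```python
import math

def ci95(estimate, se):
    """95% confidence interval for a point estimate (normal approximation)."""
    half_width = 1.96 * se
    return (estimate - half_width, estimate + half_width)

# Effect sizes in standard-deviation units (hypothetical numbers).

# A precise zero is informative: the interval rules out all but tiny effects.
precise_zero = ci95(estimate=0.01, se=0.02)

# An imprecise zero is uninformative: the interval is consistent with
# moderately negative through moderately positive effects.
imprecise_zero = ci95(estimate=0.01, se=0.15)

# The same imprecision around a LARGE estimate can still be interesting:
# the interval excludes zero and all small effects.
imprecise_large = ci95(estimate=0.60, se=0.15)
```

With these numbers, the imprecise zero spans roughly −0.28 to +0.30 standard deviations, which says almost nothing, while the equally imprecise large estimate spans roughly 0.31 to 0.89, which is a genuine finding.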

Yes – Carl raises an excellent point. If you are going to build a paper around a “no effect” finding, you have to be very thoughtful about what your “non-effect” really means, and why you found this result. For example, you should make sure that you are not simply making a type II error due to a small sample size and/or substantial heterogeneity in the sample. Power is an important but often under-appreciated issue in fixed effects analyses, where within-unit variation is often very low (particularly when only two time points are being compared), and one can end up with large coefficients but also very large standard errors. A power analysis can often help shed light on these issues. Another common problem is a narrow focus on “statistically significant” coefficients whose magnitude is trivial. For example, I recently reviewed a paper in which the author identified a statistically significant effect of variable X on Y (largely due to a huge sample size). However, the effect size was roughly one-hundredth of a standard deviation! To me, the author should have simply concluded that the relationship in question was more or less zero, and left it at that. Instead, s/he tried (unsuccessfully) to convince the reader that s/he had found something that was substantively important.
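Both failure modes above (under-powered nulls and over-powered trivia) can be illustrated with a back-of-the-envelope power calculation. This is a minimal sketch using the standard normal approximation for a two-sample comparison of means, with made-up sample sizes and effect sizes:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_two_sample(effect_sd, n_per_group, z_alpha=1.96):
    """Approximate power of a two-sample test to detect a true effect of
    `effect_sd` standard deviations, with `n_per_group` cases per group
    (two-sided alpha = .05, normal approximation)."""
    se = math.sqrt(2.0 / n_per_group)  # SE of the mean difference, in SD units
    z = effect_sd / se
    return normal_cdf(z - z_alpha) + normal_cdf(-z - z_alpha)

# Type II error risk: a small study has little chance of detecting a
# modest effect, so its "non-finding" is weak evidence of no effect.
small_study = power_two_sample(effect_sd=0.20, n_per_group=50)

# The opposite problem: with an enormous sample, even a trivial effect
# (one-hundredth of a SD or so) routinely comes out "significant".
huge_study = power_two_sample(effect_sd=0.01, n_per_group=100_000)
```

With these hypothetical numbers, the small study has well under 20% power, so its null result is nearly uninterpretable, while the huge study flags a substantively meaningless effect more often than not.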