Hypotheses: Introduction & selection of articles
Methodspace explored phases of the research process throughout 2021. The first quarter covered design steps, starting with a January focus on research questions.
A discussion of the hypothesis fits with this month's focus on research questions and the initial stages of design. Let's start with a basic introduction by Malcolm Williams from The SAGE Encyclopedia of Social Science Research Methods. Following this explanation, you will find a multidisciplinary selection of open access articles about hypotheses and research design.
[The hypothesis] is a central feature of scientific method, specifically as the key element in the hypothetico-deductive model of science (Chalmers, 1999). Variations on this model are used throughout the sciences, including social science. Hypotheses are statements derived from an existing body of theory that can be tested using the methods of the particular science. In chemistry or psychology, this might be an experiment, and in sociology or political science, the social survey. Hypotheses can be confirmed or falsified, and their status after testing will have an impact on the body of theory, which may be amended accordingly and new hypotheses generated. Consider a simplified example: Classical migration theory states that economic push and pull factors will be the motivation for migration, and agents will have an awareness of these factors. From this, it might be hypothesized that migrants will move from economically depressed areas to buoyant ones. If, upon testing, it is found that some economically depressed areas nevertheless have high levels of in-migration, then the original theory must be amended in terms of either its claims about economic factors, or the agents' knowledge of these, or possibly both.
Rarely, however, is it the case that hypotheses are proven wholly right (confirmed) or wholly wrong (falsified), and even rarer that the parent theory is wholly confirmed or falsified. Through the second half of the 20th century, there was a great deal of debate (which continues) around the degree to which hypotheses and, by implication, theories can be confirmed or falsified. Karl Popper (1959) maintained that, logically, only a falsification approach could settle the matter, for no matter how many confirming instances were recorded, one can never be certain that there will be no future disconfirming instances. Just one disconfirming instance will, however, demonstrate that something is wrong in the specification of the theory and its derived hypothesis. Therefore, in this view, scientists should set out to disprove a hypothesis.
Falsification is logically correct, and its implied skepticism may introduce rigor into hypothesis testing, but in social science in particular, hypotheses cannot be universal statements and are often probabilistic. In the example above, no one would suggest that everyone migrates under the particular given circumstances, but that one is more or less likely to migrate. Indeed, in many situations, the decision as to whether one should declare a hypothesis confirmed or falsified may be just a matter of a few percentage points' difference between attitudes or behaviors in survey findings.
Hypotheses can be specified at different levels. Research hypotheses are linguistic statements about the world whose confirmation/falsification likewise can be stated only linguistically as “none,” “all,” “some,” “most,” and so on. In qualitative research, a hypothesis might be framed in terms of a social setting having certain features, which, through observation, can be confirmed or falsified. However, in survey or experimental research, hypothesis testing establishes the statistical significance of a finding and, thus, whether that finding arose by chance or is evidence of a real effect. A null hypothesis is stated—for example, that there is no relationship between migration and housing tenure—but if a significant relationship is found, then the null hypothesis is rejected and the alternative hypothesis is confirmed. The alternative hypothesis may be the same as, or a subdivision of, a broader research hypothesis (Newton & Rudestam, 1999, pp. 63–65).
Williams, M. (2004). Hypothesis. In M. S. Lewis-Beck, A. Bryman, & T. Futing Liao (Eds.), The SAGE encyclopedia of social science research methods. Thousand Oaks, CA: Sage.
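To make the null-hypothesis logic from the excerpt concrete, here is a minimal sketch in Python using scipy's chi-square test of independence. The contingency counts for the migration and housing-tenure example are invented for illustration; they are not data from the encyclopedia entry.

```python
# Sketch of a null-hypothesis test for the migration/housing-tenure example.
# The observed counts below are hypothetical, chosen only for illustration.
from scipy.stats import chi2_contingency

# Rows: migrated vs. stayed; columns: owner-occupier vs. renter.
observed = [
    [30, 70],  # migrated
    [55, 45],  # stayed
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

# H0: migration and housing tenure are independent (no relationship).
# If p falls below the chosen significance level, H0 is rejected in favour
# of the alternative hypothesis that the two variables are related.
alpha = 0.05
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```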
Donovan, S. M., O’Rourke, M., & Looney, C. (2015). Your Hypothesis or Mine? Terminological and Conceptual Variation Across Disciplines. SAGE Open. https://doi.org/10.1177/2158244015586237
Abstract. Cross-disciplinary research (CDR) is a necessary response to many current pressing problems, yet CDR practitioners face diverse research challenges. Communication challenges can limit a CDR team’s ability to collaborate effectively, including differing use of scientific terms among teammates. To illustrate this, we examine the conceptual complexity and cross-disciplinary ambiguity of the term hypothesis as it is used by researchers participating in 16 team building workshops. These workshops assist CDR teams in finding common ground about fundamental research assumptions through philosophically structured dialogue. Our results show that team members often have very different perceptions about the nature of hypotheses, the role of hypotheses in science, and the use of hypotheses within different disciplines. Furthermore, we find that such assumptions can be rooted in disciplinary-based training. These data indicate that potentially problematic terminological differences exist within CDR teams, and exercises that reveal this early in the collaborative process may be beneficial.
Goldberg, A. (2015). In defense of forensic social science. Big Data & Society. https://doi.org/10.1177/2053951715601145
Abstract. Like the navigation tools that freed ancient sailors from the need to stay close to the shoreline—eventually affording the discovery of new worlds—Big Data might open us up to new sociological possibilities by freeing us from the shackles of hypothesis testing. But for that to happen we need forensic social science: the careful compilation of evidence from unstructured digital traces as a means to generate new theories.
Raghavan, P. (2014). It’s time to scale the science in the social sciences. Big Data & Society. https://doi.org/10.1177/2053951714532240
Abstract. The social sciences are at a remarkable confluence of events. Advances in computing have made it feasible to analyze data at the scale of the population of the world. How can we combine the depth of inquiry in the social sciences with the scale and robustness of statistics and computer science? Can we decompose complex questions in the social sciences into simpler, more robustly testable hypotheses? We discuss these questions and the role of machine learning in the social sciences.
Slowiaczek, L. M., Klayman, J., Sherman, S. J., & Skov, R. B. (1992). Information selection and use in hypothesis testing: What is a good question, and what is a good answer? Memory & Cognition, 20(4), 392–405.
Abstract. The process of hypothesis testing entails both information selection (asking questions) and information use (drawing inferences from the answers to those questions). We demonstrate that although subjects may be sensitive to diagnosticity in choosing which questions to ask, they are insufficiently sensitive to the fact that different answers to the same question can have very different diagnosticities. This can lead subjects to overestimate or underestimate the information in the answers they receive. This phenomenon is demonstrated in two experiments using different kinds of inferences (category membership of individuals and composition of sampled populations). In combination with certain information-gathering tendencies, demonstrated in a third experiment, insensitivity to answer diagnosticity can contribute to a tendency toward preservation of the initial hypothesis. Results such as these illustrate the importance of viewing hypothesis testing behavior as an interactive, multistage process that includes selecting questions, interpreting data, and drawing inferences.
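The asymmetry the authors describe is easy to see with Bayes' rule: the same yes/no question can be weakly diagnostic for one answer and strongly diagnostic for the other. The sketch below uses invented probabilities and is only an illustration of the general idea, not the authors' experimental materials.

```python
# Illustration of answer diagnosticity via Bayes' rule.
# All probabilities are invented for this example.

def posterior(prior: float, p_answer_if_h: float, p_answer_if_not_h: float) -> float:
    """Return P(H | answer) given the prior and the answer's likelihoods."""
    numerator = prior * p_answer_if_h
    return numerator / (numerator + (1 - prior) * p_answer_if_not_h)

prior = 0.5                              # start undecided about hypothesis H
p_yes_if_h, p_yes_if_not_h = 0.90, 0.50  # P("yes") under H and under not-H

# "Yes": likelihood ratio 0.90 / 0.50 = 1.8 -> weak evidence for H.
print(f"P(H | yes) = {posterior(prior, p_yes_if_h, p_yes_if_not_h):.2f}")  # 0.64

# "No" to the SAME question: ratio 0.10 / 0.50 = 0.2 -> strong evidence
# against H. The two answers to one question differ sharply in diagnosticity.
print(f"P(H | no)  = {posterior(prior, 1 - p_yes_if_h, 1 - p_yes_if_not_h):.2f}")  # 0.17
```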
Veazie, P. J. (2015). Understanding Statistical Testing. SAGE Open. https://doi.org/10.1177/2158244014567685
Abstract. Statistical hypothesis testing is common in research, but a conventional understanding sometimes leads to mistaken application and misinterpretation. The logic of hypothesis testing presented in this article provides for a clearer understanding, application, and interpretation. Key conclusions are that (a) the magnitude of an estimate on its raw scale (i.e., not calibrated by the standard error) is irrelevant to statistical testing; (b) which statistical hypotheses are tested cannot generally be known a priori; (c) if an estimate falls in a hypothesized set of values, that hypothesis does not require testing; (d) if an estimate does not fall in a hypothesized set, that hypothesis requires testing; (e) the point in a hypothesized set that produces the largest p value is used for testing; and (f) statistically significant results constitute evidence, but insignificant results do not and must not be interpreted as evidence for or against the hypothesis being tested.
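Conclusion (e) can be illustrated with a one-sided hypothesis under a normal approximation: for a composite hypothesis such as θ ≤ 0, the boundary point θ = 0 yields the largest p value and is therefore the point used for the test. The estimate and standard error below are invented; this is a rough sketch of the logic, not Veazie's worked material.

```python
# Sketch of testing the composite hypothesis H: theta <= 0.
# Among all theta in (-inf, 0], the boundary theta = 0 produces the
# largest p value, so it is the point used for the test.
from scipy.stats import norm

estimate, se = 0.8, 0.4  # hypothetical estimate and standard error

z = (estimate - 0.0) / se
p_value = norm.sf(z)     # one-sided upper-tail p value at the boundary
print(f"z = {z:.2f}, p = {p_value:.4f}")  # z = 2.00, p = 0.0228
```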
Wood, M. (2019). Simple methods for estimating confidence levels, or tentative probabilities, for hypotheses instead of p values. Methodological Innovations. https://doi.org/10.1177/2059799119826518
Abstract. In many fields of research, null hypothesis significance tests and p values are the accepted way of assessing the degree of certainty with which research results can be extrapolated beyond the sample studied. However, there are very serious concerns about the suitability of p values for this purpose. An alternative approach is to cite confidence intervals for a statistic of interest, but this does not directly tell readers how certain a hypothesis is. Here, I suggest how the framework used for confidence intervals could easily be extended to derive confidence levels, or “tentative probabilities,” for hypotheses. I also outline four quick methods for estimating these. This allows researchers to state their confidence in a hypothesis as a direct probability, instead of circuitously by p values referring to a hypothetical null hypothesis—which is usually not even stated explicitly. The inevitable difficulties of statistical inference mean that these probabilities can only be tentative, but probabilities are the natural way to express uncertainties, so, arguably, researchers using statistical methods have an obligation to estimate how probable their hypotheses are by the best available method. Otherwise, misinterpretations will fill the void.
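As a rough sketch of the general idea (not Wood's four specific methods), the same normal approximation that underlies a confidence interval can be read as a confidence level for a directional hypothesis: the confidence that the true effect is positive is the confidence level of the one-sided interval with lower limit zero. The estimate and standard error below are invented for illustration.

```python
# Sketch: from a confidence interval to a tentative probability for a
# directional hypothesis, under a normal approximation. Numbers invented.
from scipy.stats import norm

estimate, se = 1.2, 0.5  # hypothetical estimate and standard error

# Conventional 95% confidence interval for the effect.
lo, hi = norm.interval(0.95, loc=estimate, scale=se)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")

# Tentative probability (confidence level) that the true effect exceeds 0:
confidence_positive = norm.cdf(estimate / se)
print(f"Confidence that effect > 0: {confidence_positive:.1%}")  # ~99.2%
```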
Wood, M., & Welch, C. (2010). Are ‘Qualitative’ and ‘Quantitative’ Useful Terms for Describing Research? Methodological Innovations Online, 5(1), 56–71. https://doi.org/10.4256/mio.2010.0010
Abstract. We examine the concepts of quantitative research and qualitative research and argue that this dichotomy has several dimensions which are often, erroneously, assumed to coincide. We analyse two of the important dimensions – statistical versus non-statistical, and hypothesis testing versus induction. The crude quantitative-qualitative dichotomy omits many potentially useful possibilities, such as non-statistical hypothesis testing and statistical induction. We also argue that the first dimension can be extended to include establishing deterministic laws and the consideration of fictional scenarios; and the second to include ‘normal science’ research based on questions defined by an established paradigm. These arguments mean that the possible types of research methods are more diverse than is often assumed, and that the terms ‘quantitative’ and ‘qualitative’ are best avoided, although other, more specific, terms are useful. One important sense in which the term ‘qualitative’ is used is simply to refer to the use of data which yields a deep and detailed picture of the subject matter: we suggest the use of the word ‘rich’ to describe such data.