Research Terms and Breakdown (Practice)

Research Terms

This glossary is intended to assist you in understanding commonly used terms and concepts when reading, interpreting, and evaluating scholarly research in the social sciences. Also included are general words and phrases defined within the context of how they apply to social sciences research.

Accuracy — a term used in survey research to refer to the match between the target population and the sample.

Affective Measures — procedures or devices used to obtain quantified descriptions of an individual’s feelings, emotional states, or dispositions.

Anonymity — a research condition in which no one, including the researcher, knows the identities of research participants.

Beliefs — ideas, doctrines, tenets, etc. that are accepted as true on grounds which are not immediately susceptible to rigorous proof.

Bias — a loss of balance and accuracy in the use of research methods. It can appear in research via the sampling frame, random sampling, or non-response. It can also occur at other stages in research, such as while interviewing, in the design of questions, or in the way data are analyzed and presented. Bias means that the research findings will not be representative of, or generalizable to, a wider population.

Case Study — the collection and presentation of detailed information about a particular participant or small group, frequently including data derived from the subjects themselves.

Causal Hypothesis — a statement hypothesizing that the independent variable affects the dependent variable in some way.

Causal Relationship — the relationship established that shows that an independent variable, and nothing else, causes a change in a dependent variable. It also establishes how much of a change is shown in the dependent variable.

Causality — the relation between cause and effect.

Central Tendency — any way of describing or characterizing typical, average, or common values in some distribution.
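
For illustration, the three most common measures of central tendency can be computed with Python's standard library; this is a minimal sketch using invented scores:

```python
# Minimal sketch: the three common measures of central tendency,
# computed from made-up illustrative data.
from statistics import mean, median, mode

scores = [72, 85, 85, 90, 68, 85, 77]

print(mean(scores))    # arithmetic average (about 80.3)
print(median(scores))  # middle value when sorted (85)
print(mode(scores))    # most frequent value (85)
```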

Chi-square Analysis — a common non-parametric statistical test which compares an expected proportion or ratio to an actual proportion or ratio.
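
As a hedged sketch of how such a comparison might be run in practice, SciPy's one-way chi-square test can compare observed category counts against an expected even split; the counts below are invented:

```python
# Hedged sketch: comparing an observed distribution of survey
# responses to an expected even split with SciPy's one-way
# chi-square test. The counts are illustrative, not real data.
from scipy.stats import chisquare

observed = [48, 35, 17]        # e.g., agree / neutral / disagree
expected = [100 / 3] * 3       # even split across 3 categories

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
```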

Classification — ordering of related phenomena into categories, groups, or systems according to characteristics or attributes.

Cluster Analysis — a method of statistical analysis where data that share a common trait are grouped together. The data are collected in a way that allows the data collector to group data according to certain characteristics.
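
As one illustrative sketch (not a prescription for any particular study), k-means clustering from scikit-learn groups invented two-dimensional points by proximity:

```python
# Small illustration of cluster analysis using k-means from
# scikit-learn; the points are invented so that two groups
# are easy to see.
from sklearn.cluster import KMeans

points = [[1, 2], [1, 4], [2, 3],     # one apparent group
          [9, 9], [10, 8], [8, 10]]   # another apparent group

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.labels_)  # cluster assignment for each point
```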

Cohort Analysis — group by group analytic treatment of individuals having a statistical factor in common to each group. Group members share a particular characteristic [e.g., born in a given year] or a common experience [e.g., entering a college at a given time].

Confidentiality — a research condition in which no one except the researcher(s) knows the identities of the participants in a study. It refers to the treatment of information that a participant has disclosed to the researcher in a relationship of trust and with the expectation that it will not be revealed to others in ways that violate the original agreement, unless permission is granted by the participant.

Confirmability [Objectivity] — the findings of the study could be confirmed by another person conducting the same study.

Construct — refers to any of the following: something that exists theoretically but is not directly observable; a concept developed [constructed] for describing relations among phenomena or for other research purposes; or, a theoretical definition in which concepts are defined in terms of other concepts. For example, intelligence cannot be directly observed or measured; it is a construct.

Construct Validity — seeks an agreement between a theoretical concept and a specific measuring device, such as observation.

Context Sensitivity — awareness by a qualitative researcher of factors such as values and beliefs that influence cultural behaviors.

Control Group — the group in an experimental design that receives either no treatment or a different treatment from the experimental group. This group can thus be compared to the experimental group.

Controlled Experiment — an experimental design with two or more randomly selected groups [an experimental group and control group] in which the researcher controls or introduces the independent variable and measures the dependent variable at least two times [pre- and post-test measurements].

Correlation — a common statistical analysis, usually abbreviated as r, that measures the degree of relationship between pairs of interval variables in a sample. The range of correlation is from -1.00 through zero to +1.00. Also refers to a non-cause-and-effect relationship between two variables.
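
A minimal sketch of computing Pearson's r with SciPy, using invented paired values:

```python
# Sketch of Pearson's r for two interval variables; the
# paired values are illustrative only.
from scipy.stats import pearsonr

hours_studied = [2, 4, 6, 8, 10]
exam_score    = [60, 65, 75, 78, 90]

r, p = pearsonr(hours_studied, exam_score)
print(f"r = {r:.2f}")  # falls between -1.00 and +1.00
```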

Critical Theory — an evaluative approach to social science research, associated with Germany’s neo-Marxist “Frankfurt School,” that aims to criticize as well as analyze society, opposing the political orthodoxy of modern communism. Its goal is to promote human emancipatory forces and to expose ideas and systems that impede them.

Data — factual information [as measurements or statistics] used as a basis for reasoning, discussion, or calculation.

Data Quality — this is the degree to which the collected data [results of measurement or observation] meet the standards of quality to be considered valid [trustworthy] and reliable [dependable].

Deductive — a form of reasoning in which conclusions are formulated about particulars from general or universal premises.

Dependability — being able to account for changes in the design of the study and the changing conditions surrounding what was studied.

Dependent Variable — a variable that varies due, at least in part, to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. For example, in the variables “gender” and “academic major,” academic major is the dependent variable, meaning that your major cannot determine whether you are male or female, but your gender might indirectly lead you to favor one major over another.

Deviation — the distance between the mean and a particular data point in a given distribution.

Discrete Variable — a variable that is measured solely in whole units, such as gender and number of siblings.

Distribution — the range of values of a particular variable.

Empirical Research — the process of developing systematized knowledge gained from observations that are formulated to support insights and generalizations about the phenomena being researched.

External Validity — the extent to which the results of a study are generalizable or transferable.

Factor Analysis — a statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. In a carefully constructed survey, for example, factor analysis can yield information on patterns of responses, not simply data on a single response. Larger tendencies may then be interpreted, indicating behavior trends rather than simply responses to specific questions.
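
The following is a hedged sketch rather than a complete survey workflow: scikit-learn's FactorAnalysis is asked to recover one underlying trait from four fabricated, related survey items:

```python
# Hedged sketch: exploring which survey items pattern together
# using scikit-learn's FactorAnalysis. The response matrix
# (rows = respondents, columns = items) is fabricated.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))              # one underlying trait
items = np.hstack([latent + rng.normal(scale=0.3, size=(100, 1))
                   for _ in range(4)])          # four related items

fa = FactorAnalysis(n_components=1).fit(items)
print(fa.components_)  # loadings: how strongly each item relates
```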

Field Studies — academic or other investigative studies undertaken in a natural setting, rather than in laboratories, classrooms, or other structured environments.

Focus Groups — small, roundtable discussion groups charged with examining specific topics or problems, including possible options or solutions. Focus groups usually consist of 4-12 participants, guided by moderators to keep the discussion flowing and to collect and report the results.

Framework — the structure and support that may be used as both the launching point and the on-going guidelines for investigating a research problem.

Generalizability — the extent to which research findings and conclusions from a study conducted on a specific sample, group, or situation can be applied to the population at large.

Grounded Theory — practice of developing other theories that emerge from observing a group. Theories are grounded in the group’s observable experiences, but researchers add their own insight into why those experiences exist.

Hypothesis — a tentative explanation based on theory to predict a causal relationship between variables.

Independent Variable — the conditions of an experiment that are systematically manipulated by the researcher. A variable that is not impacted by the dependent variable and that itself impacts the dependent variable. In the earlier example of “gender” and “academic major,” (see Dependent Variable) gender is the independent variable.

Inductive — a form of reasoning in which a generalized conclusion is formulated from particular instances.

Inductive Analysis — a form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but formulates questions throughout the research process.

Internal Consistency — the extent to which all questions or items assess the same characteristic, skill, or quality.
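
One widely used index of internal consistency is Cronbach's alpha; the following minimal sketch computes it from an invented item-response matrix:

```python
# Minimal sketch of Cronbach's alpha, a common index of internal
# consistency; the matrix is invented (rows = respondents,
# columns = items intended to measure the same quality).
import numpy as np

responses = np.array([[4, 5, 4, 5],
                      [3, 3, 4, 3],
                      [5, 5, 5, 4],
                      [2, 3, 2, 3],
                      [4, 4, 5, 5]])

k = responses.shape[1]
item_vars = responses.var(axis=0, ddof=1).sum()
total_var = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"alpha = {alpha:.2f}")  # values near 1 suggest consistency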

Internal Validity — the rigor with which the study was conducted [e.g., the study’s design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. It is also the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.

Measurement — process of obtaining a numerical description of the extent to which persons, organizations, or things possess specified characteristics.

Methodology — a theory or analysis of how research does and should proceed.

Methods — systematic approaches to the conduct of an operation or process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a science or discipline.

Mixed-Methods — a research approach in which two or more methods from both the quantitative and qualitative research categories are used. It is also referred to as blended methods, combined methods, or methodological triangulation.

Modeling — the creation of a physical or computer analogy to some phenomenon. Modeling helps in estimating the relative magnitude of various factors involved in a phenomenon. A successful model can be shown to account for unexpected behavior that has been observed, to predict certain behaviors, which can then be tested experimentally, and to demonstrate that a given theory cannot account for certain phenomena.

Models — representations of objects, principles, processes, or ideas often used for imitation or emulation.

Null Hypothesis — the proposition, to be tested statistically, that the experimental intervention has “no effect,” meaning that the treatment and control groups will not differ as a result of the intervention. Investigators usually hope that the data will demonstrate some effect from the intervention, thus allowing the investigator to reject the null hypothesis.

Panel Study — a longitudinal study in which a group of individuals is interviewed at intervals over a period of time.

Participant — individuals whose physiological and/or behavioral characteristics and responses are the object of study in a research project.

Population — the target group under investigation. The population is the entire set under consideration. Samples are drawn from populations.

Predictive Measurement — use of tests, inventories, or other measures to determine or estimate future events, conditions, outcomes, or trends.

Probability — the chance that a phenomenon will occur randomly. As a statistical measure, it is shown as p [the “p” factor].

Questionnaire — structured sets of questions on specified subjects that are used to gather information, attitudes, or opinions.

Random Sampling — a process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance. Random sampling can be accomplished by first numbering the population, then selecting the sample according to a table of random numbers or using a random-number computer generator. The sample is said to be random because there is no regular or discernible pattern or order. Random sample selection is used under the assumption that sufficiently large samples assigned randomly will exhibit a distribution comparable to that of the population from which the sample is drawn. The random assignment of participants increases the probability that differences observed between participant groups are the result of the experimental intervention.
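
A minimal sketch of this procedure in Python, using a hypothetical numbered population and the standard library's random-number generator:

```python
# Sketch of a simple random sample: number the population, then
# select strictly by chance with a random-number generator.
import random

population = list(range(1, 1001))  # hypothetical numbered population
random.seed(42)                    # for a reproducible illustration
sample = random.sample(population, k=50)  # 50 members, no repeats
print(sample[:10])
```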

Reliability — the degree to which a measure yields consistent results. If the measuring instrument [e.g., survey] is reliable, then administering it to similar groups would yield similar results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.

Representative Sample — sample in which the participants closely match the characteristics of the population, and thus, all segments of the population are represented in the sample. A representative sample allows results to be generalized from the sample to the population.

Rigor — degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experimental study.

Sample — the population researched in a particular study. Usually, attempts are made to select a “sample population” that is considered representative of groups of people to whom results will be generalized or transferred. In studies that use inferential statistics to analyze results or which are designed to be generalizable, sample size is critical; generally, the larger the number in the sample, the higher the likelihood of a representative distribution of the population.

Sampling Error — the degree to which the results from the sample deviate from those that would be obtained from the entire population, because of random error in the selection of respondents and the corresponding reduction in reliability.
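
One common way to quantify the expected sampling error of a mean is the standard error; a minimal sketch with invented sample values:

```python
# Hedged sketch: the standard error of the mean, one common way
# to quantify expected sampling error; the sample is invented.
from statistics import stdev
from math import sqrt

sample = [23, 29, 31, 27, 25, 30, 28, 26]
se = stdev(sample) / sqrt(len(sample))  # s / sqrt(n)
print(f"standard error = {se:.2f}")     # shrinks as n grows
```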

Standard Deviation — a measure of variation that indicates the typical distance between the scores of a distribution and the mean; it is determined by taking the square root of the average of the squared deviations in a given distribution. It can be used to indicate the proportion of data within certain ranges of scale values when the distribution conforms closely to the normal curve.
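
A direct translation of that definition into Python, using invented scores (this is the population form; sample formulas divide by n - 1 instead):

```python
# The square root of the average of the squared deviations from
# the mean, exactly as defined above. Scores are illustrative.
from math import sqrt

scores = [4, 8, 6, 5, 3, 7]
mean = sum(scores) / len(scores)
sd = sqrt(sum((x - mean) ** 2 for x in scores) / len(scores))
print(f"sd = {sd:.2f}")  # about 1.71 for these scores
```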

Statistical Analysis — application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.

Statistical Bias — characteristics of an experimental or sampling design, or the mathematical treatment of data, that systematically affects the results of a study so as to produce incorrect, unjustified, or inappropriate inferences or conclusions.

Statistical Significance — the probability that the differences between the outcomes of the control and experimental groups are great enough that they are unlikely to be due solely to chance. The probability that the null hypothesis can be rejected at a predetermined significance level [0.05 or 0.01].

Statistical Tests — researchers use statistical tests to make quantitative decisions about whether a study’s data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. That is, statistical tests show whether the differences between the outcomes of the control and experimental groups are great enough to be statistically significant. If differences are found to be statistically significant, it means that the probability [likelihood] that these differences occurred solely due to chance is relatively low. Most researchers agree that a significance value of .05 or less [i.e., less than a 5% probability that differences this large would occur solely due to chance] sufficiently determines significance.
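
As a hedged illustration, SciPy's independent-samples t-test compares invented control and experimental outcomes; if p falls below the conventional .05 level, the null hypothesis of no effect is rejected:

```python
# Sketch of a common statistical test: an independent-samples
# t-test on invented control and experimental outcomes.
from scipy.stats import ttest_ind

control      = [70, 68, 75, 72, 69, 71, 74]
experimental = [78, 80, 74, 79, 77, 82, 76]

t, p = ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < .05 -> reject the null
```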

Theory — a general explanation about a specific behavior or set of events that is based on known principles and serves to organize related events in a meaningful way. A theory is not as specific as a hypothesis.

Unit of Analysis — the basic observable entity or phenomenon being analyzed by a study and for which data are collected in the form of variables.

Validity — the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.

Variable — any characteristic or trait that can vary from one person to another [race, gender, academic major] or for one person over time [age, political beliefs].

Weighted Scores — scores in which the components are modified by different multipliers to reflect their relative importance.
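
A minimal sketch with hypothetical component scores and weights:

```python
# Each component is modified by a multiplier reflecting its
# relative importance; these weights are hypothetical and sum to 1.
components = {"homework": 82, "midterm": 75, "final": 90}
weights    = {"homework": 0.2, "midterm": 0.3, "final": 0.5}

weighted = sum(components[k] * weights[k] for k in components)
print(weighted)  # 82*0.2 + 75*0.3 + 90*0.5 = 83.9
```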

White Paper — an authoritative report that often states the position or philosophy about a social, political, or other subject, or a general explanation of an architecture, framework, or product technology written by a group of researchers. A white paper seeks to contain unbiased information and analysis regarding a business or policy problem that the researchers may be facing.

Design Flaws to Avoid

The research design establishes the decision-making processes, conceptual structure of investigation, and methods of analysis used to address the central research problem of your study.  Taking the time to develop a thorough research design helps to organize your thoughts, set the boundaries of your study, maximize the reliability of your findings, and avoid misleading or incomplete conclusions. Therefore, if any aspect of your research design is flawed or under-developed, the quality and reliability of your final results and, by extension, the overall value of your study will be weakened.

Here are some common problems to avoid when designing a research study.

  • Lack of Specificity — do not describe aspects of your study in overly broad generalities. It is important that you design a study that describes the process of investigation in clear and concise terms. Otherwise, the reader cannot be certain what you intend to do.
  • Poorly Defined Research Problem — the starting point of most new research is to formulate a problem statement and begin the process of formulating questions about that problem. Your paper should outline and explicitly delimit the research problem and state what you intend to investigate.
  • Lack of Theoretical Framework — the theoretical framework represents the conceptual foundation of your study. Therefore, your research design should include an explicit set of basic postulates or assumptions related to the research problem and an equally explicit set of logically derived hypotheses.
  • Significance — the research design must include a clear answer to the “So What” question. Be sure you clearly articulate why your study is important and how it contributes to the larger body of literature about the topic of investigation.
  • Relationship between Past Research and Your Study — do not simply offer a summary description of prior research. Your literature review should include an explicit statement linking the results of prior research to the research you are about to undertake. This can be done, for example, by identifying basic weaknesses in previous research studies and how your study helps to fill this gap in knowledge.
  • Contribution to the Field — in linking to prior research, don’t just note that a gap exists; be clear in describing how your study contributes to, or possibly challenges, existing assumptions or findings.
  • Provincialism — this refers to designing a narrowly applied scope, geographical area, sampling, or method of analysis that unduly restricts your ability to create meaningful outcomes and, by extension, to obtain results that are relevant and possibly transferable to understanding phenomena in other settings.
  • Objectives, Hypotheses, or Questions — your research design should include one or more questions or hypotheses that you are attempting to answer about the research problem underpinning your research. They should be clearly articulated and closely tied to the overall aims of your study.
  • Poor Method — the design must include a well-developed and transparent plan for how you intend to collect or generate data and how it will be analyzed.
  • Proximity Sampling — this refers to using a sample which is based not upon the purposes of your study, but rather, is based upon the proximity of a particular group of subjects. The units of analysis, whether they be persons, places, or things, must not be based solely on ease of access and convenience.
  • Techniques or Instruments — be clear in describing the techniques [e.g., semi-structured interviews] or instruments [e.g., questionnaire] used to gather data. Your research design should note how the technique or instrument will provide reasonably reliable data to answer the questions associated with the central research problem.
  • Statistical Treatment — in quantitative studies, you must give a complete description of how you will organize the raw data for analysis. In most cases, this involves describing the data through measures of central tendency [mean, median, and mode] that help the researcher explain how the data are concentrated, thus leading to meaningful interpretations of key trends or patterns in the data.
  • Vocabulary — research often contains jargon and specialized language that the reader is assumed to be familiar with. However, avoid overuse of technical or pseudo-technical terminology. Problems with vocabulary also can refer to the use of popular terms, clichés, or culture-specific language that is inappropriate for academic writing.
  • Ethical Dilemmas — in the methods section of qualitative research studies, your design must document how you intend to minimize risk for participants during stages of data gathering while, at the same time, still being able to adequately address the research problem. Failure to do so can lead the reader to question the validity and objectivity of your entire study.
  • Limitations of Study — all studies have limitations and your research design should anticipate and explain the reasons why these limitations may exist. The description of results should also clearly describe the extent of missing data. In both cases, it is important to include a statement concerning what impact these limitations may have on the validity of your results.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle: initially, an exploratory stance is adopted, in which an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out (the action in Action Research), during which time pertinent observations are collected in various forms. The new interventional strategies are carried out, and the cyclic process repeats, continuing until a sufficient understanding of (or implementable solution for) the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  1. A collaborative and adaptive research design that lends itself to use in work or community situations.
  2. Design focuses on pragmatic and solution-driven research rather than testing theories.
  3. When practitioners use action research it has the potential to increase the amount they learn consciously from their experience. The action research cycle can also be regarded as a learning cycle.
  4. Action research studies often have direct and obvious relevance to practice.
  5. There are no hidden controls or preemption of direction by the researcher.

What these studies don’t tell you?

  1. It is harder to do than conducting conventional studies because the researcher takes on responsibilities for encouraging change as well as for research.
  2. Action research is much harder to write up because you probably can’t use a standard format to report your findings effectively.
  3. Personal over-involvement of the researcher may bias research results.
  4. The cyclic nature of action research to achieve its twin outcomes of action (e.g. change) and research (e.g. understanding) is time-consuming and complex to conduct.

Case Study Design

Definition and Purpose

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about a phenomenon.

What do these studies tell you?

  1. Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  2. A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  3. Design can extend experience or add strength to what is already known through previous research.
  4. Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and extension of methods.
  5. The design can provide detailed descriptions of specific and rare cases.

What these studies don’t tell you?

  1. A single case or a small number of cases offers little basis for establishing reliability or for generalizing the findings to a wider population of people, places, or things.
  2. The intense exposure to study of the case may bias a researcher’s interpretation of the findings.
  3. Design does not facilitate assessment of cause and effect relationships.
  4. Vital information may be missing, making the case hard to interpret.
  5. The case may not be representative or typical of the larger problem being investigated.
  6. If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Causal Design

Definition and Purpose

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association — a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order — to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness — a relationship between two variables that is not due to variation in a third variable.

What do these studies tell you?

  1. Causality research designs help researchers understand why the world works the way it does through the process of proving a causal link between variables and eliminating other possibilities.
  2. Replication is possible.
  3. There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.

What these studies don’t tell you?

  1. Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., one could accurately predict the duration of winter for five consecutive years but, the fact remains, he’s just a big, furry rodent].
  2. Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  3. If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and therefore to establish which variable is the actual cause and which is the actual effect.

Cohort Design

Definition and Purpose

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either “open” or “closed.”

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Dates of entry into and exit from the study are individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).

What do these studies tell you?

  1. The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos; you can only study its effects on those who have already been exposed. Research that measures risk factors often relies on cohort designs.
  2. Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  3. Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, and economic, etc.].
  4. Either original data or secondary data can be used in this design.

What these studies don’t tell you?

  1. In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  2. Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  3. Because of the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.

Cross-Sectional Design

Definition and Purpose

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and groups selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

What do these studies tell you?

  1. Cross-sectional studies provide a ‘snapshot’ of the outcome and the characteristics associated with it, at a specific point in time.
  2. Unlike the experimental design where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  3. Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  4. Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  5. Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  6. Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  7. Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.

What these studies don’t tell you?

  1. Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  2. Results are static and time bound and, therefore, give no indication of a sequence of events and do not reveal historical contexts.
  3. Studies cannot be utilized to establish cause and effect relationships.
  4. Provide only a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  5. There is no follow up to the findings.

Descriptive Design

Definition and Purpose

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe “what exists” with respect to variables or conditions in a situation.

What do these studies tell you?

  1. The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject.
  2. Descriptive research is often used as a precursor to more quantitative research designs, the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  3. If its limitations are understood, descriptive research can be a useful tool in developing a more focused study.
  4. Descriptive studies can yield rich data that lead to important recommendations.
  5. Approach collects a large amount of data for detailed analysis.

What these studies don’t tell you?

  1. The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  2. Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  3. The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Experimental Design

Definition and Purpose

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental Research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

What do these studies tell you?

  1. Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “what causes something to occur?”
  2. Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  3. Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  4. Approach provides the highest level of evidence for single studies.

What these studies don’t tell you?

  1. The design is artificial, and results may not generalize well to the real world.
  2. The artificial settings of experiments may alter subject behaviors or responses.
  3. Experimental designs can be costly if special equipment or facilities are needed.
  4. Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  5. Difficult to apply ethnographic and other qualitative methods to experimentally designed research studies.

Exploratory Design

Definition and Purpose

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to. The focus is on gaining insights and familiarity for later investigation or undertaken when problems are in a preliminary stage of investigation.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings and concerns.
  • Well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions, and development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.

What do these studies tell you?

  1. Design is a useful approach for gaining background information on a particular topic.
  2. Exploratory research is flexible and can address research questions of all types (what, why, how).
  3. Provides an opportunity to define new terms and clarify existing concepts.
  4. Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  5. Exploratory studies help establish research priorities.

What these studies don’t tell you?

  1. Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  2. The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings.
  3. The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value in decision-making.
  4. Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.


Historical Design

Definition and Purpose

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute your hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as logs, diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

What do these studies tell you?

  1. The historical research design is unobtrusive; the act of research does not affect the results of the study.
  2. The historical approach is well suited for trend analysis.
  3. Historical records can add important contextual background required to more fully understand and interpret a research problem.
  4. There is no possibility of researcher-subject interaction that could affect the findings.
  5. Historical sources can be used over and over to study different research problems or to replicate a previous study.

What these studies don’t tell you?

  1. The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  2. Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  3. Interpreting historical sources can be very time consuming.
  4. The sources of historical materials must be archived consistently to ensure access.
  5. Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  6. Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  7. It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation; therefore, gaps need to be acknowledged.

Longitudinal Design

Definition and Purpose

A longitudinal study follows the same sample over time and makes repeated observations. With longitudinal surveys, for example, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study and is sometimes referred to as a panel study.

What do these studies tell you?

  1. Longitudinal data allow the analysis of duration of a particular phenomenon.
  2. Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  3. The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  4. Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.

What these studies don’t tell you?

  1. The data collection method may change over time.
  2. Maintaining the integrity of the original sample can be difficult over an extended period of time.
  3. It can be difficult to show more than one variable at a time.
  4. This design often needs qualitative research to explain fluctuations in the data.
  5. A longitudinal research design assumes present trends will continue unchanged.
  6. It can take a long period of time to gather results.
  7. There is a need to have a large sample size and accurate sampling to reach representativeness.

Observational Design

Definition and Purpose

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

What do these studies tell you?

  1. Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe (data is emergent rather than pre-existing).
  2. The researcher is able to collect a depth of information about a particular behavior.
  3. Can reveal interrelationships among multifaceted dimensions of group interactions.
  4. You can generalize your results to real life situations.
  5. Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  6. Observation research designs account for the complexity of group behaviors.

What these studies don’t tell you?

  1. Reliability of data is low because seeing behaviors occur over and over again may be a time-consuming task and difficult to replicate.
  2. In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  3. There can be problems with bias as the researcher may only “see what they want to see.”
  4. There is no possibility to determine a “cause and effect” relationship since nothing is manipulated.
  5. Sources or subjects may not all be equally credible.
  6. Any group that is studied is altered to some degree by the very presence of the researcher, thereby skewing to some degree any data collected (the Heisenberg Uncertainty Principle).

Philosophical Design

Definition and Purpose

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology — the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology — the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology — the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?

What do these studies tell you?

  1. Can provide a basis for applying ethical decision-making to practice.
  2. Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  3. Brings clarity to general guiding practices and principles of an individual or group.
  4. Philosophy informs methodology.
  5. Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  6. Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  7. Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.

What these studies don’t tell you?

  1. Limited application to specific research problems [answering the “So What?” question in social science research].
  2. Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  3. While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  4. There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  5. There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Sequential Design

Definition and Purpose

Sequential research is that which is carried out in a deliberate, staged approach [i.e., serially] where one stage is completed, followed by another, then another, and so on, with the aim that each stage will build upon the previous one until enough data are gathered over an interval of time to test your hypothesis. The sample size is not predetermined. After each sample is analyzed, the researcher can accept the null hypothesis, accept the alternative hypothesis, or select another pool of subjects and conduct the study once again. This means the researcher can obtain a limitless number of subjects before finally making a decision whether to accept the null or alternative hypothesis. Using a quantitative framework, a sequential study generally utilizes sampling techniques to gather data and applies statistical methods to analyze the data. Using a qualitative framework, sequential studies generally utilize samples of individuals or groups of individuals [cohorts] and use qualitative methods, such as interviews or observations, to gather information from each sample.
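
As a schematic illustration only, the staged analyze-then-decide logic described above might look like the following Python sketch; the data stream, effect size, and stopping rule are all invented, and a rigorous sequential analysis would adjust the significance threshold for repeated testing [e.g., via a sequential probability ratio test]:

```python
# Schematic sketch of the staged logic described above: analyze
# each new sample, then decide to stop or collect another pool.
# All numbers are invented; real sequential designs must correct
# the threshold for repeated looks at the data.
import random
from statistics import mean, stdev
from math import sqrt

random.seed(1)
collected = []

for stage in range(1, 11):                     # up to 10 stages
    collected += [random.gauss(0.4, 1) for _ in range(20)]  # new pool
    se = stdev(collected) / sqrt(len(collected))
    z = mean(collected) / se                   # test "true mean = 0"
    if abs(z) > 1.96:                          # naive 0.05 cutoff
        print(f"stage {stage}: reject the null (z = {z:.2f})")
        break
else:
    print("no decision after 10 stages; sample again or accept the null")
```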

What do these studies tell you?

  1. The researcher has limitless options when it comes to sample size and the sampling schedule.
  2. Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method. Useful design for exploratory studies.
  3. There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  4. Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed.

What these studies don’t tell you?

  1. The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses to use a very large sample that represents a significant portion of the entire population. In this case, moving on to study a second or more samples can be difficult.
  2. Because the sampling technique is not randomized, the design cannot be used to create conclusions and interpretations that pertain to an entire population. Generalizability from findings is limited.
  3. Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.