Data Collection, Measurement And Analysis Scenarios:

A researcher wants to know why individuals in Community A have a higher rate of a rare form of cancer when compared to those living in Community B. To find out the reasons for the differences in cancer rates in these two communities, the investigator surveyed residents about their lifestyle, noted the types of businesses that were present in the community and searched medical records. The researcher found that the headquarters for the Toxico Chemical Plant is located in Community A, there is a higher rate of cigarette smoking in this community and residents tended to delay or skip going to the doctor for an annual checkup. In Community B, the largest employer was a department store and on average, residents did not smoke as much as residents from Community A. However, like individuals from Community A, Community B residents tended to delay or skip their annual checkups with their doctor.

Instructions: Minimum 300 words

Read the scenario above and answer the following questions:

What makes this a descriptive study?

What type of data collection method was used in this scenario? What type of collection methods are usually used in descriptive studies?

Why did the researcher collect information about the lifestyle of community residents? What about the type of businesses present in each community? Medical records?

Can the investigator establish that the chemical plant and cigarette smoking are the cause for the higher rate of cancer among those in Community A?

Can the investigator establish that lower smoking rates and the absence of a chemical factory explain the lower rate of cancer among those in Community B?

Experimental and Quasi-Experimental Designs

Objectives

After reviewing this lesson you will be able to:

1. Describe briefly the purpose of experimental research.

2. Explain the difference between random assignment and random selection and the importance of each.

3. Distinguish the differences between experimental and quasi-experimental designs.

4. Define the internal validity issues associated with experimental and quasi-experimental designs.

What Is Experimental Research?

Experimental research is an attempt by the researcher to maintain control over all factors that may affect the result of an experiment, so that one or more independent variables can be manipulated to test a hypothesis about a dependent variable. In doing this, the researcher attempts to determine or predict what may occur.

Experimental research directly attempts to influence a particular variable, and it is the only type of research that, when used properly, can truly test hypotheses about cause-and-effect relationships, because all other variables are eliminated or controlled.

Research Essentials: Measurement

The use of numbers as a tool for identifying and presenting information

The process that links the conceptual to the empirical

Necessary to conduct quantitative research

Measurement principles

Numbers measure value, intensity, degree, depth, length, width, distance

Descriptive and evaluative device

Numbers have no value until we provide meaning

Includes everything the researcher does to arrive at a number

Details the operationalization of the variable

Remember the Types of Variables:

Dependent Variable (DV)

The variable that is expected to change in response to the manipulation of the independent variable; the DV is used to assess or measure group differences thought to be due to (or caused by) the presence (or absence) of the IV.

e.g., weight loss

Independent Variable (IV)

Any variable that can be manipulated, or altered, independently of any other variable.

e.g., participation in a training program

Confounding variable

An extraneous variable that correlates (directly or inversely) with both the dependent variable and the independent variable.

Levels of measurement

Data are discrete or continuous

Both can represent communication phenomena

Each produces a different kind of data

How data are collected determines how they can be used in statistical analyses

Essential Characteristics of Experimental Research

Experiments differ from other types of research in two basic ways: the comparison of treatments and the direct manipulation of one or more independent variables by the researcher.

Data Representation

Researchers are responsible for:

Collecting data accurately and ethically

Interpreting and reporting data responsibly

Quality of data interpretation cannot be better than quality of data collected

For example in Experimental Research:

Comparison of Groups

Participants selected and assigned to groups

control

experimental

Experimental and Control Groups

Must be as similar as possible.

Control group represents what the experimental group would have been like had it not been exposed to the experimental

stimulus.

Random selection refers to how sample members (study participants) are selected from the population for inclusion in the study.

Random assignment is an important ingredient in the best kinds of experiments.

It means that every individual participating in the experiment has an equal chance of being assigned to any of the experimental or control conditions using a random procedure.

Random assignment assumes that any important intervening variable will be equally distributed between the groups, minimizing variance and decreasing selection bias.
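The random-assignment procedure described above can be sketched in Python. This is a hypothetical illustration, not a prescribed procedure: the participant IDs and two-condition setup are invented, and the seed is fixed only to make the example reproducible.

```python
import random

def random_assignment(participants, groups=("control", "experimental"), seed=None):
    """Give every participant an equal chance of landing in any condition:
    shuffle the roster with a random procedure, then deal participants
    into the groups round-robin."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

# Hypothetical roster of 20 participants
ids = [f"P{n:02d}" for n in range(1, 21)]
conditions = random_assignment(ids, seed=42)
print(len(conditions["control"]), len(conditions["experimental"]))  # 10 10
```

Because assignment depends only on chance, any important intervening variable should, on average, be spread evenly across the two conditions.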

Why randomize?

Avoid bias.

Control the role of chance.

Control of Extraneous Variables

The researcher in an experimental study has an opportunity to exercise far more control than in most other forms of research.

Control

Efforts to remove the influence of any extraneous variable (other than the IV)

that might affect the DV.

The researcher strives to ensure that the characteristics and experiences of

the groups are as equal as possible on all important variables except the

independent variable.

Manipulation

“Doing something” to at least some of the subjects

Selecting the number and type of treatments (IVs) and randomly assigning participants to those treatments

Experimental Process

Six steps to conducting experimental research

1. Selection and definition of the problem

Statement of a hypothesis indicating a causal relationship between variables

2. Selection of participants and instruments

Random selection of a sample of subjects from a larger population

Random assignment of members of the sample to each group

Selection of valid and reliable instruments

3. Selection of a research plan

Three types of comparisons

Comparison of two different approaches

Comparison of new and existing approaches

Comparison of different amounts of a single approach

4. Execution of the research plan

Two concerns

Sufficient exposure to the treatment

Substantively different treatments

5. Analysis of data

6. Formulation of conclusions

Group Designs in Experimental Research

Two major classes of group designs

Single-variable designs – one independent variable

Factorial designs – two or more independent variables

Three types of experimental designs

1. Pre-experimental designs

2. True experimental designs

3. Quasi-experimental designs

1. Pre-Experimental Designs

Designs with no random assignment

Cannot be classified as true experiments

Often used in exploratory research

2. True Experimental Designs

Independent and dependent variables

IV is manipulated

DV is observed for change

Pre-testing and post-testing

To compare variation in DV before and after treatment

Experimental and control groups

Experimental group receives the “treatment” and is compared to the control group (no treatment)

Provides control of extraneous variables

Randomized Clinical Trial (RCT)

Use experimental and control groups

Have a very specific sampling plan, using inclusion and exclusion criteria

Intervention fidelity ensures that every subject receiving the intervention receives the identical intervention.

Use statistical comparisons to determine any differences between groups

Sample size is important—too large wastes time, resources, and money; too small may lead to inaccurate results

Sample Size

If sample size is too small, differences may not be detected, resulting in a type II error

Determining the right sample size is called a power analysis

Researchers should report evidence that the sample size was adequate
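One way to see what a power analysis does is the normal-approximation formula for a two-group comparison, n = 2·((z₁₋α/₂ + z_power) / d)² per group. The sketch below assumes a two-sided test at the conventional α = .05 with 80% power; these defaults are illustrative, and a t-based calculation would give a slightly larger answer.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison:
    n = 2 * ((z_(1-alpha/2) + z_power) / d)^2, rounded up."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # 1.96 for alpha = .05
    z_power = z(power)          # 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(n_per_group(0.5))  # medium effect (d = 0.5) -> 63 per group
print(n_per_group(0.2))  # small effect needs far more -> 393 per group
```

Note how quickly the required sample grows as the expected effect shrinks; an underpowered study risks exactly the Type II error described above.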

The Double-Blind Experiment

Neither the researchers nor the subjects know who is in the experimental group

Technique used to control subjects’ knowledge of whether or not they have been given the

experimental treatment.

Taste tests, placebos (chemically inert pills), etc.

To reduce experimental bias

Placebo Effects

An artifact that occurs when participants' expectations about what effect an experimental manipulation is supposed to have influence the dependent variable

If participants think they are in a drug group they may be more likely to say the drug produced an

effect.

Placebo control group: receives a pill containing no drug, so participants do not know whether they are truly receiving the drug

Quasi-Experimental Designs

More realistic than true experiments

Also test cause-and-effect relationships

The researcher lacks full control over the scheduling of experimental treatments, or

Groups or subjects not randomly assigned

e.g., sample of convenience

Separate participants based on some characteristic, e.g., gender or occupation

May not have a comparison group

Typical of clinical research

e.g., within subjects repeated measures

Less “subject-intensive”

Factorial Designs

Two or more independent variables and one dependent variable

The effect of teaching strategy and gender on students’ achievement

The effect of a particular counseling technique and the clients’ ethnicity on the success of the treatment

The effect of a specific coaching approach and children's age group (three groups) on the ability to perform certain physical tasks

This design increases explained variance and reduces unexplained variance

Explained variance is that which can be accounted for by the independent variable(s)

By adding an additional variable into the design the explained variance is likely going to increase

Experimental Validity

Reliability: The consistency and stability of a measure or score.

Validity: The extent to which a measure actually measures what it is intended to measure.

The truthfulness of a measure.

A test can be reliable and not be valid.

Internal validity

The degree to which the results are attributable to the independent variable and not some other rival explanation.

Indicates whether the independent variable, rather than confounding variables, was the sole cause of the change in the dependent variable.

Degree to which researchers can draw accurate conclusions about the effects of the independent variable.

External validity

The extent to which the results of a study can be generalized.

Indicates the extent to which the results of the experiment are applicable to the real world.

Threats to validity and reliability

Issues of data collection

Internal validity

Reliability over time

Issues of sample representativeness

External validity

Ecological validity

Do alternative explanations exist?

Threats to Internal Validity

Cohort Effect: Change in the dependent variable that occurs because members of one experimental group experienced different

historical situations than members of other experimental groups.

History Effect: Something other than the independent variable may have occurred between the pretest and posttest.

Maturation Effect: An effect on experimental results caused by experimental subjects maturing or changing over time. During a daylong experiment, subjects may grow hungry, tired, or bored.

e.g., students may have matured over the course of a reading program; they may have gotten better at reading simply because of the passage of time, not because of the program.

Testing Effect: In before-and-after studies, pretesting may sensitize subjects when they take a test for the second time.

May cause subjects to act differently than they would have if no pretest measures were taken

Instrumentation Effect: Caused by a change in the wording of questions, in interviewers, or in other procedures used to measure the dependent variable.

Selection Effect: Sampling bias that results from differential selection of respondents for the comparison groups.

Mortality or Sample Attrition: Results from the withdrawal of some subjects from the experiment before it is completed.

Affects randomization: Especially troublesome if some subjects withdraw from one treatment group and not from the others (or at least at different rates).

Threats to External Validity

Hawthorne Effect: A specific type of reactive effect in which merely being a research participant in an investigation may affect behavior.

Suggests that, as much as possible, participants should be unaware they are in an experiment and unaware of the hypothesized outcome.

Placebo Effect: Participants may believe that the experimental treatment is supposed to change them, so they respond to the treatment with a change in performance.

John Henry Effect: A threat wherein research participants in the control group try harder just because they are in the control group.

Rating Effect: A variety of errors associated with ratings of a participant or group.

Experimenter Bias Effect: The intentional or unintentional influence that an experimenter (researcher) may exert on a study.

Controlling for Extraneous Variables

Extraneous variables must be controlled to be able to attribute the effect to the treatment

Group equivalency must be assured

Generalizability important

Probability sampling where possible

Four major means to achieve control

Randomization

Selection – controls for representation

Assignment – controls for group equivalency

Matching

Identifying pairs of subjects “matched” on specific characteristics of interest

Randomly assigning subjects from each pair to different groups

Difficulty with subjects for whom no match exists

Comparing homogeneous groups

Restricting subjects to those with similar characteristics

Restricting subjects results in problems related to generalization

Using subjects as their own controls

Multiple treatments across time

Problem with carry-over effects
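The matching strategy above (identify matched pairs, then randomly assign one member of each pair to each group) can be sketched as follows. The subjects and the age variable used for matching are hypothetical.

```python
import random

def matched_pair_assignment(subjects, key, seed=None):
    """Sort subjects on the matching characteristic, pair adjacent
    subjects, then randomly assign one member of each pair to each
    group. An unmatched leftover subject is dropped, reflecting the
    difficulty with subjects for whom no match exists."""
    rng = random.Random(seed)
    ordered = sorted(subjects, key=key)
    control, experimental = [], []
    for a, b in zip(ordered[::2], ordered[1::2]):
        pair = [a, b]
        rng.shuffle(pair)       # chance decides which member goes where
        control.append(pair[0])
        experimental.append(pair[1])
    return control, experimental

# Hypothetical subjects matched on age
subjects = [("S1", 23), ("S2", 45), ("S3", 25), ("S4", 44), ("S5", 31), ("S6", 33)]
control, experimental = matched_pair_assignment(subjects, key=lambda s: s[1], seed=7)
```

Sorting before pairing keeps each pair close on the matching characteristic, while the within-pair shuffle preserves the randomness that group equivalency depends on.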

Statistical Techniques: Analysis

Testing for relationships

Correlation

2 continuous variables

Regression

2 or more continuous level variables

Basic assumptions

Data collected from sample to draw conclusion about population

Data from normally distributed population

Appropriate variables are selected to be tested using theoretical models

Participants randomly selected

Alternative and null hypotheses

Inferential statistics test the likelihood that the alternative hypothesis is true and the null hypothesis is not.

A significance level of .05 is generally the criterion for this decision.

If p < .05, the alternative hypothesis is accepted; if p > .05, the null hypothesis is retained.

Four analytical steps

1. Statistical test determines if a relationship exists

2. Examine results to determine if the relationship found is the one predicted

3. Is the relationship significant?

4. Evaluate the process and procedures of collecting data

Correlation

Also known as Pearson product-moment correlation coefficient

Represented by r

Correlation reveals one of the following:

Scores on both variables increase or decrease

Scores on one variable increase while scores on the other variable decrease

There is no pattern or relationship

Correlation coefficient or r reveals the degree to which two continuous level variables are related

Participants provide measures of two variables

If the p value of the r statistic is ≤ .05, the relationship is significant and the hypothesis or research question is supported.

Correlation cannot necessarily determine causation.

Limits of correlation

Examines relationship between only 2 variables

Any relationship is presumed to be linear

Limited in the degree to which inferences can be made

Correlation does not necessarily equal causation

Causation depends on the logic of the relationship
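Pearson's r can be computed directly from its definition: the sum of deviation cross-products divided by the product of the deviation sums of squares. The study-hours and exam-score pairs below are hypothetical.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two continuous
    variables measured on the same participants."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cross = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    ss_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cross / (ss_x * ss_y)

hours = [1, 2, 3, 4, 5]        # hypothetical study hours
scores = [52, 55, 61, 64, 70]  # hypothetical exam scores
print(round(pearson_r(hours, scores), 3))  # 0.993: the scores rise together
```

A strong r here would still say nothing about causation: more study time could raise scores, or stronger students could simply choose to study more.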

Testing for Differences

Inferential statistics

Statistical test used to evaluate hypotheses and research questions

Results of the sample assumed to hold true for the population if participants are

Normally distributed on the dependent variable

Randomly assigned to categories of the IV

Inferential statistics test the likelihood that the alternative hypothesis is true and the null hypothesis is not

Significance level of .05 is generally the criterion for this decision

If p < .05, the alternative hypothesis is accepted; if p > .05, the null hypothesis is retained.

Degrees of freedom

Represented by df

Specifies how many values vary within a statistical test

Collecting data always carries error

Rules for calculating df for each statistical test

Four analytical steps

1. Statistical test determines if a difference exists

2. Examine results to determine if the difference found is the one predicted

3. Is the difference significant?

4. Evaluate the process and procedures of collecting data

Chi-square

Represented as χ²

Determines if differences among categories are statistically significant

Compares the observed frequency with the expected frequency

The greater the difference between observed and expected, the larger the χ2

Data must be nominal or categorical
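The comparison of observed with expected frequencies described above can be computed directly. The 60/40 observed split over 100 hypothetical respondents is invented for illustration.

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E across categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical: 100 respondents choose between two categories;
# a 60/40 observed split is tested against a 50/50 expected split
statistic = chi_square([60, 40], [50, 50])
print(statistic)  # 4.0
```

With df = 1, the .05 critical value of χ² is 3.84, so a statistic of 4.0 would be judged significant: the observed frequencies differ from the expected ones by more than chance alone would suggest.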

t-test

Represented by t

Determines if differences between two groups of the independent variable on the dependent variable are significant

IV must be nominal data with two categories

DV must be continuous level data at interval or ratio level

Forms of t-test

Independent sample t-test

Compares mean scores on the DV for two different groups of people

Example: Those with public speaking experience in one group; those without in another group

Paired comparison t-test

Compares paired or matched mean scores from the same participants

Example: Those without public speaking experience are tested before and again after training
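An independent-samples t statistic can be computed from the pooled variance of the two groups. The public-speaking scores below are hypothetical, echoing the example above.

```python
from math import sqrt

def independent_t(group1, group2):
    """Independent-samples t with pooled variance; df = n1 + n2 - 2."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    ss1 = sum((x - m1) ** 2 for x in group1)
    ss2 = sum((x - m2) ** 2 for x in group2)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)
    t = (m1 - m2) / sqrt(pooled_var * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical scores: speakers with experience vs. without
experienced = [78, 85, 90, 82, 88]
novice = [70, 75, 72, 68, 74]
t, df = independent_t(experienced, novice)
print(round(t, 2), df)  # 5.14 8
```

The t value is then compared against the critical value for the given df at the chosen significance level to decide whether the group difference is significant.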

Analysis of variance

Referred to with acronym ANOVA

Represented by F

Compares the influence of two or more groups (levels) of the IV on the DV

One or more IVs can be tested

IV must be nominal

IV can have two or more categories

DV must be continuous level data
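The F ratio in a one-way ANOVA is the between-group mean square divided by the within-group mean square. The three teaching-method groups below are hypothetical achievement scores used only to illustrate the computation.

```python
def one_way_anova_f(*groups):
    """One-way ANOVA: F = between-group MS / within-group MS,
    with df = (k - 1, N - k) for k groups and N total scores."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    k, n = len(groups), len(scores)
    # Variability of group means around the grand mean (explained)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Variability of scores around their own group mean (unexplained)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical achievement scores under three teaching methods
f = one_way_anova_f([80, 85, 90], [70, 72, 74], [60, 65, 62])
print(round(f, 2))  # 32.95
```

A large F means the differences among group means are big relative to the variation within groups; the value is then checked against the critical F for df = (k − 1, N − k).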