
Understanding Research: Top Ten Tips for Advocates and Policymakers
By Stephanie A. Schaefer, Ph.D.

Feb 20, 2002

Research crosses the desks of advocates and policymakers on a regular basis. You receive a new report from a think tank or government agency, or you read a newspaper article describing the results from a new study. Research is an important tool because it allows us to assess the effectiveness of the wide array of policies and programs affecting the lives of children and families. Having research evidence to recommend or refute specific policy choices is especially relevant in this era of increased demand for accountability in human services and government.
But how can you tell if a given research study is one you can trust? Below are several tips that can help you evaluate critically the research you encounter.
1. Consider the source.
It is important to evaluate the credibility of the individual(s) and the organization that produced the research. Research produced by respected researchers and institutions is more likely to be trustworthy. Also, research produced or funded by groups with a strong political or commercial agenda (e.g., partisan groups, or the company which manufactured the product being studied) is less trustworthy, since these groups have a vested interest in the study's findings supporting their viewpoint.
ADVOCATE'S CHECKLIST
• What do you know (or what can you find out) about the person and the organization that did the research? What are the author's research qualifications? What is the author's reputation as a researcher?
• Is the researcher from a reputable organization, university or research institute?
• Does the person or the organization have a political agenda they consistently promote?
2. The media is also a source to be evaluated.
If you are learning about research through the media, keep in mind that the media coverage may not fully or accurately summarize the original research. Because research can be technical and complex, and because media coverage often seeks to be attention-grabbing and succinct, media reporting sometimes oversimplifies the research, leading to misinterpretation. Don't assume that the media's report of the research is necessarily what the actual study says, particularly if the coverage is very brief or provocative. Do follow up by trying to get a copy of the original research article, or by getting more information from additional sources.
ADVOCATE'S CHECKLIST
If you learned about the research through the media:
• Was the media coverage very brief? If so, there may be more to the story than was addressed in the limited coverage.
• Was the reporting on it provocative? If so, you'll want to determine if the research finding itself has controversial implications, or if the reporting played up that angle.
To get more information on research that's getting media coverage, try the following additional sources:
• Media coverage from additional sources, such as other newspapers, many of which are available on the Internet, may provide another perspective.
• The web site of the researcher's organization, the organization sponsoring a conference at which the research was presented, or the journal or publication in which the research was published may have a press release or the full research paper available.
3. Has the research been published, and where?
Research published in peer-reviewed research journals[1] is more trustworthy because it has been scrutinized by other researchers before being published. For example, the Journal of the American Medical Association, a peer-reviewed research journal, is considered a highly reputable source. Unpublished research, or research published in publications that don't critically evaluate it, has not gone through such scrutiny, so you should put less trust in this research. For example, research presented at a conference generally has not yet been published, and thus should be viewed as preliminary until it goes through the full publication review process. This is also true for research that is "unpublished" in the research sense but nonetheless has been reported in the media. However, even good research starts out as unpublished work and is published later, so the fact that a study is unpublished does not mean that it is poor quality. Research published by credible research institutions, such as the Urban Institute, is acceptable; look to the reputation of the research institution as a guide to the trustworthiness of the research.
[1] Research journals use a peer-review process, in which a research article submitted for publication is given to several other researchers knowledgeable in the topic for critical review. These peer reviewers provide independent assessments of the research, and can recommend revisions, or that the article not be published. You can tell if a journal is peer reviewed by looking at the information for authors submitting articles, which is generally included in every journal issue and on a journal's web site.
ADVOCATE'S CHECKLIST
• Has the research been published? If so:
• Does the publication use a peer-review process?
• How reputable is the journal in which it was published? (You can ask a researcher who works in that field of study how well-respected the journal is.)
4. Research results are really about the topic AS MEASURED, not as we may think of it.
In any research study, the topic studied is measured in some specific way. Knowing how the topic was measured helps you to understand what the research was really about.
For example, a researcher may study child aggression. This topic[2] could mean a lot of different things to different people (calling someone names, or physically attacking someone, for example). Since a topic such as aggression can be so broadly defined, researchers always come up with a more specific, precise definition[3] of the topic they are studying. The definition of aggression in a study could be the number of times the child displayed five specific behaviors (shouting, hitting, kicking, biting, pushing), as observed by researchers or as reported by the child's teacher.
When the results from a study are reported, the results are really about the precise definition (display of specific behaviors observed by the teacher), rather than the larger topic (aggression). In reading research, you want to assess whether the way the researchers defined and measured their topic makes common sense. Much of the time, the specific definition does make common sense and seems reasonable (aggression = hitting people), but on occasion, a study defines a term in an unusual way (aggression = name-calling). In the latter case, it is important to be aware of the definition, because the study may report its findings as being about the broader topic.
Also, different studies may use different definitions for the same topic. It is important to pay attention to these definitions when you are comparing the results from different studies.
ADVOCATE'S CHECKLIST
• How was the research topic(s) defined and measured in this study?
• Does the precise definition used make common sense?
• Did this study use a similar or different definition than other studies have for this topic?
5. Different types of research have different strengths.
Another indicator of the quality of a research study, and the claims that can be made based on it, is the study's research design. The research design is the way the study is structured to answer a question. There are two broad categories of research: quantitative research and qualitative research. Quantitative research uses numbers, and analyzes and reports data in numeric form. Qualitative research typically reports results through story-like descriptions rather than numbers.
Experimental design studies, a type of quantitative study, offer the strongest evidence about the impact of a program. In an experimental design, researchers randomly assign individuals from the same population to two groups, a treatment group and a control group, and then compare the two groups on some outcome. Experimental studies, known as the "gold standard" of research methodology, produce the strongest evidence that a program produced an effect. Experimental studies, sometimes called control group studies or experiments, are the only type of study that can show a causal relationship.
Although experimental studies can provide the strongest evidence, there are limitations to the situations in which this research design can be used. Experiments are very expensive to conduct. Also, in the world of social policy, it is often impractical or unethical to assign children to different research treatment groups (children growing up with one versus two parents in the home, for example) to attain the control needed for an experimental study. Another limitation of experiments is that the results obtained under the carefully controlled research situation may not occur in the same way when replicated out in the community. This issue is called generalizability.
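The random-assignment step described above can be sketched in a few lines of code. This is a purely hypothetical illustration (the participant pool, group size, and seed are invented for the example), not a procedure from any study discussed here:

```python
import random

def random_assignment(participants, seed=0):
    """Randomly split one pool of participants into a treatment group
    and a control group, so the groups differ only by chance."""
    rng = random.Random(seed)        # fixed seed makes the split reproducible
    pool = list(participants)
    rng.shuffle(pool)                # every participant has an equal chance
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment, control)

# Hypothetical pool of 20 children, identified only by ID number
treatment, control = random_assignment(range(1, 21))
print(len(treatment), len(control))  # 10 10
```

Because chance alone decides who lands in which group, any later difference in outcomes between the two groups can be attributed to the program rather than to pre-existing differences.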
Quasi-experimental and survey studies are another type of quantitative research design that is useful for measuring the effects of different programs on children. Quasi-experimental studies do not use random assignment to create the groups being studied. Instead, they find comparable groups in which to study the effects of different programs. These studies can find associations between a program and children's outcomes, but they cannot be used to establish a causal relationship. For example, a quasi-experimental study may find that children who participated in an enrichment program had better social skills than those who did not, but it cannot prove that the program caused the increase in social skills (perhaps the children who participated in the program had better social skills to begin with). Quasi-experimental studies are especially useful for studying complex systems as they exist naturally in the community. They are the best approach for large-scale studies which examine larger numbers of people and a wider range of topics.
Although advocates less often use qualitative research, it is another useful approach. Qualitative research, which typically reports data in non-numeric form such as categories or descriptions, can be an important source of information. Qualitative studies often provide descriptive, story-like accounts of people's experiences in a program or in a community. Qualitative research is particularly well-suited to finding out new things you didn't know to look for and ask about in a survey.
[2] The research topic being studied is called a construct.
[3] This precise definition is called an operational definition.
[4] The sample size is the number of people included in the study.
[5] The response rate is the proportion of people selected for the study who actually participated.
6. Sampling is more important than sample size.
As many advocates know, the study's sample size[4] is important. The minimum sample size needed in quantitative research depends on how big the effects being studied are, so there is no fixed rule, but a general guideline for a minimum sample size might be 30 to 50 people. The larger the sample, the smaller the difference needed between groups to attain statistical significance.
But even more important than sample size is the way the sample was collected. Quantitative research is based on the assumption that the findings for a sample of people can be generalized to the larger population. Researchers collect information on a sample of people in order to determine the effects of a program for the full population. For example, a study will select a sample of 100 children in afterschool programs, and this sample is intended to represent the population of all children in similar programs. Researchers use careful procedures to select their samples. One appropriate procedure, the most commonly used, is random selection, but there are other appropriate sampling procedures as well. If the sampling procedures aren't done well, then we cannot assume that the findings for the sample generalize to the population, and the study's findings would not be valid.
One important aspect of good sampling is the response rate.[5] If a study has a low response rate, then a portion of the carefully selected sample was not studied. It is possible that the people who did not respond differ in some systematic way from the people who did respond. For example, in a written survey, people who do not answer the questions might have lower literacy skills than the people who did answer it. The response rate is very important for this reason. While there are no hard and fast rules on response rates, a general guideline for an acceptable response rate would be 50%, and a very good response rate would be 80% or higher.
ADVOCATE'S CHECKLIST
• What is the sample size used in the study?
• How was the sample selected?
• What was the response rate for the study?
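The response rate itself is simple arithmetic: respondents divided by everyone selected. A minimal sketch, using an invented mail survey of 100 families:

```python
def response_rate(num_responded, num_selected):
    """Response rate = people who actually participated,
    as a share of everyone selected for the sample."""
    return num_responded / num_selected

# Hypothetical survey: 100 families selected, 62 returned the form
rate = response_rate(62, 100)
print(f"{rate:.0%}")  # 62% -- above the 50% guideline, below the 80% mark
```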
7. Statistical significance explained.
One of the things advocates value most about research is getting "hard data" – numbers – about the effects of a policy on children. A study reports a statistically significant difference between those who received a program and those who didn't. But what does statistical significance mean, and what can we conclude from it?
A statistically significant result is one that is unlikely to be due to chance. Researchers use statistics to test whether the results they found are likely to be due to the effect of the program being studied, and not to other unrelated factors.
Let's take a hypothetical example: a study found that children who received preventive health services had significantly higher rates of school attendance than children who did not have access to these services. Specifically, 75% of children who had health care had good school attendance, but only 50% of children without preventive health care had good attendance. That this finding was statistically significant means that it is highly unlikely that the difference found between the two groups was due to chance; therefore, it is likely that the difference between the groups was really due to the difference in access to health care.
Statistical significance is different from the substantive significance, or meaningfulness, of a finding. A result may be statistically significant but unimportant (sample size is crucial here, because a very small difference will be statistically significant if group sizes are large). Conversely, a result may not be statistically significant, perhaps because the sample size was too small, but it may be meaningful nonetheless because it suggests an important change in an outcome.
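For readers curious how such a comparison is actually tested, here is a minimal sketch of one standard method, a two-proportion z-test, applied to the hypothetical attendance example. The group sizes of 100 children each are assumed for illustration; they do not come from the fact sheet's example:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Test whether the difference between two group proportions is
    larger than chance alone would plausibly produce."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # combined proportion, assuming no true difference
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical groups of 100 children each: 75% vs. 50% good attendance
z, p = two_proportion_z_test(75, 100, 50, 100)
print(f"z = {z:.2f}, p = {p:.4f}")  # p is far below 0.05: statistically significant
```

The same function also illustrates the sample-size caveat above: rerunning it with a tiny difference (say 51% vs. 50%) but groups of 100,000 each still yields a "significant" p-value, even though the difference may be substantively trivial.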
8. Research findings are about groups.
Research results are usually based on comparisons between groups of people. For example, a study may find that children in program X have higher reading scores than children in program Y. That research findings are based on groups of people makes them particularly relevant for policy decisions, since policies affect groups of people, but less relevant for individual case decisions.
In addition to looking at the difference between two groups, it is also worthwhile to look at the absolute levels of performance in each group when deciding what the research tells you. Let's say that 85% of program X children and 70% of program Y children read at a fifth-grade level. Let's also assume that this is a statistically significant difference, and a meaningful difference as well. While it is true that program X children did better, it is also important to note that most of the program Y children are also reading at this level (it would be a different story if only 10% of program Y children were reading at the fifth-grade level). Knowing how big a difference there is between the two groups studied, and what the absolute levels of performance are for each group, taken together can help you make more informed policy decisions.
ADVOCATE'S CHECKLIST
• For a finding comparing two groups, what were the absolute levels of performance for each group?
9. All research is not created equal.
When comparing the results from different studies with conflicting findings, higher-quality studies should be given more weight (you can use the tips provided in this fact sheet as a guide to determining the quality of a study). A better study can refute a poorer one; it is not a one-to-one tally of studies on each side.
10. Any one study is not the whole story.
Although we usually come across research one study at a time, from the news or a new report, research is most valuable when many specific studies are taken together to tell the whole story of what we know on a given topic. Research, as a tool for scientific discovery, is designed to work this way. Science is about the aggregation of specific studies, one building on another to increase our knowledge base.
Any single study, no matter how good, needs to be viewed in the context of other research on the topic. Finding articles which summarize and synthesize the results of many studies, called literature reviews, is one good way to get a sense of the bigger picture that research can tell us about a given topic. Most research articles provide a brief review of the literature, and there are some specialized articles which provide comprehensive literature reviews.
When you learn about an interesting new finding, it is worth asking if there has been other research on this topic before, and if so, what that past research has found. Some topics have had extensive research conducted on them, and therefore we have substantial evidence to point to. On other topics, there may be little research. If there haven't been numerous studies, it is premature to consider that we really know what works.
Studies in new topic areas are important, and give us an indication of what direction things may be going. But they are certainly not definitive; we need numerous studies before researchers would say that we have a solid basis of evidence on what works in a given policy area.
ADVOCATE'S CHECKLIST
• Has there been other past research on this same topic? How much additional research?
• If this study's findings are different than past research, did the researchers explain why?
• Has there been enough high-quality research that we can say we know a lot about what works in this topic area? Or has there been only a little research, so we should consider the findings as suggestive of what might be going on, rather than definitive?
Conclusion
Child advocates often use research to guide their policy recommendations and make persuasive arguments. To make the best policy choices for children, and to ensure your credibility, it is important to evaluate critically the research information you use. This fact sheet can assist you in using research effectively.
© 2001 by the National Association of Child Advocates
This document was prepared with the generous support of the David and Lucile Packard Foundation.
• • •
The author thanks the following researchers and advocates who offered helpful comments on this fact sheet: Frances Campbell, Elizabeth Hudgins, Suzanne Clark Johnson, Gary Laumann, Dawn Ramsburg, Tom Rane, and Jeffrey Stueve.
Suggested citation style: Schaefer, Stephanie A., Understanding Research: Top Ten Tips for Advocates and Policymakers. Washington, DC: National Association of Child Advocates, 2001.
This document is available online under "Publications" at www.childadvocacy.org.
• • •
Stephanie A. Schaefer, Ph.D.
Policy and Advocacy Specialist
National Association of Child Advocates
1522 K St., NW • Suite 600 • Washington, DC 20005
202-289-0777, ext. 203 • 202-289-0776 (fax)
schaefer@childadvocacy.org • www.childadvocacy.org
