I. INTRODUCTION
Summer school programs have been used for remediation, enrichment, acceleration, and
reform purposes in the United States for over a century (Heyns, 1986). Originally, the academic
year in the U.S. revolved around an agrarian calendar: schools opened after the harvest, and
ended in time for spring planting. However, by the middle of the 19th century, as urban areas
and industry expanded, children and youth who were out of school during the summer months
became a public concern. The public looked to the schools to provide a solution, and in turn,
schools began operating summer programs for recreation, language training, and other special
needs (Heyns, 1986). In the summer of 1841, many cities in the country, including New York
City, were offering summer school programs (Dougherty, 1981).
During the 1950s, educators began to view summer school as a potential time to
remediate and/or prevent learning deficits (Austin, Rogers, & Walbesser, 1972). Remedially-
focused summer programs were established to help students meet the increasingly large number
of minimum competency requirements for graduation or grade promotion. This movement
received a financial push in 1965 with the passage of the Elementary and Secondary Education
Act (ESEA; Public Law 89-10). Title I of this law specifically targeted funds to serve students in
economically disadvantaged areas by providing supplemental educational services. When this
law was renewed in 1994 (as the Improving America’s Schools Act; Public Law 103-382),
Section 1001(c)(4) specifically stated that schools could use Title I funds to “ensure that
children have full access to effective high-quality regular school programs and receive
supplemental help through extended time activities” (as quoted in Cooper, Charlton, Valentine,
& Muhlenbruck, 2000).
In the last decade, the emphasis on setting and meeting minimum competency
requirements and ending social promotion has increased dramatically, as has the use of
standardized tests to ensure that students are meeting such requirements. Such high stakes
testing has increased the interest in using some form of mandated summer school program to
help students meet the higher academic standards (Pipho, 1999; White, 1998). Most of the larger
cities in the U.S. are now sponsoring summer remediation programs, including large programs in
Chicago, New York, Los Angeles, Boston, Denver, and the District of Columbia.
It is highly likely that the need for such programs will continue to increase. In addition to
the greater emphasis on tougher academic standards mentioned above, Cooper and colleagues
(2000) reviewed two other national trends to support this contention. First, changes in the
structure of the American family are contributing to the increased need for summer remediation.
Children living in two-parent homes where only one parent works are now in the
minority. Increases in the percentage of children living in families with two working parents, or
in families with only one parent, have helped to create an increased need for supplemental school
services, including summer programs, and based on family demographic data, this need is likely
to increase. Second, national educational policy makers have become increasingly concerned
about the global competitiveness of the U.S. educational system (e.g., National Commission on
Excellence in Education, 1983). Many of these concerns have focused on the relatively short
U.S. academic year, when compared to school calendars in other countries. For example,
students spend 174 days per year in school in Minneapolis, Minnesota, which is substantially less
than the 230 days per year in Taipei, Taiwan, and the 243 days per year in Sendai, Japan.
Educational research has also provided impetus for increasing the number and
comprehensiveness of summer school programs. In-grade retention has been one standard policy
for dealing with students who fail to meet grade promotion criteria. In a policy brief by
McCollum and colleagues (1999), research on the use of in-grade retention in Texas was
reviewed. Sixty-five out of 66 studies conducted on the effects of in-grade retention in Texas
between 1991 and 1997 illustrated that such retention practices could have long-term negative
impacts on children, including higher future drop-out rates, loss of interest and enthusiasm for
school, and loss of self-esteem. The researchers also found that Hispanic and African American
students were retained twice as often as Caucasian students, that approximately 40% of students
asked to repeat a grade came from the lowest socioeconomic quartile, and that urban school
districts showed the highest rates of in-grade retention (McCollum, Cortez, Maroney, & Montes,
1999). Research conducted in other areas of the country has drawn similar conclusions (see
Harrington-Lueker, 1998 and Fager & Richen, 1999, for reviews).
Educational research on summer learning loss has also increased public interest in
summer school. Cooper, Nye, Charlton, Lindsay, and Greathouse (1996) conducted a meta-
analysis on 39 different studies of summer learning loss, and concluded that the loss equaled
approximately 1 month on a grade equivalent scale, or a tenth of a standard deviation lost
relative to spring administered standardized test scores. Cooper and colleagues (1996) found that
math loss was greater than reading loss, and in particular, summer break had a greater negative
impact on computational skills in math and spelling skills in reading.
Socioeconomic status was found to be an important factor in the extent of summer
learning loss. Middle class students gained on reading recognition over the summer, while
students from disadvantaged homes lost ground (Cooper et al., 1996). Interestingly, there were
no significant differences in the amount of math skills lost between the two socioeconomic
groups; both lost equally large amounts. Their analysis also indicated that the negative effects of
summer learning loss increased with increases in grade level.
In her review of research on compensatory summer school programs, Heyns (1986)
devoted considerable discussion to the findings of the national Sustained Effects Study (SES;
Klibanoff & Haggart, 1981). The SES was started in 1975, and for three consecutive years,
collected achievement data for over 100,000 students in 300 different elementary schools across
the country. In the SES, significantly greater losses from spring to fall were consistently found
for compensatory education students when compared to their noncompensatory peers. As Heyns
commented, the evidence presented on summer learning loss in the SES “clearly points to a
consistent relative loss among the least advantaged students” (p. 24). This pattern of greater loss
in less economically advantaged students paralleled Heyns’s (1978) own findings in the Summer
Learning and the Effects of Schooling study, performed in the Atlanta public schools, as well as
in a study by Entwisle and Alexander (1992) performed in Baltimore. In the Baltimore study,
math achievement test scores (California Achievement Test) of Caucasian and African American
students were collected over a three year period. These two groups of students showed no
difference in average CAT scores in the first grade. By the second grade, there was a 10-point
disparity; by the third grade, this gap had increased to 14 points. The authors hypothesized that
summer learning loss may be one of the most central factors contributing to the well documented
achievement test score gap between these two populations. Many of these researchers have
suggested that these differences between advantaged and disadvantaged populations may be
accounted for by the existence of fewer educational resources in the homes of disadvantaged
students (Heyns, 1987; Entwisle & Alexander, 1992; Cooper et al., 1996). Whatever the cause
of the disparity is, the research is clear: disadvantaged students show greater levels of summer
learning loss.
In summary, the use of summer programs in the U.S. to remediate learning deficiencies
has a long history. Many summer programs have been targeted at disadvantaged populations,
and research on summer learning loss supports this strategy. Disadvantaged students lose greater
ground during the summer in reading, and in some studies, in mathematics as well. With the
country’s recent increased emphasis on high stakes testing and the establishment of tougher
minimum competency requirements for grade promotion and graduation, many educational
policy makers are pinning their hopes on mandated summer school programs to provide the
necessary “Rx for low performance” (Pipho, 1999). However, until very recently, research on
the effectiveness of such summer programs has been limited.
II. EFFECTIVENESS OF SUMMER SCHOOL
There are well over 100 evaluations of summer school programs available in the
literature. However, most of these evaluations have not been published in peer-reviewed
journals, and they have used very different methods to measure the impact of a wide variety of
summer school programs with differing goals.
Additionally, until very recently, reviews of this literature have been few and far between, and
have not employed the most current methods available for integrating research results across
studies. All of this changed with the publication of Making the most of summer school: A meta-
analytic and narrative review, by Cooper and colleagues (2000). These researchers employed
the most current meta-analytic techniques available to provide the best possible integration of 93
different evaluations of the impact of summer school programs.
Meta-analysis is a set of computational techniques that can be used to combine the
statistical results from any number of independent experimental investigations to produce an
overall estimate of effect size. Summary statistics (t-tests, mean differences, correlations) from
independent studies are combined using one of several different computational formulas
available. The key assumption behind these techniques is that each independent investigation
provides a unique estimate of the particular experimental effect of interest, and that by
combining a group of such estimates, a more accurate representation of the true population effect
can be obtained (Lyons, 1998). The ability to quantify the overall effect size of a group of
independent studies provides a powerful tool for combining and summarizing research results.
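To make this logic concrete, the sketch below combines a handful of d-indices using inverse-variance weighting, one common fixed-effect approach. It is an illustration only: the study values are invented, and Cooper et al. (2000) describe their actual computational procedures in the monograph.

import math

# Hypothetical (d-index, treatment n, control n) triples for three studies.
studies = [(0.31, 120, 115), (0.18, 80, 85), (0.40, 45, 50)]

weighted_sum = 0.0
total_weight = 0.0
for d, n_t, n_c in studies:
    # Approximate sampling variance of a d-index (Cohen, 1988).
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    weight = 1.0 / var_d  # more precise studies count for more
    weighted_sum += weight * d
    total_weight += weight

d_combined = weighted_sum / total_weight   # overall effect size estimate
se = math.sqrt(1.0 / total_weight)         # its standard error
low, high = d_combined - 1.96 * se, d_combined + 1.96 * se
print(f"combined d = {d_combined:.2f}, 95% CI = ({low:.2f}, {high:.2f})")

Each study contributes in proportion to the precision of its estimate, which is the sense in which combining a group of estimates yields a more accurate representation of the population effect.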
The meta-analysis by Cooper et al. (2000) on the impact of summer school programs
posed three questions:
(1). What is the overall impact of summer school?
(2). Which characteristics are associated with more and less successful summer
programs?
(3). What recommendations for improving summer school programs emerge from
careful review of the existing research?
They reviewed several different goals of summer programming, including remediation,
enrichment, schedule flexibility for working students, early graduation opportunities, specialized
programming for gifted/talented students, and specialized programming for learning disabled
students. As New York City’s Summer School 2001 program is primarily remedially-focused (a
program intended to help students meet minimum requirements for grade promotion or
graduation), results from the meta-analysis pertinent to these types of programs will be the focus
here.
There are a number of methodological difficulties that arise when attempting to compare
the results of multiple studies in any research area. In the summer school literature, central
difficulties include the many different criteria employed for establishing program “success”,
the variety of outcome measures used, waning and/or inconsistent attendance in summer
programs, and the nature of the comparison samples. In some evaluations, there is no
comparison sample. In others, a non-equivalent comparison group is used. In many studies,
students serve as their own comparison group through multiple test administrations (the one
group pre-post test design). There are both benefits and drawbacks to the one group pre-post
test design. As Cooper and colleagues (2000) commented, the strength of this design “lies in our
knowledge of the comparability of the pretest and posttest groups, because they are the same
students. Its weakness lies in the confound of the treatment with the passage of time between
testings” (pp. 13-14). Regression to the mean is an important concern with this type of design if
the students being studied were chosen because of extremely low test scores, as is often the
case in summer school research. Thus, the authors believe that the results of studies employing
this design are necessarily equivocal. The ideal design, which was very rarely used in the
literature reviewed, is random assignment. In a small number of studies, summer school students
were chosen by lottery, and students in summer school were compared to students who were not
chosen. However, the impact of these methodological confounds can be assessed in meta-
analysis, and then controlled for.
Meta-analytic procedures:
To select the 93 studies used in this meta-analysis, searches were made in both the ERIC
(January 1966 to August of 1998) and PsycINFO (January 1967 to August of 1998) databases
for relevant articles. Selection criteria were employed, and included:
(1). The program had to take place only during the summer.
(2). The program could include students anywhere from K-12.
(3). The program could contain general and/or special populations.
(4). An evaluation was performed and empirical data were available.
(5). The goals of the program had to include improved academic performance and/or the
reduction of delinquency.
(6). Comparisons had to be based on the one group pretest-posttest design, or made
between a group of students who had attended and a group who had not.
Following the selection of the 93 studies, each study was coded on 53 different characteristics,
which could be grouped into the following categories:
(1). Aspects of the report: internal or external evaluator.
(2). Aspects of the research design: nature of the comparison sample; comparability
of the pretest and posttest; sample size; whether or not the evaluation included an
assessment of the fidelity of program implementation; monitoring of student
attendance and drop-out rate.
(3). Aspects of the sample: grades, socioeconomic status, sex, achievement level.
(4). Aspects of the program: program goals; number of years the program had been
running; community size; number of students; number of schools; number of
classes; voluntary or mandatory participation; average class size; teacher
certification; parental involvement; daily length of the program; total number of
days; group or individualized curriculum; subject areas covered.
(5). Aspects of the outcome measure: length of time between the end of the summer
program and the administration of the outcome measure; type of outcome
measure; whether or not the outcome measure was standardized; whether the
analysis was based on raw or scaled scores; means; standard deviations; effect
size estimates when stated or derived.
The monograph authors served as coders. Coder reliability was assessed and all
disagreements were conferenced. Of the 93 original articles, 54 reports (dating from 1963 to
1995) provided enough information to calculate an effect size estimate. The estimate of effect
size used here was the d-index (Cohen, 1988), which is expressed in standard deviation units.
Following Cohen (1988), the authors viewed effect sizes of approximately .20 as small compared
to all effect sizes, but average when compared to effect sizes in some of the behavioral sciences
closely aligned with education (like child psychology).
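As a brief illustration of the d-index (the numbers below are hypothetical, not drawn from the monograph), the statistic is simply the difference between group means divided by their pooled standard deviation:

import math

# Hypothetical posttest scores for a summer school group and a comparison group.
mean_t, sd_t, n_t = 52.0, 10.0, 100    # summer school group
mean_c, sd_c, n_c = 50.0, 10.0, 100    # comparison group

pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                      / (n_t + n_c - 2))
d = (mean_t - mean_c) / pooled_sd
print(f"d-index = {d:.2f}")  # 0.20, i.e., one fifth of a standard deviation

A d-index of .20 thus means that the average treated student scored one fifth of a standard deviation above the average comparison student.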
Results of the meta-analysis:
Of the 54 studies from which an effect size could be calculated, 41 studies reported
evaluations of remedially-focused summer school programs. The 41 studies contained
information on 99 independent samples, with approximately 26,500 students across studies. The
overall effect size across these 41 evaluations was .26. Based on this estimate, and its
corresponding confidence interval, the authors concluded that “students completing remedial
summer programs can be expected to score about one fifth of a standard deviation, or between
one seventh and one quarter of a standard deviation, higher than the control group on outcome
measures” (p. 89).
Cooper et al. (2000) then went on to examine potential moderators of the effect size
estimate. Five different types of moderators were examined:
(1). Methodological moderators: type of control/comparison group; internal or
external evaluation; size of sample.
(2). Student characteristics: grade level; sex; socioeconomic status.
(3). Program context: year of the evaluation; number of years the program had
existed; size of community; number of schools; number of classes.
(4). Program features: class size; group or individualized instruction; parent
involvement; amount of instruction.
(5). Outcome characteristics: measurement delay (length of time between end of
summer school and outcome test administration); raw or norm-referenced scores;
outcome content (math vs. reading).
Regarding the methodological moderators, the type of control group was found to be a
significant predictor of effect size. Evaluations that used a one group pretest-posttest design
produced larger effect size estimates. Evaluations performed internally, rather than by an
external evaluator, also produced larger estimates. Sample size was not found to be a significant
predictor of effect size; however, for this analysis, samples of fewer than 90 students were coded
as small, and larger samples were coded as large. Finding a significant effect was unlikely using such
a gross metric.
In assessing the remaining moderator types, the impact of the significant
methodological moderators was controlled for.1 In terms of student characteristics, the authors
found evidence of a curvilinear effect for grade level, concluding that “summer programs were
most effective for the youngest and oldest students...in all cases, students in the middle grades
revealed the smallest effect sizes” (p. 64). Socioeconomic status was also a significant predictor
of effect size, and results indicated the presence of robust differences between middle and low-
income students. Middle income students clearly benefited more from summer remedial
programs. However, the authors did note that “the effect of summer school...was positive and
greater than zero for both economic groups” (p. 64). Similar findings have been reported in
several other evaluations of summer programs (see Heyns (in press) for a discussion of
socioeconomic status and summer school effects).
1 This was done using stepwise multiple regression, with the significant methodological moderators entered as the
first block. Thus, the variance accounted for by the methodological moderators was removed.
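The blockwise adjustment described in footnote 1 can be sketched as follows. The data here are simulated and the variable names hypothetical; the monograph’s actual stepwise regression and weighting scheme are not reproduced.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 99  # one row per independent sample, as in the meta-analysis

# Hypothetical study-level codings (0/1 indicators).
one_group_design = rng.integers(0, 2, n)  # one-group pretest-posttest design
internal_eval = rng.integers(0, 2, n)     # internally conducted evaluation
middle_income = rng.integers(0, 2, n)     # middle-income student sample
d = (0.20 + 0.10 * one_group_design + 0.08 * internal_eval
     + 0.10 * middle_income + rng.normal(0, 0.10, n))

# Block 1: methodological moderators enter first.
X1 = sm.add_constant(np.column_stack([one_group_design, internal_eval]))
block1 = sm.OLS(d, X1).fit()

# Block 2: the substantive moderator is added, so its coefficient is
# adjusted for variance already accounted for by the methodological block.
X2 = sm.add_constant(np.column_stack([one_group_design, internal_eval,
                                      middle_income]))
block2 = sm.OLS(d, X2).fit()
print(f"R-squared block 1: {block1.rsquared:.3f}, block 2: {block2.rsquared:.3f}")
print(f"adjusted income-group coefficient: {block2.params[-1]:.3f}")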
Evaluations of programs in their first year produced significantly larger estimates of
effect size when compared to evaluations of programs that had been operating for multiple years.
Community size was also a significant predictor of effect size, with programs conducted in rural,
suburban, or small city settings producing larger effect sizes than programs conducted in large
cities. Both programs using a smaller number of schools and programs using a smaller number
of classrooms also showed larger effects. It is logical to imagine that running a smaller summer
school program in a less populated area would be much less complex than running a homologous
program in a large city.
Several features of summer programs were significantly predictive of effect size. Smaller
class sizes were associated with larger effect sizes, as was an emphasis on individualized versus
group instruction. Programs requiring some form of parent involvement were also associated
with larger effect sizes. Regarding the amount of instruction, a curvilinear relationship was
again found. Programs containing less than 60 hours of instruction, or more than 120 hours,
showed significantly lower effect size estimates than programs ranging from 60 to 120 hours of
instruction.
Longer delays between the end of the summer program and the administration of the
posttest were associated with smaller effect sizes. In terms of the content of the outcome
measure, the findings were equivocal (different estimating methods not detailed here produced
dissimilar results). However, the authors tentatively concluded that general outcome measures
(containing a variety of content areas) showed the lowest effect sizes, and that math outcome
measures resulted in slightly larger estimates of effect size (.28) when compared to reading
outcome measures (.21). This interpretation of the data is corroborated by the research on
summer learning loss discussed previously, which appears to impact math skills more robustly
and consistently than reading skills. Therefore, one would expect math outcomes from a summer
program to produce larger effect size estimates, as more ground is lost in math during the
summer.
General conclusions emerging from the meta-analysis:
Cooper and colleagues (2000) felt they could draw several conclusions from their meta-
analysis with confidence. In particular, and perhaps most central, “summer school programs
focused on lessening or removing learning deficiencies have a positive impact on the knowledge
and skills of participants” (p. 89). Several qualifications concerning this conclusion were also
offered. Cooper et al. (2000) believe that the evidence is strong concerning the greater benefits
from summer remedial programs for middle class vs. economically disadvantaged students.
They also felt it was clear that smaller programs generally worked better, and that
individualization of instruction improved outcomes.
Several conclusions were drawn with somewhat less confidence. The authors tentatively
concluded that summer remedial programs showed a larger impact on math scores, and that
students in the lowest and highest grades benefited more than students in the middle grades.
They also tentatively concluded that “summer programs that undergo careful scrutiny for
treatment fidelity, including monitoring to insure that instruction is being delivered as prescribed,
monitoring of attendance, and removal from the evaluation of students with many absences may
produce larger effects than unmonitored programs” (pp. 96-97).
III. NEW YORK CITY’S SUMMER SCHOOL PROGRAM
Can the results of the meta-analysis by Cooper et al. (2000) help form some hypotheses
regarding the impact of New York City’s Summer School 2001 program? Clearly, New York
City’s program is very unusual given its size. The largest summer program reviewed as part of
the meta-analysis served just over 5,500 students. New York City’s program is expected to serve
over 300,000 students. Thus, based on the results of the meta-analysis, one might predict that,
because of the large number of schools and classes involved in NYC’s Summer School 2001
program, a somewhat lower city-wide effect size should be expected.
Additionally, despite the fact that attendance is to be monitored, the fidelity of
implementation cannot be truly assessed or monitored at the classroom level. Each district is
given some latitude in determining certain key components of its summer program, such as the
curriculum used. The Board of Education provides a curriculum, but districts may use funds to
purchase curriculum packages from private vendors. Other services, such as professional
development, test preparation packages, and student support services, can also be privately
contracted. Thus, each district’s Summer School 2001 program, although needing to match a set
of core components required by the BOE, can look quite different. Also, district interpretations
of how exactly to implement the BOE’s promotion criteria varied widely in Summer School
2000, producing wide variations in grade promotion rates across districts (see Summer School
2000 evaluation).2
2 For example, 31% of mandated students in CSD 7 were promoted, while 77% of mandated students in CSD 29
were promoted.
This district-to-district variability can be viewed as both an advantage and a
disadvantage. It is disadvantageous in that any conclusions drawn regarding the city-wide
impact of the Summer School 2001 program must be drawn with caution, as each district is
implementing a slightly different version of a core model. It is advantageous because each
district is given the ability to make choices that it feels may be more beneficial for the students
in its district. For example, districts expecting a high percentage of English Language
Learners can choose to purchase a remedial curriculum specifically designed for such students.
Because of the district-to-district variability inherent in the Summer School 2001 program, there
will exist 32 different versions of the core model. Examination of variations in effect size across
different districts, with careful attention to the factors that might be producing those differences,
could provide exceptionally helpful information. District-level analysis may provide the BOE
with information that would allow it to offer several different versions of the core model
that address particular types of student needs.
IV. A SIMILAR EXAMPLE FROM CHICAGO
Like New York City, Chicago recently (in 1996) implemented a revised grade
promotion policy that is linked to a summer remediation program (Roderick, Bryk,
Jacob, Easton, & Allensworth, 1999). Chicago’s new promotional policy was focused on ending
social promotion in the public schools. Initially, Chicago’s Summer Bridge program focused on
students in grades 3, 6, and 8 (it has since been expanded). Students were required to pass spring
administered standardized tests in reading and math in order to qualify for promotion to the next
grade. Students who failed to meet criteria began receiving extra attention at the end of the
regular school year, and were required to participate in the Summer Bridge program.
Standardized achievement tests were re-administered at the end of Summer Bridge. Students
who passed the second administration were promoted; those who failed were retained. Failing
students 15 or older were sent to a new group of alternative schools, called Transition Centers.
In Summer Bridge’s first year, approximately 27,000 students failed to meet promotional
criteria, and of those, 80% attended Summer Bridge.3 Approximately 38% of those who
attended passed the standardized tests in August and were promoted.4 In the second year of the
program, 1,600 students who were attending Summer Bridge for the second time failed to meet
promotional criteria again, and thus were retained for a second time. The question of what to do
about such repeat retainees will become a major issue for districts like Chicago and New York
City (Pipho, 1999). Success rates were highest for 6th and 8th graders. Evaluators interpreted
this as indicating that these older students were better able to appreciate the significance of the
threat of in-grade retention (Roderick et al., 1999).
3 Certain students were exempted from the test cut-off policy because they were enrolled in bilingual and/or special
education programs.
4 Some students who failed to meet test cut-offs were promoted based on teacher/principal recommendations.
Hispanic students were significantly more likely than African American students to be promoted despite a failing
test score.
An initial criticism of Chicago’s program was that it used scores on a standardized
exam as the sole criterion for grade promotion. After 3 years with this policy, the city’s
promotional criteria were expanded to include student grades, attendance record, and teacher perceptions of
learning growth, thus making it similar to New York City’s model for promotion decisions.
V. IMPLEMENTATION AND POLICY RECOMMENDATIONS
Numerous potential recommendations regarding how to implement summer school
programs can be derived from the research reviewed (Cooper et al., 2000). An important
consideration for program implementers is the trade-off between the length of the summer
program and the number of students served. Implementers need to consider the size and make-
up of their target populations, and what specifically these students need to achieve district goals.
Results of the meta-analysis did indicate higher success rates in districts operating summer
programs in a smaller number of schools, with a smaller number of classes. Ending programs
closer to the beginning of the next school year can prevent some learning loss associated with a
gap in programming. Planning for summer programs should begin early in order to allow
sufficient preparation time, to avoid late-arriving curriculum materials, and to ensure that hiring
and professional development for teachers and administrators are provided prior to the start of
the program. Cooper et al. (2000) also recommend that districts strive for as much continuity in
staffing from year to year as is possible.
Researchers have also recommended that summer programs maintain low teacher-student
ratios (Curry & Zyskowski, 1999). Greater outreach to and participation on the part of parents in
summer programs is encouraged by the results of Cooper et al.’s (2000) meta-analysis, and by
other research as well (Harrington-Lueker, 1998). Providing teachers with instructionally-
relevant diagnostic information about their students prior to the start of the program can assist
them in delivering more individualized instruction, which has also been shown to increase
summer program success (Cooper et al., 2000; Haenn, 1999).
Educational policy makers should continue to fund summer school programs. Research
has illustrated that summer programs can be successful in increasing academic achievement, and
there are potential long-term gains that have yet to be measured (Cooper et al., 2000). For
example, attendance in summer programs may decrease delinquency and increase student
commitment to education. Further research following summer school attendees for a number of
years after their participation is needed to assess longer term impacts.
Policy makers should also attempt to ensure that the majority of funds set aside for
summer programs are spent on instruction in reading and math. This is not meant to imply that
summer programs should not contain other elements (e.g., recreation, arts, sports, work in other
subject areas), but research seems to indicate greater effectiveness when the content of summer
programs is more targeted on increasing reading and math achievement (Cooper et al., 2000).
Also, because most summer school programs are voluntary, policy makers would benefit from
targeting funds meant to increase student participation. Transportation and food services should
be provided at each program site.
Regarding the question of the balance between district-wide and individualized school-
based control in service delivery, Cooper et al. (2000) recommended allowing a modicum of
flexibility in the implementation of summer programs, in order to allow for local communities to
address the particular needs of their student populations. They stated: “policy makers ought to
resist the temptation to micromanage programs and give local schools and teachers leeway in
how to structure and deliver programs” (p. 107). Additionally, as mentioned previously,
variations in how a core program model is implemented can provide useful information for
helping to determine which types of program delivery variations serve which types of students
best.
VI. CONCLUSION
It is clear that the need for summer school programs is likely to increase. Changes in the
composition of the American family, concerns about the competitiveness of the American
educational system, and the establishment of tougher promotional standards will continue to
prompt school districts to establish summer remedial programs. The best research evidence
available to date indicates that such summer programs can be effective in meeting their goals; it
is also clear that how beneficial summer programs are can be influenced greatly by both program
and student characteristics. However, as Cooper et al. (2000) concluded, “the general positive
effects of summer school for those students who participate are unmistakable” (p. 109).
Continued high quality research is clearly needed to help policy makers and program
implementers make the best choices possible for the needs of their particular summer school
student populations.
References
Austin, G., Rogers, B., & Walbesser, H. M. Jr. (1972). The effectiveness of summer
compensatory education: A review of the research. Review of Educational Research, 42(2), 171-
181.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ:
Erlbaum.
Cooper, H., Charlton, K., Valentine, J. C., & Muhlenbruck, L. (2000). Making the most
of summer school: A meta-analytic and narrative review. Monographs of the Society for
Research in Child Development, 65(1).
Cooper, H., Nye, B., Charlton, K., Lindsay, J., & Greathouse, S. (1996). The effects of
summer vacation on achievement test scores: A narrative and meta-analytic review. Review of
Educational Research, 66(3), 227-268.
Curry, J. & Zyskowski, G. (1999). Summer opportunities to accelerate reading (SOAR).
Austin, TX: Austin Independent School District.
Dougherty, J. W. (1981). Summer school: A new look. Bloomington, IN: Phi Delta
Kappan Educational Foundation.
Entwisle, D. R. & Alexander, K. L. (1992). Summer setback: Race, poverty, school
composition, and mathematics achievement in the first two years of school. American
Sociological Review, 57, 72-84.
Fager, J. & Richen, R. (1999). When students don’t succeed: Shedding light on grade
retention. ERIC Clearinghouse (ED#431865).
Haenn, J. F. (1999, April). Measuring achievement growth in an 18-day summer school
session. Paper presented at the annual meeting of the American Educational Research
Association, Montreal, Quebec, Canada.
Harrington-Lueker, D. (1998). Retention vs. social promotion. School Administrator,
55(7), 6-12.
Heyns, B. (in press). Summer learning. In D. L. Levinson, A. Sadovnik, & P. W.
Cookson, Jr. (Eds.), Education and Sociology: An Encyclopedia. New York: Garland
Publishing.
Heyns, B. (1986). Summer programs and compensatory education: The future of an
idea. ERIC Clearinghouse (ED#293906).
Heyns, B. (1978). Summer learning and the effects of schooling. New York: Academic
Press.
Klibanoff, L. S. & Haggart, S. A. (1981). Summer growth and the effectiveness of
summer school (Technical Report No. 8). Mountain View, CA: RMC Research Corporation.
Lyons, L. C. (1998). Meta-analysis: Methods of accumulating results across research
domains. www.mnsinc.com/solomon/MetaAnalysis.html (downloaded 5/18/01).
McCollum, P., Cortez, A., Maroney, O. H., & Montes, F. (1999). Failing our children:
Finding alternatives to in-grade retention. ERIC Clearinghouse (ED#434962).
National Commission on Excellence in Education. (1983). A nation at risk: The
imperative for educational reform. Washington, D.C.: U.S. Department of Education.
Pipho, C. (1999, September). Summer school: Rx for low performance? Phi Delta
Kappan, 81(1), 7-8.
Roderick, M., Bryk, A. S., Jacob, B.A., Easton, J. Q., & Allensworth, E. (1999). Ending
social promotion: Results from the first two years. Chicago, IL: Consortium on Chicago School
Research.
White, K. A. (1998). The heat is on as big districts expand summer school. Education
Week on the Web. www.edweek.org/ew/1998/42summer.h17 (downloaded 3/27/01).

More Related Content

BiernbaumSummerSchoolL

  • 1. 1 I. INTRODUCTION Summer school programs have been used for remediation, enrichment, acceleration, and reform purposes in the Unites States for over a century (Heyns, 1986). Originally, the academic year in the U.S. revolved around an agrarian calendar: schools opened after the harvest, and ended in time for spring planting. However, by the middle of the 19th century, as urban areas and industry expanded, children and youth who were out of school during the summer months became a public concern. The public looked to the schools to provide a solution, and in turn, schools began operating summer programs for recreation, language training, and other special needs (Heyns, 1986). In the summer of 1841, many cities in the country, including New York City, were offering summer school programs (Dougherty, 1981). During the 1950s, educators began to view summer school as a potential time to remediate and/or prevent learning deficits (Austin, Rogers, & Walbesser, 1972). Remedially- focused summer programs were established to help students meet the increasingly large number of minimum competency requirements for graduation or grade promotion. This movement received a financial push in 1965 with the passage of the Elementary and Secondary School Act (ESEA; Public Law 89-10) . Title I of this law specifically targeted funds to serve students in economically disadvantaged areas by providing supplemental educational services. When this law was renewed in 1994 (as the Improving America’s Schools Act; Public Law 103-382), Section 1001 (c) (4) specifically stated that schools could use Tittle I funds to “ensure that children have full access to effective high-quality regular school programs and receive supplemental help through extended time activities” (as quoted in Cooper, Charlton, Valentine, & Muhlenbruck, 2000). In the last decade, the emphasis on setting and meeting minimum competency requirements and ending social promotion has increased dramatically, as has the use of standardized tests to ensure that students are meeting such requirements. Such high stakes
  • 2. 2 testing has increased the interest in using some form of mandated summer school program to help students meet the higher academic standards (Pipho, 1999; White, 1998). Most of the larger cities in the U.S. are now sponsoring summer remediation programs, including large programs in Chicago, New York, Los Angeles, Boston, Denver, and the District of Columbia. It is highly likely that the need for such programs will continue to increase. In addition to the greater emphasis on tougher academic standards mentioned above, Cooper and colleagues (2000) reviewed two other national trends to support this contention. First, changes in the structure of the American family are contributing to the increased need for summer remediation. The number of children living in two parent homes where only one parent works are now in the minority. Increases in the percentage of children living in families with two working parents, or in families with only one parent, has helped to create an increased need for supplemental school services, including summer programs, and based on family demographic data, this need is likely to increase. Second, national educational policy makers have become increasingly concerned about the global competitiveness of the U.S. educational system (i.e., National Commission on Excellence in Education, 1983). Many of these concerns have focused on the relatively short U.S. academic year, when compared to school calendars in other countries. For example, students spend 174 days per year in school in Minneapolis, Minnesota, which is substantially less than the 230 days per year in Taipei, Taiwan, and the 243 days per year in Sendai, China. Educational research has also provided impetus for increasing the number and comprehensiveness of summer school programs. In-grade retention has been one standard policy for dealing with students who fail to meet grade promotion criteria. In a policy brief by McCollum and colleagues (1999), research on the use of in-grade retention in Texas was reviewed. Sixty-five out of 66 studies conducted on the effects of in-grade retention in Texas between 1991 and 1997 illustrated that such retention practices could have long-term negative impacts on children, including higher future drop-out rates, loss of interest and enthusiasm for school, and loss of self-esteem. The researchers also found that Hispanic and African-american
  • 3. 3 students were retained twice as often as Caucasian students, that approximately 40% of students asked to repeat a grade came from the lowest socioeconomic quartile, and that urban school districts showed the highest rates of in-grade retention (McCollum, Cortez, Maroney, & Montes, 1999). Research conducted in other areas of the country has drawn similar conclusions (see Harrington-Lueker, 1998 and Fager & Richen, 1999, for reviews). Educational research on summer learning loss has also increased public interest in summer school. Cooper, Nye, Charlton, Lindsay, and Greathouse (1996) conducted a meta- analysis on 39 different studies of summer learning loss, and concluded that the loss equaled approximately 1 month on a grade equivalent scale, or a tenth of a standard deviation lost relative to spring administered standardized test scores. Cooper and colleagues (1996) found that math loss was greater than reading loss, and in particular, summer break had a greater negative impact on computational skills in math and spelling skills in reading. Socioeconomic status was found to be an important factor in the extent of summer learning loss. Middle class students gained on reading recognition over the summer, while students from disadvantaged homes lost ground (Cooper et al., 1996). Interestingly, there were no significant differences in the amount of math skills lost between the two socioeconomic groups; both lost equally large amounts. Their analysis also indicated that the negative effects of summer learning loss increased with increases in grade level. In her review of research on compensatory summer school programs, Heyns (1986) devoted considerable discussion to the findings of the national Sustained Effects Study (SES; Klibanoff & Haggart, 1981). The SES was started in 1975, and for three consecutive years, collected achievement data for over 100,000 students in 300 different elementary schools across the country. In the SES, significantly greater losses from spring to fall were consistently found for compensatory education students when compared to their noncompensatory peers. As Heyns commented, the evidence presented on summer learning loss in the SES “clearly points to a consistent relative loss among the least advantaged students” (p. 24). This pattern of greater loss
  • 4. 4 in less eceonomically advantaged students paralleled Heyns (1978) own finding in the Summer Learning and the Effects of Schooling study, performed in the Atlanta public schools, as well as in a study by Entwisle and Alexander (1992) performed in Baltimore. In the Baltimore study, math achievement test scores (California Achievement Test) of Caucasian and african-american students were collected over a three year period. These two groups of students showed no difference in average CAT scores in the first grade. By the second grade, there was a 10-point disparity; by the third grade, this gap had increased to 14 points. The authors hypothesized that summer learning loss may be one of the most central factors contributing to the well documented achievement test score gap between these two populations. Many of these researchers have suggested that these differences between advantaged and disadvantaged populations may be accounted for by the existence of fewer educational resources in the homes of disadvantaged students (Heyns, 1987; Entwisle & Alexander, 1992; Cooper et al., 1996). Whatever the cause of the disparity is, the research is clear: disadvantaged students show greater levels of summer learning loss. In summary, the use of summer programs in the U.S. to remediate learning deficiencies has a long history. Many summer programs have been targeted at disadvantaged populations, and research on summer learning loss supports this strategy. Disadvantaged students lose greater ground during the summer in reading, and in some studies, in mathematics as well. With the country’s recent increased emphasis on high stakes testing and the establishment of tougher minimum competency requirements for grade promotion and graduation, many educational policy makers are pinning their hopes on mandated summer school programs to provide the necessary “Rx for low performance” (Pipho, 1999). However, until very recently, research on the effectiveness of such summer programs has been limited. II. EFFECTIVENESS OF SUMMER SCHOOL
  • 5. 5 There are well over 100 evaluations of summer school programs available in the literature. However, most of these evaluations have not been published in peer reviewed journals, many use very different methods to measure the impact of summer school, and have done so on a variety of different types of summer school programs with various goals. Additionally,until very recently, reviews of this literature have been few and far between, and have not employed the most current methods available for integrating research results across studies. All of this changed with the publication of Making the most of summer school: A meta- analytic and narrative review, by Cooper and colleagues (2000). These researchers employed the most current meta-analytic techniques available to provide the best possible integration of 93 different evaluations of the impact of summer school programs. Meta-analysis is a set of computational techniques that can be used to combine the statistical results from any number of independent experimental investigations to produce an overall estimate of effect size. Summary statistics (t-tests, mean differences, correlations) from independent studies are combined using one of several different computational formulas available. The key assumption behind these techniques, is that each independent investigation provides a unique estimate of the particular experimental effect of interest, and that by combining a group of such estimates, a more accurate representation of the true population effect can be obtained (Lyons, 1998). The ability to quantify the overall effect size of a group of independent studies provides a powerful tool for combining anbd summarizing research results. The meta-analysis by Cooper et al. (2000) on the impact of summer school programs posed three questions: (1). What is the overall impact of summer school? (2). Which characteristics are associated with more and less successful summer programs?
  • 6. 6 (3). What recommendations for improving summer school programs emerge from careful review of the existing research? They reviewed several different goals of summer programming, including remediation, enrichment, schedule flexibility for working students, early graduation opportunities, specialized programming for gifted/talented students, and specialized programing for learning disabled students. As New York City’s Summer School 2001 program is primarily remedially-focused (a program intended to help students meet minimum requirements for grade promotion or graduation), results from the meta-analysis pertinent to these types of programs will be the focus here. There are a number of methodological difficulties that arise when attempting to compare the results of multiple studies in any research area. In the summer school literature, central difficulties include the multiple different criteria employed for establishing program “success”, the multiple different measures used, waning and/or inconsistent attendance in summer programs, and the nature of the comparison samples. In some evaluations, there is no comparison sample. In others, a non-equivalent comparison group is used. In many studies, students serve as their own comparison group through multiple test administrations (the one group pre-post test design). There are both benefits and drawbacks to the one group pre-post test design. As Cooper and colleagues (2000) commented, the strength of this design “lies in our knowledge of the comparability of the pretest and posttest groups, because they are the same students. It’s weakness lies in the confound of the treatment with the passage of time between testings” (p. 13-14). Regression to the mean is an important concern with this type of design, if the students being studied were chosen because of extremely low test scores, and this is often the case in summer school research. Thus, the authors believe that the results of studies employing this design are necessarily equivocal. The ideal design, which was very rarely used in the literature reviewed, is random assignment. In a small number of studies, summer school students were chosen by lottery, and students in summer school, were compared to students who were not
  • 7. 7 chosen. However, the impact of these methodological confounds can be assessed in meta- analysis, and then controlled for. Meta-analytic procedures: To select the 93 studies used in this meta-analysis, searches were made in both the ERIC (January 1966 to August of 1998) and PsychINFO (January 1967 to August of 1998) databases for relevant articles. Selection criteria were employed, and included: (1). Program had to take place only during the summer (2). Program could include students anywhere from K-12. (3). Program could contain both general and/or special populations. (4). An evaluation was performed and empirical data was available. (5). Goals of the program had to include either improved performance and/or the reduction of delinquency; (6). Comparisons had to be based on the one group pretest-posttest design, or between a group of students who had attended with a group who had not. Following the selection of the 93 studies, each study was coded on 53 different characteristics, which could be grouped into the following categories: (1). Aspects of the report: internal or external evaluator (2). Aspects of the research design: nature of the comparison sample; comparability of the pretest and posttest; sample size; whether or not the evaluation included an assessment of the fidelity of program implementation; monitoring of student attendance and drop-out rate. (3). Aspects of the sample: grades, socioeconomic status, sex, achievement level. (4). Aspects of the program: program goals; number of years the program had been running; community size; number of students; number of schools; number of classes; voluntary or mandatory participation; average class size; teacher
  • 8. 8 certification; parental involvement; daily length of the program; total number of days; group or individualized curriculum; subject areas covered. (5). Aspects of the outcome measure: length of time between the end of the summer program and the administration of the outcome measure; type of outcome measure; whether or not the outcome measure was standardized; whether the analysis was based on raw or scaled scores; means; standard deviations; effect size estimates when stated or derived. The monograph authors served as coders. Coder reliability was assessed and all disagreements were conferenced. Of the 93 original articles, 54 reports (dating from 1963 to 1995) provided enough information to calculate an effect size estimate. The estimate of effect size used here was the d-index (Cohen, 1988), which is expressed in standard deviation units. Following Cohen (1998), the authors viewed effect sizes of approximately .20 as small compared to all effect sizes, but average when compared to effect sizes in some of the behavioral sciences closely aligned with education (like child psychology). Results of the meta-analysis: Of the 54 studies from which an effect size could be calculated, 41 studies reported evaluations of remedially-focused summer school programs. The 41 studies contained information on 99 independent samples, with approximately 26,500 students across studies. The overall effect size across these 41 evaluations was .26. Based on this estimate, and its corresponding confidence interval, the authors concluded that “students completing remedial summer programs can be expected to score about one fifth of a standard deviation, or between one seventh and one quarter of a standard deviation, higher than the control group on outcome measures” (p. 89). Cooper et al. (2000) then went on to examine potential moderators of the effect size estimate. Five different types of moderators were examined:
  • 9. 9 (1). Methodological moderators: type of control/comparison group; internal or external evaluation; size of sample. (2). Student characteristics: grade level; sex; socioeconomic status. (3). Program context: year of the evaluation; number of years the program had existed; size of community; number of schools; number of classes. (4). Program features: class size; group or individualized instruction; parent involvement; amount of instruction. (5). Outcome characteristics: measurement delay (length of time between end of summer school and outcome test administration); raw or norm-referenced scores; outcome content (math vs. reading) Regarding the methodological moderators, the type of control group was found to be a significant predictor of effect size. Evaluations that used a one group pretest-posttest design produced larger effect size estimates. Evaluations performed internally, rather than by an external evaluator, also produced larger estimates. Sample size was not found to be a significant predictor of effect size; however, for this analysis, samples less than 90 were coded as small, and samples larger than 90 were coded as large. Finding a significant effect was unlikely using such a gross metric. In assessing the impact of the next moderator types, the impact of the significant methodological moderators was controlled for1 . In terms of student characteristics, the authors found evidence of a curvilinear effect for grade level, concluding that “summer programs were most effective for the youngest and oldest students...in all cases, students in the middle grades revealed the smallest effect sizes” (p. 64). Socioeconomic status was also a significant predictor of effect size, and results indicated the presence of robust differences between middle and low- income students. Middle income students clearly benefited more from summer remedial programs. However, the authors did note that “the effect of summer school...was positive and 1 This was done using stepwise multiple regression, with the significant methodological moderators entered as the first block. Thus, the variance accounted for by the methodological moderators was removed.
  • 10. 10 greater than zero for both economic groups” (p. 64). Similar findings have been reported in several other evaluations of summer programs (see Heyns (in press) for a discussion of socioeconomic status and summer school effects). Evaluations of programs in their first year produced significantly larger estimates of effect size when compared to evaluations of programs that had been operating for multiple years. Community size was also a significant predictor of effect size, with programs conducted in rural, suburban, or small city settings producing larger effect sizes than programs conducted in large cities. Both programs using a smaller number of schools and programs using a smaller number of classrooms also showed larger effects. It is logical to imagine that running a smaller summer school program in a less populated area would be much less complex that running a homologous program in a large city. Several features of summer programs were significantly predictive of effect size. Smaller class sizes were associated with larger effect sizes, as was an emphasis on individualized versus group instruction. Programs requiring some form of parent involvement were also associated with larger effect sizes. Regarding the amount of instruction, a curvilinear relationship was again found. Programs containing less than 60 hours of instruction, or more than 120 hours, showed significantly lower effect size estimates than programs ranging from 60 to 120 hours of instruction. Longer delays between the end of the summer program and the administration of the posttest were associated with smaller effect sizes. In terms of the content of the outcome measure, the findings were equivocal (different estimating methods not detailed here produced dissimilar results). However, the authors tentatively concluded that general outcome measures (containing a variety of content areas) showed the lowest effect sizes, and that math outcome measures resulted in slightly larger estimates of effect size (.28) when compared to reading outcome measures (.21). This interpretation of the data is corroborated by the research on summer learning loss discussed previously, which appears to impact math skills more robustly
General conclusions emerging from the meta-analysis:

Cooper and colleagues (2000) felt they could draw several conclusions from their meta-analysis with confidence. In particular, and perhaps most central, "summer school programs focused on lessening or removing learning deficiencies have a positive impact on the knowledge and skills of participants" (p. 89). Several qualifications concerning this conclusion were also offered. Cooper et al. (2000) believe that the evidence is strong concerning the greater benefits of summer remedial programs for middle-class versus economically disadvantaged students. They also felt it was clear that smaller programs generally worked better, and that individualization of instruction improved outcomes.

Several conclusions were drawn with somewhat less confidence. The authors tentatively concluded that summer remedial programs showed a larger impact on math scores, and that students in the lowest and highest grades benefited more than students in the middle grades. They also tentatively concluded that "summer programs that undergo careful scrutiny for treatment fidelity, including monitoring to insure that instruction is being delivered as prescribed, monitoring of attendance, and removal from the evaluation of students with many absences may produce larger effects than unmonitored programs" (pp. 96-97).

III. NEW YORK CITY'S SUMMER SCHOOL PROGRAM

Can the results of the meta-analysis by Cooper et al. (2000) help form some hypotheses regarding the impact of New York City's Summer School 2001 program? Clearly, New York City's program is very unusual given its size.
The largest summer program reviewed as part of the meta-analysis served just over 5,500 students; New York City's program is expected to serve over 300,000. Thus, based on the results of the meta-analysis, the large number of schools and classes involved in NYC's Summer School 2001 program suggests that a somewhat lower city-wide effect size should be expected. Additionally, although attendance is to be monitored, the fidelity of implementation cannot be truly assessed or monitored at the classroom level.

Each district is given some latitude in determining certain key components of its summer program, such as the curriculum used. The Board of Education provides a curriculum, but districts may use funds to purchase curriculum packages from private vendors. Other services, such as professional development, test preparation packages, and student support services, can also be privately contracted. Thus, each district's Summer School 2001 program, although it must match a set of core components required by the BOE, can look quite different. Also, district interpretations of exactly how to implement the BOE's promotion criteria varied widely in Summer School 2000, producing wide variations in grade promotion rates across districts (see the Summer School 2000 evaluation).²

² For example, 31% of mandated students in CSD 7 were promoted, while 77% of mandated students in CSD 29 were promoted.

This district-to-district variability can be viewed as both an advantage and a disadvantage. It is disadvantageous in that any conclusions regarding the city-wide impact of the Summer School 2001 program must be drawn with caution, as each district is implementing a slightly different version of a core model. It is advantageous because each district is able to make choices that it feels may be more beneficial for its students. For example, districts expecting a high percentage of English Language Learners can choose to purchase a remedial curriculum specifically designed for such students. Because of the district-to-district variability inherent in the Summer School 2001 program, there will exist 32 different versions of the core model. Examination of variations in effect size across different districts, with careful attention to the factors that might be producing those differences, could provide exceptionally helpful information. District-level analysis may provide the BOE with information that would allow it to offer several different versions of the core model that address particular types of student needs.
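One way to make such a district-level examination concrete is sketched below: given per-student pre- and posttest scores, a standardized gain can be computed for each district and the districts compared. The data, column names, and gain metric here are all assumptions for illustration, not a description of the BOE's actual data or evaluation method.

```python
import numpy as np
import pandas as pd

# Hypothetical student-level records from a 32-district summer program.
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "district": rng.integers(1, 33, n),
    "pretest":  rng.normal(50, 10, n),
})
df["posttest"] = df["pretest"] + rng.normal(2.5, 8, n)   # simulated summer gain

def gain_effect_size(g: pd.DataFrame) -> float:
    """Standardized gain: mean pre-to-post change over the SD of the change."""
    change = g["posttest"] - g["pretest"]
    return change.mean() / change.std(ddof=1)

by_district = df.groupby("district")[["pretest", "posttest"]].apply(gain_effect_size)
print(by_district.sort_values(ascending=False).head())   # strongest districts first
```

In practice, the interesting step would follow this computation: relating the district-level estimates to program features (curriculum vendor, class size, parent involvement) of the kind the meta-analysis identified as moderators.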
IV. A SIMILAR EXAMPLE FROM CHICAGO

Like New York City, the city of Chicago has also recently (1996) implemented a revised grade promotion policy linked to a summer remediation program (Roderick, Bryk, Jacob, Easton, & Allensworth, 1999). Chicago's new promotional policy focused on ending social promotion in the public schools. Initially, Chicago's Summer Bridge program focused on students in grades 3, 6, and 8 (it has since been expanded). Students were required to pass spring-administered standardized tests in reading and math in order to qualify for promotion to the next grade. Students who failed to meet the criteria began receiving extra attention at the end of the regular school year and were required to participate in the Summer Bridge program. The standardized achievement tests were re-administered at the end of Summer Bridge. Students who passed the second administration were promoted; those who failed were retained. Failing students 15 or older were sent to a new group of alternative schools, called Transition Centers.

In Summer Bridge's first year, approximately 27,000 students failed to meet promotional criteria, and of those, 80% attended Summer Bridge.³ Approximately 38% of those who attended passed the standardized tests in August and were promoted.⁴ In the second year of the program, 1,600 students who were attending Summer Bridge for the second time failed to meet promotional criteria again, and thus were retained for a second time.

³ Certain students were exempted from the test cut-off policy because they were enrolled in bilingual and/or special education programs.
⁴ Some students who failed to meet test cut-offs were promoted based on teacher/principal recommendations. Hispanic students were significantly more likely than African-American students to be promoted despite a failing test score.
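The first-year figures above imply a simple attendance-to-promotion funnel, sketched below. The derived counts are rounded arithmetic based on the reported percentages, not figures reported directly by Roderick et al. (1999).

```python
# Approximate first-year Summer Bridge funnel (percentages from Roderick et al., 1999).
failed_spring = 27_000        # students who missed the spring test cut-offs
attendance_rate = 0.80        # share of those students who attended Summer Bridge
pass_rate_august = 0.38       # share of attendees passing the August retest

attended = failed_spring * attendance_rate          # ~21,600 attendees
promoted_by_test = attended * pass_rate_august      # ~8,200 promoted via the retest
did_not_pass = attended - promoted_by_test          # retained, or promoted by waiver (see footnote 4)

print(f"Attended Summer Bridge:   {attended:,.0f}")
print(f"Promoted via August test: {promoted_by_test:,.0f}")
print(f"Did not pass the retest:  {did_not_pass:,.0f}")
```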
The question of what to do about such repeat retainees will become a major issue for districts like Chicago and New York City (Pipho, 1999). Success rates were highest for 6th and 8th graders; evaluators interpreted this as indicating that these older students were better able to appreciate the significance of the threat of in-grade retention (Roderick et al., 1999). An initial criticism of Chicago's program was that it used scores on a standardized exam as the sole criterion for grade promotion. After three years with this policy, the city's promotional criteria were expanded to include student grades, attendance records, and teacher perceptions of learning growth, thus making it similar to New York City's model for promotion decisions.

V. IMPLEMENTATION AND POLICY RECOMMENDATIONS

Numerous potential recommendations regarding how to implement summer school programs can be derived from the research reviewed (Cooper et al., 2000). An important consideration for program implementers is the trade-off between the length of the summer program and the number of students served. Implementers need to consider the size and make-up of their target populations, and what specifically these students need in order to achieve district goals. Results of the meta-analysis did indicate higher success rates in districts operating summer programs in a smaller number of schools, with a smaller number of classes. Ending programs closer to the beginning of the next school year can prevent some of the learning loss associated with a gap in programming. Planning for summer programs should begin early in order to allow sufficient preparation time, to avoid late-arriving curriculum materials, and to ensure that hiring and professional development for teachers and administrators are completed prior to the start of the program. Cooper et al. (2000) also recommend that districts strive for as much continuity in staffing from year to year as possible.

Researchers have also recommended that summer programs maintain low teacher-student ratios (Curry & Zyskowski, 1999).
Greater outreach to, and participation by, parents in summer programs is encouraged by the results of Cooper et al.'s (2000) meta-analysis, and by other research as well (Harrington-Lueker, 1998). Providing teachers with instructionally relevant diagnostic information about their students prior to the start of the program can help them deliver more individualized instruction, which has also been shown to increase summer program success (Cooper et al., 2000; Haenn, 1999).

Educational policy makers should continue to fund summer school programs. Research has illustrated that summer programs can be successful in increasing academic achievement, and there are potential long-term gains that have yet to be measured (Cooper et al., 2000). For example, attendance in summer programs may decrease delinquency and increase student commitment to education. Further research following summer school attendees for a number of years after their participation is needed to assess longer-term impacts.

Policy makers should also attempt to ensure that the majority of funds set aside for summer programs are spent on instruction in reading and math. This is not meant to imply that summer programs should not contain other elements (e.g., recreation, arts, sports, work in other subject areas), but research seems to indicate greater effectiveness when the content of summer programs is more targeted on increasing reading and math achievement (Cooper et al., 2000). Also, because most summer school programs are voluntary, policy makers would benefit from targeting funds meant to increase student participation. Transportation and food services should be provided at each program site.

Regarding the question of the balance between district-wide and individualized school-based control of service delivery, Cooper et al. (2000) recommended allowing a modicum of flexibility in the implementation of summer programs, in order to allow local communities to address the particular needs of their student populations. They stated: "policy makers ought to resist the temptation to micromanage programs and give local schools and teachers leeway in how to structure and deliver programs" (p. 107). Additionally, as mentioned previously, variations in how a core program model is implemented can provide useful information for helping to determine which types of program delivery variations serve which types of students best.
VI. CONCLUSION

It is clear that the need for summer school programs is likely to increase. Changes in the composition of the American family, concerns about the competitiveness of the American educational system, and the establishment of tougher promotional standards will continue to prompt school districts to establish summer remedial programs. The best research evidence available to date indicates that such summer programs can be effective in meeting their goals; it is also clear that the benefits of summer programs can be greatly influenced by both program and student characteristics. However, as Cooper et al. (2000) concluded, "the general positive effects of summer school for those students who participate are unmistakable" (p. 109). Continued high-quality research is clearly needed to help policy makers and program implementers make the best choices possible for the needs of their particular summer school student populations.
References

Austin, G., Rogers, B., & Walbesser, H. M., Jr. (1972). The effectiveness of summer compensatory education: A review of the research. Review of Educational Research, 42(2), 171-181.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.

Cooper, H., Charlton, K., Valentine, J. C., & Muhlenbruck, L. (2000). Making the most of summer school: A meta-analytic and narrative review. Monographs of the Society for Research in Child Development, 65(1).

Cooper, H., Nye, B., Charlton, K., Lindsay, J., & Greathouse, S. (1996). The effects of summer vacation on achievement test scores: A narrative and meta-analytic review. Review of Educational Research, 66(3), 227-268.

Curry, J., & Zyskowski, G. (1999). Summer opportunities to accelerate reading (SOAR). Austin, TX: Austin Independent School District.

Dougherty, J. W. (1981). Summer school: A new look. Bloomington, IN: Phi Delta Kappa Educational Foundation.
Entwisle, D. R., & Alexander, K. L. (1992). Summer setback: Race, poverty, school composition, and mathematics achievement in the first two years of school. American Sociological Review, 57, 72-84.

Fager, J., & Richen, R. (1999). When students don't succeed: Shedding light on grade retention. ERIC Clearinghouse (ED#431865).

Haenn, J. F. (1999, April). Measuring achievement growth in an 18-day summer school session. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Quebec, Canada.

Harrington-Lueker, D. (1998). Retention vs. social promotion. School Administrator, 55(7), 6-12.

Heyns, B. (in press). Summer learning. In D. L. Levinson, A. Sadovnik, & P. W. Cookson, Jr. (Eds.), Education and Sociology: An Encyclopedia. New York: Garland Publishing.

Heyns, B. (1986). Summer programs and compensatory education: The future of an idea. ERIC Clearinghouse (ED#293906).

Heyns, B. (1978). Summer learning and the effects of schooling. New York: Academic Press.

Klibanoff, L. S., & Haggart, S. A. (1981). Summer growth and the effectiveness of summer school (Technical Report No. 8). Mountain View, CA: RMC Research Corporation.
Lyons, L. C. (1998). Meta-analysis: Methods of accumulating results across research domains. www.mnsinc.com/solomon/MetaAnalysis.html (downloaded 5/18/01).

McCollum, P., Cortez, A., Maroney, O. H., & Montes, F. (1999). Failing our children: Finding alternatives to in-grade retention. ERIC Clearinghouse (ED#434962).

National Commission on Excellence in Education. (1983). A nation at risk: The imperative for educational reform. Washington, D.C.: U.S. Department of Education.

Pipho, C. (1999, September). Summer school: Rx for low performance? Phi Delta Kappan, 81(1), 7-8.

Roderick, M., Bryk, A. S., Jacob, B. A., Easton, J. Q., & Allensworth, E. (1999). Ending social promotion: Results from the first two years. Chicago, IL: Consortium on Chicago School Research.

White, K. A. (1998). The heat is on as big districts expand summer school. Education Week on the Web. www.edweek.org/ew/1998/42summer.h17 (downloaded 3/27/01).