Colorado Governor Bill Owens has never been a fan of what he terms "the public education establishment."
As a state senator, he carried and passed legislation that legalized home schooling, established charter schools and instituted school choice, and he's been an unabashed advocate of vouchers his entire political career.
And he has devoted a large share of his first year as governor of Colorado to hammering home the message that public education is doing a dismal job of educating kids.
The evidence for this claim is plain as day, according to Owens: low statewide test scores. Only half of last year's fourth-graders achieved "proficient or better" on the state's required reading test. Less than a third did so in the writing test.
To Owens, these results are nothing less than proof of "an epidemic of failure."
"I do not want to mince words," he said after the latest round of results were announced last September. "We are facing a crisis of public education in Colorado." It's time, he said, to "crack down" on the schools, the teachers and "the education bureaucracy."
But while Owens cites low state assessment scores, a number of educational professionals who have extensively researched the tests say the self-proclaimed "education governor" is dead wrong.
Colorado public schools, they say, are actually providing more educational opportunity at higher standards than ever before, and so are the nation's schools as a whole.
The first step in understanding their conclusion is to understand how public education has changed over the past quarter century.
Times have changed
A generation ago, low-skill, high-wage jobs in steel, rubber and automobile manufacturing abounded. Students who did poorly, or even dropped out of school, still had a chance at a decent standard of living.
Those jobs, though, are long gone. They've been replaced by industries that require higher skill levels, and this change in the job market has drastically changed the way Americans think about public education.
A generation ago, schools brought only the kids we used to call upper track or college bound to a high academic skill level. Today, we're asking schools to bring every kid to a "proficient or better" standard.
This, as Patricia Graham, former dean of the Harvard Graduate School of Education, noted in the Nov.-Dec. 1999 issue of Harvard Magazine (an alumni publication), "is the most radical request that has ever been made of American schools in our century: to bring all our children to a high level of academic skills. The schools' business in the past was to bring some fraction of children to a high level of skills -- but certainly not all."
It was this rise in expectation that gave birth to the standards-based education movement that swept public education in the early 1990s.
Almost every state has now adopted standards that spell out what every kid is expected to know and be able to do in every subject, and those states have set up testing programs to assess whether these standards are being met.
The testing vehicle adopted by Colorado is the Colorado Student Assessment Program, or CSAP. The standards and tests were determined by the Colorado Department of Education in close consultation with the state's teachers, and it is these tests that Owens cites when he proclaims "a crisis state of public education in Colorado."
A careful look into what the CSAPs actually ask kids to know and do, though, shows why this is, in the minds of many, a manufactured crisis.
Scoring the CSAPs
The 1998 CSAP required fourth-graders to read a story about a friendship between a painter and a young boy, and then write an essay during three 45-minute periods in three days about the setting, characters, plot and ending.
In the reading portion of the test, the students had to recognize the author's point of view, predict and draw conclusions about the story, differentiate between fact and opinion, and formulate questions about the story.
The writing portion required the students to use conventional grammar, punctuation, capitalization, spelling and sentence structure, write for a variety of purposes and audiences, show application of thinking skills, and discuss the story as a record of human experience.
Each essay got one of four grades: Advanced, Proficient, Partially Proficient or Unsatisfactory. Keeping in mind that only a third of the kids scored proficient or better during the past three years, consider the skill level required of students in each of these categories.
Advanced: The student's writing must include a thorough summary, demonstrate insight into the text, perform nonlinear sequencing, recognize connections between details and ideas, and use context clues to understand words with abstract meaning.
Proficient: The essay must include a summary with factual support and demonstrate that the student understands context and visual cues, can locate main ideas, recognize inferences, understand prefixes and suffixes, and discriminate, sequence and recall facts.
Partially proficient: The essay must use context clues to comprehend word meanings, and locate details and recall them to answer questions.
Unsatisfactory: The essay shows minimal and very general comprehension of a text that has substantial content.
These, obviously, are far from mediocre standards. They require a fairly sophisticated command of the language, and they raise the question: how many Colorado grownups would have performed at a "proficient or better" level back when they were nine-year-olds, or even now?
The answer is suggested in the results of a District 11 experiment conducted last November in which 16 prominent Colorado Springs adults -- anti-tax crusader Douglas Bruce, Economic Development Council executive director Rocky Scott, City Councilman Ted Eastburn, Urban League head Jerome Page and Gazette columnist Ralph Routen among them -- tried their hand at an abbreviated version of the fourth-grade CSAP. The results were published in The Gazette.
This test used a combination of example questions released by the state Department of Education and actual questions from past tests. The results were scored by the District 11 Department of Planning, Research and Evaluation using the exact same standards and methodology used for the actual fourth-grade CSAP.
Among the 16 tested were individuals with law, medical and graduate degrees, and a professional writer. No one got all the questions right; scores ranged from 61 to 95 percent. And like the fourth-graders, the adults scored lower on the writing portion than on the reading portion.
D-11 Director of Assessment Bill Veitch reports that one of the writing test takers gave the answer: "I can't do this."
Given that college-educated adults are unable to ace an abbreviated version of the CSAP, it hardly seems a "crisis" that nine-year-olds struggle with the three-day version.
These results -- both the kids' and the adults' -- illustrate less "an epidemic of failure" than the fact that a much higher skill level is required to score "proficient or better" on the CSAPs than on the traditional standardized assessments.
The traditional standardized tests rely on multiple-choice questions that require selection of one of four provided answers (the filling in of a bubble). The CSAPs are far tougher. They require the student to initiate thought, apply knowledge and develop ideas at length.
As Veitch recently noted, the low CSAP scores of the past three years reflect our rise in expectations more than a crisis-level failure of performance.
The meaning of scores
The difficulty of the CSAPs has also been noted by Lorrie Shepard, president of the American Educational Research Association and professor of education at the University of Colorado Boulder.
"We've set the bar very high," said Shepard. "In the writing test, the proficiency level is set at the national 70th percentile. To put this another way, the student who scores 'proficient or better' in the writing CSAP is performing better than 70 percent of fourth-graders nationwide. It is possible for a student who is sub-proficient by Colorado standards to be writing above the national norm."
Colorado teachers, in short, are being asked to bring every student in Colorado to a skill level 20 percentile points above the national norm.
Shepard also found something very interesting when she compared the Colorado reading test to the one in Texas. In Colorado, the standard for "proficiency" is twice as high.
In Texas, accordingly, 75 percent of the students score at "proficient or better." People in Texas hear that figure and think their schools are doing a slam-bang job. When, in Colorado, only 50 percent of the students score "proficient or better," people perceive an epidemic of failure and demand crackdowns.
This is a problem at the heart of the education debate in Colorado: what assessment scores mean -- what they actually measure -- is not as obvious and clear-cut as critics of public education assume.
A striking example is the contradictory results of the 1998 reading tests administered by the International Assessment of Educational Progress (IAEP) and its American counterpart, the National Assessment of Educational Progress (NAEP).
The IAEP results ranked U.S. fourth-graders second in the world (behind Finland), but its NAEP counterpart graded only 20 percent of U.S. fourth-graders "proficient or better."
In other words, American fourth-graders were assessed to be world-class and inept -- simultaneously.
Three separate U.S. agencies launched independent investigations to determine how these tests could produce such radically different results. All three concluded that the American test had set proficiency standards so high as to invalidate outcomes.
Nevertheless, most people looking at the international scores will conclude that our schools are doing a terrific job. Looking at the American scores, they'll demand a crackdown. Yet the same body of students was tested in both cases.
"High standards are to be encouraged," said Shepard, "and it should be our aim to raise students to a higher standard, not lower the standards to achieve higher scores. But we have to keep it in mind just how high we've set the bar in these tests. There is no crisis or epidemic of failure."
A score is not a score is not a score
These examples starkly illustrate how assessment scores, in and of themselves, don't reliably mirror how well students, teachers and schools are performing.
This was evident throughout the late '80s and early '90s when critics of public education lambasted the schools for a 35-point drop in SAT (Scholastic Aptitude Test) averages over 30 years.
The Sandia Laboratories (a branch of the Department of Energy) investigated the cause of this decline by researching two decades of assessment testing. They found that the drop in SAT average did not reflect a drop in overall performance level. Instead it reflected, they concluded, "the fact that fewer students taking the test in recent years have been in the top 20 percent of their class. More students from the bottom half are taking it because more people in America are aspiring to achieve a college education than ever before."
In four of the five top-scoring states, significantly, only 10 percent of the state's high school seniors took the SAT. In Connecticut, Massachusetts, New Hampshire, New Jersey, New York and Rhode Island, 70 percent of the seniors took it.
This suggests that the "meaning" of assessment scores is not as clear-cut as we presume. Higher scores don't necessarily mean better education. Attempts to bring every student to a high level of achievement can actually produce a decline in assessment averages.
And it's crucial to understand this in light of Governor Owens' push for a massive escalation in assessment testing.
Owens wants to expand reading assessments to students in grades 3 through 9 by 2001, writing assessments to grades 3, 5, 6, 8 and 9 by 2002, and math assessments to grades 5 through 9 by 2002.
He's also pushing for mandatory ACT (American College Testing) assessment for every 11th-grader in the state. This will make all non-college-bound students take the test and thereby, ironically, will lower, not raise, the Colorado average, reaffirming his sense of crisis.
At the same time, Owens -- with the full backing of the Republican legislature -- is pushing hard for increasingly high-stakes consequences for low CSAP results. More and more is riding on scoring high.
His proposed crackdown includes a measure featuring possible loss of accreditation for any school that fails to bring 80 percent of its students to a "proficient or better" level, or that fails to register a 25 percent rise in scores over three years.
It also includes a "Report Card" that would grade all 1,568 public schools in the state on a bell curve, the top eight percent (122 schools) getting an automatic A, the bottom two percent (30) getting an automatic F. Every school given an F would be converted into a charter school, with every A school given the option to convert. Given that there will be dozens of F (and possibly A) schools turned into charter schools every year, this would result in a dramatic increase in the number of charter schools (there at 62 at present), and fewer traditional public schools.
The trend of more assessment testing with increasingly high-stakes consequences is spreading nationwide. A recent study by the Washington-based Education Week and the Pew Charitable Trusts found that all but two states now have mandatory assessment programs, on which $200 million is spent annually -- twice the amount spent in 1990.
Ironically, more and more funds, resources, time and effort are being poured into assessment testing, often at the expense of fundamental classroom instruction and student-teacher interaction.
"I think we've gone berserk in this country with standardized testing" says Boston Arts Academy headmaster Linda Nathan of the trend. "Thirty-six thousand teachers could have been hired in [Massachusetts] with the money we just spent on the MCAS [the Massachusetts Comprehensive Assessment System, a new series of statewide tests administered in fourth, eighth and tenth grades]."
A recent study by the Mid-continent Regional Educational Laboratory concluded, after examining more than 250 standards in 14 subject areas nationwide, that the national standards movement is overwhelming teachers and students with too many requirements.
"The standards movement," said McREL senior fellow Robert Marzano of what the study found, "has produced far too much for kids to know in the available time they have. It's just impossible to implement all of the standards you find in most state documents."
The schools, though, are feeling pressure to deliver scores. And some are getting desperate.
Last April, the deputy superintendent of an Austin, Texas school district was indicted on charges of altering government records to raise his district's state assessment scores.
Last December, 54 New York City teachers and two principals were accused of helping kids cheat on state assessment tests in 32 schools by supplying right answers, "correcting" tests and giving practice tests containing questions from the actual exam.
Last spring, a group of Chicago students intentionally failed portions of the state assessment tests to protest "test mania."
The coming bashing
How are state mandates for more testing impacting local educators? We asked two teachers with 34 years of combined classroom experience.
Eric McNeil teaches math at Mitchell High School.
He has no problem with CSAP testing per se. Contrary to some educators' criticisms of assessment testing, he thinks the tests have tremendous educational value. "It's changed the way I teach," he said in a recent interview. "It connects the learning process to real life by shifting the orientation from rote memorization to problem solving and application of facts."
McNeil described at length how the entire District 11 math curriculum has been overhauled and brought into alignment with CSAP testing (math CSAPs begin for tenth-graders this spring). "We're totally on board," he said. "We've invested tons of time and money into this, and I don't mind a bit because it has excellent educational value."
(The Colorado Education Association, the state teachers' union, has supported state standards and assessments from the outset. "We totally support high academic achievement standards," said CEA's Deborah Faillin, "and we've resisted every effort made to lower the standards or make them optional.")
McNeil, though, is feeling tremendous anxiety over the increasingly high-stakes emphasis on scores for scores' sake. He's convinced that it's going to result in widespread teacher bashing.
"We all see what's coming," he said. "We're going to be evaluated on the basis of CSAP scores, and we're going to get hammered."
McNeil is exactly right, if a recent letter to the editor in The Gazette is any indication. The author, Renee Wood, claims that teachers are overpaid, underworked and pampered, and deserve en masse firings for how kids are scoring on the state tests.
"If a professional turned in the job performance that our public schools do," said Wood, speaking of the most recent round of state assessment scores, "they would simply be out of a job. If a stockbroker lost even a small percentage of his clients (dropout rate) he would be fired. If carpenters built houses that wouldn't stand (literaracy rate, national test scores) they would promptly be out of business. Regardless of a teacher's performance or lack of same, they are assured a job with all its benefits."
Gazette editorials have recently espoused similar sentiments, admonishing teachers and "the public school establishment" to stop "bawling," "pouting" and "whining" about how hard they have it and start doing a good job for once.
What angers McNeil about statements like these is that so many of the kids who'll be taking these tests are absolutely indifferent about their education. Teachers aren't in the same position as business and corporate professionals in producing "results." All too often, he said, teachers are working with students who simply don't care. And attempts to make them care are met with glassy "so what?" stares.
"A third of last year's freshmen class at Mitchell," he said, "are repeat freshmen this year. My freshman algebra class had a 50 percent failure rate last year, and it's 35 percent this year. That level of failure goes way beyond inability to do the work. The sad fact is that a big chunk of kids are just fu-ged-aboud-it. They don't care. Period."
McNeil cares, though, and deeply. With an M.A. from Harvard, he's worked hard to hone his teaching skills, accumulating 80 credits of college and developmental instruction beyond his graduate degree, and he donates his seventh-hour planning period to one-on-one tutoring of struggling math students, meaning he has to do planning and course preparation at home. His wife is a teacher at Buena Vista Elementary School.
"You provide the best-quality education possible, you raise the bar, you make yourself available to the kids, you donate time, you cultivate rapport," he said, "but there's no way you can make kids who don't care perform well on the CSAPs. Raising the bar from four to six feet doesn't necessarily produce a six-foot high jumper."
The results of a recent CSAP practice test were demoralizing: McNeil got back blank tests, tests with pictures drawn on them, and writing portions that meandered into pornography.
William Moloney, the Colorado commissioner of education and a member of the NAEP governing board, reports similar behavior during NAEP testing. "Many of the students," he observed in a recent Denver Post article, "didn't try very hard and didn't seem to care how they did. There were reports of students doodling on their tests, or even putting their heads down on their desks."
"You have to realize," McNeil adds, "that there's no consequence to the kid for blowing off the CSAP. It doesn't affect graduation or grades. The sole consequence of poor effort is for the teacher. We're at the mercy of kids who couldn't care less, and the public is going to judge us on how well kids like these do on these tests. We're gonna get bashed.
"I fully support the educational value of CSAPs," he said, "but they're set up to make the teachers and schools look as bad as possible. Even if that's not the intent, that's what they're going to do. When an excellent educational tool like this is used punitively against teachers, it's hard not to believe there's an anti-public school agenda in this somewhere."
McNeil may have a point. Only public schools have to administer CSAPs. There's no way to know how private school and home-schooled kids are performing against the state standards. Private schools will remain free of state government scrutiny, and of bad publicity due to poor test scores.
'It gets nuts'
Bob Carlson teaches Title I reading and math at Pike Elementary School. He has been teaching in District 11 schools since 1975.
Twenty-four years of experience has convinced him that the proliferation of assessment testing is of dubious educational value for the classroom.
"As it stands now," he said in a recent interview, "we spend the better part of several months administering, scoring and recording the results of a half-dozen assessments, and there's even more testing on the way.
"The majority of it has minimal educational payoff for the classroom teacher and student. It generates plenty of scores, data charts and graphs for the legislators and educrats, but the time could be better spent in direct instruction."
Listening to him rattle off the series of tests, your eyes glaze over in a blurred flurry of acronyms.
Carlson administers the Brigance to kindergarteners at the beginning of the school year and the Marie Clay at the end of the year and the beginning of first grade. Much of September is also given over to the Qualitative Reading Inventory (QRI), which has to be administered to second- and third-graders individually and takes 30 to 60 minutes per student.
The QRIs are followed by the DALTs, the District Assessment Learning Test given to every D-11 third- through eighth-grader every fall and spring. By then, it's two months into the first semester.
Throughout this testing, Carlson's classes have to be covered by a substitute teacher or school administrator, and the entire assessment cycle is repeated in the second semester with another round of QRIs, DALTs, the district writing assessments and CSAPs.
Carlson, too, sees a problem with the growing emphasis put on assessment scores. "They talk about accountability," he says, "but it sure sounds punitive.
"A lot of the kids we're asked to bring up to standards have problems the teacher can't fix -- bad home environments, physical problems, poor attendance. Some are going to fail no matter how good the teacher is and no matter what the teacher does.
"On the one hand, teachers with 28 kids don't have the time and resources to give the problem kids the amount of individual attention they need. On the other hand, we have to give them extra time and attention to get them up to standard. And what about the rest of the kids in the class all this time? And then there's all the paperwork of recording, scoring and reporting, and we aren't given any extra time to do it.
"It gets nuts."
Carlson and McNeil aren't heartened by the rhetoric about crackdowns and anti-teacher erosion of job security coming out of the statehouse.
McNeil, accordingly, savors the irony of a letter Owens wrote in reply to an Arvada seventh-grader who asked why the governor was declaring a crisis instead of emphasizing how the CSAPs are improving Colorado education.
Owens' reply, notes McNeil, contained 10 punctuation and grammatical errors -- a writing performance that would earn him an Unsatisfactory on the fourth-grade writing CSAP.