Dr. Jonathan Willner is Professor and Chair of the Department of Economics and Finance at Oklahoma City University.
After considerable controversy and delay, the State Department of Education released its A-F report cards for all Oklahoma public schools in late October. According to the Department, “Oklahoma’s A-F School Grading System is based on the concept that parents and community members should be able to quickly and easily determine how schools are performing.”
Now that the grades are out, we need to ask: what is it that is being graded? Teachers? Administrators? Not really. Careful analysis of the report cards reveals that a good part of what is being graded is the parents. It’s convenient to blame the schools, but a significant part of each school’s grade is driven by the socio-economic condition of the parents of the children in the school.
For years, economic research and research in other fields have indicated, quite clearly, that the educational achievement of children is correlated with parental conditions (click here for a partial bibliography). Basically, if a child comes from a well-educated, high-income, stable, two-parent household, that child does well in school. A child with a single, poor parent with little education is likely to fare poorly in school. Economic research also indicates, quite clearly, that education and income increase together.
With few exceptions, schools do not get to choose the parents of their students. Rather, the children simply appear and the task is to try to move them forward intellectually. Thus the task before any given teacher is likely to vary based on the socio-economic status of the parents of the children.
When the school report cards were released, I decided to see if I could predict the grades based on some simple demographics associated with parents. That is, can I deduce a school’s grade without reference to anything that might actually occur in the school? The answer is pretty much, “yes.” In fact, I can accurately predict more than 50 percent of all schools’ grades without reference to anything under the control of a school.
How?
I used a common statistical technique called multivariate regression analysis. Multivariate regression looks for patterns in data and quantifies those patterns. For example, we assume that shoe size is related to age in children. Though shoe size varies at any given age, if we want to see how much shoe size depends upon age, we can use regression to estimate what portion of shoe size is related to the age of the child and what is related to other factors. Once we have that numerical relationship, we can use it to forecast any child’s shoe size. The technique is commonly used in many business and research activities. Though it is never possible to predict a relationship perfectly, as we include more possible explanations we hope to get better predictions. By “better” we mean that our forecasts land closer to the actual values with each effort.
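To make the idea concrete, here is a minimal sketch of a simple regression in Python. The shoe-size and age numbers are invented purely for illustration; nothing here comes from the school data.

```python
# Toy illustration of regression: relating children's shoe size to age.
# All numbers below are invented for demonstration only.
import numpy as np

ages  = np.array([4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=float)
sizes = np.array([27, 29, 30, 31, 33, 34, 35, 36, 38], dtype=float)  # hypothetical sizes

# Fit shoe_size = intercept + slope * age by ordinary least squares.
X = np.column_stack([np.ones_like(ages), ages])
(intercept, slope), *_ = np.linalg.lstsq(X, sizes, rcond=None)

# Use the fitted relationship to forecast any child's shoe size.
print(f"Forecast for a 6.5-year-old: {intercept + slope * 6.5:.1f}")

# R-squared: the share of shoe-size variation explained by age alone.
residuals = sizes - X @ np.array([intercept, slope])
r_squared = 1 - residuals.var() / sizes.var()
print(f"R^2 = {r_squared:.2f}")
```

The same logic carries over to schools: swap age for parental characteristics and shoe size for the school’s grade.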
In our analysis, a number of parental factors were used to predict a school’s grade: the number of single parents in the district; students on free and reduced lunch at the school; school mobility (the proportion of new students each year); educational attainment in the district; and the median household income in the district. None of these have anything to do with the actions of teachers and administrators. Using this information, and only this information, it is possible to accurately predict 962 of the 1,676 school grades – or 57 percent – for which I had complete data. I correctly forecast 574 of 824 (70 percent) of schools with a grade of “B” and 377 of 571 (66 percent) of schools with a grade of “C” (click here for school-level results of my analysis).
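For readers who want to see the mechanics, the sketch below shows how an analysis of this kind could be set up. It is not my actual code: the file name, the column names, and the choice of a multinomial logistic model are assumptions for illustration only.

```python
# Sketch of predicting A-F grades from parental/demographic factors only.
# "school_grades.csv" and its column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("school_grades.csv")  # one row per school, complete data only

predictors = [
    "pct_single_parent",        # single parents in the district
    "pct_free_reduced_lunch",   # students on free/reduced lunch at the school
    "mobility_rate",            # proportion of new students each year
    "pct_college_educated",     # educational attainment in the district
    "median_household_income",  # median household income in the district
]
X = df[predictors]
y = df["letter_grade"]  # the state-assigned A-F grade

# Fit a model of grade on factors outside the school's control.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# How many grades do the parental factors alone "predict"?
hits = (model.predict(X) == y).sum()
print(f"Correctly predicted {hits} of {len(y)} school grades ({hits / len(y):.0%})")
```

A model like this contains nothing a teacher or administrator can influence, which is exactly the point: any predictive power it has comes from the parents, not the school.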
It turns out that it is quite possible to determine much of a school’s grade simply by knowing some basic characteristics of the parents of the children who attend it. Lots of single parents? Lower the grade. Lots of poor parents? Lower the grade. Lots of parents moving in and out of the district? Lower the grade. Lots of parents who didn’t finish high school? Lower the grade. The stated concept of Oklahoma’s A-F School Grading System is that parents and community members should be able to quickly and easily determine how schools are performing. Yet the reality is that a good portion of the grade has nothing to do with the school and everything to do with the parents.
Now, if we can just get our teachers to increase their students’ parents’ income, convince the parents to get and stay married, not move around, and make sure the parents are educated, then the schools will improve. But that’s not the teachers’ job. Their job is to do the best they can with what is before them and the resources they are provided.
So the State Department of Education in Oklahoma has represented schools and, by insinuation, teachers as failures in many places. The reality is that much of the perceived failure has nothing at all to do with the schools and teachers. It has to do with the socio-economic make-up of the students in the schools.
Rating or grading a school based on results that it can control makes a great deal of sense. Unfortunately, what the State Department of Education in Oklahoma has done is grade schools largely on what sort of parents are in the school district. This is laying the credit or blame for school performance in the wrong place.
If only rational people would read this and pay attention.
This is a great article, and I appreciate the scholarship behind this. When discussing things like education funding and teacher pay, this basic knowledge should be a prerequisite.
That said, is it possible to develop more complex algorithms to take such factors into account and adjust accordingly, or is the implication that there should never be any sort of student-based job performance evaluation in public schools? My initial thought is to assign every school a predicted score (like a test curve) based on the factors Dr. Willner describes above, then assess the student test scores above or below that threshold.
I’m just shooting from the hip there; such an algorithm would be more complex. In any case, good article, and I look forward to Dr. Willner’s future contributions.
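A rough sketch of the “curve” idea described in the comment above could look like the following: predict each school’s expected score from demographics, then judge the school on how far it lands above or below that expectation. The file name, the columns, and the use of a plain linear regression are assumptions for illustration, not a worked-out evaluation system.

```python
# Sketch of grading schools on performance relative to a demographic baseline.
# "school_scores.csv" and its columns ("school", "score", etc.) are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("school_scores.csv")
demographics = ["pct_single_parent", "pct_free_reduced_lunch", "mobility_rate",
                "pct_college_educated", "median_household_income"]

# Step 1: expected score given parental factors alone.
expected = LinearRegression().fit(df[demographics], df["score"]).predict(df[demographics])

# Step 2: rate each school on its residual -- how it performs relative to
# schools serving similar populations -- rather than on the raw score.
df["above_expected"] = df["score"] - expected
print(df[["school", "score", "above_expected"]]
        .sort_values("above_expected", ascending=False)
        .head())
```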
Andrew,
Yes, it can be done. In a few cases it has been done thoroughly. Though long, you might find “The Effect of Attending a Small Class in the Early Grades on College-Test Taking and Middle School Test Results: Evidence from Project STAR,” available at http://www.nber.org/papers/w7656, an interesting read.
Funny you should ask. Due to pressure from the Gates Foundation, the Obama administration, and conservative and liberal school “reformers,” we are legally required to use such an algorithm in evaluations. But the use of those statistical models, known as value-added models or VAMs, is NOT valid. No research has been done to justify their use in high school, and the evidence for middle school is nearly as tenuous. Even in elementary school, these methods are not likely to be sustained legally. They are systematically unfair for schools with high percentages of ELL, IEP, and low-income students.
The Obama administration knows that these policies could have cost the president the election if teachers in Wisconsin and Ohio had voted their anger rather than following the unions’ advice to vote Democratic and then persuade. Even the Gates Foundation seems disturbed that its research has been disappointing. So, I’m hoping that we’ll soon admit that those algorithms are valid for low-stakes purposes, but not for high-stakes ones. If not, teachers must litigate against them until the cows come home. In NYC, for instance, large numbers of teachers have already been fired under a model that is roughly as valid as a coin flip, and I don’t see those terminations being upheld. Here’s a nice summary of the evidence:
http://www.huffingtonpost.com/jack-jennings/mind-the-gap_2_b_2324262.html
“Economic research also indicates, quite clearly, that education and income increase together.” Unless you go into academia, then it’s curvilinear. But seriously, nice article.
Thank you, thank you, thank you!
You’ve added to a huge body of social science – one that has been largely ignored by school “reformers.” I’d also note that we’ve been debating the report card for months, and this morning was the first time I’ve read in a newspaper about the single most important finding. The entire state got a D, as I recall, for improving the low performers. There is an equally huge body of research documenting the “Matthew Effect” and showing why it is virtually impossible to raise non-math performance in secondary schools for kids who don’t learn to read for comprehension by 3rd grade. So why are we criticizing the two big urban districts, which are 80 to 90% low income and serve disproportionate numbers of low-performing students?
Your projections would have been even more accurate if it were not for choice. OKC, for instance, has one “no-majority” high school and one similarly diverse middle school. Both earned Cs after being projected to earn Cs. All of OKC’s neighborhood high schools scored lower than projected, however, while all of the (unionized) magnet and enterprise schools and almost all of the selective charters scored above their projections. The main reason, of course, is peer effects.
OKC middle schools, not surprisingly, show even greater peer effects. All of the neighborhood middle schools were projected to earn Cs, but earned Ds. It’s possible that you’ve identified a huge weakness of OKCPS educators, but I suspect your work points to the system’s refusal to address negative peer effects for that age group. By high school, the most troubled students drop out, lessening the damage done by disruptive school cultures.
But again, great work! Thanks!
Mr. Thompson. Thank you for the thoughtful response. The cohort effect can certainly be very, very real. Some of the cohort effect is captured by the income, etc. of the students’ parents. I’ll have to work on the Charter school matter. I’m fairly certain the data will report what you expect.
The same is likely to be true with the grade-level issue. I did a quick check on this latter point. While it does improve the forecasts, it adds only 23 accurate predictions. Of course, it’s only a first pass. Consistent with your surmise, predicted grades were higher in the higher-level schools.