Education and testing
Sep. 18th, 2012, 07:29 am

I teach computer programming, and I tell my students to always write test cases for their program before writing the program itself. In other words, before you've started solving the problem, you need to specify clear criteria for what it would mean to have solved it successfully: if I run the program on this input, I should get that result. This process helps my students clarify in their minds what the problem really is, makes it easier to solve the problem, and makes it less likely that students under last-minute time pressure will skip testing their programs.
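To make that concrete, here's a minimal sketch in Python (Python is just the language I'm using for illustration, and count_vowels is an invented toy assignment, not one of mine). The test cases come first and act as the specification; the program gets written afterwards to satisfy them:

    # Test cases written first: they pin down what a correct solution must do,
    # before any solution exists. (count_vowels is a made-up example.)
    def test_count_vowels():
        assert count_vowels("") == 0          # empty input
        assert count_vowels("xyz") == 0       # no vowels at all
        assert count_vowels("banana") == 3    # an ordinary word
        assert count_vowels("AEIOU") == 5     # capitals should count too

    # Only after the expected results are written down do we write the program itself.
    def count_vowels(text):
        return sum(1 for ch in text.lower() if ch in "aeiou")

    if __name__ == "__main__":
        test_count_vowels()
        print("all tests pass")

Run those asserts before the function exists and they fail immediately, which is the point: they say what "solved" means before any solving has happened.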
More generally, when you're solving any kind of problem, it makes sense to specify clearly what results you hope to achieve and how you'll know when you've achieved them, before you start trying to come up with solutions. (If you're a politician, of course, you're less concerned with whether you successfully solved the problem than with the perception of whether you successfully solved the problem... so you're better off not specifying the desired results in advance, but retroactively defining them as whatever you accomplished, thus guaranteeing success. But we're not politicians here.)
Many of the problems I solve in daily life have to do with teaching. So it makes sense to specify what I expect students to learn, and how I'll tell whether they learned it, before starting to teach. It could even be argued that if I can't think of a way to check whether students have achieved a desired learning outcome, then I don't really understand that outcome well enough to teach it myself.
Follow that logic to its large-scale conclusion, and you get the high-stakes, standardized tests so prevalent in modern education. Several things go wrong with this sort of testing:
1. The time spent on testing itself can start to detract from teaching. More testing is not necessarily better testing.
2. Even if an outcome is "measurable", that doesn't mean it's easily and inexpensively measurable. When learning outcomes are incorporated into a large-scale standardized test, there's an automatic bias in favor of things that are easily measured for lots and lots of students -- multiple-choice questions, essay questions with a checklist of points to address, etc. This may not accurately reflect many of the things we actually want students to learn.
3. When student testing is used to measure the effectiveness of individual teachers and schools (which, in theory, should be a perfectly good use of the data), it means most of the people involved in the testing process have an incentive to cheat: students, teachers, and local administrators. The only people with an incentive to see the tests administered fairly and accurately are those farthest from the testing process: the graders and the education reformers in government and central administration. Even if "cheating" doesn't take the form of getting "correct answers" illicitly and passing them around, it can mean "teaching to the test."
4. "Teaching to the test" is not a bad thing if the test actually reflects what you want students to learn. (Nobody would complain about a programmer "programming to the test," and in fact a prominent school of software development says you should almost never write code except to pass a specific currently-failing test; a sketch of that style follows this list.) But in combination with issue 2 above, the tests' existence skews the teaching process in favor of things that are easily tested: memorizable objective facts and recipe-following are preferred over creativity and analysis.
Discuss. Any ideas for how to fix this? Have I framed the problem wrong in the first place?