At times, what is assessed comes to be seen as more important than what is not. Such a distortion of the values associated with the domain may in turn lead learning to be conceived as instrumentally rather than intrinsically important, a tension that cannot be easily resolved (Alaska Education, 1996). There is also a danger in basing evaluations of student achievement on the professional judgment of the assessor, which can be questionable, especially with regard to bias (Stiggins, 1991). Alternative assessments have been criticized for their subjectivity (largely a reliability issue), and it is certainly true that it is far more difficult to develop standards for evaluation and to apply them consistently across a group of portfolios, oral performances, or research projects. There are also questions about who selects the domains of knowledge to be tested, on what basis, and why the omitted domains were left out. Yet the biases that underlie the development and evaluation of alternative assessments lie right on the surface, where they can be seen, critiqued, and, we hope, addressed and corrected. The new assessments, after all, are by design ill-structured, messy, open-ended, and complex (Liskin-Gasparro, 1997).