Peer Review

Can Value Added Assessment Raise the Level of Student Accomplishment?

What do students actually learn across their several years of higher education? For some twenty years, higher education has been urgently exhorted to turn its attention to this seemingly fundamental public interest question.

A spate of reports in the mid-1980s, including one from this association (1985), decried as "scandalous" the academy's inability to document the quality of student learning gained in college. Subsequently, the so-called "assessment movement" gathered strong momentum as accrediting agencies, many states, the business community, national organizations, and a growing number of institutions called on higher education to increase its "accountability." National and local assessment projects were launched, some involving AAC&U; faculty were recruited and assessment professionals enlisted.

Given all the assessment activity of the past decade, and the significant investment of resources it reflects, what are we now able to report from systematic assessment evidence about students' cumulative gains in learning across their years of study?

The answer remains, say the authors of this issue of Peer Review: much too little.

Marc Chun's informative analysis in these pages observes that the academy now generates a great deal of "actuarial" and "archival" data on items that can readily be counted: scores on national tests; retention and degree attainment; job placement and the like. We also systematically cull student reports about different aspects of their educational experience, including their own self-assessment of their gains in learning.

Thanks to the new National Survey of Student Engagement (2001), we are now able to say something about students' participation in forms of learning (extensive writing, credit-bearing internships, collaborative projects, interaction with diverse peers) that, other studies indicate, have a positive impact on students' levels of attainment.

What all this assessment effort has yet to produce, however, is tangible evidence of how well the academy is doing on the forms of liberal learning that most educational leaders still maintain provide lasting value both to individual students and to our society.

As a community, and on most campuses, we remain unable to provide useful information on students' development of outcomes that are widely considered important, not just within the academy but for productive participation in the economy: analytical and communicative capacities; facility in addressing unscripted problems in one's own field; the ability to translate skills and knowledge to new domains and new kinds of problems; the capacity to take context and contingencies into account in resolving problems; the ability (and inclination) to integrate learning from different contexts; the ability to learn with and from others; the capacity to assess the ethical and value dimensions of an issue; or the ability to take others' views productively into account in solving real-world problems. These goals, and others such as quantitative reasoning, civic knowledge and engagement, and cross-cultural literacy, are widely endorsed. But too much of the higher education assessment effort is still going into institutional study of more basic aspects of students' learning, such as writing (especially entry- and minimum-competency writing tests), mathematics assessments (typically aimed at what should be a high school level of attainment), and reading.

Moreover, many of our assessments are structured to exempt the successful test-taker from any further work in the subject area. Mathematics and second language are conspicuous examples of this test-certified license to cease further effort. It's hard to track gains in learning when students with sufficiently high scores on a national college entrance test may never be asked to use quantitative analysis or their chosen second language again. (And some campuses that have assessed quantitative skills from first to final year have indeed found, unsurprisingly, that their students, on average, lost ground in college.)

Greater Expectations and the Value Added Assessment Initiative
Richard Hersh, who served as guest advisor for this issue of Peer Review, and I both sit on the National Panel shaping AAC&U's forthcoming "Greater Expectations" report on the learning students need for the twenty-first century. The Panel, which represents all parts of the higher education community as well as leaders from the business, civic, and school sectors, strongly endorses the view that those of us recommending specific forms of learning for the twenty-first century need to provide meaningful evidence to support our claims. And to provide this evidence, the Panel agrees, the academy will need to say, with much more specificity than we typically do, what kinds of capacities we want our students to achieve in college, why we think these capacities make a difference, and what progress we are making in helping students achieve them.

The Greater Expectations National Panel is very conscious of the need to avoid the dangers that lurk in simplified (and less expensive) approaches to assessment being adopted in many states. The Panel does not believe that students' ability to find the right answers on multiple-choice tests provides evidence that they are ready to undertake the kinds of complex analysis and learning they should do in college and will further face in their lives, societies, and work.

The trends in our schools notwithstanding, the National Panel contends that the nation's assessment efforts must move beyond the multiple-choice regime and focus with new intensity on students' own performances. It is time, we argue, for higher education to proactively lead a national effort to focus assessment on higher level learning outcomes.

In this context, the Value Added Assessment Initiative (VAAI) comes as a welcome development. VAAI's leaders are experimenting with ways to assess the academy's most advanced outcomes, not just our most minimal requirements. Moreover, they are focusing their assessment energies in the area where the right policy choice could influence our curricula for the better. Specifically, they are asking whether students can apply their analytical skills and knowledge to novel tasks, tasks reflective of the issues citizens and professionals actually encounter in the world beyond college.

As a member of the advisory board for VAAI, I am especially intrigued that the leaders of this new assessment effort see the nation's law boards as a possible exemplar for undergraduate education. As Stephen Klein explains in these pages, the legal community places great weight on the essay part of the law boards because it wants law schools and candidates to give more attention to legal reasoning. Many states are also adding "performance tasks" to the law boards, in which candidates are given a "mini-library" of materials and told to use these in executing legal tasks. Here too, the legal community's goal in adding performance tasks is to raise the significance of clinical experience during law school. Changes in the test are surely driving changes in the law school curriculum, and the result is a literal raising of the bar for students' levels of legal preparation.

The VAAI seeks to achieve a comparable positive influence on undergraduate education. The performance-based assessments its designers propose are intended to expect and drive higher levels of accomplishment. Assessments should enrich the curriculum, they argue, not reduce it to information acquisition and multiple-choice decisions. New assessments, in other words, should reflect and support Greater Expectations, not the bare minimum.

But Can the Value Added Assessment Initiative Achieve These Benefits?
I believe that it can, but only if we can create a way to embed the summative assessment Hersh et al. propose in the standard curriculum. A great deal of evidence already shows that graduating seniors will not apply their best efforts to an assessment "that doesn't count." Many won't take it at all. It would cost a fortune to alter that social reality.

On the other hand, many AAC&U members already are adding "capstone" courses or experiences to the curriculum, both in general education and the major. In our study last year of general education trends (Ratcliff et al. 2001), for example, we found that nearly half of those responding said upper level courses were part of their general education programs.

To produce the results proponents intend, value added assessments need to be embedded in such credit-bearing senior year capstone experiences. Indeed, the best place to pilot VAAI assessments would be on campuses where culminating or capstone courses already exist. The VAAI could be added as a requirement for completing a capstone course, and the results would count towards students' grades.

In order for the senior year performances to assess value added, however, there has to be a baseline. Therefore, VAAI exercises need to be embedded in first year programs as well as in capstone courses, in order to track gains. And there needs to be a compensating strategy for the large number of graduating students who began their work elsewhere and transferred in.

Making the VAAI part of the regular curriculum would not only recognize the faculty time summative assessments require. It would also provide a faculty feedback loop into decisions about how well students have been prepared for these summative performance-based assessments and how the curriculum can do more to prepare them.

Alternatively, the VAAI can itself become a required capstone experience, an activity students have to complete to graduate. Such a change would, in fact, revive an older form of assessment that has been abandoned on most campuses. I'm conscious, in fact, that one reason I am drawn to the VAAI is that I myself experienced something like it as a history major at Mount Holyoke College in the late sixties. At the time, every student had to complete a comprehensive examination in her major, with the results recognized on the transcript and a factor in whether a student graduated with honors.

In advance of the comprehensive examination, my department provided all its seniors with a recently published study of modernization, which we were asked to evaluate on the exam itself. We were also confronted on the exam with new documents-including works of art-that we were unlikely to have seen before. We were asked to evaluate the documents, make judgments about their provenance and implications, and explain the bases for our claims.

We were, in short, expected to apply our knowledge and skill to novel tasks, which is what the VAAI proposes to ask as well. And even as a twenty-year-old, I recognized that that was a very smart and very appropriate assessment of the learning I was taking with me from college.

Had the faculty prepared me for this kind of assessment? Yes, in fact, they had. Not with "drill" but by giving me challenging research and writing assignments repeatedly across the curriculum in every semester and every course. The "preparation" they made me do provided a first-rate education.

The right approach to value added assessment can provide the same benefit for students across the nation. But we must embed our summative assessments in the credit-bearing curriculum if we want them to result in greater student accomplishment.


References

Association of American Colleges. 1985. Integrity in the college curriculum. Washington, DC.

National Survey of Student Engagement. 2001. Improving the college experience: National benchmarks of effective educational practice. Bloomington, IN: Indiana University Center for Postsecondary Research and Planning.

Ratcliff, James L., D. Kent Johnson, Steven M. La Nasa, and Jerry G. Gaff. 2001. The status of general education in the year 2000: Summary of a national survey. Washington, DC: Association of American Colleges and Universities.
