Peer Review

Emerging Evidence on Using Rubrics

In 2009, the Association of American Colleges and Universities (AAC&U) publicly released a set of rubrics for evaluating achievement of a wide array of cross-cutting learning outcomes. The rubrics were developed through AAC&U’s VALUE (Valid Assessment of Learning in Undergraduate Education) project, part of its Liberal Education and America’s Promise (LEAP) initiative, as an alternative to the snapshot standardized tests many institutions use to assess student learning. Since September 2010, when we began tracking the data, well over three thousand organizations have downloaded one or more of the VALUE rubrics and put them to use. Campuses are now beginning to report findings based on their use of these rubrics to assess student learning across a set of fifteen outcomes, all essential for student success in the twenty-first century.

AAC&U developed the VALUE initiative to test the proposition that cross-cutting rubrics could serve as a viable alternative to the existing standardized tests (e.g., the ETS Proficiency Profile, the Collegiate Assessment of Academic Proficiency, and the Collegiate Learning Assessment) for accountability reporting purposes. More important, the initiative explored whether approaches to assessing student learning could yield information about the quality of student performance over time that faculty and others could use to improve pedagogy and practice in the classroom and beyond.

Core Expectations for Learning

What the VALUE initiative established was that faculty members actually share core expectations for learning, regardless of type of institution or region of the country. Further, VALUE showed that these shared expectations exist for a broad array of student learning outcomes; that the work we ask students to do through the curriculum and cocurriculum is the best representation of our students’ learning; and that faculty need to be directly involved in assessing the quality of the learning.

With the emergence of the Degree Qualifications Profile in January 2011 (Lumina Foundation 2011) and its focus on the integrity of the academic degree, the importance of specified levels of performance across five areas of learning (specialized knowledge; broad, integrative knowledge; intellectual skills; applied learning; and civic learning) has surfaced as a core requirement for student success in higher education in this country. The VALUE rubrics provide a means by which campuses and their faculty can create common standards for evaluating the quality of performance expected for attainment of specified degrees, e.g., associate or baccalaureate degrees. In essence, what has emerged is a framework of quality standards without standardization.

The VALUE rubrics were developed as broad, institutional-level, or “meta” rubrics, and we recognize that using them at the program, disciplinary, or even classroom level requires adapting them with more specific language and purposes. For example, using rubrics to grade assignments does not require precise psychometric methods. In developing and testing the rubrics, the faculty and professional staff development teams anchored students’ demonstrated learning at the entry and exit points for degree attainment. To provide signposts that learning was moving in the desired direction, milestones were developed between the benchmark (where rubric developers found their students’ current performance, on average) and the capstone (where rubric developers hoped their students would perform upon attaining a baccalaureate degree).
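To make that structure concrete, here is a minimal sketch (in Python, not AAC&U code) of how a single rubric dimension might be modeled, assuming the four-level scale used in the published VALUE rubrics: benchmark (1), two milestones (2 and 3), and capstone (4). The dimension name and level descriptors below are illustrative placeholders, not language from an actual VALUE rubric.

```python
from dataclasses import dataclass

@dataclass
class RubricDimension:
    """One dimension of a VALUE-style rubric with four performance levels."""
    name: str
    levels: dict[int, str]  # descriptor keyed by score: 1 = benchmark, 4 = capstone

    def describe(self, score: int) -> str:
        """Return the descriptor a rater would match student work against."""
        return self.levels[score]

# Illustrative placeholder content, not actual VALUE rubric language.
context_and_purpose = RubricDimension(
    name="Context of and Purpose for Writing",
    levels={
        1: "Demonstrates minimal attention to context, audience, and purpose.",
        2: "Demonstrates awareness of context, audience, and purpose.",
        3: "Demonstrates adequate consideration of context, audience, and purpose.",
        4: "Demonstrates a thorough understanding of context, audience, and purpose.",
    },
)

print(context_and_purpose.describe(4))  # the capstone-level descriptor
```

Adapting such a rubric for a program or classroom, as campuses report doing, amounts to replacing the descriptors with more specific, discipline-appropriate language while keeping the level structure intact.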

Faculty on over one hundred campuses tested the rubrics with student work from a variety of classes and disciplines; based on their feedback, the rubrics were refined and found to be useful across the curriculum and cocurriculum for assessing student progress. The development process showed that the commonly shared performance levels for learning in the rubrics reflected what faculty were looking for in student work, and that the rubrics were sufficiently generic and flexible to be used successfully across the disciplines, thereby articulating usable standards for attainment of the outcomes.

The validity of the VALUE rubric approach is attested by how rapidly campuses have adopted and adapted the rubrics for use in courses, programs, and whole institutions: the rubrics resonate with faculty. All types of colleges and universities, both two-year and four-year, report that using the VALUE rubrics has catalyzed rich conversations about student learning and assessment. Some campuses have modified the language of the rubrics to reflect their own culture and mission, some have added dimensions, and others use the rubrics “off the shelf.” Many campuses use the rubrics to assess collections of student work, often choosing e-portfolios through which students present work to be assessed. Most of the leading e-portfolio vendors have incorporated the VALUE rubrics into their software as an option campuses can use to evaluate and report on their students’ learning.

Testing Rubric Reliability

Issues of reliability are frequently raised when choosing to use rubrics to assess learning. In the initial VALUE development, a national panel of thirty individuals who had not seen the rubrics or been involved with their development—administrators, faculty, student affairs professionals, local community members, and teachers—was assembled for three days to test use of the rubrics in assessing work in a set of student e-portfolios. The inter-rater reliability results exceeded the 0.8 standard routinely used for such measures. In a more recent pilot project that relied upon a virtual, truncated inter-rater reliability approach, a group of forty faculty members from a variety of campuses across the country assessed a sample of student work using three VALUE rubrics. From this pilot, a composite reliability score was calculated for all faculty scorers across four broad disciplinary areas: humanities, social sciences, natural sciences, and professional and applied sciences (Finley 2012).
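The article does not report which statistic underlies these figures; as one illustration of the kind of calculation involved, the sketch below computes simple percent agreement and Cohen’s kappa (a chance-corrected agreement measure) for two raters scoring the same artifacts on a four-point rubric scale. The sample scores and function names are hypothetical.

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of artifacts on which two raters gave the same score."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: two-rater agreement corrected for chance."""
    n = len(a)
    observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    # Chance agreement: probability both raters assign the same level
    # if each rated independently at their own marginal rates.
    expected = sum(
        (counts_a[level] / n) * (counts_b[level] / n)
        for level in set(a) | set(b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical scores from two raters on ten student artifacts (1-4 scale).
rater_1 = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
rater_2 = [4, 3, 2, 2, 4, 1, 2, 3, 3, 2]

print(f"percent agreement: {percent_agreement(rater_1, rater_2):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(rater_1, rater_2):.2f}")
```

With a thirty-person panel, pairwise statistics like these are typically averaged across rater pairs, or a multi-rater measure such as Krippendorff’s alpha is used instead.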

Examples from the Field

This issue of Peer Review focuses on campus-level use of the VALUE rubrics and the findings that have resulted from that use. The examples collected here, showing how campuses across the country are using the VALUE rubrics to assess student learning and to engage faculty, students, and others in evaluating it, represent a broad range of strategies for powerful learning assessment. Three of the examples involve consortia of campuses that are using the rubrics to connect their individual campus efforts, and these articles describe the benefits of collaborative endeavors that stand in contrast to the ways institutions currently are ranked or compared. The other articles detail the benefits and challenges of using rubrics on different types of campuses and the ways in which the VALUE rubrics have helped campuses enhance student learning.

The Connect to Learning project involves twenty-two public and private colleges and universities, spanning community college, research, comprehensive, and liberal arts institutions, focused on connecting traditional indicators of student success, such as retention and graduation, with more nuanced assessment of learning through student work and reflective practice. This three-year FIPSE-funded project, conducted in partnership with the Association for Authentic, Experiential and Evidence-Based Learning, uses e-portfolios of student work together with the VALUE rubrics. This national network of campuses is a collective, recursive knowledge-generation project linking teaching strategies with evaluation of student achievement levels.

Through a collection of examples in this issue of Peer Review, we see that assessment can be and is a high-impact practice capable of improving learning and pedagogy on our campuses. Through the use of VALUE rubrics, campuses are demonstrating that students benefit from knowing what is expected of them; that faculty leadership emerges through the development of rubrics and e-portfolios; that there is an important relationship between intentionality, assignments, and learning; that there is value in basing assessment on student work; and that there is an emerging culture shift among faculty from “my work” to “our work.”

The validity and reliability of this ongoing, developmental rubric/portfolio approach to assessment have demonstrated the usefulness, the need for, and the reality of meaningful, faculty-led methods of assessing student learning: approaches that represent the integrity of a student’s particular degree not solely in terms of completion but, more important, in terms of the quality of the learning.

NOTE

Copies of the VALUE Rubrics can be found at http://www.aacu.org/value.

REFERENCES

Adelman, C., P. Ewell, P. Gaston, and C. G. Schneider. 2011. The Degree Qualifications Profile. Indianapolis, IN: Lumina Foundation.

Finley, A. 2012. Making Progress? What We Know about the Achievement of Liberal Education Outcomes. Washington, DC: Association of American Colleges and Universities.

Lumina Foundation. 2011. Degree Qualifications Profile. Indianapolis, IN: Lumina Foundation. http://www.luminafoundation.org/publications/The_Degree_Qualifications_Profile.pdf.
