Peer Review

Staying On Track With Rubric Assessment

Like many AAC&U essential learning outcomes, information literacy—the ability to locate, evaluate, and use information—is a concept integrated into curricula and cocurricula campuswide. Throughout college, students are asked to define their information needs in order to complete their academic work and make decisions affecting their personal and social development. They are expected to obtain information to meet those needs, evaluate the applicability of the information they locate, and revise their searches accordingly. They also are required to apply new information ethically and responsibly. These information literacy outcomes are integral to a successful college experience.

Outcomes with such broad applicability should be assessed often and well. In reality, information literacy is frequently omitted from higher education assessment efforts for several reasons. First, faculty and cocurricular professionals traditionally expect students to acquire information literacy skills prior to college; consequently, they typically do not teach and assess them as a focus of their courses and activities. Second, librarians—who do focus on teaching information literacy—often do not have access to students in ways or settings most conducive to meaningful assessment activities. Third, because information literacy is often not taught in courses or cocurricular activities, existing information literacy assessments are frequently limited to survey and test formats that can be administered by librarians remotely. However, information literacy is an especially complex and context-dependent concept that is not easily assessed using these common fixed-choice methods.

The Information Literacy VALUE Rubric

The development of an information literacy VALUE (Valid Assessment of Learning in Undergraduate Education) rubric was a significant advance because it addressed these obstacles. First, by naming information literacy an “essential” learning outcome, the VALUE initiative elevated the ability to locate, evaluate, and use information to a position valued by faculty, cocurricular professionals, and, of course, librarians. Second, the information literacy VALUE rubric established an expectation that this outcome will be assessed in the context of existing student learning activities and assignments, which are both complex in nature and steeped in authentic contexts.

RAILS

Recently, the Institute of Museum and Library Services awarded $400,000+ in funding to support a three-year project called RAILS (Rubric Assessment of Information Literacy Skills). The RAILS project is designed to support the rubric assessment of information literacy outcomes at institutions nationwide. During the 2010–2011 academic year, five institutions participated in RAILS: a branch campus of a public university (2,800 FTE); a private, faith-based liberal-arts college (4,500 FTE); a private, liberal-arts college (6,400 FTE); a public, land-grant, high-research activity university (29,000 FTE); and a public college that focuses on workforce development and offers high-school completion, certificates, and associate degrees (30,000 FTE). To begin, librarians from each institution took part in intensive rubric training. As a part of their training, librarians learned to customize the information literacy VALUE rubric to fit the unique needs of their institutions and formed plans to test their rubrics. Using the rubric, they gathered 100+ artifacts of student learning to assess and selected ten participants (faculty, cocurricular professionals, and other librarians) to serve as raters. Intensive rubric revision, norming, and scoring sessions for all raters were then scheduled. During the scoring sessions, each of the ten participants rated all one hundred student artifacts and entered rubric scores for each artifact into an assessment management system that facilitates rubric usage. These scores are currently under analysis in order to learn about student information literacy skill levels and explore factors that contribute to inter-rater reliability. Results will be disseminated as they become available via a variety of venues, including the project website at www.railsontrack.info. During the 2011–12 academic year, this cycle will be repeated at five additional institutions. Combined assessment results from all ten RAILS institutions will be released in summer 2012.

Student Performance on RAILS Rubrics
In order to meet the needs of individual institutions, each rubric used during the first year of RAILS was slightly different in scope (see figures 1–3 below for examples). Each RAILS rubric was based on the information literacy VALUE rubric; however, individual institutions decided to “break out” different criteria and describe them in more specific terms. Figure 1 demonstrates how one VALUE criterion (“Use Information Effectively to Accomplish a Specific Purpose”) can be divided into three parts using the key verbs from the VALUE rubric: communicate, organize, and synthesize.

Figure 1. Student Performance on RAILS Rubric “Use information effectively to accomplish a specific purpose”

Organizes Content (Are the sources in the right places?)
3 (35% of students): Consistently organizes cited information in a manner that supports the purposes and format of the product/performance.
2 (45%): Inconsistently organizes cited information in a manner that supports the purposes and format of the product/performance.
1 (20%): Does not organize cited information in a manner that supports the purposes and format of the product/performance.

Synthesizes New and Prior Information (Do the sources help to support new claims or make points?)
3 (27%): Consistently connects new and prior information to create a product/performance.
2 (48%): Inconsistently connects new and prior information to create a product/performance.
1 (25%): Does not connect new and prior knowledge to create a product/performance.

Communicates Information (Do they have sources?)
3 (37%): Consistently communicates information from sources via products/performances.
2 (50%): Inconsistently communicates information from sources via products/performances.
1 (13%): Does not communicate information from sources via products/performances.

The first year of applying RAILS rubrics also revealed varying levels of student skills. For some institutions, student performance on the rubrics was “as expected.” Other institutions found more surprising results; in some cases, the surprise was how many students fell into unsatisfactory performance levels in specific skill areas. For example, one institution found that students in the RAILS sample did not achieve expected levels of ethical and legal information use (see figure 2). At this institution, almost 80 percent of students either did not follow style guide conventions when citing information sources (13 percent) or committed frequent errors (65 percent). These results provided the concrete evidence needed to significantly revise instruction to address skill weaknesses. At other institutions, interesting patterns of satisfactory performance emerged. For instance, one institution found that students were far more likely to evaluate an information source at an “accomplished” level based on currency (68 percent) or authorship (46 percent) than the source’s reliability (23 percent), accuracy (21 percent), or point of view (27 percent) (see figure 3). After receiving these results, faculty and librarians developed a shared understanding of information evaluation criteria, adjusted assignment requirements, and improved in-class instruction.

Figure 2. Student Performance on RAILS Rubric “Use information ethically and legally”

Performance levels: Advanced (applies outcome successfully; many strengths are present), Developing (shows skill in this outcome; improvement needed), Beginning (evidence of the outcome may be minimally or not at all present; need for improvement outweighs apparent strengths).

Style conventions
Advanced (22% of students): Follows style guide conventions with few errors.
Developing (65%): Follows style guide conventions with frequent errors.
Beginning (13%): Does not follow style guide conventions.

Correspondence of bibliography and in-text citations
Advanced (39%): Bibliography and in-text citations correspond.
Developing (53%): Bibliography and in-text citations do not correspond.
Beginning (8%): Does not include a functional bibliography and/or in-text citations.

Common knowledge and attribution of ideas
Advanced (33%): Consistently distinguishes between common knowledge and ideas requiring attribution.
Developing (59%): Inconsistently distinguishes between common knowledge and ideas requiring attribution.
Beginning (8%): Does not distinguish between common knowledge and ideas requiring attribution.

Paraphrasing, summarizing, quoting
Advanced (43%): Summarizes, paraphrases, or quotes in order to integrate the work of others into their own.
Developing (53%): Summarizes, paraphrases, or quotes, but does not always select the appropriate method for integrating the work of others into their own.
Beginning (4%): Does not summarize, paraphrase, or quote in order to integrate the work of others into their own.

Figure 3. Student Performance on RAILS Rubric “Evaluates information and its sources critically and accesses the needed information”

Evaluates authority
Accomplished (46% of students): Student shows sufficient evidence of the author’s credentials and qualifications.
Developing (35%): Student briefly identifies the author’s credentials and qualifications.
Inadequate (19%): Student does not identify the author’s credentials or qualifications.

Evaluates currency
Accomplished (68%): Student comments on the source’s publication year and retrieves a source published within the last five years.
Developing (26%): Student either comments on the source’s publication year or retrieves a source published within the last five years, but does not do both.
Inadequate (6%): Student does not comment on the source’s publication year and does not retrieve a source published within the last five years.

Evaluates reliability
Accomplished (23%): Student shows adequate evidence of whether or not the source is trustworthy.
Developing (53%): Student shows superficial evidence of whether or not the source is trustworthy.
Inadequate (24%): Student does not show evidence of whether or not the source is trustworthy.

Evaluates accuracy
Accomplished (21%): Student provides a thorough explanation of the accuracy of the source.
Developing (51%): Student provides a superficial explanation of the accuracy of the source.
Inadequate (28%): Student does not explain the accuracy of the source.

Evaluates perspective
Accomplished (27%): Student identifies the author’s point of view in detail.
Developing (53%): Student briefly identifies the author’s point of view.
Inadequate (20%): Student does not identify the author’s point of view.

Evaluates reflection on the source
Accomplished (29%): Student explains in detail how the source contributes to his/her knowledge.
Developing (51%): Student identifies how the source contributes to his/her knowledge.
Inadequate (20%): Student does not identify how the source contributes to his/her knowledge.

Accesses the needed information
Accomplished (27%): Student accesses information using effective, well-designed search strategies.
Developing (53%): Student accesses information using simple strategies, including both search term(s) and tool(s).
Inadequate (20%): Student does not specify a strategy with both search term(s) and tool(s).

Potential Barriers to Rubric Assessment
After rating student work using RAILS rubrics, participants were asked to identify major barriers to their personal use of rubrics as well as their perceptions of potential barriers for their colleagues. More than half listed a lack of time and coordinated structures for assessment (e.g., an assessment “point person” or committee) as major barriers to their use of rubrics. More than a quarter of RAILS participants cited insufficient institutional financial resources, lack of staff, and uncertainty about their role in assessment. There were also concerns about possible inaccuracy of assessment tools and misuse of assessment data. In considering their colleagues, more than a third of participants stated that their colleagues would be discouraged by these barriers plus two more: their own lack of familiarity with rubric assessment in general and a lack of rewards for participating in assessment activities.

Initial RAILS Findings
Barriers notwithstanding, institutions have already reported a number of important findings resulting from RAILS participation, including changes to teaching, assessment, and collaborative activities.

Teaching
All institutions report substantive improvements in teaching information literacy concepts. Furthermore, librarians and faculty have revised student assignments, altered classroom pedagogy, and enacted changes in curriculum sequencing. Faculty reported a higher quality of student work and plan to use rubrics as one of their teaching tools. One RAILS participant reflected, “I learned that grading the assignments in the RAILS project was an empowering act for me. It will strengthen my teaching the next time because I now understand what the students really are not getting. This rubric creation and rating experience has facilitated valuable reflection on my teaching practice and I hope to weave what I now understand into my teaching the next time around.” According to another participant, RAILS “changed the way I teach… [the instructional] session has more structure, and the students seemed much more engaged.” A third participant shared a student remark about the increased level of hands-on engagement that resulted from pedagogical changes: “The day that we went as a class to the library…was probably one of the most beneficial days of my semester.”

Assessment
In addition to teaching improvements, all institutions also documented increased assessment activity. Several institutions created or revised rubrics focused on information literacy and other essential learning outcomes. Faculty and librarians collected more artifacts of student learning for assessment purposes, and one institution regularized consent forms for collecting student work samples. In addition, more than a few institutions are contemplating the purchase of assessment management systems to improve the organization and reporting of their assessment data. Significantly, the librarians who participated in RAILS feel less disconnected from institutional assessment efforts and more committed to participating in program reviews and accreditation processes campuswide.

Collaboration
Beyond teaching and assessment improvements, rubric assessment has enhanced cross-campus collaboration. Faculty, teaching and learning center professionals, librarians, and writing center professionals have reported increased partnerships and future plans for co-created projects. A few have commenced rubric-related research projects; others have joined together to communicate rubric assessment results.

Preliminary Results

Although the statistical analysis of first-year RAILS results is still in process, a few preliminary findings have emerged. First, faculty, cocurricular professionals, and librarians need to increase their awareness and knowledge of rubric assessment. Many of the participants involved in the RAILS project had no prior experience with rubrics, but all expressed interest in learning more about them. Second, if multiple participants plan to use the same rubric to score artifacts of student learning, norming is critical for establishing shared understanding of the rubric and achieving greater inter-rater reliability. Third, analytical rubrics appear to be more practical for assessing student artifacts than holistic rubrics. The more specific the language and the narrower the scope of the rubric, the more confident participants seemed to be about the accuracy of their ratings. Fourth, the more analytical the rubric becomes, the more directed the rubric scorer must be in looking for evidence of learning. Thus, participants appeared to be more confident about their ratings when student artifacts under analysis were concrete, focused, and shorter in length. This may mean targeting a different type of student artifact or subdividing larger artifacts to facilitate scoring. Fifth, large-scale analysis of rubric assessment results is faster and more convenient when an appropriate assessment management system is a part of the process. The use of the assessment management system greatly facilitated the data recording, analysis, and reporting of RAILS rubric data.
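The inter-rater reliability that norming is meant to improve can be quantified in several standard ways. As an illustrative sketch only (not the RAILS project's actual analysis, and using hypothetical rater names and scores), the following Python computes pairwise percent agreement and Cohen's kappa, which corrects raw agreement for the agreement expected by chance:

```python
from itertools import combinations

def percent_agreement(a, b):
    """Fraction of artifacts on which two raters gave the same rubric score."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

def cohens_kappa(a, b, levels=(1, 2, 3)):
    """Cohen's kappa: observed agreement corrected for chance agreement.

    Assumes raters use more than one level (otherwise chance agreement is 1
    and kappa is undefined).
    """
    n = len(a)
    observed = percent_agreement(a, b)
    # Chance agreement: sum over levels of the product of each rater's
    # marginal frequency for that level.
    expected = sum((a.count(k) / n) * (b.count(k) / n) for k in levels)
    return (observed - expected) / (1 - expected)

# Hypothetical scores from three raters on six artifacts (levels 1-3).
ratings = {
    "rater_a": [3, 2, 2, 1, 3, 2],
    "rater_b": [3, 2, 1, 1, 3, 2],
    "rater_c": [2, 2, 2, 1, 3, 3],
}

for (name1, r1), (name2, r2) in combinations(ratings.items(), 2):
    print(f"{name1} vs {name2}: "
          f"agreement={percent_agreement(r1, r2):.2f}, "
          f"kappa={cohens_kappa(r1, r2):.2f}")
```

A kappa near 1 indicates agreement well beyond chance, while values near 0 suggest the raters may need another norming session before their scores are pooled.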

Once statistical analysis of RAILS data has been completed, more conclusions can be drawn. During the 2011–12 year, five additional institutions will participate in the RAILS process and benefit from the lessons learned during the first year of the grant.

Conclusion

The information literacy VALUE rubric in its original, holistic form is a significant step forward in the assessment of students’ ability to locate, evaluate, and use information, both in and out of the classroom. The RAILS project has already captured important information about student information literacy skills at several institutions and identified potential barriers to rubric assessment so they may be better understood and overcome. Perhaps most importantly, RAILS has demonstrated that VALUE rubrics, adapted for analytical, campus-specific purposes, can spur instructional improvements, increase assessment activity, and improve collaborations among faculty, cocurricular professionals, and librarians. In the coming year, RAILS will build on these successes to keep rubric assessment of information literacy “on track”!


Megan Oakleaf is an assistant professor at Syracuse University.
