Peer Review, Winter 2010, Vol. 12, No. 1

Engaging Departments in Assessing Student Learning: Overcoming Common Obstacles

By Jo Michelle Beld, professor of political science and director of evaluation and assessment, St. Olaf College


Assessment helps us figure out whether our students are learning what we think we’re teaching.
—Chemistry faculty member

Discussing how to go about assessing the intended learning outcomes of our major led to some of the best—and longest!—conversations we’ve ever had about pedagogy.
—Romance languages faculty member

Assessment played a key role in our being awarded an NSF grant for curriculum and pedagogical innovation, and now that the grant is complete, we’re able to show convincingly that it had great results.
—Psychology faculty member

Assessment can be useful in the classroom insofar as it helps make our expectations more transparent to our students.
—Political science faculty member

Assessment at the department level is a bit like living in Minnesota—it’s not always easy, but in the long run, it’s worth it. To be sure, gathering credible evidence of student learning in a major, minor, or concentration takes commitment, creativity, and, occasionally, some courage. But as a growing number of faculty are finding, there can be real payoffs to the work, particularly at the department level.

At St. Olaf College—an academically rigorous, nationally ranked liberal arts institution in Northfield, Minnesota—a “utilization-focused” approach to assessment has enhanced meaningful departmental engagement with evidence of student learning. With assessment projects centered on “intended uses by intended users,” departments are using results in a variety of ways, from redesigning gateway courses to redirecting feedback on student writing. A utilization focus is helping many departments begin to realize some of the rewards that thoughtful assessment can deliver without the excessive burdens that many faculty fear.

The Challenges of Assessment at the Department Level

The challenges of department-level assessment are all too familiar; the following are among the most pressing.

Fear of compromised autonomy. Most midcareer and senior faculty remember the early days of assessment, when the principal, if not sole, purpose of the work seemed to be accountability (“Prove you’re doing your job”), and the principal audience for the results consisted of administrators and accreditors. Although the climate for assessment has changed considerably in recent years, there remains a lingering suspicion of assessment as a threat to faculty autonomy. For some, assessment raises the specter of standardized curriculum, paint-by-numbers pedagogy, and teaching to the test so institutions can “pass accreditation.” Understood in this way, assessment runs head-on into the two most highly valued qualities of faculty work life—freedom in determining course content, and professional independence and autonomy (DeAngelo et al. 2009). It is therefore actively resisted as an encroachment on academic freedom. Other colleagues voice the opposite concern—that assessment reports are simply put in a file drawer and hauled out only when the next accreditation review rolls around. They associate assessment with the “institutional procedures and red tape” that nearly three-quarters of faculty consider to be a source of stress in their jobs (DeAngelo et al. 2009). In either case, whether assessment is viewed as an occasion for unwarranted administrative interference with faculty work, or as a useless exercise in bureaucratic paper-pushing, it smacks of top-down decision-making. It has not helped matters that, despite genuine efforts by many institutions to foster grassroots faculty ownership of assessment, responsibility for leading the effort still rests principally with academic administrators.

Methodological skepticism. There are at least two different versions of this concern. Some faculty dismiss assessment as a reductionist enterprise, inappropriate to the complex outcomes of higher education. They are particularly skeptical about measuring outcomes in a major or other academic concentration, where student learning is expected to reach its greatest depth and sophistication. Other faculty take the opposite tack, arguing that valid assessment is too methodologically complicated for most faculty to undertake. These colleagues associate assessment with the publishable—and generally quantitative—work undertaken by statisticians and educational researchers. While this is a problem for assessment in general, it’s magnified in department-level assessment. At the institutional level, responsibility for figuring out what and how to assess often rests with an institutional research office or a specially appointed faculty member with released time and a small professional development budget. But at the department level, assessment responsibility rests with the members of the department themselves, most of whom feel underprepared and ill-equipped. The practical conclusion reached by both camps is the same: departmentally conducted assessment is unlikely to tell us anything meaningful about our students.

Lack of time. Nearly three-quarters of faculty members at four-year institutions report lack of time as a source of stress—second only to “self-imposed high expectations” (DeAngelo et al. 2009). Busy faculty committed to high-quality teaching, trying to maintain a scholarly research agenda, and increasingly engaged in a broad array of governance and administrative responsibilities are understandably drawn to activities with more immediate and certain payoff than assessment. Competing demands for faculty time are problematic not only for individual faculty members, but for departments as a whole; as any chair will attest, the array and complexity of responsibilities departments are expected to carry out continues to expand (Sorcinelli and Austin 2006). Assessment is understandably perceived as just one more thing departments have to do, and ironically, as something that takes precious time away from the very thing it’s supposed to foster—improved teaching and learning. As one chair said to me in the early phases of department-level assessment at our institution, “If I have to choose between the long line of students waiting outside my door during office hours and the assessment report I’m supposed to write for my associate dean, the students will win every time!”

Responding to the Challenges: A Utilization Focus in Department-Level Assessment

Suspicion, skepticism, and stress are powerful disincentives for departments to invest in assessing student learning. But these conditions are by no means unique to assessment; they characterize the conduct of most program evaluations, whatever their organizational context. In Utilization-Focused Evaluation, Michael Patton (2008) argues that the central challenge in evaluation research is “doing evaluations that are useful and actually used” (xiv). The model of “utilization-focused evaluation” that Patton developed in response to this challenge is readily adaptable to the conduct of assessment in higher education, which is essentially a specific domain of applied evaluation research. Utilization-focused assessment turns on the core question, “What evidence of student learning do we need to help us identify and sustain what works, and find and fix what doesn’t?” Like effective teaching, effective assessment begins with the end in mind. Below are key features of St. Olaf’s model.

Focusing on intended users and uses. Patton’s research on effective evaluation practice begins with the observation that evaluation processes and results are much more likely to be used to inform program decisions and practices when there is an identifiable individual or group who personally cares about an evaluation and the findings it generates (44). Consequently, the best starting point for assessment planning is not the details of sampling or instrumentation, but the question, Who has a stake in the evidence to be gathered, and what will they do with the results? Patton also argues convincingly that methodological choices should be governed by users and uses—not simply to foster ownership of the results, but to enhance their quality (243ff). At St. Olaf, a focus on intended users and uses has made a significant difference in both the design of departmental assessment projects and faculty engagement in the effort. Departments are encouraged to structure assessment efforts around a concrete problem of practice: the number or distribution of their major requirements, the content of key courses, the scaffolding they provide prior to advanced assignments, or the sequencing of instruction across courses. More than one chair has said that the focus on the department’s own questions has signaled a genuine and welcome shift in the premises that govern the college’s program of assessment.

Limiting the agenda. Purposeful inquiry is focused inquiry (Patton 2008, 170). The number of questions that can be asked about a program always exceeds the time and energy available to answer them. Moreover, if the goal of gathering evidence is to improve practice, then faculty need time to interpret and act on the evidence they gather. Consequently, the expectations for the scope of the departmental assessment projects undertaken in a given year at St. Olaf are intentionally modest. When departments were initially asked to articulate intended learning outcomes for the majors, concentrations, and other academic programs they offer, they were encouraged to focus on outcomes distinctive to their program and to limit the number of outcomes to five or fewer. When they were subsequently asked to begin gathering evidence of student learning, they were asked to choose only one outcome as the focus of their efforts, and, as described above, to select an outcome related to a practical concern. In assessment, as in so many other features of academic life, less really is more.

Treating findings as a means to an end, not as ends in themselves. In utilization-focused assessment, the findings are not an end in themselves, but rather a means to the larger end of improving teaching and learning (Patton 2008, 68ff). The guidelines for the “assessment action reports” prepared by departments at St. Olaf reflect this premise explicitly. The first—and principal—question departments are asked to address in these reports is not, What were the results of your rubric/test/survey/portfolio analysis? but rather, Which features of your major/concentration are likely to be continued, and which might be modified or discontinued, on the basis of the evidence you gathered about student learning? In describing their intended actions, departments are encouraged to cite only the results that pertain to the continuities and/or changes they anticipate. Finally, they are invited to indicate the practical support they need from their associate deans in order to carry out their plans. In this reporting model, assessment results are treated as they should be—as supporting material for conclusions the department has reached about its program, rather than as the conclusions themselves.

The Payoffs of Utilization-Focused Assessment for Departments

Utilization-focused assessment at St. Olaf has begun to mitigate the challenges to departmental engagement with evidence of student learning. Concerns about compromised autonomy, meaningless results, and wasted time begin to dissipate when departments themselves are treated as both the agenda-setters and the primary audience for the evidence they gather. Utilization-focused assessment provokes less anxiety about either bureaucratic interference or administrative indifference, because it is the department itself that is the principal respondent to the evidence. Methodological skepticism is moderated when departments are encouraged to observe (rather than “measure”) and summarize (rather than “quantify”) information about student learning in ways that are consistent with both their disciplinary methods and their pedagogical practices. And while assessment still requires an investment of precious faculty time, the investment is less burdensome when the agenda is limited and linked to practical questions of genuine faculty concern. For all these reasons, utilization-focused assessment is beginning to pay off at the department level, and the results are making a discernible difference in departmental discussions and decisions. The following are among the payoffs we have realized.

Fostering shared understandings and commitments. Faculty in the St. Olaf department of religion recently gathered evidence of their students’ ability to “form, evaluate, and communicate critical and normative interpretations of religious life and thought” by assessing a sample of senior essays against a rubric they had developed for the purpose. Both the process of developing and applying the rubric and the consideration of the findings fostered several kinds of shared understandings among the members of the department: first, a clearer and more explicit understanding of the learning goal itself and how to recognize it in student work; second, a commitment to developing and communicating common goals for writing instruction in the required intermediate-level core course; and finally, a commitment to requiring a substantial writing project in all advanced seminars, so that all students will have at least two opportunities to undertake this kind of writing in the major. These decisions will not only enhance the faculty’s shared understanding of “critical and normative interpretations of religious life and thought,” but will also extend that understanding to students.

Informing pedagogical practices. Faculty in the St. Olaf management studies concentration focused their 2008–09 assessment effort on students’ ability to “work effectively in teams to accomplish organizational goals.” For the past two academic years, the Principles of Management course, which is required of all management studies concentrators, has been structured around the pedagogical principles of team-based learning (Michaelsen, Knight, and Fink 2002). Most class sessions begin with short quizzes on assigned readings, completed first by individual students and then by teams. Over three semesters of evidence-gathering in this course, each team consistently outperformed its highest-scoring individual member. Consequently, the program faculty have decided to continue the use of team-based learning in the Principles course; to use elements of team-based learning in the accounting and marketing courses; and to convert the corporate finance and investments courses to team-based learning in fall 2010. The systematic evidence gathered in the Principles course is allowing program faculty to make a collective, evidence-based decision about pedagogy across an array of courses, and to demonstrate powerfully to their students (many of whom have had negative experiences with group work in the past) that the time they invest in learning to work effectively in teams is well spent.

Several other departments are using assessment results to fine-tune instruction in one or more of their key courses. For example, a department in the social sciences found that, although student papers in advanced seminars were generally proficient, students were better at crafting clear and contestable thesis statements than they were at invoking disciplinary theory. The department plans to provide explicit instruction to enhance students’ ability to engage theoretical debates in their research papers. A natural sciences department is rewriting its laboratory safety policies and procedures document on the basis of the lab safety quiz it developed and administered to its majors. A department in the humanities is planning to use its writing rubric not just as an assessment instrument, but as a teaching tool in advanced research and writing courses. None of these departments discovered glaring deficiencies in their students’ learning, and none are planning a wholesale overhaul of their curriculum or instructional practices. But they did discover patterns of relative strength and weakness that could be readily—and confidently—addressed in specific courses.

Securing resources. Assessment is an increasingly important consideration in grant applications for improving curriculum and instruction; it can help make the case that a proposed project is needed, and it can also provide evidence of departmental capacity to document project outcomes. The St. Olaf department of psychology has incorporated both kinds of arguments in successful bids for both internal and external funding. Assessment findings were part of the rationale for a college-funded curriculum improvement grant recasting the department’s introductory lab course as a gateway course for majors rather than a general education course for students in any discipline. Assessment capacity supported a successful departmental request to the National Science Foundation to lead a national project integrating investigative psychophysiology lab experiences in introductory psychology courses, to increase students’ understanding of psychology as a natural science. Utilization-focused assessment has helped this department leverage resources for instructional improvement.

Utilization-focused assessment is not a panacea—it won’t erase antipathy to administrative directives, resolve long-standing methodological disputes within departments, or eliminate pressures on faculty time. But it can make the work of assessment, increasingly an established feature of departmental life, both more manageable and more meaningful for the faculty who care most about the results.

References

DeAngelo, L., S. Hurtado, J. H. Pryor, K. R. Kelly, J. L. Santos, and W. S. Korn. 2009. The American college teacher: National norms for the 2007–08 HERI Faculty Survey. Los Angeles: Higher Education Research Institute, University of California, Los Angeles.

Michaelsen, L. K., A. B. Knight, and L. D. Fink. 2002. Team-based learning: A transformative use of small groups in college teaching. Sterling, VA: Stylus Publishing.

Patton, M. Q. 2008. Utilization-focused evaluation. 4th ed. Thousand Oaks, CA: Sage.

Sorcinelli, M. D., and A. E. Austin. 2006. Developing faculty for new roles and changing expectations. Effective Practices for Academic Leaders 1 (11).