Peer Review
Winter 2010, Vol. 12, No. 1

Developing the Framework for Assessing a New Core Curriculum at Siena College

By Ralph J. Blasting, dean of liberal arts, Siena College


Siena College reveals the dirty secrets behind developing a campus-wide assessment plan, and asks “Are we alone?”

When the Peer Review editor asked for a contribution “illuminating the complexities” of campus assessment practices, I was both flattered and apprehensive. Though our team came away from the AAC&U 2009 Engaging Departments Institute with an elegant plan to integrate the assessment of general education with assessment in the major, a follow-up report in November would represent relatively small accomplishments given that it could cover only eight weeks of the fall semester. Our campus is just beginning to develop a culture of student outcomes assessment, and simultaneously grappling with revision of our core curriculum and the development of a new strategic plan. Somehow I couldn’t help but recall Malvolio, the steward in Shakespeare’s Twelfth Night, who is captivated by a secret letter telling him that “some are born great, some achieve greatness, and some have greatness thrust upon them.” Encouraged by Peer Review, we herewith put on our yellow stockings and present ourselves to the court of academic opinion, hoping for better results than Malvolio’s.

Siena College is an undergraduate, mainly residential, liberal arts institution of 3,000 students, situated on the northern boundary of Albany, New York. We were founded by seven Franciscan friars in 1937. Our Franciscan and Catholic tradition is at the core of our mission and planning documents—although it also engenders some of the most lively campus debates about what that actually means for our curriculum and policies. Our current president, Father Kevin Mullen, took office in July 2007; we hired a new director of assessment in fall 2008 and submitted a Middle States interim review in summer 2009. We began a fundamental review of our core curriculum in fall 2007, with a target completion date of spring 2009 and implementation for fall 2010.

Presuming that a new core structure would be in place by July 2009, we applied for the Engaging Departments Institute with a team ready to draft a core assessment program at the conference. Two years ago, we had no plan for campus-wide assessment of student outcomes and few consistent assessment programs in the majors. A new Assessment Planning Committee (APC) was formed in fall 2008, and our core review was not going particularly smoothly. The Engaging Departments Institute seemed like exactly what we would need to help us learn to create a culture of assessment within and among the academic major departments. This would be crucial to the success of the program, since academic departments have the most direct impact on students while retaining the greatest degree of autonomy. For better and worse, departmental faculty are acknowledged as the authoritative voice in the delivery and evaluation of the curriculum. While assessment expertise may vary widely among faculty and departments, the most successful programs are those that the faculty view as important and useful. Our team thus applied to the Engaging Departments Institute with the purpose of developing the framework for assessing our new core curriculum.

The Institute

A priest, a dean, a teacher, and an art historian walk into the Alumni Center at the University of Pennsylvania. This may sound like the start of a bad joke, but one of our goals was to include a diverse team of participants at the institute: all of the team members hold some level of leadership on the campus. Not all are tenured; our assessment expertise varies widely; and none of us had been directly involved in revising the core curriculum. What we do share is an interest in practical, valid assessment practices that yield information useful to us and to our colleagues, and that can help our students become more intentional learners. All of us teach core courses; each of our departments is committed to offering large numbers of core classes to all majors. As dean of liberal arts, I was deeply concerned about how the new core would manifest itself, as nearly half of the sections that I schedule every semester fulfill some aspect of the core. Given the political battles that had largely dominated core discussions up to spring 2009, we believed that an assessment plan for the new core would help to keep future conversations focused more precisely on educational effectiveness. Eager to spend four days in Philadelphia beginning to craft such a plan with the help of national experts, we very quickly encountered two potential obstacles. First, the core revision process was not completed in spring 2009 as planned, so we were heading to the conference without a curriculum to assess. Second, Carol Geary Schneider’s plenary address on “The Integrative Work of the College Major” caused a radical shift in our thinking, perhaps because our institution’s separate assessment plans for general education and the majors only reinforce what Schneider described as an artificial and unhealthy division.

In many cases, we are the same faculty leading the same students toward college-wide learning goals. To be sure, the work of the majors has more disciplinary depth and is sequenced to achieve a certain level of proficiency as defined by the faculty and professionals in that field. But a typical undergraduate degree requires that about one-third of any student’s coursework be taken outside of his or her chosen concentration, in what we call general education. While both faculty and students may think about general and major coursework as two very different kinds of experiences, the two must work together to create the breadth and depth so often cited as hallmarks of American liberal education. Our team, in our very first working session that evening, questioned the wisdom, if not the validity, of a separate assessment of general education. If almost all of our faculty in the School of Liberal Arts are teaching core courses to all majors, and if all majors consider the core an integral part of their degree, then shouldn’t there be a way to assess the learning outcomes of the core as a part of the assessment of the major? Each major already has some form of assessment in place, though some programs are highly developed and others are just beginning. Faculty tend to see their efforts on behalf of their majors as more directly relevant to their expertise and to the well-being of their departments than the energy expended on general education. Given these two premises, it seemed to us that a culture of assessment would be much more likely to take root at the department level than it would if imposed broadly across the campus—especially to assess a new core that would likely not have unqualified support. Our next job was to flesh out our ideas and bounce them off of the expert consultants available to us: the institute faculty.

As recommended prior to the conference, we divided up to attend the three different tracks of the conference (education leadership; faculty work; and the learning, assessment, and improvement cycle), coming back together regularly to compare notes. We contributed to our plan from our various perspectives. The Education Department, for example, has extensive assessment in place, as required for its NCATE accreditation. Our chair of education is adept at organizing assessment activities to match the curricular frameworks already in place, or those anticipated in the new core. Our chair of creative arts drew from her department’s experience with a new senior capstone course. The department offers one degree that allows students to concentrate in music, theater, or visual arts, so that the capstone class presents a wide variety of projects to assess. Because there is little consistency of product among those projects, she has become proficient at seeing how broad learning goals are manifested in particular student products. One of our team members is a relatively new faculty member in our Religious Studies department. The department has only about a dozen majors, but it serves every Siena student with at least one course and will be the primary guide for new “Franciscan concern” courses in the new core. While assessment practice in Religious Studies is relatively simple, the broader implications of the new core for that department are significant. The small number of majors allows the department to gather accurate data on the student experience through senior exit interviews and surveys. However, all Religious Studies faculty teach a large number of nonmajors, and the department clearly has an interest in assuring that “Franciscan values” continue to play a significant role in our curriculum. Finally, as dean, I oversee eleven departments that are collectively responsible for 75 percent of the core curriculum. The group with which I meet most regularly is the department chairs, who serve two- or three-year rotating terms with little compensation and no administrative support. While none categorically rejects learning outcomes assessment, all of them are wary of data-collection activities whose results will not be useful in proportion to the effort required to gather them.

The Plan

As stated in our application to the institute, our initial goal was to “create a framework for assessing the general education core.” We specifically wanted to “create and implement assessment techniques that measure the common learning goals across disciplines.” At one of the institute’s open feedback sessions, we told our colleagues that the process we had envisioned was flawed. We decided instead to find a way to integrate general education and major assessment within the disciplines. Specifically, we proposed to work with individual departments to find out how they can assess their students’ accomplishment of college-wide learning goals as they implement their own departmental assessment plans. Because departmental learning goals are derived from the more general college goals (see fig. 1), department faculty should be able to make their own evaluations of the degree to which their students are meeting both aspects of learning.

For example, one college learning goal is “informed reasoning.” The History Department uses a capstone research project to assess its own goals, but the faculty should also be able to evaluate the level of “informed reasoning” apparent in those projects. The Finance Department might see “informed reasoning” from a different disciplinary point of view, but it can still evaluate its students’ abilities in that area. These evaluations would be a part of each department’s regular assessment activities, requiring minimal additional work from the faculty.

Each department submits an annual assessment report as a part of its year-end progress report. These reports go to the Assessment Planning Committee, made up of faculty representatives from each division. The mechanisms are therefore in place for faculty to receive and review student work, looking for accomplishment of both departmental (major) and college-wide (core) goals. The department is expected to make some evaluative statements about the degree to which students are meeting those goals, with suggestions about how the department might become more effective. We might also reasonably argue that every core course should be addressing these skills in some way. The first step, however, is to determine whether faculty perceive any patterns of achievement or deficiency across majors. It would fall to the APC to look for patterns within the responses received from departments, and then to suggest ways to improve our core curriculum in response to those data.

Figure 1. The Siena College Learning Goals

As a learning community and liberal arts college grounded in its Franciscan and Catholic heritage, Siena affirms the following learning goals:

Learning Goal 1. Informed reasoning (Reason)
Students will think critically and creatively to make reasoned and informed judgments. Through engagement with contemporary and enduring questions of human concern, students will solve problems in ways that reflect the integration of knowledge across general and specialized studies, and they will demonstrate competence in information literacy and independent research.

Learning Goal 2. Effective communication (Rhetoric)
Students will read a variety of texts with comprehension and critical involvement, write effectively for a variety of purposes and audiences, speak knowledgeably, and listen with discernment and empathy.

Learning Goal 3. Meaningful reflection (Reflection)
Students will comprehend that learning is a life-long process and that personal growth, marked by concern and care for others, is enhanced by intellectual and spiritual exploration.

Learning Goal 4. Regard for human solidarity and diversity (Regard)
Students will affirm the unity of the human family, uphold the dignity of individuals, and delight in diversity. They will demonstrate intercultural knowledge and respect.

Learning Goal 5. Reverence for creation (Reverence)
Students will demonstrate a reverence for creation. They will develop a worldview that recognizes the benefits of sustaining our natural and social worlds.

Learning Goal 6. Moral responsibility (Responsibility)
Students will commit to building a world that is more just, peaceable, and humane. They will lead through service.

Approved by the Board of Instruction, November 18, 2008

Dreams, Doubts, and Pitfalls

The dream of elegance embedded in our plan is that it uses already-established activities in departments to evaluate the success of the core curriculum. No new faculty committee would be created, and no new general education assessment plan would be put into place only to be marginalized and ignored as someone else’s problem. Faculty would be looking at their own (major) students in a more holistic way as they sought evidence of learning accomplished outside of the major. Faculty would also come to see major and general education as two parts of a unified experience, for which all faculty are responsible. While the assessment “data” would come in a variety of forms in response to a variety of prompts (papers, surveys, interviews, test scores), our dream includes APC members who would be able to discern patterns of strength and weakness across disparate data from various departments. Looking beyond issues of validity and reliability, they would value the sometimes-intuitive feedback from a variety of disciplines to make suggestions for improvement across the board. Finally, our dream includes a presumption that faculty across disciplines can find some level of agreement on how the qualities we all seek in our students can be manifested by a graduating senior.

The doubts are obvious. Are data collected through a variety of means from a variety of disciplines “data” at all? How much validity is lost due to variation of measures and methods? Is there any consistency among faculty as they seek evidence for “effective communication,” “meaningful reflection,” or even “regard for human solidarity and diversity”? Is a single assessment point at the senior year sufficient to assess student accomplishment in core courses spanning the entire undergraduate experience? Will faculty be willing to expand their current assessments of the majors to achieve this broader look at student achievement of college-wide goals? And even if all of these doubts can be addressed, will the resulting assessments lead to real changes and improvements in core courses?

Pitfalls are likewise obvious. We might present our ideas to the Assessment Planning Committee ineffectively, killing the project before it starts. And even with the APC on our side, its support alone will not win over departments that are still struggling to establish their own practices in the major. The idea needs broad understanding and support from the outset, since it proposes to allow mainly full-time departmental faculty to comment on the effectiveness of courses outside their majors, courses often taught by part-time faculty. And finally, whose idea is it anyway? If I, as dean, “support” this approach too strongly, it will be seen as a top-down administrator’s project. The AAC&U imprimatur sometimes offers legitimacy, but it is just as often seen as outside meddling in our internal processes. Regardless of the value of the idea, it can be sidetracked at many points along the way.

Progress Report: Back to Reality

One of my fellow institute team members and I were in fact invited to report to the APC in October. The meeting went well, but our presentation may have been somewhat hampered by “wet dog syndrome.” We returned from a rich working conference in July, at which we and our colleagues developed what we thought was an elegant solution to a complex problem. Like the dog just returning from a dip in the lake on a hot summer day, we wanted to share our joy and enthusiasm. As anyone who has returned from a conference with the same exuberance knows, the effect on bystanders is often the same as that of the dog: the joy you wish to share is perhaps too sudden and too widely distributed to be received well.

At the same time, I have to say that the APC asked the same questions we asked of ourselves in our doubting moments. Is the department the best place for general education assessment to occur? It might be too narrow for departments with well-developed assessment practices, yet too complex for those just beginning. Would core assessment through the major be focused enough? That is, will we learn enough about what is working in the core and what is not to make informed judgments for change? Do we not need multiple points of assessment throughout a student’s career, including her work in the core? And first and foremost in the minds of the assessment committee was the question, Is this a good time? The core was still in flux at the time of our initial presentation in mid-October. By October 27, our curriculum committee had (thankfully) passed a new core, albeit not without objection from several departments. We are now ready to move ahead in answering some of the more detailed questions about the new core, but the issue of how that new core is to be assessed remains. The decision was to receive our proposal but to hold it until the spring, when members of the APC could complete their reviews of current departmental assessment practices.

Conclusion

Assessment is above all a human process (says the dean of humanities). Any new process takes time and patience, and nowhere is this truer than in academe. While administrators, boards of trustees, and accreditors provide our incentives, we tend to be (rightfully) skeptical of initiatives pursued for their own sakes. If it is not a natural part of a department’s annual activities, outcomes assessment becomes meaningless data collection at best. As in art and athletics, some departments and institutions excel easily, while others come to assessment slowly. Or as Malvolio is advised, “Some are born great, some achieve greatness, some have greatness thrust upon them.” The final effect on Malvolio in the play is left ambiguous: having been embarrassed by his erratic behavior, he is then imprisoned and abused for several days. He leaves the play with the threat that he will be “revenged upon you all.” And yet, in the tradition of the comedies, some interpreters hope that he returns later, humbled yet perhaps wiser about the fickle nature of human judgment. Having spent our four days in Philadelphia (as Malvolio spends four days in “a dark place”), we will put on our yellow stockings and go forward.
