Liberal Education

Learning about Learning Outcomes: A Liberal Arts Professor Assesses

Over the last ten years, my colleagues and I have heard, with ever-increasing frequency, the call to measure learning outcomes. We are keenly aware that we are living in an age of assessment, when bodies external to the university, including legislatures, are eager to see “accurate measurements” of student learning. While perhaps my humanities-inclined liberal arts cohort (within and beyond my particular university) is atypical, most colleagues with whom I have talked dismiss such measurement as an unrewarded add-on without course benefit, and see those who demand it as outsiders to our guild; that is, those who talk a lot about it aren’t “in the trenches” actually teaching. Many regard this conversation about learning outcomes as rhetoric in the negative sense. Further, this “time-waster done for others” comes in a strange and unfamiliar language. The result is often frustration, fear, and even anger.

Although I have been teaching for over a quarter century, I was never trained in, nor truly understood, “outcomes assessment.” I regarded it as a task to be completed for an outside accreditor that had little relation to my real “liberal arts” goals for students. Those goals include teaching critical self-awareness, developing empathetic understanding of others, realizing we exist in and are shaped by our historical context, and in particular, comprehending the disciplinary ways of thinking found in religious studies. Instead, what I believed “they” wanted was some objective measurement of the “name five key figures and three major schools of thought” variety. In fact, that exact question was the first thing I used in a class as an outcome measure. I had no sense that assessment could actually benefit me as a teacher. Further, assessment always came as an uncompensated addition to my already busy schedule. Or it would be an extra committee assignment, taken on by a colleague who was “taking one for the team.”

Learning about learning outcomes

I began to look more closely at learning outcomes for two reasons. The first was serendipitous. I had become friends with a member of our teaching center, Romana Hughes, whom I first came to know through my training on the use of our learning management system, eCollege. I came to respect this staff member by observing how hard-working and skilled she was, and I believed she would not ask me to waste my time. Through a couple of group meetings, I later came to know the center’s director, Jeff King, and felt that he, too, was a talented and supportive person with whom I could work. During a lunch, they told me that eCollege was introducing something called “learning outcomes management,” and that the center staff would like me to work with them as they created a pilot program for our university. My good relationship with them overrode my reservations about “assessing learning outcomes,” so I said yes.

My second reason for learning about learning outcomes was that I knew we would have to do assessment for the new core curriculum and for our accrediting body. So it was not only a good idea in theory to know what our students are actually learning, it was also a practical necessity. As a full professor without the demands many others had to meet for promotion or tenure, I thought I could afford to be a good university citizen, doing something that took time and energy for little or no reward.

The next step was to decide which course to focus on, and this turned out to be easy. Our core curriculum requires all students to take one course in which they “will demonstrate a critical understanding of the role of religion in society, culture, and individual life,” and all faculty in the religion department teach at least one course per semester that meets this core requirement. Each of these courses must address one or both of the following learning outcomes:

  • Students will demonstrate familiarity with one or more disciplinary approaches to the study of religion.
  • Students will demonstrate knowledge of one or more major religious traditions through the study of some foundational texts, figures, individuals, ideas, or practices.

My own introductory course, Understanding Religion: Worldviews and Religions, takes up both of these outcomes.

I then began meeting with the staff of the teaching center. The director bought the books assigned for my course (which impressed me!), and we began discussing how well my personal goals for the course actually aligned with the university’s requirement, and how effectively my course assignments measured students’ achievement of the outcomes. I reviewed each assignment in my course to determine which outcome it addressed and planned to measure the corresponding student learning over the course of the semester. Through this process, which I had never before attempted, I gained greater clarity about how course assignments fit with an outcome and how to measure the success of each student in meeting the outcome.

Although I already had a good understanding of my goals for the course, this review helped me more clearly recognize my true agendas and priorities. For example, it is even clearer to me now that my primary emphasis early in the course is on method (understanding various approaches to the study of religion), and that my emphasis shifts to content (the “data” of religious traditions in historical context) as the semester progresses. One must balance these, of course; a professor must always include some of both, and each enhances the other. I saw, for example, that while I used both method and data to demonstrate that religious beliefs and practices are socioculturally conditioned and historically contingent, I did not explicitly measure students’ awareness of this (and their own) conditioning; instead, I relied on comments arising from class discussion and various assignments to indicate whether students were reflecting on this contingency.

The next step was to develop a rubric—a term unfamiliar to me from prior academic training—that I could use to identify and measure students’ achievement of the expected learning outcomes enumerated above. By trial and error and through discussion with the teaching center staff, I found that a simple three-part rubric was sufficient for measuring progress on both outcomes (in a nutshell: got it well, got it sufficiently, didn’t get it). While we are currently working to expand and clarify the rubric further, I want to affirm that even a relatively brief rubric makes sufficient distinctions among students, especially in the case of short journal responses, and gets one out of thinking in terms of formal grades only. A grade, particularly on a paper or major critical essay, is really a summary of such various elements as form and grammar, clarity of focus or organization, utilization of course material, and critical reflection and personal insight. Learning outcomes assessment is far more focused.


By this point, I was beginning to buy into the value of measuring outcomes. But then a serious and crucial question arose: when does a student demonstrate sufficient understanding to say that a course outcome has truly been achieved?

Liberal education goals are notoriously abstract and hard to measure, yet they are the aims that draw many professors into teaching. If the goals are sufficiently ambitious, they certainly can’t be fully accomplished in a course, semester, or even a college career. Since the achievement of outcomes that professors deem worthy of a lifetime commitment is inevitably partial and ongoing, calls for certain kinds of concrete, quantitative, or one-time assessments of them tend to breed cynicism about the adequacy of such measures. Yet if the outcome is too trivial—such as the memorization of names and schools of thought—then its achievement is not what we got into teaching for, and its assessment breeds serious skepticism about the value of the exercise. Further, preparing the measures takes time and often requires training; if the learning outcomes do not truly reflect what a teacher values and wants to measure, then the effort leads to frustration and scorn. Finally, as compared with numbers of publications and high scores on student evaluations, faculty work on learning outcomes assessment is not typically rewarded.

So, as we worked, questions percolated: What should one do when the significant time and effort invested in “deep” teaching and learning yields little or no extrinsic reward? What is a realistic goal for a semester? The answers obviously depend on the discipline and course level, and there is a learning curve for each professor (and for those who guide the program). Realism about what is possible should always be in the foreground.

More specifically, for achieving the outcomes listed above over an entire course, it is obviously not sufficient to measure a student’s success in a few word identifications or online discussion threads. But how about a single major essay on a test? And does one have to ask a particular question at some point in the semester, in a form that allows students to expound sufficiently, so that they can be assessed by that question alone? If not, how do we measure their cumulative demonstration of understanding? One answer is to choose four or five measures, assign each a percentage weight, and total the numbers at the end of the semester. One can then compare each student’s success across the measures, as well as the overall success rate of each measure.
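For readers who want to see the arithmetic of this cumulative approach spelled out, here is a minimal sketch. The measure names and weights are purely illustrative assumptions, not taken from the course described in this article:

```python
# Hypothetical sketch of the weighted cumulative-scoring idea:
# several measures, each given a percentage weight, totaled per student.
# Measure names and weights are illustrative assumptions only.

MEASURES = {                    # weight of each measure (sums to 1.0)
    "journal_responses": 0.20,
    "essay_exam": 0.40,
    "discussion_threads": 0.15,
    "worldview_essay": 0.25,
}

def cumulative_score(scores):
    """Combine per-measure scores (each on a 0-100 scale) into one weighted total."""
    return sum(MEASURES[m] * scores[m] for m in MEASURES)

# One hypothetical student's scores on each measure (0-100 scale).
student = {
    "journal_responses": 90,
    "essay_exam": 80,
    "discussion_threads": 70,
    "worldview_essay": 85,
}

# 0.20*90 + 0.40*80 + 0.15*70 + 0.25*85 = 81.75
print(cumulative_score(student))
```

The same per-measure scores, kept separately rather than totaled, also allow one to compare how well the class as a whole did on each measure, which is the comparison suggested above.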

Another possibility is to design an assignment that will unquestionably and effectively reveal the extent of students’ grasp of the outcome. I unwittingly did this when, finding myself with an extra class period due to missing a conference, I assigned a free-writing essay asking students to describe their own religion or worldview using the phenomenological approach found in many introductory religion texts. In my case, the specific model was that found in Ninian Smart’s Worldviews: Crosscultural Explorations of Human Beliefs (2000), which uses a prototype with six aspects or “dimensions,” including experience, myth, ritual, doctrine, ethics, and society. In this exercise, most students described their “home” Christian denomination using these dimensions. Yet many reported understanding their tradition in a new way through these categories, and some used the model to describe competitors with, or substitutes for, religion, such as humanistic psychology, nationalism, or scientifically influenced skepticism. Most importantly for assessment purposes, they demonstrated their basic comprehension of a disciplinary approach to the study of religion. Many also showed they understood a definition of religion that is broader and more inclusive than simply “belief in god(s).” This exercise thus became the “gold standard” for revealing student success at understanding a core outcome and proved valuable enough to become a regular part of my course.

My thoughts about the value of measuring outcomes have changed over time, but there has been no single “conversion moment.” Incrementally and cumulatively, I have begun truly to see the point of focusing on student learning, and that doing so has improved my teaching and my students’ learning. Contrary to my prior perception of and experience with assessment work, I have been able to gather information that I actually wanted to know and found valuable. I have been able to work at my own pace and with goals I set myself, instead of being asked to add or alter some outcomes to fit someone else’s purpose. I became convinced that the staff of the teaching center wanted to support and facilitate what I was already doing.

I have never lost my awareness of the balance between time invested and benefit achieved. I have attended closely to the benefits, both intrinsic (more effective teaching) and extrinsic (university reward via stipend, credit on annual report for merit raise, reduction of other duties, etc.). As indicated above, I have found that my teaching profited from a refinement of my goals and priorities, and from greater clarity in how course assignments fit with an expected outcome. Because I also have gotten enormous support from the teaching center (and a small stipend), I would certainly say the benefits have outweighed the costs.

I think that positive balance will continue for at least two reasons. First, the workload lightens as one becomes more familiar with it. Further, what one learns in one course, such as how to build rubrics and how to define outcomes, is transferrable to other courses. Second, assessment work can provide useful data and models for various required reviews at the departmental, programmatic, or institutional levels, so it may save time and labor down the road. Ongoing refinement also provides opportunities to “close the loop.”


As far as recommendations for other liberal arts professors, I believe it’s essential to be realistic about the time and financial resources available. Start well in advance of any deadline, begin with “baby steps” and plenty of support, allow time for trust building, and to the extent possible, clearly identify any rewards (time or money). See whether your assessment work can be integrated with other mandatory reviews. And remind your colleagues that assessment work gets easier over time and is transferrable to other courses.

Administrators especially need to realize that assessment requires a way of thinking for which the vast majority of professors have not been prepared by their academic training. It might, therefore, be helpful to provide workshops so they can learn the ropes. Finally, it cannot be said too often: if an administration wants faculty buy-in, then show that you value assessment. Provide support or rewards for undertaking it: reduce other duties (committee work, teaching load), offer summer training or compensation, provide stipends, and give assessment work status in the annual report as well as in the tenure or promotion review process.

When I sit in a faculty meeting with my committed and hardworking colleagues, I still reflect on how to persuade others that the effort is worth it without university support and the level of assistance the teaching center has given me. I strongly believe that faculty members need a clear reward structure for assessment work. If that support is forthcoming, then outcomes assessment is work well worth doing.

Lessons Learned from the Pilot Program in Learning Outcomes Training
As described in this article, the author’s rapprochement with outcomes assessment came about through his involvement in a pilot program in learning outcomes training that was administered by the Koehler Center for Teaching Excellence at Texas Christian University. Jeff King, director of the Koehler Center, has compiled the following list of lessons learned from the pilot program.
  1. Help faculty divorce learning outcomes assessment from grading assignments or even from determining a course grade. Understanding this key difference can lead to “aha moments” that help faculty accept learning outcomes assessment as a worthy enterprise.
  2. Allow faculty to work at their own pace and to select what they want to measure in a baby-steps process.
  3. Provide guidance and direction in building rubrics, but don’t “over-teach.” Faculty realize on their own when trying to use a poorly constructed rubric that they haven’t accurately characterized the differences among levels of achievement. Learning to write good rubrics results in instructional strategy improvement in multiple ways. Don’t risk losing that, especially since adjusting rubrics as a “lesson learned” for the next time the class is taught is in itself extraordinarily valuable.
  4. It takes time. We thought four training sessions about learning outcomes could do it. We were dead wrong. We’ve since adjusted our training to build in the one-on-one debriefing sessions about the rubrics that faculty submit. Those colleague-to-colleague conversations are pure gold; the learning-by-doing process and the trust-building engagement pay huge dividends.
  5. Design a useful end-of-term report. Our report includes the data summary for outcomes achievement, but perhaps the most important components are “lessons learned” and “action steps.” In those segments, faculty describe where/how things could be better. Often these realizations are about improving rubrics, adjusting assignments (even—gasp!—eliminating them), or assessing differently. “Action steps” define what the professor will do differently the next time the course is taught, and faculty are usually eager to see whether the changes result in higher student achievement of outcomes.




Smart, N. 2000. Worldviews: Crosscultural explorations of human beliefs, 3rd ed. Upper Saddle River, NJ: Prentice-Hall.


Andrew O. Fort is professor of religion at Texas Christian University.
