Peer Review

Problem Solving and Transparent Teaching Practices: Insights from Direct Assessment

With the help of thirty-five faculty members across seven Transparency and Problem-Centered Learning project campuses, AAC&U sought to better understand how transparent teaching practices affect students’ demonstrated ability with regard to a targeted learning outcome within courses already incorporating high-impact practices. Although the Transparency in Learning and Teaching Survey, a tool designed to assess levels of clarity and visibility around objectives for student learning, criteria for success, and examples of meeting criteria, would provide indirect evidence of students’ perceived learning and attitudes, this project enabled us to gather direct evidence from students’ work products regarding their ability to demonstrate a particular cognitive skill. Any one of a number of learning outcomes would have made sense for the focus of this project. But we knew, based on prior research, that students often mentioned aspects of problem-solving when discussing what it meant to be most engaged in learning or what they thought employers would most value (Finley and McNair 2013). The following student comments drawn from focus groups highlight the ways in which students recognize the importance of developing problem-solving skills.

“I mean you can have a wide-ranging knowledge about the general condition of what’s going on, but [that knowledge] changes so fast…[in college] we’re supposed to learn how to think critically and to attack problems…instead of having the right answer on hand all the time.”—Student in Oregon

“[Employers are]…asking you to think outside the box sometimes, as well as being able to perform inside the box… So…[a] bachelor’s [degree is] informing [students] of how the world functions and works and how to… understand it and how do we function in it…how we interact with people and communicate and problem solve and the leadership that we can give...”—Student in Oregon

“I guess you…learn the specifics from the book, but in [college] you kind of learn how to problem solve in a general sense. [Y]ou learn…how to go out and look for information on your own, and maybe look for resources and how to solve stuff if you don’t know it…[W]hen you go apply for a job, you can say, ‘I’m really good at…finding solutions for problems or finding information’…[and employers will think]…‘He’s able to adapt and…not panic…if the answer is not given to him or if he needs to go out and do extra work in order to solve something.’” —Student in California

Students also lamented when problem-solving experiences were absent from college learning or when peers were absent from the problem-solving process.

“In the real world, if I have a problem, I can call up three or four different people, and we can get together and solve a problem [but] in a classroom setting, it’s me all by myself with no book, and I can’t…hey, what do you think? [P]erhaps college would serve us better if it was more interactive… [let the] class get together, solve these problems, and that’s [the] final.”—Student in Oregon

“[I] mostly have just huge lecture classes…and…even with…my science lab, …my class[mates] would just leave…I would sit there for…trying to figure out the problems…[W]e never work as a team or anything like that.” —Student in Wisconsin

Survey data from employers supports students’ sentiments. When asked what skills students should attain in college, regardless of their field of study, 91 percent of employers indicated “problem solving in diverse settings,” followed by “direct experience with community problem solving” (Finley and McNair 2013). When employers were asked what skills should receive greater emphasis within colleges and universities, the second most commonly identified skill (81 percent) was “complex problem solving.” The first was critical thinking and analytic reasoning (82 percent), skills that are, arguably, closely linked with problem-solving abilities.

Approach for Direct Assessment

In order to assess the effect of transparent teaching practices on students’ ability to problem solve, each member of a campus project team agreed to identify two similar courses taught by a single faculty member: one would serve as a control course that did not use transparency techniques, and the other as an experimental course in which transparency techniques were used. All courses were taught within the general education curriculum. Project campuses were selected, in part, because they expressed an interest in developing their general education curricula to be more intentional, outcomes-based, and integrated into the institutional mission.

Ideally, all faculty participants would have identified two sections of the same course for this research, and those sections would have been taught in the same semester. In reality, that research design, while useful for making comparisons, is challenging to execute. Ultimately, owing to course scheduling, circumstances that delayed course implementation, and communication issues, not all faculty were able to implement a control and an experimental course within the project timeline or to submit course data for analysis. Out of thirty-five expected experimental and control courses, data was received for nearly the same number of experimental (N=30) and control (N=29) courses. The total number of students across experimental and control courses was also roughly equivalent: 296 students and 284 students, respectively. Courses were taught either concurrently during the spring 2015 semester or in consecutive terms between fall 2014 and summer 2015.

In order to assess improvement in student learning from the beginning to end of the course, faculty identified a pre- and a post-assignment, from each of which they would randomly select ten work products (twenty total work samples from each course). This was done for both control and experimental courses. All faculty members participated in an in-person training session on how to apply the AAC&U Problem Solving VALUE Rubric to student work prior to conducting campus scoring. Following training, each campus team was instructed to collectively score student work samples gathered across courses, such that individual faculty did not score work exclusively from their own courses. Scoring was completed between the summer 2015 and fall 2015 terms. Results from each campus were forwarded to AAC&U for aggregation and analysis.
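To make the sampling step concrete, the minimal sketch below illustrates one way a faculty member might draw the twenty work samples for a single course. The pool sizes, file names, and seed are hypothetical assumptions for illustration; the project did not prescribe any particular tooling.

```python
# Minimal sketch of the sampling step described above, under assumed details:
# for each course, draw ten work products at random from the pre-assignment
# and ten from the post-assignment (twenty samples per course).
import random

def sample_work_products(pre_products, post_products, n=10, seed=None):
    """Randomly select n work products from each of the pre and post pools."""
    rng = random.Random(seed)
    return rng.sample(pre_products, n), rng.sample(post_products, n)

# Hypothetical pools of submitted work products for one course
pre_pool = [f"pre_student_{i:02d}.pdf" for i in range(1, 31)]
post_pool = [f"post_student_{i:02d}.pdf" for i in range(1, 31)]

pre_sample, post_sample = sample_work_products(pre_pool, post_pool, seed=2015)
print(len(pre_sample) + len(post_sample))  # 20 work samples sent for scoring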

What We Learned

This project provided an opportunity to conduct the first multi-campus implementation of the AAC&U Problem Solving VALUE Rubric. Because sample sizes were relatively small and were drawn from a non-random group of courses, we did not intend for the data gathered from rubric scoring to meet the rigors of statistical analysis. We did, however, intend to learn something from faculty collaboration in the use of rubrics in general and from the use of the problem-solving rubric in particular. Thus, the findings from the direct assessment in this project are weighted more heavily toward what we learned from the process of engaging faculty teams in scoring student work with the problem-solving rubric than toward the data points themselves. The data did indicate that, overall, students’ demonstrated problem-solving skills improved from the pre-assignment to the post-assignment. This was true for students in both the control and the experimental courses. The finding also held when courses were regrouped as “more” or “less” transparent, based on students’ reported scores on the transparency survey, rather than as experimental or control. Figures 1 and 2 below indicate the percent change in students’ rubric scores from pre-assignments at the beginning of the course to post-assignments at the end of the course.

[Figure 1. Percent change in problem-solving rubric scores from pre- to post-assignment, experimental versus control courses]

[Figure 2. Percent change in problem-solving rubric scores from pre- to post-assignment, “more” versus “less” transparent courses]

The graphs demonstrate that, although average rubric scores improved over time for all students in this project, it was difficult to draw conclusions from comparing courses (either control versus experimental or “less” versus “more” transparent). Figures 1 and 2 suggest that students in courses with higher levels of transparency (whether labeled experimental or more transparent, respectively) demonstrated greater percent improvement in rubric scores over time, specifically in their ability to “define the problem” and to “identify strategies” for problem solving. Across the other rubric dimensions, however, students in the control or “less” transparent courses appeared to demonstrate greater improvement over time.
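For readers reproducing this kind of analysis, the sketch below shows one plausible way to compute the percent change plotted in Figures 1 and 2, assuming scores for a rubric dimension are averaged across a group of courses before and after instruction. The function and the sample scores are illustrative assumptions, not the project’s actual data.

```python
# Minimal sketch (assumed approach): percent change in mean rubric scores
# from pre- to post-assignment, computed per rubric dimension.
# The scores below are hypothetical placeholders on the rubric's 0-4 scale.

def percent_change(pre_scores, post_scores):
    """Return percent change from the mean pre score to the mean post score."""
    pre_mean = sum(pre_scores) / len(pre_scores)
    post_mean = sum(post_scores) / len(post_scores)
    return (post_mean - pre_mean) / pre_mean * 100

# Hypothetical scores for one rubric dimension in one group of courses
pre = [1, 2, 1, 2, 2, 1, 2, 1, 2, 2]
post = [2, 3, 2, 2, 3, 2, 3, 2, 2, 3]

print(f"Define the problem: {percent_change(pre, post):+.1f}% change")  # +50.0%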

Mixed results should not be perceived as a failure. The practical reality for campuses engaging in direct assessment with rubrics as highly articulated as the AAC&U VALUE rubrics is that an initial pilot may reveal messy data. It takes time for faculty to become proficient and comfortable with scoring, and it also takes time for faculty to allow the rubric to inform assignment design. Faculty in this project took on the tasks of both understanding the rubric for scoring and understanding its utility for assignment design within the span of a few months. They were also, concurrently, learning best practices with regard to transparent teaching. These caveats are not intended to excuse mixed data but to encourage faculty to dig into data points that do not move in the intended direction or that fall short of expectations. Direct assessment is a learning process that often, helpfully, starts with a pilot. Faculty in this project piloted the problem-solving rubric, both on their own campuses and nationally. Their collaborative efforts provided a number of insights and considerations that can help faculty who may be exploring the use of this rubric on their campuses. The following insights are gleaned from both faculty feedback on the direct assessment process and my own reflections from working with the project teams.

  • Rubrics can be instruments for transparency, as well as tools for assessment. The depth and breadth of articulation within the problem-solving rubric (and all other VALUE rubrics) helps to accomplish two essential practices of transparent teaching—to clearly articulate the intended outcome and to communicate how students will be evaluated. The first page of the rubric details the meaning of problem solving, while the rows and progress points of the rubric itself help students to understand how they will be evaluated and how to improve.
  • Applying different disciplinary lenses to the same rubric and developing examples helped clarify how the rubric could be applied across disciplines. Though the VALUE rubrics were intended to be common tools applied across disciplines, interpreting them as such is not always easy or obvious. In the case of the problem-solving rubric, concerns were raised that the language of the rubric was best suited for application to problems in the natural sciences. To address this, the meanings of words like “solutions” and “hypotheses” were discussed through the lens of the humanities. Collaborative brainstorming on examples of different problems across disciplines was also useful in helping faculty envision how the rubric could be applied within their fields.
  • Dimensions of the rubric need to be understood as both meaningful and feasible. Faculty raised concerns that two dimensions of the problem-solving rubric, “Implement Solution” and “Evaluate Outcomes,” were not practical for students to actually demonstrate within the span of a course. Instead, faculty reframed these dimensions, inviting students to discuss possible implementation procedures and to hypothesize potential outcomes.
  • Getting clear on the distinction between “0” and “Not Applicable” takes ongoing conversation. When using a VALUE rubric to score student work, a score of “0” is applied if the student’s demonstrated performance on a particular dimension does not reach the “benchmark” (1) level. By contrast, a score of “not applicable” is given if the assignment does not invite the student to demonstrate learning on that dimension in the first place. Confusion over the practical application of these two scores across nearly all of the project teams may help explain the mixed results from the rubric data. Though a couple of team leaders believed they eventually achieved enough clarity to maintain consistent scoring, other team leaders were not as confident, and at least one reported that this remained an issue throughout scoring. Campuses working with any of the VALUE rubrics should plan to raise this issue as a point of ongoing discussion during the scoring process (one way to handle the distinction when aggregating scores is sketched after this list).
  • The scoring process isn’t just useful for getting data. Several team leaders commented on the utility of the dialogue that arose from the scoring process for understanding how outcomes were shared across disciplines. These discussions also assisted with assignment design. In seeing where students were stronger or weaker across the rubric, faculty could consider how to adjust assignments accordingly. One team leader commented on being able to engage more effectively in backward design following scoring, because those conversations provided greater understanding of the intended outcome and, therefore, insights into how to construct more relevant assignments.
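As a concrete illustration of the “0” versus “Not Applicable” point raised above, the minimal sketch below shows one way scores might be recorded and aggregated so that a “0” lowers a dimension’s average while an N/A is excluded from it entirely. The dimension name and scores are hypothetical, and this is only one plausible convention, not the project’s documented procedure.

```python
# Minimal sketch, assuming one plausible scoring convention:
# a "0" counts toward the dimension mean (performance below the benchmark level),
# while "Not Applicable" is recorded as None and excluded from the calculation
# because the assignment never invited that dimension. Scores are hypothetical.

def dimension_mean(scores):
    """Average rubric scores, ignoring Not Applicable (None) entries."""
    applicable = [s for s in scores if s is not None]
    return sum(applicable) / len(applicable) if applicable else None

# Hypothetical scores for "Evaluate Outcomes" across ten work samples
evaluate_outcomes = [0, 1, None, 2, 0, None, 1, 1, 0, 2]

print(dimension_mean(evaluate_outcomes))  # 0.875: the two N/A samples are excluded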

Conclusion

The use of the problem-solving rubric across project campuses provided valuable insights into how the rubric can be applied, the issues that emerge in doing so, and the overall utility of both the rubric and the assessment process. Though results from the direct assessment were ultimately mixed, the process was found to be both meaningful and substantive for project teams. The lessons learned shed further light on how essential interdisciplinary dialogue is for faculty engaging in direct assessment of shared student learning outcomes. As one team leader reflected via e-mail, “I believe that [team discussion around rubric assessment] was the most powerful cross-content professional development that I [have] experienced!”

 

Reference

Finley, Ashley, and Tia McNair. 2013. Assessing Underserved Students’ Engagement in High-Impact Practices. Washington, DC: Association of American Colleges and Universities.


Ashley Finley, associate vice president of academic affairs and dean of the Dominican Experience, Dominican University of California; former senior director of assessment and research, AAC&U
