Peer Review

Assessment of High-Impact Practices: Using Findings to Drive Change in the Compass Project

The role of assessment in the AAC&U LEAP States’ project, Give Students a Compass: A Tri-State LEAP Partnership for College Learning, General Education, and Underserved Student Success (known as the Compass project), necessarily spans and connects inquiry at the campus, system, and national levels. The Compass project provides the opportunity to work across multiple levers of assessment and to engage multiple mechanisms for change in states and state systems. It is possible to draw on campus-level assessment from individual campuses and to scale up by aggregating findings to gain a richer understanding of effective practices and outcomes at the system and national levels. Working in reverse, evidence aggregated at the system and national levels can also be disseminated back to campuses for further development of efforts on the ground. It is therefore possible to develop a substantial and complementary evidence base that flows across multiple tiers of local (campus-level), system (what might be considered “meta-level”), and national work. Additionally, engaging assessment levers that extend beyond campuses, while still providing broad evidentiary support for campus goals, can assist campus leaders in making the case for institutional change.

Below is an example of how evidence can be gathered and strengthened using multiple levers of assessment to facilitate discussion at the campus, system, and national levels. Using campus-level assessment of high-impact practices (HIPs) drawn from the National Survey of Student Engagement (NSSE), this article reports on trends in student participation in HIPs and the associated learning outcomes, analyzed first by aggregating data at the state-system level and then by aggregating system-level data to inform national findings. As discussed below, these findings, combined with the distribution of participation rates, suggest that campuses in these state systems are expanding first-year programming such that entering students stand to benefit widely from high-impact practices at the start of college.

Scaling-Up: Using Campus-Level Data to Examine Broad Trends

In 2009, each campus participating in the NSSE in the California State University System, the University of Wisconsin System, and the Oregon University System provided authorization for NSSE to conduct separate analyses of its 2006/2008 data for the advancement of Compass work. The results combine data from thirty-nine campuses (twenty in the California State University System, thirteen in the University of Wisconsin System, and six in the Oregon University System), with an average estimated response rate of 27 percent. Summary reports from these analyses were given back to the individual campuses and system offices and were also shared with AAC&U. Specifically, the NSSE reports focused on participation rates and self-reported outcomes associated with student involvement in six high-impact practices: learning communities (first year), service learning (first year and senior year), student–faculty research (senior year), study abroad (senior year), internships (senior year), and the senior culminating experience or capstone. However, because these reports did not combine data across all three state systems, the following supplemental analysis was developed to demonstrate a way to analyze national trends: data aggregated at the system level were examined, and averages across systems were then computed to discern patterns. This supplemental analysis across the three state systems provides findings on (1) whether, and to what degree, HIPs affect students’ self-reported outcomes; (2) how effects on students’ self-reported outcomes vary across types of HIPs; and (3) the rate of participation in HIPs among students from underserved populations compared with students from traditionally more advantaged backgrounds. Underserved student populations here refers to students from historically underrepresented racial/ethnic minority groups as well as transfer, first-generation, and part-time students.
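
To make the scaling-up step concrete, the following is a minimal sketch, in Python, of how campus-level NSSE summaries might be rolled up to the system level and then averaged across systems. The data, column names, and respondent-weighted averaging are illustrative assumptions, not the procedure NSSE actually used.

```python
# Illustrative sketch only: the numbers, column names, and weighting scheme are
# hypothetical; NSSE's actual aggregation procedure may differ.
import pandas as pd

# One row per campus: system, number of respondents, and the campus-level
# participation rate (percent) for a given high-impact practice.
campus_summaries = pd.DataFrame({
    "system":             ["CSU", "CSU", "UW", "UW", "OUS", "OUS"],
    "respondents":        [850, 1200, 640, 910, 400, 520],
    "participation_rate": [14.0, 17.0, 13.0, 16.0, 15.0, 12.0],
})

def weighted_rate(group: pd.DataFrame) -> float:
    """System-level rate, weighting each campus by its respondent count."""
    return (group["participation_rate"] * group["respondents"]).sum() / group["respondents"].sum()

# Step 1: aggregate campus data to the system level.
system_rates = campus_summaries.groupby("system").apply(weighted_rate)

# Step 2: average the system-level rates to look for cross-system patterns,
# mirroring the "averages across state systems" reported in table 1 and figure 1.
cross_system_average = system_rates.mean()

print(system_rates)
print(f"Average across systems: {cross_system_average:.1f}%")
```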

 

Table 1. Level of Statistical Significance of HIPs on Student Outcome Measures (Averages across California, Oregon, and Wisconsin State Systems Data)

                                 Deep        Gains in     Gains in Personal     Gains in
                                 Learning    General      and Social            Practical
                                             Education    Development           Competence

First Year
  Learning Communities           ***         ***          **                    **
  Service Learning               ***         ***          ***                   ***

Senior
  Study Abroad                   **          *            **                    No Effect
  Student-Faculty Research       ***         ***          ***                   ***
  Service Learning               ***         ***          ***                   ***
  Internship                     ***         ***          ***                   ***
  Senior Culminating Experience  ***         ***          ***                   ***

*p < .05, **p < .01, ***p < .001

How Effective Are High-Impact Practices?

First, looking at collective outcome effects for the selected high-impact practices across all three state systems, one finds substantial evidence that these practices have a strongly positive impact on self-reported outcomes. Nearly every HIP examined is associated with significant gains, as defined by NSSE, on all four self-reported outcome measures: deep learning, gains in general education, gains in personal and social development, and gains in practical competence. Although study abroad experiences have some significant benefits for student outcomes, these experiences were not found to be as powerful, relatively speaking, as other types of HIPs. Table 1 closely mirrors Kuh’s (2008) findings on HIPs, also drawn from NSSE data, including the comparatively small impact of study abroad experiences.
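
For readers curious about the kind of test that lies behind the significance stars in table 1, here is a hedged sketch using simulated data. The simple two-group comparison is an assumption made for illustration; the analyses underlying the NSSE reports may use different models and controls.

```python
# Hedged illustration of a significance test for one HIP on one outcome scale.
# The data are simulated; this is not the model NSSE actually estimated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated self-reported "deep learning" scale scores (0-100) for seniors
# who did and did not complete an internship.
participants    = rng.normal(loc=68, scale=12, size=500)
nonparticipants = rng.normal(loc=64, scale=12, size=1500)

t_stat, p_value = stats.ttest_ind(participants, nonparticipants, equal_var=False)

# Map the p-value onto the star notation used in table 1.
stars = "***" if p_value < .001 else "**" if p_value < .01 else "*" if p_value < .05 else "n.s."
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({stars})")
```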

Are High-Impact Practices Equally Effective?

Second, although high-impact practices are collectively effective, they are not necessarily uniformly effective. Because HIPs, as a group, are so convincingly efficacious, perhaps the more illustrative way to compare the relative impact across different types of HIPs is to examine the size or magnitude of effects for particular high-impact practices across student outcomes. Figure 1 indicates that across nearly all of the HIPs examined (except study abroad experiences), these practices have the most sizeable impact on students’ deep learning, followed by students’ gains in personal and social development and gains in practical competence, respectively. Conversely, learning communities, student–faculty research, and service-learning experiences for both first-year students and seniors had the smallest impact on gains in general education. Overall, service-learning experiences demonstrated the greatest impact on each of the four outcomes measured, regardless of whether the student was in the first or senior year.
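
As an illustration of what the “size or magnitude of effects” can mean here, the sketch below computes a standardized mean difference (Cohen’s d) on simulated data. Both the metric and the numbers are assumptions on my part; figure 1 may rest on a different effect-size measure.

```python
# Minimal sketch of a standardized effect size (Cohen's d) for one HIP on one
# outcome scale, using simulated data. Treat as illustrative only.
import numpy as np

def cohens_d(treated: np.ndarray, control: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    pooled_var = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    return (treated.mean() - control.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
service_learning    = rng.normal(70, 12, 800)    # simulated participant scores
no_service_learning = rng.normal(64, 12, 1200)   # simulated non-participant scores

print(f"Cohen's d = {cohens_d(service_learning, no_service_learning):.2f}")
```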



Figure 1. Size of Effect of HIPs on All Measured Outcomes (Averages across California, Oregon, and Wisconsin State Systems Data)

finley11_1.gif

An overall sample size could not be calculated from the available data. However, the aggregate sample size for each HIP category across state systems could be calculated. Accordingly, the number of students reporting for each HIP is as follows: learning communities=2,357; service learning (FY)=6,237; service learning (SR)=11,903; student/faculty research=4,059; senior capstone=7,189; internship=11,373; study abroad=3,146.

Who Participates in High-Impact Practices?

In terms of race, patterns of relative advantage and disadvantage in HIPs participation are less clear. Figure 2 reflects average participation rates in learning communities and service-learning experiences by race for first-year students across the three state systems; figure 3 reflects average participation rates by race for seniors across selected HIPs. Overall, both charts suggest that rates of participation in the selected HIPs, for first-year and senior students alike, do not vary dramatically across racial categories, or at least not in ways that might be expected (see figures 2 and 3). Figure 2 shows nearly identical rates of participation across racial groups for first-year students in learning communities (between 13 and 17 percent). White students, however, have the lowest reported rate of participation in first-year service-learning experiences (32 percent) among the groups analyzed, a rate 10 percentage points below that of black students. Similarly, the average participation rates for seniors across the HIPs analyzed indicate that while white students have among the highest percentages of participation in these HIPs, they are not the highest (see figure 3). In fact, senior Hispanic students report the highest rates of participation among racial categories in three of the five HIPs examined: service learning (55 percent), student/faculty research (20 percent), and internships (54 percent). In contrast, black students have the lowest rates of participation in four of the five HIPs examined, with markedly lower rates of involvement in student/faculty research (10 percent) and study abroad experiences (9 percent).

The lack of consistent patterns of racial difference in HIPs participation across first-year and senior students in this analysis largely echoes results previously reported by Kuh (2008). However, it is instructive to examine the differences in participation by race that emerge when moving from first-year students to seniors. Across these state systems, first-year students are experiencing HIPs at about the same rate across racial groups or, as is the case with service learning, students in racial minority groups are experiencing them even more often. By the time students are seniors, however, more familiar trends emerge: the participation of black students falls behind nearly every other group except in service learning. Unexpectedly, senior Hispanic students have levels of participation comparable to, or higher than, those of white students in several of the HIPs examined.

Based on these findings, there is promising news for first-year students, given the limited national data on the impact of HIPs on measures of student success for students of color. Using a large, national sample of NSSE data, Kuh (2008) found that engagement in high-impact activities was strongly correlated with higher first- to second-year retention. Kuh (2008) also found that the likelihood of returning for the second year was even greater for Hispanic students participating in these activities than for white students. In the same study, Kuh (2008) similarly found that the positive association between participating in high-impact activities and having a higher GPA at the end of the first year was even greater for black students than for white students. The implications of these amplified effects, combined with the distribution of participation rates, suggest campuses in these state systems have done a good job of expanding first-year programming such that entering students stand to benefit widely, and equitably, from high-impact practices at the start of college. However, senior-year indicators suggest that greater intentionality is warranted to ensure that all students graduate with similar levels of participation.


Figure 2. Participation Rates for First-Year Students in Selected HIPs by Race

finley11_2.gif

Because of the variation in sample sizes across race categories, only a descriptive comparison of participation rates is presented. These data should not be used to make statistical inferences about a generalized population of students participating in HIPs.

Using System and National Findings to Motivate Change on Campuses

How can the preceding system- and national-level analyses be used on campuses to generate knowledge for the advancement of local campus work? As both table 1 and figure 1 indicate, there is strong evidence that HIPs are broadly effective for students as a whole. Yet figure 2 shows that these effective practices are not reaching the majority of students. If these practices are not reaching the majority of students, whom are they reaching? Figures 2 and 3 provide a more nuanced picture of participation in HIPs across underserved student populations. Transfer and first-generation students appear to be the most consistently lacking in HIP participation, compared to the other underserved groups analyzed, in both their first and senior years. Racial comparisons in HIPs participation suggest a fair degree of equity in participation during students’ first year. This distribution, however, grows more uneven by the senior year, particularly for black students.

Campuses can help to advance knowledge of participation in, and access to, engaged learning experiences by interrogating NSSE data and other forms of institutional data both more fully and at a finer grain. This work would provide critical data sources for better understanding the ways in which students from all backgrounds are affected by HIPs and the degree to which these practices are inclusive of all students. Specifically, at the national, system, and local campus levels, we lack clear knowledge of students’ rates of participation in HIPs. This analysis presents findings for only six selected HIPs; many more exist on campuses, including writing across the curriculum, collaborative projects, and first-year seminars. Additionally, although analyzing HIP participation rates using NSSE data is useful, it is unclear whether students who participated in HIPs are also more likely to respond to NSSE, which would cause these participation rates across student groups to be overestimated. Those on campuses will need to work more intentionally to track and record where HIPs exist on campus, which students are involved in those experiences, and how to effectively capture student responses to these experiences.
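
As one illustration of the kind of campus-level tracking described above, the sketch below joins a hypothetical registry of HIP enrollments to student records and disaggregates participation rates. All table and column names are invented for the example and are not drawn from any actual campus system.

```python
# Hypothetical sketch of campus-level HIP tracking and disaggregation.
import pandas as pd

# Invented student records with two underserved-student markers.
students = pd.DataFrame({
    "student_id":       [1, 2, 3, 4, 5, 6],
    "first_generation": [True, False, True, False, True, False],
    "transfer":         [False, False, True, True, False, False],
})

# Invented registry of which students enrolled in which HIPs.
hip_enrollments = pd.DataFrame({
    "student_id": [1, 2, 2, 4, 6],
    "hip": ["learning_community", "service_learning", "internship",
            "service_learning", "undergraduate_research"],
})

# Flag whether each student participated in at least one tracked HIP.
students["any_hip"] = students["student_id"].isin(hip_enrollments["student_id"])

# Disaggregate participation rates by each underserved-student marker.
for group in ["first_generation", "transfer"]:
    rates = students.groupby(group)["any_hip"].mean().mul(100).round(1)
    print(f"Participation rate by {group} status (%):\n{rates}\n")
```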

In addition to lacking data on participation rates in HIPs, particularly among underserved students, we also lack information on how HIPs affect outcomes for the underserved students who engage in them. Though table 1 and figure 1 illustrate strong effects on student outcomes, these findings treat students as a single, homogeneous group. The reality, however, is that the heterogeneity of students across race, ethnicity, and socioeconomic class creates different opportunities, paths, and experiences of learning on campuses. To neglect these differences by not disaggregating data is not just to assume students are a single group; it is effectively to assume all students are white, non-transfer students whose parents graduated from college. But the assumption of a white majority, never mind the other descriptors, will soon cease to be the demographic reality of higher education. Assessment must begin now to be more wholly reflective of the success and inclusion of all students.
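
To show what disaggregating outcome data might look like in practice, the sketch below fits a simple model with an interaction between HIP participation and first-generation status on simulated data. The specification, variables, and data are assumptions made for illustration, not an analysis from the Compass project or NSSE.

```python
# Illustrative sketch of disaggregating an outcome effect via an interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "hip": rng.integers(0, 2, n),        # 1 = participated in a HIP
    "first_gen": rng.integers(0, 2, n),  # 1 = first-generation student
})
# Simulate a self-reported gains scale in which the HIP "boost" is larger
# for first-generation students (the kind of amplified effect discussed above).
df["gains"] = (60 + 4 * df["hip"] + 3 * df["hip"] * df["first_gen"]
               + rng.normal(0, 10, n))

model = smf.ols("gains ~ hip * first_gen", data=df).fit()
# The hip:first_gen coefficient estimates how much the effect of participation
# differs for first-generation students relative to their peers.
print(model.params.round(2))
```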

A final caveat of this research, as with all current research examining HIPs, is that there is no means by which to account for best practices within high-impact practices. As figure 1 indicates, though HIPs appear to have widely positive effects on students’ self-reported outcomes, the extent or magnitude of these effects does vary across types of HIPs. Further, experiences are likely to vary even within the same high-impact practice (e.g., different service-learning or undergraduate research experiences). Thus, there is likely to be a difference between delivering HIPs on campus and delivering them well. As noted by Kuh (2008), “There is growing evidence that—when done well—[high-impact practices] appear to engage participants at levels that elevate their performance across multiple engagement and desired outcomes measures” (14). As our collective knowledge of HIPs advances, we will need to look more closely at the qualities that inform Kuh’s cautionary three words: “when done well.” Integrative and multitiered assessment, like that developed through the Compass project, can be a valuable part of this discovery. By assembling data across campuses, systems, and the national level, multiple constituencies can contribute to a better understanding of how HIPs vary in their impact on student outcomes and how these results can be maximized through effective practice.

It will be a challenge to learn all we can about HIPs: what makes them work, for whom, and where. Campuses play a vital role in assessing this work, both through the assessment of students’ perceptions of these experiences and through direct evidence gathered from the products of student learning. But the Compass project demonstrates that campus assessment efforts reach their full potential when they can be shared and aggregated at the system and national levels. At these levels, we can build on local practices to better examine the large-scale trends that emerge from bringing many sources of data together. By moving across these levels, knowledge can be built faster, more efficiently, and with a greater likelihood of reaching a successful end. If the journey is to know all we can about student learning, and about which students are learning, assessment cannot move at only one speed.


Figure 3. Participation Rates (%) for Seniors in Selected HIPs by Race

finley11_3.gif

Reference

Kuh, George D. 2008. High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. Washington, DC: Association of American Colleges and Universities.


Ashley Finley is the senior director of assessment and research at the Association of American Colleges and Universities.
