by Steven Harmon Wilson, Dean of Academic Assessment, Tulsa Community College
Among the DaVinci Institute’s purposes is to recommend ways to leverage the state’s ample intellectual, cultural, and social resources to promote creativity in Oklahoma. To this end, I was recently asked to work with some of my colleagues on the board to develop a plan to assess the state of the state’s creativity education. I encourage regular readers of this blog to stay tuned for upcoming developments on this project. In the meantime, I would note that, as far as I can tell, it was the word “assessment” in my title that qualified me to participate in this particular DaVinci Institute effort. Yet, assessment is actually fairly new to my professional profile, and that fact tempts me to share the following thoughts regarding the prominent role that assessment currently plays in higher education. Consider the following verse from T.S. Eliot’s Four Quartets:
“We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.”
This verse was often used by my dissertation director as an introduction to his lectures. Usually, he offered these lines as a defense of the ongoing study of history, an academic discipline he practiced for nearly fifty years. Yet, the first three of the four lines might just as easily be an objection from the faculty at any college or university when an administrator proposes an initiative, procedure, or goal. Stop me if you’ve heard any of these before: “we tried that, years ago,” “we’ve been down that road,” and “here we go, again.”
Sometimes, it is fair to charge that administrators tend to cover the same old ground, again and again. But sometimes a road we’ve been down before was the right path to take yesterday, is today, and may well be tomorrow. All too familiar or not, the well-traveled road isn’t inevitably a rut, and the familiar destination isn’t always a dead-end street. To justify making yet another trip, however, administrators will have to promise that “this time, it will be different.” Few veterans of higher education reform and reorganization projects will believe them.
The trick to being a pioneer on the road already traveled (with apologies to Robert Frost), as Eliot suggests, is to see the terrain with fresh eyes and new understanding, as if to “know the place for the first time.” One way of accomplishing this is to bring in someone with fresh eyes.
At my institution, I am the guy with the new eyes. Less than a year ago, Tulsa Community College re-allocated resources and re-assigned some responsibilities to create a new position, Dean of Academic Assessment, which I took up in August 2012. It’s a wholly new position for TCC, but the topic is old. Assessment, for those readers who are not in academia, is one of the very well-trodden roads. The modern wave of student learning assessment (faculty might say “the hysteria to measure”) emerged a few decades ago, and it has been a rising tide ever since (coming to “swamp us,” the faculty might say), but the social-scientific impulse to gather data about learning and teaching goes back at least a century, part of the Progressive era’s great democratization of education.
Yet, the movement to assess student learning has gained momentum in the last decade. In part, this was a reaction to rising government interest in accountability for the public dollars spent on higher education (the impulse manifested in K-12 a few years earlier, of course, most notably through No Child Left Behind). To some extent, it was also an answer to private employers’ growing concerns about apparently poorly educated future workers. As a result of these forces, accrediting bodies, including our own, the Higher Learning Commission, are seeking proof that students are actually learning the things that colleges and universities claim to be teaching them.
Yes, assessment is an old road, but one with new twists and surface hazards. Those of you who work in higher education, however, know all too well that assessment is the road that few, if any, seem to want to travel, now, then, or ever. Why is that? Because many faculty feel insulted and threatened by the imposition of student learning assessment from outside and above. Faculty worry about potential standardization because they wish to be creative in the classroom. They resent imposed control because they value academic freedom. And, naturally, faculty resist simply because “we tried that,” “we’ve been down that road,” and “here we go, again.”
Assessment? “We’ve been there.”
But, the fact is, I haven’t been there before, despite having worked in higher education for a dozen years. Before becoming Dean, I spent six years as an associate dean of liberal arts at one of TCC’s four campuses, and before that, I was a full-time faculty member at a state university in Texas. (I was an engineer for a decade, too, but that is the subject for another blog post.) Mostly, it’s been good timing. Assessment is and by right ought to be a responsibility of the faculty, but I don’t recall doing much of it when I was a history professor, and I left my faculty position to come to Tulsa just before the current assessment and accountability bug hit Texas.
At the time, TCC’s assessment of student learning was focused on developing course-embedded assessments of general education goals. These classroom-based assessments were managed by individual faculty, though in hindsight it was clear that meaningful assessment was not taking place regularly, and that no useful data was being collected, collated, or connected. There was little to no communication among faculty, even among those teaching the same courses. Further, it was unclear whether any assessment was taking place in online instruction at all. Year after year, faculty were given a mandate to get from here to there as best they could and were asked to report whatever they found. Every few years, faculty were handed a few online tools and some new links to click, but they were given no training in their use. They weren’t provided with accurate maps or compasses.
As associate dean, I was expected to ensure that my faculty were assessing student learning, but, respecting their autonomy in the classroom and their expertise in their own disciplines, I mostly took their word for it. Apparently, so did my colleagues in other academic divisions and on the other campuses. As a result, many of the faculty wandered in the wilderness; perhaps not for forty years, but it must have felt that long to the veteran instructors. They received no feedback, and they didn’t use their results for improvement, because no one asked them to do so. Everyone seemed busy, and everything seemed fine, so why worry about assessment?
Looking back on those days, it wasn’t quite the higher education variation of the old Soviet-era joke—”so long as they pretend to pay us, we will pretend to assess”—but it was perilously close.
This situation had to change, and it finally did after January 2009, the date of TCC’s most recent re-accreditation site visit. We did well on most measures, as we expected, but the HLC reviewers put the College on notice with the following comments regarding TCC’s assessment processes: “The absence of measurable program and learning outcomes assessment is a barrier to making effective assessment of student learning possible. Without the implementation of an intentional, systematic, and overarching assessment plan, the college will be unable to evaluate the acquisition of learning outcomes by the conclusion of the students’ program of studies.”
Ouch. In response to this critique, and to avoid a future spanking, TCC committed to developing a wholly new, comprehensive assessment plan. Developing and implementing this plan has taken time, and the effort was met with much of the skepticism, resistance, and resentment that I described above. Yet, I can honestly assert that “this time, it was different.”
What’s been different? I wish I could say it was the arrival of—well, brilliant me. But it wasn’t.
Instead, it was the advent of communication, commitment, and follow-through on the part of the College leadership. And by leadership, I do not mean only the intervention of the upper administration (although that has been key). Rather, from 2009 through 2012, the assessment push was, well, pushed by the members of the Learning Effectiveness Sub-Council (LESC). The LESC included faculty from all four campuses, representing both university-transfer and workforce areas. It also included Associate Deans (I was one of these, getting my first real exposure to the modern world of assessment), the director of Planning and Institutional Research, Administrative Deans and campus Provosts, and the Associate Vice President of Academic Affairs as an ex officio member. Few of the members boasted backgrounds in assessment per se, but all were educators, all were committed to student learning, and all knew that the College finally had to get assessment right.
The LESC had already been around in one form or another for many years, and yes, the College had been down this road before, but this time was different. We were determined to see with new eyes. More to the point, we pretended that we had not been down the road before. We started from scratch, and made it up as we went along. There is nothing more pioneering than that.
Development of a new plan was a gradual process, necessitating a shift in TCC’s institutional measures of learning. The LESC spent the first year developing, with extensive faculty and staff input, a college-wide learning outcomes plan that included a cyclical system for gathering and disseminating assessment results and program-improvement decisions, as well as tools for developing and reporting on an action plan for improvement.
Unfortunately, the LESC faced early challenges that arose from TCC’s organization as a multi-campus system. TCC has one program associate dean (AD) per campus; for example, there are four associate deans for business, or psychology, or mathematics. Yet, there is no college-wide discipline dean with oversight for any area (deans of instruction were eliminated a decade ago). During the Spring 2011 semester, ADs convened faculty meetings, responded to faculty questions and concerns, and solicited the support of the LESC as needed to implement the assessment plan. LESC members also conducted campus meetings throughout the year, providing an open venue for faculty, staff, and administrators.
The academic units collaborated college-wide and achieved the following: articulated common discipline/program goals; identified the skills, knowledge, and abilities necessary to demonstrate progress toward those goals; chose one or more courses that require these skills, knowledge, and abilities; selected a common assessment activity that measures them; agreed upon common criteria to evaluate student learning against stated objectives; designated the expected level of proficiency; and estimated the percentage of students expected to demonstrate proficiency.
Because this was a new process, uncertainty arose among the players as to who was responsible for assessment activities, how to report results, and how to use the data. Debates and critical conversations (not always constructive) also occurred at many faculty meetings. In some cases, faculty united in thought and agreement; in others, agreement to disagree carried the day. Yet even the negative views contributed to the positive result. For example, one benefit of this focused effort, backed by administrative support, is that we have seen greater communication among faculty in the same disciplines but on different campuses. This has promoted more effective assessment and much greater consistency in delivery.
Meetings and training opportunities have focused on conversations about student learning outcomes and how to write and measure them. Finally, it has been established that the cycle of data gathering, analysis, and planning for improvement will be completed on an annual basis. By the end of AY 2011, 90% of the academic divisions of the College had determined program outcomes and begun gathering data on student learning, providing the LESC with summaries and analyses of the results.
Another success came through a major shift in faculty expectations. The “dead” week before spring classes begin (a time when faculty were formally on contract but had never been required to appear on campus) was used to review and reinforce TCC’s commitment to assessment. Specifically, the LESC organized a mandatory “Learning Effectiveness and Planning” (LEAP) meeting for the first week of January 2012. Having already determined program outcomes (with methods of assessment) and gathered data by the end of the fall semester, faculty were asked on LEAP Day to meet with disciplinary colleagues, review the data, analyze results, and propose improvement plans. The LEAP event was met with some resistance but proved positive.
LEAP Day 2012 marked the end of the first complete (that is, full-year, all-hands) cycle of student learning outcomes assessment under the new TCC plan. The results were promising, but faculty in various disciplines expressed continued confusion over the scope and purpose of this general assessment project at TCC. Hence, the College found it useful to provide ongoing faculty support for assessment practices, data analysis, and program evaluation. Most notably, as a result of the proposals that the Faculty Development Sub-Council and the LESC presented to the Academic Council, the College approved the appointment, in the summer of 2012, of a Faculty Director for a newly created Center for Excellence in Learning and Teaching (CELT). The CELT director works with four Campus Coordinators (one per campus; the title, spoken aloud, comes out as either “Selts” or “Kelts”), faculty members interested in developing research-based practices.
The CELTs are now working with another new cadre of faculty mentors, four Student Learning Fellows (or “Selfs”), who are undergoing training that will enable them to provide ongoing support in developing discipline and program assessment plans. The College also provides three credits of reassigned time per semester for the SLF on each campus. It is worth noting that the College created all of these faculty-filled mentorship positions months before deciding that a Dean of Academic Assessment ought to be hired to oversee the assessment project. The result has been the beginnings of a culture change at TCC: from a culture of data denial to one of applying evidence.
Since August 2012, my SLFs and I have traveled to the HLC’s assessment workshops, offered “How’s Our Progress?” forums (one on each campus) during the Fall 2012 semester, and helped coordinate a second LEAP Day in January 2013. This event focused on the need to write solid student learning outcomes in order to obtain meaningful assessment results. The heartening thing is that, after two successful LEAP Days, and the promise that LEAP will be an annual spring event, I no longer hear comments like “we’ve been down this road before.” Instead, more faculty are asking me, “what’s on the agenda for next year?”
Progress is being made because faculty and administrators have talked frankly about assessment and how best to approach it. Many (but not all, not yet) of the skeptics have joined these conversations. General understanding of the vocabulary and processes of assessment has grown. Faculty resistance to the assessment of student learning at the program/discipline level has lessened. This is a huge cultural shift. As T.S. Eliot might say:
“[We have arrived] where we started
And know the place for the first time.”