Assessment and Stories We Tell Students

Do new, first-year college students need to study 2-3 hours per credit hour each week to be successful? The short answer is no. Research tells us that most first-year students spend about one hour or less per credit hour studying and preparing for class and do just fine, depending on your definition of “fine,” of course.

This is the central premise of Academically Adrift. The idea is that most students see college as a pathway towards economic security or a rite of passage into adulthood. Thus, college students invest their time in activities that have little to do with learning.


If the premise of Academically Adrift is accurate, then traditional assessments, like grades, standardized tests, or degrees, are not assessing learning, but probably other things, like managing the college experience or skills related to persistence.

Data from a variety of sources, including the NSSE, the CLA, and grade-inflation studies, show that students are still getting good grades and graduating with less effort, at least as measured by hours spent studying and preparing for class.

Telling most first-year students they need to study 2-3 hours per credit hour to be successful in college isn’t accurate and is probably harmful. (For a typical 15-credit load, that works out to 30-45 hours of studying a week, on top of class time.) There are two problems with this kind of messaging:

  1. It exaggerates how much time academically competent, and even successful, students actually spend studying. Communicating an unrealistic standard reinforces the legitimacy of peers and other sources of information over more legitimate ones (like advisors and faculty).
  2. It sets up time as the constant and learning as the variable. According to the flipped teaching model, time should be the variable and learning the constant. A better strategy would be to communicate what students will do and/or the outcomes of their college experience.

How does assessment inform what we should tell new first-year students? First-year students should receive two types of messages: one that legitimizes the expertise of faculty and student advising staff, and another that de-emphasizes a fixed-intelligence mindset. They should receive messages like these:

Message 1: “You get out of college what you put into it. If you want to study 15 hours a week, and you’re fine with a 2.5-3.0 GPA, then go for it. Keep in mind, though, that your effort will need to increase as you progress through college, and in particular your major.”

Message 2: “You may be disappointed, in spite of all your hard work. Keep in mind, though, that intelligence is not fixed. Frustration with learning something new and learning from setbacks are all natural parts of the learning process. Utilizing the services we provide can help you grow and become a more competent and efficient learner.”

If I remember anything from my first-year orientation, I remember two messages: “You will need to study 2-3 hours per credit to be successful,” and “Look to your left and look to your right. One of you won’t make it.” It should go without saying that neither message was very helpful.


Bunking & Debunking Altucher’s 15 Essential Skills They Don’t Teach in College

According to one internet blogger, there are 15 essential skills for making money. They include the usual things: networking, motivation, creativity, etc.

While the skills are fine (who can disagree with creativity?), the claims about them are dubious and have almost no evidence to back them up. The two claims are:

1. You don’t need to go to college to get the 15 essential skills.

2. Colleges aren’t teaching these skills (or, at least, students aren’t learning them).

Claim 1: You don’t need to go to college to get the 15 essential skills. 

One can make a good argument that you don’t need to go to college to learn. Traveling, reading War & Peace, and conducting home experiments can all take place outside of a classroom.

The claim is dubious from an earnings perspective, however, because skills don’t translate into higher earnings unless a credential is attached to them. There’s simply no evidence to support the blogger’s claim.

Sure, there are plenty of successful people who don’t have degrees. And almost all of them come from wealthy backgrounds. Steve Jobs and Bill Gates didn’t get degrees, but they did have wealthy parents and access to college. Most people don’t have the time or money to be unemployed and tinker in their parent-subsidized garages. A college degree is a less risky bet.

In today’s U.S. economy, the evidence is pretty clear: family background and credentials matter more than skills. Whether skills should matter more is another conversation. If you want to make more money, in general this is what you need to do:

  1. Be born to rich parents.
  2. Get a college degree. Think college is too expensive and that it’s not worth it? Think again. The rate of return from getting a degree is still higher than from not going to college at all.
  3. Be mobile.

Skills matter, but credentials trump skills almost all the time. And while it stinks that wealthy kids get a huge head start, education still provides a pathway to a credential and higher earnings for most people.

Claim 2: Colleges aren’t teaching these skills (or students aren’t learning them).

There is little evidence to support the second claim, and a lot that says otherwise. Searching an academic research library for the skills the author claims colleges don’t teach, using the phrase “college learning outcomes _____ skills” (see the sketch after the list below), revealed the following numbers of academic studies:

  • college learning outcomes presentation skills: 783,493 research articles.
  • college learning outcomes quantitative literacy skills: 223,947 research articles.
  • college learning outcomes philanthropy and civic engagement skills: 80,664 research articles.
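For the curious, here is a minimal sketch of how those search strings were formed. The template and skill names come from the list above; the script itself is my own illustration (the post describes a manual library search, not a program):

```python
# Hypothetical sketch: generate the library search strings used above.

skills = [
    "presentation",
    "quantitative literacy",
    "philanthropy and civic engagement",
]

for skill in skills:
    # Fill the post's template "college learning outcomes _____ skills"
    print(f"college learning outcomes {skill} skills")
```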

Altucher has some good points about learning, but investment decisions should be based on evidence and realistic outcomes, not anecdote and opinion.

If your goal is to make more money, your best bet is to learn skills while in college and not out of it. And get a credential or degree along the way.


Illinois High School Graduates and Out-of-State Colleges

The map below shows where Illinois high school graduates enroll at public (blue) and private (red) four-year universities.

[Interactive Tableau map]

The first story is that a lot of Illinois high school graduates leave Illinois for college. The most, in fact, of any state except New Jersey.

This matters because students who leave Illinois for college are less likely to return and work in Illinois. The net economic impact of losing a student to another state is about $225,000 per student over the course of a lifetime (1) in income tax revenues alone. This does not include the negative impact on the general economy in terms of lost consumption and spending.
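Footnote (1) says this figure is the $162,000 estimate from 2000, adjusted for inflation. Here is a minimal sketch of that kind of adjustment, assuming approximate CPI-U annual averages (the CPI values are my assumption; the post doesn’t specify which index or end year it used):

```python
# CPI-based inflation adjustment, per footnote (1). The CPI values are
# approximate U.S. CPI-U annual averages and are assumptions on my part.

CPI_2000 = 172.2
CPI_2015 = 237.0

impact_2000 = 162_000  # original estimate, in year-2000 dollars
impact_now = impact_2000 * (CPI_2015 / CPI_2000)

print(f"${impact_now:,.0f}")  # about $223,000, close to the $225,000 cited
```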

Companies lose in terms of their ability to attract an educated and skilled workforce. Taxpayers lose the investment they made in students over 13 years of K-12 education. Other states win because they are able to develop a highly educated workforce with little investment of their own.

The second compelling story is that a lot of students leave Illinois for public universities. In the last 20 years, enrollment among Illinois high school graduates at in-state four-year public universities has remained relatively flat, but enrollment at out-of-state four-year public universities has nearly doubled. This should send a clear message about what Illinois residents think about the state of public higher education in Illinois.

Strategies to keep Illinois residents at in-state universities include:

  • Mission differentiation in the four-year public sector, creating institutions with their own unique niche (small, liberal-arts focused; technical or engineering focused; large university focused on research; etc.). Institutional diversity provides more options to Illinois residents. A one-size-fits-all approach precludes access to distinctiveness and value.
  • More certainty in the higher education budget for direct institutional subsidies and student financial aid.
  • Financial aid incentives for in-state residents. However, since Illinois residents are presumably already paying more to attend public institutions in other states, economic incentives may not be effective in retaining people who are not sensitive to price and who focus more on perceived educational quality.
  • A final strategy would be to recruit more out-of-state students or out-of-state college graduates to Illinois. If Illinois residents are unwilling to invest in strategies that would keep talented high school graduates at in-state colleges, this option could be less expensive. It would probably require, however, more public-private coordination and cooperation, expertise in economic development and education, and leadership.
(1) Adjusted for inflation from the $162,000 figure in 2000.

End Planning and Start Storytelling with Learning Outcomes

Perfection of means and confusion of goals seem, in my opinion, to characterize our age (Einstein).

Imagine the ideal student 10 years after your class. What do you want them to know? What are your hopes for them? What do you imagine they will be doing?

We rarely think about what our students will be like in the future. Most of us are focused on what we are doing now.

Assessment professionals (like me) and accreditation people will usually say: “Begin with what you care about and what you find meaningful.” And then they hand you a guidebook and tell you the following:

“Make sure the outcome is measurable (or SMART). Make sure to fit the outcome into a cycle. Use the right verbs. Align the learning outcomes with programmatic, departmental, and college goals.”

Basically, all of the meaningfulness is sucked out by the process that’s handed to you. Why do we make assessment painful or mind-numbing? It doesn’t have to be about arbitrary standard-setting, bureaucratic control, or an anxiety-inducing exercise in verb selection, but that’s how a lot of people feel about it. If you read assessment books or look at online college guides, they’re all pretty much the same. Best practices are encouraged because they are familiar. This rarely leads to anything new.

There are people who hate assessment, but I’ve never met anyone who doesn’t care about learning. Writing outcomes that are meaningful to you and accountable to others is possible. If you tell your classroom’s or program’s story well, you shouldn’t really have to worry about accountability.

A creative and non-conforming approach that puts more focus on storytelling, as opposed to calculating and planning, can perhaps be a better and more engaging way to write learning outcomes (1).

Step 1. Start with Questions

Rather than thinking of writing learning outcomes as a planning exercise, think of it as storytelling. Margot Leitman provides a good method for writing stories. Start with questions (2):

I would like to know ____________ about my program.
I would like to know ____________ about our students.

You may get pushback from methodological fundamentalists, accreditation reviewers, and strategic planners, but don’t worry about them for now. No one from the evaluation police is going to arrest you for being different. Here are some more ideas:

We would like to know __________ about our students.
We are curious about  _______________.
Our students seem to be really good at ______________.
Our students are really scared of _________.
Our students pretend to care about __________.
Our students worry the most about __________.
We can’t believe our students think ___________.
Our students’ biggest regret is ___________.

Be practical. Focus on what you have at least a moderate level of control over. Finding a soul mate and paying off student loans may be the most anxiety-inducing things students deal with, but there isn’t really a lot you can do about them. The goal is utility.

Here are some story ideas from an imaginary environmental sustainability program:

I would like to know how knowledgeable students are about using statistics to solve real-world problems.

Our students worry the most about having to take the required statistics course.

Our students seem to be really good at using game simulations.

Based on the story ideas above, maybe an outcome should focus on quantitative knowledge? Another outcome could focus on encouraging game simulations. These ideas were selected through storytelling because I find them meaningful, not because a planning exercise produced them.

Step 2. Connect the Story Ideas to Learning Domains

Brainstorming and being creative are fun, but you do have to get a little organized with learning outcomes. With the previous questions in mind, think about what kinds of knowledge you want your students to learn. Education experts organize learning into three domains:

Affective Domain: What do you want students to care about or value? (Feelings, Emotions, Attitudes)
Cognitive Domain: What do you want students to know? (Intellectual)
Behavioral (Psychomotor) Domain: What do you want students to be able to do? (Physical)

This is where we start to use storytelling to make the connection between what we care about and what we want students to learn (3).

I would like to know how knowledgeable students are about using statistics to solve real-world problems. This looks like a problem in the cognitive domain. Perhaps a learning outcome should focus on assessing how students apply statistics or quantitative thinking to real-world problems?

Our students worry the most about having to take the required statistics course. This looks like a problem in the affective domain, so maybe a learning outcome could get at the idea of student confidence or attitudes towards math?

Our students seem to be really good at using game simulations. This looks like an outcome in the behavioral domain. Perhaps an outcome should be developed that looks at whether and/or why game simulations are an effective learning strategy?

Step 3. Write the Learning Outcome Statement

Some people get really sophisticated with learning outcomes. This paper, which is quite good, states that learning outcome statements are complete, Kantian sentences. I had no idea what Kantian meant, so I googled it. It’s good advice.

I agree that verbs and syntax matter. But getting them right takes practice and time. Like strategic planning, they also have the potential to take away from creativity. The goal is to find a balance. I think balance can be achieved if you use storytelling techniques, as opposed to planning techniques. Planning puts too much emphasis on calculation, and not enough on commitment (1). Balance can be achieved using storytelling because the learning outcomes are attached to already-existing narratives that are meaningful to you, not someone else.

I use the term learning outcomes, but choose whatever you want: goals, objectives, outcomes, targets…whatever works. They’re all statements of intention.

When you have the domain, match it to the right verb. Here’s the first story narrative:

I would like to know how knowledgeable students are about using statistics to solve real-world problems. This looks like a problem in the cognitive domain. Perhaps a learning outcome should focus on assessing how students apply statistics or quantitative thinking to real-world problems?

By referencing this table (p. 2), I can see “apply” as a level in the cognitive learning area. A list of verbs is next to the level. I will build my learning outcome around the verb that best articulates what I am getting at:

Students will be able to construct an advocacy report written for a general audience on the economic benefits of bicycle commuting. 

A more structured method of writing learning outcomes is the ABCD Method (Audience-Behavior-Condition-Degree). I like this method because it requires the program to make learning explicit and operationalizes the assessment.

Given the results from their environmental impact study (condition), students (audience) will be able to construct an advocacy report (behavior) written for a general audience (degree) on the economic benefits of bicycle commuting. 
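To make the four parts concrete, here is a minimal sketch that treats an outcome as structured data and assembles a statement from its parts. This is my own illustration, not anything from the ABCD literature, and a fixed template can’t reproduce every word order; the point is that each part is explicit:

```python
# Sketch of the ABCD Method as a data structure (my illustration).

from dataclasses import dataclass

@dataclass
class ABCDOutcome:
    audience: str   # who is learning
    behavior: str   # observable action, built around a taxonomy verb
    condition: str  # circumstances under which the behavior occurs
    degree: str     # standard or criterion of performance

    def statement(self) -> str:
        return (f"{self.condition}, {self.audience} will be able to "
                f"{self.behavior}, {self.degree}.")

outcome = ABCDOutcome(
    audience="students",
    behavior=("construct an advocacy report on the economic "
              "benefits of bicycle commuting"),
    condition="Given the results from their environmental impact study",
    degree="written for a general audience",
)
print(outcome.statement())
```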

The best part about using storytelling, as opposed to planning, in creating learning outcomes is that the learning outcome is genuine. A lot of people focus on transparent or sustainable learning outcomes. I have no idea why sustainability in learning outcomes is a worthwhile goal. Transparent outcomes are written for accreditation and compliance. Outcomes can be put anywhere – marketing materials, websites, catalogs, reports, etc. – but that doesn’t make them useful. Being transparent is just an activity, and it says little about being honest or truthful.

Genuine outcomes are written to help students learn and programs improve. The best storytelling advice is to tell the truth.

*******************
(1) Assessment has its roots in empirical and mostly quantitative analyses of learning. In the 1980s, it was co-opted by the planning and improvement field. Remnants of this movement are still around. Assessment plans are fine, but there’s a problem with using assessment as a driver of planning: “The problem is that planning represents a calculating style of management, not a committing style. Managers with a committing style engage people in a journey. They lead in such a way that everyone on the journey helps shape its course. As a result, enthusiasm inevitably builds along the way. Those with a calculating style fix on a destination and calculate what the group must do to get there, with no concern for the members’ preferences….calculated strategies have no value in and of themselves…strategies take on value only as committed people infuse them with energy” (H. Mintzberg, Harvard Business Review, January-February 1994, p. 109). Plans should certainly have some intentionality and direction, but learning outcomes aren’t the same as strategic goals.
(2) This was also proposed as an idea-generating process by Patton in Utilization-Focused Evaluation (1978).
(3) Focus on the story and meaning first, and the learning taxonomy second. Some people will try to develop learning outcomes that cover all three of the taxonomies. There’s nothing wrong with having all cognitive outcomes, or all affective outcomes. Additionally, the three domains are not mutually exclusive. There can be overlap; don’t feel like a learning outcome has to fit a domain. For example, a student with crutches who is learning to write will encounter all three domains: cognitive (writing), psychomotor (interruptions due to adjusting crutches), and maybe affective (frustration or a feeling of accomplishment).

Best Practice and Kind-of-Best-Practice Guidelines for Writing Learning Outcomes

References are at the end of this post.
  1. Learning outcomes should focus on what students learn, not what we teach.

Students will be introduced to the topics of abnormal mental behaviors in their patients.

The problem with this outcome is that it is focused on what the teacher does, not what the student will learn.

Proposed fix: Students will be able to document abnormal mental behaviors in their patients.

  2. A strategic or effectiveness goal is not a learning outcome.
  • Students will be satisfied with academic advising (H).
  • The program will witness an 80% retention rate from fall to spring (H).
  • 70% of program graduates will enroll in graduate programs (H).

These are program effectiveness outcomes, not learning outcomes (H). They only indirectly address learning. One could claim that if 70% of program graduates enroll in graduate programs, some kind of learning is occurring, but that’s a very indirect claim. Additionally, it’s difficult to see how this kind of information helps a program improve.

Evidence of learning falls at different levels on a spectrum from indirect to direct. It is perfectly appropriate to evaluate your program based on program effectiveness indicators. It’s a stretch, though, to use them to make a claim about learning.

  3. Learning outcomes should focus on the learning resulting from the activity and not the activity itself (HB, p. 99).

Students will study at least one non-literary genre of art (HB, p. 99).

Maybe your goal is to assess whether students study? If so, that’s fine. You can even measure it based on the number of hours students spend studying or the number of pages they read.

However, this outcome evaluates a process, not learning. Thus, it is only indirectly related to learning and probably shouldn’t be labeled as a learning outcome at the course or program levels.

  4. Learning outcomes should not be too broad and ideally should be discipline-specific.

Students will understand how to communicate well.

Here are the problems with this outcome:

  • Most of us aren’t communications experts, aren’t experts in how to evaluate communication, and can’t control all of the factors that determine how well students communicate.
  • Communication is way too broad. Is this outcome assessing written or oral communication?
  • The word ‘understand’ applies to an internal, covert state of mind, not something students do (see guideline #5).
  • This outcome could apply to every class, activity, or program on campus.

Proposed fix: Given a sentence written in the past or present tense, the student will rewrite the sentence in future tense with no errors in tense or tense contradiction. (BC)

  5. Try to make learning outcomes measurable. Avoid verbs that are unclear or that describe covert, internal behavior, which cannot be measured or is difficult to measure (URI, p. 3).

Students will develop an appreciation of cultural diversity in the workplace.

Students will value the role of statistics in the workplace.

These outcomes are laudable strategic goals or vision statements, but they aren’t good learning outcomes. That shouldn’t stop you from helping students learn about or value diversity in the workplace.

As instructors, we have no idea what is going on inside students’ brains. That is because attitudes and knowledge are covert and internal to the students. Mager wrote that attitude objectives “are not specific descriptions of intent. Statements like these describe states of being; they do not describe doing” (M, p. 103). Words like “appreciation” and “value” are internal states of mind. Cliff Adelman states, “we do not teach college students how to be conscious, and we do not award degrees on the basis of peripheral sensations” (A, p. 10).

That’s why we do assessment. Assessing learning through instruments like papers, demonstrations, artwork, or other activities makes student skills and knowledge overt and allows us to evaluate it. “One does not know a student has the ability to do anything until the student actually does it, for which point we use verbs that indicate what the student actually did” (A, p. 13).

Still, most of us desire that students develop some kind of values and character. Valuing diversity in the workplace is certainly an important goal or outcome for students. In light of the issues associated with assessing and evaluating this, I would consider making it a core value or part of the program mission or vision. There’s no obligation to measure and assess a core value – it stands on its own. Another option would be to make it a program effectiveness goal and measure it indirectly, through a survey or other activity.

A final option is to rewrite the learning outcome to operationalize what you want students to learn. Here are some basic examples (the ABCD model is a good framework for writing outcomes, but for simplicity’s sake these focus more on verbs and basic outcomes):

  • Given a case study, students will produce a workplace inclusion plan.
  • Using a case study, students will be able to defend the economic benefits of workplace diversity.
  6. Try to avoid compound or double-barreled learning outcomes.

Students will be able to successfully venipuncture an arm and define legal issues related to phlebotomy.

Obviously, this should be two learning outcomes.

  7. Learning outcomes should be written in the present tense, not the past or future (A).

Program graduates will demonstrate the democratic ideal through service to their communities.

Even if you could track this, there really isn’t a lot you can do to influence graduates. Focus on what you can do now, in the present or at least the current semester(s).

  8. Learning outcomes should have an activity or assignment associated with them.

Given the symbol representing a particular isotope of an atom or ion, the student will be able to determine the number of electrons, protons and neutrons in that species eight out of ten times (BC).

This is a great learning outcome. If students have no opportunity to demonstrate this outcome, though, people may assume they haven’t learned the material. This is particularly the case for learning outcomes at the program level that rely on multiple courses or activities.
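Part of what makes the isotope outcome so assessable is that the expected performance is completely determinate. As a minimal sketch (my own illustration, not from the cited handout), the computation students are asked to perform looks like this:

```python
# Particle counts from an isotope or ion symbol's mass number (A),
# atomic number (Z), and charge. Sketch only; mine, not from (BC).

def particle_counts(mass_number: int, atomic_number: int, charge: int = 0):
    protons = atomic_number
    neutrons = mass_number - atomic_number
    electrons = atomic_number - charge  # a +1 ion has lost one electron
    return protons, neutrons, electrons

# Example: the sodium ion Na-23 with charge +1
print(particle_counts(23, 11, +1))  # -> (11, 12, 10)
```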

Ideally, you would want multiple assignments to provide information for one outcome. Consider this outcome: Students completing the Engineering program will score over 95% on a locally-developed examination (UC).

In this case, learning is only assessed based on one instrument. Another problem with this outcome is that it dictates an assignment. This may be fine at the classroom level, but hopefully students will have multiple opportunities to demonstrate competency towards a learning outcome at the program level.

(Kind of Guideline) 9. Learning outcomes should be aligned with institutional missions and goals.

Backward design is the idea that learning outcomes should start with the mission of the institution in mind, followed by college, departmental, programmatic, and course goals or missions. The course should then be delivered forward, feeding into the institutional mission. This is what it looks like:

[Figure: design backward, deliver forward]

I think this is a great model, but like most assessment frameworks, it usually doesn’t play out well in practice.

First, colleges and universities are just too internally diverse and variable. A Google image search of colleges of business and colleges of fine arts alone will show this. The institutional mission and goals have to be broad enough to accommodate everyone.

Second, backward design has always reminded me of an exercise in philosophical reductionism. It is like getting multiple cups of coffee from one filter: the third cup barely resembles the first one. By the time the institutional mission or goal gets filtered all the way down to the program and classroom levels, the outcome no longer bears any resemblance to the institutional mission.

(Kind of Guideline) 10. Use the correct language of goals.

Some people are really picky about the differences between outcomes, outputs, goals, targets, and objectives. I don’t think it really matters – they’re all statements of intent. It’s more important to be aware of the differences between learning outcomes and program effectiveness goals (see guideline #2).

(Kind of Guideline) 11. Cohort percent benchmarks and learning outcomes.

Upon completion of the art history program, 80% of students will be able to identify the approximate year of a painting.

These kinds of outcomes are fine for compliance and summative evaluation reasons, but they aren’t really helpful for program improvement.

The first issue with these kinds of outcomes is that the cut-offs are seemingly arbitrary. Why is 80% better than 75%? What is so special and magical about 80%?

The second issue is use of the results. If 90% of students in the art history program meet the goal, it provides an incentive for the program to ignore the outcome and move on. If only 75% show competency, that suggests a problem that may not exist.

These kinds of outcomes are indicative of programs in a compliance-driven, summative assessment mode. Assessment fundamentalists who also serve as accreditation peer reviewers or state policy makers will like it, but I don’t see the value of them for improvement. 


References

(A) C. Adelman, To Imagine a Verb: The Language and Syntax of Learning Outcome Statements, 2015.
(BC) T. Brumfield & S. Carrigan, Instructional Objectives Workshop Handout, 2011.
(HB) M. Huba & J. Freed, Learner-Centered Assessment on College Campuses, 1999.
(M) R. Mager, Preparing Instructional Objectives, 1962.
(URI) University of Rhode Island, Student Learning Outcomes 101.
(UC) University of Connecticut, How to Write Program Objectives/Outcomes.

Do We Really Need Student Learning Outcomes?

I made it to the end of my junior year of college without picking a major. It never really occurred to me to pick one. But I did know what I liked to do – drawing, writing, reading, baseball statistics, history, and a few thousand other things.

I had one last thing to do before I went home for the summer: register for fall classes. The registration office, however, wouldn’t let me. An office worker told me there was a hold on my registration. There was a rule that all seniors must be enrolled in a major. He gave me a list that looked a lot like this:

[Image: list of majors]

Reading down the list, I picked the first one I liked: Art History & Archaeology. And that’s how I picked my major – mostly because it begins with the letter “A.”

Programs are kind of like people. Some programs focus with laser precision on what they want students to learn. They have valid, reliable, and super-precise instruments that tell them everything they need to know.

Other programs kind of muddle along, figuring things out as they go. They might have learning goals, but can’t remember who created them or why – there’s no name or date on the paper. Maybe it was the chair who retired three years ago? The original digital document is long gone, so the learning outcomes exist on a sheet with crooked margins that’s been photocopied a hundred times.

Some programs even intentionally muddle along – they have little structure or intention by design. When I read Lynda Barry’s Syllabus: Notes from an Accidental Professor, I couldn’t help but think how a tightly-structured assessment plan could only get in the way of how she teaches. I really liked the idea of teaching as a process of uncovering what skills and knowledge students already have and building on those. (1)

The thing about programs that muddle along or take a serendipitous approach to learning is that they’ve been doing it a long time, maybe for decades. And they’re still around, engaging and graduating students. They may understand, recognize, and even appreciate the value of learning outcomes, but they’ve been doing fine without them.

So, you really don’t need student learning outcomes. A lot of programs are functioning just fine without them. (2) A lot of people are, too.

But just because a program doesn’t need learning outcomes doesn’t mean it shouldn’t have learning outcomes. I think it’s a good idea for three reasons:

First, it’s good pedagogy. Here’s an edited passage from Popham’s book, Transformative Assessment (pp. 50-51):

Jill has designed a one-month instructional unit to promote students’ mastery of a high-level cognitive skill. Jill will undertake the following activities:

  1. Fully clarify for students the skill they are to master by the time they achieve the unit’s target curricular aim.
  2. Motivate students to achieve the aim by showing how the skill will be potentially beneficial.
  3. Supply instruction.
  4. Model the use of the skill.
  5. Give students ample guided practice as well as time for independent practice.

This is a well-organized class. One can say with confidence that students will learn in it, regardless of whether an assessment exists. The skills are clarified, which comes close to articulating learning outcomes, but even without making the skills explicit, students would likely be learning.

However, without some kind of formative or summative assessment of those clarified skills, how will she know what to modify or improve? In my experience, teaching and program improvement are a continual process of tweaking and change. It’s rare that no changes are made, year after year. Some kind of formative assessment of those skills would go a long way toward providing meaningful feedback and improving the course.

The second reason is that outcomes tell your program’s story. In What the Best College Teachers Do, Bain writes that professors hold two responsibilities (p. 58):

  1. Help students learn.
  2. Tell society how much learning has taken place.

Having real learning outcomes is a good idea because it communicates your program’s story. Telling a program’s story can go a long way in educating decision-makers, responding to public or future-student inquiries, and demonstrating impact.

There’s a third reason, but if you’re engaged with the first two, you shouldn’t have to worry about it: accreditation and accountability. I once heard an accreditation peer reviewer tell a group from a diverse set of disciplines that at least 70% of their program-level data should be benchmarked.

This isn’t to suggest that all peer reviewers feel this way, but there’s not a lot of variation in perspectives on assessment among reviewers, at least in my experience. They are often told what they want to hear because they write reports – reports that are reviewed by people who make decisions about things like budgets, strategic planning, football stadiums, and other important matters.

Does the risk of being genuine outweigh the benefits of homogenized, compliance-driven assessment? I don’t know. If I were a peer reviewer, I would intentionally look for mistakes, lessons learned, challenges, creativity, and genuine ideas and results, and I would be very suspicious of perfect assessment plans that purport to have 70% benchmarkable data and 90% response rates.

Getting Started

If your program has muddled along or taken a serendipitous approach to learning, you might want to consider starting with one learning goal and examining it for one year. You can build on it in year two, or move on to another one. After 5 or 6 years, you will have a lot of assessment information.

Don’t fall into the measurability trap (3): focus on what you find meaningful first, not on whether the outcome can be quantified or measured. Would you end a program that promotes awareness of sexual assault just because of issues associated with measuring outcomes? Of course not. Whether or not an outcome can be measured should not be the sole criterion for addressing it. Don’t fall into another trap, either: the feeling that everything has to be assessed. Time, resources, and energy are precious, and they should be directed towards what we find meaningful and what matters.

Consider storytelling, as opposed to planning, as a way to get started. 

(1) Serendipity and muddling are not the same thing, even though a lot of people think they are. (Kind of like the Bascombe character in Richard Ford’s novels, who many think is cynical but who is actually really intuitive.) One can be methodical and detail-oriented, and still serendipitous. Many of the greatest scientific discoveries were made in controlled, methodical environments.
(2) Other programs are getting by with ghostwritten outcomes or goals, but I wouldn’t say they are functioning fine. Ghostwritten goals are a form of shadow assessment, written for PR or compliance reasons, or to make an administrator or assessment bureaucrat go away. Ghostwritten goals are worse than having no goals at all, because everyone’s time is wasted. And it’s lying.
(3) See Purposeful Program Theory by Sue Funnell & Patricia Rogers for more about this idea.

Assessment Planning and Decision-making: The Problems with Assessment Frameworks

Nearly everyone has an assessment framework that symbolizes how assessment does or should work. I haven’t viewed all of them. But I’ve seen a lot. And most of them look the same.

In the book Reason & Rigor: How Conceptual Frameworks Guide Research, the authors explain the benefits of using frameworks to guide research. Some of the positives of frameworks include:

  • Serving as a guide or map.
  • Capitalizing on the collective expertise of subject-matter experts.
  • Articulating the links between steps in a plan or research study.

Frameworks in assessment serve the same purposes, and are helpful in planning curricular and co-curricular programs and activities.

Frameworks, however, can be limiting (see Mintzberg’s The Rise and Fall of Strategic Planning). There are several reasons why adhering to a strict, formalized assessment model, with little to no deviation or room for serendipity or exploration, can lead to problems.

  1. First, assessment frameworks, by themselves, ignore all of the variables that influence decision-making. Most, if not all, assessment frameworks assume that actions and decisions occur in isolation from other factors, and that the only variable influencing decision-making is the analysis and interpretation of data. Here is an example of what most of them look like:

[Figure: a basic assessment framework]

What if all of the factors that influence decision-making were actually included in this model? It might look something like the model below:

[Figure: the same assessment framework with the factors that actually influence decision-making included]

In Misbehaving: The Making of Behavioral Economics (2015), Thaler calls these supposedly irrelevant factors (SIFs). SIFs are factors that research models do not consider. Classical economists ignored them for many years, assuming (incorrectly, we now know) that all people respond to economic decisions in rational ways. We now know that humans are quite capable of making irrational and often bad decisions, despite our efforts to model human behavior.

Assessment frameworks do a good job of providing a road map. They don’t, however, capture all of the bathroom breaks, family fights, random detours, gas stops, and flat tires.

No one plans a long road trip without taking these factors into consideration. Well, not “no one.” Rational, linear planners probably do. They tend to pride themselves on arriving early or on time, only to sit down and watch TV for a few hours upon arrival. 

Similarly, institutional culture, politics, staff issues, and the normal issues that arise in daily life should be considered when using an assessment framework.
2. Most assessment frameworks are shown as a cycle. This limits decision-making to a narrow definition, ignoring the non-linear and incremental manner in which decisions are actually made. We are constantly making decisions, and they often don’t follow the linear process described in most cycles.

3. Many planning and assessment models assume that organizations are rational. In Strategic Planning for Public and Nonprofit Organizations, Bryson notes that non-profits are only politically rational, and can only be understood from this perspective.

When most organizations articulate how they are organized, it usually looks like this:

[Figure: a traditional, hierarchical organizational chart]

The model above assumes orderly and rational decision-making, where everyone follows a chain of command. Communications are assumed to also follow this chain.

Anyone who works in higher education, and maybe most organizations, knows this is not how things actually work. People communicate with individuals at different levels and in different departments all the time. Additionally, universities are open systems. State politicians, the press, donors, and even random people who seem to just kind of wander on the periphery will exert influence over organizational plans and activities. With that in mind, a different perspective on organizational structure might look like this:

[Figure: organizational structure as an open system]

When working with assessment models and frameworks, it is important to acknowledge the influence of other factors in decision-making and organizational dynamics that may influence the use and interpretation of assessment evidence. In a chapter of Using Evidence of Student Learning to Improve Higher Education, the authors note:

…the relationship between evidence and action is not always neat, rational, or linear. Moreover, the fact that evidence meets the highest possible psychometric standards may have no bearing on its effectiveness in prompting action (Hutchings, Kinzie, & Kuh, p. 41).

This does not mean that frameworks and models should not be used. In fact, they are very helpful for planning and for showing the links between the parts of assessment and evaluation plans. There are several ways to manage the tension between best practice and responsibility to the field of assessment, on one hand, and the messy way in which public non-profits are organized and make decisions, on the other.

In Assessment Reconsidered, authors Keeling, Wall, Underhile, and Dungy recommend distinguishing between formal assessment and informal assessment practices:

Formal assessment practice includes conceptualizing, planning, implementing, and evaluating the impact, or outcomes, of a purposeful, intentional learning event on a set of learners. Informal assessment is the experience that an individual or individuals have when they experience an event in which learning occurs…whether or not that event was intentionally developed or designed (p. 10).

The key, in the informal situation as the authors describe, is to develop methods that “ascribe meaning to that event.” Methods like observation, informal interviews, or quick polls/surveys are good for capturing these moments. Even staff debriefing and documenting observations can be helpful in these situations.

It is also important to ensure that multiple viewpoints are taken into consideration in assessment and evaluation. Many, if not most, decisions do not occur through rational, formal processes and structures. Decisions are often made incrementally over time. Assessment data travels through a multitude of different people and groups, all of whom attach their own interpretation and meaning to the information (more about this topic is in M. Patton, Utilization-Focused Evaluation, 1978). (Developing shared meaning about assessment data is much easier at the program level; the variety of interpretations increases at the institutional level.) Sometimes, it’s not obvious how a decision was reached.

For assessment data to be useful in this context, it should be broadly communicated, discussed, and given time to develop. The emphasis should be on creating shared meaning over time. The phrase “the reality is…” should only be used after a long investment of time and energy in creating shared meaning. (I would suggest never using that phrase in the context of assessment and evaluation; otherwise, people may think the data does not reflect their reality.) To highlight use, instructors and leaders can clarify the connections between intentions and actions through curriculum mapping or logic models, or through reports and meetings.

 
