Illinois High School Graduates and Out-of-State Colleges

The map below shows where Illinois high school graduates enroll at public (blue) and private (red) four-year universities.


Fall 2014 Enrollment of Recent H.S. Graduates from Illinois at Out-of-State Four-Year Colleges

Data from the U.S. Dept. of Ed., Integrated Postsecondary Education Data System (IPEDS). Includes four-year public and non-profit private institutions only. Alaska, Hawaii, and territories not included.

 

 

The first story is that a lot of Illinois high school graduates leave Illinois for college. In fact, Illinois loses more high school graduates to out-of-state colleges than any state except New Jersey.

This matters because when a student leaves Illinois for college, they are less likely to return and work in Illinois. The net economic impact of losing a student to another state is about $225,000 over the course of that student's lifetime (1) in income tax revenues alone. This does not include the negative impact on the general economy in terms of lost consumption or spending.

Companies lose in terms of their ability to attract an educated and skilled workforce. Taxpayers lose the investment they made in students in 13 years of K-12 education. Other states win because they are able to develop a highly educated workforce with little investment of their own.

The second compelling story is that a lot of students leave Illinois for public universities. In the last 20 years, enrollment of Illinois high school graduates at in-state four-year public universities has remained relatively flat, while their enrollment at out-of-state four-year public universities has nearly doubled. This should send a clear message about what Illinois residents think about the state of public higher education in Illinois.

Strategies to keep Illinois residents at in-state universities include:

  • Mission differentiation in the four-year public sector, creating institutions with their own unique niche (small and liberal-arts focused; technical or engineering focused; large and research focused; etc.). Institutional diversity provides more options to Illinois residents. A one-size-fits-all approach precludes access to distinctiveness and value.
  • More certainty in the higher education budget for direct institutional subsidies and student financial aid.
  • Financial aid incentives for in-state residents. However, since Illinois residents presumably already pay more to attend public institutions in other states, economic incentives may not be effective in retaining students who are not sensitive to price and who focus more on perceived educational quality.
  • A final strategy would be to recruit more out-of-state students or out-of-state college graduates to Illinois. If Illinois residents are unwilling to invest in strategies that would keep talented high school graduates at in-state colleges, this option could be less expensive. It would probably require, however, more public-private coordination and cooperation, expertise in economic development and education, and leadership.
(1) Adjusted for inflation from $162,000 figure in 2000.
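For the curious, the arithmetic behind that footnote is straightforward. Here is a minimal sketch of the adjustment, assuming a CPI-based calculation from 2000 to 2014; the index values are approximate, and the choice of index and target year are my assumptions, not necessarily those of the original study.

```python
# Rough sketch of the inflation adjustment in footnote (1).
# Assumptions: CPI-U annual averages (approximate), adjusting from 2000 to 2014.
CPI_2000 = 172.2
CPI_2014 = 236.7

estimate_2000 = 162_000  # lifetime income tax revenue per lost student, in 2000 dollars
adjusted = estimate_2000 * (CPI_2014 / CPI_2000)

print(f"${adjusted:,.0f}")  # roughly $223,000, in line with the ~$225,000 cited above
```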

End Planning and Start Storytelling with Learning Outcomes

Perfection of means and confusion of goals seem, in my opinion, to characterize our age (Einstein).

Imagine the ideal student 10 years after your class. What do you want them to know? What are your hopes for them? What do you imagine they will be doing?

We rarely think about what our students will be like in the future. Most of us are focused on what we are doing now.

Assessment professionals (like me) and accreditation people will usually say: “Begin with what you care about and what you find meaningful.” And then they hand you a guidebook and tell you the following:

“Make sure the outcome is measurable (or SMART). Make sure to fit the outcome into a cycle. Use the right verbs. Align the learning outcomes with programmatic, departmental, and college goals.”

Basically, all of the meaningfulness is sucked out by the process that’s handed to you. Why do we make assessment painful or mind-numbing? It doesn’t have to be about arbitrary standard-setting, bureaucratic control, or an anxiety-inducing exercise in verb selection, but that’s how a lot of people feel about it. If you read assessment books or look at online college guides, they’re all pretty much the same. Best practices are encouraged because they are familiar. This rarely leads to anything new.

There are people who hate assessment, but I’ve never met anyone who doesn’t care about learning. Writing outcomes that are meaningful to you and accountable to others is possible. If you tell your classroom’s or program’s story well, you shouldn’t even have to really worry about accountability.

A creative and non-conforming approach that puts more focus on storytelling, as opposed to calculating and planning, can perhaps be a better and more engaging way to write learning outcomes (1).

Step 1. Start with Questions

Rather than thinking of writing learning outcomes as a planning exercise, think of it as storytelling. Margot Leitman provides a good method for writing stories. Start with questions (2):

I would like to know ____________ about my program.
I would like to know ____________ about our students.

You may get pushback from methodological fundamentalists, accreditation reviewers, and strategic planners, but don’t worry about them for now. No one from the evaluation police is going to arrest you for being different. Here are some more ideas:

We would like to know __________ about our students.
We are curious about  _______________.
Our students seem to be really good at ______________.
Our students are really scared of _________.
Our students pretend to care about __________.
Our students worry the most about __________.
We can’t believe our students think ___________.
Our students’ biggest regret is ___________.

Be practical. Focus on what you have at least a moderate level of control over. Finding a soul mate and paying off student loans may be the most anxiety-inducing thing that students deal with, but there isn’t really a lot you can do about it. The goal is utility.

Here are some story ideas from an imaginary environmental sustainability program:

I would like to know how knowledgeable students are about using statistics to solve real-world problems.

Our students worry the most about having to take the required statistics course.

Our students seem to be really good at using game simulations.

Based on the story ideas above, maybe an outcome should focus on quantitative knowledge? Another outcome could focus on encouraging game simulations. Using storytelling, I selected these ideas because I find them meaningful, not because they were the result of a planning exercise.

Step 2. Connect the Story Ideas to Learning Domains

Brainstorming and being creative is fun, but you do have to get a little organized with learning outcomes. With the previous questions in mind, think about what kinds of knowledge you want your students to learn. Education experts organize learning into three domains:

Affective Domain: What do you want students to care about or value? (Feelings, Emotions, Attitudes)
Cognitive Domain: What do you want students to know? (Intellectual)
Behavioral Domain: What do you want students to be able to do? (Physical)

This is where we start to use storytelling to make the connection between what we care about and what we want students to learn (3).

I would like to know how knowledgeable students are about using statistics to solve real-world problems. This looks like a problem in the cognitive domain. Perhaps a learning outcome should focus on assessing how students apply statistics or quantitative thinking to real-world problems?

Our students worry the most about having to take the required statistics course. This looks like a problem in the affective domain, so maybe a learning outcome could get at the idea of student confidence or attitudes towards math?

Our students seem to be really good at using game simulations. This looks like an outcome in the behavioral domain. Perhaps an outcome should be developed that examines whether and why game simulations are an effective learning strategy?

Step 3. Write the Learning Outcome Statement

Some people get really sophisticated with learning outcomes. This paper, which is quite good, states that learning outcomes statements are complete, Kantian sentences. I have no idea what Kantian means, so I googled it. It’s good advice.

I agree that verbs and syntax matter. But getting them right takes practice and time, and, like strategic planning, an overemphasis on them has the potential to take away from creativity. The goal is to find a balance. I think balance can be achieved if you use storytelling techniques, as opposed to planning techniques. Planning puts too much emphasis on calculation, and not enough on improvement (1). Balance can be achieved using storytelling because the learning outcomes are attached to already-existing narratives that are meaningful to you, not someone else.

I use the term learning outcomes, but choose whatever you want: goals, objectives, outcomes, targets…whatever works. They’re all statements of intention.

When you have the domain, match it to the right verb. Here’s the first story narrative:

I would like to know how knowledgeable students are about using statistics to solve real-world problems. This looks like a problem in the cognitive domain. Perhaps a learning outcome should focus on assessing how students apply statistics or quantitative thinking to real-world problems?

By referencing this table (p. 2), I can see “apply” as a level in the cognitive learning area. A list of verbs is next to the level. I will build my learning outcome around the verb that best articulates what I am getting at:

Students will be able to construct an advocacy report written for a general audience on the economic benefits of bicycle commuting. 

A more structured method for writing learning outcomes is the ABCD Method (Audience-Behavior-Condition-Degree). I like this method because it requires the program to make learning explicit and operationalizes the assessment.

Given the results from their environmental impact study (condition), students (audience) will be able to construct an advocacy report (behavior) written for a general audience (degree) on the economic benefits of bicycle commuting. 
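If it helps to see the template’s moving parts, here is a minimal sketch of the ABCD structure as a small data structure; the class and field names are hypothetical, and the example simply reassembles the outcome above.

```python
from dataclasses import dataclass

@dataclass
class ABCDOutcome:
    """An outcome statement broken into its ABCD parts."""
    audience: str   # who does the learning
    behavior: str   # the observable action, built around the chosen verb
    condition: str  # the circumstances under which the behavior is demonstrated
    degree: str     # the standard or context for acceptable performance

    def statement(self) -> str:
        return (f"{self.condition}, {self.audience} will be able to "
                f"{self.behavior} {self.degree}.")

# The bicycle-commuting example from above, split into its parts.
outcome = ABCDOutcome(
    audience="students",
    behavior="construct an advocacy report on the economic benefits of bicycle commuting",
    condition="Given the results from their environmental impact study",
    degree="written for a general audience",
)
print(outcome.statement())
```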

The best part about using storytelling, as opposed to planning, in creating learning outcomes is that the learning outcome is genuine. A lot of people focus on transparent or sustainable learning outcomes. I have no idea why sustainability in learning outcomes is a worthwhile goal. Transparent outcomes are written for accreditation and compliance. Outcomes can be put anywhere – marketing materials, websites, catalogs, reports, etc. But that doesn’t make them useful. Being transparent is just an activity and it says little about being honest or truthful.

Genuine outcomes are written to help students learn and programs improve. The best storytelling advice is to tell the truth.

*******************
(1) Assessment has its roots in empirical and mostly quantitative analyses of learning. In the 1980s, it was co-opted by the planning and improvement field. Remnants of this movement are still around. Assessment plans are fine, but there’s a problem with using assessment as a driver of planning: “The problem is that planning represents a calculating style of management, not a committing style. Managers with a committing style engage people in a journey. They lead in such a way that everyone on the journey helps shape its course. As a result, enthusiasm inevitably builds along the way. Those with a calculating style fix on a destination and calculate what the group must do to get there, with no concern for the members’ preferences….calculated strategies have no value in and of themselves…strategies take on value only as committed people infuse them with energy” (H. Mintzberg, Harvard Business Review, January-February 1994, p. 109). Plans should certainly have some intentionality and direction, but learning outcomes aren’t the same as strategic goals.
(2) This was also proposed as an idea-generating process by Patton in Utilization-Focused Evaluation (1978).
(3) Focus on the story and meaning first, and the learning taxonomy second. Some people will try to develop learning outcomes that cover all three of the taxonomies. There’s nothing wrong with having all cognitive outcomes, or all affective outcomes. Additionally, the three domains are not mutually exclusive. There can be overlap; don’t feel like a learning outcome has to fit a single domain. For example, a student with crutches who is learning to write will encounter all three domains: cognitive (writing), psychomotor (interruptions due to adjusting crutches), and maybe affective (frustration or a feeling of accomplishment).

Best Practice and Kind-of-Best-Practice Guidelines for Writing Learning Outcomes

References at the end of this blog.
  1. Learning outcomes should focus on what students learn, not what we teach.

Students will be introduced to the topics of abnormal mental behaviors in their patients.

The problem with this outcome is that it is focused on what the teacher does, not what the student will learn.

Proposed fix: Students will be able to document abnormal mental behaviors in their patients.

  2. A strategic or effectiveness goal is not a learning outcome.
  • Students will be satisfied with academic advising (H).
  • The program will witness an 80% retention rate from fall to spring  (H).
  • 70% of program graduates will enroll in graduate programs (H).

These are program effectiveness outcomes, not learning outcomes (H). They only indirectly address learning. One could make a claim that if 70% of program graduates are accepted to a graduate program, some kind of learning is occurring, but that’s a very indirect claim. Additionally, it’s difficult to see how this kind of information helps a program improve.

Evidence of learning falls on a spectrum from indirect to direct. It is perfectly appropriate to evaluate your program based on program effectiveness indicators. It’s a stretch, though, to make a claim about learning.

  3. Learning outcomes should focus on the learning resulting from the activity and not the activity itself (HB, p. 99).

Students will study at least one non-literary genre of art (HB, p. 99).

Maybe your goal is to assess whether students study? If so, that’s fine. You can even measure it based on the number of hours a student spends studying or the number of pages they read.

However, this outcome evaluates a process, not learning. Thus, it is only indirectly related to learning and probably shouldn’t be labeled as a learning outcome at the course or program levels.

  4. Learning outcomes should not be too broad and should ideally be discipline-specific.

Students will understand how to communicate well.

Here are the problems with this outcome:

  • Most of us aren’t communications experts, aren’t experts in how to evaluate communication, and can’t control all of the factors that go into how well students communicate.
  • Communication is way too broad. Is this outcome assessing written or oral communication?
  • The word ‘understand’ applies to an internal, covert state of mind, not something students do (see guideline #5).
  • This outcome could apply to every class, activity, or program on campus.

Proposed fix: Given a sentence written in the past or present tense, the student will rewrite the sentence in future tense with no errors in tense or tense contradiction. (BC)

  5. Try to make learning outcomes measurable. Avoid verbs that are unclear or that describe covert, internal behaviors that cannot be measured or are difficult to measure (URI, p. 3).

Students will develop an appreciation of cultural diversity in the workplace.

Students will value the role of statistics in the workplace.

These are laudable strategic goals or vision statements, and nothing here should stop you from helping students learn about or value diversity in the workplace. They are not good learning outcomes, though.

As instructors, we have no idea what is going on inside students’ brains. That is because attitudes and knowledge are covert and internal to the students. Mager wrote that attitude objectives “are not specific descriptions of intent. Statements like these describe states of being; they do not describe doing” (p. 103). Words like “appreciation” and “value” are internal states of mind. Cliff Adelman states “we do not teach college students how to be conscious, and we do not award degrees on the basis of peripheral sensations (A, p. 10).”

That’s why we do assessment. Assessing learning through instruments like papers, demonstrations, artwork, or other activities makes student skills and knowledge overt and allows us to evaluate it. “One does not know a student has the ability to do anything until the student actually does it, for which point we use verbs that indicate what the student actually did” (A, p. 13).

Still, most of us desire that students develop some kind of values and character. Valuing diversity in the workplace is certainly an important goal or outcome for students. In light of the issues associated with assessing and evaluating this, I would consider making it a core value or part of the program mission or vision. There’s no obligation to measure and assess a core value – it stands on its own. Another option would be to make it a program effectiveness goal and measure it indirectly, through a survey or other activity.

A final option is to rewrite the learning outcome to operationalize what you want students to learn. Here are some basic examples (the ABCD model is a good framework for writing outcomes, but for simplicity’s sake these focus more on verbs and basic outcomes):

  • Given a case study, students will produce a workplace inclusion plan.
  • Using a case study, students will be able to defend the economic benefits of workplace diversity.
  6. Try to avoid compound or double-barreled learning outcomes.

Students will be able to successfully venipuncture an arm and define legal issues related to phlebotomy.*

Obviously, this should be two learning outcomes.

  7. Learning outcomes should be written in the present tense, not the past or future. (A)

Program graduates will demonstrate the democratic ideal through service to their communities.

Even if you could track this, there really isn’t a lot you can do to influence graduates. Focus on what you can do now, in the present or at least the current semester.

  8. Learning outcomes should have an activity or assignment associated with them.

Given the symbol representing a particular isotope of an atom or ion, the student will be able to determine the number of electrons, protons and neutrons in that species eight out of ten times (BC).

This is a great learning outcome. If students have no opportunity to demonstrate this outcome, though, people may assume they haven’t learned the material. This is particularly the case for learning outcomes at the program level that rely on multiple courses or activities.

Ideally, you would want multiple assignments to provide information for one outcome. Consider this example: Students completing the Engineering program will score over 95% on a locally developed examination (UC).

In this case, learning is only assessed based on one instrument. Another problem with this outcome is that it dictates an assignment. This may be fine at the classroom level, but hopefully students will have multiple opportunities to demonstrate competency towards a learning outcome at the program level.

(Kind of Guideline) 9. Learning outcomes should be aligned with institutional missions and goals.

Backward design is the idea that learning outcomes should start with the mission of the institution in mind, followed by college, departmental, programmatic, and course goals or missions. The course should then be delivered forward, feeding into the institutional mission. This is what it looks like:

[Figure: design backward, deliver forward]

I think this is a great model, but like most assessment frameworks, it usually doesn’t play out well in practice.

First, colleges and universities are just too internally diverse and variable. A Google image search of colleges of business and fine arts alone will show this. The institutional mission and goals are going to have to be broad to accommodate everyone.

Second, backward design has always reminded me of an exercise in philosophical reductionism. It is like getting multiple cups of coffee from one filter. The third cup from the same filter barely resembles the first one. By the time the institutional mission or goal gets filtered all the way down to the program and classroom levels, the outcome bears little resemblance to the institutional mission.

(Kind of Guideline) 10. Use the correct language of goals.

Some people are really picky about the differences between outcomes, outputs, goals, targets, and objectives. I don’t think it really matters – they’re all statements of intent. It’s more important to be aware of the differences between learning outcomes and program/effectiveness goals (see guideline #2).

(Kind of Guideline) 11. Cohort percent benchmarks and learning outcomes.

Upon completion of the art history program, 80% of students will be able to identify the approximate year of a painting.

These kinds of outcomes are fine for compliance and summative evaluation reasons, but aren’t really helpful for program improvement.

The first issue with these kinds of outcomes is that the cut-offs are seemingly arbitrary. Why is 80% better than 75%? What is so special and magical about 80%?

The second issue is use of the results. If 90% of students in the art history program meet the goal, it provides an incentive for the program to ignore the outcome and move on. If only 75% show competency, it suggests a problem that may not exist.

These kinds of outcomes are indicative of programs in a compliance-driven, summative assessment mode. Assessment fundamentalists who also serve as accreditation peer reviewers or state policy makers will like it, but I don’t see the value of them for improvement. 


References

(A) C. Adelman, To Imagine a Verb: The Language and Syntax of Learning Outcome Statements, 2015.
(BC) T. Brumfield & S. Carrigan, Instructional Objectives Workshop Handout, 2011.
(HB) M. Huba & J. Freed, Learner-centered Assessment on College Campuses, 1999.
(M) R. Mager, Preparing Instructional Objectives, 1962.
(URI) University of Rhode Island, Student Learning Outcomes 101.
(UC) University of Connecticut, How to Write Program Objectives/Outcomes.

Gap Years Are a Bad Idea: Assessing Questionable Higher Ed Advice

There were a lot of “hairy-faced men around” during the beard boom of the early ’80s, at least according to Roald Dahl. Dahl was a little wary of people with beards, writing “when a man grows hair all over his face, it is impossible to tell what he really looks like. Perhaps that’s why he does it. He’d rather you didn’t know.”


Image from The Twits, by Roald Dahl.  A literary and artistic masterpiece.

It’s 2016 and we are in the midst of another boom: the college advice boom. This boom is marked by flawed and anecdotal research marketed towards everyone, but written with a very tiny audience in mind (specifically, very wealthy or famous people who plan on attending highly selective universities).

This advice adequately communicates expectations about what college perhaps should be, but is not an accurate picture of what a vast majority of students in the U.S. will actually experience. Here are three examples of that kind of research.

Example 1: Gap Years

This New York Times blog post about the benefits of the gap year claims that students who take time off may be able to “make better choices about things like alcohol and sex and have a better understanding” of what they want from college. As evidence, the author references Harvard, Middlebury, Princeton and a student who “travel(ed) through Europe with assorted new friends.”

This article from Slate claims that “students who take a gap year seem to drink less when they get to college.” The evidence for this claim was based on another blog by a parent who was wondering (but never claimed) whether or not a gap year in Europe will impact a daughter’s drinking habits in college. This isn’t even anecdote – it’s opinion. And bad research.

That study out of Middlebury – largely anecdotal and based on a small group of high-income students – has been used a lot to advocate gap years as good for all students, regardless of family income or circumstances, including here, here, here, here, and even NPR. The role of general maturity and growing up is not mentioned anywhere.

For most college students, a gap year or any kind of delayed entry is a bad idea. This is because momentum is crucial. Anything that breaks that momentum – like a gap year, taking a semester off, or a high number of course withdrawals – comes with the potential for negative consequences.

It also has a lot to do with identity. Many low-income students’ identities are oriented around many things, like being a student, caring for dependents, and work. Upper-income students’ identities tend to be oriented around one thing: being a student. Gap years can be a particularly vulnerable time for students whose energy, time, and resources must be devoted to competing priorities.

The bottom line: Don’t take a gap year. If your goal is to go to and complete college, and you come from a middle- or low-income family, focus on keeping up your momentum: enroll and stay enrolled.

If you must take a gap year, find a program that is connected to a college and will help pay for it. There are some subsidized programs, but space is very limited, according to some.

Taking a year off to “save money for college” is an irrational financial decision. By delaying your degree, you are trading a future year of high-income work for a year of low-income work now.

If you feel like you need to clarify life goals and explore, there are plenty of opportunities to do that in college – you can study abroad, take an alternative spring break, or participate in volunteer programs. If a gap year needs to be abroad or far away from home, the good news is that there are colleges and universities all over the world — many of their credits even transfer.

If students need to learn independence and responsibility, they could even consider spending a gap year at a two-year college – they will likely learn a lot more about independence, responsibility, and hard work from two-year college students than from a gap year wandering around with “assorted friends” in Europe.

Or better yet, you could consider taking a gap year between college graduation and your first job. You will have the degree in your back pocket and the lifetime financial risk is much lower. Take it from an Art History major – every minute spent worrying about finding a career in your youth is a wasted minute.

Example 2: Working in College

Making the Most of College asserts that working in college has a negligible impact on grades. That is only partially correct. What the book doesn’t communicate is that the impact of working in college is highly dependent on where and how much a student works.

The more selective the university, the less likely a student is to work. According to the National Survey of Student Engagement (NSSE) and the Community College Survey of Student Engagement (CCSSE), nearly one out of five (19%) full-time community college students work 30 or more hours per week, compared to one out of 250 (0.4%) students at highly-selective (wealthy) universities.

Let that sink in for a moment.

Working part-time on campus is good, but working long hours off-campus has consequences, including longer time to degree, lower persistence rates, less time for academic work, decreased wellness benefits and higher levels of stress, and less time to engage in meaningful campus activities.

The bottom line: Working part-time on campus is ideal; working long hours off-campus is not. Moderate off-campus employment is also acceptable, but the negative consequences increase as the number of hours worked increases. Unfortunately, a lot of students really have no choice. Students who work long hours off-campus while in college are very practical, and they mostly figure out a way to make things work, but at a significant cost to their learning and development.

Example 3: Extra-curricular Activities.

This article asserts that students are engaging in more extra-curricular activities at the expense of academic work. The author at least clarifies he is not describing students who need full-time jobs, care for dependents, or those pursuing specialized careers (but he does not clarify that his observations are based on students at a university that represents a tiny sliver of the total U.S. college population, and one with a lot of money).

According to NSSE data, students at highly-selective universities spend a lot more time on extra-curricular activities. 81% at highly-selective universities are involved at least one hour per week, compared to 39% at non-competitive institutions.

Of the 3.5 million 2013 high school graduates, 34% didn’t even enroll in college right away. Another 25% enrolled in open admissions colleges. Most of the other 41% enrolled in places with moderate admissions standards.

It would appear, then, that the vast majority of high school and college students who participate in extra-curricular activities don’t do so because of helicopter or snowplow parenting, grade inflation, admissions-portfolio padding, or ambition. Contrary to what that article asserts, all of that is a myth for most students. Rather, almost all students participate in extra-curricular activities because they are fun or interesting.

The bottom-line: Stay involved in extra-curricular activities. A lot of people don’t pick a major based on their life passion. A lot of other people don’t even really have a life passion. Extra-curricular activities provide a place to channel that passion and give students something fun to do besides watching Netflix or YouTube. Even Quidditch is an option.

Part of going to college is learning to manage your time and priorities. Here is what I tell my kids: “My job is to get you to 18. We have given you many advantages in life. If you mess this up, it’s on you. Not us.”

Students should be told to stay involved. It won’t hurt their learning or development.

Why Do Journalists, Authors, and Even Some Academics Over-sensationalize the College Experience via Anecdote and Bad Research?

Books like Lythcott-Haims’ How to Raise an Adult paint the transition from high school to what’s next as a transcendent and poignant experience of epic significance, and assume that for almost everyone what’s next is a very selective university experience. For most students, this is not the case.

In The First Year Out, Clydesdale speculates about some journalists’ and academics’ motivations and the appeal of college advice tailored for a very narrow audience:

This may be due to authors’ fascination, as high school graduates occupy an ambiguous and even exotic cultural position, enjoying many adult freedoms while having fewer adult responsibilities. (Or the) authors’ desire to confirm cherished assumptions, as teens are sufficiently diverse that authors can readily find a few teens whose stories support a pet theory. (Or maybe) even authors’ convenience, as many authors work with teens full-time and thus of a steady supply of (students) about whom they can write. I quickly learned that teen lives are not nearly as exotic as fictional, nostalgic, and popular accounts suggest (p. 44).

A vast majority of high school graduates are much more conventional and practical. For most college students, the main concerns are probably figuring out how to pay for college, finding work, making new friends, or homesickness. As this article from 538 notes, a majority of young people in the U.S. are more concerned with how to complete and get out of college, as opposed to getting in.

This myth is even evident in popular fiction. The fictional The Admissions (which I really liked and gave 5 stars on Goodreads) attempts to serve as a kind of cautionary tale about selective-university ambition. The family appears to opt out of the rat race. But not really, and what could have been a great book stumbles at the end. The family seems immune from the economic and educational consequences of their transition and decisions (which include an implied gap year). Most students are focused on managing daily activities, and when encountering a life transition, are much more likely to retreat to the familiar, not the new.

How to Evaluate College Advice

  • A lot of the advice from college advice books is fine. People should just take that advice with a grain of salt.
  • Unless you’re looking for advice that is irrelevant or tells you what you already know, I wouldn’t recommend getting college advice from the New York Times. Nothing against the Times in general. Just take their advice with a lot of grains of salt.
  • In fact, don’t take advice from most of the national media. This Washington Post article describes how hectic K-12 schooling has become, noting that ‘training for college scholarships — or professional contracts — begins early, even in grammar school.’ No it hasn’t, not for a vast majority of the population at least. Just writing an opinion down and backing it up with anecdote doesn’t make it true. Take newspaper and media advice about college with the whole salt shaker.
  • Be cautious about the source of anecdotes and research. When someone writes that an experience will help students, and then references a friend’s nephew’s experience and how their grandparents paid for that experience, it should not be taken as an article of faith. If someone makes a claim about all college students based on a group of 50 students at Middlebury, proceed with caution.
  • Rather than reading books about getting into college and managing the college experience, students should be encouraged to explore books about how to be a good learner and good student. That advice is usually based on research.

Do We Really Need Student Learning Outcomes?

I made it to the end of my junior year of college without picking a major. It never really occurred to me to pick one. But I did know what I liked to do – drawing, writing, reading, baseball statistics, history, and a few thousand other things.

I had one last thing to do before I went home for the summer: register for fall classes. The registration office, however, wouldn’t let me. An office worker told me there was a hold on my registration. There was a rule that all seniors must be enrolled in a major. He gave me a list that looked a lot like this:

[Image: an alphabetical list of majors]

Reading down the list, I picked the first one I liked: Art History & Archaeology. And that’s how I picked my major – mostly because it begins with the letter “A.”

Programs are kind of like people. Some programs focus with laser precision on what they want students to learn. They have valid, reliable, super-precise instruments that tell them everything they need to know.

Other programs kind of muddle along, figuring things out as they go. They might have learning goals, but can’t remember who created them or why – there’s no name or date on the paper. Maybe it was the chair who retired three years ago? The original digital document is long gone, so the learning outcomes exist on a sheet with crooked margins that’s been photocopied a hundred times.

Some programs even intentionally muddle along – they have little structure or intention by design. When I read Lynda Barry’s Syllabus: Notes from an Accidental Professor, I couldn’t help but think how a tightly-structured assessment plan could only get in the way of how she teaches. I really liked the idea of teaching as a process of uncovering what skills and knowledge students already have and building on those. (1)

The thing about programs that muddle along or take a serendipitous approach to learning is that they’ve been doing it a long time, maybe for decades. And they’re still around, engaging and graduating students. They may understand, recognize, and even appreciate the value of learning outcomes, but they’ve been doing fine without them.

So, you really don’t need student learning outcomes. A lot of programs are functioning just fine without them. (2) A lot of people are, too.

But just because a program doesn’t need learning outcomes doesn’t mean it shouldn’t have learning outcomes. I think it’s a good idea for three reasons:

First, it’s good pedagogy. Here’s an edited passage from Popham’s book, Transformative Assessment (pp. 50-51):

Jill has designed a one-month instructional unit to promote students’ mastery of a high-level cognitive skill. Jill will undertake the following activities:

  1. Fully clarify for students the skill they are to master by the time they achieve the unit’s target curricular aim.
  2. Motivate students to achieve the aim by showing how the skill will be potentially beneficial.
  3. Supply instruction.
  4. Model the use of the skill.
  5. Give students ample guided practice as well as time for independent practice.

This is a well-organized class. One can say with confidence that students will learn in this class, regardless of whether an assessment exists. Still, the skills are clarified, which is close to articulating learning outcomes; even without making the skills explicit, the students would likely be learning.

However, without some kind of formative or summative assessment of those clarified skills, how will she know what to modify or improve? In my experience, teaching and program improvement is a continual process of tweaking and change. It’s rare that no changes are made, year after year. Some kind of formative assessment of those skills would go a long way in providing meaningful feedback and help improve the course.

The second reason is that outcomes tell your program’s story. In What the Best College Teachers Do, Bain writes that professors hold two responsibilities (p. 58):

  1. Help students learn.
  2. Tell society how much learning has taken place.

Having real learning outcomes is a good idea because it communicates your program’s story. Telling a program’s story can go a long way in educating decision-makers, responding to public or future-student inquiries, and demonstrating impact.

There’s a third reason, but if you’re engaged with the first two, you shouldn’t have to worry about it: accreditation and accountability. I once heard an accreditation peer reviewer tell a group from a diverse background of disciplines that at least 70% of their program-level data should be benchmarked.

This isn’t to suggest that all peer reviewers feel this way, but there’s not a lot of variation in terms of perspectives about assessment among reviewers, at least in my experience. They are often told what they want to hear because they write reports – reports that are reviewed by people who make decisions about things like budgets, strategic planning, football stadiums, and other important matters. 

Does the risk of being genuine outweigh the benefits of homogenized, compliance-driven assessment? I don’t know. If I was a peer reviewer, I would intentionally look for mistakes, lessons learned, challenges, creativity, and genuine ideas and results, and be very suspicious of perfect assessment plans that purport 70% benchmark-able data and 90% response rates. 

Getting Started

If your program has muddled along or taken a serendipitous approach to learning, you might want to consider starting with one learning goal and examining it for one year. You can build on it in year two, or move on to another one. After 5 or 6 years, you will have a lot of assessment information.

Don’t fall into the measurability trap (3): focus on what you find meaningful first, not on whether the outcome can be quantified or measured. Would you end a program that promotes awareness of sexual assault just because of issues associated with measuring outcomes? Of course not. Whether or not an outcome can be measured should not be the sole criterion for addressing an outcome. Don’t fall into another trap, either: the feeling that everything has to be assessed. Time, resources, and energy are precious, and they should be directed towards what we find meaningful and what matters.

Consider storytelling, as opposed to planning, as a way to get started. 

(1) Serendipity and muddling are not the same thing, even though a lot of people think they are. (Kind of like the Bascombe character in Richard Ford’s novels, who many think is cynical but is actually really intuitive.) One can be methodical and detail-oriented, and still serendipitous. Many of the greatest scientific discoveries were made in controlled, methodical environments.
(2) Other programs are getting by with ghostwritten outcomes or goals, but I wouldn’t say they are functioning fine. Ghostwritten goals are a form of shadow assessment written for PR or compliance reasons, or to make an administrator or assessment bureaucrat go away. Ghostwritten goals are worse than having no goals at all, because everyone’s time is wasted. And it’s lying.
(3) See Purposeful Program Theory by Sue Funnell & Patricia Rogers for more about this idea.

Assessment Planning and Decision-making: The Problems with Assessment Frameworks

Nearly everyone has an assessment framework that symbolizes how assessment does or should work. I haven’t viewed all of them. But I’ve seen a lot. And most of them look the same.

In the book Reason & Rigor: How Conceptual Frameworks Guide Research, the authors explain the benefits of using frameworks to guide research. Some of the positives of frameworks include:

  • Serving as a guide or map.
  • Capitalizing on the collective expertise of subject-matter experts.
  • Articulating the links between steps in a plan or research study.

Frameworks in assessment serve the same purposes, and are helpful in planning curricular and co-curricular programs and activities.

Frameworks, however, can be limiting (see Mintzberg’s The Rise and Fall of Strategic Planning). There are several reasons why adhering to a strict, formalized assessment model, with little to no deviation or room for serendipity or exploration, can lead to problems.

  1. First, assessment frameworks, by themselves, ignore all of the variables that influence decision-making. Most, if not all, assessment frameworks assume that actions and decisions occur in isolation from other factors, and that the only variable that influences decision-making is the analysis and interpretation of data. Here is an example of what most of them look like:

[Figure: a basic assessment cycle]

What if all of the factors that influence decision-making were actually included in this model? It might look something like the model below:

[Figure: the same assessment cycle with the other factors that influence decision-making included]

In Misbehaving: The Making of Behavioral Economics (2015), Thaler calls these supposedly irrelevant factors (SIFs). SIFs are factors that are left out of research models. Classical economists ignored them for many years, assuming (incorrectly, we now know) that all people respond to economic decisions in rational ways. We now know that humans are quite capable of making irrational and often bad decisions, despite our efforts to model human behavior.

Assessment frameworks do a good job of providing a road map. They don’t, however, capture all of the bathroom breaks, family fights, random detours, gas stops, and flat tires. No one plans a long road trip without taking these factors into consideration. Similarly, institutional culture, politics, staff issues, and the normal issues that arise in daily life should be considered when using an assessment framework.

2. Most assessment frameworks are shown as a cycle. This limits decision-making to a narrow definition, ignoring the non-linear and incremental manner in which decisions are actually made. We are constantly making decisions, and they often don’t follow the linear process described in most cycles.

3. Many planning and assessment models assume that organizations are rational. In Strategic Planning for Public and Nonprofit Organizations, Bryson notes that non-profits are only politically rational, and can only be understood from this perspective.

When most organizations articulate how they are organized, it usually looks like this:

[Figure: a traditional, hierarchical organizational chart]

The model above assumes orderly and rational decision-making, where everyone follows a chain of command. Communications are assumed to also follow this chain.

Anyone who works in higher education, and maybe most organizations, knows this is not how things actually work. People communicate with individuals at different levels and different departments all the time. Additionally, universities are open systems. State politicians, the press, donors, and even random people who seem to just kind of wander on the periphery will exert influence over organizational plans and activities. With that in mind, a different perspective on organizational structure might look like this:

[Figure: an organization as an open system, with communication crossing levels and departments and outside actors exerting influence]

When working with assessment models and frameworks, it is important to acknowledge the influence of other factors in decision-making and organizational dynamics that may influence the use and interpretation of assessment evidence. In a chapter of Using Evidence of Student Learning to Improve Higher Education, the authors note:

…the relationship between evidence and action is not always neat, rational, or linear. Moreover, the fact that evidence meets the highest possible psychometric standards may have no bearing on its effectiveness in prompting action (Hutchings, Kinzie, & Kuh, p. 41).

This does not mean that frameworks and models should not be used. In fact, they are very helpful in terms of planning and showing the links between parts of assessment and evaluation plans. There are several ways to manage the tension between best practice and responsibility to the field of assessment, on the one hand, and the messy way in which public non-profits are organized and make decisions, on the other.

In Assessment Reconsidered, authors Keeling, Wall, Underhile, and Dungy recommend distinguishing between formal assessment and informal assessment practices:

Formal assessment practice includes conceptualizing, planning, implementing, and evaluating the impact, or outcomes, of a purposeful, intentional learning event on a set of learners. Informal assessment is the experience that an individual or individuals have when they experience an event in which learning occurs…whether or not that event was intentionally developed or designed (p. 10).

The key, in the informal situation as the authors describe, is to develop methods that “ascribe meaning to that event.” Methods like observation, informal interviews, or quick polls/surveys are good for capturing these moments. Even staff debriefing and documenting observations can be helpful in these situations.

It is also important to ensure that multiple viewpoints are taken into consideration in assessment and evaluation. Many, if not most, decisions do not occur through rational, formal processes and structures. Decisions are often made incrementally over time. Assessment data travels through many different people and groups, all of whom attach their own interpretation and meaning to the information (more about this topic is in M. Patton, Utilization-Focused Evaluation, 1978). (Developing shared meaning about assessment data is much easier at the program level. The variety of interpretations increases at the institutional level.) Sometimes, it’s not obvious how a decision was reached or made.

For assessment data to be useful in this context, it should be broadly communicated, discussed, and given time to develop. The emphasis should be on creating a shared meaning over time. The phrase “the reality is…” should only be used after a long investment of time and energy in creating shared meaning. (I would suggest never using that phrase in the context of assessment and evaluation — otherwise, people may think the data does not reflect their reality).  In order to highlight use, instructors and leaders can clarify the connections between intentions and actions through curriculum mapping or logic models, or through reports and in meetings.

 


Junk Assessment and Junk Miles: Quit Worrying About Assessment and Have Fun

On a recent long bike ride in the Illinois countryside, I was supposed to be concentrating on my training regimen, but my mind wandered to assessment. Not something most normal people do, but it’s my job.

Thinking about challenges associated with the use of assessment, three themes came to mind:

  1. Assessment isn’t useful. People are busy with their regular jobs, and no one likes being forced to invest precious time and energy in something they won’t use. Even worse, people hate being told to do something they are already doing anyway.
  2. Assessment methodologies are messy and unreliable. This is particularly true in regard to declining survey response rates. A lot of people just hate taking standardized tests and online surveys. As this college student from England notes, surveys are “boring, tedious, and usually pointless. I hate surveys when they talk about stuff I don’t care about (and) when they don’t change anything.” Low response rates are another issue. Some people take a “do the best we can with the data we have and proceed with caution” approach. A lot of people, though, feel that anything below an 80% participation rate utilizing double-blind experimental methods should be thrown out. And then buried and covered with salt.
  3. Assessment is too rational, linear, restrictive, goal-oriented, and a pedagogical straitjacket.

At around mile 10 of my ride, it occurred to me that I needed to pick up the pace in order to avoid junk miles. Junk miles are miles that serious cyclists and expert trainers advise people to avoid because they have no specific training purpose. The idea is that one can’t improve strength and performance by wasting time on casual rides through the neighborhood.

As an industry, we do a lot of assessment. Could it be that we are engaging in junk assessment, and that a lot of it is just a waste of time, particularly if it’s not useful, the methodology is questionable, and it’s too restrictive? To answer this question, I had to examine my own experience with cycling.

I started cycling over a year ago by just getting on my old bike and going, with no purpose in mind. I really liked it, though, and wanted to get better. Over time, I made moderate changes to my diet and reduced the “junk” miles.

To get better and monitor my performance, I figured I needed data to make better decisions. Those room-temperature, half-eaten chicken nuggets my kids leave on the counter, and I later eat over the sink, really add up. So, I needed two things: good research and good tools. This approach makes sense to me. In fact, it’s the whole principle of how assessment is supposed to work, at least according to standard assessment models: you create a goal, measure progress towards that goal, and use the results, or evidence, to make better decisions.

This was more challenging than I thought. I came across a reputable news website titled 15 Deadly Food Myths. Based on this article, there is no way I should be alive. And who knew that broccoli is full of toxins and has by-products that have been shown to cause cancer in lab rats?

Actually that’s not true. Broccoli is great for you. But this Onion-esque article shows how difficult it is to sift through all of the information out there. After all of my research on diet, this headline from the real Onion described how I felt: Eggs Good for You This Week.

Well, I thought, I’ll just simplify and focus on calories in and calories out. I bought a GPS bike computer to track miles, heart rate, calories burned, and other important information. This approach is based on research. People who track their calorie intake and expenditure are more likely to lose weight.

Or are they? The problem is that people are terrible at measuring what they eat. We lie about what we eat to ourselves and others. A headline referencing an article from the New England Journal of Medicine informed me that counting calories “never” works. Food labels can be wrong. Even my bike computer and other fitness trackers aren’t accurate.

At this point, I had two choices. I could devote more time, energy, and resources into getting better and more “significant” data, or I could just deal with the best data I could get and make decisions from there.

 I have two little kids. I want to enjoy life, not just measure it. Measuring micro-nutrients and other biometric data with pee-sticks and brow-rags may provide good data, but it wouldn’t be a worthwhile use of my time or money. Besides, I feel great and am getting better, even without sending samples to a company that analyzes the intracellular micro-nutrients in blood.

My new approach to cycling and not worrying about junk miles worried me a little at first, but then I read an article by Selene Yeager: Why There’s No Such Thing as Junk Miles. A laser focus on each ride being aligned with a specific training goal, and informed by precise (e.g., valid and reliable) data is fine for some people. But think about what one misses out on:

  • Serendipitous discoveries, like seeing an eagle or a fox or enjoying a burger and beer with friendly locals at a small-town bar and grill you didn’t even know existed before.
  • Taking a 15 mile bike ride with your 10-year old daughter.
  • Learning something new about the group of friends you’re cycling with. Or making new friends.

I reached a similar conclusion while giving a co-presentation about an institution-wide survey of students. The survey revealed a difference in opinion in terms of feedback from faculty to students. We were having a great conversation about feedback, and then the inevitable question arose: “what was the response rate?” It was low, certainly out of the range where one can make generalizations about the entire population. (Although significance does not always mean one can generalize, or even that the data means anything). It was politely noted by the questioner that due to the low response rate, no conclusion about the data can be made and that the entire survey was a waste of time.

A humanities instructor, who admittedly had no training or expertise in calculating statistical significance, questioned this wisdom. We were having a great conversation about the topic, and it made him think differently about feedback to his students. He planned on investigating this topic in more detail in his own classroom, and improving his teaching from there.

From one perspective, then, the survey was a form of junk assessment. Even so, at least one instructor planned on improving his classroom. The mistake is in thinking that assessment and evaluation is just about the methodology and statistical significance. It isn’t. A hallmark of good assessment is sharing and dialog. In fact, the conversation and dialog is just as, if not more, important than the methodological precision of the assessment. The term ‘data-driven decision making’ has little place in the world of assessment and evaluation, in my opinion. People drive decisions, not data. This is particularly true at the program level, where people make sense of and use data in terms of how it relates to them and their particular program’s context, not some objective truth that can be tested.
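For what it’s worth, the sampling side of that question is easy enough to put a number on. Below is a minimal sketch of the margin of error for a survey proportion; the enrollment, response count, and percentage are hypothetical, not figures from the survey discussed above, and the formula says nothing about nonresponse bias.

```python
import math

# Hypothetical numbers: 8,000 enrolled students, 800 respondents (a 10% response rate),
# and 62% of respondents saying faculty feedback is timely.
population = 8_000
respondents = 800
p_hat = 0.62

# 95% margin of error for a proportion, with a finite population correction.
z = 1.96
standard_error = math.sqrt(p_hat * (1 - p_hat) / respondents)
fpc = math.sqrt((population - respondents) / (population - 1))
margin_of_error = z * standard_error * fpc

print(f"95% margin of error: +/- {margin_of_error:.1%}")  # about +/- 3.2 percentage points

# Caveat: this only quantifies sampling error under random sampling. It says nothing about
# nonresponse bias -- whether the 10% who answered differ from the 90% who did not --
# which is usually the real worry with low response rates.
```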

This isn’t to suggest that methodology and data quality aren’t important. In every study, one should always strive for representative samples and pay very close attention to details. In the uncontrolled settings in which a vast majority of assessment and evaluation occur, however, this is practically difficult. One should never advocate for substantial organizational, curricular, or budgetary changes based on one question from a survey with a 10% response rate – that would be irresponsible. However, there is no reason we can’t at least talk about the data and results, and discuss ideas to look at the problem in more detail and follow up with research backed by more rigorous methods. In Evaluation Debates, Carol Weiss says it best:

Evaluation can do more than just legitimate something people already knew. It can also help to clarify and crystalize it and express sort of vague, inchoate feelings that people have and don’t really understand. Once evaluation does that, it really can be helpful (p. 145).

Here are a few strategies to make assessment and evaluation practical and meaningful:

  • Capitalize on what is already occurring. Programs do a lot of assessment, and it might be helpful to inventory what already exists.
  • On the other hand, it’s okay to get rid of unproductive assessments. If an assessment is focused on something the program doesn’t care about, stop doing it.
  • Don’t think that everything has to be assessed and evaluated. Pick 3-6 things that you care about, and focus on those. Focusing on one or two goals, themes, learning outcomes, or whatever you care about per year can contribute to great conversations over time.
  • If participation or response rates are low, don’t make decisions right away. Focus on the conversation and dialog, and see if further research is warranted.
  • Focus on where results will be discussed, as opposed to when. Places should be created where people can talk about and reflect on assessment evidence.
  • Foster a healthy organizational culture. Assessment works best in places where people trust each other, governance and leadership are stable (people don’t give up on the institution or each other), and people can engage in honest conversations.
  • Embrace the idea of serendipity in assessment. All assessment plans will contain some kind of outcome or goal, but it’s okay to incorporate the exploration of issues and ideas in evaluation plans.
  • Be open to different ideas in terms of what it means to use assessment data. Assessment literature can be pretty limited in terms of how it defines using assessment evidence. Most literature advocates ‘closing the loop.’ The links between evidence and decision-making, in reality, are rarely that direct and immediate. The problem is that these definitions ignore the incremental and evolutionary nature in which program decisions are actually made. (Michael Patton discusses this idea at length in Utilization Focused Evaluation).

If there is such a thing as junk assessment, it may be compliance-driven, externally-mandated, summative assessment. In So What Do They Really Know?, Cris Tovani states that summative assessment is like an autopsy. Autopsies provide data that is valuable for doctors, nurses, health care policy makers, hospital administrators, and medical researchers, but does very little for the patient. Similarly, summative and compliance-driven assessment data may do very little for instructors and people at the program level, but it should inform policy makers and upper-level administrators.

But like student loans, car insurance, and dental work, assessment for external entities is just part of life. Granting agencies, governments, accrediting bodies, donors, and a host of other people will want to know your program’s story. Additionally, programs in higher education exist in both disciplinary and institutional contexts. The disciplinary context may have a large degree of independence in regard to evaluation, but the institutional context will almost always be subject to planning, evaluation, and budgeting processes. Even then, these processes provide opportunities for reflection and dialog, and another opportunity to learn.
