References at the end of this blog.
- Learning outcomes should focus on what students learn, not what we teach.
Students will be introduced to the topics of abnormal mental behaviors in their patients.
The problem with this outcome is that it is focused on what the teacher does, not what the student will learn.
Proposed fix: Students will be able to document abnormal mental behaviors in their patients.
- A strategic or effectiveness goal is not a learning outcome.
- Students will be satisfied with academic advising (H).
- The program will witness an 80% retention rate from fall to spring (H).
- 70% of program graduates will enroll in graduate programs (H).
These are program effectiveness outcomes, not learning outcomes (H). They only indirectly address learning. One could make a claim that if 70% of program graduates are accepted to a graduate program, some kind of learning is occurring, but that’s a very indirect claim. Additionally, it’s difficult to see how this kind of information helps a program improve.
Evidence of learning falls on a spectrum from indirect to direct. It is perfectly appropriate to evaluate your program based on program effectiveness indicators. It's a stretch, though, to use them to make a claim about learning.
- Learning outcomes should focus on the learning resulting from the activity and not the activity itself (HB, p. 99).
Students will study at least one non-literary genre of art (HB, p. 99).
Maybe your goal is to assess whether students study? If so, that's fine. You can even measure it by the number of hours a student spends studying or the number of pages they read.
However, this outcome evaluates a process, not learning. Thus, it is only indirectly related to learning and probably shouldn’t be labeled as a learning outcome at the course or program levels.
- Learning outcomes should not be too broad and should ideally be discipline-specific.
Students will understand how to communicate well.
Here are the problems with this outcome:
- Most of us aren't communications experts, aren't experts in evaluating communication, and can't control all of the factors that determine how well students communicate.
- Communication is far too broad. Is this outcome assessing written, oral, or nonverbal communication?
- The word ‘understand’ applies to an internal, covert state of mind, not something students do (see guideline #5).
- This outcome could apply to every class, activity, or program on campus.
Proposed fix: Given a sentence written in the past or present tense, the student will rewrite the sentence in future tense with no errors in tense or tense contradiction. (BC)
- Try to make learning outcomes measurable. Avoid verbs that are unclear or that describe covert, internal behavior, which cannot be measured or is difficult to measure (URI, p. 3).
Students will develop an appreciation of cultural diversity in the workplace.
Students will value the role of statistics in the workplace.
These outcomes are laudable strategic goals or vision statements. That shouldn't stop you from helping students learn about or come to value diversity in the workplace, but it's not a good learning outcome.
As instructors, we have no idea what is going on inside students' brains. That is because attitudes and knowledge are covert and internal to the students. Mager wrote that attitude objectives "are not specific descriptions of intent. Statements like these describe states of being; they do not describe doing" (p. 103). Words like "appreciation" and "value" describe internal states of mind. Cliff Adelman states that "we do not teach college students how to be conscious, and we do not award degrees on the basis of peripheral sensations" (A, p. 10).
That's why we do assessment. Assessing learning through instruments like papers, demonstrations, artwork, or other activities makes student skills and knowledge overt and allows us to evaluate them. "One does not know a student has the ability to do anything until the student actually does it, for which point we use verbs that indicate what the student actually did" (A, p. 13).
Still, most of us desire that students develop some kind of values and character. Valuing diversity in the workplace is certainly an important goal or outcome for students. In light of the issues associated with assessing and evaluating this, I would consider making it a core value or part of the program mission or vision. There’s no obligation to measure and assess a core value – it stands on its own. Another option would be to make it a program effectiveness goal and measure it indirectly, through a survey or other activity.
A final option is to rewrite the learning outcome to operationalize what you want students to learn. Here are some basic examples (the ABCD model is a good framework for writing outcomes, but for simplicity's sake I will focus on verbs and basic outcomes):
- Given a case study, students will produce a workplace inclusion plan.
- Using a case study, students will be able to defend the economic benefits of workplace diversity.
- Try to avoid compound or double-barreled learning outcomes.
Students will be able to successfully venipuncture an arm and define legal issues related to phlebotomy.*
Obviously, this should be two learning outcomes.
- Learning outcomes should be written in the present tense, not the past or future. (A)
Program graduates will demonstrate the democratic ideal through service to their communities.
Even if you could track this, there isn't much you can do to influence graduates. Focus on what you can do now, in the present or at least in the current semester(s).
- Learning outcomes should have an activity or assignment associated with them.
Given the symbol representing a particular isotope of an atom or ion, the student will be able to determine the number of electrons, protons and neutrons in that species eight out of ten times (BC).
This is a great learning outcome. If students have no opportunity to demonstrate it, though, people may assume they haven't learned the material. This is particularly the case for learning outcomes at the program level that rely on multiple courses or activities.
Ideally, you would want multiple assignments to provide information for one outcome. Consider this example: Students completing the Engineering program will score over 95% on a locally developed examination (UC).
In this case, learning is assessed with only one instrument. Another problem with this outcome is that it dictates an assignment. That may be fine at the classroom level, but at the program level students should have multiple opportunities to demonstrate competency toward a learning outcome.
(Kind of Guideline) 9. Learning outcomes should be aligned with institutional missions and goals.
Backward design is the idea that learning outcomes should start with the mission of the institution in mind, followed by college, departmental, programmatic, and course goals or mission. The course should then be delivered forward, feeding into the institutional mission. This is what it looks like:
I think this is a great model, but like most assessment frameworks, it usually doesn’t play out well in practice.
First, colleges and universities are just too internally diverse and variable. A Google image search of colleges of business and of fine arts alone will show this. The institutional mission and goals have to be broad to accommodate everyone.
Second, backward design has always struck me as an exercise in philosophical reductionism. It is like brewing multiple cups of coffee through one filter: the third cup barely resembles the first. By the time the institutional mission or goal gets filtered all the way down to the program and classroom levels, the outcome no longer bears any resemblance to the institutional mission.
(Kind of Guideline) 10. Use the correct language of goals.
Some people are really picky about the differences between outcomes, outputs, goals, targets, and objectives. I don’t think it really matters – they’re all statements of intent. It’s more important to be aware of the differences between learning outcomes and program/effectiveness goals (see guideline #2).
(Kind of Guideline) 11. Cohort percent benchmarks and learning outcomes.
Upon completion of the art history program, 80% of students will be able to identify the approximate year of a painting.
These kinds of outcomes are fine for compliance and summative evaluation purposes, but they aren't really helpful for program improvement.
The first issue with these kinds of outcomes is that the cut-offs are seemingly arbitrary. Why is 80% better than 75%? What is so special and magical about 80%?
The second issue is use of the results. If 90% of students in the art history program meet the goal, the program has an incentive to ignore the outcome and move on. If only 75% show competency, it suggests a problem that may not actually exist.
These kinds of outcomes are indicative of programs in a compliance-driven, summative assessment mode. Assessment fundamentalists who also serve as accreditation peer reviewers or state policy makers will like them, but I don't see their value for improvement.