Moving Out and On: Illinois Students at Out-of-State Colleges

Out-of-state college attendance by Illinois residents is a problem. Historically, a strong economy, large population, and functional government shielded Illinois from the negative impacts of migration. That is no longer the case. Click here to see a visualization of the impact.

The Magnitude

Figure 1 shows the enrollment of first-year students from Illinois at Illinois 4-year colleges over the last 24 years. Enrollment at 4-year publics is essentially flat over the full period, and in the last 10 years alone it has decreased by almost 20%.


Figure 1. First-year Student Enrollment of Illinois Residents at Illinois 4-year Colleges, 1992-2016. See this visualization for more information. Overall enrollment in Illinois colleges (including Illinois and out-of-state residents) has fallen by about 16,000 students in the last seven years alone.

Figure 2 displays the enrollment of first-year students from Illinois at 4-year colleges in other states. While the enrollment of Illinois residents in Illinois colleges has been flat, out-of-state enrollment has nearly doubled.


Figure 2. First-year Student Enrollment of Illinois Residents at Out-of-State 4-year Colleges, 1992-2016. See this visualization for more information.

Is Out-of-State Enrollment Really a Problem?

Think of Illinois taxpayers as investors in people. Businesses invest in equipment, space, productivity and – especially – people. They expect returns on these investments.

Illinois taxpayers invest in the future of the state through education and training, hoping those investments yield positive returns.

The evidence is clear that investments in people through education and training produce positive returns to state taxpayers. In fact, investments in people produce higher rates of return than pretty much any other kind of investment, including the stock market, gold, and real estate.


Figure 3. Rate of Return on a College Degree Compared to Other Investments. Source: Thompson, D. (2012, March 27). What’s More Expensive Than College? Not Going to College. The Atlantic.

Costs

Illinois taxpayers invest about $12,000 annually in each K-12 student. Over 13 years of schooling, that amounts to $156,000 per student. That’s an investment of billions of dollars in the future of the state.
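
For readers who want the arithmetic spelled out, here is a minimal sketch, assuming the 13 years of schooling from kindergarten through grade 12:

```python
# Back-of-the-envelope K-12 investment per student, using the figures above.
annual_spending = 12_000     # approximate annual Illinois spending per K-12 student
years_of_schooling = 13      # kindergarten through grade 12
total_per_student = annual_spending * years_of_schooling
print(total_per_student)     # 156000, the figure cited above
```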

Returns

As Figure 3 showed, investments in people produce high returns. Despite the news stories and blog posts about rising tuition and student loan debt, the returns from education and training are still significantly higher than those of pretty much any other investment one can make.

For Illinois, these returns mean a larger tax base, lessening the tax responsibility and costs for all citizens. They also mean additional investments in the quality of life of the state’s citizens through better roads, schools, parks, and other amenities.*

*This blog focuses mostly on the economic returns of education. It should be acknowledged that higher education produces non-economic returns that also make positive contributions to a region’s quality of life.

What About People Who Move Back?

Unfortunately, returning home is the exception rather than the rule. A study by Eric Lichtenberger and Cecile Dietrich found that although many out-of-state college students return to Illinois after graduation, their rate of employment in Illinois is substantially lower than that of students who stayed in Illinois for college. This is consistent with migration research: once a person moves to a new region, they are much more likely to move again and less likely to return to their native state.

There’s another problem with out-migration. Lichtenberger and Dietrich also noted that Illinois students who attend out-of-state colleges and universities are much more likely to come from upper-income families and enroll in high-demand fields in science, research and health.

Economists call this phenomenon brain drain. History is full of examples of countries and states that have taken advantage of other regions’ brain drain. Many of the technological advances made by the U.S. in the 1950’s and 1960’s, for example, came from Jewish scientists who fled Germany for the U.S. during the Second World War.

How Other States Win from Illinois’ Neglect of Higher Education

Other states have caught on: Illinois residents don’t feel there are enough options for them in Illinois, the state underfunds higher education and has no coherent plan for it, and out-of-state tuition is sometimes perceived to be lower than in-state tuition.

Twenty percent of all first-year students at Mizzou are now from Illinois, compared to about 5% in 1996. In 2012, the Chicago Tribune picked up on this and published an article titled The University of Chicagoland at Missouri.

Students are not only being sold the college, however. Out-of-state colleges also sell their state as a good place to live and work after graduation. Judging by the high out-migration rates among Illinois citizens generally, the recruitment appears to be working. While out-of-state enrollment in border states like Missouri and Iowa has always been strong, enrollment in far-off states like Alabama and Colorado has increased dramatically in the last 10 years.


Figure 4. First-year Student Enrollment of Illinois Residents at Colorado 4-year Public Colleges, 1992-2016. See this visualization for more information.


Figure 5. First-year Student Enrollment of Illinois Residents at Alabama 4-year Public Colleges, 1992-2016. See this visualization for more information.

Other states’ taxpayers win by gaining all the returns with none of the upfront costs in K-12 education. It’s like inheriting a constant stream of income from a trust fund in which you invested zero dollars.

Proposed Solutions


Figure 6. First-year Student Enrollment of Illinois Residents at Out-of-State 4-year Colleges, Fall 2016. Public colleges are in red, private in blue. See this visualization for more information.

Mission Differentiation

The largest red dots on the map above represent the Universities of Iowa, Iowa State, Missouri, Wisconsin, Michigan, Indiana, Minnesota, Purdue, Ohio State, and Kentucky. They are essentially out-of-state versions of the University of Illinois.

Many states have differentiated systems, with large universities focused on research, public liberal arts colleges focused on teaching, and institutions focused on science and technology. They do this because it fits state economic and social priorities. And it gives residents many options, keeping them in state.

As stated in A Decade of Decline, Illinois has been unable to establish shared state goals and priorities for higher education, and it fails to allocate resources strategically, if resources are allocated at all. A dysfunctional political culture and a lack of will among residents to make public investments have had a significant negative impact on higher education in Illinois. Though unlikely to be adopted, mission differentiation is probably the best solution.

Bringing College Graduates Back (and Stealing College Graduates from other States)

This approach accepts the idea that there is no political will in Illinois to adequately fund higher education, ensure adequate planning, or provide good governance. Instead, it dedicates resources and energy to aggressively recruiting college graduates from other states.

Economically, this might make sense. There is little to no investment by Illinois residents, yet the state reaps all the returns.

There are significant challenges to this approach, however. Illinois is hemorrhaging residents. The state was ill-prepared for the transition from a manufacturing/goods-producing economy to a knowledge/service-based one (the exception being Chicago). Today’s knowledge workers are not only looking for quality jobs, but also a quality of life. Infrastructure, business incubation, environments where people can share knowledge across industries and sectors, and public amenities should be viewed as investments, not public costs. Unfortunately, it’s not clear that Illinois is doing much to attract a vibrant workforce looking for jobs relevant to today’s economy.

Merit Aid

A broad merit aid approach will likely have minimal impact on keeping students in state. College student migrants are not making decisions based on cost – they are making them based on perceived institutional quality and the desire for a specific college experience. They generally come from high-income families and can easily afford the out-of-state costs.

When a majority of the Illinois students who leave do so to attend large research universities with prominent athletic programs – and there’s only one university in the state that offers that experience – those residents are sending a message. By contrast, Iowa and Kansas (about 3 million residents each) have a quarter of Illinois’s population (about 12 million), yet each sustains two large public universities.

A targeted merit aid approach focused on high-demand fields, like science or technology, could potentially work. But the campus programs have to exist. And they have to be mature and adequately funded. A 50-year-old I.T. or chemistry program at Purdue or Mizzou, for example, has a substantial head start. And there’s the risk that graduates could still leave the state after graduation.


An Insider’s Take on Assessment Reaction

Every few months, someone writes an article exposing assessment for what it supposedly is: a waste of time and a destructive force. The latest installment is from the Chronicle of Higher Education, An Insider’s Take on Assessment: It May Be Worse Than You Think.

Any discipline should welcome critical thought and reflection. This article definitely provides material for interesting discussion. I have a reaction to three points made in the article:

  1. Assessment as a form of research.
  2. Assessment as a form of control.
  3. Assessment as a conspiracy.

Assessment as a Form of Research

The author quotes an article from Intersection stating “The whole assessment process would fall apart if we had to test for reliability and validity and carefully model interactions before making conclusions about cause and effect.”

Assessment isn’t the only discipline dealing with this. As the replication crisis shows, a lot of empirical research struggles with issues of reliability and validity. Assessment may be no worse than a lot of the social science research in peer-reviewed journals or conference presentations.

In Debates on Evaluation, Mike Patton provides an example to explain the tension between validity and utility. Most agriculture faculty like to conduct research in controlled settings. This is because it enhances validity and reliability.

Most of the time, this is good. Scientific methods and prescribed measurement principles have contributed to advancements in health care, psychology, and how we understand the world. On the other hand, advancements have also been made by accident, by serendipity, or by deliberately ignoring the rules of empirical, “gold standard” research.

This isn’t anti-science or anti-intellectual. It’s pragmatic. Is anyone really going to tell an art or music professor with 20 years of experience in their discipline that their assessment of learning is invalid or unreliable because it lacks the “gold standards” of empirical research design? Good luck with that.

As the son-in-law and grandson of farmers, however, I know that any farmer will tell you that controlled settings are always impractical and almost always impossible. Based on the criteria described in the Chronicle article, my father-in-law would be wise to ignore most agriculture research. In fact, based on the logic presented in the Chronicle, most farmers would be wise to avoid empirical agriculture research conducted in controlled settings.

So, of course assessment is going to “fall apart” when tested for validity and reliability, just like a lot of peer-reviewed, empirical studies and probably most social science research. But does that mean my father-in-law should ignore agriculture research because it doesn’t translate into the field? Conversely, do agriculture researchers have nothing to learn from farmers because their observations lack the gold standards of empirical research, like controlled settings, randomized trials, etc.? Does the absence of these methods imply the observations are worthless?

Of course not. That would be ridiculous.

Assessment as a Form of Control

I agree with the author of the Chronicle article on a major point. Here’s the quote:

He (the author of the Intersection article) also seems to be opening the door to a challenge to what is perhaps the single most implausible idea associated with assessment: that grades given by people with disciplinary knowledge and training don’t tell us about student learning, but measurements developed by assessors, who lack specific disciplinary knowledge, do.

I have never in my career met anyone who would tell a faculty member how to measure learning in their class or even their program. I would never do that. It sends the message: “I have no idea what your job is, but whatever it is, you’re doing it wrong.”

The Chronicle article also makes a point about grades and ignoring the judgment of disciplinary experts. I happen to think grades are fine at the classroom level. Their utility at the program level is another matter.

Unfortunately, conference attendance and a review of institutional assessment websites make me believe there are a lot of assessment administrators who do tell faculty how to measure learning. And they are telling them that grades are unreliable measures of learning, even at the classroom level. If the goal is to foster engagement with assessment, telling someone they’re wrong is a bad strategy.

Assessment as a Conspiracy

The final point from the Chronicle article is that assessment is somehow related to the rise of on-line learning and growth in adjunct faculty. The author does not specifically use the term “conspiracy,” but the theory is that assessment provides evidence of quality in these areas, thus justifying their existence.

This might be a matter of correlation and not causation. Assessment has its roots in psychological research in the 1960’s and 1970’s and the accountability movement of the late 1970’s and 1980’s. Assessment predates on-line learning and massive growth in adjunct faculty.

Is it possible that assessment was co-opted by administrators later? I don’t know. I’ve only worked in “traditional,” non-profit higher education institutions. All I can say is that I have never witnessed administrators place more emphasis on assessing on-line teaching and learning over “traditional” forms, like classroom or lab teaching and learning, as a scheme to justify lower costs.

If assessment is an effective strategy for articulating the benefits of dubious educational practices, then why not use assessment to articulate the benefits of “legitimate” educational practices? I would certainly support that approach, and I don’t know what’s stopping people from doing the latter.

A Way Forward

The Chronicle article didn’t really offer a lot in terms of positive solutions. Here are some ideas for moving forward:

Give up the fight about grades. Grades are fine at the classroom level. Telling faculty they aren’t insults their disciplinary expertise. As a form of measurement and motivation, grades can be problematic. My assumption, though, is that almost all faculty are pretty good about tying learning outcomes to grades and communicating to students what a grade means, so I don’t know why assessment administrators make such a big deal about them. Grades at the program level are a different story. I have no idea if a student with a 3.0 is more knowledgeable than a person with a 2.86.

Consider ending grids and templates. The point of standardized grids and templates is to give administrators a view of learning and program effectiveness for the institution as a whole. In theory, the components of the template are compiled into one document and reviewed. This almost never happens. And even if it did, programs and courses are too varied and contextual for the compilation to make sense.

What if, instead, we asked faculty to pick one thing they care about – critical thinking, art criticism, research ethics, whatever – and asked them to spend a year researching it? And they get control over how the research is formatted and looks. It could be uploaded to an on-line repository for sharing. Faculty are doing a lot of creative and interesting work in the area of student learning. This approach could capitalize on that. It also addresses the rather arbitrary issue of quantity: over 5-7 years, that’s a lot of research on student learning. Assessment administrators would call this assessment, but “research” would be a less threatening term.

There are two potential problems with this kind of process. The first is convincing an accreditation peer reviewer or agency official to take a different approach – one that relies more on creativity and one outcome a year, and less on standardization and compliance – which might be a hard sell. The second is change. Organizational processes that have been in place for many years provide certainty. Change is slow, effortful, and uncertain (like learning). It’s easier to fill out a mind-numbing and non-useful grid than to change a process. And it requires less thought.

Distinguish between student learning assessment and program evaluation. Student learning outcomes assessment looks at what students learn and do. Program evaluation looks at what the program does (space allocation, staffing, budgeting, etc.). Student learning assessment projects can be used in program evaluations and reviews. But a program evaluation should not exclusively focus on student learning or be evaluated solely on that criterion. Teaching and learning are two different activities. Faculty don’t have total control over learning, only over what they teach. No one should be held accountable for something over which they have only partial control.

Perhaps accountability and compliance can be framed as program evaluation or program review. All of the standards, frameworks, forms, and templates can be used in a program evaluation. Since program evaluation has more to do with evaluation criteria than disciplinary criteria – and who can argue with program evaluation? – make program review the primary accountability and compliance vehicle.

This frees up student learning outcomes assessment. Provide few, if any, standards or guidelines – four or five at most. Make learning outcomes assessment an addendum to the evaluation. This would respect faculty disciplinary expertise and maybe enhance engagement with the process.

Be prepared for what’s coming next. Unlike higher education faculty, K-12 teachers are held accountable for what students learn and the value they add. K-12 education has been dealing with this side of assessment for decades. It would be naive to think it’s not coming to higher education. Big data sets about faculty productivity and graduate salaries already exist. Value-added measures should be next. Someday, people will be able to quantify how much value a faculty member adds to a graduate’s salary and other labor market outcomes. The data’s already there, just waiting for someone to match it.

There’s still a narrow window to get ahead of this if we engage in genuine, simple assessment and dialog about what it means (not two-way monologue, in which two people talk but no one really listens). Even though the Chronicle article provided almost no guidance in terms of positive next steps, it did offer a chance for reflection and dialog.

One Last Story

In my early days of assessment and institutional research, the focus was on compliance combined with quality improvement processes. It appears it still is in many places.

Getting faculty to “comply” and submit assessment reports was very difficult. And what was submitted to the assessment committee was subpar and boring. Some of the standardized grids were in 8-point font and difficult to read.

A few years in, I attended an in-house retention symposium. Faculty and staff presented on strategies for learning and student success at the classroom, program, and institutional levels. I was blown away by the quality of the research and the creativity.

It was all what I would call assessment research, and no one had ever submitted it to the assessment committee. When I asked a group of faculty why they didn’t submit this creative work to the assessment committee, the reply was: “well, they never asked, and this work isn’t assessment.”

A vast majority of faculty want to do good work. And they’ll share it, if they are asked in the right way. But I learned that most faculty will do anything you ask them to do, but almost nothing they are told to do. This Chronicle article made me think that perhaps it’s time, as a discipline, to engage in better dialog with faculty and administrators about assessment, make positive changes in how we organize assessment practices, and do a better job at telling these stories to the public, including accreditors and policy makers.


Incorporating Design Principles in Writing Student Learning Outcomes

Modern Design Principles

Smartphones, tablets, and e-readers have revolutionized how we consume and create information. It’s based on a simple formula:

simplicity + user customization = engagement

Take my e-reader, pictured below. It comes in only black or white. To get started, I just turned it on, adjusted a few settings, and was good to go. Over time, though, I added extensions, created folders, personalized settings, and customized the Kindle to meet my specific needs.

[Image: my Kindle e-reader]

Simplicity in design coupled with customization of experience is why today’s smartphones and e-readers are so engaging. When I look at another person’s smartphone or e-reader, though, it’s kind of weird. Although the other person’s device looks exactly the same, the settings, layout, and overall experience are not. Anyone who has looked for something on their spouse’s or partner’s smartphone knows the feeling.

The design of today’s e-readers and smartphones is intentional. The idea is that simpler, leaner design is easier to use and feels more genuine. And by allowing users to customize and personalize the device, engagement with it increases.

Design Principles in Learning Outcomes

Applying the design principle of simplicity + user customization = engagement to assessment and evaluation could simplify things and lead to greater engagement.

Like a smartphone, tablet, or e-reader, the process for writing student learning outcomes can start with a few standard features, as shown below. Then, faculty and staff can customize from there. 

  1. Answer questions about your program, course or activity.
  2. Select a verb and link it to an activity.
  3. Write the final outcome.

Feature 1. Start with questions. 

  • Affective domain: “What does your program want students to value or care about?”
  • Cognitive domain: “What does your program want students to know?”
  • Psychomotor domain: “What does your program want students to be able to do?”

If you don’t know anything about the domains or need a refresher, that’s okay. You can watch this video.

Example: Masters Degree in Assessment & Evaluation Program

  • Assessment question 1 (cognitive): Students should know how to engage stakeholders.
  • Assessment question 2 (cognitive, somewhat psychomotor): Students should know how to write goals and outcomes.
  • Assessment question 3 (affective): Students should be able to identify and question their own values and how those values guide assessment research.

Feature 2. Select the activity or assignment and link it to a verb.

Click on this sheet below to see what activities or assignments work well with a particular verb.

Example: Masters Degree in Assessment & Evaluation Program

  • Assessment question 1 (cognitive): Students should know how to engage stakeholders.
  • Assessment question 2 (cognitive, somewhat psychomotor): Students should know how to write goals and outcomes.
    • Activity/assignment: Students write learning outcomes using the ABCD method.
    • Potential verbs: Write, produce, demonstrate, generate
  • Assessment question 3 (affective): Students should be able to identify and question their own values and how those values guide assessment research.
    • Activity/assignment: Reflecting on one’s values and their relationship to research epistemology.
    • Potential verbs: Reflect, justify, adjust, modify, defend, adapt

Feature 3. Write the outcomes.

Example: Masters Degree in Assessment & Evaluation Program (using the ABCD method as an example)

  • Assessment question 1 (cognitive): Students should know how to engage stakeholders.
    • Activity/assignment: Students use a stakeholder identification and analysis grid to identify and analyze stakeholders in an assessment plan.
    • Potential verbs: Identify, classify, prioritize, compare, contrast
    • Final outcome: Using a stakeholder identification grid (condition), students (audience) will identify stakeholders and integrate their needs into the design and analysis of assessment(s) as well as the reporting of results (behavior).
  • Assessment question 2 (cognitive, somewhat psychomotor): Students should know how to write goals and outcomes.
    • Activity/assignment: Students write learning outcomes using the ABCD method.
    • Potential verbs: Write, produce, demonstrate, generate, formulate
    • Final outcome: Given an ABCD template and learning taxonomies (condition), students (audience) will identify appropriate verbs and produce three learning outcomes (behavior).
  • Assessment question 3 (affective): Students should be able to identify and question their own values and how those values guide assessment research.
    • Activity/assignment: Reflecting on one’s values and their relationship to research epistemology.
    • Potential verbs: Reflect, justify, adjust, modify, defend, adapt
    • Final outcome: Given a description of three major paradigms – positivism, constructivism, and pragmatism – (condition), students (audience) will identify a paradigm consistent with their personal worldview and articulate how they will adapt paradigmatic assumptions to different assessment and evaluation contexts and needs (behavior).
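
If it helps to see the three features as one structure, here is a minimal sketch in Python; the class and field names are illustrative assumptions, not part of any assessment standard:

```python
# A minimal sketch of the three-feature process as a data structure.
# Class and field names are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class OutcomeDraft:
    domain: str                  # Feature 1: affective, cognitive, or psychomotor
    question: str                # Feature 1: what should students know, value, or do?
    activity: str = ""           # Feature 2: the activity or assignment
    verbs: list = field(default_factory=list)  # Feature 2: candidate verbs
    outcome: str = ""            # Feature 3: the final written outcome

draft = OutcomeDraft(
    domain="cognitive",
    question="Students should know how to write goals and outcomes.",
    activity="Students write learning outcomes using the ABCD method.",
    verbs=["write", "produce", "demonstrate", "generate"],
)
draft.outcome = (
    "Given an ABCD template and learning taxonomies (condition), students "
    "(audience) will identify appropriate verbs and produce three learning "
    "outcomes (behavior)."
)
print(draft.outcome)
```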

Start Customizing

Now it’s time to start customizing and writing the outcomes in a way that fits your specific disciplinary and programmatic needs, values, research orientation, culture, and history. Some methods and tips are summarized below.

Summary & Tips

ABC…D Method

I did not include the degree of learning in the final learning outcomes above. I have found the degree part of the ABCD model useful only in summative assessment. Here’s an example:

Upon completion of the art history program, 80% of students will be able to identify the approximate year of a painting.

The first issue with these kinds of outcomes is that the cut-offs are seemingly arbitrary. Why is 80% better than 75%? What is so special and magical about 80%? The second issue is use of the results. If 90% of students in the art history program meet the goal, it provides an incentive for the program to ignore the outcome and move on. If only 75% show competency, it suggests a problem that may not exist.

The C.A.S.E. Method (Copy and Steal Everything)

The first outcome dealing with stakeholders is from the ASK student affairs assessment standards. There’s no sense writing a new outcome when a good one already exists. I wouldn’t recommend using an outcome without attribution, however.

Learning Outcomes Focus on What Students Actually Do, Not What They Possess

Adelman points out that verbs need to be operationalized. Verbs and phrases like understand, become familiar with, recall, or be capable of should be avoided because they describe “internal cognitive dynamics” that are difficult to assess. And just because a verb describes an action does not mean it can be operationalized for assessment.

Learning Outcomes Put the Focus on Students, Not Teaching

Learning outcomes should focus on what students do, not classroom activities or what we teach. “Students will be introduced to the topics of abnormal mental behaviors…” describes what we do as instructors, not what students do.

Learning Outcomes Are Focused on the Present or Near Present, Not the Future or the Past

Adelman also asserts that learning outcomes should focus on what students do now, not in the future or the past. As an example, we all want students to be able to discuss an important topic or idea after they graduate. The first problem is that those students aren’t with us anymore and are difficult to assess; it’s hard to isolate our impact on a graduate’s ability to discuss an important issue 10 years after graduation. The second problem is that discuss could be interpreted as describing a teaching activity.

Learning Outcomes Describe the Learning that Results from the Activity, not the Activity Itself

An outcome like “students will participate in a hazardous materials training seminar” is fine if the goal is to measure participation only. As written, though, this outcome says nothing about what students will learn from the training seminar.

Focus on How to Operationalize Learning Outcomes, Not Arbitrary Distinctions

Some people are really picky about the differences between learning outcomes, objectives, goals, targets, indicators, outputs, etc. I have yet to see a standard approach to the definition of these words. Every textbook and author has a different definition.

Presenting a detailed, prescribed definition of each is confusing enough. Asking people to write varying levels of outcomes and outputs is even worse. The only distinctions that matter, in my experience, are 1) the difference between outcomes and outputs and 2) the level of assessment: classroom, program, or institutional. In the context of writing learning outcomes, they are all statements of intentionality, and the labels don’t matter.

References & Documents


Learning Goals, Objectives, & Outcomes: They’re All the Same

Assessment has always struggled with language and definitions. One area of confusion is the distinctions between different statements of intentionality. These usually include goals, objectives, outcomes, targets, performance indicators, and so forth. When incorporated into one plan, the result can sometimes look something like this:

[Image: a plan combining program goals, objectives, and outcomes]

Most People Aren’t That Interested in Assessment

For assessment experts, the distinctions between goals, objectives, and outcomes are obvious. Goals are broad, objectives more specific, and so forth.

The problem is that most people aren’t assessment experts, have no desire to be assessment experts, and don’t care about the distinctions. Emphasizing these distinctions only reinforces the idea that assessment is a complex exercise in bureaucratic compliance, not improvement.

Keeping It Simple

While most people may not be that interested in assessment, they do care about their students and are intellectually curious about how things are going. They also have an intuitive sense about what a goal is. These considerations are what should drive engagement with assessment, not precision in statements of intentionality or filling out a grid.

If we really want to engage people in assessment, we should consider eliminating the perception of arbitrary distinctions, when possible, and focus on intentionality. It doesn’t matter whether statements of intentionality are defined as goals, objectives, outcomes, targets, or aims.

Distinctions about the claims people make when writing statements of intentionality, however, should be considered.

Distinction 1: Outcomes and Outputs

Deborah Mills-Schofield states “outcomes are the difference made by outputs. Outputs, such as revenue and profit, enable us to fund outcomes. But without outcomes, there is no need for outputs.”

Here are two statements that claim to be student learning outcome statements:

  • Students will be introduced to the topics of abnormal mental behaviors…
  • Students will participate in a hazardous materials training seminar.

The problem in the first statement should be obvious. It says almost nothing about what students will learn. It is focused on what the instructor will do, not the student.

There’s no reason the first statement can’t be called a goal, objective, outcome, target, or aim. It seems pointless to ask people to make that distinction; they should focus on the claim being made, not the definition.

This first outcome statement describes what instructors do, not what students do. Although the intent of the outcome may be student learning, the statement, as written, evaluates whether an instructor introduced students to the topic, not whether they learned it or not.

The second outcome statement is an improvement in that it describes what the student will do, as opposed to the instructor. If all one wants to do is assess the number of students who attended the seminar, then the statement is fine. It would be wrong, however, to claim or assume that students will learn something from the seminar just by attending it.

Distinction 2: Levels of Assessment

Program and institutional goals will almost always be broader than classroom or unit goals. Rather than asking individuals to write multiple levels of goals, objectives, or outcomes, it would be better to ask them how their goals align with larger institutional or programmatic goals. This exercise is more intuitive, and it helps individuals see how their course or activity contributes to a coherent experience for students.
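
One way to make the alignment question concrete is to record the mapping directly. A minimal sketch, with hypothetical course outcomes and program goals:

```python
# A minimal sketch of an alignment map from course outcomes to program goals.
# All names here are hypothetical illustrations.
alignment = {
    "write a lab report": ["communicate scientific findings"],
    "run a two-sample t-test": ["apply quantitative reasoning"],
    "present a group project": ["communicate scientific findings", "collaborate"],
}

# Which program goals does this course contribute to?
contributes_to = sorted({goal for goals in alignment.values() for goal in goals})
print(contributes_to)
```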


Wayfinding & Curriculum Mapping in Higher Education

A curriculum map is a visual representation of how a program’s activities or courses lead to a coherent learning experience for students. (Principles of curriculum mapping can also be applied to the co-curriculum).

Wayfinding: Why Curriculum Mapping is Important

An example from the field of wayfinding illustrates the importance of curriculum mapping. Wayfinding refers to how people orient themselves in a space and use directional cues, like signs or walking paths, to navigate their environment. The design of a space has a significant impact on one’s experience and perceptions of their environment. Higher education has been leveraging that idea for decades.

The blue line in the photo below represents how I orient myself to the space at my university. My orientation to the campus and my experiences are centered on assessment and evaluation.

Students, however, have a much different perspective and university experience, as shown by the red line. Whereas my orientation is centered on my discipline, students experience the university as a whole. Curriculum mapping helps us see how students navigate their experience and lets us create a more cohesive, whole curriculum.


My Wayfinding (blue line) — Student Wayfinding (red line)

Types of Curriculum Maps

There are three types of curriculum maps: simple, embedded, and developmental. The following images highlight examples of all three. The last part of this post presents examples of how to use curriculum mapping.

Simple Curriculum Maps

[Images: examples of simple curriculum maps]

Embedded Curriculum Maps

Embedded curriculum maps show where learning outcomes are addressed and assessed at specific points in the curriculum.

[Image: example of an embedded curriculum map]

This curriculum map flips the assessments and courses.

[Images: embedded curriculum maps with courses and assessments flipped]

Developmental Curriculum Maps

Developmental maps show student growth over time. A short list of developmental frameworks is shown below:

  • Introduced, Developed, Mastered
  • Introduced, Reinforced, Practiced, Demonstrated
  • Low Emphasis, Medium Emphasis, High Emphasis
  • Introduce, Emphasize, Measure
  • Instruction, Practice, Feedback

[Images: examples of developmental curriculum maps]

Using Curriculum Maps

Example 1. Art History Program Curriculum Map

What conclusions can you draw from this curriculum map?

[Image: Art History program curriculum map]

Conclusion 1 – Museum administration and budgeting is not covered anywhere in the common Art History program courses. It should probably be removed as a program outcome – there’s no sense having it as an outcome if it’s not being taught to all students. Faculty can still teach budgeting and administration in their individual courses, but no claims can be made about program graduates possessing this skill. If program faculty feel administration and budgeting are important, then they should be reinforced in the common courses.

Conclusion 2 – Attendance at college art events is not addressed in any of the outcomes. This does not necessarily mean it should be removed as an activity, however. It just means program faculty should have a conversation about this activity’s role in the program.

Example 2. Biology Program Curriculum Map

What conclusions can you draw from this curriculum map?

[Image: Biology program curriculum map]

Conclusion 1 – Students are expected to have mastered laboratory skills by the end of the program. Faculty may be frustrated by poor performance and blame the students. However, the curriculum map in this hypothetical example reveals that students were never introduced to laboratory skills. It is unfair to assess students on something they were never taught. Laboratory skills should be introduced and reinforced earlier in the curriculum.

Conclusion 2 – Students were introduced to major cellular processes and are expected to master them by the end of the program. However, they were never given the opportunity to practice this skill.

Conclusion 3 – Students were introduced to careers in biology. However, career awareness is never addressed in the curriculum beyond the introductory course. Faculty should have a discussion about this outcome’s place in the curriculum.

Example 3. Student Affairs Curriculum Map

What conclusions can you draw from this curriculum map?

[Image: Student Affairs curriculum map]

Conclusion 1 – Civic responsibility is not covered in the student affairs curriculum. Staff have a decision to make. Should they remove civic responsibility as an outcome? Or, should it be reinforced and assessed in the curriculum?

Conclusion 2 – Global awareness is only addressed in the housing survey. Maybe that’s enough. But it won’t cover students who do not live in the residence halls.

Conclusion 3 – Two assessments are not tied to any of the outcomes: campus housing’s annual social media use survey and the student organizations’ annual textbook cost survey. If an assessment is not being used for improvement, then you’re wasting staff and student time. The ultimate value of assessment lies in its use. Staff should consider removing these assessments.
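
These gap checks are mechanical enough to automate. Here is a minimal sketch, assuming the map is stored as a dictionary from outcomes to the activities or assessments that address them; the names come from the hypothetical examples above:

```python
# A minimal sketch of curriculum-map gap checking, using the hypothetical
# student affairs example above. The map is assumed to be a dictionary from
# each outcome to the activities or assessments that address it.
curriculum_map = {
    "civic responsibility": [],
    "global awareness": ["housing survey"],
    "leadership": ["officer training", "housing survey"],
}
all_assessments = {"housing survey", "officer training",
                   "social media use survey", "textbook cost survey"}

# Gap 1: outcomes nothing addresses (here, civic responsibility).
uncovered = [o for o, items in curriculum_map.items() if not items]

# Gap 2: assessments tied to no outcome (candidates for removal).
used = {item for items in curriculum_map.values() for item in items}
unused = all_assessments - used

print("Outcomes with no coverage:", uncovered)
print("Assessments tied to no outcome:", sorted(unused))
```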

Benefits of Curriculum Mapping & Recommendations

  • Curriculum maps are meant to be discussed and shared. They are an impartial and objective (to the extent that is possible) way of highlighting curricular gaps.
  • Curriculum maps highlight unproductive practices. If an assessment is not supporting the curriculum or being used, then it should be considered for removal.
  • Curriculum maps help programs set priorities and plan for the future.
  • Curriculum maps can communicate expectations to students.
  • Curriculum mapping is meant to be inclusive and include multiple viewpoints.
  • Curriculum mapping is not meant to prove someone wrong.

Resources


Assessment and Stories We Tell Students

Do new, first-year college students need to study 2-3 hours per credit hour in a week to be successful? The short answer is: no. Research tells us that most first-year students spend about one hour or less per credit hour studying and preparing for class and do just fine, depending on your definition of “fine,” of course.

This is the central premise of Academically Adrift. The idea is that most students see college as a pathway towards economic security or a rite of passage into adulthood. Thus, college students invest their time in activities that have little to do with learning.


If the premise of Academically Adrift is accurate, then traditional assessments, like grades, standardized tests, or degrees, are not assessing learning but probably other things, like managing the college experience or skills related to persistence.

Data from a variety of sources, including NSSE, the CLA, and grade-inflation research, show that students are still getting good grades and graduating with less effort, at least as measured by hours spent studying and preparing for class.

Telling most first-year students they need to study 2-3 hours per credit hour to be successful in college isn’t accurate and probably harmful. There are two problems with this kind of messaging:

  1. It exaggerates how much time academically competent, and even successful, students actually spend studying. Communicating an unrealistic standard reinforces the legitimacy of peers and other informal sources of information over more legitimate ones (like advisors and faculty).
  2. It sets up time as the constant and learning as the variable. According to the flipped teaching model, time should be the variable and learning the constant. A better strategy would be to communicate what students will do and/or the outcomes of their college experience.

How does assessment inform what we should tell new first-year students? First-year students should receive two types of messages: one that legitimizes the expertise of faculty and student advising staff, and another that de-emphasizes a fixed-intelligence mindset. They should receive messages like this:

Message 1: “You get out of college what you put into it. If you want to study 15 hours a week, and you’re fine with a 2.5-3.0 GPA, then go for it. Keep in mind, though, that your effort will need to increase as you progress through college, and in particular your major.”

Message 2: “You may be disappointed, in spite of all your hard work. Keep in mind, though, that intelligence is not fixed. Frustration with learning something new and learning from set-backs are all natural parts of the learning process. Utilizing the services we provide and listening to your professors can help you grow and be a more competent and efficient learner.”

If I remember anything from my first-year orientation, it’s two messages: you will need to study 2-3 hours per credit to be successful, and look to your left, look to your right. Neither was very helpful. A better message might have been: if you work hard and listen to your instructors and university staff, you will likely be fine. And if you’re not here next year, for whatever reason, you’ll be doing something else.


Bunking & Debunking Altucher’s 15 Essential Skills They Don’t Teach in College

According to one internet blogger, there are 15 essential skills for making money. They include the usual things like networking, motivation, creativity, etc.

While the skills themselves are fine (who can disagree with creativity?), the claims about them are dubious and have almost no evidence to back them up. The two claims are:

1. You don’t need to go to college to get the 15 essential skills.

2. Colleges aren’t teaching these skills (or, at least, students aren’t learning them).

Claim 1: You don’t need to go to college to get the 15 essential skills. 

One can make a good argument that you don’t need to go to college to learn. Traveling, reading War & Peace, and conducting home experiments can all take place outside of a classroom.

The claim is dubious from an earnings perspective, however, because skills almost never translate into higher earnings unless a credential is attached to them.

There are plenty of successful people who don’t have degrees. And almost all of them come from wealthy backgrounds. Steve Jobs and Bill Gates didn’t get degrees, but they did have wealthy parents and access to college. Most people don’t have the time or money to be unemployed and tinker in parentally subsidized garages. A college degree is a much less risky bet.

In today’s U.S. economy, the evidence is pretty clear: family background and credentials matter more than skills. Whether skills should matter more is another conversation. If you want to make more money, in general this is what you need to do:

  1. Be born to rich parents.
  2. Get a college degree. Think college is too expensive and that it’s not worth it? Think again. The rate of return from getting a degree is still higher than if you didn’t go to college.
  3. Be mobile.

Skills matter, but credentials trump skills almost all the time. And while it stinks that wealthy kids get a huge head start, education still provides a pathway to a credential and higher earnings for most people.

So, the claim that a majority of people don’t need college to make more money is dubious, at best.

Claim 2: Colleges aren’t teaching these skills (or students aren’t learning them).

There is little evidence to support Altucher’s second claim, and a lot to counter it. An academic research library search pairing each skill the author claims colleges don’t teach with the phrase “college learning outcomes _____ skills” returned the following numbers of academic studies:

  • college learning outcomes presentation skills: 783,493 research articles.
  • college learning outcomes quantitative literacy skills: 223,947 research articles.
  • college learning outcomes philanthropy and civic engagement skills: 80,664 research articles.

Altucher has some good points about learning, but investment decisions should be based on evidence and realistic outcomes, not anecdote or opinion.

If your goal is to make more money, your best bet is to learn skills while in college, not out of it. It doesn’t even have to be a four-year degree. Economic returns to associate’s degrees, training certificates, and other short-term credential-granting programs are still quite high.

Of course, if you don’t want to go to college, that’s fine too. Plenty of people without college degrees have happy, satisfying lives. Just make sure your expectations match the outcome.


Illinois High School Graduates and Out-of-State Colleges

The map below shows where Illinois high school graduates enroll at public (blue) and private (red) four-year universities.

[Interactive Tableau map: enrollment of Illinois high school graduates at out-of-state four-year universities]

The first story is that a lot of Illinois high school graduates leave Illinois for college – the most, in fact, of any state except New Jersey.

This matters because when a student leaves Illinois for college, they are less likely to return and work in Illinois. The net economic impact of losing a student to another state is about $225,000 over the course of a lifetime (1) in income tax revenues alone. This does not include the negative impact on the general economy in terms of lost consumption and spending.

Companies lose in terms of their ability to attract an educated and skilled workforce. Taxpayers lose the investment they made in students in 13 years of K-12 education. Other states win because they are able to develop a highly educated workforce with little investment of their own.

The second compelling story is that a lot of students leave Illinois specifically for public universities. In the last 20 years, enrollment of Illinois high school graduates at in-state four-year public universities has remained relatively flat, but it has nearly doubled at out-of-state four-year public universities. This should send a clear message about what Illinois residents think about the state of public higher education in Illinois.

Strategies to keep Illinois residents at in-state universities include:

  • Mission differentiation in the four-year public sector, creating institutions with their own unique niches (small and liberal-arts focused; technical or engineering focused; large and research focused; etc.). Institutional diversity provides more options to Illinois residents. A one-size-fits-all approach precludes access to distinctiveness and value.
  • More certainty in the higher education budget for direct institutional subsidies and student financial aid.
  • Financial aid incentives for in-state residents. However, since Illinois residents are assumed to be paying more for public institutions in other states, economic incentives may not be effective in retaining people who are not sensitive to price and who focus more on perceived educational quality.
  • A final strategy would be to recruit more out-of-state students or out-of-state college graduates to Illinois. If Illinois residents are unwilling to invest in strategies that would keep talented high school graduates at in-state colleges, this option could be less expensive. It would probably require, however, more public-private coordination and cooperation, expertise in economic development and education, and leadership.
(1) Adjusted for inflation from the $162,000 figure in 2000.
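
For readers who want the footnote’s adjustment spelled out, here is a minimal sketch; the index values are illustrative placeholders, not the actual price-index figures behind the adjustment:

```python
# A minimal sketch of the footnote's inflation adjustment. The index values
# are illustrative placeholders, not the actual figures used.
def adjust_for_inflation(amount, index_then, index_now):
    """Scale a dollar amount by the ratio of two price-index values."""
    return amount * index_now / index_then

# Roughly 39% cumulative inflation turns $162,000 (in 2000) into ~$225,000.
print(round(adjust_for_inflation(162_000, index_then=100.0, index_now=138.9)))
```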

End Planning and Start Storytelling with Learning Outcomes

Confusion of goals and perfection of means seems, in my opinion, to characterize our age (Einstein).

Imagine the ideal student 10 years after your class. What do you want them to know? What are your hopes for them? What do you imagine they will be doing?

We rarely think about what our students will be like in the future. Most of us are focused on what we are doing now.

Assessment professionals (like me) and accreditation people will usually say: “Begin with what you care about and what you find meaningful.” And then they hand you a guidebook and tell you the following:

“Make sure the outcome is measurable (or SMART). Make sure the outcome fits into a cycle. Use the right verbs. Align the learning outcomes with programmatic, departmental, and college goals.”

Basically, all of the meaningfulness is sucked out by the process that’s handed to you. Why do we make assessment painful and mind-numbing? It doesn’t have to be an exercise in arbitrary standard-setting, bureaucratic control, or anxiety-inducing verb-selection, but that’s how a lot of people feel about it. If you read assessment books or look at on-line college guides, they’re all pretty much the same. Best practices are encouraged because they are familiar, which rarely leads to anything new.

There are people who hate assessment, but I’ve never met anyone who doesn’t care about learning. Writing outcomes that are meaningful to you and accountable to others is possible. If you tell your classroom’s or program’s story well, you shouldn’t even have to really worry about accountability.

A creative and non-conforming approach that puts more focus on storytelling, as opposed to calculating and planning, can perhaps be a better and more engaging way to write learning outcomes (1).

Step 1. Start with Questions

Rather than thinking of writing learning outcomes as a planning exercise, think of it as storytelling. Margot Leitman provides a good method for writing stories. Start with questions (2):

I would like to know ____________ about my program.
I would like to know ____________ about our students.

You may get pushback from methodological fundamentalists, accreditation reviewers, and strategic planners, but don’t worry about them for now. No one from the evaluation police is going to arrest you for being different. Here are some more ideas:

We would like to know __________ about our students.
We are curious about  _______________.
Our students seem to be really good at ______________.
Our students are really scared of _________.
Our students pretend to care about __________.
Our students worry the most about __________.
We can’t believe our students think ___________.
Our students’ biggest regret is ___________.

Be practical. Focus on what you have at least a moderate level of control over. Finding a soul mate and paying off student loans may be the most anxiety-inducing things students deal with, but there isn’t really a lot you can do about them. The goal is utility.

Here are some story ideas from an imaginary environmental sustainability program:

I would like to know how knowledgeable students are about using statistics to solve real-world problems.

Our students worry the most about having to take the required statistics course.

Our students seem to be really good at using game simulations.

Based on the story ideas above, maybe an outcome should focus on quantitative knowledge. Another outcome could focus on encouraging game simulations. With storytelling, these ideas were selected because I find them meaningful, not because a planning exercise produced them.

Step 2. Connect the Story Ideas to Learning Domains

Brainstorming and being creative is fun, but you do have to get a little organized with learning outcomes. With the previous questions in mind, think about what kinds of knowledge you want your students to learn. Education experts organize learning into three domains:

Affective Domain: What do you want students to care about or value? (Feelings, Emotions, Attitudes)
Cognitive Domain: What do you want students to know? (Intellectual)
Behavioral Domain: What do you want students to be able to do? (Physical)

This is where we start to use storytelling to make the connection between what we care about and what we want students to learn (3).

I would like to know how knowledgeable students are about using statistics to solve real-world problems. This looks like a problem in the cognitive domain. Perhaps a learning outcome should focus on assessing how students apply statistics or quantitative thinking to real-world problems?

Our students worry the most about having to take the required statistics course. This looks like a problem in the affective domain, so maybe a learning outcome could get at the idea of student confidence or attitudes towards math?

Our students seem to be really good at using game simulations. This looks like an outcome in the behavioral domain. Perhaps an outcome should be developed that looks at whether and/or why game simulations are an effective learning strategy?

Step 3 – Write the Learning Outcome Statement

Some people get really sophisticated with learning outcomes. This paper, which is quite good, states that learning outcomes statements are complete, Kantian sentences. I have no idea what Kantian means, so I googled it. It’s good advice.

I agree that verbs and syntax matter. But it takes practice and time. Like strategic planning, they also have the potential to take away from creativity. The goal is to find a balance. I think balance can be achieved if you use storytelling techniques, as opposed to planning techniques. Planning puts too much emphasis on calculation, and not enough on improvement (1). Balance can be achieved using storytelling because the learning outcomes are attached to already-existing narratives that are meaningful to you, not someone else.

I use the term learning outcomes, but choose whatever you want: goals, objectives, outcomes, targets…whatever works. They’re all statements of intention.

When you have the domain, match it to the right verb. Here’s the first story narrative again:

I would like to know how knowledgeable students are about using statistics to solve real-world problems. This looks like a problem in the cognitive domain. Perhaps a learning outcome should focus on assessing how students apply statistics or quantitative thinking to real-world problems?

By referencing this table (p. 2), I can see “apply” as a level in the cognitive learning area. A list of verbs is next to the level. I will build my learning outcome around the verb that best articulates what I am getting at:

Students will be able to construct an advocacy report written for a general audience on the economic benefits of bicycle commuting. 
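
If you like keeping the verb lists at hand, a sliver of such a table can live in a lookup structure. This is a minimal sketch; the levels and verbs are assumed Bloom-style examples, not the contents of the table linked above:

```python
# A tiny, assumed slice of a cognitive-domain verb table (Bloom-style levels).
# These levels and verbs are illustrative, not the linked table's contents.
cognitive_verbs = {
    "remember": ["list", "recall", "identify"],
    "apply": ["construct", "demonstrate", "solve"],
    "evaluate": ["justify", "critique", "defend"],
}

# Pick the level the story narrative points to, then scan for the best verb.
print(cognitive_verbs["apply"])  # ['construct', 'demonstrate', 'solve']
```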

A more structured method to writing learning outcomes is the ABCD Method (Audience-Behavior-Condition-Degree).  I like this method because it requires the program to make learning explicit and operationalizes the assessment.

Given the results from their environmental impact study (condition), students (audience) will be able to construct an advocacy report (behavior) written for a general audience (degree) on the economic benefits of bicycle commuting. 
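
For those who want the ABCD structure as a reusable template, here is a minimal sketch; the helper function, and its placement of the degree clause at the end, are my own illustrative assumptions rather than a standard implementation of the method:

```python
# A minimal sketch of assembling an ABCD-style outcome sentence.
# The helper and its clause ordering are illustrative assumptions.
def abcd_outcome(audience, behavior, condition, degree=None):
    """Assemble an ABCD learning outcome from its labeled parts."""
    parts = [f"{condition} (condition),", f"{audience} (audience)",
             f"will {behavior} (behavior)"]
    if degree:
        parts.append(f"{degree} (degree)")
    return " ".join(parts) + "."

print(abcd_outcome(
    audience="students",
    behavior="be able to construct an advocacy report on the economic "
             "benefits of bicycle commuting",
    condition="Given the results from their environmental impact study",
    degree="written for a general audience",
))
```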

The best part about using storytelling, as opposed to planning, to create learning outcomes is that the learning outcome is genuine. A lot of people focus on transparent or sustainable learning outcomes. I have no idea why sustainability in learning outcomes is a worthwhile goal, and transparent outcomes are written for accreditation and compliance. Outcomes can be posted anywhere – marketing materials, websites, catalogs, reports – but that doesn’t make them useful. Being transparent is just an activity; it says little about being honest or truthful.

Genuine outcomes are written to help students learn and programs improve. The best storytelling advice is to tell the truth.

*******************
(1) Assessment has its roots in empirical and mostly quantitative analyses of learning. In the 1980s, it was co-opted by the planning and improvement field, and remnants of this movement are still around. Assessment plans are fine, but there’s a problem with using assessment as a driver of planning: “The problem is that planning represents a calculating style of management, not a committing style. Managers with a committing style engage people in a journey. They lead in such a way that everyone on the journey helps shape its course. As a result, enthusiasm inevitably builds along the way. Those with a calculating style fix on a destination and calculate what the group must do to get there, with no concern for the members’ preferences…calculated strategies have no value in and of themselves…strategies take on value only as committed people infuse them with energy” (H. Mintzberg, Harvard Business Review, January-February 1994, p. 109). Plans should certainly have some intentionality and direction, but learning outcomes aren’t the same as strategic goals.
(2) This was also proposed as an idea-generating process by Patton in Utilization-Focused Evaluation (1978).
(3) Focus on the story and meaning first, and the learning taxonomy second. Some people will try to develop learning outcomes that cover all three of the domains. There’s nothing wrong with having all cognitive outcomes, or all affective outcomes. Additionally, the three domains are not mutually exclusive. There can be overlap; don’t feel like a learning outcome has to fit a single domain. For example, a student with crutches who is learning to write will encounter all three: cognitive (writing), behavioral/psychomotor (interruptions due to adjusting crutches), and maybe affective (frustration or a feeling of accomplishment).

Best Practice and Kind-of-Best-Practice Guidelines for Writing Learning Outcomes

References appear at the end of this post.
  1. Learning outcomes should focus on what students learn, not what we teach.

Students will be introduced to the topics of abnormal mental behaviors in their patients.

The problem with this outcome is that it is focused on what the teacher does, not what the student will learn.

Proposed fix: Students will be able to document abnormal mental behaviors in their patients.

  2. A strategic or effectiveness goal is not a learning outcome.
  • Students will be satisfied with academic advising (H).
  • The program will witness an 80% retention rate from fall to spring (H).
  • 70% of program graduates will enroll in graduate programs (H).

These are program effectiveness outcomes, not learning outcomes (H). They only indirectly address learning. One could make a claim that if 70% of program graduates are accepted to a graduate program, some kind of learning is occurring, but that’s a very indirect claim. Additionally, it’s difficult to see how this kind of information helps a program improve.

Evidence of learning falls on a spectrum from indirect to direct. It is perfectly appropriate to evaluate your program based on program effectiveness indicators; it’s a stretch, though, to use them to make claims about learning.

  3. Learning outcomes should focus on the learning resulting from the activity and not the activity itself (HB, p. 99).

Students will study at least one non-literary genre of art (HB, p. 99).

Maybe your goal is to assess whether students study. If so, that’s fine. You can even measure it based on the number of hours a student spends studying or the number of pages they read.

However, this outcome evaluates a process, not learning. Thus, it is only indirectly related to learning and probably shouldn’t be labeled as a learning outcome at the course or program levels.

  4. Learning outcomes should not be too broad and should ideally be discipline-specific.

Students will understand how to communicate well.

Here are the problems with this outcome:

  • Most of us aren’t communications experts, aren’t experts in evaluating communication, and can’t control all of the factors that determine how well students communicate.
  • Communication is far too broad. Is this outcome assessing written or oral communication?
  • The word ‘understand’ refers to an internal, covert state of mind, not something students do (see guideline #5).
  • This outcome could apply to every class, activity, or program on campus.

Proposed fix: Given a sentence written in the past or present tense, the student will rewrite the sentence in future tense with no errors in tense or tense contradiction. (BC)

  5. Try to make learning outcomes measurable. Avoid verbs that are unclear or that describe covert, internal behavior, which is difficult or impossible to measure (URI, p. 3).

Students will develop an appreciation of cultural diversity in the workplace.

Students will value the role of statistics in the workplace.

These outcomes are laudable strategic goals or vision statements, and nothing said here should stop you from helping students learn about or value diversity in the workplace. They are not good learning outcomes, though.

As instructors, we have no idea what is going on inside students’ brains. That is because attitudes and knowledge are covert and internal to the students. Mager wrote that attitude objectives “are not specific descriptions of intent. Statements like these describe states of being; they do not describe doing” (p. 103). Words like “appreciation” and “value” are internal states of mind. Cliff Adelman states “we do not teach college students how to be conscious, and we do not award degrees on the basis of peripheral sensations (A, p. 10).”

That’s why we do assessment. Assessing learning through instruments like papers, demonstrations, artwork, or other activities makes student skills and knowledge overt and allows us to evaluate it. “One does not know a student has the ability to do anything until the student actually does it, for which point we use verbs that indicate what the student actually did” (A, p. 13).

Still, most of us desire that students develop some kind of values and character. Valuing diversity in the workplace is certainly an important goal or outcome for students. In light of the issues associated with assessing and evaluating this, I would consider making it a core value or part of the program mission or vision. There’s no obligation to measure and assess a core value – it stands on its own. Another option would be to make it a program effectiveness goal and measure it indirectly, through a survey or other activity.

A final option is to rewrite the learning outcome to operationalize what you want students to learn. Here are some basic examples, followed by a quick mechanical check (the ABCD model is a good framework for writing outcomes, but for simplicity’s sake these focus on verbs and basic outcomes):

  • Given a case study, students will produce a workplace inclusion plan.
  • Using a case study, students will be able to defend the economic benefits of workplace diversity.
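
Here is the quick check mentioned above: a minimal sketch (not from any of the references; the stem list is illustrative and far from exhaustive) that flags covert, internal-state words in a draft outcome:

    # Illustrative only: flag covert, internal-state words in a draft outcome.
    COVERT_STEMS = ("understand", "appreciat", "value", "know", "aware")

    def covert_flags(outcome):
        text = outcome.lower()
        return [stem for stem in COVERT_STEMS if stem in text]

    print(covert_flags(
        "Students will develop an appreciation of cultural diversity in the workplace."
    ))  # -> ['appreciat']

A flagged stem is just a prompt to ask whether the outcome describes doing or a state of being.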
  6. Try to avoid compound or double-barreled learning outcomes.

Students will be able to successfully venipuncture an arm and define legal issues related to phlebotomy.

Obviously, this should be two learning outcomes.

  7. Learning outcomes should be written in the present, not the past or future (A).

Program graduates will demonstrate the democratic ideal through service to their communities.

Even if you could track this, there really isn’t a lot you can do to influence graduates. Focus on what you can do now, in the present or at least the current semester.

  8. Learning outcomes should have an activity or assignment associated with them.

Given the symbol representing a particular isotope of an atom or ion, the student will be able to determine the number of electrons, protons and neutrons in that species eight out of ten times (BC).

This is a great learning outcome. If students have no opportunity to demonstrate this outcome, though, people may assume they haven’t learned the material. This is particularly the case for learning outcomes at the program level that rely on multiple courses or activities.

Ideally, you would want multiple assignments to provide information for one outcome. Consider this outcome: Students completing the Engineering program will score over 95% on a locally-developed examination (UC).

In this case, learning is assessed with only one instrument. Another problem with this outcome is that it dictates the assignment. That may be fine at the classroom level, but students should ideally have multiple opportunities to demonstrate competency toward a learning outcome at the program level.

(Kind of Guideline) 9. Learning outcomes should be aligned with institutional missions and goals.

Backward design is the idea that learning outcomes should start with the mission of the institution in mind, followed by college, departmental, programmatic, and course goals or missions. The course should then be delivered forward, feeding into the institutional mission. This is what it looks like:

design-backward-deliver-foward

I think this is a great model, but like most assessment frameworks, it usually doesn’t play out well in practice.

First, colleges and universities are just too internally diverse and variable. A Google image search of colleges of business and fine arts alone will show this. The institutional mission and goals have to be broad to accommodate everyone.

Second, backward design has always reminded me of an exercise in philosophical reductionism. It is like brewing multiple cups of coffee through one filter: the third cup from the same filter barely resembles the first one. By the time the institutional mission or goal gets filtered all the way down to the program and classroom levels, the outcome bears little resemblance to the institutional mission.

(Kind of Guideline) 10. Use the correct language of goals.

Some people are really picky about the differences between outcomes, outputs, goals, targets, and objectives. I don’t think it really matters – they’re all statements of intent. It’s more important to be aware of the differences between learning outcomes and program/effectiveness goals (see guideline #2).

(Kind of Guideline) 11. Cohort percent benchmarks and learning outcomes.

Upon completion of the art history program, 80% of students will be able to identify the approximate year of a painting.

These kinds of outcomes are fine for compliance and summative evaluation purposes, but they aren’t really helpful for program improvement.

The first issue with these kinds of outcomes is that the cut-offs are seemingly arbitrary. Why is 80% better than 75%? What is so special and magical about 80%?

The second issue is the use of the results. If 90% of students in the art history program meet the goal, it provides an incentive for the program to ignore the outcome and move on. If only 75% show competency, that suggests a problem that may not exist.

These kinds of outcomes are indicative of programs in a compliance-driven, summative assessment mode. Assessment fundamentalists who also serve as accreditation peer reviewers or state policy makers will like them, but I don’t see their value for improvement.


References

(A) C. Adelman, To Imagine a Verb: The Language and Syntax of Learning Outcome Statements, 2015.
(BC) T. Brumfield & S. Carrigan, Instructional Objectives Workshop Handout, 2011.
(HB) M. Huba & J. Freed, Learner-Centered Assessment on College Campuses, 1999.
(M) R. Mager, Preparing Instructional Objectives, 1962.
(URI) University of Rhode Island, Student Learning Outcomes 101.
(UC) University of Connecticut, How to Write Program Objectives/Outcomes.