True education, not assessment, as the goal--excellent article in today's NYT

"Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.

That’s how we produce the critical thinkers American employers want to hire. And there’s just no app for that."

https://www.nytimes.com/2018/02/23/opinion/sunday/colleges-measure-learning-outcomes.html?smid=fb-share

I’m out of free articles, but the part you quoted is what I and other proponents of a liberal arts education have been saying forever. It may take longer for the kids with humanities degrees to find meaningful work after college, but they will be well-prepared for the search.

Don’t want to overquote, so I’ll just add this:

“Without thoughtful reconsideration, learning assessment will continue to devour a lot of money for meager results. The movement’s focus on quantifying classroom experience makes it easy to shift blame for student failure wholly onto universities, ignoring deeper socio-economic reasons that cause many students to struggle with college-level work. Worse, when the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education.”

I think it’s ironic that so much money and time go to these assessment mandates, which raise the cost of education, thereby causing people to question its worth, thereby leading to more “assessment” to “justify” it, thereby leading to more costs…

All those higher college costs that people complain about are not going to “climbing walls,” and there are no stockholders collecting dividends. The money is going somewhere, though, and that somewhere is often more and more layers of administrators whose jobs revolve around assessment and quantification and who make more money than most of the people doing the, you know, educating.

If you want to know whether students improve their critical thinking skills, both individually and at the institutional level, there has to be some type of assessment. If the professor had any ideas on how to improve the assessment process, as opposed to scrapping it entirely, I didn’t see them in her article. I’m sure university faculty would love it if they had no accountability for results.

@roethlisburger

Somewhat dated post, but I just can’t allow this to be the last word on college assessment without a rebuttal.

The problem is that each college class has its own objectives, which cannot necessarily be averaged with other classes’ objectives in a meaningful manner. For example, when I taught a class on autism, our capstone project was for students to assess an intervention proposed for autism, give an overview of its theoretical basis, and summarize at least two scientific studies of its efficacy, including evaluating the strength of the evidence supporting it. The idea was that students would not only examine claims specific to autism but would learn how to investigate claims of efficacy generally, across psychology, medicine, education, etc. Students also had to present their findings to the class in a cogent manner and write up their report coherently, with proper citations in APA format. Many of the students rose to the challenge and learned how to evaluate a scientific claim and how to report their findings using field-specific standards.

However, the results from my class really couldn’t be averaged with the results from a completely different psychology class, such as Abnormal Psychology. There, perhaps, students learned the rules around conducting a competency assessment, or the specifics of different disorders. That is, we couldn’t use my autism students’ presentations and reports as data for an overall departmental average of progress, because the different classes were evaluating different components of psychology.

Part of being a professor is knowing how to develop a curriculum in a specialty area. Professors have extensive subject expertise, and they should be trusted to build evaluative tools that are then translated into grades. I am appalled by the new style of psychology textbooks that deliver canned content and canned assessment, mostly based on rote knowledge and multiple choice responses. That primitive rendition of psychology will never prepare students for graduate work, or really for anything else in the field. Very little of psychology (or other social sciences) is content knowledge – most of it is a process, which does not lend itself to multiple choice responses.

This is the logical fallacy known as appeal to authority. We simply can’t trust that a professor’s grading is in line with broader institutional and societal goals without external validation.

I must’ve missed the stats in the article… Is it 10% of the educational cost? 1%? 0.01%?

I’m all-in for a liberal arts education, and a big proponent of the Core, à la Chicago, Columbia, Boston College. Alas, colleges tend to be going in the opposite direction: fewer GEs.

@roethlisburger So you have the very people at the pinnacle of a scientific field, who after years of underpaid dedication are granted the status of tenured professor. But let’s not trust them to create assessments. Whom do we find with subject matter expertise exceeding that of those experts? Oh, I see, you would have someone with an MS in education from a third-rate program, one that requires a 2.0 GPA for admission (your standard assessment hack; if you think I am exaggerating, please look at the admissions requirements for most Ed MS programs), determine what constitutes subject matter expertise in a field in which they have no expertise. Why do they get to make this call? Capitalism! Because they work for an assessment publisher that sees big profit in capturing the higher education market. Gee, that sounds like a fine idea.

Perhaps you should have those same higher ed hacks act as peer reviewers for scientific journals. Who needs subject matter expertise when you can have generic “standards” and enormous profit for some corporation?

@psycholing

If you’re testing for broad skills like writing and critical thinking, as opposed to subject-specific knowledge, you shouldn’t need psych profs making the assessments.

I don’t even know what critical thinking skills are, much less how you would assess them.

High-quality assessments are REALLY hard. The typical PhD program, even at top schools, will not teach future profs how to create them. (After all, PhD programs are all about research.)

With all due respect, such a straw man example weakens your argument.

Well, having graduate training in three distinct fields (one humanities, two social sciences), I can safely say that there are distinct critical skills and standards of evidence in each. Some fields employ an empirical approach, and some do not. But even among empirical fields, there are vast differences in methodology and in what constitutes evidence. You may disagree with this state of affairs, but if you want it changed, you are talking about revamping entire fields of study, not just revamping assessment procedures. Is that what you are advocating?

Let’s allow the free market to decide. Who would pay $75k a year for an education that boils its objectives down to multiple-choice exams? I certainly wouldn’t, which is why my sons go to colleges that give the faculty authority to author their own exams. Furthermore, I think you would find that if you undermine the autonomy of faculty, they will leave for more lucrative opportunities. Then you can just have generic “educators” teaching all fields, and you will have turned colleges into high schools.

Are you suggesting that there are no academic ‘fields’ where critical thinking skills are lacking?

Another straw man argument. (Who said anything about multiple choice tests?)

Wow. This is just too easy. :slight_smile:

The proposed uniform assessments of “critical thinking skills” that I have seen are in fact multiple choice tests.

mathmom claims not to know what critical thinking skills are. I suspect that this view is actually correlated with the fact that she uses the skills automatically. Perhaps the better the critical thinker, the less willing the person is to design a multiple choice test for critical thinking?