Showing posts with label assessment. Show all posts

Thursday, September 22, 2016

"But even if they are not valid, they do tell you something...."

Remember, "validity" means "they measure what you think they measure." "Data driven" can also mean driven right off the side of the road.

From Inside Higher Ed

Zero Correlation Between Evaluations and Learning

New study adds to evidence that student reviews of professors have limited validity.
September 21, 2016
A number of studies suggest that student evaluations of teaching are unreliable due to various kinds of biases against instructors. (Here’s one addressing gender.) Yet conventional wisdom remains that students learn best from highly rated instructors; tenure cases have even hinged on it.
What if the data backing up conventional wisdom were off? A new study suggests that past analyses linking student achievement to high student teaching evaluation ratings are flawed, a mere “artifact of small sample sized studies and publication bias.”
“Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between [student evaluations of teaching, or SET] ratings and learning,” reads the study, in press with Studies in Educational Evaluation. “Our up-to-date meta-analysis of all multi-section studies revealed no significant correlations between [evaluation] ratings and learning.”
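The mechanism the authors describe, small samples plus publication bias manufacturing a correlation where none exists, is easy to simulate. Here is a minimal sketch (the sample sizes and the selection rule are invented for illustration and are not taken from the study): generate many "studies" in which SET ratings and learning are truly uncorrelated, then "publish" only the small studies that happen to show a sizable correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

def study_r(n):
    # Ratings and learning drawn independently: the true correlation is zero
    ratings = rng.normal(size=n)
    learning = rng.normal(size=n)
    return np.corrcoef(ratings, learning)[0, 1]

# 2000 simulated multi-section studies each, at two (invented) sample sizes
small = np.array([study_r(15) for _ in range(2000)])
large = np.array([study_r(300) for _ in range(2000)])

# "Publication bias": suppose only small studies with |r| > 0.3 see print
published_small = small[np.abs(small) > 0.3]

print(f"mean r, all small studies:      {small.mean():+.3f}")
print(f"mean |r|, published small only: {np.abs(published_small).mean():.3f}")
print(f"mean r, all large studies:      {large.mean():+.3f}")
```

The full set of small studies averages out to roughly zero, and the large studies cluster tightly around zero, but the "published" subset of small studies shows a moderate average correlation, which is exactly the artifact the meta-analysis describes.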

Wednesday, April 23, 2014

Real "Competencies" for the 21st Century

Music to my ears. Sarah Lawrence, long known for its innovative approach to liberal arts education (still using narrative evaluations - something we could adopt at Mills to great effect IMHO), crafts a simple response to assessment madness and places it where it should be: at the student-advisor nexus.

Imagine: six goals that are about skill not ideological content; evaluated every semester in every course; tracked over time by student and advisor. Throw all the rest of the baroque apparatus away and get on with educating.
H/T to Mark Henderson

Play audio at MarketPlace Education

At Sarah Lawrence College in Bronxville, N.Y., about ten students — all women but one — sit at a round table discussing Jane Austen’s “Northanger Abbey.” 
The 88-year-old college has a reputation for doing things differently. Most classes are small seminars like this one. There are no majors. Students do a lot of independent projects. And grades aren’t as important as the long written evaluations professors give every student at the end of every semester. It’s no surprise, then, that professor James Horowitz is skeptical of any uniform college rating system, like the one being proposed by the Obama administration.
“The goals that we are trying to achieve in instructing our students might be very different from what the University of Chicago or many other schools or a state school or a community college might be striving to achieve,” Horowitz says.
The Obama administration is due out this spring with details of its controversial plan to rate colleges on measures like value and affordability. The idea is that if students can compare schools on cost, graduation rates and even how much money students earn after they graduate — colleges might have to step up their game. Especially if, as proposed, poor performers risk losing access to federal financial aid.
All that, naturally, makes colleges just a bit nervous. Sarah Lawrence is fighting back with its own way of measuring value. The faculty came up with six abilities they think every Sarah Lawrence graduate should have. They include the ability to write and communicate effectively, to think analytically, and to accept and act on critique.
“We don’t believe that there’s like 100 things you should know when you graduate,” says computer science professor Michael Siff, who helped develop the tool. “It’s much more about are you a good learner? Do you know how to enter into a new domain and attack it with an open mind, but also an organized mind?”
Faculty advisors can use the results to track students’ progress over time and help them address any weaknesses. A student who’s struggling with communication could take a class with a lot of oral presentations, for example, or make an appointment at the campus writing center.
But Siff says the tool is also about figuring out what the college can do better.
“This tool will allow us to assess ourselves as an institution,” he says. “Are we imparting what we believe to be these critical abilities?” 
So how is the school doing? So far there are only data for two semesters, but on every measure seniors do better than juniors. Sophomores do better than freshmen. 
Starting next fall, advisors will meet with their students at the beginning of each semester to talk over their progress. In sort of a trial run, Siff goes over the results so far with one of his advisees, junior Zachary Doege.
On a scale from “not yet developed” to “excellent,” he’s mostly at the top end. Doege says he likes seeing his own growth. 
“I think the thing I like the most about this is just the fact that I can look back at how I was doing in previous semesters and sort of chart my own progress,” he says. “Not comparing me towards other students—just me to myself.”
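As a sketch of what "charting my own progress" amounts to in data terms: per ability, per semester, a position on an ordered scale, compared only against the same student's earlier semesters. The scale steps and scores below are invented for illustration; only the endpoints ("not yet developed", "excellent") and three of the six abilities come from the story.

```python
# Hypothetical four-step scale; only the endpoints are named in the article
SCALE = ["not yet developed", "developing", "proficient", "excellent"]

def progress(history, ability):
    """Return semester-by-semester positions on the scale for one ability."""
    return [SCALE.index(semester[ability]) for semester in history]

# Invented example records for two semesters
student = [
    {"write and communicate effectively": "developing",
     "think analytically": "proficient",
     "accept and act on critique": "proficient"},
    {"write and communicate effectively": "proficient",
     "think analytically": "excellent",
     "accept and act on critique": "excellent"},
]

print(progress(student, "think analytically"))  # rising numbers show growth
```

The point of the design is in the function signature: it takes one student's history, never a cohort, so the only comparison it can express is "me to myself."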
That’s a different measure of the value of an education than, say, student loan debt or earnings after graduation — the sorts of things the Obama administration is considering as part of its ratings plan. Students and parents are right to ask if they’re getting their money’s worth, says the college’s president, Karen Lawrence. After financial aid, the average cost of a Sarah Lawrence education is almost $43,000 a year.
“People are worried about cost,” Lawrence says. “We understand that.”
And they’re worried about getting jobs after graduation. But she says the abilities that the new assessment measures—critical thinking and innovation and collaboration—are the same ones employers say they’re looking for.
“We think these are abilities that students are going to need both right after graduation and in the future, and so it could be an interesting model.”
One she hopes other schools will take a look at as they figure out how to answer the national debate about the value of college.

Thursday, December 19, 2013

Does Assessment Work?

A short commentary on assessment from Daniel Chambliss, a respected sociologist who led a major assessment project funded by the Mellon Foundation and served several years on Middle States (the WASC of the mid-Atlantic region). Chambliss spoke at Mills in about 2005.

Click here to download.

The Hamilton Plan for Assessment of Liberal Arts

Click here to download.

Monday, November 11, 2013

Evaluating and Assessing Short Intensive Courses

Two articles on the topic of assessing and evaluating short, intensive courses.  Most of the results appear positive in terms of learning outcomes, but there are a number of factors associated with variations in outcomes that appear worth paying attention to.
Using a database of over 45,000 observations from Fall, Spring, and Summer semesters, we investigate the link between course length and student learning. We find that, after controlling for student demographics and other characteristics, intensive courses do result in higher grades than traditional 16 week semester length courses and that this benefit peaks at about 4 weeks. By looking at future performance we are also able to show that the higher grades reflect a real increase in knowledge and are not the result of a “lowering of the bar” during summer. We discuss some of the policy implications of our findings.
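In spirit, the kind of analysis behind "benefit peaks at about 4 weeks" is a regression of grades on course length with a quadratic term, alongside demographic controls. A hedged sketch on simulated data (the variable names, coefficients, and control are all invented here; this is not the authors' actual specification or dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Invented data: course length in weeks, plus one stand-in student control
weeks = rng.choice([3, 4, 5, 8, 16], size=n).astype(float)
gpa_prior = rng.normal(3.0, 0.5, size=n)

# Simulated grades whose course-length benefit peaks near 4 weeks
grade = (2.5 + 0.4 * gpa_prior
         - 0.004 * (weeks - 4.0) ** 2
         + rng.normal(0, 0.3, n))

# OLS: grade ~ 1 + gpa_prior + weeks + weeks^2
X = np.column_stack([np.ones(n), gpa_prior, weeks, weeks ** 2])
beta, *_ = np.linalg.lstsq(X, grade, rcond=None)

# With b2*weeks + b3*weeks^2, the fitted curve peaks at -b2 / (2 * b3)
peak = -beta[2] / (2 * beta[3])
print(f"estimated peak course length: {peak:.1f} weeks")
```

The negative coefficient on the squared term is what produces an interior peak; without it, the model could only say "shorter is better" or "longer is better," not identify an optimum around four weeks.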






Altogether, we found roughly 100 publications that, in varying degrees, addressed intensive courses. After reviewing the collective literature, we identified four major lines of related inquiry: 1) time and learning studies; 2) studies of educational outcomes comparing intensive and traditional formats; 3) studies comparing course requirements and practices between intensive and traditional formats […]
Scott and Conrad finish their literature review with several sets of open research questions suggested by their research:
Behavior
  1. How do course requirements and faculty expectations of students compare between intensive and traditional formats and, if different, how does this affect the learning environment and student learning outcomes? 
  2. How do students' study patterns compare between intensive and traditional-length courses?
Learning Outcomes
  1. How do pedagogical approaches compare between intensive and traditional length courses and, if different, how do these variations affect learning?
  2. How does the amount of time-on-task (i.e., productive class time) compare between intensive and traditional-length courses?
  3. How do stress and fatigue affect learning in intensive courses?
  4. Are intensive courses intrinsically rewarding and if so, how does that affect the classroom experience and learning outcomes?
  5. How do the immediate (short-term) and long-term learning outcomes compare between intensive and traditional-length courses?
  6. How do different student groups compare in their ability to learn under intensive conditions? For example, do older and younger students learn equally well in intensive courses?
  7. How does the degree of intensity influence student achievement? Do three-week courses yield equivalent results to eight-week courses?
  8. How does the subject matter influence outcomes in intensive courses?
  9. Which kinds and levels of learning are appropriate for intensive formats?
  10. How do course withdrawals and degree completion rates compare between students who enroll in intensive versus traditional courses?
  11. How do intensive courses influence a student's attitude toward learning?
Optimizing Factors and Conditions
  1. What disciplines and types of courses are best suited for intensive formats?
  2. What types of students are best suited for intensive formats?
  3. What types of pedagogical styles and instructional practices are best suited for intensive formats? Must teaching strategies change for intensive courses to be effective?
  4. Can certain instructional practices optimize learning?
  5. Do learning strategies differ between intensive and traditional-length courses and if so, can students effectively "learn how to learn" in time compressed formats? In other words, can students be taught effective learning strategies for intensive courses that would enhance achievement outcomes?



See Also 

John V. Kucsera & Dawn M. Zimmaro, "Comparing the Effectiveness of Intensive and Traditional Courses," College Teaching 58, no. 2 (2010): 62–68.