Thursday, April 20, 2017

Hack Your Organizational Problems

"Hackathons" are cool. But who knows what they really are and how they work?  This makes them ideal things for clueless managers to do poorly. But if we take a little time to understand their "why" and "how" they do represent a potentially useful organizational form that could have a positive impact on sclerotic, inertia bound institutions. For higher educational organizations they have special potential for moving beyond "we tried that 5 years ago" and "not invented here" and for making actual interdisciplinary teams actually effective and the experience of working on institutional problems inspiring instead of demoralizing.

This post from InnovationManagement.se is a good starting point because of how it manages to convey the essence of hackathoning outside the context of coding.

That essence is a group process, bounded in space and time, that focuses effort on well-defined challenges in a short, structured design sprint. The elements are important:
  • space/time
  • defined challenge
  • structured process

Especially the last. Read more at InnovationManagement.se


Wednesday, December 14, 2016

Ten Reflections from the Fall Semester

Notes from this semester. Each semester I jot down observations about organizational practices, usually inspired by events at my place of employment. Every now and then I try to distill them into advice for myself. Most are obvious once articulated, but they usually come to notice because things happen just the other way round.
  1. Always treat the people you work with as if they are smart; explain why you take a stand or make a decision in a manner that demonstrates that you know they are smart, critical, and open to persuasion by evidence and argument. Set high standards for yourself. Your institutional work should be at least as smart as your scholarly work.
    1. "it is better to be wrong than vague." - Stinchcombe
    2. If smart people are opposed to your idea, ask them to explain why. And listen non-dismissively and non-defensively. Remember, your goal is to get it right, not to get it your way.
  2. Do not put people in charge of cost cutting and budget reductions. Put them in charge of producing excellence within a budget constraint.

  3. Make sure everyone is able to say how many Xs one student leaving represents.  How much will it cost to do the thing that reduces the chance a student will get fed up with things?

  4. If most of what a consultant tells you is what you want to hear (or already believe), fire her.

  5. Don't build/design systems and policies around worst cases, least cooperative colleagues, people who just don't get it, or individuals with extraordinarily hard-luck situations. Do not let people who deal with "problem students" suggest or make rules/policy.

  6. Be wise about what you must/should put up for a vote and what you should not. And if you don't know how a vote will turn out, you are not prepared to put it up for a vote. Do your homework, person by person.

  7. If a top reason for implementing a new academic program is because there's lots of interest among current students, pause. Those students are already at your school. What you want are new programs that are attractive to people who previously would not have given you a second look.

  8. If you are really surprised by the reaction folks have to an announcement or decision, then just start your analysis with the realization that YOU screwed up.

    1. Related: don't assume it was just about the messaging; you might actually be wrong, and you should want to know whether that's the case.

  9. If you or someone else's first impulse when asked to get something done is to form a committee, put someone else in charge of getting that thing done.

  10. Train folks to realize that teams and committees in organizations are not representative democracies. The team does not want your opinions, feelings, experiences, or beliefs; it wants you to enrich the team's knowledge base by reporting on a part of the world you know something about. And that usually means going and finding out in a manner that is sensitive to your own availability bias. In the research phase, team members are the sense organs of the team.



American Talent Initiative aims to recruit 50,000 highly qualified students from modest backgrounds

Well, this is good news. Unless, perhaps, you are already an institution that does this - sure, the pool is a deep one, but what's the net effect when top schools skim the top of it? Still, attaching research resources to the effort is a good thing - there is way too much seat-of-the-pants policy and practice in this area.

Looking for Low Income Students

A group of 30 top colleges and universities wants to enroll more low-income students, but critics question whether the focus should be elsewhere.
By Rick Seltzer Inside Higher Ed December 13, 2016

A new effort to enroll low- and moderate-income undergraduates at colleges and universities with high graduation rates is being announced today in an attempt to have more students from modest backgrounds graduate from prestigious campuses seen as opening doors to top careers.

The effort, called the American Talent Initiative, aims to add 50,000 highly qualified students from modest backgrounds to campuses with high graduation rates by the year 2025. A group of 30 colleges and universities have signed on to the initiative, which is being coordinated by the nonprofit Aspen Institute’s College Excellence Program and Ithaka S+R. Bloomberg Philanthropies is providing $1.7 million over two years to start the project, money that won’t go directly to colleges and universities but will be used to fund research on their efforts and related activities.

Read more at Inside Higher Ed

Thursday, December 1, 2016

The "Core" COULD actually be a core

In the Chronicle of Higher Education Nicholas Lemann argues for an alternative approach to a core curriculum that is explicitly focused on intellectual skills and METHODS. The core courses he proposes would all be interesting to teach:
  • Information Acquisition: kinds, acquiring, evaluating
  • Cause and Effect: science as style of thought
  • Interpretation: close reading of texts
  • Numeracy: quantity in everyday life
  • Perspective: the limits of one's own viewpoint
  • Language of Form: intelligently seeing/producing visual information
  • Thinking in Time: thinking historically
  • Argument: how to make a compelling and analytically sound argument
One element of what Lemann is responding to should sound familiar: "Quite a few colleges … devising a new undergraduate liberal-arts curriculum … these new curricula often identify a suite of intellectual skills … [but] permit a wide array of existing courses to fulfill the requirements … [thus] declaring victory simply by pasting on a new label."

Or, he continues:
Or they define the new requirements in terms of "learning outcomes" rather than course content, which puts the emphasis on devising an end-of-course assessment rather than on designing the course itself. Or they offer courses on broad interdisciplinary subjects, with words like "ethics," "values," or "justice" in their titles, rather than on the inescapably different project of identifying fundamental methods of understanding and analysis.
And the result of that is something my own school has: a core curriculum that is neither core nor curriculum.

More to the point, many schools (my own included) allow even a "core" which is called skills or competency based to be captured by colleagues who want the content - especially values and worldviews - that they champion to be required for all and who use core requirements to drive enrollments in their departmental courses. The "core" becomes a symbolic expression of whose intellectual and ideological commitments are on top at the moment and then a whole bunch of organizational ritual and hoohah emerges to regularly remind all of whose game it is and to channel resources in their direction. Until the next reimagining of the core elevates some other group.

My colleagues can read the article here.  If you have premium access to the Chronicle, you can read the whole article there.

The Case for a New Kind of Core

NOVEMBER 27, 2016 

When I was a professional-school dean (at Columbia University’s Graduate School of Journalism), we had no choice but to try to define the specific content of an education in our field. The premise was that if you want to practice a profession, there is a body of material you must master, at least in the early part of your education. That perspective led me to urge, this year in The Chronicle Review, that undergraduate colleges move in a similar direction: a core curriculum.

READ MORE at CHE

Wednesday, September 28, 2016

College Affordability Expert on the Daily Show

A friend and co-author has a new book and did an interview on The Daily Show last night. You don't see too many sociologists on TV, BTW. Sara's new book is a research-based look at the challenges of paying for higher education, with solutions.


Paying the Price: College Costs, Financial Aid, and the Betrayal of the American Dream

Thursday, September 22, 2016

"But even if they are not valid, they do tell you something...."

Remember, "validity" means "they measure what you think they measure." "Data driven" can also mean driven right off the side of the road.

From Inside Higher Ed

Zero Correlation Between Evaluations and Learning

New study adds to evidence that student reviews of professors have limited validity.
September 21, 2016
A number of studies suggest that student evaluations of teaching are unreliable due to various kinds of biases against instructors. (Here’s one addressing gender.) Yet conventional wisdom remains that students learn best from highly rated instructors; tenure cases have even hinged on it.
What if the data backing up conventional wisdom were off? A new study suggests that past analyses linking student achievement to high student teaching evaluation ratings are flawed, a mere “artifact of small sample sized studies and publication bias.”
“Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between [student evaluations of teaching, or SET] ratings and learning,” reads the study, in press with Studies in Educational Evaluation. “Our up-to-date meta-analysis of all multi-section studies revealed no significant correlations between [evaluation] ratings and learning.”

Sunday, August 14, 2016

House of Cards

A Facebook post called my attention to a neat little article about why swimming rules only recognize hundredths of seconds even though modern timing technology allows much more precise measurements. The gist is this: swimming rules recognize that construction technology limits the precision with which pools can be built to something like a few centimeters in a 50-meter pool. At top speed a swimmer moves about 2 millimeters in a thousandth of a second. So, if you award places based on differences of thousandths of a second, you can't know whether you are rewarding faster swimming or the luck of swimming in a shorter lane.
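The arithmetic behind that gist is easy to check. The numbers below are assumptions consistent with the figures above (a pool-construction tolerance of roughly 3 centimeters and a top sprint speed of about 2 meters per second), not official specifications:

```python
# Back-of-the-envelope check (assumed numbers, not official specs).
speed_m_per_s = 2.0        # assumed elite sprint speed: ~2 m/s
pool_tolerance_m = 0.03    # assumed construction tolerance: ~3 cm per lane

# Distance a swimmer covers in one thousandth of a second:
dist_per_ms = speed_m_per_s * 0.001          # 0.002 m, i.e. 2 mm

# Time difference a 3 cm lane-length error can create at that speed:
lane_uncertainty_s = pool_tolerance_m / speed_m_per_s   # 0.015 s

# The lane-length uncertainty (~15 ms) dwarfs a 1 ms timing split,
# so ranking by thousandths would reward lane luck, not swimming.
print(dist_per_ms, lane_uncertainty_s)
```

A 15-millisecond uncertainty from the pool itself swamps a 1-millisecond timing distinction, which is exactly why the rules stop at hundredths.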

This observation points to the more general phenomena of false precision, misplaced concreteness (aka reification, hypostatization), and organizational irrationality rooted in sloppy and abusive quantification.

These are endemic in higher education.

Students graduate with a GPA and it's taken as a real, meaningful thing. But if you look at what goes into it (exams designed more and less well, subjective letter grades on essays, variable "points off" for rule infractions, quirky weighting of assignments, arbitrary conversions of points to letter grades, curves, etc.), you'd have to allow for error bars the size of a city block.

Instructors fret about average scores on teaching evaluations.

"Data driven" policies are built around the analysis of tiny-N samples that are neither random nor representative.

Courses are fielded or not and faculty lines granted or not based on enrollment numbers with no awareness of the contribution of class scheduling, requirement finagling, course content overlap, perceptions of ease, and the wording of titles.

Budgets are built around seat-of-the-pants estimates and negotiated targets.

One could go on.

The bottom line is that decision makers need to recognize how all of these shaky numbers are aggregated to produce what they think are facts about the institution and its environment. This suggests two imperatives. First, we should reduce individual cases of crap quantification. Second, when we bring "facts" together (e.g., enrollment estimates and cost of instruction) we should adopt an "error bar" sensibility - in its simplest form, treat any number as "likely between X and Y" - so that each next step is attended by an appropriate amount of uncertainty rather than an inappropriate amount of fantasized certainty.