
Monthly Archives: April 2013

The Alex Chilton of Supreme Court cases?

Students in my Supreme Court/Constitutional Law courses face a final exam question asking them to identify what they consider the three most important cases we’ve studied during the semester and to defend their selections with reference to the cases’ legal, political, and perhaps social/cultural significance. (If any of my current students are reading, well, your curiosity will be rewarded shortly.) This month’s Big Question at The Atlantic offers an intriguing variant of my inquiry: “What’s the Most Important Supreme Court Case No One’s Ever Heard Of?” (For those of you for whom the musical reference is too obscure, read this, go grab some Big Star recordings, and thank me later.)

My initial response, taking into account the state of public knowledge of Supreme Court history, was McCulloch v. Maryland. (Even I’m not snarky enough to pick something like Brown or Roe, though I would note recent Pew Forum on Religion and Public Life findings that a majority of Americans under 30, and almost two-fifths of all Americans, are unaware that Roe is about abortion.) But a more generous reading could define the “no one” in the question as “no one who has a New York Times regular’s understanding of the Supreme Court.” The legal luminaries who contributed to this feature seemed to interpret the phrase this way, so I’ll do the same. (And I’m impressed with the choices made, though I would question Elizabeth Wurtzel’s selection of the 1989 case Michael H. v. Gerald D.: dramatic, yes, but “most important” within the category of obscure cases?)

So what would my selection be? I’m tempted to include several, but that feels like cheating, as the participants at The Atlantic had to limit themselves to one. So Daubert v. Merrell Dow Pharmaceuticals (1993), which gave federal trial judges significant gatekeeping functions with respect to scientific expert testimony, will have to settle for honorable mention. My choice would be City of Richmond v. Croson (1989), in which the Rehnquist Court first signaled unambiguously its deep and abiding skepticism of race-consciousness in public policies intended to benefit members of historically disadvantaged groups. In striking down Richmond, Virginia’s requirement that 30 percent of city subcontracting dollars be set aside for minority business enterprises, the Court held that race-based classifications should be reviewed under strict scrutiny, the most stringent standard of equal protection review, whether they were intended to help or harm members of minority groups. The Court’s statement on behalf of a color-blind reading of the Fourteenth Amendment set the stage for the Rehnquist and Roberts Courts to limit the use of race in federal contracting, redistricting, pupil assignment, and perhaps voting rights.

Which case(s) would you nominate for the title of “most important obscure case”?

 


“It’s Big Brother, sort of, but with a good intent”

Back in the day, when professors wanted to know whether students had done the required reading, they would ask questions in class and engage students in discussions that would flow from their answers. They would draw upon nonverbal cues signaling enthusiasm, confusion, boredom, and numerous other feelings the material might evoke. From time to time, they would give exams to measure student understanding. But now the good folks at CourseSmart, the product of a consortium of textbook publishers, are promising to improve our teaching lives through technology. Through the use of digital textbooks, we can “know when students are skipping pages, failing to highlight significant passages, not bothering to take notes — or simply not opening the book at all.”

With this technology, instructors receive an “engagement index” for each student based on how and how much students interact with their electronic textbooks. From CourseSmart’s perspective, this ongoing data mining will improve the educational experience for everyone. Students who are struggling will get prompt, targeted feedback. Instructors, and especially those teaching online courses, will get advance notice of which concepts students understand and which ones confound them. And publishers will be able to use the data to revise sections of textbooks to address student problems, or even perhaps to eliminate sections that aren’t being assigned or read.
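How might such an index actually be computed? CourseSmart hasn’t said, so take the sketch below for what it is: a minimal, made-up illustration in Python, in which every signal (pages viewed, minutes open, in-app highlights and typed notes) and every weight is an assumption of mine, not the company’s actual formula.

```python
# Hypothetical sketch of an interaction-based "engagement index."
# CourseSmart has not published its formula; every signal and weight
# below is an illustrative assumption, not the company's actual method.

from dataclasses import dataclass
from typing import List


@dataclass
class ReadingSession:
    pages_viewed: int     # e-textbook pages opened
    minutes_open: float   # time the book was open on screen
    highlights: int       # passages highlighted in the app
    typed_notes: int      # notes typed into the app (paper notes are invisible)


def engagement_index(sessions: List[ReadingSession], pages_assigned: int) -> float:
    """Return a 0-100 score built only from in-app behavior."""
    if pages_assigned <= 0 or not sessions:
        return 0.0

    pages = sum(s.pages_viewed for s in sessions)
    minutes = sum(s.minutes_open for s in sessions)
    marks = sum(s.highlights + s.typed_notes for s in sessions)

    # Each component is capped at 1.0; the norms (2 minutes per page,
    # 1 highlight or note per 5 pages) are invented for illustration.
    coverage = min(pages / pages_assigned, 1.0)
    time_score = min(minutes / (2.0 * pages_assigned), 1.0)
    marking = min(marks / (0.2 * pages_assigned), 1.0)

    return round(100 * (0.4 * coverage + 0.3 * time_score + 0.3 * marking), 1)
```

Feed in a student’s sessions and the number of assigned pages, and out comes a tidy number for the instructor’s dashboard.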

Color me skeptical, though. Measuring student engagement through this tracking technology would appear to produce results that are both over-inclusive and under-inclusive. They are over-inclusive because, as the Times noted, students can readily game the system with behavior that signals engagement only in the most superficial way, much as scholars have been able to trick GradeBot into awarding high scores to essays featuring gibberish. Even leaving the textbook open without actually reading it contributes positively to the engagement score. At the same time, the scores are under-inclusive because handwritten notes, or notes stored to a computer file not being tracked, don’t register as engagement.

Another concern has to do with the assumptions made by CourseSmart about how people interact with their source material. As a student, when I encountered material that was new and complex, I would make a point of identifying central themes in the text. But when the material was less challenging, when the central argument was intuitive and easily remembered, I didn’t bother to note or highlight it. Instead, I would flag, with a “tell me something I don’t already know” attitude, things that were novel, thought-provoking, or counter-intuitive. This approach served me well. But I suspect that the CourseSmart tracker would have reported that I wasn’t understanding the main ideas, or that I was getting distracted by minor points. Maybe I’m weird this way (and yes, friends and family, I can hear you snickering from over here). But my larger point is that each student is weird in his or her own way, and an assessment method that compels a standardized approach to reading will fail students by denying them the opportunity to have their own weirdness work to their benefit.

Finally, I question what effect CourseSmart will have on the books themselves. Yes, as CourseSmart chief executive Sean Devine put it, before the software existed, “the publisher never knew if Chapter 3 was even looked at.” The implication is that the publisher would respond to this neglected Chapter 3 by asking the textbook’s authors to revise, or even to remove, it. But while it is possible that the chapter isn’t being read because professors aren’t assigning it, it’s also possible that professors are assigning it, but students aren’t reading it, or aren’t marking it in ways that register in the engagement score. Should the content of textbooks be driven by what students want, rather than by what educators deem to be valuable? There’s a kind of focus-group mentality at work here, the same kind that leads Hollywood studios to override filmmakers’ visions in order to satisfy the presumed audience’s desire to see more explosions, more boobs, and more happy endings even if they feel as fake as the gratuitously added boobs. Students aren’t customers, and treating them as such does no favors for authors who want to develop, and instructors who want to assign, texts that challenge students and engage them without making their immediate gratification a central concern.

There’s an underlying theme running throughout these technological innovations, be they EdX’s software (what I’ve dubbed GradeBot above) or CourseSmart’s Great Big Data Miner: that professors should welcome these shortcuts because they make our jobs easier and free up time for other tasks. As the comments sections in the Times attest, my fellow professors are not oblivious to the long-term implications of technology that could make education (or, more accurately, “education”) available without having to hire so many of us. More gratifyingly, they, and I, also express puzzlement and annoyance at the assumption that we’re seeking shortcuts, that we’d settle for the simulacrum of educational engagement instead of relying on our expertise and experience to determine whether, and how effectively, students are learning.