Back in the day, when professors wanted to know whether students had done the required reading, they would ask questions in class and engage students in discussions that would flow from their answers. They would draw upon nonverbal cues signalling enthusiasm, confusion, boredom, and numerous other feelings the material might evoke. From time to time, they would give exams to measure student understanding. But now, the good folks at CourseSmart, the product of a consortium of textbook publishers, are promising to improve our teaching lives through technology. Now, through the use of digital textbooks, we can “know when students are skipping pages, failing to highlight significant passages, not bothering to take notes — or simply not opening the book at all.”
With this technology, instructors receive an “engagement index” for each student based on how and how much students interact with their electronic textbooks. From CourseSmart’s perspective, this ongoing data mining will improve the educational experience for everyone. Students who are struggling will get prompt, targeted feedback. Instructors, and especially those teaching online courses, will get advance notice of which concepts students understand and which ones confound them. And publishers will be able to use the data to revise sections of textbooks to address student problems, or even perhaps to eliminate sections that aren’t being assigned or read.
Color me skeptical, though. Measuring student engagement through this tracking technology appears to produce results that are both over-inclusive and under-inclusive. They are over-inclusive because, as the Times noted, students can readily game the system by behaving in ways that mimic engagement only superficially, much as scholars have been able to trick GradeBot into awarding high scores to essays featuring gibberish. Even leaving the textbook open without actually reading it contributes positively to the engagement score. At the same time, the scores are under-inclusive because handwritten notes, or notes stored in a computer file not being tracked, don't register as engagement.
Another concern has to do with the assumptions made by CourseSmart about how people interact with their source material. As a student, when I encountered material that was new and complex, I would make a point of identifying central themes in the text. But when the material was less challenging, when the central argument was intuitive and easily remembered, I didn’t bother to note or highlight it. Instead, I would flag, with a “tell me something I don’t already know” attitude, things that were novel, thought-provoking, or counter-intuitive. This approach served me well. But I suspect that the CourseSmart tracker would have reported that I wasn’t understanding the main ideas, or that I was getting distracted by minor points. Maybe I’m weird this way (and yes, friends and family, I can hear you snickering from over here). But my larger point is that each student is weird in his or her own way, and an assessment method that compels a standardized approach to reading will fail students by denying them the opportunity to have their own weirdness work to their benefit.
Finally, I question what effect CourseSmart will have on the books themselves. Yes, as CourseSmart chief executive Sean Devine put it, before the software existed, “the publisher never knew if Chapter 3 was even looked at.” The implication is that the publisher would respond to this neglected Chapter 3 by asking the textbook’s authors to revise, or even to remove, it. But while it is possible that the chapter isn’t being read because professors aren’t assigning it, it’s also possible that professors are assigning it, but students aren’t reading it, or aren’t marking it in ways that register in the engagement score. Should the content of textbooks be driven by what students want, rather than by what educators deem to be valuable? There’s a kind of focus-group mentality at work here, the same kind that leads Hollywood studios to override filmmakers’ visions in order to satisfy the presumed audience’s desire to see more explosions, more boobs, and more happy endings even if they feel as fake as the gratuitously added boobs. Students aren’t customers, and treating them as such does no favors for authors who want to develop, and instructors who want to assign, texts that challenge students and engage them without making their immediate gratification a central concern.
There’s an underlying theme running throughout these technological innovations, be they EdX’s software (what I’ve dubbed GradeBot above) or CourseSmart’s Great Big Data Miner: that professors should welcome these shortcuts because they make our jobs easier and free up time for other tasks. As the comment sections in the Times attest, my fellow professors are not oblivious to the long-term implications of technology that could make education (“education,” more accurately) available without having to hire so many of us. More gratifyingly, they and I also express puzzlement and annoyance at the assumption that we’re seeking shortcuts, that we’d settle for the simulacrum of educational engagement instead of relying on our expertise and experience to determine whether, and how effectively, students are learning.