City Planner, Mediator, and MIT Professor

14/11/2017 12:00 am

Universities are underinvesting in efforts to improve the quality of teaching

My friend and colleague, Michael O’Hare (a professor at UC Berkeley), points out in a recent paper entitled The 1.5% Solution: Quality Assurance for Teaching and Research that major research universities underinvest in the continuous improvement of their teaching. Given that universities have only two primary tasks, teaching and research, they ought to be willing to invest as much in improving the quality of their teaching as they do in providing an elaborate infrastructure to support basic and applied research. But that doesn’t seem to be the case.

O’Hare calculates that major universities devote something like $300,000 to present a single semester-long course (counting student time, rooms, the professor’s salary, web support, teaching assistants, and so on). This is what it takes to ensure that faculty and students are in the right place, at the right time, with the resources they need. He assumes, for planning purposes, that a course is taught to 50 students, that faculty at research universities carry a three-course-per-year teaching load, that teaching accounts for half of a professor’s academic-year time, and that fringe benefits are included. On those assumptions, a 5% increase in student learning would be worth $15,000 per course, or $45,000 per professor, per year, so O’Hare argues it ought to be worth spending something on that order to improve the quality of teaching. Unfortunately, nothing close to that is currently being spent.
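To spell out the arithmetic behind those figures (my own reconstruction from the numbers above, not O’Hare’s presentation of it):

\[
0.05 \times \$300{,}000 = \$15{,}000 \ \text{per course}, \qquad 3 \ \text{courses} \times \$15{,}000 = \$45{,}000 \ \text{per professor, per year.}
\]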

O’Hare suggests that universities ought to invest 1.5% of their faculty payroll in quality assurance aimed at improving teaching performance, in much the same way that almost every other industry invests in quality assurance as it seeks to improve its efficiency and effectiveness.
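For a rough sense of scale (my own illustration, using a hypothetical salary figure rather than anything taken from O’Hare’s paper): for a professor whose salary and benefits come to $200,000 a year, a 1.5% set-aside would be

\[
0.015 \times \$200{,}000 = \$3{,}000 \ \text{per professor, per year,}
\]

a modest investment next to the $45,000-per-year value of a 5% improvement in learning estimated above.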

O’Hare points to three things that any university department could try, at very modest cost, to improve the quality of its teaching. These follow closely what other segments of the economy have learned about quality improvement. While teaching is not the same as producing most other products or services, I’m convinced (after almost 50 years as a teacher at MIT) that the most basic quality assurance strategies apply just as well at a university.

Instructors should talk more with each other. You might not believe it, but it is very rare for MIT faculty to sit in on each other’s courses to observe and offer advice on possible ways of improving teaching. Similarly, faculty members almost never compare notes before classes begin on what they propose to cover in their classes and how they intend to go about teaching the material. Everyone is presumed to be a subject matter expert, although why this is presumed to carry over into teaching expertise is beyond me. If a department made it a policy that every faculty member should expect a colleague to sit in on at least one of their class sessions each semester, then no one would feel singled out. While such assignments could be made at random, I see no problem with letting each faculty member choose the colleague they want to have sit in. In an after-class discussion, I would hope the observer would offer two kinds of comments: (1) “things I saw you do that I’m going to try myself (and why),” and (2) “things I’m going to suggest you might find helpful.” I don’t think such reports need to be submitted in writing to the department, but it might be valuable if the person being observed wrote a short summary of what they heard and how they intend to take the feedback on board.

Instructors should make a greater effort to help students learn to teach each other more effectively. Faculty are used to giving students formal feedback (i.e. graded tests and quizzes) on how well they have mastered the material presented in class. It seems to me that faculty could also observe each student giving feedback to a fellow student, and use that as an occasion to help every student get better at giving constructive feedback and advice to their peers. We need to make it easier for our students to learn from each other. In one of my classes, I ask a few students to make six-minute oral presentations, set in a hypothetical work situation, drawing on what they have learned that week in class. As soon as they are done, every student in the class uses a one-page printed template highlighting five or six aspects of the presentation to give the presenter immediate feedback. In addition to noting what was done well and what could be improved, each student provides several sentences of commentary. This all takes five to seven minutes, and each presenter gets 25 separate sources of feedback on their presentation. It has nothing to do with their grade. Everyone in the class makes at least three oral presentations over the course of the term, and none of the feedback is anonymous. We always say that students learn as much from each other as from their professors, but what do we do to make sure that happens? Nothing. I think faculty should commit to making sure that students learn (as part of every course!) how to help their fellow students learn as much as they can from the class. It should be the faculty member’s responsibility to instruct and support students as they help each other learn. And I think academic departments should insist that faculty make an effort to get better at doing this.

Academic departments should measure everything they do on a continuous basis. There’s nothing new about this idea. W. Edwards Deming pointed out many years ago, in the context of industrial activities, that anything that isn’t measured is unlikely to be improved. What to measure, though, in the context of university teaching, is not clear. Most universities currently measure student satisfaction at the end of each semester-long class. More than anything else, this tends to gauge the popularity of the professor; I’ve rarely seen student course evaluations lead to improvements in teaching strategy or performance. What else might be measured? It seems obvious that it would be a good idea to measure student knowledge of the course material before and after each segment of a class, as well as before and after the entire course. This works if a class is mostly aimed at helping students master substantive knowledge. But if a class is supposed to teach students how to do something, it makes more sense to give students simulated opportunities to demonstrate whether they have mastered the relevant skills. Digital simulations are expensive to build, but they work. Face-to-face role-play simulations are inexpensive to create, and they work as well. When groups of students in a class play the same game separately, comparisons of the results, along with student reflections on the experience, can give faculty a clear idea of what they are conveying effectively and what needs improvement. I’ve found that saving the last three minutes of a class to ask students what they took away from the session often generates surprising responses. It certainly helps me recalibrate when what they report doesn’t correspond with what I thought I was teaching! I’m in favor of asking each faculty member what they intend to measure so that they can improve their teaching performance. A university department should provide technical support to make this happen. Then, with the relevant data in hand, each faculty member should commit in writing to experiments or reforms in their next round of teaching, along with a clear indication of what they will measure next time.

I know there will be substantial resistance to these three simple ideas. Non-tenured faculty will worry that admitting there is room for improvement in their teaching may somehow jeopardize their reappointment. Tenured faculty have little or no incentive to invest in getting better at teaching. To date, most faculty members at most research universities have not been asked to focus on teaching their students to teach their classmates; this will be seen as an (uncompensated) expansion of the faculty’s role and responsibilities, and most faculty won’t know how to do it. Departments will complain that arranging a system of faculty visits to each other’s classrooms is a new administrative task for which they are unprepared. Systematically measuring teaching performance (and improvements in teaching performance) is not something that academic administrators know how to do. Nevertheless, I would argue that university leaders should pursue Professor O’Hare’s 1.5% solution to the problem of improving teaching effectiveness. There’s really no good excuse for not getting better at what we do.