AI-Generated Letters of Recommendation

I write around 30 recommendation letters a year. These are mostly for students applying to college, but increasingly I'm asked to write recs for competitive summer programs, private schools, scholarships, even internships. It's a lot of work. I estimate that I spend around 100 hours a year on it.

And it’s uncompensated work. Almost all of these hours come directly from my personal time, which colleges treat as a free resource. There is nothing to stop them from making me fill out one more form, complete one more ranking, respond to one more school-specific prompt. I often feel like collateral damage in the school admissions arms race.

Some teachers simply refuse to do it. I have come to empathize with that position, but ultimately these recommendations are important to my students, so I put in the time and effort, even though the process is frustrating.

What’s most frustrating is that I’m not sure all this effort makes any difference. Do letters of recommendation really matter in college applications? I find it hard to believe they do. Last year 30,000 students applied to MIT. Who reads those 60,000 letters of recommendation?

I’ve long assumed that these letters just get passed through some kind of sentiment analysis software, where a large language model produces a score, appends it to the student’s profile, and the admissions process grinds on, one automated step at a time. I even recently speculated that colleges were feeding my letters to LLMs without my consent. What’s to stop them?

So when I logged into Naviance, the now-universal portal for college admissions, I shouldn’t have been surprised to see a new feature under the “Letters of Recommendation” tab: a compose-with-AI button.

But I was surprised. Isn't this an admission that letters of recommendation aren't that important? If colleges will accept an algorithmically-generated, averaged-out narrative as a substitute for whatever I might have said, how could they possibly value what I have to say? Why shouldn't I just click "Compose", fill in a couple of blanks, and reclaim my time?

I guess there’s a part of me that still believes a good letter of recommendation can have an impact. Maybe that’s naïve, but if it’s true, then anything less than my full effort would put my students at a disadvantage. I respect them too much to do that, even if the process doesn’t respect me.

For now, I’ll hope that my carefully considered letters will give my students an edge in a world of AI-powered chatbots processing AI-generated recommendations. But I’ll be watching this AI-powered arms race closely.

Workshop — A One-Problem Tour of Statistics

I was excited to run my workshop “A One-Problem Tour of Statistics” for teachers at Math for America in New York City this past week. When I was writing “Painless Statistics” a few years ago, there was one simply-stated problem I kept going back to that continually helped illuminate important statistical ideas for me. I returned to that problem frequently, and each time I came away with a better understanding of sampling distributions, estimators, confidence intervals, and even the debate between frequentism and Bayesianism. This workshop was years in the making, and I was happy to finally be able to share my story, and what I learned, with teachers!

Strogatz, the NYT, and Triangular Numbers

My latest article for the New York Times Learning Network turns Steven Strogatz's wonderful "Math, Revealed" essay on triangular numbers into a teaching and learning resource. Learn about how a favorite number pattern connects algebra, geometry, and calculus, and even extends to CAT scans and the Fab Four!

The article is freely available here, and, as with the other articles in the series, it includes free access to Strogatz's original New York Times essay.

Strogatz, the NYT, and Mathematical Packing

My latest article for the New York Times Learning Network, turning Steven Strogatz's wonderful "Math, Revealed" essays into teaching and learning resources, is out. This piece is about mathematical packing, the age-old human quest to find efficiency in organization, and covers everything from packing soda cans in a box to packing information in high-dimensional spaces! It also includes some easy-to-state but still unsolved mathematical conjectures about the best way to fit squares in squares.

The piece is freely available here, and includes free access to Professor Strogatz’s original essay.

Good AI, Bad AI

At the start of the school year I asked my students to let me know how they are using AI in my courses. I’ve seen some Good AI, and some Bad AI.

Good AI

I make mathematical computing a part of my year-long Linear Algebra course. The varying levels of computer programming experience among my students make this a challenge: some could intern at Google and some can't remember how a for loop works. AI coding assistants, used appropriately, provide invaluable support for students with limited experience. If they can't successfully write a program to add two rows of a matrix together, they can have the coding assistant do it, check to see if it works, and then review the code themselves and try to learn something. Good AI! Here, "appropriate use" means making an honest effort to complete the challenge yourself first: this builds context for learning from whatever code the AI produces, and it also better positions the student to evaluate whether or not the code actually does what they asked it to do.
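For readers curious what that row-addition exercise might look like, here is a minimal sketch in Python. The function name and the list-of-lists representation of a matrix are my own assumptions for illustration, not the actual course assignment:

```python
def add_rows(matrix, i, j):
    """Return a copy of the matrix in which row i has been added to row j.

    The matrix is assumed to be a list of lists of numbers, with rows
    indexed from 0. (This representation is an assumption; the course
    exercise may well use a different one.)
    """
    result = [row[:] for row in matrix]  # copy rows so the input is unchanged
    result[j] = [a + b for a, b in zip(matrix[i], matrix[j])]
    return result

# Example: add row 0 to row 1 of a 2x2 matrix
m = [[1, 2],
     [3, 4]]
print(add_rows(m, 0, 1))  # [[1, 2], [4, 6]]
```

It is a three-line function, but reviewing an AI-generated version of it (does it mutate the original matrix? does it handle rows of different lengths?) is exactly the kind of code-reading a beginner can learn from.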

Bad AI

At the beginning of my Calculus course I ask students to write about a “mathematical observation” they’ve had. I am intentionally vague about what a “mathematical observation” is. Some students write about analyzing their commute to school, some about optimizing a video game strategy, some about a number theory course they took in a summer program. One goal of the assignment is to learn about my students as individuals and as mathematicians, so knowing what they think constitutes a “mathematical observation” tells me something about them.

One student began their paper by disclosing that they first asked ChatGPT to define “Mathematical Observation” for them. This immediately struck me as Bad AI. Thinking about what constitutes a mathematical observation was the point. Not only did the student ask an AI tool to do their thinking for them, but doing so undermined the very purpose of the assignment: Instead of learning what the student thinks as an individual, I got some averaged-out sentiment from a non-random group of authors.

For the record, the student did write a lovely and thoughtful mathematical observation, but afterwards we had a good conversation about the role of AI tools. “Don’t use AI to do your thinking for you,” I said, which seems like a good place to start in navigating this new landscape.
