It was a pleasure to review Ben Orlin’s wonderful book Math Games with Bad Drawings in the American Mathematical Monthly. The review is available online, with subscriber access, here, and will appear in the next print edition of the Monthly.
And in case you’re still shopping, Math Games would make a fabulous holiday gift for the math enthusiast, math teacher, math student, or math parent in your life!
Students are learning more statistics in high school math courses than ever before, which is great: statistical literacy is essential to life in the modern world. But statistical techniques are subtle, and must be taught and tested carefully. To that point, consider question 35 from the June 2022 Algebra 2 Regents exam, which involves the important but tricky concept of statistical inference.
The setup of the problem establishes that 65% of a city’s residents drive to work, and an intervention hopes to reduce that percentage. The ultimate question is this: After the intervention, is a random sample of residents in which 61% drive to work evidence that the intervention was successful?
In order to establish the context for making an inference, a dot plot of sample proportions from simulated samples is shown. The trouble begins with the student directive:
“Construct a plausible interval containing the middle 95% of the data.”
What is meant by “the data” here? Does this refer to the simulation data? Because if so, that wouldn’t make sense. You don’t need to construct a “plausible interval” that contains 95% of the simulation data. It’s all right there. You can construct an exact interval that contains 95% of the data.
You don’t want an interval that contains “the data”. What you want is the interval that contains the central 95% of the sampling distribution of sample proportions, a theoretical distribution used in making inferences. This interval in the sampling distribution can be constructed using the mean and standard deviation of an individual sample, because in the case of sample proportions the mean and standard deviation of a sample can be used to estimate the mean and standard deviation of the sampling distribution itself.
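To make the distinction concrete, here’s a minimal sketch of the kind of simulation the exam’s dot plot represents. The claimed population proportion of 65% comes from the problem; the sample size of n = 100 and the number of simulated samples are my assumptions, since the exam’s actual values aren’t reproduced here.

```python
import random
from statistics import mean, stdev

# Assumed setup: claimed proportion p0 = 0.65 (from the problem),
# sample size n = 100 and 1000 simulated samples (both hypothetical).
p0, n, trials = 0.65, 100, 1000
random.seed(0)

# Simulate many samples from a population where 65% drive to work,
# recording each sample's proportion of drivers.
props = [sum(random.random() < p0 for _ in range(n)) / n
         for _ in range(trials)]

# Estimate the center and spread of the sampling distribution from the
# simulated proportions, then take mean +/- 2 standard deviations to
# approximate the interval containing the middle ~95%.
m, s = mean(props), stdev(props)
lo, hi = m - 2 * s, m + 2 * s
print(f"middle ~95% interval: ({lo:.3f}, {hi:.3f})")
print("Is 0.61 plausible under the claimed 65%?", lo <= 0.61 <= hi)
```

The point is that the interval describes the theoretical sampling distribution, not the simulated dots themselves: whether an observed 61% counts as evidence depends on whether it falls outside that interval, which in turn depends heavily on the sample size.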
Drawing inferences using statistics is subtle, and vaguely referring to “the data” confuses and obscures the important details of the process. I spent a lot of time trying to clearly build these ideas up in my book Painless Statistics precisely because I don’t think most students and math teachers, many of whom are now occasional statistics teachers, really understand the connection between sampling distributions, estimators, and inference making. As evidence of that, consider this student response.
The student refers to the interval they’ve constructed as a “confidence interval”. While similar in structure, this is not a confidence interval: a confidence interval is used to estimate an unknown population parameter, which is not what is happening here (the population proportion has already been estimated to be 65%). The fact that this student received full credit for this response suggests there are probably more than a few math teachers out there who also think this is a confidence interval. (At least they aren’t saying a 95% confidence interval means they are 95% confident of their results, as they have done before.)
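The structural similarity is exactly why the two get confused, so it may help to put them side by side. This is my own illustration, not the student’s work, and again the sample size n = 100 is an assumption:

```python
from math import sqrt

n = 100        # assumed sample size (not given in the excerpt above)
p_hat = 0.61   # observed sample proportion
p0 = 0.65      # claimed population proportion

# A confidence interval estimates an UNKNOWN parameter, so it is
# centered at the sample statistic p_hat:
se_hat = sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 2 * se_hat, p_hat + 2 * se_hat)

# The exam's interval plays a different role: it is centered at the
# CLAIMED value p0 and asks whether a sample proportion like p_hat
# is a plausible result if the claim is true:
se0 = sqrt(p0 * (1 - p0) / n)
plausibility = (p0 - 2 * se0, p0 + 2 * se0)

print(f"confidence interval (around p_hat): ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"plausibility interval (around p0):  ({plausibility[0]:.3f}, {plausibility[1]:.3f})")
```

Same “statistic ± 2 standard deviations” structure, but centered at different values and answering different questions: one estimates an unknown parameter, the other checks an observed statistic against a claimed one.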
It’s good that more students are learning more statistics, but in order to teach and learn statistics properly we can’t have our standardized tests working against us.
We recently passed the one-year anniversary of school closures in New York City, which means that I have been engaged in remote / hybrid teaching for over a year. You might be surprised to learn that, during the past year, the NYC Department of Education has provided me virtually no support or training on how to teach remotely. Unless you’re a teacher, in which case you wouldn’t be surprised at all.
The lack of guidance at the outset of emergency remote learning in the spring of 2020 is somewhat understandable. Everyone was caught off guard, and schools and teachers were given a week to figure out how to do something they had never done before. Sure, it would have been nice if someone at the country’s largest school system actually knew something about remote learning — it certainly isn’t new — but at that point the best most of us could do was just react.
It was disappointing to return to school in the fall and discover that, after three months of emergency remote learning and an additional two months of summer to prepare, the DOE still didn’t have anything helpful to say about the practice of remote instruction. I’m sure some high-priced consultants or over-paid administrators prepared some unusable documents for us, and maybe a platitude-heavy keynote or webinar was offered. But the details of how to actually make this work were left up to the teacher. As those details usually are.
I was lucky to have a handful of thoughtful colleagues I could bounce ideas off, virtually observe, and share successes and failures with. This helped me identify the pressing issues I needed to deal with, and allowed me to converge on a system that, for the most part, works for me and my students. I doubt my teaching would win any awards this year, but students are learning, math is happening, and progress is being made.
The amount of struggle and independent effort required to reach even this point makes me wonder, where was the support? A year of remote learning was predictable enough in the face of a global pandemic. How is it that an expansive administration, one that oversees 75,000 teachers that serve 1 million students, had virtually nothing helpful to say about how to best implement remote instruction? As has happened so often throughout my career as a teacher, it seemed like it was all up to me to figure it out for myself and my students.
Midway through the year some teachers at my school ran workshops based on training in remote learning they received from Columbia’s Teachers College, an education school which enjoys great prestige. In these workshops the teachers told us what they learned about Maslow’s hierarchy of needs, and the difference between a recall question and a thought-provoking question. I remember listening to the same thing 20 years ago in my required education courses, and it was about as helpful then as it is now. I came to the same sad conclusion I came to 20 years ago: If this is the best they have to offer, I’m probably better off figuring it out on my own.
It was the last day of school in 2020 and my students were in breakout rooms finishing up a group quiz on quadrilaterals. Class was nearly over, and they were free to log off and head to their next class as soon as they pressed “Submit”.
I noticed one breakout room had only two students in it, which was odd as students were placed in groups of four. Why were these two finishing the quiz without the help of their groupmates? I popped in to see what was going on.
I saw two of my students looking relaxed and comfortable on their screens. “Where are the others?” I said, with some accusation. “They left,” replied the remaining students, without any of the indignation warranted by the situation. Maybe I could help them find their anger: “Why aren’t they helping you finish?” I was not prepared for the answer.
“Oh, we’re done. We already submitted.”
It took me a moment to re-process the situation. “So, why are you still here?”
“We’re just hanging out,” they said. “We’re using your breakout room to talk.” Suddenly I felt like a nosy parent in my own Zoom meeting.
One of my biggest concerns at the start of this year was how we would build connections in remote learning. I’m trying my best, but the culture and social dynamic of our classroom is nothing like it would be in person. In evaluating this aspect of my work, I don’t feel like a success.
But to see my students take a moment after class to socialize made me feel a little better. After twenty minutes of arguing about squares and rhombuses, they wanted to connect a bit more. And they found a way. Perhaps we all will.
I’ve resisted comparing pandemic era teaching to my first year in a classroom. I mean, I may be uncomfortable, but I’m not having night terrors.
But after three weeks of teaching digitally I noticed something that hasn’t been true in a very long time: The second time I teach a class goes much, much smoother than the first.
There’s just so much I’m not prepared to prepare for. Did I upload that problem set? Did I change the permissions so everyone can read it? Did I prepare an agenda slide? Is it open in my second screen? Am I sharing my second screen? Did I remember my document camera? Why are breakout rooms not working? How long has Kendra been waiting to get back into the meeting? Oh, wait, am I muted again?
Teaching digitally has stripped me of the procedural expertise I’ve developed, and relied on, over the past 20 years of running classrooms. The dozens of automatic decisions and reactions that usually speed class up are now slowing me down.
Luckily my students are patient and understanding. We’ll get up to speed eventually. But this was the first week I let myself dream a little about work getting back to normal. I look forward to being an expert again.