Archive of posts filed under the Teaching category.

Regents Recap — August 2017: Yes, You Can Work on Both Sides of an Identity

In a controversial post last year, I argued that it’s perfectly acceptable to work on both sides of an equation in proving an algebraic identity. While it’s common to tell students “You can’t cross the equal sign” in this situation, doing so is mathematically legitimate as long as the new equation is true under exactly the same circumstances as the original.

For example, when proving an algebraic identity, multiplying both sides of an equation by 2 is permissible, because x = y and 2x = 2y are true under exactly the same conditions on x and y. Squaring both sides of an equation, however, is not, since

x^2 = y^2

can be true under conditions that make x = y false, say, when x = 2 and y = -2.
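To make "true under exactly the same circumstances" concrete, here is a minimal numerical sketch (not from the original post): over a small grid of sample pairs, doubling both sides never changes whether the equation holds, while squaring can make a false equation true.

```python
# A quick check of which operations preserve the solution set of x = y.
# Illustrative only: it samples a few integer pairs rather than proving anything.

pairs = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]

for x, y in pairs:
    # Multiplying both sides by 2 is reversible: the truth value never changes.
    assert (x == y) == (2 * x == 2 * y)

# Squaring is not reversible: x^2 = y^2 can hold even when x = y is false.
counterexamples = [(x, y) for x, y in pairs if x**2 == y**2 and x != y]
print(counterexamples)  # e.g. (2, -2), (-3, 3), ...
```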

The post in question, “Algebra is Hard”, was a response to a June 2016 Regents scoring guide that deducted a point from a student who, in proving an algebraic identity, multiplied both sides of the equation by a non-zero quantity. The student was penalized for “not manipulating expressions independently in an algebraic proof”, a vague and meaningless criticism.

“Algebra is Hard” received quite a bit of attention, and while many agreed with me, I was genuinely surprised at how many readers disagreed. Which was terrific! Of course my argument makes perfect sense to me, but it was great to have so many constructive conversations with teachers and mathematicians who saw things differently.

But my argument recently received support from the most unlikely of sources: another Regents exam.

Take a look at this exemplar full-credit student response to an algebraic identity on the August 2017 Algebra 2 exam.

Notice that the student works on both sides of the equation and subtracts the same quantity from both sides. Even though the student did not manipulate expressions independently in an algebraic proof, full credit was awarded.

The note here about domain restrictions is an amusing touch, given that it was the explicit domain restriction in the problem from 2016 that ensured the student wasn’t doing something impermissible (namely, multiplying both sides of an equation by 0).

So in 2016 this work gets half credit, and in 2017 this work gets full credit.

While it’s nice to see mathematically valid work finally receiving full credit on this type of problem, it’s no consolation to the many students who lost points for doing the same thing the year before. What’s especially frustrating is that, as usual, those responsible for creating these exams will admit no error nor accept any responsibility for it.

Be sure to read “Algebra is Hard” (and some of the 40+ comments!) for more of the backstory on this problem.

Related Posts

The Math Behind Gerrymandering and Wasted Votes — Quanta Magazine

The U.S. Supreme Court is currently considering a case about partisan gerrymandering in Wisconsin and Texas. One of the keys to the case is the “efficiency gap”, an attempt to quantify the partisan bias in a given electoral map. For my latest article in Quanta Magazine, I explain and explore the efficiency gap using simple examples, and talk about some of the implications of this particular measurement.

Imagine fighting a war on 10 battlefields. You and your opponent each have 200 soldiers, and your aim is to win as many battles as possible. How would you deploy your troops? If you spread them out evenly, sending 20 to each battlefield, your opponent could concentrate their own troops and easily win a majority of the fights. You could try to overwhelm several locations yourself, but there’s no guarantee you’ll win, and you’ll leave the remaining battlefields poorly defended. Devising a winning strategy isn’t easy, but as long as neither side knows the other’s plan in advance, it’s a fair fight.

Now imagine your opponent has the power to deploy your troops as well as their own. Even if you get more troops, you can’t win.

The full article is freely available here.
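Since the article centers on the efficiency gap, here is a minimal sketch of the computation, assuming the simplified two-party definition that is commonly quoted: a party's wasted votes are all of its votes in districts it loses, plus its surplus beyond a bare majority in districts it wins. The district totals below are invented purely for illustration.

```python
# Minimal sketch of the efficiency gap for a two-party election.
# Wasted votes: every vote for a losing candidate, plus a winner's votes
# beyond the bare majority needed to win. The district results are made up.

districts = [  # (votes for party A, votes for party B)
    (60, 40), (60, 40), (60, 40), (25, 75), (25, 75),
]

def efficiency_gap(districts):
    """Positive result: party A wasted more votes, so the map favors party B."""
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        votes = a + b
        total += votes
        needed = votes / 2  # bare majority needed to win (assumes no exact ties)
        if a > b:
            wasted_a += a - needed  # A's surplus votes beyond the majority
            wasted_b += b           # every losing vote is wasted
        else:
            wasted_b += b - needed
            wasted_a += a
    return (wasted_a - wasted_b) / total

gap = efficiency_gap(districts)
print(f"Efficiency gap: {gap:+.1%}")  # about -18% here: this map favors party A
```

In this toy map, party A wins three of five seats with only 46% of the votes, and the negative gap reflects that advantage.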

Regents Recap — June, 2017: More Trouble With Statistics

High school math courses contain more statistics than ever, which means more statistics questions on end-of-year exams.  Sometimes these questions make me wonder what test makers think we are supposed to be teaching.  Here are two examples from the June, 2017 exams.

First, number 15 from the June, 2017 Common Core Algebra exam.

This question puzzled me.  The only unambiguous answer choice is (3), which can be quickly eliminated.  The other answer choices all involve descriptors that are not clearly defined:  “evenly spread”, “skewed”, and “outlier”.

The correct answer is (4).  I agree that “79 is an outlier” is the best available answer, but it’s curious that the exam writers pointed out that an outlier would affect the standard deviation of a set of data.  Of course, every piece of data affects the standard deviation of a data set, not just outliers.
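To see the point concretely, here is a minimal sketch with made-up data (echoing the 79 mentioned in the question): removing any single value, not just the apparent outlier, changes the standard deviation, even though the outlier's effect is the largest.

```python
import statistics

# Made-up data set with one apparent outlier (79).
data = [88, 90, 92, 93, 95, 96, 79]

print(f"Standard deviation of full data set: {statistics.pstdev(data):.3f}")

# Remove each value in turn: the standard deviation changes every time,
# not only when the outlier is removed.
for i, value in enumerate(data):
    reduced = data[:i] + data[i + 1:]
    print(f"without {value}: {statistics.pstdev(reduced):.3f}")
```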

From the Common Core Algebra 2 exam, here is an excerpt from number 35, a question about simulation, inference, and confidence intervals.

I can’t say I understand the vision for statistics in New York’s Algebra 2 course, but I know one thing we definitely don’t want to do is propagate dangerous misunderstandings like “A 95% confidence interval means we are 95% confident of our results”.  We must expect better from our exams.
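For contrast, here is a minimal simulation sketch (not tied to the exam question) of what a 95% confidence level actually refers to: if the sampling procedure is repeated many times, roughly 95% of the intervals constructed that way capture the true population parameter, which is quite different from being "95% confident of our results" about any single sample.

```python
import random

# Simulate repeated sampling from a population with a known proportion,
# and count how often the usual 95% interval for a proportion captures it.
random.seed(0)

TRUE_P = 0.40      # true population proportion (known only because we simulate)
N = 200            # sample size
TRIALS = 10_000
Z = 1.96           # critical value for a 95% normal-approximation interval

covered = 0
for _ in range(TRIALS):
    sample = [random.random() < TRUE_P for _ in range(N)]
    p_hat = sum(sample) / N
    margin = Z * (p_hat * (1 - p_hat) / N) ** 0.5
    if p_hat - margin <= TRUE_P <= p_hat + margin:
        covered += 1

print(f"Intervals containing the true proportion: {covered / TRIALS:.1%}")  # close to 95%
```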

UPDATE: Amy Hogan (@alittlestats) has written a nice follow up post here.

Related Posts

Regents Recap — June, 2017: Three Students Solve a Math Problem

I will never understand why so many exam questions are written like this (question 5 from the June, 2017 Algebra exam):

What is the purpose of the artificial context?  Why must the question be framed as though three people are comparing their answers?  Why not just write a math question?  Rewritten without the artificial context, this question not only addresses the same mathematical content, it makes the mathematics the explicit focus.  This would seem to be a desirable quality in a mathematical assessment item.

Instead of wasting time concocting absurd scenarios for these problems, let’s focus on making sure the questions that end up on these exams are mathematically correct.

Related Posts

Regents Recap — June, 2017: The Underlying Problem with the New York State Regents Exams

I’ve been writing critically about the New York State Regents exams in mathematics for many years.  Underlying all the erroneous and poorly worded questions, problematic scoring guidelines, and inconsistent grading policies is a simple fact:  the process of designing, writing, editing, and administering these high-stakes exams is deeply flawed.  There is a lack of expertise, supervision, and ultimately, accountability in this process.  The June, 2017 Geometry exam provides a comprehensive example of these criticisms.

The New York State Education Department has now admitted that at least three mathematically erroneous questions appeared on the June, 2017 Geometry exam.  It’s bad enough for a single erroneous question to make it onto a high-stakes exam taken by 100,000 students.  The presence of three mathematical errors on a single test points to a serious problem in oversight.

Two of these errors were acknowledged by the NYSED a few days after the exam was given.  The third took a little longer.

Ben Catalfo, a high school student in Long Island, noticed the error.  He brought it to the attention of a math professor at SUNY Stony Brook, who verified the error and contacted the state.  (You can see my explanation of the error here.)  Apparently the NYSED admitted they had noticed this third error, but they refused to do anything about it.

It wasn’t until Catalfo’s Change.org campaign received national attention that the NYSED felt compelled to publicly respond.  On July 20, ABC News ran a story about Catalfo and his petition.  In the article, a spokesperson for the NYSED tried to explain why, even though Catalfo’s point was indisputably valid, they would not be re-scoring the exam nor issuing any correction:

“[Mr. Catalfo] used mathematical concepts that are typically taught in more advanced high school or college courses. As you can see in the problem below, students weren’t asked to prove the theorem; rather they were asked which of the choices below did not provide enough information to solve the theorem based on the concepts included in geometry, specifically cluster G.SRT.B, which they learn over the course of the year in that class.”

There is a lot to dislike here.  First, Catalfo used the Law of Sines in his solution: far from being “advanced”, the Law of Sines is actually an optional topic in NY’s high school geometry course.  Presumably, someone representing the NYSED would know that.

Second, the spokesperson suggests that the correct answer to this test question depends primarily on what was supposed to be taught in class, rather than on what is mathematically correct.  In short, if students weren’t supposed to learn that something is true, then it’s ok for the test to pretend that it’s false.  This is absurd.

Finally, notice how the NYSED’s spokesperson subtly tries to lay the blame for this error on teachers:

“For all of the questions on this exam, the department administered a process that included NYS geometry teachers writing and reviewing the questions.”

Don’t blame us, suggests the NYSED:  it was the teachers who wrote and reviewed the questions!

The extent to which teachers are involved in this process is unclear to me.  But the ultimate responsibility for producing valid, coherent, and correct assessments lies solely with the NYSED.  When drafting any substantial collaborative document, errors are to be expected.  Those who supervise this process and administer these exams must anticipate and address such errors.  When they don’t, they are the ones who should be held accountable.

Shortly after making national news, the NYSED finally gave in.  In a memo distributed on July 25, over a month after the exam had been administered, school officials were instructed to re-score the exam, awarding full credit to all students regardless of their answer.

And yet the NYSED still refused to accept responsibility for the error.  The official memo read

“As a result of a discrepancy in the wording of Question 24, this question does not have one clear and correct answer.”

More familiar nonsense.  There is no “discrepancy in wording” here, nor here, nor here, nor here.  This question was simply erroneous.  It was an error that should have been caught in a review process, and it was an error that should have been addressed and corrected when it was first brought to the attention of those in charge.

From start to finish, we see problems plaguing this process.  Mathematically erroneous questions regularly make it onto these high stakes exams, indicating a lack of supervision and failure in management of the test creation process.  When errors occur, the state is often reluctant to address the situation.  And when forced to acknowledge errors, the state blames imaginary discrepancies in wording, typos, and teachers, instead of accepting responsibility for the tests they’ve mandated and created.

There are good things about New York’s process.  Teachers are involved.  The tests and all related materials are made entirely public after administration.  These things are important.  But the state must devote the leadership, resources, and support necessary for creating and administering valid exams, and they must accept responsibility, and accountability, for the final product.  It’s what New York’s students, teachers, and schools deserve.

Related Posts