
Regents Recap — June, 2017: More Trouble With Statistics

High school math courses contain more statistics than ever, which means more statistics questions on end-of-year exams.  Sometimes these questions make me wonder what test makers think we are supposed to be teaching.  Here are two examples from the June, 2017 exams.

First, number 15 from the June, 2017 Common Core Algebra exam.

This question puzzled me.  The only unambiguous answer choice is (3), which can be quickly eliminated.  The other answer choices all involve descriptors that are not clearly defined:  “evenly spread”, “skewed”, and “outlier”.

The correct answer is (4).  I agree that “79 is an outlier” is the best available answer, but it’s curious that the exam writers pointed out that an outlier would affect the standard deviation of a set of data.  Of course, every piece of data affects the standard deviation of a data set, not just outliers.
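The point is easy to check numerically. In the sketch below the scores are hypothetical (not the data from the exam question); nudging even a middle-of-the-pack value moves the standard deviation, outlier or not.

```python
from statistics import pstdev

scores = [68, 72, 75, 79, 81, 85]   # hypothetical data set
base = pstdev(scores)

# Change a middle value (not an outlier) by a single point...
nudged = [68, 72, 76, 79, 81, 85]
moved = pstdev(nudged)
# ...and the standard deviation still changes.
```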

From the Common Core Algebra 2 exam, here is an excerpt from number 35, a question about simulation, inference, and confidence intervals.

I can’t say I understand the vision for statistics in New York’s Algebra 2 course, but I know one thing we definitely don’t want to do is propagate dangerous misunderstandings like “A 95% confidence interval means we are 95% confident of our results”.  We must expect better from our exams.
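For what it's worth, the correct interpretation is easy to demonstrate with a short simulation: roughly 95% of intervals constructed this way capture the true parameter. The true proportion, sample size, and trial count below are hypothetical choices for illustration.

```python
import math
import random

random.seed(0)
p, n, trials = 0.5, 100, 2000       # hypothetical true proportion, sample size, trials
hits = 0
for _ in range(trials):
    sample = [random.random() < p for _ in range(n)]
    phat = sum(sample) / n
    margin = 1.96 * math.sqrt(phat * (1 - phat) / n)   # 95% (Wald) interval
    if phat - margin <= p <= phat + margin:
        hits += 1
coverage = hits / trials            # close to 0.95
```

The confidence is in the procedure, not in any single interval: each individual interval either contains p or it doesn't.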

UPDATE: Amy Hogan (@alittlestats) has written a nice follow up post here.


Regents Recap — June, 2017: Three Students Solve a Math Problem

I will never understand why so many exam questions are written like this (question 5 from the June, 2017 Algebra exam):

What is the purpose of the artificial context?  Why must the question be framed as though three people are comparing their answers?  Why not just write a math question?

This question not only addresses the same mathematical content, it also makes the mathematics the explicit focus.  This would seem to be a desirable quality in a mathematical assessment item.

Instead of wasting time concocting absurd scenarios for these problems, let’s focus on making sure the questions that end up on these exams are mathematically correct.


Regents Recap — June, 2017: The Underlying Problem with the New York State Regents Exams

I’ve been writing critically about the New York State Regents exams in mathematics for many years.  Underlying all the erroneous and poorly worded questions, problematic scoring guidelines, and inconsistent grading policies, is a simple fact:  the process of designing, writing, editing, and administering these high-stakes exams is deeply flawed.  There is a lack of expertise, supervision, and ultimately, accountability in this process.  The June, 2017 Geometry exam provides a comprehensive example of these criticisms.

The New York State Education Department has now admitted that at least three mathematically erroneous questions appeared on the June, 2017 Geometry exam.  It’s bad enough for a single erroneous question to make it onto a high-stakes exam taken by 100,000 students.  The presence of three mathematical errors on a single test points to a serious problem in oversight.

Two of these errors were acknowledged by the NYSED a few days after the exam was given.  The third took a little longer.

Ben Catalfo, a high school student in Long Island, noticed the error.  He brought it to the attention of a math professor at SUNY Stony Brook, who verified the error and contacted the state.  (You can see my explanation of the error here.)  Apparently the NYSED admitted they had noticed this third error, but they refused to do anything about it.

It wasn’t until Catalfo’s Change.org campaign received national attention that the NYSED felt compelled to publicly respond.  On July 20, ABC News ran a story about Catalfo and his petition.  In the article, a spokesperson for the NYSED tried to explain why, even though Catalfo’s point was indisputably valid, they would not be re-scoring the exam nor issuing any correction:

“[Mr. Catalfo] used mathematical concepts that are typically taught in more advanced high school or college courses. As you can see in the problem below, students weren’t asked to prove the theorem; rather they were asked which of the choices below did not provide enough information to solve the theorem based on the concepts included in geometry, specifically cluster G.SRT.B, which they learn over the course of the year in that class.”

There is a lot to dislike here.  First, Catalfo used the Law of Sines in his solution: far from being “advanced”, the Law of Sines is actually an optional topic in NY’s high school geometry course.  Presumably, someone representing the NYSED would know that.

Second, the spokesperson suggests that the correct answer to this test question depends primarily on what was supposed to be taught in class, rather than on what is mathematically correct.  In short, if students weren’t supposed to learn that something is true, then it’s ok for the test to pretend that it’s false.  This is absurd.

Finally, notice how the NYSED’s spokesperson subtly tries to lay the blame for this error on teachers:

“For all of the questions on this exam, the department administered a process that included NYS geometry teachers writing and reviewing the questions.”

Don’t blame us, suggests the NYSED:  it was the teachers who wrote and reviewed the questions!

The extent to which teachers are involved in this process is unclear to me.  But the ultimate responsibility for producing valid, coherent, and correct assessments lies solely with the NYSED.  When drafting any substantial collaborative document, errors are to be expected.  Those who supervise this process and administer these exams must anticipate and address such errors.  When they don’t, they are the ones who should be held accountable.

Shortly after making national news, the NYSED finally gave in.  In a memo distributed on July 25, over a month after the exam had been administered, school officials were instructed to re-score the exam, awarding full credit to all students regardless of their answer.

And yet the NYSED still refused to accept responsibility for the error.  The official memo read

“As a result of a discrepancy in the wording of Question 24, this question does not have one clear and correct answer.”

More familiar nonsense.  There is no “discrepancy in wording” here, nor here, nor here, nor here.  This question was simply erroneous.  It was an error that should have been caught in a review process, and it was an error that should have been addressed and corrected when it was first brought to the attention of those in charge.

From start to finish, we see problems plaguing this process.  Mathematically erroneous questions regularly make it onto these high-stakes exams, indicating a lack of supervision and a failure of management in the test creation process.  When errors occur, the state is often reluctant to address the situation.  And when forced to acknowledge errors, the state blames imaginary discrepancies in wording, typos, and teachers, instead of accepting responsibility for the tests they’ve mandated and created.

There are good things about New York’s process.  Teachers are involved.  The tests and all related materials are made entirely public after administration.  These things are important.  But the state must devote the leadership, resources, and support necessary for creating and administering valid exams, and they must accept responsibility, and accountability, for the final product.  It’s what New York’s students, teachers, and schools deserve.


Regents Recap — June, 2017: When Side-Side-Angle is Enough

Here is yet another mathematically erroneous question from New York’s June 2017 Geometry Regents exam.

At first this question seems straightforward.  There are several ways to determine if two triangles are similar, and the answer choices cover three of the basics: in (1) segment AB is parallel to segment ED, so congruent alternate interior angles can be used to show that the triangles are similar by Angle-Angle (AA); in (3) Side-Angle-Side (SAS) similarity can be used; and in (4), Side-Side-Side (SSS) similarity applies since all three pairs of sides are in proportion.

Presumably (2) is the answer choice that does not guarantee the triangles will be similar, and according to the official scoring guide provided by the state, (2) is the correct answer.  But as it turns out, (2) is also sufficient to guarantee that the triangles are similar.  This means that this question has no correct answer.

In (2), we have two pairs of sides in proportion and one pair of congruent angles (the vertical angles ECD and ACB).  This is the Side-Side-Angle (SSA) scenario, and because this set of information does not determine a unique triangle, SSA alone is not sufficient to establish that a pair of triangles are similar (or congruent).

But there is additional information to work with in this question.  The lengths of the sides of the triangles guarantee that angles B and D are both acute.  This is because there can be at most one non-acute angle in any triangle, which is necessarily the triangle’s largest angle, and the largest angle in a triangle must be opposite the triangle’s longest side.  Since angles B and D are not opposite their respective triangle’s longest side, they must be acute angles.  And it turns out that this additional piece of information allows us to conclude that the triangles are similar.

Here’s why.  Suppose you know the lengths of segments XY and YZ and the measure of an acute angle Z.  Depending on the length of XY, there are 0, 1, or 2 possible triangles XYZ.  Here’s a geometric representation of all the possibilities:

This explains why SSA fails to uniquely determine a triangle:  there may exist two different triangles consistent with the given information.

But if two triangles XYZ are possible, one of the triangles will have an obtuse angle at X and the other will have an acute angle at X.  This means that if we happen to know that angle X is acute, then only one triangle XYZ is possible, and so this set of information (SSA and the nature of the angles opposite the given sides) uniquely determines a triangle and can be used to establish similarity (or congruence) among a pair of triangles.  Thus, the information in (2) is sufficient to conclude the triangles are similar, and so there is no correct answer to the above exam question.
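The ambiguous case is easy to exhibit with concrete numbers (the measurements below are hypothetical, chosen only to illustrate the argument):

```python
import math

# Suppose angle Z = 30 degrees, XY = 5 (the side opposite Z), YZ = 8 (opposite X).
Z = math.radians(30)
XY, YZ = 5, 8

# Law of Sines: sin(X)/YZ = sin(Z)/XY
sinX = YZ * math.sin(Z) / XY         # 0.8
X_acute = math.asin(sinX)            # one candidate for angle X
X_obtuse = math.pi - X_acute         # the other candidate

# Both candidates leave room for the third angle, so two triangles exist...
two_triangles = (Z + X_acute < math.pi) and (Z + X_obtuse < math.pi)
# ...but exactly one of them has an acute angle at X.
```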

Alternately, a more algebraic argument uses the Law of Sines.  From triangle ABC we get

$\frac{\sin B}{7.2} = \frac{\sin \angle ACB}{8.1}$

$\sin B = \frac{8}{9} \sin \angle ACB$

and from triangle EDC we get

$\frac{\sin D}{2.4} = \frac{\sin \angle ECD}{2.7}$

$\sin D = \frac{8}{9} \sin \angle ECD$

And since

$\angle{ACB} \cong \angle{ECD}$

we can conclude that

$\sin B = \sin D$

Generally speaking, we can’t conclude that the measure of angle B is equal to the measure of angle D:  two angles with the same sine could be supplements or differ by a full revolution.  But since we know both angles are acute, we can conclude that

$m\angle{B} = m\angle{D}$

Thus, the triangles are similar by AA.  (This argument also shows that SSA together with knowledge of the nature of the angles is a congruence theorem.)
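Since the measure of the vertical angle isn't specified in the problem, the check below runs the Law of Sines computation for several hypothetical values of that angle; in every case sin B = sin D, and taking the acute solution forces B = D.

```python
import math

for acb in (40, 75, 110):                       # hypothetical vertical-angle measures
    ecd = acb                                   # vertical angles are congruent
    sinB = (7.2 / 8.1) * math.sin(math.radians(acb))
    sinD = (2.4 / 2.7) * math.sin(math.radians(ecd))
    B = math.degrees(math.asin(sinB))           # the acute solution for B
    D = math.degrees(math.asin(sinD))           # the acute solution for D
    assert math.isclose(B, D)                   # AA similarity follows
```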

So, this high-stakes exam question has no correct answer.  And despite the Change.org petition started by a 16-year-old student that made national news, the New York State Education Department refuses to issue a correction.  In fact, they refuse to acknowledge the indisputable fact that this question has no correct answer, perhaps because they don’t want to admit that a third question on this exam (see question 14 and question 22) has been determined to be mathematically erroneous.

UPDATE:  All the media attention apparently convinced the NYSED to award full credit to all test takers for this erroneous question.  Due to the discrepancy in wording, of course.


Regents Recap — June, 2017: Trouble with Dilations (and Logic)

The emphasis on transformations in Common Core Geometry has proven to be a challenge for the creators of the New York State Regents.  Here’s the latest example.

This is a tricky question.  So tricky, in fact, that it tripped up those responsible for creating this exam.

Dilation is a similarity mapping (assuming, as we do, that the scale factor is non-zero), and translation is a congruence mapping.  Thus, any composition of the two will be a similarity mapping, but not necessarily a congruence mapping.  So in the above question, statement II will always be true, and statements I and IV are not always true.
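A quick coordinate check of this claim (the triangle, center of dilation, scale factor, and translation vector below are all hypothetical): every pairwise distance is multiplied by the scale factor, so the composition preserves shape but not size.

```python
import math

def dilate(p, center, k):
    # dilation with the given center and scale factor k
    return (center[0] + k * (p[0] - center[0]),
            center[1] + k * (p[1] - center[1]))

def translate(p, v):
    return (p[0] + v[0], p[1] + v[1])

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# hypothetical triangle, dilated by factor 2 about (1, 1), then translated by (5, -2)
tri = [(0, 0), (4, 0), (1, 3)]
img = [translate(dilate(p, (1, 1), 2), (5, -2)) for p in tri]

# every pairwise distance is doubled: a similarity, not a congruence
ratios = [dist(img[i], img[j]) / dist(tri[i], tri[j])
          for i in range(3) for j in range(i + 1, 3)]
```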

Statement III requires closer attention.  Under most circumstances, translations and dilations map lines to parallel lines, and so the same would be true of their compositions.  However, if the center of dilation lies on a given line, or the translation is parallel to the given line, then that line will be mapped onto itself under the transformation.
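The exceptional case can also be checked in coordinates. In this hypothetical example the center of dilation lies on the line y = 2x, and points of the line land back on the line:

```python
def dilate(p, center, k):
    return (center[0] + k * (p[0] - center[0]),
            center[1] + k * (p[1] - center[1]))

def on_line(p):
    # the (hypothetical) line y = 2x
    return abs(p[1] - 2 * p[0]) < 1e-9

center = (1, 2)          # this center lies on y = 2x
images = [dilate((x, 2 * x), center, 3) for x in (-3.0, 0.0, 5.0)]
# each image point is still on y = 2x: the line maps onto itself
```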

This means that the answer to this test question hinges on the question, “Is a line parallel to itself?”

If the answer is yes, then statement III will always be true, and so (3) II and III will be the correct answer.  If the answer is no, then statement III won’t always be true, and so (1) II only will be the correct answer.

So which is the correct answer?  Well, that’s tricky, too.  The answer key provided by New York state originally gave (3) as the correct answer.  But several days later, the NYS Department of Education issued a memo instructing graders to accept both (1) and (3) as correct.  Apparently, the state isn’t prepared to take a stance on this issue.

Their final decision is amusing, as these two answer choices are mutually exclusive:  either statement III is always true or it isn’t always true.  It can’t be both.  Those responsible for this exam are trying to get away with quietly asserting that (P and not P) can be true!

Oddly enough, this wasn’t the only place on this very exam where this issue arose.  Here’s question 6:

Notice that this question directly acknowledges that the location of the center of dilation impacts whether or not a line is mapped to a parallel line.  It’s not entirely correct (a center’s location on the line, not the segment, is what matters), but it demonstrates some of the knowledge that was lacking in question 14.  How, then, did the problem with question 14 slip through?

As is typical, the state provided a meaningless and generic explanation for the error:  this problem was a result of discrepancies in wording.  But there are no discrepancies in wording here.  This is simply a careless error, one that should have been caught early in the test production process, and one that would have been caught if production of these exams were taken more seriously.
