
Regents Recap — June 2016: Reused Questions

When I looked at performance data from the June 2016 Common Core Geometry Regents exam, I noticed that students did exceptionally well on one of the final multiple-choice questions.  It didn’t take long to figure out why:  it was virtually identical to a question asked on the August 2015 exam.

[Image: Question 20 from the June 2016 Common Core Geometry Regents exam]

Just a few words changed here and there.  All the specifics of the problem, and all the answer choices, are exactly the same.

The most basic quality-control system conceivable should prevent questions from being copied from last year’s exam.  It’s hard to understand how something like this could happen on a high-stakes exam that affects tens of thousands of students and teachers.

Issues like this, which call into question the validity of these exams, are what originally led me to start asking the question, “Are these tests any good?”

Related Posts


Regents Recap — June 2016: Scale Maintenance

This June, the Common Core Algebra Regents exam came with a surprising change in its conversion chart, which is used to translate raw test scores into official “scaled” scores.

Below is a sampling of the conversions from the June 2014 and June 2016 exams.  I’ve chosen June 2014, the first administration of the Common Core Algebra Regents exam in New York, as a specific comparison, but those numbers are consistent for all exams given prior to June 2016.

[Table: sample raw-to-scaled score conversions from the June 2014 and June 2016 Common Core Algebra Regents exams]

For example, to earn a scaled score of 85 out of 100 on the Common Core Algebra Regents exam (the so-called mastery score), a student had to earn 73 out of 86 points (85%) in 2014, but only 67 out of 86 points (78%) in 2016.  The changes are quite dramatic in places:  to earn a scaled score of 75, a student had to earn 57 out of 86 points (66%) in 2014, but only 39 out of 86 points (45%) in 2016.
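To make the comparison concrete, here is a minimal sketch in Python that recomputes those raw percentages from the four cutoffs quoted above (just these data points, not the full conversion charts):

    # Raw-score cutoffs (out of 86) for selected scaled scores on the
    # Common Core Algebra Regents exam, as quoted above.
    MAX_RAW = 86
    cutoffs = [
        ("June 2014", 85, 73),  # the "mastery" score
        ("June 2016", 85, 67),
        ("June 2014", 75, 57),
        ("June 2016", 75, 39),
    ]

    for administration, scaled, raw in cutoffs:
        pct = 100 * raw / MAX_RAW
        print(f"{administration}: {raw}/{MAX_RAW} raw ({pct:.0f}%) -> scaled {scaled}")

Rounding to the nearest percent reproduces the figures above: 85% versus 78% for a scaled score of 85, and 66% versus 45% for a scaled score of 75.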

Overall, the new conversion is much more generous.  If you happen to teach Algebra in New York state, it should look familiar: it is essentially the conversion used with the old Integrated Algebra Regents exam, which the Common Core Algebra Regents exam replaced in 2014.

Why was such a dramatic change made this year in how this exam is scored?  It’s not because the exam has gotten harder.  The Common Core Algebra Regents exams have been fairly consistent in difficulty since being introduced in 2014.  In particular, the June 2016 exam was roughly equivalent in difficulty to the prior exams.

So then, why the change?  Scale maintenance.

Here’s an excerpt from a New York State Education Department memo dated May 9, 2016, that reports the recommendations of a workgroup convened by the Board of Regents to study Common Core Regents exam scoring.

In addition, the Workgroup reviewed relevant data from the Regents Examination in Algebra I (Common Core) and recommended that scale maintenance be performed such that the passing standard is realigned with the recommendations of the educator panel from June 2014 when the exam was first administered.  [Source]

Scale maintenance.

It seems that when the Common Core Algebra Regents exam was rolled out in 2014, an “educator panel” recommended a conversion chart similar to the one used for the Integrated Algebra Regents exam.  Their recommendation was obviously rejected, since the conversion chart that has been used for the past two years is quite harsh, especially for mastery rates.  [See here for a visual comparison.]

Adopting this conversion chart now, in 2016, seems like a tacit admission that rejecting the educator panel’s recommendation in 2014 was a mistake.  Which seems like an acknowledgement that the exams in 2014 and 2015 were scored improperly.

Regardless of how this is framed, it’s clear that the students who took the exam in 2014 and 2015 have been treated quite unfairly.  Had their old scores on comparable exams been scaled using this year’s conversion chart, their official grades would be higher, perhaps substantially so.  For example, the 57 raw points that converted to a scaled score of 75 in 2014 would convert to something between 75 and 85 under the 2016 chart, whose cutoffs for those scaled scores are 39 and 67 raw points.

This complaint seems to have been anticipated.  Here’s another excerpt from the same memo:

The Workgroup recommended the maintenance for the June 2016 administration of the Regents Examination in Algebra I (Common Core) and that the resulting adjustment be applied to this and future administrations only.

In other words, we’ll be seeing this more generous conversion chart on this and all future exams, but if you are a student who took the exam in 2014 or 2015, or a teacher or school who was evaluated based on those exams, you’re out of luck.

Testing is often presented as an objective means to evaluate student learning and teacher and school performance, but episodes like this clearly demonstrate how subjective it can be.  This year, the state simply decided that students would get higher Common Core Algebra Regents scores, and so they did, even though their actual performance probably didn’t change at all.  Maybe two years ago, the state simply decided that students should get lower scores, and so those lower mastery rates from 2014 and 2015 might not actually reflect lower student performance.

As a final remark, it’s hard not to think about the word accountability in this situation.  For the past decade, “more accountability” has been a common refrain in education, as in, more accountability for teachers, for schools, and for districts.  If we’ve been improperly scoring a high-stakes exam for the past two years, and tens of thousands of students and teachers have been affected, will anyone be held accountable for that?

Related Posts

Scratch@MIT Conference, 2016

I’m excited to be participating in this summer’s Scratch@MIT conference.

The conference, held at the MIT Media Lab, brings together educators, researchers, developers, and other members of the Scratch community to share how they use Scratch, the free, block-based, web-based programming environment, in and out of classrooms.  The theme of this year’s conference, Many Paths, Many Styles, aims to highlight the value of diversity in creative learning experiences.

I’ll be running a workshop on Mathematical Simulation in Scratch, which will introduce participants to some of the ways I’ve been using Scratch in my math classes.  I’m looking forward to sharing, and learning!  And I’m grateful to Math for America, whose partial support has made it possible for me to attend.

The 2016 Scratch@MIT conference runs from August 4th through 6th.  You can find more information here.

Related Posts


Regents Recap — June 2016: What Do They Want to Hear?

I read this problem several times and still did not understand what it was asking for.  It is the first part of problem 36 from the June 2016 Common Core Geometry Regents exam.

[Image: Question 36 from the June 2016 Common Core Geometry Regents exam]

“The base with a diameter of 2 inches must be parallel to the base with a diameter of 3 inches in order to find the height of the cone.  Explain why.”  Explain why?  What do they want to hear?

Is the expectation that students will say something like “Height is only well-defined when measured between two parallel objects”, or “If the bases aren’t parallel, the height will vary depending on where the measurement is taken, thus height is only a meaningful measurement when the bases are parallel”?  As usual, the rubric was no help, simply awarding points if “a correct explanation is given.”
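For what it’s worth, here is a sketch of the kind of explanation the question seems to want (my wording, not the rubric’s):

    Let $P_1$ and $P_2$ be the planes containing the two bases, and define
    the height $h$ to be the distance between them.  Parallel planes are
    everywhere equidistant, so $h$ is a single, well-defined number.  If
    $P_1$ and $P_2$ were not parallel, the distance from a point of $P_1$
    to $P_2$ would vary with the point chosen, and “the height” would not
    name a unique quantity.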

But the model student work says it all.  Here are two examples of complete and correct solutions.

[Image: two samples of model student work for Question 36]

These are not explanations of why the two bases must be parallel.  These are descriptions of how you might compute the height given that the two bases are parallel.  This argument essentially says, “The bases are parallel because, in order to answer this question, I need to apply a technique that requires that the bases be parallel.”

Not only is this not an explanation, it’s a kind of argument we want to teach students not to make.  Validating these responses works against what we should be trying to do as math teachers.

These high-stakes exams shouldn’t encourage teachers to promote invalid mathematical thinking.  Unfortunately, as the posts below suggest, it’s happening far too often.

Related Posts


Regents Recap — June 2016: Simplest Form

“Simplest form” is a dangerous phrase in math class.  Whether a form of an expression is simple or not depends on context.  For example, while $\frac{3}{8}$ and $\frac{21}{56}$ are representations of the same number, the first fraction is likely to be seen as simpler than the second.  But if the goal were, say, to determine if the number was greater than $\frac{17}{56}$, then the expression on the right might be considered simpler.
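To spell the example out: reducing makes the value easy to read, while the common denominator makes the comparison immediate.

    \frac{3}{8} \;=\; \frac{3 \cdot 7}{8 \cdot 7} \;=\; \frac{21}{56}, \qquad \frac{21}{56} > \frac{17}{56} \iff 21 > 17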

Despite the wide and varied uses of the phrase “simplest form”, I have never heard it used in the context of complex numbers.  So I was surprised by this Common Core Algebra 2 Regents exam question.

[Image: Question 3 from the June 2016 Common Core Algebra 2 Regents exam]

I don’t know what the author of this question means here by “simplest form”.  I asked around, and someone suggested that the natural interpretation of “simplest form” here is a + bi form.  That seems reasonable, but since none of the answer choices is in a + bi form, the author of this question could not have meant that.  [It is also worth noting the implicit assumption here that y is a real number, an issue that has come up before on these exams.]
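For reference, writing a complex expression in a + bi form just means separating it into a real part a and a real coefficient b on i.  Here is a minimal sketch with sympy, using a made-up expression for illustration (not the actual exam item):

    import sympy as sp

    # y is assumed to be real -- the same implicit assumption noted above.
    y = sp.symbols("y", real=True)

    # A hypothetical expression, for illustration only.
    expr = (2 + 3*sp.I) * (y - sp.I)

    # Separate into a + bi form: real part a, real coefficient b on i.
    a = sp.re(sp.expand(expr))
    b = sp.im(sp.expand(expr))
    print(f"a = {a}, b = {b}")  # a = 2*y + 3, b = 3*y - 2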

What’s most bothersome about this imprecise use of language is that it is completely irrelevant to the question.  Whatever “simplest form” means here, it is of no consequence:  there is no answer choice that is otherwise correct but in some improper form.

The question should simply ask which expression is equivalent to the given expression.  The use of “simplest form” here not only obfuscates the mathematics of the problem but also models imprecise use of mathematical terminology.  We should expect our high-stakes exams to do better.

Related Posts