Regents Recap — June 2016: How Much Should This Be Worth?

The following problem appeared on the June 2016 Common Core Algebra 2 Regents exam.

[Image: Question 28 from the June 2016 Common Core Algebra 2 Regents exam]

This is a straightforward and reasonable problem.  What’s unreasonable is that it is only worth two points.

The student here is asked to construct a representation of a mathematical object with six specific properties:  it must be a cosine curve;  it must be a single cycle; it must have amplitude 3; it must have period \pi / 2; it must have midline y = -1; and it must pass through the point (0,2).
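
For concreteness, here is one function whose graph satisfies all six conditions (the problem asks only for the graph, so this particular equation is my own illustration rather than part of the exam):

y = 3\cos(4x) - 1, \quad 0 \le x \le \pi /2

Its amplitude is 3, its period is 2\pi /4 = \pi /2, its midline is y = -1, it passes through (0,2), and restricting the domain to one period gives a single cycle of a cosine curve.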

That seems like a lot to ask for in a two-point problem, but the real trouble comes from the grading guidelines.

According to the official scoring rubric, a response earns one point if “One graphing error is made”.  Failure to satisfy any one of the six conditions would constitute a graphing error.  So a graph that satisfied five of the six required properties would earn one point out of two.  That means a response that is 83% correct earns 50% credit.

It gets worse.  According to the general Regents scoring guidelines, a combination of two graphing errors on a single problem results in a two-point deduction.  That means a graph with four of the six required properties, and thus two graphing errors, will earn zero points on this problem.  A response that is 67% correct earns 0% credit!
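
Laying out the arithmetic makes the mismatch plain:

\tfrac{5}{6} \approx 83\% \text{ correct} \rightarrow \tfrac{1}{2} = 50\% \text{ credit}, \qquad \tfrac{4}{6} \approx 67\% \text{ correct} \rightarrow \tfrac{0}{2} = 0\% \text{ credit}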

The decision to make this six-component problem worth two points creates a situation where students are unfairly and inconsistently evaluated.  It makes me wonder if those in charge of these exams actually considered the scoring consequences of their decision, especially since there are two obvious and simple fixes:  reduce the requirements of the problem, or increase its point value.

This is another example of how tests that are typically considered objective are significantly impacted by arbitrary technical decisions made by those creating them.


Regents Recap — June 2016: Are These Figures Congruent?

Given the congruent triangles below, is the statement “Triangle ABC can be proved congruent to triangle ZYX” true, or false?

[Image: two congruent triangles, labeled ABC and ZYX]

I imagine most will say that the statement is false, and argue that the correspondence of the triangles is incorrect.  That is, segment AB is not congruent to segment ZY, and so on.  I think this is a reasonable response.

However, a substantial part of me believes the statement is true.  “Triangle ABC” references an object, as does “triangle ZYX”.  These two objects are indeed congruent.  Thus, how can it be said they can’t be proved congruent?

In other words, I don’t believe the statement “Triangle ABC can be proved congruent to triangle ZYX” entails a binding correspondence in the way that the statement

\Delta ABC \cong \Delta XYZ

does.
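
To spell out the convention at issue:  writing the claim symbolically, as in

\Delta ABC \cong \Delta ZYX

would fix the correspondence A with Z, B with Y, and C with X, and so would assert, among other things, that \overline{AB} \cong \overline{ZY} and \angle A \cong \angle Z.  That is exactly the corresponding-parts reading under which the original statement looks false.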

I was thinking about this because of this question from the June 2016 Common Core Geometry Regents exam.

[Image: Question 16 from the June 2016 Common Core Geometry Regents exam]

According to the rubric, the correct answer is (3) reflection over the x-axis.  The most common incorrect response, of course, was (1) rotation.  But I’m not certain it’s really incorrect.  I don’t think anyone would get this question wrong based on my objection, but since the question is designed to entice students to say rotation, I think it deserves some scrutiny.


Regents Recap — June 2016: Reused Questions

When I looked at performance data from the June 2016 Common Core Geometry Regents exam, I noticed that students did exceptionally well on one of the final multiple choice questions.  It didn’t take long to figure out why:  it was virtually identical to a question asked on the August 2015 exam.

[Image: Question 20 from the June 2016 Common Core Geometry Regents exam, shown with the nearly identical question from the August 2015 exam]

Just a few words changed here and there.  All the specifics of the problem, and all the answer choices, are exactly the same.

The most basic quality control system conceivable should prevent questions from being copied from last year’s exam.  It’s hard to understand how something like this could happen on a high-stakes exam that affects tens of thousands of students and teachers.

Issues like this, which call into question the validity of these exams, are what led me to originally start asking the question, “Are these tests any good?”


Regents Recap — June 2016: Scale Maintenance

This June, the Common Core Algebra Regents exam came with a surprising change in its conversion chart, which is used to translate raw test scores into official “scaled” scores.

Below is a sampling of the conversions from the June 2014 and June 2016 exams.  I’ve chosen June 2014, the first administration of the Common Core Algebra Regents exam in New York, as a specific comparison, but those numbers are consistent for all exams given prior to June 2016.

[Table: sample raw-to-scaled score conversions from the June 2014 and June 2016 Common Core Algebra Regents exams]

For example, to earn a scaled score of 85 out of 100 on the Common Core Algebra Regents exam (the so-called mastery score), a student had to earn 73 out of 86 points (85%) in 2014, but only 67 out of 86 points (78%) in 2016.  The changes are quite dramatic in places:  to earn a scaled score of 75, a student had to earn 57 out of 86 points (66%) in 2014, but only 39 out of 86 points (45%) in 2016.
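
Those raw-score percentages come from dividing by the 86 points available on the exam:

\tfrac{73}{86} \approx 85\%, \quad \tfrac{67}{86} \approx 78\%, \quad \tfrac{57}{86} \approx 66\%, \quad \tfrac{39}{86} \approx 45\%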

Overall, the new conversion is much more generous.  If you happen to teach Algebra in New York state, it should look familiar: it is essentially the conversion used with the old Integrated Algebra Regents exam, which the Common Core Algebra Regents exam replaced in 2014.

Why was such a dramatic change made this year in how this exam is scored?  It’s not because the exam has gotten harder.  The Common Core Algebra Regents exams have been fairly consistent in difficulty since being introduced in 2014.  In particular, the June 2016 exam was roughly equivalent in difficulty to the prior exams.

So then, why the change?  Scale maintenance.

Here’s an excerpt from a NY State Education Department memo dated May 9th, 2016 that reports the recommendations of a workgroup convened by the Board of Regents to study Common Core Regents exam scoring.

In addition, the Workgroup reviewed relevant data from the Regents Examination in Algebra I (Common Core) and recommended that scale maintenance be performed such that the passing standard is realigned with the recommendations of the educator panel from June 2014 when the exam was first administered.  [Source]

Scale maintenance.

It seems that when the Common Core Algebra Regents exam was rolled out in 2014, an “educator panel” recommended a conversion chart similar to the one used for the Integrated Algebra Regents exam.  Their recommendation was obviously rejected, since the conversion chart that has been used for the past two years is quite harsh, especially for mastery rates.  [See here for a visual comparison.]

Adopting this conversion chart now, in 2016, seems like a tacit admission that rejecting the educator panel’s recommendation in 2014 was a mistake.  Which seems like an acknowledgement that the exams in 2014 and 2015 were scored improperly.

Regardless of how this is framed, it’s clear that the students who took the exam in 2014 and 2015 have been treated quite unfairly.  Had their old scores on comparable exams been scaled using this year’s conversion chart, their official grades would be higher, perhaps substantially so.

This complaint seems to have been anticipated.  Here’s another excerpt from the same memo:

The Workgroup recommended the maintenance for the June 2016 administration of the Regents Examination in Algebra I (Common Core) and that the resulting adjustment be applied to this and future administrations only.

In other words, we’ll be seeing this more generous conversion chart for this and all future exams, but if you are a student who took the exam in 2014 and 2015, or a teacher or a school who was evaluated based on those exams, you’re out of luck.

Testing is often presented as an objective means to evaluate student learning and teacher and school performance, but episodes like this clearly demonstrate how subjective it can be.  This year, the state simply decided that students would get higher Common Core Algebra Regents scores, and so they did, even though their actual performance probably didn’t change at all.  Maybe two years ago, the state simply decided that students should get lower scores, and so those lower mastery rates from 2014 and 2015 might not actually reflect lower student performance.

As a final remark, it’s hard not to think about the word accountability in this situation.  For the past decade, “more accountability” has been a common refrain in education, as in, more accountability for teachers, for schools, and for districts.  If we’ve been improperly scoring a high-stakes exam for the past two years, and tens of thousands of students and teachers have been affected, will anyone be held accountable for that?


Scratch@MIT Conference, 2016

I’m excited to be participating in this summer’s Scratch@MIT conference.

The conference, held at the MIT Media Lab, brings together educators, researchers, developers, and other members of the Scratch community to share how they use Scratch, the free, block-based, web-based programming environment, in and out of classrooms.  The theme of this year’s conference is Many Paths, Many Styles, which aims to highlight the value of diversity in creative learning experiences.

I’ll be running a workshop on Mathematical Simulation in Scratch, which will introduce participants to some of the ways I’ve been using Scratch in my math classes.  I’m looking forward to sharing, and learning!  And I’m grateful to Math for America, whose partial support has made it possible for me to attend.

The 2016 Scratch@MIT conference runs from August 4th through 6th.  You can find more information here.
