# Regents Recap — June, 2017: More Trouble With Statistics

High school math courses contain more statistics than ever, which means more statistics questions on end-of-year exams.  Sometimes these questions make me wonder what test makers think we are supposed to be teaching.  Here are two examples from the June, 2017 exams.

First, number 15 from the June, 2017 Common Core Algebra exam.

This question puzzled me.  The only unambiguous answer choice is (3), which can be quickly eliminated.  The other answer choices all involve descriptors that are not clearly defined:  “evenly spread”, “skewed”, and “outlier”.

The correct answer is (4).  I agree that “79 is an outlier” is the best available answer, but it’s curious that the exam writers singled out an outlier’s effect on the standard deviation of the data set.  Every piece of data affects the standard deviation of a data set, not just outliers.
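A quick computation makes the point. This is a minimal sketch with made-up numbers (not the exam's data set): nudge any single value, outlier or not, and the standard deviation changes.

```python
import statistics

# Hypothetical data set (not from the exam question)
data = [68, 72, 75, 79, 81, 85]
base = statistics.pstdev(data)

# Perturb each value in turn; the standard deviation changes every time,
# not only when the perturbed value is an outlier.
for i in range(len(data)):
    perturbed = data.copy()
    perturbed[i] += 5
    assert statistics.pstdev(perturbed) != base
```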

From the Common Core Algebra 2 exam, here is an excerpt from number 35, a question about simulation, inference, and confidence intervals.

I can’t say I understand the vision for statistics in New York’s Algebra 2 course, but I know one thing we definitely don’t want to do is propagate dangerous misunderstandings like “A 95% confidence interval means we are 95% confident of our results”.  We must expect better from our exams.
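For contrast, here is what 95% confidence actually refers to. This is a minimal simulation sketch, with all numbers (the true proportion, sample size, seed) chosen for illustration: the 95% describes the long-run behavior of the interval-building *procedure*, not our confidence in any one set of results.

```python
import random

random.seed(0)
p_true = 0.55      # the true proportion (unknown to us in practice)
n, trials = 40, 2000

# Repeatedly draw a sample, build a 95% (Wald) interval from it, and
# record whether the interval captured the true proportion.
covered = 0
for _ in range(trials):
    p_hat = sum(random.random() < p_true for _ in range(n)) / n
    margin = 1.96 * (p_hat * (1 - p_hat) / n) ** 0.5
    if p_hat - margin <= p_true <= p_hat + margin:
        covered += 1

# Roughly 95% of the intervals capture p_true (a bit off for small n).
print(covered / trials)
```

No individual interval is "95% likely" to be right; it either contains the true value or it doesn't. The 95% is a property of the method over many repetitions.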

UPDATE: Amy Hogan (@alittlestats) has written a nice follow-up post here.

Categories: Teaching, Testing

#### Amy Hogan · August 28, 2017 at 12:51 pm

I had a comment about this. It quickly grew in length and I decided it needed to be a post over on my blog.

#### MrHonner · August 28, 2017 at 6:22 pm

Nice! I added the link above. I like how you offer some fixes.

But nothing to say about “95% confident of our results”, eh?

#### Amy Hogan · August 28, 2017 at 6:29 pm

Don’t be silly; of course I have something to say about that. Basically, in #35, they have not demonstrated an understanding of what a 95% confidence interval means. But that’s for another day…

#### l hodge · August 31, 2017 at 7:29 pm

Yikes! I have quoted additional pieces of the question below. The author really doesn’t seem to understand inference or simulations at all. On top of that, the wording is really cumbersome in places.

The dealership will launch the new procedure if 50% or more of the customers will rate it
favorably.
(The dealership cannot see into the future. It is a matter of what the dealership believes will happen based on one sample, not what will actually happen).

Each dot on the graph below represents the proportion of the customers who preferred the new check-in procedure, each of sample size 40, simulated 100 times.
(Each dot represents the proportion of customers in a simulated sample that preferred the new check-in procedure. One hundred simulated samples of forty customers were generated).

Assume the set of data is approximately normal and the dealership wants to be 95% confident of its results.
(In what do they wish to have 95% confidence? Presumably that at least 50% of customers favor the new procedure, not “its results” as you have already pointed out).

Determine an interval containing the plausible sample values for which the dealership will launch the new procedure.
(The word plausible should be removed. We can identify plausible simulated sample values. But, we can’t identify plausible actual sample values because we don’t know what % of actual customers prefer the new procedure.)

(The decision to move forward or not is based on whether or not it is “plausible” that the one real sample percentage we get came from a population where at least 50% favored the new procedure.)
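The simulation the question describes can be sketched as follows. This is a rough reconstruction, not the exam's actual data: the assumed proportion of 0.5 (the launch threshold), the seed, and the resulting interval are all illustrative.

```python
import random
import statistics

random.seed(1)
p_assumed = 0.5    # simulate a population where exactly 50% favor the procedure

# 100 simulated samples of 40 customers each; record the proportion in
# each sample that prefers the new check-in procedure (one "dot" apiece).
props = []
for _ in range(100):
    sample = [random.random() < p_assumed for _ in range(40)]
    props.append(sum(sample) / 40)

# The exam's intended interval: mean of the simulated proportions
# plus or minus two standard deviations.
mean = statistics.mean(props)
sd = statistics.pstdev(props)
low, high = mean - 2 * sd, mean + 2 * sd
print(round(low, 3), round(high, 3))
```

This produces an interval of plausible *simulated* sample proportions under the assumed population, which is exactly the distinction drawn above: the one real sample proportion is then judged against this distribution, rather than the interval telling us anything directly about "plausible actual sample values."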

#### l hodge · August 31, 2017 at 7:35 pm

Gary Rubinstein commented on a different simulation problem on his blog. That question, too, reflected a major misunderstanding of simulation and of what counts as plausible.
