# Distractor point biserial correlation


To have confidence in a test question, we assess its reliability using a point biserial correlation and Cronbach's Alpha with Deletion, among other statistics. If a question turns out to be unreliable, it helps to have additional information that might tell us why. That's where the distractor point biserial correlation comes in.

The **distractor point biserial correlation** digs deeper than the item statistics and measures the reliability of each answer choice presented to students.

How? It correlates student scores on each answer choice with their scores on the test as a whole.

The driving assumption is simple: Students who score well on the test as a whole should on average select the correct answer choice for each question. Students who struggle on the test as a whole should on average select an incorrect answer choice for each question. If an answer choice deviates from this assumption, the distractor point biserial correlation lets us know.
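The correlation described above can be sketched in a few lines. This is a minimal illustration, not EAC's actual implementation: for each answer choice, a 0/1 "selected this choice" flag per student is correlated with total test scores using the standard point biserial formula. The toy data, choice labels, and function name are all hypothetical.

```python
from statistics import mean, pstdev

def distractor_point_biserial(selected, total_scores):
    """Point biserial correlation between a binary 'selected this choice'
    indicator and students' total test scores.

    selected: list of 0/1 flags, one per student
    total_scores: list of each student's overall test score
    """
    p = mean(selected)            # proportion of students who chose this option
    q = 1 - p
    if p == 0 or p == 1:          # chosen by everyone or by no one:
        return 0.0                # the correlation is undefined, so report 0
    m1 = mean(s for s, x in zip(total_scores, selected) if x == 1)
    m0 = mean(s for s, x in zip(total_scores, selected) if x == 0)
    sd = pstdev(total_scores)     # population standard deviation of totals
    return (m1 - m0) / sd * (p * q) ** 0.5

# Toy data: six students' total scores and the choice each picked on one item.
# "A" is the keyed (correct) answer; "B" and "C" are distractors.
scores  = [95, 88, 82, 70, 60, 55]
choices = ["A", "A", "A", "B", "B", "C"]

for option in ["A", "B", "C"]:
    flags = [1 if c == option else 0 for c in choices]
    r = distractor_point_biserial(flags, scores)
    print(f"choice {option}: r = {r:+.2f}")
```

With this toy data, the correct choice "A" (picked by the high scorers) yields a strongly positive correlation, while the distractors "B" and "C" (picked by the low scorers) yield negative ones, matching the assumption stated above.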

The distractor point biserial correlation ranges from a low of **-1.0** to a high of **+1.0**.

The closer a **correct** answer choice's distractor point biserial correlation is to **+1.0**, the more reliable the answer choice is considered, because it discriminates well between students who mastered the test material and those who did not. This answer choice "works well."

By the same token, the closer an **incorrect** answer choice's distractor point biserial correlation is to **-1.0**, the more reliable the answer choice is considered, because it discriminates well between students who did not master the test material and those who did.

Since making sense of distractor point biserial correlations can be challenging, we provide a simple Distractors Key in EAC Visual Data.

EAC suggestion: Refer to the Distractors Key to assess whether each answer choice is "working" as expected (i.e., whether each answer choice discriminates between students who have mastered the test material and those who have not). Consider changing answer choices that aren't "working" as expected and also those that students don't select at all. If no student selected an answer choice, that answer choice isn't really a "distractor" after all.
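The checks in the suggestion above can be expressed as a short rule of thumb. This is a hypothetical helper, not the logic behind EAC's Distractors Key: it flags a keyed answer whose correlation is not positive, a distractor whose correlation is not negative, and a distractor that no student selected.

```python
def review_choice(option, is_correct, n_selected, r):
    """Flag an answer choice that isn't 'working' as expected.

    option: answer choice label
    is_correct: True if this is the keyed (correct) answer
    n_selected: number of students who selected this choice
    r: the choice's distractor point biserial correlation
    """
    if n_selected == 0:
        return f"choice {option}: never selected; not really a distractor"
    if is_correct and r <= 0:
        return f"choice {option}: keyed answer with r = {r:+.2f}; review the item"
    if not is_correct and r >= 0:
        return f"choice {option}: distractor with r = {r:+.2f}; attracts strong students"
    return f"choice {option}: working as expected (r = {r:+.2f})"

print(review_choice("A", True, 30, 0.45))    # keyed answer, positive r
print(review_choice("B", False, 0, 0.0))     # never selected
print(review_choice("C", False, 12, 0.21))   # distractor with positive r
```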
