# EAC Visual Data Documentation

EAC Visual Data simplifies and streamlines the process of collecting, aggregating, analyzing, and reporting student performance data from tests, rubrics, and goals in Blackboard Learn™. If our documentation doesn't provide answers you need, email support@edassess.net.

# Important Notes

Review the following notes to get the most out of EAC Visual Data.

### Blackboard roles

To see test and rubric reports in EAC Visual Data for a particular Blackboard Learn course or organization, you must be enrolled as an instructor, teaching assistant, or grader in that site.

### Normalized scores

EAC Visual Data is an **assessment tool**. It's not a Grade Center.

It is designed to provide information that improves the quality of exams and exam questions, and of rubrics and rubric rows, as well as to analyze student performance on tests and rubrics across courses and over time.

To do this, EAC Visual Data **normalizes scores** on a scale from **0.0 (low)** to **1.0 (high)**: both average scores and individual student scores for every test question and every rubric row are presented on this scale.
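The arithmetic behind normalization is simple. As a rough sketch (not EAC's internal code), a raw score is divided by the points possible:

```python
def normalize(points_earned: float, points_possible: float) -> float:
    """Scale a raw score onto EAC's 0.0 (low) to 1.0 (high) range."""
    return points_earned / points_possible
```

For example, a rubric row scored 3 out of 4 points appears in EAC Visual Data as 0.75.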

### Combine Test or Rubric Reports

EAC Visual Data can combine Tests or Rubrics scored in different Blackboard Learn courses and organizations into one aggregate report, as long as the Tests or Rubrics in each course are the same, including sharing identical names.

To create a combined report, click the checkbox next to the name of any Test or Rubric on the dashboard list, and EAC will automatically select others on the list that are identical. Next, click the arrow button to generate the aggregate report.

### Randomized questions

EAC Visual Data generates test statistics for a variety of **randomize** options in Blackboard Learn, including random blocks of questions, randomizing the order in which questions are presented to students, and randomizing the order in which answer choices are presented to students for each question.

One drawback to randomizing question order is that EAC Visual Data, while able to generate test statistics for the exam and each exam question, will not be able to put statistical reports back into the original question order that an instructor may expect to see.

### Searching in EAC Visual Data

There are four special characters available in EAC Visual Data to enhance searches for tests and rubrics in Blackboard course and organization sites:

- **^** means **AND** (e.g., **Exam^Fall** searches for Tests with names that contain the words "Exam" and "Fall")
- **,** means **OR** (e.g., **Midterm,Final** searches for Tests with names that contain the words "Midterm" or "Final")
- **$** means **STARTS WITH** (e.g., **$Comm** searches for Rubrics with names that begin with the letters "Comm")
- **||** (two pipe characters in a row) means search for Tests or Rubrics scored in particular Blackboard Learn course and organization sites that contain the fragment you place after the double-pipe characters (e.g., **||NURS** searches for Tests in all courses with Course Names or IDs that contain "NURS")
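To make the operator semantics concrete, here is a hypothetical re-implementation of the matching rules in Python. The function name and the case-insensitive matching are assumptions for illustration only; EAC's actual search engine may behave differently (for example, with respect to case or combining operators).

```python
def matches(query: str, name: str, course: str = "") -> bool:
    """Illustrative sketch of EAC search operators; not EAC's actual engine.

    query  -- the search string typed into the dashboard
    name   -- a Test or Rubric name
    course -- the Course Name or ID (used only by the || operator)
    """
    if query.startswith("||"):            # || searches course names/IDs instead
        return query[2:].lower() in course.lower()
    if query.startswith("$"):             # $ means STARTS WITH
        return name.lower().startswith(query[1:].lower())
    if "^" in query:                      # ^ means AND: every fragment must appear
        return all(frag.lower() in name.lower() for frag in query.split("^"))
    if "," in query:                      # , means OR: any fragment may appear
        return any(frag.lower() in name.lower() for frag in query.split(","))
    return query.lower() in name.lower()  # plain substring search
```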

# Quick Start

First review important notes and then follow these steps to start quickly with EAC Visual Data.

- **Log in** to Blackboard Learn with your instructor credentials.
- For Original courses, click the **EAC Visual Data link** in Course Tools. For Ultra courses, click the **EAC Visual Data tile** in Tools from Ultra's base navigation.
- If you don't see an EAC Visual Data link in Course Tools in your Original course, make it available via **Control Panel > Customization > Tool Availability**.
- The EAC Visual Data dashboard opens in a new browser tab. Adjust **response dates** as needed, then click **Go** to find tests students took or rubrics that were scored in your Blackboard courses.
- When you find a test or rubric you'd like to review, click its name to get reliability and other statistics.
- A test view or rubric view opens in a new browser tab. Click the Word or Excel icons at the top of the page to download statistics.
- Consult the glossary for definitions of and ranges for the primary statistics used in EAC Visual Data.

# Test View

When you click the name of any test on the EAC Visual Data dashboard, a Test View opens.

A Test View contains detailed information about the selected test, and it consists of data tables that you can navigate on-screen or download in a variety of file formats.

#### Courses Included

**Courses Included** provides context, including information about the courses in which the assessment was delivered and how many students participated.

#### Summary Statistics

**Summary Statistics** provides overall information, including student High Scores, Low Scores, Mean Scores, and overall measures of reliability (KR20s).

#### Item Analysis

**Item Analysis** provides statistics for each test question, including a measure of difficulty (P-Value) and measures of reliability (Point Biserial Correlation, Cronbach Alpha with Deletion, and a Discrimination Index).

#### Distractors

**Distractors** provides more details on each test question, including a frequency distribution among correct and incorrect answers and, in limited circumstances, a measure of distractor reliability (Distractor Point Biserial Correlation).

#### Student Questions

**Student Questions** provides a matrix of raw scores, including each student's name and attempt date, how each student compared to the Mean Score (Diff), and how each student performed on each test question.

#### Student Landscape

**Student Landscape** provides more details on each student's interaction with each test question.

#### Student Coaching

**Student Coaching** provides details on each student's incorrect answers.

# Rubric View

When you click the name of any rubric on the EAC Visual Data dashboard, a Rubric View opens.

A Rubric View contains detailed information about the selected rubric, and it consists of data tables that you can navigate on-screen or download in a variety of file formats.

#### Courses Included

**Courses Included** provides context, including information about the courses in which the rubric was scored and how many students were evaluated.

#### Summary Statistics

**Summary Statistics** provides overall information, including student High Scores, Low Scores, Mean Scores, and overall measures of reliability (KR20s).

#### Row Analysis

**Row Analysis** provides statistics for each rubric row, including average scores and standard deviations as well as measures of reliability (Point Biserial Correlation and Cronbach Alpha with Deletion).

#### Student Rows

**Student Rows** provides a matrix of raw scores, including each student's name and evaluation date; how each student compared to the Mean Score (Diff); and how each student performed on each rubric row.

#### Details

**Details** provides more details on each rubric row, including a frequency distribution across Levels of Achievement.

#### Student Landscape

**Student Landscape** provides more details on each student's rubric evaluation.

# Glossary

This glossary provides definitions, ranges, and suggestions for the primary statistics used in EAC Visual Data.

#### Actual item scores

Actual item scores equals Possible item scores minus the total number of unanswered (i.e., skipped) questions.

In the example below, there are 139 actual item scores, which equals 140 possible item scores minus 1 skipped question.

#### Cronbach Alpha with Deletion

Cronbach Alpha with Deletion helps assess test question reliability.

How? It asks whether the exam as a whole is more reliable if you simply delete the question under review. The Cronbach Alpha with Deletion re-runs the exam's KR(20) without the question under review. If the exam as a whole is more reliable without it, there's probably something wrong with that question.

The Cronbach Alpha with Deletion generally ranges between 0.0 and +1.0, but it can fall below 0.0 with smaller sample sizes.

More important than its range is how the Cronbach Alpha with Deletion compares to the exam's KR(20). If a question's Cronbach Alpha with Deletion is greater than the exam's KR(20), it means the exam as a whole is more reliable without it. For example, take a look at Question No. 2 in the picture below. If the exam's KR(20) is 0.64, then we know Question No. 2 is "suspect" because its Cronbach Alpha with Deletion of 0.78 is greater than 0.64.

**EAC suggestion:** Look out for questions with a Cronbach Alpha with Deletion greater than the exam's KR(20). These questions decrease overall test reliability and should be considered suspect.
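The idea can be sketched in a few lines of Python. This is an illustrative implementation (population variances, complete 0/1 data; function names are hypothetical), not EAC's internal code. For dichotomous items, Cronbach's alpha on the full score matrix equals the KR(20).

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a score matrix: rows = students, columns = items."""
    n, k = len(items), len(items[0])
    col_vars = []
    for j in range(k):
        col = [row[j] for row in items]
        m = sum(col) / n
        col_vars.append(sum((x - m) ** 2 for x in col) / n)
    totals = [sum(row) for row in items]
    m_t = sum(totals) / n
    var_t = sum((t - m_t) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum(col_vars) / var_t)

def alpha_with_deletion(items, drop):
    """Re-run alpha with one question removed; compare against the full exam's KR(20)."""
    reduced = [[v for j, v in enumerate(row) if j != drop] for row in items]
    return cronbach_alpha(reduced)
```

If `alpha_with_deletion(scores, j)` exceeds `cronbach_alpha(scores)`, question `j` is suspect.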

#### Distractor point biserial correlation

To have confidence in a test question, we assess its reliability using a point biserial correlation and a Cronbach Alpha with Deletion among other statistics. If it turns out to be an unreliable question, then it would help to have additional information that might tell us why. That's where the distractor point biserial correlation comes in.

The **distractor point biserial correlation** digs deeper than the item statistics and measures the reliability of each answer choice presented to students.

How? It correlates student scores on each answer choice with their scores on the test as a whole.

The driving assumption is simple: Students who score well on the test as a whole should on average select the correct answer choice for each question. Students who struggle on the test as a whole should on average select an incorrect answer choice for each question. If an answer choice deviates from this assumption, the distractor point biserial correlation lets us know.

The distractor point biserial correlation ranges from a low of **-1.0** to a high of **+1.0**.

The closer a **correct** answer choice's distractor point biserial correlation is to **+1.0** the more reliable the answer choice is considered because it discriminates well among students who mastered the test material and those who did not. This answer choice "works well."

By the same token, the closer an **incorrect** answer choice's distractor point biserial correlation is to **-1.0** the more reliable the answer choice is considered because it discriminates well among students who did not master the test material and those who did.

**EAC suggestion:** Consider changing answer choices that aren't "working" as expected and also those that students don't select at all. If no student selected an answer choice, that answer choice isn't really a "distractor" after all.
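The calculation can be sketched as a Pearson correlation between a "picked this choice" indicator and total test scores. This is an illustrative sketch (the function name and inputs are hypothetical), not EAC's formula:

```python
from statistics import mean, pstdev

def distractor_pbc(selected, totals, choice):
    """Correlate 'student picked this answer choice' (1/0) with total test score.

    selected -- the answer choice each student picked, e.g. ["A", "B", ...]
    totals   -- each student's total test score
    choice   -- the answer choice under review
    """
    indicator = [1 if s == choice else 0 for s in selected]
    n = len(indicator)
    m_i, m_t = mean(indicator), mean(totals)
    cov = sum((i - m_i) * (t - m_t) for i, t in zip(indicator, totals)) / n
    sd_i, sd_t = pstdev(indicator), pstdev(totals)
    if sd_i == 0:          # nobody (or everybody) picked this choice: undefined
        return None
    return cov / (sd_i * sd_t)
```

A correct choice should correlate positively with total scores; an incorrect choice should correlate negatively; a never-selected choice has no correlation at all.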

#### Highest score

The highest number of correct answers on any one test submission.

The highest score may not correspond to Grade Center "points" unless each question is equal to 1 point.

#### KR(20)

The KR(20), or Kuder-Richardson Formula 20, measures the overall reliability (internal consistency) of an exam. It generally ranges from 0.0 to +1.0, and the closer it is to +1.0, the better the exam discriminates between students who mastered the material and those who did not. A KR(20) of 0.0 means the exam questions didn't discriminate at all.

Imagine a test where all 20 students answered all 40 questions correctly. The test didn't discriminate among any of them, and its KR(20) of 0.0 makes perfect sense.

**EAC suggestion:** The interpretation of the KR(20) depends on the purpose of the test. Most high stakes exams are intended to distinguish those students who have mastered the material from those who have not. For these, shoot for a KR(20) of +0.50 or higher. A KR(20) of less than +0.30 is considered poor no matter the sample size. If the purpose of the exam is to ensure that ALL students have mastered essential skills or concepts or the test is a "confidence builder" with intentionally easy questions, look for a KR(20) close to 0.00.
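For reference, the KR(20) can be computed from a matrix of 0/1 item scores (rows = students, columns = questions). This is an illustrative sketch using population variance, not EAC's internal code; zero total-score variance, as in the all-correct example above, is treated here as "no discrimination":

```python
def kr20(item_scores):
    """Kuder-Richardson Formula 20 for 0/1 item scores.
    rows = students, columns = questions."""
    n = len(item_scores)              # number of students
    k = len(item_scores[0])           # number of questions
    totals = [sum(row) for row in item_scores]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    if var_t == 0:                    # everyone scored the same: no discrimination
        return 0.0
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_scores) / n   # the item's p-Value
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)
```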

#### Lowest score

The lowest number of correct answers on any one test submission.

The lowest score may not correspond to Grade Center "points" unless each question is equal to 1 point.

#### Questions

The total number of questions on the exam.

**EAC suggestion:** Shoot for at least 40 questions to get "good" reliability statistics.

#### p-Value

In the branch of statistics dealing with test reliability, unlike in other branches of statistics, the p-Value is a simple measure of question difficulty (not a significance probability).

The p-Value ranges from a low of 0.0 to a high of +1.0. The closer the p-Value is to 0.0 the more difficult the question.

For example, a p-Value of 0.0 means that no student answered the question correctly and therefore it's a really hard question. If an item's p-Value is unexpectedly close to 0.0, be sure to check the answer key.

The closer the p-Value is to +1.0 the easier the question. For example, a p-Value of +1.0 means that every student answered the question correctly and therefore it's a really easy question.

**EAC suggestion:** On high stakes exams, shoot for p-Values between +0.50 and +0.85 for most test questions. A p-Value less than +0.50 means the question may be too difficult or you should double-check the answer key. A p-Value greater than +0.85 means the question may be too easy or most students have mastered that concept.
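As a sketch, the p-Value is just the fraction of students who answered the question correctly:

```python
def item_p_value(correct_flags):
    """Fraction of students who answered correctly (1 = correct, 0 = incorrect)."""
    return sum(correct_flags) / len(correct_flags)
```

For example, a question answered correctly by 3 of 4 students has a p-Value of 0.75, comfortably inside the suggested +0.50 to +0.85 band.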

#### Point biserial correlation

The point biserial correlation measures item reliability.

How? It correlates student scores on one particular question with their scores on the test as a whole.

The driving assumption is simple: Students who score well on the test as a whole should on average score well on the question under review. Students who struggle on the test as a whole should on average struggle on the question under review. If a question deviates from this assumption (aka, a "suspect" question), the point biserial correlation lets us know.

The point biserial correlation ranges from a low of -1.0 to a high of +1.0. The closer the point biserial correlation is to +1.0 the more reliable the question is considered because it discriminates well among students who mastered the test material and those who did not.

A point biserial correlation of 0.0 means the question didn't discriminate at all. Imagine a test where all 20 students answered Question 1 correctly. Since Question 1 doesn't discriminate among any of the students relative to how they performed on the rest of the test, its point biserial correlation of 0.0 makes perfect sense.

A negative point biserial correlation means that students who performed well on the test as a whole tended to miss the question under review and students who didn't perform as well on the test as a whole got it right. It's a red flag, and there are a number of possible things to check. Is the answer key correct? Is the question clearly worded? If it's multiple choice, are the choices too similar?

**EAC suggestion:** For high stakes exams intended to distinguish students who mastered the material from those who did not, shoot for questions with point biserial correlations greater than +0.30. They're considered very good items. Questions with point biserial correlations less than +0.09 are considered poor. Questions with point biserial correlations between +0.09 and +0.30 are considered acceptable to reasonably good.
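The correlation itself can be sketched in Python. This illustrative version (not EAC's internal code) correlates a 0/1 item column with total test scores; operational tools sometimes use a "corrected" variant that removes the item under review from the total.

```python
from statistics import mean, pstdev

def point_biserial(item, totals):
    """Pearson correlation between a 0/1 item column and total test scores.

    item   -- 1/0 per student for one question (correct/incorrect)
    totals -- each student's total test score
    """
    n = len(item)
    m_i, m_t = mean(item), mean(totals)
    cov = sum((x - m_i) * (t - m_t) for x, t in zip(item, totals)) / n
    return cov / (pstdev(item) * pstdev(totals))
```

When high scorers get the item right, the correlation is strongly positive; when high scorers miss it and low scorers get it right, the correlation goes negative, which is the red flag described above.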

#### Possible item scores

The product of Scored responses x Questions. In the example below, there are 140 possible item scores which equals 14 Scored Responses x 10 Questions.

#### Scored responses

The total number of submitted attempts. In the example below, there were 14 submitted attempts or scored responses for this exam.
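The three glossary counts fit together as simple arithmetic. Using the example figures from the glossary:

```python
scored_responses = 14                                  # submitted attempts
questions = 10                                         # questions on the exam
possible_item_scores = scored_responses * questions    # 14 x 10 = 140
skipped = 1                                            # unanswered questions overall
actual_item_scores = possible_item_scores - skipped    # 140 - 1 = 139
```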

# Guide for Bb Administrators

If this guide doesn't provide answers you need, email support@edassess.net.

### Building Block installs and updates

For the latest information on installing the EAC Building Block for the first time, or updating an existing Building Block, email support@edassess.net.

## Modes in EAC

EAC Visual Data can be configured for three different Modes, or user permission levels. Learn about each Mode and how to grant EAC users appropriate privileges:

#### Instructor Mode

Instructor Mode gives an EAC user the ability to retrieve all test, rubric, and goals data from Blackboard Learn courses in which that user is enrolled as an instructor or teaching assistant, and from Blackboard Learn organizations in which that user is enrolled as a leader.

Instructor Mode exists by default for all EAC users.

#### Enterprise Mode

Enterprise Mode gives an authorized EAC user the ability to retrieve all test, rubric, and goals data from all Blackboard Learn courses and organizations.

How do I give Enterprise Mode privileges to authorized EAC users?

- If it doesn't exist, create a special permission course in Blackboard with Course Name = **EACDMEnterprise** (case sensitive).
- Enroll any user account with **Blackboard System Administrator** privileges as an **instructor** in the EACDMEnterprise course.
- Enroll all authorized individuals who should have Enterprise Mode privileges in EAC as **students** in the EACDMEnterprise course.

#### Administrator Mode

Administrator Mode gives an authorized EAC user the ability to retrieve all test, rubric, and goals data from those Blackboard Learn courses and organizations that are associated with specific Institutional Hierarchy Nodes.

How do I give Administrator Mode privileges to authorized EAC users?

- If it doesn't exist, create a System Role in Blackboard with Role Name = **EAC Administrator** and Role ID (critical) = **EAC_ADMIN** (all caps). You can remove ALL PRIVILEGES for this System Role. Its only purpose is to effectuate Administrator Mode in EAC Visual Data, and it therefore has no functional role in Bb Learn.
- In Institutional Hierarchy, make sure that **Courses are associated with Nodes** for which the EAC user will have Administrator Mode privileges in EAC Visual Data. For example, make sure that all Nursing courses are associated with the School of Nursing Node.
- In Institutional Hierarchy, add the Blackboard user account as an **Administrator** of the Node(s), and be sure to give that user account a System Role = **EAC Administrator** within those Nodes.