Everything You Wanted to Know About Assessments Learning Path

This page is intended to cover all the steps involved in assessments, and to answer the most frequently asked questions.

If the passing threshold or grading method of an assessment is changed while a student is taking the assessment, it does not affect that student's attempt. Any changes made to either threshold or grading method will only come into effect for students who take the assessment in the future.

When a student satisfies the passing threshold of an assessment, the LMS will advance them to the next activity. However, students with available retakes can go back and relaunch the assessment until those retakes are exhausted. The highest score will be counted toward their overall grade.
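The "highest score counts" rule can be sketched in a few lines. This is an illustrative sketch only; the function name and data shape are assumptions, not Imagine Edgenuity's actual API.

```python
# Hypothetical sketch: the graded attempt is the highest score among retakes.
def graded_score(attempt_scores):
    """Return the score counted toward the overall grade (the highest attempt)."""
    return max(attempt_scores)

# A student who scored 62, then 78, then 71 is graded on the 78.
print(graded_score([62, 78, 71]))  # -> 78
```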

Frequently Asked Questions:
What is Automatic Progression and when is it used?

Automatic Progression is related to assessments and retakes. With the Automatic Progression option enabled, the system will accept the highest earned score and allow the student to progress to the next activity. Because the student is never stopped in the course after using all retake attempts, the retake alert on the Dashboard will not appear for courses with this feature enabled.

Typically, when a student exhausts all retake attempts, the student cannot progress until an instructor intervenes. The intervention is usually an optional retake, a lesson reset, or a supplemental activity. However, there are times when educators do not want students stuck after exhausting all retake attempts. This is why educators use Automatic Progression.

Automatic Progression is toggled on the Edit Options page. Once Automatic Progression is enabled, the LMS will advance the student to the next activity even if they do not meet the predefined passing thresholds on the assessments. As soon as the student exhausts all attempts (or passes an assessment), the LMS will take the highest score and record it as the graded attempt in the Gradebook.
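The progression rule above can be summarized as a small decision function. This is a hedged sketch of the behavior described in this article; all names, parameters, and thresholds are assumptions for illustration, not the product's internals.

```python
# Illustrative sketch of the Automatic Progression rule (assumed names/values).
def can_advance(attempt_scores, passing_threshold, max_attempts, auto_progression):
    """Decide whether the student advances to the next activity."""
    passed = any(score >= passing_threshold for score in attempt_scores)
    exhausted = len(attempt_scores) >= max_attempts
    if passed:
        return True              # passing always advances the student
    if exhausted:
        return auto_progression  # exhausted retakes advance only with Automatic Progression
    return False                 # retakes remain; the student may try again

# With Automatic Progression on, a student who exhausts retakes still advances:
print(can_advance([55, 60, 58], passing_threshold=70, max_attempts=3,
                  auto_progression=True))   # -> True
# With it off, the same student is stopped until a teacher intervenes:
print(can_advance([55, 60, 58], passing_threshold=70, max_attempts=3,
                  auto_progression=False))  # -> False
```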

What happens when a student runs out of retakes?

This all depends on the options a course is set up with.

With the default options, an alert will go out to the Dashboard and a teacher will need to intervene so the student can move forward. The instructor can do one of the following:

  • Allow more retakes
  • Reset an assignment to help a student review
  • Insert a supplemental activity to strengthen a weakness
  • Pass the student on to the next activity and accept the failing assessment score

If auto progression is enabled, the highest score will be recorded as the graded attempt in the Gradebook and the student will advance to the next activity. No teacher intervention will be required and no alerts will be sent out to the Dashboard.

How does the Default Review Timeout Length work?

This is the time frame (in minutes) a student is given to launch an assessment after a teacher completes a Teacher Review. This time is defined in the Edit Options page.

By default, the time is set at 0 minutes to allow students to launch an assessment any time after the instructor completes a Teacher Review. To ensure academic integrity, some teachers choose to require that the student begin an assessment right after the teacher completes the review. This prevents students from taking the assessment at home or asking for help outside of the classroom. 

Districts may choose to change this time to 15 minutes, to require the student to launch the assessment no more than 15 minutes after a teacher completes a review.
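The timeout check described above amounts to a simple time-window comparison, with 0 meaning "no limit." This is a hedged sketch under assumed names; it is not the LMS's actual implementation.

```python
# Sketch of the Default Review Timeout Length check (assumed field names).
from datetime import datetime, timedelta

def may_launch(review_completed_at, now, timeout_minutes):
    """Can the student still launch the assessment after a Teacher Review?"""
    if timeout_minutes == 0:  # 0 disables the window: launch any time
        return True
    return now <= review_completed_at + timedelta(minutes=timeout_minutes)

review_done = datetime(2024, 5, 1, 10, 0)
print(may_launch(review_done, datetime(2024, 5, 1, 10, 10), 15))  # -> True
print(may_launch(review_done, datetime(2024, 5, 1, 10, 20), 15))  # -> False
print(may_launch(review_done, datetime(2024, 5, 2, 9, 0), 0))     # -> True
```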

Instructors are able to override the predefined Default Review Timeout Length. 

Is there any way to prevent students from taking assessments at home?

Yes. The Secure Access feature allows students to work on assignments anywhere, but it bypasses all assessments until the student works from an authorized computer, typically on campus. When an assessment is bypassed, the student receives an alert and can move on in the coursework. When the student reaches an authorized computer, the system unlocks the bypassed assessments and automatically sends the student back to the first unsubmitted assessment.

Because Secure Access will bypass all assessments, courses with pretesting enabled cannot be used in this implementation model. Remember, pretesting bypasses all assignments and leaves only the assessments. If a student tries to work on a pretesting course from home, both the assignments and assessments will be bypassed, which means nearly the entire course will be bypassed.
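The Secure Access bypass rule described above can be sketched as a small availability check. Names and the activity-type strings are illustrative assumptions only.

```python
# Sketch of the Secure Access rule: off-campus, assessments are bypassed.
def activity_available(activity_type, on_authorized_computer, secure_access):
    """Is this activity available right now, or bypassed?"""
    if secure_access and activity_type == "assessment" and not on_authorized_computer:
        return False  # bypassed until the student is on an authorized computer
    return True

print(activity_available("assessment", False, True))  # -> False (bypassed at home)
print(activity_available("assignment", False, True))  # -> True  (assignments work anywhere)
print(activity_available("assessment", True, True))   # -> True  (unlocked on campus)
```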

The other option is to enable Teacher Review for all attempts. This will stop students every time they attempt to access an assessment and send an alert to the teacher on the Dashboard. This option can be turned on for quizzes, tests, and exams.

How do we know our assessments are valid and reliable?

Imagine Learning offers four different types of assessments to measure student learning:

  1. Diagnostic Assessment occurs at the beginning of each course; it assesses students' prior knowledge of the content and establishes a customized learning path through that content. Administrators can also enable pretesting, a 10-question, objective-based assessment presented to the student at the beginning of each new lesson. Students who pass a predetermined threshold move on to the next pretest; students who do not meet mastery proceed through the lesson at their own pace.
  2. Formative Assessments, embedded within a lesson, check understanding of concepts and skills as they are presented. Assignments, which follow the lesson, also serve as formative assessments. By providing corrective feedback, Imagine Edgenuity's formative assessments help students understand where gaps in knowledge exist and where additional practice or support is needed.
  3. Interim Assessments occur after students finish an Imagine Edgenuity lesson. The items for these assessments are drawn from an item bank, each aligned to a specific lesson objective. Using Webb’s Depth of Knowledge and Bloom’s Taxonomy, items are labeled based on their level of difficulty. Typically, there is a 1-2-1 ratio of easy, medium, and hard items.
  4. Summative Assessments are provided at the end of each unit and/or course to evaluate students’ overall performance.

Validity of a test is the degree to which an assessment actually measures what it claims to measure. Imagine Edgenuity measures two types of validity:

  1. Content Validity refers to the adequacy with which relevant content has been sampled and represented in the test. Each diagnostic, formative, interim, and summative assessment in Imagine Edgenuity is designed to measure content-area achievement. Items are aligned to Imagine Edgenuity's course content and represent the breadth of content described in current state and Common Core standards. All targets and distractors are reviewed by experienced classroom teachers and content specialists for alignment with Haladyna (2006) and the Smarter Balanced Assessment Consortium's (2012) standards for bias, fairness, and sensitivity. Teachers and specialists also ensure that items measure the content and objectives presented in each course. If discrepancies are found, items are revised and/or replaced.
  2. Construct Validity assesses the degree to which a test measures the theoretical construct it is designed to measure.

Reliability refers to the degree to which an assessment produces consistent scores. Imagine Edgenuity measures internal consistency reliability, the degree to which items that propose to measure the same general construct produce similar scores.

In 2011, Imagine Learning evaluated the validity and reliability of the polynomial quiz in the Algebra 1 course. The evaluation focused on 465 high school students from across the country. Results revealed:

  • Content Validity: Six content-area experts reviewed the polynomial quiz for content validity. The overall item-congruency validity was .86, indicating that 86% of expert judgments rated the items a perfect match to their objectives.
  • Construct Validity: To examine the construct validity of the polynomial quiz, quiz items were correlated to their objectives. A confirmatory factor analysis revealed that the polynomial quiz had exceptionally strong construct validity, χ²(72) = 75.23, p = .37; RMSEA = .01 (90% CI = .000 to .029). The standardized component factor correlation was .995. Typically, any correlation greater than .80 is considered substantial.
  • Internal Consistency Reliability: The internal consistency reliability coefficient for the polynomial quiz was .75, and the highest possible value is 1.0. This finding provides strong support that the polynomial quiz is reliable.
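An item-congruency index like the .86 reported above is, at its simplest, the proportion of expert-item judgments rated a perfect match. The sketch below illustrates that calculation with made-up data; it is not the study's actual dataset or method.

```python
# Illustrative item-congruency calculation (data are fabricated for the example).
def item_congruency(ratings):
    """ratings: list of booleans, True = expert judged the item a perfect match."""
    return sum(ratings) / len(ratings)

# 86 of 100 hypothetical expert-item judgments are a perfect match:
ratings = [True] * 86 + [False] * 14
print(round(item_congruency(ratings), 2))  # -> 0.86
```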

Follow the steps in this learning path to learn all about the assessments available: