Content Validity & Reliability Procedure

CAEP Standard Alignment: R5/RA5: Quality Assurance System and Continuous Improvement
CAEP Standard Component Alignment: R5.2/RA5.2: Data Quality

CAEP Criteria for Evaluation of EPP-Created Assessments and Surveys (2022)
Before you proceed with the content validity and reliability procedures, please review the CAEP Criteria for Evaluation of EPP-Created Assessments and Surveys.
CAEP Criteria for Evaluation of EPP-Created Surveys (2022)
Please also review the CAEP Criteria for Evaluation of EPP-Created Surveys.

Introduction

The College of Education and Professional Development (COEPD) at Marshall University has established a content validity procedure for all Educator Preparation Provider (EPP)-created assessments and surveys, including key assessments, performance tasks, clinical evaluations, and national board certification exams. The EPP adopted the procedure in Spring 2022 to evaluate its assessments. The content validity and reliability procedures are used by both initial- and advanced-level programs. The procedures follow the guidelines outlined in the CAEP Evaluation Framework for EPP-Created Assessments for designing, piloting, and judging the adequacy of EPP-created assessments.

The purpose of the content validity procedure is to provide guidance for collecting evidence and to document the adequate technical quality of assessment instruments and rubrics used to evaluate candidates in the COEPD.

CAEP Defined Assessments

CAEP uses the term “assessments” to cover content tests, observations, projects or assignments, and surveys – all of which are used with candidates.  Surveys are often used to gather evidence on candidate preparation and candidate perceptions about their readiness to teach.  Surveys are also helpful to measure the satisfaction of graduates or employers with preparation and the perceptions of clinical faculty about the preparedness of EPP completers.

Assessments and rubrics are used by faculty to evaluate candidates and provide them with feedback on their performance. Assessments and rubrics should address relevant and meaningful candidate knowledge, performance, and dispositions, aligned with CAEP standards. The assessments that constitute the evidence offered in accreditation self-study reports are used to examine candidates consistently at various points from admission through completion. These are assessments that all candidates are expected to complete as they pass from one stage of preparation to the next, or that are used to monitor candidates’ developing proficiencies during one or more stages of preparation.

EPP-Defined Assessment

The definition of assessment adopted by the EPP includes three significant processes: data collection from a comprehensive and integrated set of assessments, analysis of data for forming judgments, and use of analysis in making decisions.  Based on these three processes, assessment is operationally defined as a process in which data/information is collected, summarized, and analyzed as a basis for forming judgments.  Judgments then form the basis for making decisions regarding continuous improvement in our programs.

EPP Five-Year Review Cycle

The EPP has established a consistent process for reviewing all EPP-created assessments and rubrics on a five-year cycle whenever possible.

Purpose

This policy establishes formal procedures for ensuring the content validity of all EPP-created assessments within the COEPD. Content validity ensures that assessments accurately measure intended constructs, align with professional standards, reflect P–12 expectations, and support continuous improvement.

Scope

These procedures apply to:

  • Key assessments (initial- and advanced-level programs)
  • Disposition instruments
  • Clinical evaluation tools (when modified by the EPP)
  • Survey instruments (completer, employer, stakeholder)
  • Substantively revised rubrics

Governing Framework

All assessments must demonstrate alignment to applicable professional standards, such as CAEP, InTASC, WVPTS, or Specialized Professional Association (SPA) standards.

Conditions Requiring Content Validation

  • Development of a new key assessment
  • Substantive rubric revision
  • Updates to applicable standards
  • Data trends suggesting construct misalignment
  • Stakeholder feedback indicating measurement concerns
  • Routine 3–5 year revalidation cycle

Content Validity Procedures

Step One: Define the Construct

Program faculty define the construct, identify transition point alignment, and document operational definitions.

Step Two: Standards Alignment Matrix

Develop a detailed alignment matrix mapping each rubric element to applicable professional standards.
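
For illustration only, an excerpt from such a matrix might look like the following. The rubric elements shown are hypothetical placeholders, and the standard citations are examples to be replaced with the program's actual applicable standards.

  Rubric Element                            InTASC    CAEP     SPA/WVPTS
  Plans standards-based instruction         7         R1.3     (program-specific)
  Uses assessment data to adjust teaching   6         R1.3     (program-specific)
  Demonstrates professional ethics          9         R1.4     (program-specific)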

Step Three: Expert Panel Selection

The program coordinator or responsible individual coordinates a validation panel including:

  • Program faculty (2–3)
  • Clinical partner representative
  • External stakeholder or employer
  • Current program candidate
  • Program completer
  • Content specialist (advanced programs, if applicable)

The program coordinator or designated responsible individual should actively engage all members of the advisory board, when applicable, to ensure broad representation, shared expertise, and comprehensive program input.

Step Four: Rating Process

Panelists independently rate each rubric element using a 3-point relevance scale:

  • 3 = Essential
  • 2 = Useful but Not Essential
  • 1 = Not Important

Panelists also provide qualitative feedback regarding clarity, alignment, and representativeness.

Step Five: Content Validity Index (CVI) Calculation

The Content Validity Index (CVI) is calculated from the proportion of panelists rating each item as Essential (3).

Item-Level CVI (I-CVI) = Number of ratings of 3 ÷ Total number of raters

Scale-Level CVI (S-CVI) = Average of all I-CVI values across rubric elements

COEPD Benchmarks:
  • I-CVI ≥ .78 required for retention
  • S-CVI ≥ .80 required for instrument approval
  • Items below threshold must be revised and re-evaluated
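
As a minimal illustration of the arithmetic, the Python sketch below computes I-CVI and S-CVI values and applies the COEPD benchmarks. The rubric element names and panelist ratings are hypothetical, invented only to demonstrate the calculation.

    # Illustration of the COEPD CVI calculation with six hypothetical panelists.
    ratings = {
        "Element A": [3, 3, 3, 3, 2, 3],  # relevance ratings on the 3-point scale
        "Element B": [3, 2, 3, 3, 3, 3],
        "Element C": [3, 2, 2, 3, 2, 3],
    }

    # I-CVI: proportion of panelists rating the element Essential (3).
    i_cvi = {element: sum(r == 3 for r in scores) / len(scores)
             for element, scores in ratings.items()}

    # S-CVI: average of the I-CVI values across all rubric elements.
    s_cvi = sum(i_cvi.values()) / len(i_cvi)

    for element, value in i_cvi.items():
        status = "retain" if value >= 0.78 else "revise and re-evaluate"
        print(f"{element}: I-CVI = {value:.2f} -> {status}")
    print(f"S-CVI = {s_cvi:.2f} -> " + ("approve" if s_cvi >= 0.80 else "revise"))

In this hypothetical example, Element C (I-CVI = .50) falls below the retention threshold and would be revised and re-rated, and the instrument as a whole (S-CVI = .72) would not yet meet the approval benchmark.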

Step Six: Revision and Recalibration

Items not meeting benchmark thresholds are revised for clarity, alignment, and measurability. All revisions are documented in a formal revision log.

Step Seven: Governance Review

Validation findings are presented to the EPPAC or a content-specific advisory board for affirmation of professional relevance and workforce alignment.

Step Eight: Implementation and Rater Training

Prior to implementation, rater calibration sessions are conducted. High-stakes assessments require interrater reliability evidence (≥ 80% agreement or an intraclass correlation coefficient (ICC) ≥ .70).
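
A minimal sketch of the agreement check appears below, computing exact percent agreement between two raters (one common way to operationalize the 80% benchmark). The scores are hypothetical, and ICC estimates would ordinarily be produced with standard statistical software rather than by hand.

    # Illustration of exact percent agreement between two raters scoring the
    # same ten candidate submissions. All scores are hypothetical.
    rater_a = [3, 2, 3, 3, 1, 2, 3, 2, 3, 3]
    rater_b = [3, 2, 3, 2, 1, 2, 3, 3, 3, 3]

    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    agreement = matches / len(rater_a)

    print(f"Exact agreement: {agreement:.0%}")  # 80% in this example
    print("Meets the 80% benchmark" if agreement >= 0.80 else "Additional calibration needed")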

Documentation and Archiving

The program coordinator or designated responsible individual must clearly document each phase of the content validity process, including rater selection, materials provided, rating results, calculations, and any resulting revisions. Upon completion, all documentation must be submitted to the COEPD Director of Assessment for review and recordkeeping.

Continuous Monitoring

Annual data reviews examine distribution patterns, ceiling effects, construct overlap, and stakeholder feedback. Identified concerns trigger revalidation procedures.
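
As one hypothetical illustration of such a review, the sketch below flags rubric elements where scores cluster at the maximum, a possible ceiling effect. The score data, the 3-point maximum, and the 90% flag threshold are assumed example values, not COEPD requirements.

    # Annual-review screen for ceiling effects: flag any rubric element where
    # nearly all candidates earn the maximum score. Data are hypothetical.
    MAX_SCORE = 3          # assumed top of the rubric scale
    FLAG_THRESHOLD = 0.90  # assumed example flag point, not a policy value

    scores_by_element = {
        "Element A": [3, 3, 3, 3, 3, 2, 3, 3, 3, 3],
        "Element B": [2, 3, 2, 2, 3, 1, 2, 3, 2, 2],
    }

    for element, scores in scores_by_element.items():
        share_at_max = sum(s == MAX_SCORE for s in scores) / len(scores)
        if share_at_max >= FLAG_THRESHOLD:
            print(f"{element}: {share_at_max:.0%} at maximum -> review for ceiling effect")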

Roles and Responsibilities

The program coordinator or designated responsible individual, in coordination with the COEPD Director of Assessment, will:

  • Facilitate expert panel processes
  • Design rating instruments
  • Calculate CVI metrics
  • Ensure documentation integrity
  • Prepare accreditation evidence
  • Report validation outcomes in Annual Assessment Reports