College of Education and Professional Development
Content Validity Flowchart
Content Validity Introduction
The College of Education and Professional Development (COEPD) at Marshall University has established a content validity procedure for all Educator Preparation Provider (EPP)-created assessments, including all key assessments, performance tasks, clinical evaluations, and national board-certified exams. The procedure was adopted by the EPP to evaluate its own assessments and for the Council for the Accreditation of Educator Preparation (CAEP) site teams to review evidence in self-study submissions. The content validity procedures will be used by both the initial licensure-level and advanced-level programs and follow the guidelines set forth in the CAEP Evaluation Framework for EPP-Created Assessments to design, pilot, and judge the adequacy of the assessments created by the EPP.
The purpose of the content validity procedure is to provide guidance for collection of evidence and to document adequate technical quality of assessment instruments and rubrics used to evaluate candidates in the COEPD.
CAEP Defined Assessments
CAEP uses the term “assessments” to cover content tests, observations, projects or assignments, and surveys – all of which are used with candidates. Surveys are often used to gather evidence on aspects of candidate preparation and candidate perceptions about their own readiness to teach. Surveys are also useful to measure the satisfaction of graduates or employers with preparation and the perceptions of clinical faculty about the readiness of EPP completers.
Assessments and rubrics are used by faculty to evaluate candidates and provide them with feedback on their performance. Assessments and rubrics should address relevant and meaningful attributes of candidate knowledge, performance, and dispositions, aligned with CAEP standards. Assessments that comprise evidence offered in accreditation self-study reports will be used by an EPP to consistently examine candidates at various points from admission through completion. These are assessments that all candidates are expected to complete as they pass from one stage of preparation to the next, or that are used to monitor progress of candidates’ developing proficiencies during one or more stages of preparation.
COEPD Educator Preparation Provider (EPP) Defined Assessments
The definition of assessment adopted by the EPP includes three major processes: data collection from a comprehensive and integrated set of assessments, analysis of data for forming judgments, and use of analysis in making decisions. Based on these three processes, assessment is operationally defined as a process in which data/information are collected, summarized, and analyzed as a basis for forming judgments. Judgments then form the basis for making decisions regarding continuous improvement in our programs.
EPP Five-Year Review Cycle
The EPP has established a consistent process to review all EPP-created assessments/rubrics on a five-year cycle whenever possible.
Content Validity Procedure
- Complete the COEPD Rubric Review Form for each EPP-created assessment/rubric. The COEPD Rubric Review Form is available in the COEPD Resources Microsoft Team.
- Q Methodology, a card-sort technique designed to enable the study of subjectivity (views, opinions, beliefs, values, etc.), will be used to identify the important components of an assessment/rubric. If the Q-sort is to be conducted electronically, Qualtrics is a useful tool; its Pick, Group, and Rank question type supports a Q-sort.
- Identify a Q-Sort Group that includes a mixture of COEPD faculty members, students, and external experts (classroom teachers, supervisors, etc.) to conduct the Q-sort.
- Congruence: Components on which at least 80% of sorters agree (i.e., components that reach 80% congruence) should be added, per the Content Validity Index (CVI).
- Identify a Panel of Experts and Credentials: The review panel should include a mixture of COEPD faculty and external experts, with no fewer than 15 people per panel. Minimal credentials for each expert should be established by consensus among program faculty, and those credentials should bear up to reasonable external scrutiny.
- Create an Assessment Packet for Each Panel Member: The packet should include:
- A letter explaining the purpose of the study, the reason the expert was selected, a description of the measure and its scoring, and explanation of the response form.
- A copy of the assessment instructions provided to the candidates.
- A copy of the rubric used to evaluate the assessment.
- The response form aligned with the assessment/rubric for the panel members to rate each item.
- Rubric Response Form: Panel members should complete a Rubric Response Form for each EPP-created assessment/rubric, rating the items that appear on the rubric. Program faculty should work collaboratively to develop the response form required for each rubric used in the program to officially evaluate candidate performance. The Rubric Response Form is available in the COEPD Resources Microsoft Team.
- For each item, the overarching construct that the item purports to measure should be identified and operationally defined.
- The item should be written as it appears on the assessment.
- Panel members should rate the item’s level of representativeness in measuring the aligned overarching construct on a scale of 1-4, with 4 being the most representative. Space should be provided for experts to comment on the item or suggest revisions.
- Experts should rate the importance of the item in measuring the aligned overarching construct, on a scale of 1-4, with 4 being the most essential. Space should be provided for experts to comment on the item or suggest revisions.
- Experts should rate the item’s level of clarity on a scale of 1-4, with 4 being the most clear and 1 being not clear. Space should be provided for experts to comment on the item or suggest revisions.
- Revise and Pilot: Revise the assessment/rubric based on panel feedback, pilot the revised instrument, and ensure that all rubrics are calibrated.
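The 80% congruence threshold and the 1–4 panel ratings above can be operationalized with a simple item-level Content Validity Index (I-CVI) calculation: the proportion of experts rating an item 3 or 4, with the mean across items giving a scale-level average (S-CVI/Ave). The sketch below is illustrative only; the function names, item labels, and rating data are hypothetical and not part of the COEPD procedure.

```python
# Illustrative sketch (hypothetical data): computing item-level content
# validity (I-CVI) from panel ratings on a 1-4 scale. Ratings of 3 or 4
# count toward congruence; an I-CVI of at least 0.80 corresponds to the
# 80% threshold described in the procedure.

def item_cvi(ratings):
    """Proportion of experts who rated the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi_average(items):
    """S-CVI/Ave: the mean of the item-level CVIs."""
    cvis = [item_cvi(r) for r in items.values()]
    return sum(cvis) / len(cvis)

# Hypothetical ratings from a 15-member panel on three rubric items
panel_ratings = {
    "item_1_representativeness": [4, 4, 3, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4],
    "item_2_importance":         [3, 4, 4, 2, 4, 3, 4, 4, 3, 4, 2, 4, 3, 4, 4],
    "item_3_clarity":            [2, 3, 2, 3, 2, 4, 3, 2, 3, 2, 3, 2, 3, 2, 3],
}

for item, ratings in panel_ratings.items():
    cvi = item_cvi(ratings)
    flag = "retain" if cvi >= 0.80 else "review/revise"
    print(f"{item}: I-CVI = {cvi:.2f} ({flag})")

print(f"S-CVI/Ave = {scale_cvi_average(panel_ratings):.2f}")
```

In this hypothetical run, the third item falls below the 0.80 threshold and would be flagged for revision using the experts' written comments, consistent with the review-and-revise step above.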