Standard 5


5.1 The EPP is committed to a Quality Assurance System (QAS) that is comprised of multiple measures to monitor candidate progress, completer achievements, and provider operational effectiveness. (QAS Appendix C).

The definition of assessment adopted by the EPP includes three major processes: data collection from a comprehensive and integrated set of assessments, analysis of data for forming judgments, and use of analysis to inform decisions. Based on these three processes, assessment is operationally defined as a process in which data/information are collected, summarized, and analyzed as a basis for forming judgments. Judgments then form the basis for making decisions regarding continuous improvement of our programs.

The QAS evolved through a process of systematic thought and work focused on assessing education candidates and their programs. Assessment of candidates and programs aligns unit requirements with institutional, state, and national standards and leads to measured decision making involving candidates, programs, and faculty (PROGRAM REVIEWS, STRATEGIC COMMITMENTS). EPP assessment instruments have been developed, used, and refined with feedback from both public school and university supervisors (EPPAC, CSLCITE). This developmental approach to assessment creates a connected, expanded system that encompasses all EPP assessments. This approach allows collected data to be viewed by various parties as input gathered for judgments and decisions regarding how educational personnel are prepared. The data are collected in a systematic, purposeful manner for use in reviews by CAEP, specialized professional associations (SPAS), Marshall University, and other agencies as needed (QAS: Appendix I – Assessment Planning Chart).

The current QAS and specific program assessment plans reflect the continuing development of the COEPD’s long-standing commitment to assessment and continuous improvement. Several criteria guided the development and implementation of the QAS. The system and its components are:

  1. Systematic and coherent, with multiple decision points;
  2. Integrated with other existing evaluation/assessment requirements;
  3. Comprehensive and reflective of the Conceptual Framework;
  4. Flexible;
  5. Aligned with applicable knowledge and skill standards;
  6. Participatory in development and implementation;
  7. Based on data from multiple sources, selected according to carefully chosen evaluation criteria;
  8. Developed from simple to complex;
  9. Committed to fairness, accuracy, consistency, and the avoidance of bias;
  10. Inclusive through stakeholder (content faculty, professional education faculty, P-12 faculty and administrators, candidates, and graduates/alumni) involvement in system development and management;
  11. Continuously supported and managed; and
  12. Formally reviewed and revised as needed on a regular basis.

The QAS reflects the mission of the EPP regarding preparation of teachers and other school professionals by assessing the preparation of these individuals and their development in programs as measured by EPP and program standards. Because the EPP provides education and related services for a society that is open, complex, demanding, and evolving, each program features distinct methods to assess candidate progress (CONCEPTUAL FRAMEWORK).

The QAS is designed to extend beyond the classroom and evaluate other elements that are important influences on teaching and learning. The QAS incorporates assessments by various institutional, state, and national entities. The assessment system is both quantitative and qualitative in focus and draws on multiple data sources and assessment strategies (SPAS). All assessment measures have been categorized as focusing on the continuous improvement of candidate performance, program effectiveness, faculty effectiveness, or EPP operations.

Assessment processes are tailored to the characteristics of the community and candidate population. As beginning post-secondary education candidates, initial candidates bring to the classroom their recent knowledge of public schooling and a desire to become professional educators. From the beginning of the initial programs, candidates study course topics in relation to educational theory and the integration of theory in the classroom (INTASC). Assessment is focused on performance assessments in courses and in clinical settings as candidates work toward licensure in various fields (PERFORMANCE TASKS). As experienced, employed professionals, candidates in advanced programs bring to the classroom a wide range of professional experiences and a focus on specific career objectives. From the point of application through program completion and into practice in the specialization, assessment processes reflect applicable standards and program goals and objectives (QAS, Appendices E and F – Transition Points Charts).

Multiple assessment techniques are used to evaluate both initial and advanced candidate performance (LEVEL I, LEVEL II, LEVEL III, WVTPA), program effectiveness (COMPLETER SATISFACTION SURVEYS INITIAL LEVEL, COMPLETER SATISFACTION ADVANCED LEVEL, EMPLOYER SATISFACTION SURVEYS INITIAL LEVEL, EMPLOYER EVALUATIONS ADVANCED LEVEL), faculty effectiveness (COURSE EVALUATIONS, FACULTY ACHIEVEMENTS), and EPP operations (OPERATIONS SURVEY). Data are collected, analyzed, and used to improve candidate performance, curricula, instruction, delivery, and operations. Improvement beyond the norm, as well as corrective action, is a desired outcome of the assessment process.

The cornerstone of the integration of EPP-level and university-level assessment is the annual Program Assessment Report developed by each program (PROGRAM REVIEWS). These reports are the result of program-level analyses based on data from candidates, the program, the COEPD, and the institution. The reports include EPP, program, and candidate key assessment data, as well as summaries of program strengths, needs, and changes/modifications made during the past year.

The EPP functions on an Annual Assessment Cycle (QAS, Appendix B). Data gathering begins in the fall. Programs complete an analysis of the academic year’s (Fall, Spring, Summer) data and prepare the PROGRAM REVIEWS. Faculty prepare their Annual Reviews using current calendar year data. EPP surveys, candidate evaluations of faculty, and completer achievement follow-up data are also collected. During the spring semester, program faculty review fall candidate data and any program data available. Candidate key assessment data (PERFORMANCE TASKS, WVTPA, PRAXIS II SCORES, PLT) and disposition data (LEVEL I, LEVEL II, LEVEL III, SELECTIVITY FACTORS INITIAL LEVEL, PROFESSIONAL DISPOSITIONS FOR ADVANCED LEVEL) are also collected. Summer data analysis includes aggregation of key assessment and dispositional data from the preceding academic year and the collection of any summer course data. Cabinet discussions are conducted in late summer, with program faculty providing data reviews. Recommendations for possible curricular changes are submitted to the Cabinet (HANDBOOK).

Candidate progress is measured at several transition points throughout the program, including admission, program progress, clinical involvement, program completion, and follow-up. These transition points are listed in Appendices E and F of the QAS document and include PRAXIS II SCORES, PLT, and PERFORMANCE TASKS. Through bimonthly meetings of the Cabinet and Program Directors, the EPP administers, coordinates, evaluates, monitors, reviews, and revises all EPP programs. The EPP ensures that all programs are conducted in a manner that is consistent with the stated mission and goals of the COEPD.

Evidence provided for CAEP Standards 1-4 demonstrates that the COEPD uses a Quality Assurance System (QAS) in which programs at both the initial and advanced levels have developed assessment processes designed to assess the applicable SPA, state, or professional standards for each program. These program-level standards are consistent with and aligned to the broader unit outcomes for candidates (CONCEPTUAL FRAMEWORK).

5.2 The QAS relies on relevant, verifiable, representative, cumulative and actionable measures, and is improving its ability to produce empirical evidence that interpretations of data are valid and consistent.

The QAS has been designed as an integrated component of the COEPD and institutional governance systems. Integration ensures multi-level review and feedback. Oversight for the QAS is the responsibility of the Assessment and Accreditation Coordinating Council (AACC). This group develops and approves assessment policies/guidelines and monitors the development and operation of the QAS across the EPP. The COEPD has created an Office of Assessment and Continuous Improvement (OACI) and appointed Assessment Directors. These Assessment Directors are responsible for coordinating and implementing the QAS. They also coordinate their work with the institutional Office of Assessment and Program Review, the AACC (co-chaired by the Assessment Directors), and the Dean’s Cabinet.

An initial assessment by the AACC of the EPP’s functions and its capacity to ensure that quality evidence was available to support continuous improvement efforts concluded that there was no systematic plan for doing so. Additionally, the AACC determined there was little organizational and personnel capacity for ensuring evidence quality. Given these factors, the AACC recommended to the Dean that “Improving the Quality of Evidence Available to Support Continuous Improvement” be a focus of our overall assessment efforts (SIP). The Dean concurred, and the decision was subsequently supported by the COEPD Cabinet and Program Directors. A survey of faculty was conducted to gauge commitment to, and the importance placed on, possible strategies. The results of the survey (SIP, pp. 11-12) were used to further develop plans and establish priorities.

The focus of the “Improving the Quality of Evidence Available to Support Continuous Improvement” initiative is clearly aligned with CAEP standards at both the initial and advanced levels. It incorporates elements of all five standards at both levels (SIP). Specific relationships are evident in the following standards and elements at both levels:

Standard 1:  Content and Pedagogical Knowledge. (1.1, 1.2, 1.3, 1.4, 1.5) (A.1.1, A.1.2)

Standard 2:  Clinical Partnerships and Providers.   (2.2, 2.3) (A.2.2)

Standard 3:  Candidate Quality, Recruitment, and Selectivity. (3.1, 3.2, 3.3, 3.4, 3.5, 3.6) (A.3.1, A.3.2, A.3.3, A.3.4)

Standard 4:  Program Impact.  (4.1, 4.2, 4.3, 4.4) (A.4.1, A.4.2)

Standard 5:  Provider Quality Assurance and Continuous Improvement. (5.1, 5.2, 5.3, 5.4, 5.5) (A.5.1, A.5.2, A.5.3, A.5.4, A.5.5)

The expected outcomes of this transition include improvements in the quality of evidence available to support decision-making for continuous improvement, an organizational structure to support the provision of quality evidence, and enhanced faculty and staff capacity to implement evidence-based decision-making. Within this framework, the EPP identified five key elements (Goal Areas) critical to the development of and transition to a culture of evidence. These key elements became the EPP-wide objectives:

  1. Goal Area: Leadership and Personnel:  Develop a leadership/personnel environment and structure that supports and encourages the transition to a “culture of evidence”.
  2. Goal Area: Training and Support:  Develop the faculty and staff capacity (knowledge and skills) needed to support the transition to a “culture of evidence”.
  3. Goal Area: Collaboration and Networking:  Develop and actively support networking and collaborative arrangements that support the development and maintenance of a “culture of evidence”.
  4. Goal Area: Organizational Support:  Develop and maintain an organizational structure necessary to support a “culture of evidence”.
  5. Goal Area: Recognition, Rewards, and Incentives:  Develop and implement a recognition, rewards, and incentive system for supporting the development and maintenance of a “culture of evidence”.

Specific strategies and interventions to be implemented along with a timeline for implementation are provided in the SIP. The following criteria guided their selection.

  • Specific strategies and/or initiatives are identified.
  • Identified strategies and/or initiatives are aligned with goals and objectives of the plan.
  • A yearly timeline is provided for meeting goals and/or objectives.
  • A plan for the evaluation and monitoring of strategies and/or interventions is included.
  • Evaluation and monitoring are linked to goals and objectives.

(SIP: Management Chart)

In an effort to provide the organizational support necessary for the transition to a “culture of evidence”, the EPP created the Quality of Evidence Work Group (QEWG). The primary purpose of the QEWG is to facilitate, support, and monitor the transition within the COEPD from a culture of compliance to a culture of evidence. Within the parameters of this purpose, the QEWG is charged with ensuring the availability of and access to evidence and with improving the quality of the evidence available to support continuous improvement within the COEPD.

Specific responsibilities assigned to the QEWG include:

  • Monitor the development and implementation of the Quality Assurance System (QAS) to ensure that it utilizes multiple measures and provides valid data;
  • Facilitate and monitor the use of data to set priorities, enhance programs, and improve capacity;
  • Facilitate and monitor the development of an infrastructure that adequately supports data collection and monitoring;
  • Facilitate and monitor stakeholder participation and feedback;
  • Facilitate and monitor the availability of sufficient quality evidence to support the meeting of CAEP and WVBOE standards;
  • Ensure that all available evidence reflects minimal Measurement Error (ME) and meets or exceeds CAEP criteria for quality;
  • Ensure that all EPP assessments that are used to support accreditation efforts meet the CAEP “sufficient” level standards for EPP-developed assessments;
  • Provide applicable faculty development/training necessary to support the development of quality evidence;
  • Ensure the availability of and access to the technical support (e.g. data processing, statistical support, data analysis assistance, etc.) necessary to ensure quality evidence;
  • Provide oversight and monitor implementation of the Selected Improvement Plan (SIP) to include provision of an Annual Report of Progress by June 30 of each year; and
  • Other responsibilities as they are identified by QEWG members, the Dean, or as they evolve/emerge from the continuous improvement process.

 

The Quality of Evidence Work Group (QEWG) has the responsibility of examining evidence across the EPP and working with faculty and program directors to increase the quality of data. This includes regularly conducting more statistically rigorous studies of the validity and reliability of assessments (RELIABILITY AND VALIDITY PROCESS, QAS: Appendix I, Assessment Planning Chart). The QEWG was conceptualized and formed in the fall of 2017 in response to challenges in maintaining and improving the assessment system. Several meetings of the group focused on training and practice in addressing the quality of evidence based on CAEP sufficiency criteria. In the spring of 2018, the group will begin addressing the quality of assessments and evidence on a rotating schedule. The intention is to review all assessments on a five-year schedule, working with faculty to improve all assessment measures for use in the next SPA/CAEP accreditation cycle.

The QEWG recommended the creation of initial and advanced level Assessment Planning Charts to better monitor the use of assessment data. The chart contains the following information for all licensure programs: name of assessment, course where administered, validity information, reliability information, location of data, contact person, and date of the last revision of the assessment. The QEWG uses the chart to facilitate the process of ensuring that all EPP assessments meet the required “sufficient” level. In beginning this process, attention was given first to the clinical documents, which are central to measuring outcomes and program success. Additional information about the process used is included in RELIABILITY AND VALIDITY PROCESS. A process for establishing validity and reliability has already been implemented. The group will work through a planned cycle to examine assessments across the EPP, consult with the faculty who developed them, and assist in the improvement of data for decision-making.

The EPP also instituted a series of CAEP-specific newsletters and assessment-specific workshops for faculty during the 2016-17 academic year (FACULTY WORKSHOPS AND NEWSLETTERS). The overarching focus for both was “Improving the Quality of the Evidence Available to Support Continuous Improvement”. Newsletters were distributed during both the fall 2016 and spring 2017 semesters, with topics ranging from the CAEP standards to timelines and schedules. Each one explored information in a summary style and provided faculty with a quick and easy review of important material. In addition, five separate workshops were conducted, focusing on a culture of evidence, technology as a cross-cutting theme, the selected improvement plan, diversity as a cross-cutting theme, and improving our assessments. The workshops were two-hour sessions detailing more specific means of addressing CAEP requirements, with opportunities for Q&A, faculty feedback, and development of working drafts.

To ensure compliance with CAEP accreditation standards and the CAEP “8 Annual Reporting Measures,” the EPP has described in Standards 1-4 the current tools used to obtain these data, as well as future plans and items outlined in the SIP. The WV Department of Education also provides EPP programs in the state with data to assist with this process. TITLE 2 reports are publicly posted on the EPP website (http://www.marshall.edu/coepd/accreditation/titleii/). Additional information is posted for stakeholders at www.marshall.edu/coepd/eppac.

5.3 The EPP regularly and systematically assesses performance against its goals and relevant standards, tracks results over time, tests innovations and the effects of selection criteria on subsequent progress and completion, and uses results to improve program elements and processes.

Multiple levels of assessment, analysis, and decision-making undergird the QAS. Beginning with the day-to-day, systematic, continuous assessment of all candidates, programs, faculty, staff, and school/community partners, this process includes assessment at the individual, program, and college levels.

Faculty members undergo multiple assessment processes, including assessment by candidates, the administration, and peer committees. Faculty members are evaluated by their students each semester (COURSE EVALUATIONS, FACULTY LOAD). These data are collected by the Office of Institutional Research and shared with the COEPD Dean and faculty for use in annual faculty reports. Program faculty engage in continuous reflection and assessment of candidate progress, program effectiveness, and individual practice. This ongoing assessment and the resulting data-based programmatic decision-making form the basis for their Annual Reviews. Annual faculty reports cover the calendar year (January 1 through December 31). The COEPD Dean reviews the annual faculty reports and provides summary and evaluative feedback to individual faculty during the spring semester. Faculty also provide a self-reflection and projected work plans for the coming year. Data are included from the following areas:

  1. Faculty scholarly accomplishments
  2. Teaching effectiveness
  3. Faculty service activity
  4. Professional development
  5. Annual goals reflecting Conceptual Framework goals and commitments

Each initial and advanced degree program also develops an annual Program Assessment Report which details program achievement for the prior year and plans for the next year and beyond. These program reports incorporate key assessment data from all applicable licensure programs within the approved degree area. The reports are reviewed at the program and college/school levels and submitted to the University Assessment Office by December 15 of each year. These reports are then reviewed by the University Assessment Committee (initial programs) or Graduate Council (advanced programs) with feedback and recommendations for improvement provided for each program. (PROGRAM REVIEWS)

The COEPD Dean works with the Assessment Directors, the Cabinet, and the AACC to synthesize the data from these annual reports and plan the August Cabinet meeting.  Held in mid-August, this discussion involves the key institutional stakeholders, and has the following objectives:

  1. Build EPP assessment capacity
  2. “Close the loop” in the EPP Annual Assessment Cycle (Appendix B)
  3. Identify focus areas for EPP continuous improvement efforts
  4. Identify any modifications/enhancements needed in the EPP QAS
  5. Share/discuss available program/EPP data
  6. Develop appropriate follow up plans

Each spring, the faculty are given an opportunity to evaluate the dean and the college processes. The Marshall University Board of Governors Policy AA-39 (http://www.marshall.edu/board/files/Policies/MUBOG%20AA-39%20Deans.pdf) requires an in-depth evaluation of the dean on a four-year cycle. This evaluation is based on a report of the effectiveness of the college, programs, completion, recruitment, fiscal viability and other measures of quality. This policy provides for input from staff, students, and faculty, culminating in an evaluation by the president at which time the future direction of the college is discussed in detail.

The COEPD’s Quality Assurance System (QAS) ensures that its teacher candidates are prepared to serve as professional teachers who create effective learning environments for meaningful learning and student engagement. Assessments address and align with the EPP CONCEPTUAL FRAMEWORK, “Critical Thinker and an Experienced Professional as Specialist,” the Interstate Teacher Assessment and Support Consortium (InTASC) standards, and SPA-specific standards for individual programs (SPAS). Candidate assessment occurs at several transition points using multiple assessments in both initial and advanced programs, including:

  1. Admission to the Program,
  2. Progress through the Program,
  3. Clinical Component (entry/exit),
  4. Program Exit/Completion, and
  5. Follow up

(PROGRAM PROGRESSIONS AND TRANSITIONS INITIAL LEVEL, QAS: APPENDIX F, TRANSITION POINTS ADVANCED PROGRAMS)

Assessments of the teacher candidate at strategic gateways or juncture points (CANDIDATE ADMISSION POLICY INITIAL LEVEL) provide evidence of the candidate’s growth and development. Each transition point uses various formative and summative assessments that collect data to determine the professional and pedagogical content knowledge of candidates and graduates. Licensure pass rates (PRAXIS II SCORES, PLT) provide evidence of the ability of the EPP’s graduates to meet national and professional standards. When benchmarks are not reached, programs have modified and aligned curriculum with these standards, hired staff, and referred candidates to test preparation workshops and resources (PRAXIS HELP WORKSHOPS, SELECTIVITY FACTORS INITIAL LEVEL).

In order to be admitted to programs, candidates must meet certain qualifications so that the unit is assured of promising and suitable candidates. As the candidate progresses through the program, certain assessments mark that progress and inform the candidate and program as to his or her growth and development. At the initial level, candidates complete similar PERFORMANCE TASKS and performance-based assessments (LEVEL I, LEVEL II, LEVEL III) with appropriately licensed public school supervisors. At the advanced level, the components of the candidate performance-based assessment system are unique to each program or licensure area (QAS: Appendix F, Transition Points Advanced Programs). Each program demonstrates changes and revisions that occur in the curriculum and/or field experiences and clinical practice along with the rationale and data analysis that created a foundation for change. Candidates are also assessed as a part of their field-based and/or clinical experiences. The NEXT SURVEY serves as a benchmark for candidate performance and comparisons on a state-wide basis. Exit assessments provide assurance that the candidate has successfully met the program requirements and is eligible for licensure (GRADUATION AND CERTIFICATION REQUIREMENTS INITIAL LEVEL). Follow-up assessments provide data regarding candidate performance on the job (EMPLOYER SATISFACTION SURVEYS INITIAL LEVEL, EMPLOYER EVALUATIONS ADVANCED LEVEL, PERSONNEL DIRECTORS).

WVBE policy was revised in 2016 to require that candidate completion include a teacher performance assessment. In anticipation of this, the EPP was asked to participate in a test project to evaluate the use of the two existing products, edTPA (administered by Pearson) and PPAT (from ETS). The EPP participated in an extended version of the original test project covering three semesters. At the conclusion of the test project, the policy as passed allows for the use of any valid assessment. A discussion with other IHEs in the state resulted in the development of the WVTPA. Marshall University faculty took a leadership role in this development and in the subsequent establishment of the RELIABILITY AND VALIDITY PROCESS.

In searching for a data collection tool, the EPP examined several options. The university wanted to make use of existing processes and encouraged the development of data collection through Qualtrics or Blackboard. After multiple attempts to use one of these approaches, it was determined that neither platform adequately supported the collection of clinical data. As a result, the EPP adopted LiveText as its data collection and analysis tool in 2014.

In an effort to implement innovative practices, the EPP is exploring a number of options. For a number of years the EPP has required a 15- to 16-week period of student teaching (INITIAL PROGRAMS WITH CLINICAL HOURS), which exceeds the 12 weeks required by the WVBE. Recent national trends toward increased clinical experiences have the state promoting a yearlong internship instead of a semester of student teaching. Most recently, this was a topic of a Professional Development Schools conference (PDS). The dean and program director are working with state leaders to develop a plan to pilot this within the next two years. The Marshall University President has proposed a STEM initiative that would provide scholarships to students planning to teach in those fields. This proposal aligns well with the yearlong internship efforts. On January 22, the approval of a grant to fully fund four scholarships for teacher candidates in the STEM fields was announced. Two more grant proposals, in collaboration with the June Harless Center and the Marshall University Foundation, have been submitted and are under consideration to plan and implement a revised approach to the clinical experiences and to enhance the STEM scholarships (GRANT).

Collaborative partnerships between the EPP and school districts in the Professional Development Schools (PDS) program provide two-way communication opportunities and the ability to share professional development with school-level constituents. Additionally, the EPP is developing further plans to gather data on an annual basis to ensure that the CAEP “8 Reporting Measures” are addressed, including measures of program impact, program outcomes, and consumer information. The EPP depends on the input of these internal and external stakeholders to maintain a quality assurance system, regularly assess the performance and impact of the EPP, and monitor these results over time.

Faculty have conducted numerous workshops, and stakeholders have provided some options for student assistance. In an effort to assist at the content exam level, discussions with CSLCITE members and other key faculty have explored how to provide assistance to candidates both in advance of testing and when testing has been unsuccessful. EPPAC discussions related to these problems have helped to shape the approach to seeking solutions. (PRAXIS HELP WORKSHOPS)

Recruitment of potential teachers can be aided by these supports and scholarship opportunities, but additional work is still needed. A revised and updated recruitment plan is in place (RECRUITMENT PLAN INITIAL LEVEL). Additionally, through the vocational office at the WVDE, we are working to assist in revising the career exploration program for high school students who are interested in teaching. Fully developed plans will include a connection with these students to promote consideration of MU as their college choice.

5.4 The EPP provides measures of completer impact that are analyzed, shared, and acted upon in decision-making related to programs, resource allocation, and future direction.

As outlined in Standard 4, evidence throughout the COEPD self-study demonstrates that candidates complete a rigorous, progressive, and comprehensive program in order to graduate and be recommended for state teacher licensure. Standard 4 describes efforts to track completer impact data (CASE STUDIES 1 YEAR OUT INITIAL LEVEL, CASE STUDIES 3 YEARS OUT INITIAL LEVEL) and outlines future plans for gathering data on completer impact on P-12 student learning. Completer impact and available outcome data on P-12 student growth and development will be analyzed, shared widely in advisory council meetings, and used as the basis for decision-making related to EPP programs, resource allocation, and future directions for the EPP. As noted in Standard 4, EPP impact data include a systematic exit evaluation of program completers, maintenance of employment data, surveys of both graduates and employers, and records of the awards and distinctions of our graduates as measures of impact.

The COEPD has developed and implemented a multidimensional approach for compiling, soliciting, and using data regarding its completers. The strategies used are consistent with the overall COEPD framework for collecting completer impact and employer/completer satisfaction data and implement data collection through multiple strategies and data points. Where applicable, assessments (surveys) have been subjected to the appropriate RELIABILITY AND VALIDITY PROCESS and have been determined to meet CAEP criteria for assessments at the “sufficient” or higher level. Findings from multiple applications of EMPLOYER SATISFACTION SURVEYS INITIAL LEVEL and EMPLOYER EVALUATIONS ADVANCED LEVEL, COMPLETER SATISFACTION SURVEYS INITIAL LEVEL and COMPLETER SATISFACTION ADVANCED LEVEL, EDUCATOR EXPO SURVEYS INITIAL LEVEL, EPPAC surveys and participant comments, TEACHER-IN-RESIDENCE mentor/supervisor surveys and performance assessments, NBCT performance, and CASE STUDIES 1 YEAR OUT INITIAL LEVEL and CASE STUDIES 3 YEARS OUT INITIAL LEVEL clearly indicate that completers are effective teachers who positively affect student growth and that both employers and completers are more than satisfied with the preparation and performance. COEPD faculty have used data to continually improve and inform the development of the QAS and data collection system, increase the interaction with P-12 stakeholders, and consider a number of changes in the educator preparation program. Future plans include a continued focus on development/refinement of assessments, continuing to identify strategies to increase response rates, increasing the focus on gathering employer/completer feedback that can be more readily disaggregated by licensure area, and expanding the scope of measuring completer performance in the field via case studies or state-assisted measures. An additional data point (NEXT SURVEY) will be added in spring 2018, when results from a WVDE/WVHEPC-sponsored statewide survey of P-12 supervisors become available. The NEXT Survey will also allow for benchmarking COEPD data against statewide averages.

The initial level of the COEPD utilizes three different measures to indicate completer impact on student learning: performance assessment data from the TEACHER-IN-RESIDENCE experience, CASE STUDIES 1 YEAR OUT INITIAL LEVEL, and CASE STUDIES 3 YEARS OUT INITIAL LEVEL. Initial conclusions are positive regarding the impact completers are having on students in the P-12 schools. Data from the case studies furnish solid evidence that initial level completers are making a decisive difference in student growth. The case studies will be continued for the next three years to identify more substantial patterns and to examine all content areas and all school levels. The initial level also demonstrates that completers effectively apply the professional knowledge, skills, and dispositions that the preparation experiences were designed to achieve. To provide evidence, four different measures of teaching effectiveness are tracked: 1) Marshall completers attaining National Board Certification (NBCT), 2) performance data from the TEACHER-IN-RESIDENCE experience, 3) 1st year out case study data, and 4) 3rd year out case study data. The first round of case study data produced substantial evidence of the teaching effectiveness of initial level completers in P-12 schools. The data collected document that Marshall University completers are considered valuable teachers and are highly effective in their jobs.

Consistent with national trends in teacher preparation, overall EPP enrollment has decreased over several years (ENROLLMENT). An examination of numbers from the freshman through the senior year shows that a significant factor is the candidate’s ability to successfully complete the Praxis tests, both the Core and the Content exams. Candidates who have difficulty passing the Praxis Core are often delayed in being able to take higher-level courses and may have difficulty with financial aid because of the significant number of hours taken prior to full admission to teacher preparation. This is reflected in the drop in enrollment at the junior level, as candidates are often transferred to other majors to remain eligible for aid while they continue to attempt to pass the exams. Likewise, under the latest state requirement that candidates pass the content exam(s) prior to student teaching, a candidate who has not passed is delayed in completing the other program requirements. With the current remodeling of the primary building for the EPP at the initial level, plans are underway to create a Praxis preparation lab. This room will be dedicated to candidates preparing for the Praxis. Contacts have been made with the state and with ETS to begin to identify materials, both in hard copy and online, to be used in this setting. Further plans will determine staffing resources and additional ways to use the lab to the advantage of the candidates.

Collecting accurate licensure and employment data is an ongoing challenge. Many candidates in advanced programs are already in the classroom at the time of their enrollment. At the initial licensure level, defining program completion presents its own challenge for the Office of Institutional Research. According to communication with this office, the university graduation rate for the baccalaureate cohort is 45% in six years (26% in four years) (LICENSURE AND EMPLOYMENT RATES).

For students starting in the COEPD, the six-year rate is 50% and the four-year rate is 19%. These data are further complicated by the influx of candidates through articulation agreements with CTCs, as well as by students transferring from other institutions or from other colleges within Marshall University. As further improvements are made to the QAS, a goal will be to work with Institutional Research to establish workable definitions and obtain cleaner data. Currently, there is an obvious concern with four-year graduation rates and a need to continue to explore supports for candidate success (ENROLLMENT).

The data are also summarized, analyzed, and shared with various college committees. A committee composed of arts and sciences representatives for initial secondary teacher education programs, the Content Specialization Liaison Committee for Initial Teacher Education (CSLCITE), meets each semester to consider the specialization courses offered for teacher education.  The Undergraduate Program Curriculum Committee (UPCC) and the Graduate Program Committee (GPC) are composed of COEPD faculty who study program data and proposed changes and make recommendations regarding implementation and changes. Data are also shared with the Education Personnel Preparation Advisory Committee (EPPAC) for their review and/or suggestions. The EPPAC is an advisory committee for teacher preparation that is composed of education faculty, arts and sciences faculty, public school personnel, and candidates who meet each semester to consider program changes, additions and deletions, and the assessment of clinical and field-based experiences, plus other items of importance such as evaluation reports and new program proposals. Additional information about each of these groups is available in the COEPD Handbook.

A liaison from the West Virginia Department of Education (WVDE) is a member of the EPPAC and shares information about state policy issues and proposed changes for teacher preparation programs. Approved program changes are then sent to the WVDE for approval. The COEPD also prepares an annual report for the WVDE and submits a major re-filing of programs every five years. The annual reports and program re-filings contain data that have been collected as part of the QAS.

5.5 The EPP assures that appropriate stakeholders are involved in program evaluation and improvement.

The EPPAC and CSLCITE meet each semester to examine programs, discuss problems, and begin to identify directions for improvement. Annually, PERSONNEL DIRECTORS and superintendents are invited to a session to meet with the certification analysts in the EPP, program directors, and representatives of the WVDE. These meetings provide an opportunity to discuss procedures, changes in requirements, and local district needs. The EPP has been working to establish new and more effective ways of gaining information about completers, particularly through direct contact with employers. The EPP will continue to refine these efforts to better evaluate the impact of our completers.

The EPP’s organizational structure and guiding policies create, at multiple levels, a culture of involvement for internal and external stakeholders. Through shared governance, these structures allow internal and external stakeholders to participate in setting strategic priorities and directions for the institution and unit, participate in the decision-making process, and collaborate in decisions regarding the academic enterprise of the EPP.

Evidence included throughout the SSR demonstrates multiple ways in which candidate and program data are shared with stakeholders. These constituent groups include school partners, alumni, employers, students, and faculty within and outside the EPP. Members of the EPP faculty, staff, and administration participate in meetings with regional constituents to provide updates on programs and gather information on the needs of the area (RESA 2 PERSONNEL DIRECTORS). Faculty and administrators play an active role in WV TEAC, an organization of representatives from state Institutions of Higher Education that serves as an advisory body to the WVDE. As noted in 5.2, data from the WV Department of Education are also shared with the EPP.

SUMMARY

In summary, both initial and advanced programs have identified multiple methods of performance assessment to ensure graduates are prepared to enhance the educational system with their commitment to P-12 students and the profession. These graduates will enter the profession with an awareness of diversity within schools, the impact of technology on education, a strong knowledge base concerning teaching and learning, and a willingness to ensure that all students will learn.

In seeking to prepare a Critical Thinker and an Experienced Professional as Specialist (CONCEPTUAL FRAMEWORK), the EPP is striving to collect quality data on candidates, faculty, and programs. Wherever possible, the assessment strategies have been integrated with other existing evaluation/assessment requirements. These assessment strategies follow a continuum of development by candidates and are based on institutional, state, and national standards. The data collected are used to improve candidate performance, faculty performance, and program operations.

The unit provides a visual picture of each assessment strategy, the point in the program at which it is used, the use of the data collected, and the Annual Assessment Cycle (QAS). The EPP is aware that it needs to continue to review and revise its data collection in order to measure program standards and outcomes in a way that will facilitate and support continuous improvement. Therefore, the EPP has planned activities and timelines to assist in keeping it aligned with institutional, state, and national standards (QAS).

Evidence Applies To
PROGRAM REVIEWS 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
HANDBOOK 5.1 Effective quality assurance system that monitors progress using multiple measures
RELIABILITY AND VALIDITY PROCESS 5.2 Quality assurance system relies on measures yielding reliable, valid, and actionable data.
5.3 Results for continuous program improvement are used
5.4 Measures of completer impact are analyzed, shared and used in decision-making
COMPLETER SATISFACTION ADVANCED LEVEL 5.1 Effective quality assurance system that monitors progress using multiple measures
5.4 Measures of completer impact are analyzed, shared and used in decision-making
COURSE EVALUATIONS 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
CONCEPTUAL FRAMEWORK 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
5.5 Relevant stakeholders are involved in program evaluation
CANDIDATE ADMISSION POLICY INITIAL LEVEL 5.3 Results for continuous program improvement are used
CASE STUDIES 1 YEAR OUT INITIAL LEVEL 5.4 Measures of completer impact are analyzed, shared and used in decision-making
CASE STUDIES 3 YEARS OUT INITIAL LEVEL 5.4 Measures of completer impact are analyzed, shared and used in decision-making
COMPLETER SATISFACTION SURVEYS INITIAL LEVEL 5.1 Effective quality assurance system that monitors progress using multiple measures
5.4 Measures of completer impact are analyzed, shared and used in decision-making
CSLCITE 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
5.4 Measures of completer impact are analyzed, shared and used in decision-making
5.5 Relevant stakeholders are involved in program evaluation
EDUCATOR EXPO SURVEYS INITIAL LEVEL 5.4 Measures of completer impact are analyzed, shared and used in decision-making
EMPLOYER SATISFACTION SURVEYS INITIAL LEVEL 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
5.4 Measures of completer impact are analyzed, shared and used in decision-making
ENROLLMENT 5.4 Measures of completer impact are analyzed, shared and used in decision-making
FACULTY LOAD 5.3 Results for continuous program improvement are used
FACULTY WORKSHOPS AND NEWSLETTERS 5.2 Quality assurance system relies on measures yielding reliable, valid, and actionable data.
GRADUATION AND CERTIFICATION REQUIREMENTS INITIAL LEVEL 5.3 Results for continuous program improvement are used
GRANT 5.3 Results for continuous program improvement are used
INITIAL PROGRAMS WITH CLINICAL HOURS 5.3 Results for continuous program improvement are used
InTASC 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
LEVEL I 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
LEVEL II 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
LEVEL III 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
LICENSURE AND EMPLOYMENT RATES 5.4 Measures of completer impact are analyzed, shared and used in decision-making
NBCT 5.4 Measures of completer impact are analyzed, shared and used in decision-making
NEXT SURVEY 5.4 Measures of completer impact are analyzed, shared and used in decision-making
OPERATIONS SURVEY 5.1 Effective quality assurance system that monitors progress using multiple measures
PDS 5.3 Results for continuous program improvement are used
PERFORMANCE TASKS 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
PLT 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
PRAXIS HELP WORKSHOPS 5.3 Results for continuous program improvement are used
PRAXIS II SCORES 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
PROGRAM PROGRESSIONS AND TRANSITIONS INITIAL LEVEL 5.3 Results for continuous program improvement are used
QEWG 5.2 Quality assurance system relies on measures yielding reliable, valid, and actionable data.
RECRUITMENT PLAN INITIAL LEVEL 5.3 Results for continuous program improvement are used
RESA 2 PERSONNEL DIRECTORS 5.5 Relevant stakeholders are involved in program evaluation
SELECTIVITY FACTORS INITIAL LEVEL 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
SIP 5.2 Quality assurance system relies on measures yielding reliable, valid, and actionable data.
STRATEGIC COMMITMENTS 5.1 Effective quality assurance system that monitors progress using multiple measures
TEACHER-IN-RESIDENCE 5.4 Measures of completer impact are analyzed, shared and used in decision-making
TITLE 2 5.2 Quality assurance system relies on measures yielding reliable, valid, and actionable data.
WVTPA 5.1 Effective quality assurance system that monitors progress using multiple measures
PERSONNEL DIRECTORS 5.3 Results for continuous program improvement are used
5.5 Relevant stakeholders are involved in program evaluation
PROFESSIONAL DISPOSITIONS FOR ADVANCED LEVEL 5.1 Effective quality assurance system that monitors progress using multiple measures
FACULTY ACHIEVEMENTS 5.1 Effective quality assurance system that monitors progress using multiple measures
QAS 5.1 Effective quality assurance system that monitors progress using multiple measures
5.2 Quality assurance system relies on measures yielding reliable, valid, and actionable data.
5.3 Results for continuous program improvement are used
5.4 Measures of completer impact are analyzed, shared and used in decision-making
5.5 Relevant stakeholders are involved in program evaluation
EPPAC 5.1 Effective quality assurance system that monitors progress using multiple measures
5.4 Measures of completer impact are analyzed, shared and used in decision-making
5.5 Relevant stakeholders are involved in program evaluation
SPAS 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
EMPLOYER EVALUATIONS ADVANCED LEVEL 5.1 Effective quality assurance system that monitors progress using multiple measures
5.3 Results for continuous program improvement are used
5.4 Measures of completer impact are analyzed, shared and used in decision-making
