Specialty Licensure Area Data

[back to CAEP Self-Study Report]


1. Based on the analysis of the disaggregated data, how have the results of specialty licensure area or SPA evidence been used to inform decision making and improve instruction and candidate learning outcomes?

COEPD programs have used the results of specialty licensure area evidence to inform decision making and improve instruction and candidate learning outcomes. Examples from programs that are recognized with conditions are outlined here. Based upon the feedback from previous recognition reports, the review of the Social Studies assessments and assessment system indicated the need for a substantial revision to most key assessments. As a result, several key assessments were revised in order to improve alignment with NCSS content standards, provide broader coverage of standards by our assessments, and allow for more detailed data to be compiled for analysis. New assessments that better measure how well our candidates are being prepared have been developed. The Social Studies program continues to examine its college’s assessment system and the collection, analysis, and sharing of program data. LiveText is now being used, and training has been offered to all faculty involved in the teaching of social studies candidates. Using a more formal data collection system has guided the revision of assessment rubrics throughout the program to better follow the curriculum and pedagogy guidelines of national standards. The Social Studies Education Program Coordinator has worked more closely with the professors of the Marshall University Economics, History, Geography, Political Science, Psychology, and Sociology departments through the Content Specialization Liaison Committee, which includes professors from these areas. This liaison committee meets each semester for formal communication with these content area professors. In response to this institution’s renewed commitment to continuous improvement, the Assessment #3 Planning Instruction and Assessment #4 Student Teaching assessments were changed to the rigorous edTPA assessment tools as Task 1 Planning for Instruction and Assessment and Task 2 Instructing and Engaging Students in Learning.
The MU College of Education and Professional Development (COEPD) was asked by the West Virginia Board of Education to pilot this program over several semesters. Data have proved informative for the program. The rubric elements are strongly tied to the NCSS pedagogical standards and revealed excellent candidate performance in student teaching overall. The Clinical Level III Evaluation rubric, also a part of Key Assessment 4, has been improved to give better descriptions of mastery levels.

Based upon the feedback from previous recognition reports, the English 5-Adult and 5-9 programs continue to search for improved assessment tools to measure the NCTE standards. We have made major changes to the types of assessments we are using in our English pedagogy courses. Because the NCTE standards have been updated since our last review, we revamped our ENG 402 (writing methods) and ENG 419 (literature methods) portfolios. This included a major overhaul in how we align the course objectives with NCTE standards. In addition, we added an eighth assessment to our review this year to address Standard VI, the new social justice standard, and created a project for ENG 476 (grammar methods) to meet these criteria. Next year, new state requirements will affect when candidates take the Praxis II exam. Because candidates will need to pass the exam before student teaching, this will affect the content knowledge they have going into the exam. The English department established an ad-hoc committee of English Education faculty to address these changes. The ad-hoc committee will be considering several ideas, including practice exams as part of the curriculum, offering regular content-specific exam prep workshops, and rearranging four-year plans so candidates can take content-specific pedagogy courses before the exam. In addition, the ad-hoc committee has created a department wiki that charts all content area courses taught in the English Education curriculum, objectives covered, assessments given, and corresponding NCTE standards addressed. A new system of sharing assessment data with our English content faculty has been implemented. All content faculty continue to meet collectively once per semester in the CSLCITE committee to address programmatic changes, adjustments, and deletions, but a new once-per-semester meeting now occurs with the separate English content area group (members of the ad-hoc committee mentioned above).
In this meeting, we share Praxis II data and discuss areas identified for improvement. We also ask liaisons to share any new assessments they are utilizing that would be evidence of content knowledge acquisition. This increased communication has improved delivery of content area information. Analysis of the data showed areas where candidate performance could be improved and where assessments may need to be revised; faculty will review both Assessments 3 and 4.

Based upon the feedback from previous recognition reports, the Science programs (Biology, Chemistry, Physics, and General Science) continue to search for improved assessment tools to measure the NSTA standards. Two new assessments have been successfully put into place in the secondary education methods course and capstone course for science candidates. Data from the capstone Assessment #7 Research Project provide robust and rich descriptions of candidates’ use of science content knowledge to carry out a research project. Assessment #6 Science Safety Module was revised based upon recognition report feedback; candidates now complete an extensive assignment that includes a safety plan, safety contract, safety quiz, and measured evidence of safe teaching in the field. Scores on this new assessment were excellent during this reporting period. Assessment 3 has been revised to better account for candidates’ ability to plan by improving descriptor levels within the rubric. Candidates created a unit plan and mapped out the unit by identifying objectives for a series of daily lesson plans. Candidates then used these lesson plans in their second clinical experience (Clinical II). The assessment is a substantial expansion of the previous planning assessment with a much more detailed rubric. In further response to our renewed commitment to continuous improvement, the Assessment #4 Student Teaching assessment was changed to the rigorous edTPA assessment tool as Task 1 Planning for Instruction and Assessment and Task 2 Instructing and Engaging Students in Learning. The rubric elements are strongly tied to the NSTA standards and revealed excellent candidate performance in student teaching overall.
While the data collected using the edTPA assessment tool from Assessments 3, 4, and 5 shed a positive light on the science education program, the institution continues to identify areas and assessments that can be improved and changed to better measure candidates’ knowledge, skills, and dispositions and to meet NSTA standards.

Each year the School Psychology Program holds a fall retreat in September to review data and determine what program changes are needed. Mean ratings from field supervisors during practicum and internship are consistently high, and benchmark ratings for the practicum experiences were consistently met. Proficient ratings for the internship were met easily for the class of 2015, except for one candidate who required an additional semester of internship. In 2016, two candidates’ overall scores were below 2.0. One placement was new, and there was conflict between the candidate’s two field supervisors regarding the sharing of supervision. The second placement had a new supervisor, and there may have been some confusion over using the assessment instrument. Both candidates struggled with time management, which was most evident in their failure to complete the required program evaluation in a timely manner. Examination of Praxis II scores and faculty ratings resulted in a recommendation for graduation for both of these candidates. We also noticed that some first-semester practicum candidates had ratings of 4 while some interns had ratings of 2 in the spring semester. While this is possible, faculty ratings did not always agree with the field supervisors’ ratings. To address these issues, we have started holding meetings twice a year with all internship and practicum field supervisors. These meetings are used to improve communication, convey expectations for candidates, provide trainings, and receive feedback concerning needed program improvements. We think these trainings will help us obtain more accurate data for our candidates. We have also just implemented LiveText and provided training for our field supervisors, which we believe will improve our data collection practices. We also plan to calculate inter-rater reliability between the candidate ratings of our faculty and those of the field supervisors.
This information will help us better process the information we receive from field supervisors and may influence future trainings. When evaluating the field supervisor ratings, the lowest domain was 2.9 Research and Program Evaluation for both practicum candidates and interns. In response to these lower scores, we have adjusted the course sequence to ensure the thesis/program evaluation course is offered earlier to candidates, but not preceding statistics or research methods. We have also been discussing the two possible statistics courses and plan to change to the one that better fits our candidates’ research needs. An additional systems-level project requiring application of SPSS and evaluation skills has been inserted in SPSY 603 (Professional School Psychology) to help improve candidates’ skills. Candidates have also been encouraged to discuss their program evaluations with their field supervisors, since the lower scores may be due in part to field supervisors not being aware of candidates’ skills in this area. A review of portfolio scores indicated consistently high mean aggregated scores across all domains, although scores dropped from 2015 to 2016; a newly hired faculty member rated the portfolios lower. To achieve more consistent ratings, a plan was developed to have two faculty rate the portfolios: after rating individually, the faculty discuss the scores and come to a consensus. This should improve the reliability and validity of the ratings. In spring 2016, we interviewed six randomly selected employers of our first-year graduates. Information from the employer surveys indicated that employers are extremely pleased with our graduates. Employers were asked to rate our graduates on a Likert scale ranging from unsatisfactory to mastery, and in all areas our graduates were rated at proficient or mastery levels.
They indicated our graduates are very knowledgeable about assessments, behavioral consultation, and interventions, but would benefit from more experience in risk assessment, crisis intervention, and familiarity with state policy. Domain 2.6, Prevention and Response, also received the next-to-lowest scores on the practicum ratings. Due to the low participation of candidates in crisis prevention and intervention activities in field experiences, the faculty planned improvements in SPSY 620 (Primary Prevention) to improve candidates’ knowledge and skills in this domain. First, SPSY 620 was repositioned in the course sequence to the summer following candidates’ first two semesters in the program. This change now affords all candidates access to the curriculum prior to entering their three semesters of practical experience (SPSY 738, 739, and 740) in the schools. Second, PREPaRE Workshop 2 will now be provided as part of SPSY 620. Finally, additional practicum requirements for crisis intervention and risk/suicide assessments are planned for Practicum I (SPSY 738) and II (SPSY 739). To address the state policy concern, we added more instruction in state policy during the first year of the program for the 2016-17 school year in SPSY 601 (Professional Practice: Schools).
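The inter-rater reliability calculation the School Psychology Program plans to run could take several forms; the sketch below uses Cohen's kappa on a 1-4 rating scale, which matches the ratings described above. This is only an illustration, not the program's actual tooling, and all names and ratings in it are invented.

```python
# Illustrative sketch: Cohen's kappa between faculty and field-supervisor
# ratings of the same candidates on a 1-4 scale. Hypothetical data only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same candidates."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of candidates rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal rating distribution.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings for eight candidates.
faculty     = [4, 3, 3, 2, 4, 3, 2, 4]
supervisors = [4, 3, 2, 2, 4, 4, 2, 4]
print(round(cohens_kappa(faculty, supervisors), 2))
```

A kappa near 1 indicates strong agreement beyond chance; values well below that would support the trainings and rubric clarifications described above.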

Since 2013, the TESOL program has been working to improve our program in four specific ways: first, we have revised our key assessments so that all now align with the current TESOL standards; second, we have refined the practicum experience to include more active and engaged supervised practice; third, we have begun a systematic process of assessing, revising, piloting, and implementing changes to key assessments; and fourth, we have supported our new program leadership in engaging more actively with the state and region’s ESL professional community. In the case of content knowledge, we have revised courses and key assessments to be more rigorous; we now ask candidates to demonstrate higher levels of cognition in both coursework and key assessments. In coursework, candidates are required to do original scholarly research that relies more heavily on major peer-reviewed journals, and to integrate that research into their writing, reflection, application, and practice. In order to strengthen candidates’ expertise in ESL content knowledge, we have also implemented changes to the courses and key assessments that focus on content. We have expanded the length of the Writing Project (WP), for example, from 12-15 to 15-20 pages (excluding front matter and references); more explicitly defined the specific kinds of knowledge candidates are to demonstrate in the paper; and moved it to a course held later in the rotation so that candidates would have additional coursework, and thus expertise, to draw upon. We have also piloted a “cross-cultural interview” assignment as a preliminary to the Micro-Culture Project, so that candidates may begin thinking earlier and more explicitly about culture, how it affects student learning, and how best to serve as ESL and cultural resources in their schools and communities. In 2016, West Virginia passed legislation that requires ESL candidates to demonstrate proficiency on the ETS “English to Speakers of Other Languages” Praxis Exam.
Beginning in fall 2017, Marshall University’s ESL candidates will take the Praxis exam, and we will then use passage of the ESOL Praxis Exam as evidence of mastery for Standard 1. We intend to keep both the Writing and Micro-Culture Projects, however, because faculty and candidates report that both projects help candidates to explore, apply, and deepen their knowledge about language, language acquisition, and culture, and about how the interaction of all three affects student learning. With respect to professional and pedagogical knowledge, skills, and dispositions: per the suggestion of reviewers and the TESOL consultant who assisted with program revision as we worked to resubmit our ESL program in 2012-2013, we have revised our ESL Program so that candidates participate in clinical experiences earlier and more consistently throughout the program, and so that field-based professionals engage in deeper and more intentional supervision during the Practicum, candidates’ culminating clinical experience. These combined changes enable candidates to better demonstrate their professional and pedagogical knowledge, skills, and dispositions, and allow program faculty more opportunities to assess candidates’ knowledge, skills, and dispositions as they evolve. The added requirement that candidates submit videos of their teaching has been especially helpful in that regard, as has the requirement that candidates reflect more intentionally on their teaching. Whether delineated by race, ethnicity, religion, or language, West Virginia is one of the nation’s least diverse states. In the case of language, fewer than 1% of the state’s population are English Language Learners. Moreover, because most of those ELLs are concentrated in five counties, candidates who live, work, and complete Practicums in a county with very few ELLs report that they regularly struggle to “find” English Language Learners with whom to work.
Engaging candidates in clinical experiences earlier in their programs (in the Micro-Culture Project, for example, and in the evolving Cross-Cultural Interview candidates complete prior to the Micro-Culture Project) has helped candidates both to apply their growing knowledge of ESL research and theory to practice and to make ESL connections that will help them “hit the ground running” when it comes time for the Practicum. We have also revised courses and assessments so that candidates begin to think of themselves and perform as ESL resources earlier in their coursework, which has helped faculty to assess dispositions related to cultural inclusivity, diversity, and context. Several of the key assessments (for example, the Micro-Culture Project, the Applied Research Project, and the Philosophy of Teaching Paper) explicitly direct candidates to serve as ESL resources in their school or home communities and to reflect on that service, directives that help program faculty to assess candidates’ dispositions. That evolving emphasis has been helpful not just to candidates, but to program faculty. The environments within which most of us live and work are not particularly progressive and, regrettably, it is not uncommon to encounter people with less than generous attitudes toward those for whom English is not a first language. As those who are lucky enough to work with ESL professionals know, the vast majority of those who go into ESL possess a special kind of dedication to diversity that reaches even beyond dispositional requirements. Program faculty find candidates’ reports of their efforts to affect school and community climates to be particularly inspiring. In addition to piloting practice-related course and assessment revisions, the current ESL Program Coordinator has become involved with West Virginia’s TESOL affiliate.
This has benefited Marshall’s ESL program for a number of reasons, not least of which is that it has helped the program to cultivate connections with ESL teachers across the region who can suggest program improvements, provide clinical placement and supervision opportunities, and serve as mentors and resources for new ESL teachers.

2. Based on the analysis of specialty licensure area data, how have individual licensure areas used data for change?

As outlined in the Quality Assurance System (QAS), all programs in COEPD use data to guide programmatic decision making and improvement. Examples will be provided from the four nationally recognized programs: Early Childhood Education, Elementary Education, Principalship, and Reading. The first two are initial-level programs; the latter two are advanced-level programs.

In the nine assessments related to the Early Childhood Education program, teacher candidates consistently performed above minimum expectations and typically performed considerably above them. However, the data did indicate several programmatic opportunities that would enhance candidate preparedness, and numerous assessment modifications were introduced to help better pinpoint areas where candidates could improve in the future. Under Content Knowledge, data indicated that candidates would benefit from a methods experience prior to the capstone field experiences. Each of the courses had an existing 15-hour clinical experience with children; these were combined into one 60-hour field experience in which students conduct observational assessment and plan and implement a mini-project integrating play, language, literacy, science, and mathematics. For Student Learning, the practica were redesigned to be more sequential in nature. A scope and sequence of classes has been established to better prepare students for the work of a professional educator. There are four levels of experience: Level I observation, Level II 60-hour content practice, and Level III student teaching or practica, with 150 hours for each of the levels for a total of 600 hours. The prior assessments were not viewed holistically: in some cases, one particular standard may have been evaluated four or five times while another was evaluated only once. Moreover, the assessments were often loosely tied to the NAEYC and CEC standards. To enhance assessment data with respect to the standards, each standard was first aligned to the appropriate assessments. Next, the NAEYC and CEC standards were thoroughly reviewed, and each assessment item was developed to reflect them. The performance levels in each assessment item are now closely aligned to both the standards and the supporting documentation contained in the standards.
All in all, this effort will permit us to accurately and consistently assess candidate performance across all assessments. Another issue with the data collection process was the time lag between recording the assessments on paper and transferring this information to a central electronic database. To promote real-time data collection (and more timely data analysis), a web-based, point-of-use data entry system has been created.

The Elementary Education program has revised key assessments in order to improve candidates’ experiences and learning; improve alignment with content standards; provide broader coverage of standards by assessments; and allow for more detailed data to be compiled for analysis. All rubrics have been refined and more explicitly aligned to ACEI standards. Program faculty are committed to using the data resulting from our assessments for continuous program improvement. Content Knowledge: State licensure assessment data (Praxis II) indicate that candidates did well in their overall Praxis II test performance in 2011-2013. Program scores in the six areas measured by the test are in line with (and many times higher than) state and national averages. These data indicate that we need to continue providing the high level of instruction that generated the current results, while examining ways to strengthen the lower-scoring areas, especially the consistently lower-scoring areas of Reading and Language Arts, Mathematics, and occasionally General Information. Course grades for Elementary Education (Assessment 2) demonstrate, overall, that our candidates performed very well. Because the candidates are passing the courses with the required grade or better and the courses are closely aligned to the ACEI standards, the data suggest that the candidates are meeting the standards. The scores that fell below the 3.0 range were found in the physical sciences and world geography courses. Therefore, the areas of Science and Social Studies, specifically Geography, need more attention for the elementary candidates, since those were the lowest average grades (in the mid to upper 2.0 range). The program has begun discussing content reinforcement through tutoring and changes to content course curricula for these areas, and others as they arise.
The Assessment 3 rubric is also being reworded so that the descriptions of the differences between performance levels state what an evaluator would expect to see at each level. With its strong connection to the ACEI standards and the reworded performance levels, this assessment will more clearly show that our teacher candidates understand what is expected of them as teachers in writing lesson plans. Professional and Pedagogical Knowledge, Skills, and Dispositions: for Assessment 4, while the data showed the majority of teacher candidates received “distinguished” and “proficient” ratings on all categories of the Level III experience, we believed there were areas of this assessment that needed to be improved and changed to better measure candidates’ knowledge, skills, and dispositions and to meet the ACEI standards. Consequently, revisions were made to Assessment 4 (Student Teaching/Level III Clinical Experience) to more closely align the rubric elements to the ACEI standards. According to the data, 35 of the 37 rubric elements were met with a mean score of 3.0 or better. “Physical Education” and “recommends/facilitates opportunities for change” were the only two elements of the rubric with mean scores below 3.0 (both at 2.9). Over two semesters, with two placements of students each semester, those two elements fell below 3.0 only three times, and the mean scores quickly improved during the following semesters due to the revisions of this assessment. The Assessment 4 rubric is also undergoing a rewording of its performance-level descriptions to describe what an evaluator would expect to see at each level, which will yield an even stronger rubric for assessing candidate performance in connection with the ACEI standards.
Assessment 6 (Contextual Reflection) was a new assessment designed to look at differentiated instruction in the math and science classroom. Even though it was new, mean scores were at or above 2.6. These scores were slightly lower than anticipated, so the faculty analyzed the data and revised the assignment instructions for greater clarity and precision. More emphasis needs to be placed on the following: instructional strategies and implications, non-structured vs. structured group activities, assessment methods, and interpretation of student learning. For the most part, these low scores can be attributed to students not elaborating on these sections in detail; minimal explanations and descriptions were common. Course instructors have therefore been encouraged to emphasize the need for deeper and more reflective narratives.

In the Principalship Program, the leadership studies faculty meets on a regular basis (normally once each month) to discuss program issues and holds a faculty retreat at least once a year to review and modify the program as needed. Each course and field experience is discussed, strengths and weaknesses are analyzed, and changes are made to improve the effectiveness of the program in preparing school leaders to have the requisite content knowledge; professional and pedagogical knowledge, skills, and dispositions; and understanding to affect student learning. Content Knowledge: the reflective essays (Assessment 2) allow candidates to demonstrate their comprehension of the ELCC standards for the principalship in narrative form through six written essays. Data for the first two cycles suggest that the greatest areas of weakness in content knowledge were ELCC Standard 1 and Standard 6 (i.e., vision and context), which accounted for the largest percentage of initial non-acceptable scores. That trend, however, did not hold for 2015-16, suggesting that changes the faculty made to the writing prompts and rubrics, based on careful analysis of the problems identified in submissions from the first two data cycles, helped increase students’ understanding of the concepts represented in those standards. Another change to the essay assessment was made beginning in fall 2016. Previously, the essays were submitted independent of coursework and evaluated prior to admission to the capstone courses (i.e., LS 660 and LS 685). The faculty determined that better preparation for writing the essays could be provided by including them in foundational coursework required for all students, both M.A. and post-M.A.
Faculty will carefully review data for the 2016-17 cycle to determine whether candidates are performing at a higher level with fewer revisions needed. Instructional Leadership Skills: Assessment 3 is used to measure candidates’ abilities to provide instructional leadership in working with faculty on issues of instruction, curriculum, culture, and professional development. Assessment 3 requires candidates to complete a teacher observation, evaluation, and plan of improvement, and aligns with ELCC Standard 2. Mean scores for Assessment 3 and the percentages of candidates earning acceptable or outstanding evaluations demonstrate that candidates are achieving mastery of the skills associated with the field experience. No students’ work was rated unacceptable in these areas over the three-year period, although the percentage of outstanding scores was lower in ELCC elements 2.2 and 2.3 for 2015-16 for both M.A. and post-M.A. students. Based on the teacher observation and evaluation instrument, candidates may need additional support in learning how to identify effective instructional strategies, with a focus on providing research-based suggestions for improvement. This support will be provided by making more research-based sources available to students, as well as specific examples of skills to look for in classroom observations and in plans of improvement from the West Virginia Department of Education website. The mentor evaluation (Assessment 4) data indicate that the field mentors believe candidates are developing the skills needed to be effective school leaders in the areas of ethics and establishing and maintaining relationships with the community. While the instrument was originally designed to collect data on standards and elements that were not assessed elsewhere in the program, the faculty plan to request additional input from the field mentors in order to gather supporting data in other areas of professional knowledge, skills, and dispositions.
It is believed these data will be of great use in revising and strengthening the internship and the coursework of the program. Another need is to increase the number of mentor evaluations returned to the program; no explanation exists, for instance, for the complete absence of evaluations from the post-M.A. track for 2015-16. Evaluations are returned to the instructor(s) of record during the semester students are in the capstone courses (i.e., LS 660 or LS 685). While mentors return these voluntarily, the instructors of the capstone courses will begin contacting all mentors directly that semester to request that they complete evaluations on the candidates they have mentored.

For the Literacy-Reading Specialist Program, data are used to continually improve the program. For Content Knowledge, the careful calibration and validation of test items from Assessment 2 (Pre- and Post-Foundation Tests) is an ongoing process to ensure that test items reflect content covered in foundation courses, align to IRA Standards, and provide the program with an accurate measure of learning. Test items will be re-calibrated to the revised ILA standards to be released in 2018. Cumulative findings from Assessment 2 suggest that candidates who complete the foundation courses have a solid foundational knowledge base that informs instructional practices and program design. This finding is supported by the near-100% passing rate of candidates on the Praxis II test for Reading Specialists, a West Virginia state requirement for licensure as a Reading Specialist. A future goal for the Pre- and Post-Foundation Knowledge Tests is to implement specific instructional interventions to support candidates whose scores fall more than one standard deviation from the norm on the post-foundation test. Interventions may include tutorials, additional coursework, and study groups. Consistent with the view that assessment is a method of improvement, candidates will have additional opportunities to demonstrate mastery of the content knowledge expected of literacy specialists. Results from The Reflective Practitioner (Assessment 6), gathered across the program through an electronic portfolio, demonstrate gains in critical and reflective thinking. Data from these assessments triangulate and extend the conclusion that candidates are developing the content knowledge expected of reading specialists. The sections of this portfolio assessment, including the description provided to candidates and the rubric, were aligned to the IRA 2010 standards in 2012-13.
As part of the realignment, faculty undertook a revision of the language of each assessment to ensure it met the needs of candidates. In particular, elements addressing diversity and coaching were incorporated to ensure candidates had robust learning experiences that prepared them for the role of the reading specialist/coach, as outlined in IRA 2010 standards. It is anticipated that this assessment will be realigned to ILA 2018 Standards. While cumulative results confirm overall success on Assessment 6, data also established that several candidates each term did not meet benchmark on portfolio rubric sections relating to the analysis and evaluation of relevant theory and research. A three-pronged strategy was implemented to address this need for improvement. First, program faculty decided in 2015 that candidates would have the opportunity to revise a portfolio submission during the term in which it was submitted. This was done to provide opportunities for more immediate refinement of thinking and knowledge rather than waiting to resubmit in the next semester. A quick resubmission supported candidate knowledge-making processes and promoted the value of life-long learning expected of literacy specialists. Tracking instances of resubmissions, as noted in the 2015 and 2016 portfolio data, demonstrates that this strategy is having a positive impact on candidate learning. Second, the program decided to emphasize theory and research in foundation courses: at least one assignment in each foundation course would require investigation and evaluation of theory and peer-reviewed studies relevant to the instructional intervention. Third, CIRG 621 Current Issues and Problems in Literacy instructors will incorporate a more structured approach to the review of peer-reviewed articles to ensure that all candidates have a solid knowledge of different research models and the tools with which to evaluate literacy research.
Faculty will evaluate the effect of this three-pronged intervention as part of the portfolio evaluation completed each semester. Data from Assessment 7 Writing Portfolio and Assessment 8 Literacy and Technology Project provide examples of how peer coaching experiences are incorporated into foundation course learning experiences. To emphasize the importance of coaching experiences, the program plans to separate coaching from the presentation element of these two assignments. Cumulative data from assessments linked to advanced program courses suggest that candidates develop the assessment and instructional capacities needed to meet the needs of diverse learners and to take on the coaching and literacy leadership roles expected of literacy specialists. Assessment 4, Folio of Practicum Experiences Sections A and B, gathers data from two practicum courses: CIRG 643 focuses on clinical interventions; CIRG 623 focuses on coaching and literacy leadership. CIRG 643 shifted from a one-on-one model to a model that expects candidates to develop and demonstrate proficiency in working with different classroom configurations, including individual and small group instruction. This adjustment prepared candidates for the kinds of interventions expected of literacy specialists. Because many candidates complete their clinical in a school, typically a PK-6 school, their clinical experiences are often restricted to the levels of students served by that school. Under discussion are changes to field experiences in foundation classes and the clinical courses to ensure that graduates have the depth and range of instructional experiences expected of a PK-12 Reading Specialist. CIRG 623 requires extensive, supervised coaching experiences in a local school, demonstrating competence at supporting classroom teachers and professional development initiatives based on a school-wide literacy plan.
Candidates have expressed interest through a survey in a course devoted exclusively to coaching and a second course devoted to literacy leadership. Faculty are exploring the option of developing a post-MA certificate that emphasizes coaching and leadership. Currently, both program clinical courses are delivered using a hybrid model that combines face-to-face meetings and asynchronous activities using Blackboard tools. The program is exploring an initiative that will make better use of virtual tools such as Collaborate, incorporated into the Blackboard system, and FaceTime, a component of Apple tools. These virtual tools will permit real-time clinical supervision without compromising the privacy of students or teachers at local schools, and will enable clinical supervisors and peers to provide timely feedback that will better develop the clinical, coaching, and mentoring skills of candidates. With respect to student learning, cumulative data findings suggest that candidates in CIRG 643 demonstrate the ability to plan and implement effective instruction. The program has developed a unique and comprehensive tool, the Student Progress Record (SPR), through which to gauge the impact on student learning, and is developing additional comprehensive tools with which to evaluate that impact. We piloted detailed student surveys in 2012 that included observations in schools. In 2015-2016, the program began working with a focus group from local schools that included former candidates, Title I personnel, and county curriculum leaders. Our goal is to continue and expand the use of focus groups to open a dialogue with constituents about effective literacy instruction and the ways in which the program can collaborate with public schools to prepare literacy specialists who better meet the diverse needs of students.

3. How does the specialty licensure area data align with and provide evidence for meeting the professional standards in the licensure area at initial and specialty area for advanced?

COEPD has five programs that are evaluated through CAEP Program Review with Feedback (PRwF): Art, Autism, Post-Baccalaureate, Reading Endorsement, and Social Services and Attendance. Each of these programs has data that align with and meet the professional standards in its licensure area.

The Art PreK-Adult initial-level licensure program aligns data and provides evidence for meeting professional standards in several ways. Overall, the data from Assessment 1 (Praxis II: State Licensure Exam) and Assessment 2 (course grades) provide evidence of candidates' content knowledge and document the alignment of that content knowledge with all elements of the NAEA Standards (except Standards XI and XIII, which are addressed in Assessments 4 and 5). State licensure assessment data indicate that candidates did well in their overall Praxis II test performance. The scores and pass rates indicate that this institution's Art Education program experiences enable candidates to develop the knowledge and skills required of Art educators. Program scores in the sub-category areas measured by this Praxis test are in line with state and national scores. Course grades (Assessment 2) also demonstrate that candidates performed very well. The data indicate that all candidates in the Art Education program met the minimum expectation of a C in the content courses. In almost every instance, candidates scored above this minimum, indicating a strong understanding of the content. This assessment shows candidates are well prepared in the knowledge needed to teach. Data from Assessment 6, the capstone culminating project, show a minimal pattern of success. It is the intent of the Art Program faculty to revisit the requirements of the art capstone, its goals, and its alignment with the national standards for visual arts educators. For professional and pedagogical knowledge, skills, and dispositions, Assessment #3 Planning and Assessment #4 Student Teaching shed a positive light on Art Education candidates' abilities to plan and implement lessons and assess student learning. These data have proved informative for the program. The rubric elements are strongly tied to the NAEA pedagogical standards and revealed excellent candidate performance in student teaching overall.
This WV-TPA assessment tool allowed our candidates to demonstrate their abilities to develop knowledge of Art subject matter and Art teaching, develop and apply knowledge of varied students' needs, and consider research and theory about how students learn. For student learning, overall results from Assessment 5 indicate that our Art Education candidates demonstrated solid performance in understanding the practice and theory of candidate effect on student learning. Candidates successfully performed at the Proficient and Advanced levels on all assessment components. The Impact on Student Learning section of the West Virginia Teacher Performance Assessment tool is used to assess our candidates' abilities to impact student learning.

The Autism program has six assessments that are aligned with the Council for Exceptional Children (CEC) standards. 1) Grades from two courses are used to evaluate knowledge of topics such as history, legal and ethical issues, characteristics, and evidence-based teaching practices specific to students with ASD. The first course, CISP 527, aligns predominantly with content knowledge standards, while the second course, CISP 662, aligns with both content knowledge and the application of content knowledge during the field-based experience. The average grade was a B or higher for the majority of students (over 90%) from 2014-2016. 2) The Evaluation of Professional Knowledge was created to provide a measure of basic candidate knowledge of concepts related to ASD. The assessment is a modified version of an assessment created by the WVDE for the Autism Academy trainings previously offered to WV county school systems by the department. The assessment items cover a broad range of information specific to the content knowledge standards related to understanding the unique characteristics of ASD and how it affects student learning and educational programs. The expectation is that students receive a minimum score of 30, or a "C" or better, on this assessment, which reflects an adequate level of knowledge upon finishing the two autism courses. Candidates met this expectation, with the majority scoring a 34 (B) or higher. 3) For the Field-based Experience Lesson Plans, candidates are required to develop two lesson plans targeting skills for at least one student with ASD. With the exception of one candidate during the Summer 2017 semester, all candidates performed at the proficient level or above in the areas of overall lesson plan and goals. 4) For Field-based Experience Lesson Plan Implementation, candidates are required to videotape themselves teaching using two distinct evidence-based strategies for students with ASD, Discrete Trial Teaching (DTT) and Task Analysis/Chaining (TA).
This assessment evaluates candidates' ability to apply content knowledge to their teaching and addresses skill-level standards. Candidates performed at the proficient level or above on all components of the rubric. Compared to the other rubric components, candidates performed best on implementation/development of materials, learning environment, rapport, and positive behavior supports. Candidates demonstrated lower scores on their ability to implement instructional strategies, prompts and reinforcement, and communication supports relative to the other measures, although the majority scored at the proficient level or above on these specific components of the rubric. 5) Continuous Data Collection on Student Learning requires candidates to collect, graph, and analyze data on student learning related to performance on lesson implementation. This assessment evaluates candidates' ability to apply content knowledge to their teaching and improve student learning. Between 53% and 86% of candidates performed at the distinguished level on the rubric, with the highest scores being for adjustment to instruction. 6) The Candidate Disposition Survey is a modified version of an assessment created by the WVDE for the Autism Academy trainings previously offered to WV county school systems by the department. Some questions were modified to meet current standards and reflect the latest information about ASD. This assessment evaluates candidates' perceptions of their knowledge of important content and skills needed to teach students with ASD. The majority of candidates rated themselves as proficient or above across items and semesters, with mean ratings ranging from 3.33 to 4.29. The standard deviation was relatively small (range: 0.51-1.06) across items and semesters, suggesting consistency across participants' ratings. The median score was 4 across all candidates regardless of semester. These data suggest that candidates view themselves as meeting the standards.
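Per-item summaries like the mean, standard deviation, and median reported for the disposition survey can be reproduced with a short script. This is a minimal sketch with hypothetical ratings; the function name and data are illustrative, not the program's actual survey records.

```python
from statistics import mean, median, stdev

def summarize_ratings(ratings):
    """Mean, sample standard deviation, and median for one survey
    item's 1-5 Likert ratings (hypothetical sketch)."""
    return {
        "mean": round(mean(ratings), 2),
        "sd": round(stdev(ratings), 2),
        "median": median(ratings),
    }

# Hypothetical ratings for a single survey item across candidates
item_ratings = [4, 4, 5, 3, 4, 4, 5, 4]
print(summarize_ratings(item_ratings))
```

Running such a summary per item and per semester yields the kind of mean/SD/median comparisons cited above.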
The specific alignment of the assessments with the standards is available in Section III of the PRwF report.

The Post-Baccalaureate program has eight assessments that are aligned with the West Virginia Professional Teaching Standards (WVPTS). 1) Praxis II content area tests are standardized subject-specific tests adopted by the West Virginia Department of Education and administered by the Educational Testing Service. Praxis II is aligned with core content, pedagogy, and teaching. The data show that all candidates had passing scores on their Praxis II content tests. 2) The transcript analysis process was developed in collaboration with content area faculty and is aligned with national and/or SPA standards for each content area. All content areas require a minimum GPA of 2.8 with no grades below a C. Each analysis requires candidates to have between 36 and 48 credit hours in the content area, and, depending on the specific content, between 33% and 92% of that coursework must be completed at the upper (junior and senior) or graduate level. 3) Assessment 3 is a unit plan used to evaluate the candidate's ability to plan units of instruction against the subject-specific standards. The data indicate that the majority of candidates are at the Distinguished level on the preparation of instructional units. The average score for each component of the rubric falls between Proficient and Distinguished. The area with the lowest average score is Formative Assessment, indicating that this is an area that needs further emphasis. 4) The Clinical Performance Evaluation is used to determine a candidate's level of proficiency during EDF 677, MAT Level III Clinical Experience (student teaching). The data show that the majority of candidates are at the Accomplished level for the WVPTS and the Target level of professional conduct, which is above what would be expected of a novice teacher.
Across both semesters, candidates scored lowest in the area of Professional Responsibility for School and Community. 5) Assessment 5 is a work sample prepared by the candidates as a requirement during EDF 677, MAT Level III Clinical Experience (student teaching). Most candidates received final scores of Distinguished or Accomplished, with a few items rated as Emerging. No items were rated as Unsatisfactory for the final grade, although four candidates in spring 2017 and almost all candidates in fall 2016 were required to redo portions of their TPA to correct some unsatisfactory ratings. Errors were typically the result of not reading directions or a lack of explanation in the narrative; none required remediation or redoing the entire TPA. 6) For the Capstone Presentation, candidates make an oral presentation based on their West Virginia Teacher Performance Assessment and student teaching (capstone) experience. Nine candidates completed the Capstone in spring 2017 and nine in fall 2016, for a total of 18 candidates. Average student scores ranged from 3.2 to 4.0 on a 4.0 scale. 7) Assessment 7 is a Science Safety Module; pre-service science teachers must provide evidence of appropriate knowledge of safety in the science classroom. Standard 4 of the NSTA Standards addresses safety. Students taking the exam displayed acceptable levels on all eight standards of the Science Safety Rubric. 8) The ACTFL Oral Proficiency Interview (OPI) is a valid and reliable means of assessing how well a person speaks a language. It is a 20-30 minute one-on-one interview between a certified ACTFL tester and an examinee. The speaker's performance is compared to the criteria outlined in the ACTFL Proficiency Guidelines 2012 – Speaking or the Inter-Agency Language Roundtable Language Skill Level Descriptors – Speaking.
The interview is double-rated, and an official ACTFL Oral Proficiency Certificate stating the candidate's proficiency level is issued to the candidate. The Oral Proficiency Interview is the officially recognized assessment instrument of the American Council on the Teaching of Foreign Languages. Recent candidates received median scores on the ACTFL OPI.

The Reading Endorsement program has seven assessments that are aligned with the International Reading Association (IRA) standards. 1 & 7) The Pre- and Post-Foundation Tests evaluate mastery of content-based information and concepts. This assessment features a pre-post design that captures candidate gains in core knowledge covered in foundation courses and readiness for advanced program courses. The Pre-Foundation Knowledge Test was designed as a diagnostic test and is used for advising purposes. Candidates take the Pre-Foundation Knowledge Test upon entry into the program using online Blackboard assessment tools. It has 45 items. As the test is diagnostic, there are no unacceptable scores. The Post-Foundation Knowledge Test, a different version of the Pre-Foundation Knowledge Test, was developed to gauge the impact of program foundation courses on candidate knowledge. Findings demonstrate that candidates improve their knowledge of literacy theories, research, and instructional applications. 2) The Literacy Biography is a digital cross-program portfolio submission that includes a guided biographical reflective essay integrating individual experiences, theories, beliefs, and pedagogic practices. Candidates in 2016 and 2017 demonstrated high pass rates on this assessment. 3) The Technology and Literacy Final Project enables candidates to synthesize what they have learned about technology and apply this knowledge to support literacy learning in P-12 classrooms. This project aligns with IRA Standards 1-5. All candidates except one in academic years 2012 and 2017 scored at meets-benchmark or exceeds-benchmark on all components of this assessment, demonstrating that candidates competently and thoughtfully incorporate technology into literacy learning projects. 4) The Analysis of Foundation Learning Experiences is a digital cross-program portfolio submission that requires the candidate to complete a reflective analysis of the program learning experiences.
The pass rates (86%-100%) from all applications of this assessment confirm candidates' mastery of foundation learning experiences and related literacy theory. 5) The Writing Portfolio is a semester-long learning activity associated with all sections of CIRG 615 Writing in the Literacy Curriculum, a foundation course that engages candidates with two questions: What do I know about being a writer? What do I know about teaching writing? The five sections of the Writing Portfolio align with the specific IRA 2010 Standards connected with the teaching of writing (IRA 1.1, 1.2, 2.3, 3.3, 4.1, 5.2, 5.4, 5.5, 6.2, 6.3, 6.4). Weighted, cumulative mean scores (2.38; 2.92) and 100% passing rates from 2012-17 demonstrate strong performance on all components of the Writing Portfolio assessment. Both the high means and their close range (2.33-2.42) on the five sections of this assessment indicate that all candidates are competent in understanding the processes of writing from the dual perspectives of writer and teacher. They have acquired the skills and knowledge to implement writing instruction in a balanced, comprehensive literacy curriculum and to demonstrate an impact on student writing and learning. 6) Supporting English Language Learners is an assignment that will be added to the certificate program in spring 2018 to better prepare candidates to meet the needs of second language learners. It will align with foundational knowledge, curriculum and instruction, diversity, and creating a literate environment for all. Data will be collected this spring to determine whether the standards are met for this assessment.

The Social Services and Attendance program has six assessments that are aligned with the Social Services and Attendance Standards (SSAS). 1) The Pre-Intern Essay requires the candidate to assess the school and community through the individual, social/organizational, and community contexts, along with a self-assessment. A review of the data indicates that students possess some general knowledge of the duties and responsibilities of school social services and attendance directors upon entering the program. The standards of cultural assessment, professional values, ethics and professional competencies, and the understanding of determining the most appropriate interventions were the weakest knowledge areas noted in the pre-assessment (SSAS 1.1, 1.5, 2.29, 3.5, 3.7). Recognizing the importance of maintaining a list of community resources and the value of an education were the highest knowledge areas measured in Assessment 1 during both years of assessing (SSAS 2.32, 4.12, and 6.1). 2) The Attendance Director Handbook Portfolio is a compilation of informational material used by the Attendance Director in daily practice and a practical application of the information gathered for the handbook. The handbook is compiled from student assignments completed during both LS 691 (Attendance Director) and LS 692 (Attendance Director Internship). Over the course of these semesters, the student gathers information on areas of practice within the attendance and social services profession, supplemented by forms, service directories, codes of ethics, etc., that are required for submission into the Attendance Director Handbook/student portfolio. Once the student has completed both LS 691 and LS 692, he or she has accumulated materials to use in practice as an Attendance Director. All students performed this task adequately. 3) The Mentor Evaluation is an evaluation checklist completed at the end of the semester by the certified Attendance Director mentor.
Mentors rate the students in six areas: (1) administration; (2) social work programs, service, and delivery skills; (3) advocacy; (4) liaison function; (5) law and legal issues; and (6) professional roles. All candidates were rated highly by their mentors. 4) The Action Research Plan: Problem Analysis of Truancy and School Dropouts requires the candidate to define and research a problem within the context of truancy, school dropout, or another social or academic issue contributing to poor school attendance or school dropout. The student must analyze the problem, discuss possible solutions, and finally provide a solution. The action research plan requires reflective practice as a guide for development. On a scale of 0-2, mean scores range from 1.25 to 1.98, indicating above-average performance. 5) The Attendance Director Professional Skills Assessment is a simulation or actual district project that provides data on the professional skills of the student. The student is required to complete tasks within five subcategories: service plan, technology, legal system, attendance director duties, and community resources. These tasks reflect the student's competencies in the standards required for certification as an Attendance Director by the West Virginia Department of Education. On a scale of 0-2, mean scores range from 1.50 to 1.98. This comprehensive assessment indicates that students mastered a wide range of standards. The service plan and technology standards, while mastered, showed lower mean scores than community resources (SSAS 6.5 & 6.8). 6) The School Attendance Law and State Attendance Policy Exam consists of 19 multiple-choice questions. The exam measures the student's knowledge of both West Virginia State Code, School Laws, 18-8 and West Virginia State Board of Education school attendance policy. The data show that the students have excellent content knowledge of West Virginia State Code attendance laws and West Virginia Board of Education attendance policy.
These scores, ranging from 1.71 to 1.99, are collectively among the highest of any assessment for both years.

4. How are SPA reports that are not Nationally Recognized being addressed?

COEPD is working diligently to obtain National Recognition for 51% of current programs. COEPD currently has four Nationally Recognized programs, two at the Initial Level (Elementary and Early Childhood) and two at the Advanced Level (Literacy-Reading Specialist, Leadership Studies Principalship). We have five programs Recognized with Conditions at the Initial Level (Social Studies, Science, English Language Arts, Math Comprehensive 5-12, Math Comprehensive 5-9) and four at the Advanced Level (Teaching English as a Second Language, Special Education Multi-categorical, Special Education Pre-school Special Needs, Early Childhood Education). At the Advanced Level, School Library Media Specialist was Recognized with Probation and Math through Algebra I was rated as Further Development Required. In March 2018, Wellness (PE and Health), Special Education Multi-categorical, Japanese, and Spanish will be submitted at the Initial Level, and Early Childhood Education and Elementary Math Specialist at the Advanced Level, all as initial submissions. At the Initial Level, the following programs were submitted in March 2017 and Recognized with Conditions: English 5-Adult and 5-9, Social Studies 5-Adult and 5-9, Biology 9-Adult, Chemistry 9-Adult, Physics 9-Adult, and General Science 5-Adult and 5-9. Problems noted in the SPA reports included lack of alignment of rubrics with standards, inadequate student teaching data, and an insufficient number of assessments linked to some standards. These issues were addressed and the SPAs were resubmitted in September 2017.

For Social Studies, the following changes were made: 1) the enrollment/completer table was corrected and all assessment tables were revised; 2) course descriptions were clarified; 3) student teaching evaluation data were made available; and 4) a new key assessment (#6) was developed.

For all Science programs, the following changes were made: 1) assessments were more clearly aligned with standards; 2) student teaching evaluation data were made available; 3) safety procedures were more clearly assessed; and 4) a new assessment measuring candidate professional knowledge and skills was created.

For English, the following changes were made: 1) assessments were rewritten to demonstrate better alignment to NCTE standards; and 2) performance descriptors were added to rubrics.

One other program (Physical Education PreK-Adult) submitted in March 2017 was rated as Further Development Required. Concerns from the program report were: 1) assessments inappropriately aligned to standards; 2) lack of assessments providing evidence of candidates' ability to plan; 3) unclear rubric criteria; and 4) insufficient scoring guides and data tables. Many of these concerns were exacerbated by faculty attrition. The COEPD is currently advertising for a new Wellness Program Director and anticipates that the new hire will provide leadership in correcting these deficiencies.

The Mathematics 5-Adult and 5-9 programs were submitted for the first time in September 2017, and we are awaiting the results of these SPA submissions. The Health PreK-Adult, Foreign Languages 5-Adult, and Multi-Categorical programs will have first submissions in March 2018. At the Advanced Level, three programs were submitted in March 2017 and Recognized with Conditions (School Psychology, Teaching English as a Second Language, and Special Education Multi-Categorical). Problems noted in the SPA reports included lack of alignment of rubrics with standards, the need to clarify types of clinical experiences, an inadequate number of assessments linked to standards, and assessment instruments with insufficient items for some standards. These issues were addressed and the SPAs were resubmitted in September 2017.

For School Psychology, the following changes were made: 1) inconsistencies that had been noted were explained; these were in the areas of the delivered course of study, internship hours on transcripts, and variability for part-time candidates. 2) Additionally, all teaching faculty for School Psychology candidates were included to address faculty-to-candidate ratio concerns. 3) Concerns with key assessments were addressed through stakeholder groups, which resulted in increased specificity and number of rubric items for the Practicum Evaluation and Internship Performance Assessment. Improvements to these assessments similarly involved disposition alignment, clear statements of criteria, and examination of validity. 4) Finally, Assessments 5 and 6 were strengthened through the "look fors" document and clarified PND/GAS instructions, which clarified the assignment descriptions and rubrics.

In the Special Education Multi-categorical resubmission, the following areas were addressed: 1) to clarify the informal/formal clinical experiences of the multi-categorical program, the narrative was expanded and two charts were added, one containing a field experience tracking chart to ensure diversity of placements and experiences for candidates; 2) rubrics for Key Assessments 5, 6, and 7 were redeveloped using descriptors and language that differentiate levels of performance, in order to move away from "grading rubrics."

For our September 2017 resubmission for Teaching English as a Second Language, the following changes were made: 1) the Assessment 4 rubric was revised to separate more specific elements of candidates' performance, and the language of the competency levels was revised so that indicators unambiguously described performance; 2) the Assessment 5 rubric was revised to use Bloom's verbs that clearly delineate performance levels; and 3) the revised submission provided additional information on how Standard 5a is measured in Assessment 2.

Three additional programs (Special Education Preschool Special Needs, Deaf and Hard of Hearing and Visually Impaired) were submitted for the first time in September 2017.  We are awaiting the results from these SPA submissions. Three other programs were submitted in March 2017 and will be resubmitted in March 2018 (Early Childhood Education, School Library Media Specialist, Math through Algebra I).

Concerns noted for Early Childhood Education (Recognized with Conditions) included rubrics not clearly aligned with specific standards, the need to clarify types of interaction with diverse children, and concerns about the use of reflection for performance-based assessment. Additionally, specifics on improvements to course content, and how those improvements relate to the standards, need to be addressed. Corrections are being made for Early Childhood Education, and the SPA will be resubmitted in March 2018. The rubrics are currently being revised to document alignment with the specific standards. Other revisions include clarifying and updating the assignment for clinical interaction with diverse learners, identifying new assessments of clinical performance, and providing details of improvements to course content, including which courses and what content need to be updated.

The School Library Media program was Recognized with Probation. Concerns in the Program Report were lack of delineation of activities, no clear assignment details, advocacy and leadership not addressed, rubrics not aligned to standards, little evidence of teaching strategies/knowledge of learners/skills, assessments too broad and unfocused, assessments not aligned to standards, and data not connected to program improvement. To address these concerns, the program coordinator for School Library Media Specialist met with the AASL CAEP representative to review the SPA and discuss the rewrite. Helpful suggestions from the representative will be incorporated into the March 2018 submission. Additionally, changes to the SPA for Assessments 2 and 4 will focus on implementation rather than planning, the use of data for program improvement will be explained, and all rubrics will be clarified and aligned to the standards.

General Math through Algebra I was rated as Further Development Required. Concerns in the Program Report included a lack of disaggregated candidate data, lack of alignment with current standards, course descriptions that did not sufficiently address course content, rubrics not clearly aligned to specific standards, and rubrics with generic language that did not address standard elements or sub-elements. Revisions being made for the General Math through Algebra I program include: 1) Assessment 1 was revised to clarify individual candidate Praxis scores for content knowledge; 2) Assessment 2 was revised to clarify completer grades for content knowledge, and course descriptions were revised to clarify the evidence for NCTM content standards; 3) the Assessment 3 rubric was revised to clarify the evidence of candidate mathematics lesson planning and student teaching planning and its alignment with NCTM standards; 4) the Assessment 5 rubric was revised to clarify the evidence of candidate mathematics action research / impact on student learning and its alignment with NCTM standards; and 5) the Assessment 6 rubric was revised to clarify the evidence of candidate content knowledge through the research project and its alignment with NCTM standards. The Elementary Math Specialist Program is also undergoing extensive revisions and will be submitted in March 2018.

A table summarizing the submitted and pending changes to SPAs can be found in the SPA Report document.

[back to CAEP Self-Study Report]