5.3 Program Evaluation
Candidates design and implement program evaluations to determine the overall effectiveness of professional learning on deepening teacher content knowledge, improving teacher pedagogical skills and/or increasing student learning. (PSC 5.3/ISTE 4c)
The artifact before you is a professional development program evaluation tool I created for ITEC 7460. I wanted a quick guide to the different types of professional development evaluation, organized by purpose and medium. I based this guide heavily on the Guskey Framework, which divides the evaluation of a professional development program into five levels. While I did not implement this exact guide, I have referred to it as a tool for designing evaluations for professional development programs I have since led, such as my capstone project. I am the sole contributor to this artifact.
This artifact, combined with the evaluations I have derived from it, demonstrates mastery of Standard 5.3: Program Evaluation. For example, it clearly shows that I understand the different purposes of assessment: some questions speak to the learning experience itself (Was the environment pleasant? Was your instructor knowledgeable?), while others attempt to gauge the level of learning that occurred in a training session so that further instruction can fill gaps in learning and clarify misconceptions in a teacher's understanding. At level five of the Guskey Framework, teachers answer questions about the impact of their learning on student performance. Ultimately, this is the end goal of any content or pedagogical instruction: to improve student performance. Thus, teacher perception, along with assessment data, can be used to evaluate the effectiveness of a professional learning program. In this artifact, I explain how data about student performance can inform future PD. When I implemented PD evaluations for the training I delivered during my capstone, I followed up with several surveys asking teachers to rate the quality of the instruction they received as well as their willingness to try the new tool as part of their instructional practice. Additionally, I followed up with teachers a few weeks after the training to determine whether they had implemented the instructional tool and, if not, what trouble they were having. In short, I designed and implemented several types of program evaluation based on the Guskey Framework to determine what worked in the PD session and what needed greater relevance, alignment, or clarity in order to deepen teacher content knowledge, strengthen teaching strategies, and, ultimately, improve student learning.
From completing this artifact, I learned that there is more to program evaluation than I initially believed. While I was familiar with level one questions on the Guskey Framework (I had taken program evaluation surveys before), I did not understand the purpose of these evaluations from the perspective of PD leaders. Now I know that evaluations are an important tool for planning and improving an effective PD program. Additionally, as I learned in my capstone experience, a good PD program takes several cycles of teaching, tweaking, and re-teaching for the training to stick. During my capstone, I created several training evaluations for the staff in attendance and subsequently had to adjust instruction and re-teach based on the survey feedback. If I were to improve the quality of this artifact, I would include real examples of the survey questions I created for my capstone project.
The work that went into this artifact has improved the quality of the PD programs I have led, and this has, in turn, benefited faculty development. For example, the initial survey data I collected after my capstone project allowed me to design a follow-up session addressing concerns and misunderstandings from the first session I led. The program evaluation learning behind this artifact certainly helped improve the quality of the instruction I delivered. The impact of this learning can be assessed through further surveys asking teachers about the training's effect on student learning (Guskey level five) or by comparing student assessment data from year to year.