In the previous post, we looked at how the number of policies implemented affected the CS Education success metrics in the 2021 State of Computer Science Report by Code.org, CSTA, & ECEP Alliance. This post will investigate how individual policies affect these success metrics. For review, these nine policies are:
Policy | Description
---|---
P1 | Create a state plan for K–12 computer science |
P2 | Define computer science and establish rigorous K–12 computer science standards |
P3 | Allocate funding for computer science teacher professional learning |
P4 | Implement clear certification pathways for computer science teachers |
P5 | Create preservice programs in computer science at higher education institutions |
P6 | Establish computer science supervisor positions in education agencies |
P7 | Require that all high schools offer computer science |
P8 | Allow a computer science credit to satisfy a core graduation requirement |
P9 | Allow computer science to satisfy a higher education admission requirement |
And the four CS Education success metrics tracked by the report are:
Success Metric | CAPE Pyramid Level | Code identifier in charts
---|---|---
The percentage of public high schools that offer a computer science course | Capacity | PctHSwFCS |
The percentage of students who attend a school that offers a computer science course | Access | PctStudentsHSwCS
The percentage of students enrolled in a computer science course | Participation | Pct_InFCS |
The percentage of students that take an AP CS Exam (either AP CS A or AP CS Principles) | Experience | Pct_InAP |
While I am hopeful that the reader will find this analysis of the relationship between individual policies and CS Education success metrics insightful, there are important caveats to this study:
- Policy implementation is scored as either 0 or 1, so this analysis misses nuances within a policy. For example, a $1 million budget scores 1 regardless of a state's student population. A policy scores 0 unless it is an unqualified “Yes” in the 2021 State of Computer Science Education report workbook as of February 2022 (a minimal encoding sketch follows this list).
- This study does not consider dates. The dates of policy implementation and the dates associated with individual state data are not utilized. For example, a teacher certification policy counts the same whether established in 2015 or 2021.
- And finally, correlation is not causation.
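To make the 0/1 scoring rule concrete, here is a minimal pandas sketch. The filename and column layout are assumptions for illustration, not the report workbook's actual schema:

```python
import pandas as pd

# Hypothetical layout: one row per state, one column per policy (P1..P9),
# with raw workbook values as strings such as "Yes", "No", or "Partial".
states = pd.read_excel("2021_state_of_cs_report.xlsx")  # assumed filename

policy_cols = [f"P{i}" for i in range(1, 10)]  # P1..P9

# Score 1 only for an unqualified "Yes"; anything else ("No", "Partial",
# blank) scores 0 under this study's rule.
for col in policy_cols:
    states[col] = (states[col].astype(str).str.strip() == "Yes").astype(int)

# Total number of policies implemented per state, used below.
states["NumPolicies"] = states[policy_cols].sum(axis=1)
```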
Correlation Heat Map between Policies and Success Metrics
Figure 1 below shows a correlation heat map between each policy and success metric. The number of policies implemented and the total number of students in a state are included “as policies” for reference.
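Assuming the `states` DataFrame from the sketch above, plus metric columns named by the chart identifiers and an assumed `NumStudents` column for the total students per state, a heat map like Figure 1 could be produced roughly as follows:

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Success-metric columns use the chart identifiers from the table above;
# "NumStudents" is an assumed column name for total students per state.
metric_cols = ["PctHSwFCS", "PctStudentsHSwCS", "Pct_InFCS", "Pct_InAP"]
predictor_cols = policy_cols + ["NumPolicies", "NumStudents"]

# Pearson correlation of each policy (plus the two reference columns)
# with each success metric.
corr = states[predictor_cols + metric_cols].corr().loc[predictor_cols, metric_cols]

sns.heatmap(corr, annot=True, fmt=".2f", cmap="RdBu_r", center=0)
plt.title("Policy vs. success-metric correlations")
plt.tight_layout()
plt.show()
```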
This correlation analysis provides the following insights:
- For all success metrics, the correlation with the total number of policies implemented is much higher than with any single policy implementation.
- As discussed in the previous post, the number of policies implemented correlates much more strongly with the success metrics at the Capacity and Access levels of the CAPE pyramid (% of HS offering CS, 0.64; % of students in an HS offering CS, 0.57) than with the metrics at the Participation and Experience levels (% of students enrolled in computer science, 0.36; % of students taking an AP CS exam, 0.34).
- For the first seven policies, the correlation with the Capacity/Access success metrics is much higher than with the Participation/Experience success metrics.
- P8 (Allow a computer science credit to satisfy a core graduation requirement) and P9 (Allow computer science to satisfy a higher education admission requirement) correlate slightly more strongly with the Participation/Experience metrics than with the Capacity/Access ones.
- While the number of students in a state negatively correlates with the percentage of HS offering CS, it positively correlates with the percentage of students taking AP.
Difference between Policy Haves and Have-Nots
Figure 2 above shows, for each success metric, the average value in states that have implemented a particular policy (in light green) vs. those that have not (in orange). The red line shows the average of the metric across all states. The percentage in the x-axis labels is the percentage of states with the policy; for example, 83% of states have implemented P2 (Define computer science and establish rigorous K–12 computer science standards).

While Figure 2 shows which policies perform strongly on an individual metric, it is difficult to visualize how well policies perform across success metrics. Figure 3 shows the PEP (Policy Existence Percentage) for each success metric, comparing states that have implemented a policy with those that have not. PEP is defined as:

PEP = (unweighted average of a metric in states that have implemented a policy) − (unweighted average of the metric in states that have not)
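Here is a minimal sketch of that calculation, reusing the `states` DataFrame and the hypothetical column names from the earlier snippets:

```python
# PEP for one policy/metric pair: the unweighted mean of the metric across
# states with the policy, minus the unweighted mean across states without it.
def pep(df, policy, metric):
    have = df.loc[df[policy] == 1, metric].mean()
    have_not = df.loc[df[policy] == 0, metric].mean()
    return have - have_not

# One row per policy, one column per success metric (as in Figure 3).
pep_table = pd.DataFrame(
    {m: [pep(states, p, m) for p in policy_cols] for m in metric_cols},
    index=policy_cols,
)
print(pep_table.round(1))
```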
Figure 3 shows that policies P1 to P7 have a similar PEP on the main benchmark, the Percentage of HS offering CS. In contrast, P8 (Allow a computer science credit to satisfy a core graduation requirement) and P9 (Allow computer science to satisfy a higher education admission requirement) have very low PEP. A similar pattern holds for the Percentage of Students in HS offering CS metric, but at a much lower PEP for all policies.

There are significant differences in the PEP of each policy on the Percentage of Students Enrolled in CS and the Percentage of Students taking an AP CS Exam. Further, the average PEP of each policy on these metrics is greater than the average PEP on the Percentage of HS offering CS metric.

To better visualize how the policies work across the various metrics, Figure 4 below drops the Percentage of Students in HS offering CS metric (its policy pattern is similar to that of the Percentage of HS offering CS metric, only at a much lower level) and uses conditional formatting in Excel, applied per metric, to highlight policies with high and low PEP.
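For readers working outside Excel, a rough pandas analogue of that per-metric conditional formatting might look like this, building on the `pep_table` sketch above:

```python
# Drop the Access metric (redundant with PctHSwFCS, per the text) and shade
# each remaining column independently, mimicking per-metric formatting.
figure4 = pep_table.drop(columns=["PctStudentsHSwCS"])
styled = figure4.style.background_gradient(cmap="Greens", axis=0)  # per-column scaling
styled.to_html("figure4_pep.html")  # hypothetical output path; requires jinja2
```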
Some observations from this visualization:
- P4 (Implement clear certification pathways for computer science teachers) shows high PEP across all three metrics (at different levels of the CAPE pyramid).
- P7 (Require that all high schools offer computer science) also shows high PEP across all three metrics but not as high as P4.
- P3 (Allocate funding for computer science teacher professional learning) shows solid PEP across all three metrics.
- P1 (Create a state plan for K–12 computer science) and P5 (Create preservice programs in computer science) show solid PEP in the Percent of HS offering CS (Capacity/Access) and the Percent of Students taking an AP CS Exam (Experience). But these policies have low PEP in the Percent of Students Enrolled in CS (Participation).
- P2 (Define computer science and establish rigorous K–12 computer science standards) and P6 (Establish computer science supervisor positions in education agencies) have solid PEP for the Percent of HS offering CS (Capacity/Access) and the Percent of Students Enrolled in CS (Participation). But these policies have low PEP for the Percent of Students taking an AP CS Exam (Experience).
- P8 (Allow a computer science credit to satisfy a core graduation requirement) and P9 (Allow computer science to satisfy a higher education admission requirement) have good PEP for Percent of Students Enrolled in CS (Participation). But these policies have very low PEP for the Percent of HS offering CS (Capacity/Access) and Percent of Students taking an AP CS Exam (Experience).
Please visit the CSEd Analytics page for the underlying data and reports behind this blog post, along with more nuanced information. The attached reports also show how your state compares to others on critical CSEd metrics. The next post in the #CSEdAnalytics series will introduce the per-state reports.