Webinar recap: CFSR 3 Measures as a Springboard for Continuous Quality Improvement
Back in May, Data Center Director Fred Wulczyn presented a webinar that detailed ACF’s proposed measures and methods for Round 3 of the Child and Family Services Review (CFSR). On July 1st, Lily Alpert continued that conversation with CFSR 3 Measures as a Springboard for Continuous Quality Improvement, a session focused on how states might apply the proposed CFSR 3 measures and methods to local CQI efforts.
A recap of the webinar is provided below. To watch a recording of the webinar, click here. To view the slides, click here. For a primer on the CQI process, click here.
MAIN TAKEAWAYS
- The proposed measures and methods for CFSR 3 have broad applicability to Continuous Quality Improvement efforts at the state and local levels.
- These techniques are not limited to the measures proposed for CFSR 3. States will, and should, have other outcomes they want to monitor over time and subject to CQI efforts.
- Structurally, the challenge for public agencies will be to streamline their CQI and reporting efforts in a way that capitalizes on the new alignment between the two.
WEBINAR RECAP
The proposed CFSR 3 measures improve on their predecessors, largely because they replace exit and point-in-time measures almost entirely with entry cohort measures, which provide a better foundation for measuring outcomes at the system level. Because they yield more actionable insights, the proposed measures are well suited to the CQI process.
The notice in the Federal Register regarding CFSR 3 also proposed two improvements to the methods for establishing relative state performance: (1) ACF proposes to compare state performance to the national average as opposed to the 75th percentile of performance; (2) ACF is considering risk-adjusting those comparisons based on child and/or state characteristics. Briefly, risk-adjustment makes state-to-state comparisons more equitable by controlling for factors known to influence state performance (e.g., poverty rate, foster care entry rate, case mix, etc.). (ACF’s final decisions regarding risk-adjustment are still pending as of this writing. For more information on risk-adjustment in general, click here.)
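To make the idea of risk-adjustment concrete, here is a minimal sketch of one common approach, indirect standardization with a logistic regression, applied to child-level entry cohort data. The file name, column names (state, reunified_12mo, age_at_entry, entry_rate, poverty_rate), and choice of covariates are hypothetical illustrations, not ACF's actual specification.

```python
# A minimal risk-adjustment sketch (indirect standardization), assuming a
# child-level entry cohort file with hypothetical columns:
#   state, reunified_12mo (0/1), age_at_entry, entry_rate, poverty_rate
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("entry_cohort.csv")  # hypothetical data source

# 1. Model the outcome as a function of factors known to influence performance
#    (the risk-adjustment covariates).
model = smf.logit(
    "reunified_12mo ~ age_at_entry + entry_rate + poverty_rate", data=df
).fit()

# 2. Each child's expected probability of reunification, given those factors.
df["expected"] = model.predict(df)

# 3. A state's risk-adjusted difference: its observed rate minus the rate its
#    case mix would predict.
by_state = df.groupby("state").agg(
    observed=("reunified_12mo", "mean"),
    expected=("expected", "mean"),
    n=("reunified_12mo", "size"),
)
by_state["adjusted_diff"] = by_state["observed"] - by_state["expected"]
```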
The graphic below shows what state-to-state comparisons would look like using a method similar to the one ACF has proposed. Imagine that the graph depicts state performance on the CFSR 3 indicator "percent of children reunified within 12 months." Each dot represents a state, and the horizontal zero line represents the national average. The dot's position shows how far the state's performance falls from the national average after risk adjustment. The bar around each dot is the confidence interval for that difference; it tells you whether the gap between the state's performance and the national average is statistically significant.
- If the bar crosses zero, the state’s rate of permanency within 12 months is not statistically different from the national average.
- If the bar falls entirely above the zero line, the state’s rate of permanency within 12 months is significantly higher than the national average.
- If the bar falls entirely below the zero line, the state’s rate of permanency within 12 months is significantly lower than the national average.
- (For more detail on interpreting this type of graph, click here.)
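Continuing the sketch above, these interpretation rules can be expressed directly: build a confidence interval around each state's risk-adjusted difference and check whether it crosses zero. The simple normal-approximation interval below is only illustrative; ACF's proposed method may compute the intervals differently (for example, with a multilevel model).

```python
import numpy as np

# Illustrative confidence intervals around each state's adjusted difference,
# using a simple normal approximation. Continues from `by_state` above.
se = np.sqrt(by_state["observed"] * (1 - by_state["observed"]) / by_state["n"])
by_state["lower"] = by_state["adjusted_diff"] - 1.96 * se
by_state["upper"] = by_state["adjusted_diff"] + 1.96 * se

def classify(row):
    # Zero is the national average after risk adjustment.
    if row["lower"] > 0:
        return "significantly above the national average"
    if row["upper"] < 0:
        return "significantly below the national average"
    return "not statistically different from the national average"

by_state["interpretation"] = by_state.apply(classify, axis=1)
```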
ACF proposes to use the analysis described above to determine which CFSR 3 measures will become part of states' Performance Improvement Plans (PIPs); states falling below the national average on a particular indicator would be required in their PIPs to improve on that outcome without compromising performance in areas where they are performing at or above average. The method can also be applied to CQI efforts at the state and local level, and it can be used to observe performance on indicators beyond those included in the federal review. Three possible applications are described below, followed by a sketch of how the same comparison generalizes to other units of analysis:
- Guiding efforts to improve upon CFSR 3 outcomes. States can use similar methods to examine how their counties perform on CFSR 3 outcomes relative to the state average. With that knowledge, a state could call on its counties to develop quality improvement plans using an approach similar to the federal PIP—that is, call on each county to improve the outcomes on which it falls below the state average without compromising performance in the areas where it performs at or above the state average. The same process could be applied to regions or other administrative subdivisions of the state agency.
- Targeting potential jurisdictions for Title IV-E waiver demonstration projects. As noted above, the utility of this method is not limited to CFSR 3 indicators. Consider "rate of placement in foster care," an indicator that is not included in the CFSR but is important to many states that plan to use a Title IV-E waiver to reduce the use of foster care. The waiver calls on states to be strategic about the populations in which they will invest capped funds. Knowing which parts of the state have admission rates above, at, or below the state average brings potential target subpopulations to the fore. Note that this method does not obviate the need for cost-benefit analyses; it does, however, provide an essential starting point for strategic planning.
- Monitoring performance of private service providers. To this point, we have discussed how this method can be used to understand the relative performance of counties, but the technique can be applied to any type of business unit, including private providers. In this case, the contracting agency may want to risk-adjust for provider-specific factors such as the level of need of children entering care. Knowing how each provider performs relative to the network average, the contracting agency can identify providers whose performance needs to improve, as well as explore whether outperforming providers have policies or practices that contribute to their effectiveness and could be applied elsewhere.
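All three applications reuse the same comparison, just with a different unit of analysis and, where appropriate, different adjustment covariates. The sketch below wraps the earlier steps in a reusable function; the unit and covariate names (county, provider, level_of_need) are hypothetical placeholders, not fields from any particular state's data system.

```python
# The same comparison as a reusable function, so it can be run against
# counties, regions, or private providers instead of states. All column and
# covariate names are hypothetical placeholders.
def compare_to_average(df, unit_col, outcome_col, covariates):
    """Risk-adjusted comparison of each unit to the overall average."""
    formula = f"{outcome_col} ~ " + " + ".join(covariates)
    fitted = smf.logit(formula, data=df).fit()
    scored = df.assign(expected=fitted.predict(df))
    out = scored.groupby(unit_col).agg(
        observed=(outcome_col, "mean"),
        expected=("expected", "mean"),
        n=(outcome_col, "size"),
    )
    out["adjusted_diff"] = out["observed"] - out["expected"]
    se = np.sqrt(out["observed"] * (1 - out["observed"]) / out["n"])
    out["lower"] = out["adjusted_diff"] - 1.96 * se
    out["upper"] = out["adjusted_diff"] + 1.96 * se
    return out

# County-level CQI: how do counties compare to the state average?
counties = compare_to_average(df, "county", "reunified_12mo",
                              ["age_at_entry", "poverty_rate"])

# Provider monitoring: adjust for the level of need of children served.
providers = compare_to_average(df, "provider", "reunified_12mo",
                               ["age_at_entry", "level_of_need"])
```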