History of Performance-Based Funding in Tennessee
As resources become scarcer, policy-makers continually struggle to justify funding state services that their constituents see as less "necessary" than items like healthcare, K-12 education, and infrastructure.
Policy-makers and the public alike have increasingly called for more information about the effective use of public funds (Field & Quizon, 2011). Higher education is no exception: in recent years it has been a frequent target of negative perceptions about its ability to produce acceptable retention rates, graduation rates, and student learning outcomes (Alexander, 2000; Schmidtlein, 1999). In times of increased austerity there is often a call for more scrutiny over how taxpayer funds are spent, and legislators heed the call from those they represent. Failing to address these concerns can shorten a legislator's tenure, so accountability measures are often put into place without confirmation of their effectiveness (Schmidtlein, 1999).
The 1990s spurred a call for accountability, with policy makers in 23 states instituting budget legislation that incorporated performance-based funding or budgeting elements (Shin, 2010). While this was a new endeavor for many states, Tennessee has had some form of performance-based funding (PBF) tied to its higher education budget allocations since 1979. To understand performance-based funding and the progression of this movement, I focus on Tennessee: the state has the longest history of PBF, many other states have modeled their programs on Tennessee's, and the changes in Tennessee's PBF programs capture the evolution of PBF in general. This paper explores the idea of PBF, the different versions of PBF in Tennessee over the years, and the move from "PBF 1.0" to "PBF 2.0" through the state's Complete College Tennessee Act.
Tennessee’s History of PBF
In 1979, Tennessee revolutionized state funding of colleges and universities by giving them an opportunity to earn additional funding by demonstrating that they could improve student learning outcomes. Prior to 1979, higher education institutions received state funding based on head count, the number of students enrolled at the institution (enrollment-based funding). Under Tennessee's new system, an institution could earn "up to 2% of the instructional component of its education and general budget" for obtaining accreditation for academic programs, testing students in their majors and in general education through standardized tests, surveying constituents to assess satisfaction with academic programs and student services, and conducting peer reviews of its academic programs (Banta et al., 1996, p. 23). All institutions that participated did so on a voluntary basis. The majority of funding was still distributed based on the number of students enrolled, and the new component was a "bonus" option for institutions that saw it as a worthwhile endeavor. All institutions in the Tennessee public system have participated in the program since 1979 (Banta et al., 1996).
Since its inception, the Tennessee PBF model has undergone a review every five years, for a total of six cycles of revisions. While the 1979 requirements entailed little additional work beyond the data institutions normally gathered, the standards implemented in 1983-1987 required much more effort in evaluating each undergraduate program offered at an institution. The 1983-1992 standards principally fine-tuned the scoring methods of the 1979-1982 cycle: changes included peer review of graduate programs with quantitative ratings from reviewers and decreased weight on accreditation and general education assessment, and institutions could earn credit both by improving performance over time and by exceeding an established norm. From 1993-1997, significant emphasis was placed on internal improvement, with standards encouraging institutions to set their own goals and measure progress as they saw fit; the number of standards in the process increased substantially, from 5 to 10 (Banta et al., 1996). From 1997-2000, there was a return to the emphasis on general education, along with new emphasis on job placement for community colleges and a new measure of performance on state strategic master plan outcomes. In the fifth cycle (2000-2005), assessment of transfer and articulation to four-year schools was added as an outcome, and Tennessee continued to tweak the general education performance standards, including a pilot evaluation area for which institutions could earn points. In the sixth cycle (2005-2010), a major shift from program assessment to student persistence occurred, with 10 points (out of 100) in the scoring matrix reassigned to student persistence.
Additionally, while state and institutional goals were still a factor, 5 points were reassigned to Standard 5, which provided incentives for institutions to incorporate performance-funding-related assessment into their Quality Enhancement Plans (QEPs) (Tennessee Higher Education Commission, 2010).
Effectiveness of PBF
Unfortunately, there is very little definitive research assessing the effects of Tennessee's program. In a survey of campus coordinators for PBF data collection conducted by Banta et al. (1996), Tennessee's performance-based funding system scored a "C" for overall effectiveness. Those surveyed suggested that the program had been more successful in developing and reporting state-level indicators, and using them to promote the state's higher education system, than in encouraging institutions to engage in their own local planning and assessment exercises. Despite this perspective, they also reported that institutions had undertaken assessment initiatives yielding data used to improve curricula, instruction, and student services, and that teaching and testing techniques had improved. Additionally, a majority of participants acknowledged that peer reviews, surveys of constituent groups (students, alumni, employers), and planned program improvement actions based on assessment findings are methods that improve student learning on campus.
Sanford and Hunter (2011) studied Tennessee's PBF program from 1995-2009. Their research showed that the retention and graduation rate standards had no correlation with changes in those performance measures over the 15-year period. They also noted that when the incentive was more than doubled in 2005 (from 2% to 5.45%), the larger incentive was not associated with increased retention or 6-year graduation rates. They emphasized that the funding levels were not sufficient to motivate institutional leaders to implement the substantial changes needed to produce the desired outcomes.
A study by Jung Cheol Shin (2010) of 467 universities across the country found that most of the variance in graduation rates can be explained by institutional characteristics rather than state actions. Shin suggests that policy makers should consider whether institutional performance is best influenced through accountability measures like PBF or by "focusing on facilitating the capabilities of the university." It is difficult for an institution to adopt a new form of accountability if it is not "well grounded in current institutional practices." Moreover, the amount of money states tied to institutional performance was less than 6% of the total state budget, which, as the Sanford and Hunter (2011) study also suggested, may not be enough to motivate institutions to make large changes. Shin also pointed out that graduation rate measures the quantity of performance and does not reflect the quality of institutional performance, a common critique of some PBF policies. This study brings to light many areas in which PBF can improve.
Performance-based funding is difficult to study. The available research has many limitations: too few multivariate studies; a tendency to leave out important players like community colleges; failure to account for variables such as how long a program has been in place, how the performance funding is allocated, and what other accountability funding exists in a state; limited control variables; and limited distinction in the literature between PBF 1.0 and PBF 2.0 models (Dougherty & Reddy, 2011). Research addressing these gaps is needed so that those pursuing PBF as a way to provide accountability in higher education can understand the most effective ways to construct well-built models.
Performance-Based Funding 2.0
In response to the underwhelming effects of the first attempts at PBF over the last 30 years, a new form of PBF has begun surfacing in state budgets. Referred to as "Performance-Based Funding 2.0," this re-energized form of PBF is characterized by embedding performance indicators into the base state funding formula rather than using them as a "bonus" for institutions. PBF 2.0 also ties a substantial amount of funding to these measures, versus the limited 2-6% of total budget monies seen in PBF 1.0 (Dougherty & Reddy, 2011). Dougherty and Reddy (2011) further suggest that PBF 2.0 models should include: improved performance indicators that more accurately reflect student success, elements that insulate the funding from the state revenue cycle, resistance to gaming of the system, reduction or elimination of negative impacts, lower compliance costs, and protection of academic standards along with safeguards against narrowing of the institutional mission. States that have begun to pursue this new model of PBF by including some of these elements include Ohio, Pennsylvania, Indiana, Washington, Louisiana, and Tennessee (Miao, 2012).
Despite this move toward PBF 2.0, the National Conference of State Legislatures (2013) lists an equal number of states whose performance-based funding still retains 1.0 model elements (Illinois, Michigan, Minnesota, New Mexico, Oklahoma, South Dakota). Missouri, Colorado, Virginia, and Arkansas are transitioning to performance funding with models that also match up more with the 1.0 models of the past. With 19 additional states in formal discussions on implementing PBF, it will be interesting to observe whether they follow the research trends toward 2.0 models or fall back on the 1.0 model.
While many applaud the move toward PBF models and the increased accountability they provide for higher education, some caution that this might not be the best solution. Those who argue against PBF contend that there is little empirical data supporting its effectiveness in creating positive change in higher education institutions; that it provides incentives for more restrictive admissions criteria, which would unfairly target vulnerable populations (Abdul-Alim, 2013; Kantrowitz, 2012); and that it narrows the institutional mission, may produce grade inflation, weakens academic standards, further diminishes faculty voice in academic governance, and provides opportunities to game the system (Dougherty & Reddy, 2011). But many of these elements are characteristics of the PBF 1.0 model that the new 2.0 model has addressed. PBF 2.0 has limited data because of its recent inception, but elements such as premiums for institutions' success with low-income or adult students, and measures purposefully selected to fit each institution's mission, are built in to combat previous weaknesses.
As part of the Complete College Tennessee Act (CCTA), a new model developed by the Tennessee Higher Education Commission (THEC) was instituted in 2010 (Whissemore, 2012). Whereas in the past Tennessee based about 60% of its funding on enrollment, the new model does not include enrollment in any metric; the state has made the bold move of providing 100% of its higher education funding in the form of performance-based funding. In addition, the performance bonus that previously existed in Tennessee is still available above the base funding, so an institution can earn an additional 5.45% of its outcomes model recommendation (Malbroux, 2011). To ease any negative effects of the transition, a "phase-in" factor will account for the difference between an institution's would-be enrollment-based funding and its new outcomes-based funding until FY 2014-2015 (Malbroux, 2011).
CCTA incorporates important elements, including productivity metrics and institutional missions, into the funding formula. According to THEC, the outcomes measured in the new formula include, but are not limited to, "degree production, research funding and graduation rates at universities, and student remediation, job placements, student transfer and associates degrees at community colleges." These outcomes are weighted differently for each institution to reflect its mission and Carnegie classification as well as the importance of each outcome (Tennessee Higher Education Commission, 2010). Each institution in Tennessee has been assigned a mission statement on which its outcomes are based to determine performance. This is aimed at reducing duplication of services and programs, which happened quite a bit under the enrollment-based funding model. With this new model, Tennessee hopes to increase its number of college graduates by 3.5% annually and yield 210,000 more bachelor's and associate's degrees by 2025 (Jones, 2011).
While the CCTA enjoyed bipartisan support in the state legislature, higher education administrators were quick to point out areas of concern. The system pits institutions against each other in a battle for resources, since each year state funding goes back to square one with no institution guaranteed any minimal level of appropriations. It also forces institutions to accept imposed mission statements: the state has clearly articulated the missions of community colleges, state schools, and universities, which limits an institution's ability to, for example, add a medical program not included in its mission. In addition, as with any model, there are ways that institutions can game the system. Some outcomes are based on student progression in earning 24, 48, and 72 credit hours as well as earning degrees (National Conference of State Legislatures, 2013). Institutions may start targeting their enrollment toward students who are less "risky" to enroll, namely high-performing, high-socioeconomic-status, second-generation students.
As with any new model, it will take time to see whether this iteration of Tennessee's performance-based funding makes significant advances in improving the state's higher education institutions. Dr. Richard Rhoda, executive director of THEC, maintains that CCTA has already affected the culture of Tennessee higher education and that administrators' focus has started to shift from enrollment management to student success initiatives. The University of Tennessee and the University of Memphis have strengthened their student support, adding student success programs and academic support centers (Locker, 2012). It appears that this may finally be, at minimum, a step in the right direction for Tennessee higher education.
Reference List
Alexander, F.K. (2000). The changing face of accountability: Monitoring and assessing institutional performance in higher education. The Journal of Higher Education, 71(4), 411-431.
Banta, T.W., Rudolph, L.B., Van Dyke, J., & Fisher, H.S. (1996). Performance funding comes of age in Tennessee. The Journal of Higher Education, 67(1), 23-45.
Dougherty, K.J. & Reddy, V. (2011). The impacts of state performance funding systems on higher education institutions: Research literature review and policy recommendations. Working paper 37. Community College Research Center. http://ccrc.tc.columbia.edu/publications/impacts-state-performance-funding.html
Field, K., & Quizon, D. (January 4, 2011). Critic of Obama policies will lead higher-education panel in U.S. House. Chronicle of Higher Education. Retrieved from http://chronicle.com.mutex.gmu.edu/article/Critic-of-Obama-Policies-Will/125802/
Jones, R.A. (May, 2011). Outcome Funding: Tennessee experiments with a performance-based approach to college appropriations. Retrieved from http://www.highereducation.org/crosstalk/ct0511/news0511-tenn.shtml
Locker, R. (November 11, 2012). Tennessee’s outcomes-based college funding model already “changing the way our postsecondary institutions do business,” says Haslam. Retrieved from http://www.politifact.com/tennessee/statements/2012/nov/11/bill-haslam/tennessees-outcomes-based-college-funding-model-al/
Malbroux, L. (November 2, 2011). Tennessee Outcomes Based Funding Formula (Fact Sheet). Retrieved from http://www.collegeproductivity.org/blogs/tennessee-outcomes-based-funding-formula-fact-sheet
Miao, K. (August, 2012). Performance-based funding of higher education: A detailed look at best practices in 6 states. Center for American Progress, Policy Brief. Retrieved from: http://www.americanprogress.org/issues/2012/08/pdf/performance_funding.pdf
National Conference of State Legislatures. (February, 2013). Performance funding for higher education. Retrieved from http://www.ncsl.org/issues-research/educ/performance-funding.aspx
Sanford, T., & Hunter, J. M. (2011). Impact of performance-funding on retention and graduation rates. Education Policy Analysis Archives, 19(33). Retrieved May 26, 2013, from http://epaa.asu.edu/ojs/article/view/949
Shin, J.C. (2010). Impacts of performance-based accountability on institutional performance in the U.S. Higher Education, 60(1), 47-68.
Tennessee Higher Education Commission. (2010). Academic Affairs: Performance Funding-Historical. Retrieved from http://www.state.tn.us/thec/Divisions/AcademicAffairs/aa_main.html
Tennessee Higher Education Commission. (2010). Complete College Tennessee Act of 2010. Retrieved from http://tn.gov/thec/complete_college_tn/ccta_summary.html
Whissemore, T. (June 26, 2012). The ups and downs of performance funding. Community College Times. Retrieved from http://www.communitycollegetimes.com/Pages/Funding/The-ups-and-downs-of-performance-funding.aspx