Resources
Tennessee's Performance-Based Funding Model
Tennessee Higher Education Commission's Website: The THEC was tasked with constructing Tennessee's new PBF model. Be sure to check out the Outcomes Formula that institutions can use to estimate what their funding would be, as well as the Outcomes Formula Presentation that thoroughly explains Tennessee's model: http://www.state.tn.us/thec/
Tennessee's PBF formula narrative: http://www.state.tn.us/thec/Divisions/Fiscal/funding_formula/1-Outcomes%20Based%20Formula%20Narrative%20-%20for%20website.pdf
Complete College Tennessee Act (CCTA), the state legislation that established the new PBF 2.0 model and outlines how Tennessee is doing its part to meet national college completion goals: http://www.tn.gov/thec/complete_college_tn/ccta_summary.html
Other States' Performance-Based Funding Models
National Conference of State Legislatures: NCSL has some basic information on trends in PBF in the US. The site also has an interactive map that shows where states are in the process of implementing some form of PBF: http://www.ncsl.org/issues-research/educ/performance-funding.aspx
Illinois' PBF: http://www.ibhe.state.il.us/PerformanceFunding/default.htm
Indiana's PBF: http://www.in.gov/che/files/HCM_Strategies_Study_Performance_Funding_8-22-11_B.pdf
Louisiana's Grad Act: http://agb.org/ingram/policy/louisianas-grad-act-20
Michigan's PBF-Policy Brief: http://aftmichigan.org/files/performancebasedfunding12.pdf
Ohio's PBF: https://www.ohiohighered.org/press/new-performance-based-model-higher-education-ohio
Pennsylvania's PBF (Case Study): http://www.collegeproductivity.org/blogs/pennsylvania%E2%80%99s-performance-funding-system-march-2011-case-study
South Dakota's PBF: https://sdbor.edu/theboard/agenda/2012/march/21.pdf
Washington State's PBF for Community Colleges: http://www.sbctc.edu/college/e_studentachievement.aspx
College Completion Agenda Resources
College Completion Agenda: http://completionagenda.collegeboard.org/about-agenda
To learn more about the Lumina Foundation for Education's goal of 60 percent of Americans holding a postsecondary credential by 2025, click here: http://www.luminafoundation.org/
To learn more about the President's goal to have the most college graduates in the world by 2020 click here: http://www.whitehouse.gov/issues/education/higher-education
Annotated Bibliography
Abdul-Alim, J. (2013). The price of performance. Diverse: Issues in Higher Education, 29(26), 14-15.
Abdul-Alim’s article does a great job of delineating between performance-based funding 1.0 and 2.0 and of focusing on the heavy emphasis on retention and graduation rates found in the new Tennessee performance-based funding model for 2013-2014. The article points out a concern shared by many: that the measures would encourage institutions to be more selective and move away from a more accessible admissions model. The new 2.0 model in Tennessee hopes to remove this unintended consequence by including a measure for student access and for support efforts serving populations important to the institution’s mission.
In addition to the attention paid to the risk that institutions will favor “low-risk” students under Tennessee’s performance-based funding 2.0 endeavor, Abdul-Alim also brings to the forefront the limited funding that Tennessee plans to appropriate to the program for the upcoming fiscal year. Lack of funding is a major criticism of performance-based funding programs, and it is an important point to raise in the discussion.
Alexander, F.K. (2000). The changing face of accountability: Monitoring and assessing institutional performance in higher education. The Journal of Higher Education, 71(4), 411-431.
In this argument, Alexander contends that two major trends have led to increased state interest in higher education accountability measures: the “massification” of higher education and limited public expenditures for higher education. College is now seen as a way to increase human capital and help states and countries flourish in the new knowledge-based economy. In his examination of the US, the UK, and other Organisation for Economic Co-operation and Development (OECD) countries, these trends have led to calls for more judicious use of public funds, and policy-makers are heeding their constituents. This discussion of performance-based funding traces the source of increased oversight in the US and how it was implemented in the UK as well as other OECD countries. The study provides a macro view of performance-based funding and offers helpful international examples for further understanding the issue.
Banta, T.W., Rudolph, L.B., Van Dyke, J., & Fisher, H.S. (1996). Performance funding comes of age in Tennessee. The Journal of Higher Education, 67(1), 23-45.
The authors of this article examined the third five-year cycle of Tennessee’s performance-based funding program to determine why the program has continued for so long, what its strengths and weaknesses are, and which indicators are strongest in determining the success of a university or college. A survey was mailed to the campus coordinators in charge of compiling the annual performance-funding reports. Despite the limited survey sample, the responses give the reader a glimpse into the real strengths and limitations of performance-based funding in Tennessee in 1993. The researchers do an excellent job of identifying the positives and negatives of each indicator, as reported by those who must work within the program on a daily basis. While administrators are hesitant to call the program a success, they do concede that it motivated their institutions to make changes to support student learning.
Burke, J.C. & Modarresi, S. (2000). To keep or not to keep performance funding. The Journal of Higher Education, 71(4), 432-453.
Burke and Modarresi tested 11 tentative assumptions about what makes a performance-based funding program stable by surveying state- and institution-level policy-makers in six states with these programs in place. Four states dropped their programs and became the “unstable” group, while Tennessee and Missouri became the “stable” group. The study identified 10 characteristics that stable programs display more than unstable ones. The authors showed that stable programs exhibit more input from boards and officers of higher education coordinating agencies and adopted their programs voluntarily. Indicators used in Missouri and Tennessee emphasized quality, whereas indicators in the unstable group emphasized efficiency. Burke and Modarresi’s 10 signals of a more stable program could help policymakers recognize when their programs are on track or when they are in danger of becoming defunct.
Frolich, N. (2010). Funding systems for higher education and their impacts on institutional strategies and academia. International Journal of Educational Management, 24(1), 7-21.
This study analyzes multiple stakeholders’ opinions and perceptions of the funding system and its consequences for the development of the higher education system in three countries: Portugal, Denmark, and Norway. Denmark has enrollment-based funding; Norway has some output-based funding of research and education (35% of the total); and Portugal is moving toward output-based funding. Frolich finds no clear differences in the perceived strengths, weaknesses, and impacts of input-based versus output-based funding. The patterns observed in these countries show movement toward performance-based funding, increased national control over objectives and priorities, use of assessments, and greater focus on efficiency, quality, and competitive funding, all trends that can also be observed in the United States.
This study echoes the findings of many other researchers on the topic of performance-based funding. Frolich finds that it has little to no impact on the performance of higher education institutions, yet governing agents continue to implement these programs throughout the world. Performance-based funding 1.0 models seem to remain in use in response to calls for more accountability in higher education globally.
Hoyt, J. E. (2001). Performance funding in higher education: The effects of student motivation on the use of outcomes tests to measure institutional effectiveness. Research in Higher Education, 42(1), 71-85.
The researcher in this study obtained data from 1,633 students at a Utah community college who took the Collegiate Assessment of Academic Proficiency (CAAP), in order to evaluate testing as a reliable method for assessing institutions under performance-based funding. Hoyt found that a significant number of students do not put forth their best effort, affecting their scores on the test. This puts institutions’ funding at risk if the test were used as a standard. Hoyt points out that students have limited motivation to do well on the tests and that administrators have little control over whether their students take the test seriously. This is an important study for policy-makers to take note of, as it provides strong evidence for excluding this type of measure from performance-based funding models for higher education institutions. While Tennessee’s previous models incorporated standardized testing, the current model does not include this type of indicator at all.
Layzell, D. T. (1999). Linking performance to funding outcomes at the state level for public institutions of higher education: Past, present, future. Research in Higher Education, 40(2), 233-246.
In his review of performance funding initiatives, Layzell brings to the forefront states’ past experiences with these performance measures. He identifies key limitations and pitfalls of such systems, including data limitations, too many indicators, no policy framework for performance indicators, quantitative versus qualitative measures, confusion of inputs, policies, and outcomes, and lack of “buy-in” up front. Layzell also describes difficulties in implementing performance-based funding initiatives and offers his own suggestions for developing successful programs. While the review gives the reader a general view of performance-based funding, a more in-depth look at how performance-based funding has affected institutions’ performance would have strengthened it. Regardless, one is left with clear, helpful advice on how to construct a program and some pitfalls to avoid.
Liefner, I. (2003). Funding, resource allocation, and performance in higher education systems. Higher Education, 46, 469-489.
Liefner’s research involved case studies of higher education institutions in the US, Switzerland, the Netherlands, and Great Britain, with interviews of 53 professors. He explores resource allocation by looking at how resources are allocated internally, how performance-based budgeting affects individual faculty behavior, how a nation’s tradition of funding influences the applicability of performance-based methods of budgeting, and whether the method of funding affects the long-term success of a university. In identifying the most important factor for a university’s long-term success, 90% of the faculty chose quality of academics (chosen from faculty qualifications, students’ ability, university culture, form of resource allocation, and “other”). Interviewees said that the form of resource allocation was less important and had minimal direct effects on institutions. Because of the limited effects of funding type on institutions, Liefner proposes allowing university administrators to define basic goals and decide how to fulfill their mission. Performance funding, according to those studied, would raise activity among the less motivated but would not produce better results.
This study incorporates faculty viewpoints on how performance-based funding would affect their work, which matters because faculty play such an important role in higher education and can make a direct impact on student success. The author asserts that models that do not incorporate faculty concerns may face difficulties and may produce unintended side effects, such as faculty becoming risk-averse in their research pursuits, preventing new and provocative ideas from being explored.
Schmidtlein, F. A. (1999). Assumptions underlying performance-based budgeting. Tertiary Education and Management, 5(2), 157-172. Retrieved from http://search.proquest.com/docview/212146807?accountid=14541
This article takes an economist’s view of the performance-based budgeting movement in the U.S. in the 1990s. Schmidtlein reviewed governments’ techniques for budgeting higher education funding and points out that a number of previous budget reform efforts rest on many of the same assumptions as performance-based funding, and that governments continue to use these as well despite their proven ineffectiveness. He also notes that performance-based funding caters to the current era: one of decreased resources and calls for greater accountability in higher education. He identifies several unrealistic assumptions underlying performance-based funding that could lead to its eventual abandonment, like the reforms that came before it: that policy budgets are stable; that complex decisions on budget trade-offs can be made at governmental levels on the basis of data; that institutions operate as bureaucracies; that resources can be linked to outcomes; that outcomes are identifiable and can be agreed upon; that accountability can be achieved through budget policies; and that current practices create incentives to enroll unqualified students.
This study provides a refreshingly distinct view: that of an economist looking at performance-based funding as just one in a long line of budgeting ideas put into practice to try to push higher education toward increased outputs. His arguments are sound in that the assumptions he outlines can still be observed today, which should concern the approach’s supporters.
Shin, J.C. (2010). Impacts of performance-based accountability on institutional performance in the U.S. Higher Education, 60(1), 47-68.
In this study, Shin evaluated the effects of performance-based funding initiatives on teaching and research performance by measuring graduation rates and external research funds. Graduation rates were examined at 467 institutions, and external research funding at 123. Most of the variance in the two measures was attributed to institutional factors rather than performance-based funding. Although the two indicators capture only the quantity rather than the quality of research and teaching, the study’s large data source strengthens its core finding that this type of accountability was not effective in changing performance on graduation rates or research funding. The study is further strengthened by the researcher’s attention to important control variables, including the length of time the accountability measures were in place and the differences between performance funding and performance budgeting. In his conclusion, Shin notes that the ineffectiveness of these programs may be due to a lack of resources to implement change, and that institutional flexibility in policies is essential to produce change.