American Council on Education (2012). National and international projects on accountability and higher education outcomes.
 Aneshensel, C. S. (2002). Theory-based data analysis for the social sciences. Thousand Oaks, CA: Pine Forge Press.
Bamberger, M., Rugh, J., Church, M., & Fort, L. (2004). Shoestring evaluation: Designing impact evaluations under budget, time and data constraints. American Journal of Evaluation, 25, 5-37.
 Brass, C. T., Nunez-Neto, B., & Williams, E. D. (2006). Congress and program evaluation: An overview of randomized control trials (RCTs) and related issues. URL (last checked 24 October 2008).
 Burke, J. (2005). Achieving accountability in higher education: Balancing public, academic, and market demands. San Francisco: Jossey-Bass.
 Carifio, J., & Perla, R. (2009). A critique of the theoretical and empirical literature on the use of diagrams, graphs and other visual aids in the learning of scientific-technical content from expository texts and instruction. Interchange, 41, 403-436.
 Coryn, C. L. S. (2007). The “holy trinity” of methodological rigor: A skeptical view. Journal of Multidisciplinary Evaluation, 4, 26-31.
 Deming, W. E. (1986). Out of the crisis. Cambridge, MA: Center for Advanced Engineering Study, Massachusetts Institute of Technology.
Denzin, N., & Lincoln, Y. (2005). The Sage handbook of qualitative research. Thousand Oaks, CA: Sage.
Dlugacz, Y. (2006). Measuring health care: Using quality data for operational, financial and clinical improvement. San Francisco, CA: Jossey-Bass.
 Elton, L. (1988). Accountability in higher education: The danger of unintended consequences. Higher Education, 17, 377-390.
 English, F. W., & Hill, J. C. (1994). Total quality education: Transforming schools into learning places. Thousand Oaks, CA: Corwin Press.
Figlio, D. (2011). Intended and unintended consequences of school accountability. http://www.youtube.com/watch?v=e3aKEuctqy8
Glass, G. (2000). Meta-analysis at 25. http://glass.ed.asu.edu/gene/papers/meta25.html (last checked 15 January 2007).
 Godfray, H. (2002). Challenges for taxonomy. Nature, 417, 17-19.
 Green, J., Camilli, G., & Elmore, P. (2006). Handbook of complementary methods in educational research. Mahwah, NJ: Erlbaum.
 Harman, G. (1994). Australian higher education administration and quality assurance movement. Journal for Higher Education Management, 9, 25-45.
Kenney, C. (2008). The best practice: How the new quality movement is transforming medicine. Philadelphia, PA: Perseus Book Group.
 Kleining, G. (1982). An outline for the methodology of qualitative social research. URL (last checked 22 October 2008).
Lederman, D. (2009). Defining accountability. Inside Higher Ed. http://www.insidehighered.com/news/2009/11/18/aei
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Thousand Oaks, CA: Sage.
 London Times (2012). World university rankings.
Mertens, D. (2010). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed-methods approaches. Thousand Oaks, CA: Sage.
Mets, T. (2011). Accountability in higher education: A comprehensive analytical framework. Theory and Research in Education, 9, 41-58.
Mezzich, J. E. (1980). Taxonomy and behavioral science: Comparative performance of grouping methods. New York: Academic Press.
Morley, R. (2012). R. Morley Incorporated.
Mulligan, R. (2012). The Deming University.
O’Rand, A., & Krecker, M. (1990). Concepts of the life cycle: Their history, meanings, and uses in the social sciences. Annual Review of Sociology, 16, 241-262.
 Pawson, R. (2006). Evidence-based policy: A realistic perspective. Thousand Oaks, CA: Sage.
 Pawson, R., & Tilley, N. (2008). Realistic evaluation. Thousand Oaks, CA: Sage.
Perla, R., & Carifio, J. (2009). Toward a general and unified view of educational research and educational evaluation: Bridging philosophy and methodology. Journal of Multidisciplinary Evaluation, 5, 38-55.
Perla, R., & Carifio, J. (2011). Theory creation, modification, and testing: An information-processing model and theory of the anticipated and unanticipated consequences of research and development. Journal of Multidisciplinary Evaluation, 7, 84-110.
 Phillips, F. (2005). The contested nature of empirical research (and why philosophy of education offers little help). Journal of Philosophy of Education, 39, 577-597.
 Schick, T. (2000). Readings in the philosophy of science: From positivism to postmodernism. Mountain View, CA: Mayfield.
Scriven, M. (2010a). Rethinking evaluation methodology. Journal of Multidisciplinary Evaluation, 6, 1-2.
 Scriven, M. (2010b). Contemporary thinking about causation in evaluation: A dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation, 31, 105-117.
 Scriven, M. (2012). Evaluating evaluations: A meta-evaluation checklist. http://michaelscriven.info/images/EVALUATING_EVALUATIONS_8.16.11.pdf
 Shavelson, R. (2010). Accountability in higher education: Déjà vu all over again.
Sloane, F. (2008). Through the looking glass: Experiments, quasi-experiments and the medical model. Educational Researcher, 37, 41-46.
Stake, R. (2003). Standards-based and responsive evaluation. Thousand Oaks, CA: Sage.
 Stake, R. (2010). Qualitative research: Studying how things work. New York: Guilford Press.
 State Higher Education Executive Officers (2012). National commission on accountability in higher education.
Stufflebeam, D. (2001). Evaluation models. New Directions for Evaluation, 89, 7-98.
 Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models and applications. San Francisco, CA: Jossey-Bass.
 Suppe, F. (1974). The structure of scientific theories. Urbana: University of Illinois Press.
 US News (2012). Best colleges and universities.
 van Thiel, S. & Leeuw, F. (2002). The performance paradox in the public sector. Public Performance & Management Review, 25, 267-281.
Yin, R. (2008). Case study research: Design and methods (Applied Social Research Methods). Thousand Oaks, CA: Sage.