Quality indicators for evaluating distance education programs at community colleges
Author(s): Hirner, Leo J., 1962-
Contributor(s): Kochtanek, Thomas R.
Abstract: The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (also included in research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file.
Title from title screen of research.pdf file (viewed on June 8, 2009)
Includes bibliographical references.
Thesis (Ph.D.)--University of Missouri--Columbia, 2008.
Dissertations, Academic -- University of Missouri--Columbia -- Information science and learning technologies.
The continued rapid growth of online courses and programs in higher education has raised concerns about support services, learning resources, and the effectiveness of instruction, as well as how institutions monitor the quality of online programs. These concerns have prompted questions about how participants perceive online learning, about the methodology of the body of research on online programs, and about the need for a process by which programs and institutions could be compared by academics or prospective students (Phipps & Merisotis, 1999). Unfortunately, these concerns persist (Hannafin, Oliver, Hill, Glazer, & Sharma, 2003; Sherlock & Pike, 2004). The need for a process by which online programs may be evaluated and compared provided the impetus for this study, whose goals were to identify quality indicators specific to community college online programs and to determine stakeholders' perceived importance of those indicators. A literature review identified common standards and best practices for online courses and programs developed by accrediting organizations and policy groups. The terms best practices, criteria, and standards are used interchangeably in the literature when discussing recommendations regarding the practices and policies institutions should adopt for distance learning programs (Twigg, 1999a). Synthesizing these sources yielded five categories: institutional support, curriculum and instruction, faculty support, student support, and evaluation and assessment; a case was made for adding technical support as a sixth category. The items identified through the literature review guided the development of a Delphi study to identify potential indicators.
The results of the Delphi study were then used to create a three-part Stakeholder Survey designed to collect input on the perceived importance of each potential indicator using the magnitude estimation technique and to validate the Delphi results. The survey was distributed to students, faculty, technical support staff, and program administrators participating in online courses offered by a community college system in the Midwest. Participants could also recommend indicators not included in the survey, and demographic data were collected. To refine the results, a final group of distance learning experts, identified through their scholarly research and professional activity, was asked to review the results of the Delphi study and classify each item as a factor, an indicator, or other according to definitions provided. The results of this study identify data that an institution might collect when measuring the effectiveness of its online programs and services. Both the factors and the indicators represent parameters that may support the examination of how an institution supports its programs, or how programs compare across institutions. What these factors and indicators do not address is how an institution uses the data it collects on its programs.