An Evaluation Framework and Instrument for Evaluating e-Assessment Tools
Author(s)
Singh, Upasana Gitanjali; University of KwaZulu-Natal
de Villiers, Mary Ruth; School of Computing, University of South Africa, Johannesburg, South Africa
Keywords
e-Learning; e-Assessment; action research; evaluation criteria; evaluation framework; evaluation instrument; multiple choice questions; MCQs
Online Access
http://www.irrodl.org/index.php/irrodl/article/view/2804
Abstract
e-Assessment, in the form of tools and systems that deliver and administer multiple choice questions (MCQs), is used increasingly, raising the need for evaluation and validation of such systems. This research uses literature and a series of six empirical action research studies to develop an evaluation framework of categories and criteria called SEAT (Selecting and Evaluating e-Assessment Tools). SEAT was converted to an interactive electronic instrument, e-SEAT, to assist academics in making informed choices when selecting MCQ systems for adoption or evaluating existing ones.
Date
2017-09-25
Type
info:eu-repo/semantics/article
Identifier
oai:www.irrodl.org:article/2804
http://www.irrodl.org/index.php/irrodl/article/view/2804
10.19173/irrodl.v18i6.2804
Copyright/License
Copyright (c) 2017 Upasana Gitanjali Singh, Mary Ruth de Villiers
Related items
Showing items related by title, author, creator and subject.
-
Monitoring and Evaluation Capacity Development in Africa: Selected Proceedings, Johannesburg, 25-29 September 2000
Development Bank of Southern Africa; World Bank; African Development Bank (Johannesburg, 2000)
The importance of the monitoring and evaluation (M&E) function within public administration has been magnified by the growing voice of civil society, which has brought the issues of good governance and more effective public administration to the fore. The global trend towards more accountable, responsive, and efficient government has bolstered the appeal for M&E capacity development, which has been the central focus of efforts to improve governance in the context of a comprehensive development framework. Evaluation has become increasingly important in Africa owing to stagnant and negative economic growth rates, together with concerns about governance and doubts about the efficacy of development assistance. These are selected proceedings from the seminar and workshop on "Monitoring and Evaluation Capacity Development for Africa," held as a follow-up to the regional seminar to foster networking among M&E practitioners and to share knowledge on M&E in the context of improved governance, accountability, and effective development delivery and results. In addressing monitoring and evaluation and the development challenge in Africa, the selected topics in Part I range from the policy challenge as viewed from the African Development Bank perspective, through new dimensions of poverty-focused evaluation within a comprehensive development framework, to key challenges for M&E practice in Africa. Part II offers an overview of evaluation capacity development in selected African states and its role in rebuilding demand and infrastructure for M&E. In addressing evaluation capacity development through new methodologies, Part III examines how to focus the M&E of development programs on changes in partners and the implications of decentralized delivery for national M&E, while Part IV reviews African sector experiences through case studies and their implications for M&E. Finally, Parts V, VI, and VII address how to develop national evaluation associations and opportunities for international cooperation; the future as seen through National Action Plans for 2001; and the way forward.
-
Evaluation of Government Performance and Public Policies in Spain
Zapico-Goni, Eduardo; Feinstein, Osvaldo (Washington, DC: World Bank, 2010-05)
This paper covers selective aspects of Spain's experience in evaluating government performance and public policies. Rather than a cohesive evaluation system, there is a constellation of organizations with evaluation mandates and/or practices that are not interrelated. These organizations and their respective practices have been evolving without coordination over the past three decades. An evaluation culture is slowly emerging, amid the different conceptual approaches used by the different organizations that manage and/or conduct evaluations. Evaluation activity has been taking place in Spain for years, with a marked acceleration and qualitative shift since 2005. Despite Spain's standing as both an Organization for Economic Co-operation and Development (OECD) country and a European Union (EU) country, it still has not developed a consolidated evaluation system, which points to how long-term and complex the task of institutionalizing an evaluation system is. Finally, the paper lists several website addresses where readers can obtain additional information on the aspects of the paper that interest them most and follow the Spanish experience as it unfolds.
-
Evaluation as a Powerful Practices in Digital Learning Processes
Sørensen, Birgitte Holm; Levinsen, Karin (2015)
The present paper is based on two empirical research studies. The Netbook 1:1 project (2009–2012), funded by the municipality of Gentofte and Microsoft Denmark, is complete, while Students' digital production and students as learning designers (2013–2015), funded by the Danish Ministry of Education, is ongoing. Both projects concern primary and lower secondary school and focus on learning design frameworks that involve students' agency and participation in digital production in different subjects and cross-disciplinary projects. Within these teacher-designed frameworks, the students act as learning designers of learning objects aimed at other students. Netbook 1:1 has shown that digital and multimodal production particularly facilitates student learning processes and improves the quality of student learning results when executed within a teacher-designed framework that provides space for, and empowers, students' agency as learning designers. Moreover, the positive impact increases when students as learning designers participate in formative evaluation practices. Traditionally, the Danish school has worked hard to teach students to verbalise their own academic competencies. However, as our everyday environment becomes increasingly complex with digital and multimodal technologies, formative evaluation as a learning practice becomes central, requiring students to develop a digital and multimodal literacy beyond the traditional, language-centred type. In order to clarify these practices, we address the various understandings of evaluation and assessment that may blur our arguments. Students' digital production and students as learning designers is a large-scale project that follows up on the findings of Netbook 1:1. It experiments further with various evaluation practices in a digitalised learning environment, focusing on different phases of the learning process and including feed-forward and feedback processes. Evaluation as a learning practice in a digitalised learning context focuses on students as actors, addressing their self-reflections, their responses to feedback and feed-forward processes from peers, and their responses to feedback and feed-forward processes from teachers. We find that, apart from teacher-initiated and planned evaluations, teachers find it useful to initiate ad-hoc evaluations in order to capture interesting aspects on the fly. At the same time, we see students initiating ad-hoc peer evaluations and making appointments to swap their work for peer evaluation.