
Identifying Sources of Difference in Reliability in Content Analysis

  • CSV
  • RefMan
  • EndNote
  • BibTex
  • RefWorks
Author(s)
Murphy, Elizabeth
Ciszewska-Carr, Justyna

URI
http://hdl.handle.net/20.500.12424/44727
Online Access
http://www.irrodl.org/index.php/irrodl/article/view/233
Abstract
This paper reports on a case study which identifies and illustrates sources of difference in agreement in relation to reliability in a context of quantitative content analysis of a transcript of an online asynchronous discussion (OAD). Transcripts of 10 students in a month-long online asynchronous discussion were coded by two coders using an instrument with two categories, five processes, and 19 indicators of Problem Formulation and Resolution (PFR). Sources of difference were identified in relation to: coders; tasks; and students. Reliability values were calculated at the levels of categories, processes, and indicators. At the most detailed level of coding on the basis of the indicator, findings revealed that the overall level of reliability between coders was .591 when measured with Cohen’s kappa. The difference between tasks at the same level ranged from .349 to .664, and the difference between participants ranged from .390 to .907. Implications for training and research are discussed.
Keywords
content analysis; online discussions; reliability; Cohen's kappa; sources of difference; coding
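
The reliability values reported in the abstract are Cohen's kappa coefficients. As an illustration only, and not part of the original item record, the following sketch shows how kappa for two coders is computed from observed agreement and chance-expected agreement; the indicator labels and codings below are hypothetical stand-ins for the PFR instrument.

# Minimal sketch of Cohen's kappa for two coders (illustrative only).
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same coding units."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of units both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance-expected agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten message units; the labels stand in for PFR indicators.
coder_1 = ["define", "define", "explore", "resolve", "resolve",
           "explore", "define", "resolve", "explore", "define"]
coder_2 = ["define", "explore", "explore", "resolve", "define",
           "explore", "define", "resolve", "explore", "define"]
print(round(cohens_kappa(coder_1, coder_2), 3))

A value near the reported .591 means the two coders agreed clearly more often than chance but far from perfectly, which is what motivates the paper's breakdown of differences by coder, task, and student.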
Date
2005-07-01
Type
info:eu-repo/semantics/article
Identifier
oai:www.irrodl.org:article/233
http://www.irrodl.org/index.php/irrodl/article/view/233
Collections
International Review of Research in Open and Distributed Learning


Export search results

The export option allows you to export the current search results of the entered query to a file. Several formats are available for download. To export the items, click the button for the preferred download format.

By default, clicking an export button downloads up to the maximum number of items allowed.

To select a subset of the search results, click the "Selective Export" button and select the items you want to export. The number of items that can be exported at once is subject to the same limit as a full export.

After making a selection, click one of the export format buttons. The number of items that will be exported is indicated in the bubble next to the export format.