Multimodality and reading: The construction of meaning through image-text interaction
Contributor(s): University of New England
Abstract

The reconceptualization of literacy and literacy pedagogy has increasingly been advocated as a research imperative, given the increasingly multimodal nature of paper and digital media texts (Chandler-Olcott and Mahar, 2003; Hull and Nelson, 2005; Kamil et al., 2000; Lemke, 2006; Leu et al., 2004; Richards, 2001). Many see image-text relations as central to such a reconceptualization (Andrews, 2004; Bolter, 1998; Boulter, 1999; Dresang, 1999; Jewitt, 2002, 2006; Jewitt and Kress, 2003; Kress, 2003b; Luke, 2003; New London Group, 2000). Although a good deal of recent work addresses the ways in which images construct meanings, very little has specifically addressed the intersemiotic semantic relationships between images and language to show how the visual and verbal modes interact to construct the integrated meanings of multimodal texts (Martinec and Salway, 2005; Royce, 2007). While some descriptive accounts have emerged, such as McCloud's (1994) explication of image-language interaction in graphic novels and educational work on science textbooks (Roth et al., 2005), research within systemic functional semiotics has provided a systematic account of a "semantic system of image-text relations that would map out how images and text interact" (Martinec and Salway, 2005: 341). Nevertheless, the development of a generalized semiotic system describing the semantics of the co-articulation of image and language remains in its infancy (Kress, 2001; Lemke, 2006; Macken-Horarik, 2003; Unsworth, 2008). Advancing understanding of how images and language interact to construct meaning seems crucial to any attempt to reconceptualize literacy and literacy pedagogy from a multimodal perspective. The discussion in this chapter seeks to contribute to this kind of advancement.