This article by Toni Navarrete, Patricia Santos, Davinia Hernández-Leo and Josep Blat was published in the online journal Educational Technology & Society, volume 14, issue 3, in 2011.
Test-based e-assessment approaches mostly focus on assessing knowledge rather than other skills that could be supported by multimedia interactive services. This paper presents the QTIMaps model, which combines the IMS QTI standard with web map services, enabling the computational assessment of geographical skills. We introduce a reference implementation of the model, with Google Maps as the web map service, comprising both an editor and a runtime system, which have been used in two learning situations. The tooling developed and the results of real use demonstrate that the QTIMaps model is usable and provides educational benefits. We describe three other assessment activities, showing that the model can be applied to a variety of educational scenarios.
Aimed primarily at young people aged 14 to 20, the web fiction L@-KOLOK.com follows four flatmates in their twenties. By watching them live their lives, the ‘spect-actor’ (viewer-participant) is confronted alongside them with the major issues of our time: the environment, health, consumption, energy, food... Each episode explores a particular social issue.
GeoHCI 2013 aims to provide a much needed venue for members of the human-computer interaction and geography communities to create and share knowledge on topics that span this disciplinary boundary.
For the increasing number of HCI researchers and practitioners whose work has a geographic component, GeoHCI 2013 will offer a unique opportunity to discuss best practices and open research questions with like-minded members of the HCI community and with geographers, whose field has a rich understanding of spatial phenomena. For geographers, GeoHCI 2013 is a chance to do the same with experts in HCI-related areas such as online communities, mobile and online maps, location-based social networks, crisis informatics, ubiquitous computing, and augmented reality.
Researchers and practitioners in HCI, geography, and related disciplines who are interested in participating should submit a two-page position statement as described in the call for papers. Position statements are due January 11, 2013 and should be submitted through our EasyChair site. The workshop, co-located with CHI 2013 in Paris, will be on April 27.
On April 28, we are hosting an optional second workshop day that will consist of various "in the field" activities. We are actively seeking proposals for participant-led field trips. Have a great new citizen science app you want to demonstrate? Want to lead an OpenStreetMap data collection activity to bring everyone at the workshop up to speed on the OSM state-of-the-art? Can you guide us on an augmented reality tour of Paris? Let us know! Position statements that are accompanied by field activity proposals will receive extra consideration.
This document presents a report on the use of real-time data to help teachers evaluate and map students' trust. The work described here seeks to clarify how trust can be leveraged as a factor to support creative collaboration in online learning.

The report develops this understanding from the teacher's perspective. Students' trust and engagement are examined in real cases in which international students collaborate (at a distance) and present their study assignments and knowledge. This line of research aims to uncover groups' vulnerabilities in order to strengthen their collaboration and interaction.

By understanding how to evaluate and map students' trust, we believe teachers can use this information to intervene (when necessary) and provide positive support. In doing so they can strengthen students' autonomy and their motivation to learn new things and to find new paths to knowledge every day. The bulk of the results gathered so far concerns the degree of influence on students' behaviour.

Three main aspects are identified in observing trust and its role in consolidating supportive and positive interventions. These include observing: (1) how students perceive others' intentions in a given context; (2) changes in students' engagement in a given activity (willingness to collaborate); and (3) how students perceive the use of communication media for learning purposes (reactions, intended use and actual use).
We understand the relationship between UX and usability as one in which the latter is subsumed by the former. Usability evaluation methods (UEMs) and metrics are relatively mature. In contrast, UX evaluation methods (UXEMs), which draw largely on UEMs, are still taking shape. Feeding the outcomes of UX evaluation back into the software development cycle to instigate the required changes can conceivably be even more challenging than doing so for usability evaluation (UE). This leads to several key issues.
- Given that UX attributes are (much) fuzzier and more malleable, what kinds of diagnostic information and improvement suggestions can be drawn from evaluation data? For instance, a game can be perceived by the same person as great fun one day and terrible boredom the next, depending on the player's prevailing mood. The waning of the novelty effect (cf. learnability, which differs over time in the case of usability) can account for the difference as well. How does evaluation feedback enable designers/developers to fix this experiential problem (cf. usability problem), and how can they know that their fix works (i.e. downstream utility)?
- Emphasis is put on conducting UE in the early phases of a development lifecycle with the use of low fidelity prototypes, thereby enabling feedback to be incorporated before it becomes too late or costly to make changes. However, is this principle applicable to UX evaluation? Is it feasible to capture authentic experiential responses with a low-fidelity prototype? If yes, how can we draw insights from these responses?
- The persuasiveness of empirical feedback determines its worth. Earlier research indicates that the development team needs to be convinced of the urgency and necessity of fixing usability problems. Is UX evaluation feedback less persuasive than usability feedback? If so, will the impact of UX evaluation be weaker than that of UE?
- The Software Engineering (SE) community has recognized the importance of usability. Efforts are focused on explaining the implications of usability for requirements gathering, software architecture design, and the selection of software components. Can such recognition and implications be taken for granted for UX, as UX evaluation methodologies and measures could be very different (e.g. artistic performance)?
- How to translate observational or inspectional data into prioritised usability problems or redesign proposals is thinly documented in the literature. Analysis approaches developed by researchers are applied only to a limited extent by practitioners. Such a divorce between research and practice could be even more pronounced for UX analysis approaches, which are essentially lacking.
While the gap between HCI and SE with regard to usability has been somewhat narrowed, it may widen again with the emergence of UX.
The main goal of I-UxSED 2012 is to bring together people from HCI and SE to identify challenges and plausible resolutions to optimize the impact of UX evaluation feedback on software development.
With the ed@d project (Enseñanza Digital a Distancia, digital distance teaching), the Spanish Ministry of Education presents a new model of interactive textbook that allows students to take advantage of Information and Communication Technologies to improve their autonomous learning and to streamline communication with their tutors in an advanced technological environment.
It also reports on the main evidence that emerged during the BEACON project, funded under the last call of the Sixth Framework Programme. BEACON, the Brazilian European Consortium for DTT Services, is a three-year project on digital terrestrial television with three main objectives:
- developing interoperability between the European (DVB) and Brazilian (SBTVD) standards for digital terrestrial television;
- studying methodologies for distance learning through digital TV;
- delivering t-learning services for social inclusion in São Paulo, Brazil.
The term t-learning refers to the provision of interactive training materials, content and services through a digital set-top box. The usage characteristics of t-learning, and its potential to spread on a larger scale than e-learning, open up new teaching scenarios addressed to a larger number of potential users, for both formal and informal courses. The actual development of t-learning systems and applications builds on the integrated capabilities and functions of both digital terrestrial TV and e-learning, above all in terms of enhanced interactivity, which creates opportunities for more engaged and active learning and for virtual communities.
The development of new services based on digital video terrestrial broadcasting (DVB-T) technology will make it possible to reach more users. The main goal is to offer learning services to users who, for economic or cultural reasons, cannot afford an Internet connection and a PC but do own a TV set, enabling them to acquire knowledge in many sectors and facilitating and improving their competitiveness in the labour market. The new digital broadcasting platforms will contribute to media diffusion in many countries and will, in the future, increase the opportunities for learning activities and for government and cultural services for citizens.
In practice, this structure has been seen to benefit jointly developed solutions for workplace learning, and it holds great potential for informal knowledge transfer. Micro-courses support informal learning close to the workplace, thereby raising the learning capacity of the company.
The micro-course concept has been developed within the framework of the European Union's Leonardo da Vinci programme.
Group or classroom discussions are commonplace in physical learning environments. The last decade has witnessed an increase in the use of virtual learning environments (VLEs) such as WebCT and Blackboard. Online discussion tools are a standard feature of such VLEs. Such tools avail of internet technology (hence ‘online’) to enable discussions between students and tutors. Unlike their classroom counterparts or online chat (another VLE feature), such discussions do not take place in real time. Hence they are described as asynchronous.
Teachers have long used classroom discussions as a way of encouraging students to interact, to evaluate conflicting opinions, to learn from one another, to make sense of their subject and derive meaning from their learning. But does the same apply to virtual environments? Can tutors, with little or no technological expertise, design online discussions to support meaningful learning?
Designing meaningful learning into online discussions
According to Thomas Shuell, ‘meaningful learning’ is a cognitive, metacognitive and affective activity, which is typified by five characteristics: it is active, cumulative, goal-oriented, constructive and self-regulated (Shuell 1992: 23-5).1 These characteristics are triggered when the learner engages certain ‘psychological processes’, called ‘learning functions’.2 The functions are in turn activated by learning tasks, which can be learner- or tutor-initiated.
It follows, therefore, that any learning task (including online discussion tasks) which produces these characteristics or activates a large number of Shuell’s learning functions increases the probability of producing ‘meaningful learning’.
For each of Shuell’s characteristics, the discussion below:
- shows how that characteristic can be produced by online discussions,
- illustrates this with an example from a second-year Managing Information Systems (MIS) course, and
- gives a set of simple design precepts.
Meaningful learning requires substantial cognitive activity, which is ‘the single important determinant’ of what learners learn (Shuell 1992: 24). In contrast with rote learning, where facts are simply recalled from a static knowledge base, meaningful learners actively construct their own learning and build flexible frameworks which can be applied to diverse problems (Hannafin & Land 1997: 171, 174). Online asynchronous discussions require the participant to engage in a behavioural activity, writing, and a cognitive activity, the mobilisation of, perhaps, inert knowledge into coherent argument, narrative or conversation. The behavioural and cognitive activities are complementary (Brown et al. 1989: 32, 39). The act of writing externalises ‘thinking’ and exposes it to self-scrutiny and the scrutiny of others (Rowntree in Salmon 1998: 6; Jonassen 1996: 166).
Most MIS discussion tasks take a reading as their point of departure. The aim of the tasks is to help learners move from a passive to an active understanding of the reading and from reproductive to reconstructive learning strategies.
The designer can help make discussion tasks active by encouraging learners to:
- endorse or challenge a contribution directly (Painter et al. 2003:169)
- generate hypotheses e.g. how do you apply this idea to your work (Shuell 1992: 36)
- brainstorm a problem (Jonassen 1996: 176).
The designer must also attend to learners’ motivation (Shuell 1992: 32-3) and accordingly must take care not to make tasks too facile, too difficult, too restrictive or too well defined (Shuell 1992: 32; Brown et al. 1989: 40).
Meaningful learning, as we have seen, assumes that each learner actively constructs his own knowledge. But this is constructed on the basis of prior knowledge, beliefs and preconceptions (Bransford et al. 2002: 10). New elements of learning are tied together like blocks and laid upon the foundations of prior knowledge in order to build effective overarching conceptual frameworks for their domains. Online discussions can help learners assimilate new knowledge into their schemas by directly or indirectly inviting a learner to recall prior knowledge (including preconceptions), relate it to the topic under discussion and to other ideas (Shuell 1992: 23-5).
MIS students have an average of 10 years’ experience behind them. One MIS discussion task asks them to draw on this experience, to recall a situation in which they experienced bad customer service, and to relate this to the question of customer service and IT. The task is designed to encourage discussion of a well-known negative preconception that IT professional support staff hold about their own customers.
The designer can help make discussion tasks cumulative by encouraging learners to:
- articulate relevant prior knowledge (Shuell 1992: 33). (This can encourage ownership of learning (Salmon 1998: 6) and motivate learners for new knowledge acquisition (Shuell 1992: 33).)
- articulate (known) preconceptions/misconceptions or naïve renditions (Bransford et al. 2002: 10).
- attend to the important elements in the new topic (Shuell 1992: 33-4) and
- make comparisons (Shuell 1992: 35) and draw interrelationships between new ideas and old ones.
Shuell’s ‘guided construction’, like constructivism generally, highlights the importance of learner goals. Learners need to be encouraged to take responsibility for formulating their own goals and to be helped to align them with the objectives of the domain. Online discussions can directly ask a learner to reflect on his goals and make them explicit. But goal-orientation also includes exposing the learner to higher-order goals, such as critical and creative thinking, the ‘broader more integrative outcomes’ which can be widely applied ‘across diverse learning tasks’ (Hannafin & Land 1997: 171, 174). Online discussions can be framed in such a way that they bring into play critical thinking activities such as explaining, predicting, arguing and critiquing (Shuell 1992: 23-5; Ohlsson 1995: 50-52).
One MIS discussion task asks learners what they think they would like to get out of the MIS course. A later discussion task asks learners to reflect on what they have achieved.
The designer can help make discussion tasks goal-oriented by encouraging learners to:
- clarify their expectations and goals, (Shuell 1992: 30)
- attend (Shuell 1992: 33-4) to possible differences between personal goals and domain objectives, and
- monitor progress (Shuell 1992: 30, 37-8) in relation to changing goals and expectations (Shuell 1992: 30) at different points in the learning episode
- put topics on the discussion agenda by building a number of ‘blank’ tasks, called ‘student topics’, into the task-schedule and, perhaps even,
- take responsibility for the management of a topic.
Constructivism assumes that learners are empowered, active explorers and constructors of knowledge, not merely passive receptacles for information (Bransford et al. 2002: 133-139). Such learner knowledge can often be embodied in a construct that summarises their learning: a web site, a video, or simply an argument. ‘Argument’ is a typical construct of discourse. Online discussion can encourage learners to explore issues, to take positions and to discuss them, the activities which Jonassen considers the essence of knowledge construction (Jonassen in Salmon 1998: 6). Elsewhere Jonassen goes as far as to say: ‘No mindtool … better facilitates these constructivist processes than CMC’ (Jonassen 1996: 166).
One MIS discussion task asks learners to construct a non-text-based summary (e.g. web site, PowerPoint, video) of the course. The constructs then become the subject of further constructive discussion (Shuell 1992: 23-5; Jonassen 1996: 170).
The designer can help make discussion tasks constructive by encouraging learners to:
- articulate/endorse/challenge a position with written evidence, experience or, usefully, the opinions of peers (Painter et al. 2003:161-2, Salmon 1998: 6)
- compare, combine, integrate, synthesise or reconstruct ideas in their own language (Shuell 1992: 35,38)
- give examples
- shift attention from symptoms to causes, from assertions to justifications and from the specific to the general (Painter et al. 2003: 166-7)
- use any tools or resources to fashion artefacts, such as tabular lists or mindmaps, which can help learners both to encode their knowledge (Shuell 1992: 34-5) and to attend to differences between their work and that of others.
Metacognition refers to the learner’s capacity to reflect on and regulate his learning. The learner needs to be aware of his goals, progress and problems (Salmon 1998: 6). Writing and discourse expose the participant’s reasoning to self-examination and the examination of others (Jonassen 1996: 166). Online discussion can directly request a learner to reflect on his learning (reflection). More indirectly, the learner by studying the contributions of others can make some judgements about his progress in comparison with theirs (regulation).
As under goal-orientation, one MIS discussion task asks learners what they think they would like to get out of the MIS course, and a later task asks them to reflect on what they have achieved; both tasks also exercise self-regulation.
The designer can help make discussion tasks self-regulated by encouraging the learner to:
- monitor and manage his own learning (Shuell 1992: 37-8; Goodyear et al. 2003: 15-16), which the designer can support by carefully scheduling discussion tasks so that they provide frequent deadlines
- articulate his goals and expectations (as above) (Shuell 1992: 33)
- avail of formative feedback (Shuell 1992: 37), which the designer can support by scheduling feedback time into the overall discussion task schedule
- reflect explicitly on his own work, perhaps by comparing his contributions with those of his peers.
Shuell’s account of ‘meaningful learning’ neglects somewhat the situated nature of cognition. According to this view learners can be thought of as members of a community of practice availing of resources, tools and tacit know-how, articulated through community discourse, in the pursuit of solutions to useful authentic tasks. From this perspective knowing and doing are inseparable. (Brown et al. 1989: 32-42; Bransford et al. 2002: 280; Koschmann 1996:13). Online discussions can facilitate situated approaches to learning since discourse is central to such communities.
MIS students are drawn from both the IT community and the larger community of the Irish Public Sector. These communities have recognisable values, norms, practices, procedures and problems, and a language in which to express them. MIS Discussion Task 6 asks learners to focus on a critical public sector issue, BPR, and on how the IT community can help enable BPR in their respective organisations.
The designer can help make discussion tasks situated by encouraging the learner to:
- ‘lean on whatever context’ they can (Brown et al. 1989: 36, 32-3).
- reflect on and apply their learning or implicit knowledge to multiple appropriate authentic tasks (Brown et al. 1989: 34; Bransford et al. 2002: 42-44, 59-60)
- use tools (Brown et al. 1989: 33)
- talk to his peers at work and record their views in the discussions
- become more aware of critical community issues by, for instance, designing a task that is moderated by a guest speaker from the community of practice (Jonassen 1996: 165-6).
The designer should, in general, keep tasks ill-defined (Brown et al. 1989: 35) and take care to interweave in the text of the task the formal language of the domain with the ‘argot’ of the community of practice. Most obviously the designer should avail of learning technologies for collaborative activities and the development of online communities (Jones 2003: 10).
- Bransford, J., Brown, A. & Cocking, R. (2002) How People Learn. Washington, DC: National Academy Press
- Brown, J.S., Collins, A. & Duguid, P. (1989) Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42
- Goodyear, P., Jones, C., Asensio, M., Hodgson, V. & Steeples, C. (2003) Constructing the ‘good’ e-learner. Proceedings of the 10th Biennial European Association for Research on Learning and Instruction (EARLI) Conference, Padova, Italy
- Hannafin, M. & Land, S. (1997) The foundations and assumptions of technology-enhanced student-centered learning environments. Instructional Science, 25(3), 167–202
- Jonassen, D. (1996) Computer-mediated communication: connecting communities of learners. Chapter 7 in Computers in the Classroom: Mindtools for Critical Thinking. Englewood Cliffs, NJ: Merrill, Prentice Hall
- Jones, C. (2003) Networks and learning: communities, practices and the metaphor of networks. In Cook, J. & McConnell, D. (Eds) Communities of Practice. Research Proceedings of the 10th Association for Learning Technology Conference (ALT-C 2003), 8-10 September 2003, The University of Sheffield and Sheffield Hallam University, UK
- Koschmann, T. (1996) Paradigm shifts and instructional technology: an introduction. In Koschmann, T. (Ed.) CSCL: Theory and Practice of an Emerging Paradigm (pp. 1-23). Mahwah, NJ: Lawrence Erlbaum Associates
- Ohlsson, S. (1995) Learning to do and learning to understand: a lesson and a challenge for cognitive modelling. In Reimann, P. & Spada, H. (Eds) Learning in Humans and Machines: Towards an Interdisciplinary Learning Science (pp. 37-62). London: Pergamon
- Painter, C., Coffin, C. & Hewings, A. (2003) Impacts of directed tutorial activities in computer conferencing: a case study. Distance Education, 24(2), 159–174
- Salmon, G. (1998) Developing learning through effective online moderation. Active Learning, 9, 3–8
- Shuell, T. (1992) Designing instructional computing systems for meaningful learning. In Jones, M. & Winne, P. (Eds) Adaptive Learning Environments. New York: Springer-Verlag
- Shuell gives this as a description of cognitive conceptions or characteristics of learning, but it seems to apply equally well to his own work.
- The learning functions are: expectations, motivation, prior knowledge activation, attention, encoding, comparison, hypothesis generation, repetition, feedback, evaluation, monitoring and combination-integration-synthesis.