Scott W. Greenwald, Mina Khan, Pattie Maes
One of the main ways that humans learn is by interacting with peers, in context. When we don't know which bus to take, how to prepare plantains, or how to use a certain app, we ask a nearby peer. If the right peer is not around, we can use a mobile device to connect to a remote one. To get the desired answer, though, we need to find the right person and have the right affordances for sending and receiving the relevant information. In this paper, we define a class of micro-presence systems for just-in-time micro-interactions with remote companions, and explore dimensions of the design space. Informed by theories of contextual memory and situated learning, we present TagAlong, a micro-presence system conceived for learning a second language in context with help from a remote native speaker. TagAlong uses a connected head-mounted camera and head-up display to share images of the wearer's visual context with remote peers. The remote companion conveys knowledge using these images as a point of reference: images are sampled, annotated, and sent back to the wearer in real time. We report on a quantitative experiment measuring the effectiveness of visual cues that combine spatial and photographic elements as a means of "pointing" at things to draw the wearer's attention to them.