Viral Spaces
Building scalable systems that enhance how we learn from and experience real spaces.
The Viral Spaces group researches the intersection of mobile networks with physical spaces. We have a rich history in proximal and infrastructure-free networks, and in applications that integrate mobile computing with the spaces around us. That includes codes embedded in objects and in images that make them self-describing and detectable. Starting in 2013, we are focusing on Ultimate Media (see the UM listing). This multi-sponsor program envisions a unified interface for all visual media, including television, movies, magazines, and newspapers. It is a generalized platform for social and data-driven exploration and creation of news, sports, and narrative experiences.

Research Projects

  • AudioFile

    Andy Lippman, Travis Rich and Stephanie Su

    AudioFile overlays imperceptible tones on standard audio tracks to embed digital information that can be decoded by standard mobile devices. AudioFile lets users explore their media more deeply by granting them access to a new channel of communication. The project creates sound that is simultaneously meaningful to humans and machines. Movie tracks can be annotated with actor details, songs can be annotated with artist information, or public announcements can be infused with targeted, meaningful data.
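    AudioFile's actual decoder is not described here; as a rough, hypothetical sketch of the idea, the Python below hides bits in near-ultrasonic FSK tones mixed quietly under a track and recovers them with a Goertzel filter. All frequencies, amplitudes, and names are illustrative choices, not the project's parameters.

```python
import math

SAMPLE_RATE = 44100
SYMBOL_LEN = 2205          # 50 ms per bit; both tones fit a whole number of cycles
F0, F1 = 18000.0, 18500.0  # near-ultrasonic carriers for bits 0 and 1
AMPLITUDE = 0.02           # quiet enough to sit under the main track

def embed(bits, track):
    """Overlay one FSK tone per bit onto a copy of the audio track."""
    out = list(track)
    for i, bit in enumerate(bits):
        freq = F1 if bit else F0
        for n in range(SYMBOL_LEN):
            t = i * SYMBOL_LEN + n
            out[t] += AMPLITUDE * math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
    return out

def goertzel_power(samples, freq):
    """Signal power at a single frequency bin (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def extract(track, nbits):
    """Recover bits by comparing the two tone powers in each symbol window."""
    bits = []
    for i in range(nbits):
        window = track[i * SYMBOL_LEN:(i + 1) * SYMBOL_LEN]
        bits.append(1 if goertzel_power(window, F1) > goertzel_power(window, F0) else 0)
    return bits
```

    A phone decoding in real time would also need symbol synchronization and error correction, which this sketch omits.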

  • Constellation

    Robert Hemsley, Jonathan Speiser, Dan Sawada and Andrew Lippman

    Constellation generates a stream of frame-level metadata from a set of modular analysis engines. Currently, the system provides named-entity extraction, audio expression markers, face detection, scene/edit-point location, excitement tracking, and thumbnail summarization. Constellation includes a video recorder and processes 14 DirecTV feeds as well as video content crawled from the web. Video is retained as storage capacity allows, while the metadata database is permanent. Constellation is the metadata driver for most Ultimate Media projects.
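    Constellation's real engines (face detection, named-entity extraction, and so on) are heavyweight; the hypothetical sketch below shows only the modular pipeline shape, with toy engines standing in for the real analyzers:

```python
class MetadataPipeline:
    """Toy model of a modular, frame-level metadata pipeline."""

    def __init__(self):
        self.engines = {}  # engine name -> callable(frame) -> annotation

    def register(self, name, engine):
        self.engines[name] = engine

    def analyze(self, frames):
        """Yield one merged metadata record per frame."""
        for index, frame in enumerate(frames):
            record = {"frame": index}
            for name, engine in self.engines.items():
                record[name] = engine(frame)
            yield record

# Toy engines: mean brightness of a frame, and a crude label derived from it.
def brightness(frame):
    return sum(frame) / len(frame)

def scene_label(frame):
    return "bright" if brightness(frame) > 0.5 else "dark"
```

    A real deployment would register engines that emit named entities, face boxes, and edit points, and would persist each record to the permanent database.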

  • Crystal Media

    Amir Lazarovich, Daniel Novy, Andy Lippman, V. Michael Bove

    We have created a hemispherical multi-touch globe on which we present a personal view of the interconnections between data, such as news and narrative entertainment, for an individual or for a group of people using it simultaneously. It works for people in the same room as well as among people in different, similarly equipped places. Multi-touch gestures expose and bring into focus joint commonalities and present them on a large 4K display. Rather than fighting over a remote control, viewers engage in a joint activity. The goal is to explore the universe of media as well as the themes that friends hold in common.

  • Electric Price Tags

    Andy Lippman, Matthew Blackshaw and Rick Borovoy

    Electric Price Tags are a realization of a mobile system that is linked to technology in physical space. The underlying theme is that being mobile can mean far more than focusing on a portable device—it can be the use of that device to unlock data and technology embedded in the environment. In its current version, users can reconfigure the price tags on a store shelf to display a desired metric (e.g., price, unit price, or calories). While this information is present on the boxes of the items for sale, comparisons would require individual analysis of each box. The visualization provided by Electric Price Tags allows users to view and filter information in physical space in ways that were previously possible only online.
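    As an illustration of the reconfiguration step, a toy shelf model might recompute every tag at once when the shopper picks a metric. The product data, units, and names below are invented for the example:

```python
# Hypothetical shelf inventory; a real deployment would pull this from the store.
SHELF = {
    "oats_500g": {"price": 3.50, "grams": 500, "calories": 1890},
    "oats_1kg":  {"price": 6.00, "grams": 1000, "calories": 3780},
}

def render_tags(metric):
    """Compute the display string for every tag on the shelf for one metric."""
    tags = {}
    for sku, info in SHELF.items():
        if metric == "unit_price":
            # Derived metric: normalize price to a common unit for comparison.
            tags[sku] = f"${info['price'] * 1000 / info['grams']:.2f}/kg"
        elif metric == "price":
            tags[sku] = f"${info['price']:.2f}"
        else:
            tags[sku] = str(info[metric])
    return tags
```

    The point of the derived `unit_price` branch is the comparison the boxes themselves cannot offer: every tag switches to the same normalized unit simultaneously.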

  • Encoded Reality

    Andy Lippman and Travis Rich

    We are exploring techniques to integrate digital codes into physical objects. Spanning both the hard and the soft, this work entails incorporating texture patterns into the surfaces of objects in a coded manner. Leveraging advances in rapid prototyping and manufacturing, we are developing methods for creating deterministic, encoded surface textures. The goal is to take steps toward a self-descriptive universe in which all objects contain, within their physical structure, hooks to information about how they can be used, how they can be fixed, what they are used for, who uses them, and so on. Our motivation is to transform opaque technologies into things that teach and expose information about themselves through the sensing technologies we already carry, or foreseeably could carry, on us.
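    The fabrication side is beyond a short sketch, but the deterministic encoding itself can be illustrated as a toy bit-to-bump-height mapping for a printable grid. All dimensions and names here are hypothetical:

```python
def texture_from_bytes(data, width=8, base=0.1, bump=0.4):
    """Map each bit of `data` (LSB first) to a bump height in a grid of cells."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    rows = [bits[r:r + width] for r in range(0, len(bits), width)]
    return [[base + bump * b for b in row] for row in rows]

def bytes_from_texture(grid, base=0.1, bump=0.4):
    """Recover the bytes by thresholding the measured cell heights."""
    bits = [1 if h > base + bump / 2 else 0 for row in grid for h in row]
    out = bytearray()
    for i in range(0, len(bits), 8):
        out.append(sum(b << j for j, b in enumerate(bits[i:i + 8])))
    return bytes(out)
```

    A real surface code would additionally need an orientation marker and redundancy so a camera or touch sensor can read it from an arbitrary angle.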

  • GIFGIF

    Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu and Travis Rich

    An animated gif is a magical thing. It contains the power to convey emotion, empathy, and context in a subtle way that text or emoticons often miss. GIFGIF is a project to capture that magic with quantitative methods. Our goal is to create a tool that lets people explore the world of gifs by the emotions they evoke, rather than by manually entered tags.
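    GIFGIF gathers pairwise human judgments of which of two GIFs better expresses a given emotion. One simple way to turn such votes into per-emotion scores, sketched here purely for illustration (the project's actual scoring model may differ), is an Elo-style update:

```python
def elo_update(ratings, winner, loser, k=32):
    """Nudge two GIFs' scores for one emotion after a single pairwise vote."""
    ra = ratings.get(winner, 1500.0)
    rb = ratings.get(loser, 1500.0)
    # Expected probability that `winner` would win, given current scores.
    expected = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
    ratings[winner] = ra + k * (1.0 - expected)
    ratings[loser] = rb - k * (1.0 - expected)
    return ratings
```

    Keeping one ratings table per emotion lets users browse GIFs ranked by, say, "amusement" rather than by manually entered tags.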

  • Graffiti Codes

    Andrew Lippman and Jeremy Rubin

    Graffiti Codes transform the space around you into a mobile-readable environment. Anyone can draw a simple shape on any surface, like graffiti, and a mobile device reads it simply by tracing the outline. It's a human-created VR code. This work diverges from the camera-scanning model and uses accelerometer-based paths to unlock data. Where a QR code cannot be easily generated in the field, Graffiti Codes require only a marker and a surface.
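    One simple, hypothetical way to turn a traced path into a lookup key, in the spirit of accelerometer-based decoding, is to quantize successive movement directions and collapse repeats; the resulting code is independent of how large the shape was drawn:

```python
import math

def chain_code(points, directions=8):
    """Collapse a traced path into a scale-invariant tuple of direction symbols."""
    symbols = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x1 == x0 and y1 == y0:
            continue  # skip pauses in the trace
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        sym = int(round(angle / (2 * math.pi / directions))) % directions
        if not symbols or symbols[-1] != sym:
            symbols.append(sym)  # run-length collapse: only direction changes matter
    return tuple(symbols)
```

    The code then serves as a key into a table mapping shapes to data, so the same square traced on a wall or a napkin unlocks the same content. A robust version would also tolerate noisy angles near symbol boundaries.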

  • Helios

    Eric Dahlseng, Robert Hemsley, Dan Sawada and Jonathan Speiser

    Helios provides an automatic way of socializing one's video interactions. It is a Chrome browser plug-in that records a user's encounters with embedded videos on the web. The data is contributed to a group collection so that one can readily see what is trending among friends and where the outliers are. In addition, the data is processed by Constellation for metadata tagging.

  • Media Universe

    Vivian Diep, Savannah Niles, Andrew Lippman

    The Media Universe presents TV programs, movies, live broadcast events, magazines, and other web media events organized by the intrinsic relations that connect these programs. We create a networked visualization of the interconnections between programs as revealed by the actors, settings, abstracts, and creators. The visualization prioritizes exploration over search: groups of programs spatially arrange themselves to reveal shared connections among them, and selecting a node reveals its related programs. We intersect that view with the personal networks of friends and associates to understand how the narrative experience binds us together.
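    A minimal sketch of the underlying idea, assuming each program has been reduced to a set of attribute tags (actors, settings, creators), might build the graph and expand a selected node like this (all names hypothetical):

```python
from itertools import combinations

def build_graph(programs):
    """Connect every pair of programs that share at least one attribute tag.
    `programs` maps title -> set of tags like "actor:Smith" or "city:Boston"."""
    edges = {}
    for a, b in combinations(sorted(programs), 2):
        shared = programs[a] & programs[b]
        if shared:
            edges[(a, b)] = shared  # edge labeled with what the pair shares
    return edges

def related(title, edges):
    """Programs linked to `title`, used when a viewer selects a node."""
    out = {}
    for (a, b), shared in edges.items():
        if a == title:
            out[b] = shared
        elif b == title:
            out[a] = shared
    return out
```

    The edge labels are what make exploration work: the interface can show *why* two programs sit near each other, not just that they do.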

  • NewsFlash

    Andy Lippman and Grace Rusi Woo

    NewsFlash is a social way to experience the global and local range of current events. People see a tapestry of newspaper front pages. The headlines and main photos tell part of the story, and NewsFlash tells you the rest. Users point their phones at a headline or picture of interest to bring up a feed of the article text from that given paper. The data emanates from the screen and is captured by a cell-phone camera; any number of people can see it at once and discuss the panoply of ongoing events. NewsFlash creates a local space that is simultaneously interactive and provocative. We hope it gets people talking.

  • PictureThat

    Jonathan Speiser and Andrew Lippman

    PictureThat is a social game that lets you turn the world around you into a scavenger hunt. Snap a picture of something you find interesting, and challenge a friend to find it.

  • Recast

    Dan Sawada, Robert Hemsley, Andrew Lippman

    Recast is a media curation and distribution platform that enables anyone to create and distribute "news programs" that present their own perspective on the world. Recast provides a visual scripting interface, similar to Scratch, in which users combine a series of logical blocks to query specific scene elements from arbitrary video content and construct a story sequence that presents their view. Recast uses the Constellation system as a backend for querying video content, and uses the Media Matrix as a content distribution platform.

  • Teamification: ChessMaze

    Andrew Lippman, Maurice Ashley and Shen Shen

    Many of us who enjoy what we do were motivated by an event or person that inspired us to learn, challenge, and question. We are building applications that attempt to convey that epiphany through cooperative learning and exploration. This year's work centers around ChessMaze, a strategic thinking game built in collaboration with International Chess Grand Master Maurice Ashley. ChessMaze challenges novices to learn the game as a team, and to develop generalizable strategic thinking. While playing their solo games, individual players contribute to a team database, where the same moves are grouped together, sorted based on popularity, and presented back to the community of players on a heat map.
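    The grouping-and-popularity step behind the heat map can be sketched in a few lines. This is a toy model; the move notation and function names are assumptions:

```python
from collections import Counter

def move_heatmap(games):
    """Count how often each move appears across all players' solo games.
    `games` is a list of move lists in a simple from-square/to-square notation."""
    return Counter(move for game in games for move in game)

def popular_moves(counts, square):
    """Moves landing on `square`, sorted most popular first, for the heat map."""
    hits = [(n, move) for move, n in counts.items() if move.endswith(square)]
    return [move for n, move in sorted(hits, reverse=True)]
```

    Aggregating identical moves across the community is what turns many solo games into a shared picture of where the team's thinking concentrates.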

  • Tele-Correlator

    Robert Hemsley, Dan Sawada, Andrew Lippman

    The Tele-Correlator allows people to view multiple perspectives on real-time news events, aligned by content as well as time. It reveals emphasis and timing, and allows participants to discover points of view and important events. Currently, the Tele-Correlator aligns news from four broadcast sources to visually reveal the emphasis and time they devote to a succession of events.

  • The Glass Infrastructure (GI)

    Henry Holtzman, Andy Lippman, Jon Ferguson and Julia Shuhong Ma

    This project builds a social, place-based information window into the Media Lab using 30 touch-sensitive screens strategically placed throughout the physical complex and at sponsor sites. The idea is to get people talking among themselves about the work they jointly explore in a public place. We present Lab projects as dynamically connected sets of "charms" that visitors can save, trade, and explore. The GI demonstrates a framework for an open, integrated IT system and shows new uses for it.

  • Vidplora: Exploring the World through Video

    Andy Lippman, V. Michael Bove, and Jonathan Speiser

    What is happening in the world now? It is a question whose answer is scattered across the web, and whose surface can only be scratched via traditional search-based interfaces. The goal of this project is to develop a window for exploration into current world happenings, so that people can answer visually the question of "What is happening now?" without a specific, a priori search term. To do this, we created a web crawler geared specifically toward extracting recent video content, along with an accompanying user interface that lets users virtually explore the world through video. (Ultimate Media Program)

  • VIO

    Robert Hemsley, Dan Sawada, Jonathan Speiser, and Andrew Lippman

    We use video to bridge the gap between the Internet of Things and interactive media. We create a streaming track akin to an additional audio track that coordinates action in an image sequence with object actions at the viewing point. For example, one could start or stop a blender under the control of a cooking program. The track is a functional language with full scripting ability and is bi-directional so that events at the receiving point can automatically alter an image sequence stream.
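    As a minimal, one-directional sketch of the idea, assuming a cue list synchronized to the video timeline (all names hypothetical; the real track is a bi-directional scripting language):

```python
class ControlTrack:
    """Toy control track: cues fire device commands at video timestamps."""

    def __init__(self):
        self.cues = []  # (seconds, device, command), kept sorted by time

    def add_cue(self, seconds, device, command):
        self.cues.append((seconds, device, command))
        self.cues.sort()

    def fired(self, t_prev, t_now):
        """Commands whose cue time falls inside the playback step (t_prev, t_now]."""
        return [(device, command)
                for seconds, device, command in self.cues
                if t_prev < seconds <= t_now]
```

    The player calls `fired` once per playback tick; in the full system, events flowing the other way (a device report, say) would be able to alter the image sequence itself.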

  • VR Codes

    Andy Lippman and Grace Woo

    VR Codes are dynamic data invisibly hidden in television and graphic displays. They allow a display to present visual information to viewers in an unimpeded way while simultaneously presenting real-time data to a camera. Our intention is to make social displays that many can use at once; using VR Codes, users can draw data from a display and control its use on a mobile device. We think of VR Codes as analogous to QR codes for video, and envision a future in which every display in the environment contains latent information embedded in VR Codes.

  • WorldLens

    Jonathan Speiser, Robert Hemsley, Dan Sawada, and Andy Lippman

    WorldLens informs users about newsworthy events, both popular and obscure. It is a front page that is navigable and scalable, allowing one to discover as well as track ongoing events. We array in-depth news information across a large multi-touch display, organized by time, by coverage, and by geography. Elements are drawn from blogs, the web, newspapers, magazines, and television; each is presented by a front page that tells the literal story. Readers can fly through the news space, mark items of interest, and activate each one. News data is gathered and analyzed by "Constellation," our system that generates frame-by-frame metadata for video and page analysis for other online material.