Fluid Interfaces
Integrating digital interfaces more naturally into our physical lives, enabling insight, inspiration, and interpersonal connections.
The Fluid Interfaces research group is radically rethinking the ways we interact with digital information and services. We design interfaces that are more intuitive and intelligent, and better integrated in our daily physical lives. We investigate ways to augment the everyday objects and spaces around us, making them responsive to our attention and actions. The resulting augmented environments offer opportunities for learning and interaction and ultimately for enriching our lives.

Research Projects

  • Augmented Airbrush

    Roy Shilkrot, Amit Zoran, Pattie Maes and Joseph A. Paradiso

    We present an augmented handheld airbrush that allows unskilled painters to experience the art of spray painting. Inspired by similar smart tools for fabrication, our handheld device uses 6DOF tracking, mechanical augmentation of the airbrush trigger, and a specialized algorithm to let the painter apply color only where indicated by a reference image. It acts both as a physical spraying device and as an intelligent digital guiding tool that provides manual and computerized control. Using an inverse rendering approach allows for a new augmented painting experience with unique results. We present our novel hardware design, control software, and a discussion of the implications of human-computer collaborative painting.
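The guiding behavior described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the reference image is reduced to a 2D grid of desired paint intensities, the tracked 6DOF nozzle pose is reduced to an (x, y) canvas cell, and the mechanical trigger is allowed to open only where the reference calls for paint. The threshold value is invented for illustration.

```python
def trigger_scale(reference, x, y, threshold=0.1):
    """Return how far the trigger may open (0..1) at canvas cell (x, y).

    `reference` is a row-major grid of desired paint intensities in [0, 1].
    Below `threshold` the trigger stays fully closed, protecting blank areas.
    """
    if not (0 <= y < len(reference) and 0 <= x < len(reference[0])):
        return 0.0  # nozzle is off-canvas: never spray
    desired = reference[y][x]
    return desired if desired >= threshold else 0.0

# Example: a 3x3 reference with paint wanted only in the center
ref = [[0.0, 0.0, 0.0],
       [0.0, 0.8, 0.0],
       [0.0, 0.0, 0.0]]
print(trigger_scale(ref, 1, 1))  # center: trigger opens to 0.8
print(trigger_scale(ref, 0, 0))  # blank area: trigger stays closed
```

The real system additionally models spray cone overspray via inverse rendering; this sketch captures only the gating idea.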

  • BrainVR: A Neuroscience Learning Experience in Virtual Reality


    Pattie Maes, Scott Greenwald, Alex Norton and Amy Robinson (EyeWire) and Daniel Citron (Harvard University)

    BrainVR is a learning experience that leverages motion-tracked virtual reality to convey cutting-edge knowledge in neuroscience. In particular, an interactive 3D model of the retina illustrates how the eye detects moving objects. The goal of the project is to explore the potential of motion-tracked virtual reality for learning complex concepts, and to build reusable tools to maximize this potential across knowledge domains.

  • Enlight

    Tal Achituv, Natan Linder, Rony Kubat, Pattie Maes and Yihui Saw

    In physics education, virtual simulations have given us the ability to show and explain phenomena that are otherwise invisible to the naked eye. However, experiments with analog devices still play an important role. They allow us to verify theories and discover ideas through experiments that are not constrained by software. What if we could combine the best of both worlds? We achieve that by building our applications on a projected augmented reality system. By projecting onto physical objects, we can paint the phenomena that are invisible. With our system, we have built "physical playgrounds": simulations that are projected onto the physical world and that respond to detected objects in the space. Thus, we can draw virtual field lines on real magnets, track and provide history on the location of a pendulum, or even build circuits with both physical and virtual components.

  • Express

    Cristina Powell (artist; has cerebral palsy), Pattie Maes, Tal Achituv and Daniel Kish (expert in perception and accessibility for the blind)

    We are developing a new and exciting tool for expression in paint, combining technology and art to bring together the physical and the virtual through the use of robotics, artificial intelligence, signal processing, and wearable technology. Our technology promotes expression in paint not only by making it far more accessible, but also by making it flexible, adaptive, and fun for everyone across the entire spectrum of abilities. With the development of the technology, new forms of art also emerge, such as hyper, hybrid, and collaborative painting. All of these can be extended to remote operation (or co-operation) thanks to the modular system design. For example, a parent and a child can paint together even when far apart; a person with a disability can have an embodied painting experience; and medical professionals can reach larger populations with physical therapy, occupational therapy, and art therapy, including people with motor or neuromuscular impairments.

  • EyeRing: A Compact, Intelligent Vision System on a Ring

    Roy Shilkrot and Suranga Nanayakkara

    EyeRing is a wearable, intuitive interface that allows a person to point at an object to see or hear more information about it. We came up with the idea of a micro-camera worn as a ring on the index finger with a button on the side, which can be pushed with the thumb to take a picture or a video that is then sent wirelessly to a mobile phone to be analyzed. The user tells the system what information they are interested in and receives the answer in either auditory or visual form. The device also provides some simple haptic feedback. This finger-worn configuration of sensors and actuators opens up a myriad of possible applications for the visually impaired as well as for sighted people.

  • FingerReader

    Roy Shilkrot, Jochen Huber, Pattie Maes and Suranga Nanayakkara

    FingerReader is a finger-worn device that helps the visually impaired to effectively and efficiently read paper-printed text. It works in a local-sequential manner, scanning text to enable reading of single lines or blocks of text, or skimming for important sections, while providing auditory and haptic feedback.

  • Food Attack

    Pattie Maes and Niaja Farve

    The rise in wearable devices and the desire to quantify various aspects of everyday activities has provided the opportunity to offer just-in-time triggers to aid in achieving pre-determined goals. While a lot is known about the effectiveness of messaging in marketing efforts, less is known about the effectiveness of these marketing techniques on in-the-moment decision-making. We designed an experiment to determine if a simple solution of using just-in-time persuasive messaging could influence participants' eating habits and what types of messaging could be most effective in this effort. Our solution utilizes a head-mounted display to present health-based messages to users as they make real-time snack choices. We are able to show that this method is effective and more feasible than current efforts to influence eating habits.

  • GlassProv Improv Comedy System

    Pattie Maes, Scott Greenwald, Baratunde Thurston and Cultivated Wit

    As part of a Google-sponsored Glass developer event, we created a Glass-enabled improv comedy show together with noted comedians from ImprovBoston and Big Bang Improv. The actors, all wearing Glass, received cues in real time in the course of their improvisation. In contrast with the traditional model for improv comedy, punctuated by "freezing" and audience members shouting suggestions, using Glass allowed actors to seamlessly integrate audience suggestions. Actors and audience members agreed that this was a fresh take on improv comedy. It was a powerful demonstration that cues on Glass are suitable for performance: actors could become aware of the cues without having their concentration or flow interrupted, and then view them at an appropriate time thereafter.

  • HandsOn: A Gestural System for Remote Collaboration Using Augmented Reality

    Kevin Wong and Pattie Maes

    2D screens, even stereoscopic ones, limit our ability to interact with and collaborate on 3D data. We believe that an augmented reality solution, where 3D data is seamlessly integrated in the real world, is promising. We are exploring a collaborative augmented reality system for visualizing and manipulating 3D data using a head-mounted, see-through display that supports communication and data manipulation through simple hand gestures.

  • HRQR

    Valentin Heun and Eythor Runar Eiriksson

    HRQR is a visual Human and Machine Readable Quick Response Code that can replace common 2D barcode and QR code applications. The code can be read by humans in the same way it can be read by machines. Instead of relying on computational error correction, the system allows a human to read the message and therefore to reinterpret errors in the visual image. The design is heavily inspired by Kufic, an early style of Arabic calligraphy.

  • Invisibilia: Revealing Invisible Data as a Tool for Experiential Learning

    Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

    Invisibilia seeks to explore the use of Augmented Reality (AR), head-mounted displays (HMD), and depth cameras to create a system that makes invisible data from our environment visible, combining widely accessible hardware to visualize layers of information on top of the physical world. Using our implemented prototype, the user can visualize, interact with, and modify properties of sound waves in real time by using intuitive hand gestures. Thus, the system supports experiential learning about certain physics phenomena through observation and hands-on experimentation.

  • JaJan!: Remote Language Learning in Shared Virtual Space

    Kevin Wong, Takako Aikawa and Pattie Maes

    JaJan! is a telepresence system wherein remote users can learn a second language together while sharing the same virtual environment. JaJan! can support five aspects of language learning: learning in context; personalization of learning materials; learning with cultural information; enacting language-learning scenarios; and supporting creativity and collaboration. Although JaJan! is still in an early stage, we are confident that it will bring profound changes to the ways in which we experience language learning and can make a great contribution to the field of second language education.

  • KickSoul: A Wearable System for Foot Interactions with Digital Devices

    Pattie Maes, Joseph A. Paradiso, Xavier Benavides Palos and Chang Long Zhu Jin

    KickSoul is a wearable device that maps natural foot movements into inputs for digital devices. It consists of an insole with embedded sensors that track movements and trigger actions in devices that surround us. We present a novel approach to use our feet as input devices in mobile situations when our hands are busy. We analyze the foot's natural movements and their meaning before activating an action.
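The "analyze the movement before activating an action" step can be illustrated with a deliberately simplified classifier. This is a hypothetical sketch, not the project's code: it labels a window of insole accelerometer magnitudes by peak value, with thresholds and the action table invented for illustration.

```python
def classify_foot_motion(samples, kick_peak=2.5, step_peak=1.2):
    """Label a window of |acceleration| samples (in g) as kick, step, or idle."""
    peak = max(samples, default=0.0)
    if peak >= kick_peak:
        return "kick"   # sharp spike: a deliberate gesture
    if peak >= step_peak:
        return "step"   # ordinary walking: ignore
    return "idle"

# Hypothetical mapping from gestures to device actions
ACTIONS = {"kick": "dismiss notification", "step": None, "idle": None}

window = [0.9, 1.1, 3.0, 2.7, 1.0]   # a sharp spike: looks like a kick
label = classify_foot_motion(window)
print(label, "->", ACTIONS[label])
```

A real implementation would distinguish gestures from walking using richer features (duration, direction, gait context) rather than a single peak threshold.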

  • LuminAR

    Natan Linder, Pattie Maes and Rony Kubat

    LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into a new category of robotic, digital information devices. The LuminAR Bulb combines a Pico-projector, camera, and wireless computer in a compact form factor. This self-contained system provides users with just-in-time projected information and a gestural user interface, and it can be screwed into standard light fixtures everywhere. The LuminAR Lamp is an articulated robotic arm, designed to interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment their environments with media and information, while seamlessly connecting with laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces and objects into interactive spaces that blend digital media and information with the physical space. The project radically rethinks the design of traditional lighting objects, and explores how we can endow them with novel augmented-reality interfaces.

  • MARS: Manufacturing Augmented Reality System

    Rony Daniel Kubat, Natan Linder, Ben Weissmann, Niaja Farve, Yihui Saw and Pattie Maes

    Projected augmented reality in the manufacturing plant can increase worker productivity, reduce errors, gamify the workspace to increase worker satisfaction, and collect detailed metrics. We have built new LuminAR hardware customized for the needs of the manufacturing plant and software for a specific manufacturing use case.

  • Move Your Glass

    Niaja Farve and Pattie Maes

    Move Your Glass is an activity and behavior tracker that also tries to increase wellness by nudging the wearer to engage in positive behaviors.

  • Open Hybrid

    Valentin Heun, Shunichi Kasahara, James Hobin, Kevin Wong, Michelle Suh, Benjamin F Reynolds, Marc Teyssier, Eva Stern-Rodriguez, Afika A Nyati, Kenny Friedman, Anissa Talantikite, Andrew Mendez, Jessica Laughlin, Pattie Maes

    Open Hybrid is an open source augmented reality platform for physical computing and the Internet of Things. It is built on web technologies and Arduino.

  • PsychicVR

    Pattie Maes, Judith Amores Fernandez, Xavier Benavides Palos and Daniel Novy

    PsychicVR integrates a brain-computer interface device and virtual reality headset to improve mindfulness while enjoying a playful immersive experience. The fantasy that any of us could have superhero powers has always inspired us, and by using Virtual Reality and real-time brain activity sensing we are moving one step closer to making this dream real. We non-invasively monitor and record electrical activity of the brain and incorporate this data in the VR experience using an Oculus Rift and the MUSE headband. By sensing brain waves using a series of EEG sensors, the level of activity is fed back to the user via 3D content in the virtual environment. When users are focused, they are able to make changes in the 3D environment and control their powers. Our system increases mindfulness and helps achieve higher levels of concentration while entertaining the user.
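The feedback loop described above can be sketched as a small mapping function. This is an illustrative assumption about the pipeline, not the project's code: a noisy per-sample focus score (as an EEG headband might report) is smoothed with an exponential moving average, then gated so the in-world "powers" only activate above a focus threshold. The smoothing factor and threshold are invented.

```python
def effect_strength(focus_samples, alpha=0.3, threshold=0.6):
    """Map raw focus readings in [0, 1] to a VR effect strength in [0, 1]."""
    smoothed = 0.0
    for f in focus_samples:
        # exponential moving average damps sample-to-sample EEG noise
        smoothed = alpha * f + (1 - alpha) * smoothed
    if smoothed < threshold:
        return 0.0                       # user not focused: no powers
    # rescale [threshold, 1] onto [0, 1]
    return (smoothed - threshold) / (1 - threshold)

print(effect_strength([0.2, 0.3, 0.25]))  # distracted: effect stays off
```

The gating plus rescaling is what makes the effect feel earned: the environment stays inert until concentration is sustained, then responds proportionally.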

  • Reality Editor

    Valentin Heun, Eva Stern-Rodriguez, Marc Teyssier, Shuo Yan, Kevin Wong, Michelle Suh

    The Reality Editor is a new kind of tool for empowering you to connect and manipulate the functionality of physical objects. Just point the camera of your smartphone at an object and its invisible capabilities will become visible for you to edit. Drag a virtual line from one object to another and create a new relationship between these objects. With this simplicity, you are able to master the entire scope of connected objects.

  • Remot-IO: A System for Reaching into the Environment of a Remote Collaborator

    Judith Amores Fernandez, Xavier Benavides Palos and Pattie Maes

    Remot-IO is a system for mobile collaboration and remote assistance around Internet-connected devices. It uses two head-mounted displays, cameras, and depth sensors to enable a remote expert to be immersed in a local user's point of view, and to control devices in that user's environment. The remote expert can provide guidance through hand gestures that appear in real time in the local user's field of view as superimposed 3D hands. In addition, the remote expert can operate devices in the novice's environment and bring about physical changes by using the same hand gestures the novice would use. We describe a smart radio where the knobs of the radio can be controlled by local and remote users. Moreover, the user can visualize, interact, and modify properties of sound waves in real time by using intuitive hand gestures.

  • Scanner Grabber

    Tal Achituv

    Scanner Grabber is a digital police scanner that enables reporters to record, playback, and export audio, as well as archive public safety radio (scanner) conversations. Like a TiVo for scanners, it's an update on technology that has been stuck in the last century. It's a great tool for newsrooms. For instance, a problem for reporters is missing the beginning of an important police incident because they have stepped away from their desk at the wrong time. Scanner Grabber solves this because conversations can be played back. Also, snippets of exciting audio, for instance a police chase, can be exported and embedded online. Reporters can listen to files while writing stories, or listen to older conversations to get a more nuanced grasp of police practices or long-term trouble spots. Editors and reporters can use the tool for collaborating, or crowdsourcing/public collaboration.

  • ScreenSpire

    Pattie Maes, Tal Achituv, Chang Long Zhu Jin and Isa Sobrinho

    Screen interactions have been shown to contribute to increases in stress, anxiety, and deficiencies in breathing patterns. Since better respiration patterns can have a positive impact on wellbeing, ScreenSpire improves respiration patterns during information work using subliminal biofeedback. By using subtle graphical variations that are tuned to influence the user subconsciously, user distraction and cognitive load are minimized. To enable truly seamless interaction, we have adapted an RF-based sensor (the ResMed S+ sleep sensor) to serve as a screen-mounted, contact-free respiration sensor. Traditionally, respiration sensing is achieved with either invasive or on-skin sensors (such as a chest belt); having a contact-free sensor contributes to increased ease, comfort, and user compliance, since no special actions are required from the user.

  • ShowMe: Immersive Remote Collaboration System with 3D Hand Gestures

    Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

    ShowMe is an immersive mobile collaboration system that allows remote users to communicate with peers using video, audio, and gestures. With this research, we explore the use of head-mounted displays and depth sensor cameras to create a system that (1) enables remote users to be immersed in another person's view, and (2) offers a new way of sending and receiving the guidance of an expert through 3D hand gestures. With our system, both users are surrounded in the same physical environment and can perceive real-time inputs from each other.

  • Skrin

    Pattie Maes, Joseph A. Paradiso, Xin Liu and Katia Vega

    Skrin is an exploration of the body's skin surface as a digital interface, using embedded electronics and prosthetics. Human skin is a means of protection, a mediator of our senses, and a presentation of ourselves. Through several projects, we expand the expressive capacity of the body surface and emphasize the dynamic aesthetics of body texture by technological means.

  • SmileCatcher

    Niaja Farve and Pattie Maes

    Our hectic and increasingly digital lives can have a negative effect on our health and wellbeing. Some authors have argued that we socialize less frequently with other people in person and that people feel increasingly lonely. Loneliness has been shown to significantly affect health and wellbeing in a negative way. To combat this, we designed a game, SmileCatcher, which encourages players to engage in in-person, social interactions and get others to smile. Participants wear a device that takes regular pictures of what is in front of them and the system analyzes the pictures captured to detect the number of smiles.

  • STEM Accessibility Tool

    Pattie Maes and Rahul Namdev

    We are developing a very intuitive and interactive platform to make complex information--especially science, technology, engineering, and mathematics (STEM) material--truly accessible to blind and visually impaired students by using a tactile device with no loss of information compared with printed materials. A key goal of this project is to develop tactile information-mapping protocols through which the tactile interface can best convey educational and other graphical materials.

  • TagMe

    Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

    TagMe is an end-user toolkit for easy creation of responsive objects and environments. It consists of a wearable device that recognizes the object or surface the user is touching. The user can make everyday objects come to life through the use of RFID tag stickers, which are read by an RFID bracelet whenever the user touches the object. We present a novel approach to create simple and customizable rules based on emotional attachment to objects and social interactions of people. Using this simple technology, the user can extend their application interfaces to include physical objects and surfaces into their personal environment, allowing people to communicate through everyday objects in very low-effort ways.
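The rule idea above amounts to a small lookup from tag reads to user-defined actions. The sketch below is purely illustrative (the tag IDs and actions are invented, and a real bracelet would deliver reads via its RFID reader's API, not a function call):

```python
# Hypothetical user-defined rules: RFID tag ID -> action to perform on touch
RULES = {
    "tag:mug":  lambda: "send 'thinking of you' to partner",
    "tag:door": lambda: "toggle hallway light",
}

def on_touch(tag_id, rules=RULES):
    """Fire the rule for a touched tag; unmapped tags do nothing."""
    action = rules.get(tag_id)
    return action() if action else None

print(on_touch("tag:mug"))      # runs the mug's rule
print(on_touch("tag:unknown"))  # unmapped tag: None
```

Keeping rules as plain data is what makes the toolkit end-user editable: adding a responsive object is one sticker plus one table entry.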

  • The Challenge

    Natasha Jaques, Niaja Farve, Pattie Maes and Rosalind W. Picard

    Mental wellbeing is intimately tied to both social support and physical activity. The Challenge is a tool aimed at promoting social connections and decreasing sedentary activity in a workplace environment. Our system asks participants to sign up for short physical challenges and pairs them with a partner to perform the activity. Social obligation and social consensus are leveraged to promote participation. Two experiments were conducted in which participants’ overall activity levels were monitored with a fitness tracker. In the first study, we show that the system can improve users' physical activity, decrease sedentary time, and promote social connection. As part of the second study, we provide a detailed social network analysis of the participants, demonstrating that users’ physical activity and participation depends strongly on their social community.


  • WATCH

    Pattie Maes and Niaja Farve

    WATCH is a system that measures the influence a new time-management interface has on improving a user's habits. Users set goals for each of the activities detected by the app. Detected activities include physical activity and time spent in pre-defined locations. An Android app (WATCH) on their personal phones tracks their activities (running, walking, and sitting) as well as their GPS location. Their progress toward their goals is displayed on their home screens as a pie chart.