Searchlight: Tracking Light in Space

In this experiment, I used 5 phototransistors to locate the position of a light source on a 2D plane, and drove interactive demos via serial communication.

A phototransistor is an electronic component that detects the brightness level of its environment.

Four of the five phototransistors are installed on the corners of a square on the PCB (printed circuit board), while the fifth is located at the center. This allows for a rough estimation of the location of the light source, by polling the brightness level perceived by each phototransistor.

This custom board, designed from scratch, is built around the SAMD21E microcontroller.

I used KiCad to draw the schematic and layout. In the image above, the phototransistors are labelled "phototransistors".
I wrote a quick p5.js sketch to decorate the empty areas. 

I milled the traces on a Roland SRM-20 machine, then hand-soldered the components. In the image below, you can see the result, as well as an earlier iteration, which contains only two phototransistors.

I used the Arduino IDE to program the board, after flashing it with an appropriate bootloader.

The embedded program is extremely simple: it reads the phototransistor levels, formats them as CSV (comma-separated values), and sends them over serial when the board is plugged into my laptop's USB port.
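A minimal sketch of what such firmware could look like; the analog pin assignments, baud rate, and sampling interval here are my assumptions, not the board's actual wiring:

// Read five phototransistors and stream one CSV record per line over serial.
// Pin numbers and baud rate are illustrative, not the real pinout of the board.
const int SENSOR_PINS[5] = {A0, A1, A2, A3, A4};

void setup() {
  Serial.begin(115200);
}

void loop() {
  for (int i = 0; i < 5; i++) {
    Serial.print(analogRead(SENSOR_PINS[i]));  // 10-bit reading, 0-1023
    if (i < 4) Serial.print(',');
  }
  Serial.println();                            // newline terminates one record
  delay(10);                                   // roughly 100 samples per second
}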

On the other end of the transmission is an openFrameworks app. openFrameworks is an open-source C++ toolkit for creative coding. It is co-authored by my professor Zach Lieberman, and is the tool of choice here at the Future Sketches group.

Using openFrameworks' built-in support for serial communication, the app parses and processes the data sent from the microcontroller.
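A rough sketch of the receiving side, using openFrameworks' ofSerial; the port name is machine-specific, and the five-value check mirrors the CSV format above:

// ofApp.h (excerpt)
ofSerial serial;
std::string lineBuffer;
std::vector<float> lum;   // latest luminosity readings, one per sensor

// ofApp.cpp (excerpt)
void ofApp::setup() {
    serial.setup("/dev/tty.usbmodem14101", 115200);   // port name is machine-specific
}

void ofApp::update() {
    while (serial.available() > 0) {
        int b = serial.readByte();
        if (b == OF_SERIAL_NO_DATA || b == OF_SERIAL_ERROR) break;
        if (b == '\n') {
            // one complete CSV record: split on commas and convert to floats
            std::vector<std::string> parts = ofSplitString(ofTrim(lineBuffer), ",");
            if (parts.size() == 5) {
                lum.clear();
                for (auto & p : parts) lum.push_back(ofToFloat(p));
            }
            lineBuffer.clear();
        } else {
            lineBuffer += (char) b;
        }
    }
}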

The first test was to plot the sensor levels:

As you can see, due to the strategic positioning of the phototransistor components, each of them receives a slightly different light level when I illuminate the board with the flashlight on my mobile phone.
The next step was to solve the puzzle of locating the light source (assuming there is only one) on the 2D plane, given the readings of the five sensors.

I came up with a relatively simplistic formula: the (fixed) coordinates of the five sensors are blended, weighted by the luminosity read at each sensor:

lum_sum = lum1 + lum2 + lum3 + ...
light_pos = pos1 * lum1/lum_sum + pos2 * lum2/lum_sum + ...

The estimate tends to be biased toward the center, which is expected given the existence of the central sensor. I multiplied the vector by a factor of 2, an empirically chosen number, thus producing a reasonably good estimate.
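In code, the estimation could look something like the sketch below; the sensor coordinates are illustrative (the real ones come from the PCB layout), and interpreting "the vector" as the offset from the board center is my reading of the correction:

// Sensor positions in normalized board coordinates: four corners plus the center.
const std::vector<glm::vec2> SENSOR_POS = {
    {0, 0}, {1, 0}, {0, 1}, {1, 1}, {0.5f, 0.5f}
};

glm::vec2 estimateLightPos(const std::vector<float> & lum) {
    float lumSum = 0;
    for (float l : lum) lumSum += l;
    if (lumSum <= 0) return {0.5f, 0.5f};          // no light at all: fall back to center

    glm::vec2 pos(0, 0);
    for (size_t i = 0; i < lum.size(); i++) {
        pos += SENSOR_POS[i] * (lum[i] / lumSum);  // luminosity-weighted blend
    }

    // Counteract the pull toward the center by scaling the offset from the
    // board center by the empirically chosen factor of 2.
    glm::vec2 center(0.5f, 0.5f);
    return center + (pos - center) * 2.0f;
}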

In the visualization below, the size of each colorful circle indicates the perceived luminosity (on a linear scale), while the white dot is the estimated light position.
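Continuing the sketches above, the drawing code for this view might be as simple as the following; the radius mapping and colors are my choices:

void ofApp::draw() {
    ofBackground(0);
    float s = 300.0f;                                   // board drawn as a 300 px square
    for (size_t i = 0; i < lum.size(); i++) {
        ofSetColor(ofColor::fromHsb(i * 255 / 5, 200, 255));
        float radius = ofMap(lum[i], 0, 1023, 2, 60);   // raw 10-bit reading -> radius
        ofDrawCircle(SENSOR_POS[i].x * s, SENSOR_POS[i].y * s, radius);
    }
    glm::vec2 p = estimateLightPos(lum) * s;
    ofSetColor(255);
    ofDrawCircle(p.x, p.y, 6);                          // estimated light position
}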

For a more intriguing visualization, I decided to fit a 3D surface onto the five data points. There are multiple ways to achieve this; I went with a simple approach, using bicubic interpolation.
Bicubic interpolation is a method for upsampling data on a regular grid, and is commonly used for smoothly upscaling photographs.
I synthesized four additional data points to create the 3x3 regular grid, by taking the midpoints of the four sides of the square formed by the sensors.
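A sketch of how this might be implemented: the value at each synthesized midpoint is taken as the average of the two adjacent corner readings (my assumption; other blends would work too), and the 3x3 grid is then sampled with a Catmull-Rom-style bicubic kernel, clamping indices at the borders:

#include <algorithm>
#include <vector>

// Build the 3x3 grid from the five readings. lum[0..3] are the corners
// (top-left, top-right, bottom-left, bottom-right) and lum[4] is the center;
// this ordering is an assumption about the CSV layout.
void buildGrid(const std::vector<float> & lum, float grid[3][3]) {
    grid[0][0] = lum[0];  grid[0][2] = lum[1];
    grid[2][0] = lum[2];  grid[2][2] = lum[3];
    grid[1][1] = lum[4];
    grid[0][1] = (lum[0] + lum[1]) * 0.5f;   // midpoints of the four sides,
    grid[2][1] = (lum[2] + lum[3]) * 0.5f;   // synthesized as corner averages
    grid[1][0] = (lum[0] + lum[2]) * 0.5f;
    grid[1][2] = (lum[1] + lum[3]) * 0.5f;
}

// 1D Catmull-Rom cubic through p0..p3, evaluated at t in [0, 1] between p1 and p2.
float cubic(float p0, float p1, float p2, float p3, float t) {
    return p1 + 0.5f * t * (p2 - p0 + t * (2*p0 - 5*p1 + 4*p2 - p3
                          + t * (3*(p1 - p2) + p3 - p0)));
}

// Sample the 3x3 grid at (u, v) in [0, 1] x [0, 1], clamping indices at the edges.
float sampleBicubic(const float grid[3][3], float u, float v) {
    float gx = u * 2.0f, gy = v * 2.0f;              // the grid spans 2 cells per axis
    int ix = std::min((int) gx, 1), iy = std::min((int) gy, 1);
    float tx = gx - ix, ty = gy - iy;
    auto clampIdx = [](int i) { return std::max(0, std::min(2, i)); };
    float col[4];
    for (int m = -1; m <= 2; m++) {
        int r = clampIdx(iy + m);
        col[m + 1] = cubic(grid[r][clampIdx(ix - 1)], grid[r][clampIdx(ix)],
                           grid[r][clampIdx(ix + 1)], grid[r][clampIdx(ix + 2)], tx);
    }
    return cubic(col[0], col[1], col[2], col[3], ty);
}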
I generated a mesh in openFrameworks to visualize the 3D surface. Using the same bicubic interpolation, as well as OpenGL's vertex color feature, the color coding of the sensors is also smoothly spread across the surface. In the image below, you can see the mesh corresponding to the sensor levels in the previous image.
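Reusing the sampleBicubic helper from the sketch above, the mesh construction could look roughly like this; the resolution, scale and palette are my own choices rather than the project's actual values:

// Build an N x N surface mesh whose height and vertex color follow the
// interpolated luminosity field; OpenGL blends the vertex colors across
// triangles, which gives the smooth gradient on the surface.
ofMesh buildSurface(const float grid[3][3]) {
    const int N = 40;                   // subdivisions per axis
    const float size = 300.0f;          // width of the surface in pixels
    const float heightScale = 200.0f;   // vertical exaggeration

    ofMesh mesh;
    mesh.setMode(OF_PRIMITIVE_TRIANGLES);
    for (int j = 0; j <= N; j++) {
        for (int i = 0; i <= N; i++) {
            float u = i / (float) N, v = j / (float) N;
            float h = sampleBicubic(grid, u, v) / 1023.0f;   // normalize the raw reading
            mesh.addVertex(glm::vec3(u * size, v * size, h * heightScale));
            mesh.addColor(ofFloatColor(h, 0.3f, 1.0f - h));  // illustrative palette
        }
    }
    for (int j = 0; j < N; j++) {
        for (int i = 0; i < N; i++) {
            int a = j * (N + 1) + i, b = a + 1, c = a + N + 1, d = c + 1;
            mesh.addIndex(a); mesh.addIndex(b); mesh.addIndex(c);
            mesh.addIndex(b); mesh.addIndex(d); mesh.addIndex(c);
        }
    }
    return mesh;
}

Calling mesh.draw() inside ofApp::draw(), with depth testing enabled, then renders the surface each frame.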

Having the basic visualizations in place, I explored ways of driving interactive experiences with this setup.
openFrameworks allows low-level access to the computer's sound buffer, so I synthesized the sound by sampling sine waves and writing the floating-point samples directly to the buffer.
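A minimal sketch of that kind of synthesis through openFrameworks' sound stream callback; the sample rate, buffer size, and the mapping from the estimated light position to frequency and amplitude are assumptions on my part:

// ofApp.h (excerpt)
ofSoundStream soundStream;
float phase = 0;
float frequency = 440;     // updated each frame from the light position
float amplitude = 0.2f;

// ofApp.cpp (excerpt)
void ofApp::setup() {
    ofSoundStreamSettings settings;
    settings.setOutListener(this);
    settings.sampleRate = 44100;
    settings.numOutputChannels = 2;
    settings.bufferSize = 512;
    soundStream.setup(settings);
}

void ofApp::update() {
    glm::vec2 p = estimateLightPos(lum);              // from the earlier sketch
    frequency = ofMap(p.x, 0, 1, 220, 880, true);     // one axis controls pitch
    amplitude = ofMap(p.y, 0, 1, 0.0f, 0.5f, true);   // the other controls volume
}

void ofApp::audioOut(ofSoundBuffer & buffer) {
    float phaseStep = TWO_PI * frequency / buffer.getSampleRate();
    for (size_t i = 0; i < buffer.getNumFrames(); i++) {
        float sample = sin(phase) * amplitude;
        buffer[i * buffer.getNumChannels()]     = sample;   // left channel
        buffer[i * buffer.getNumChannels() + 1] = sample;   // right channel
        phase += phaseStep;
    }
}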
The first experiment is a tiny "theremin", where the frequency of the sound is controlled by movement along one axis, while the amplitude is controlled by movement along the other. It is a natural example that comes to mind once one is able to detect motion on two or more axes.

You can view a video recording of the demo below:

Continuing the exploration of light-driven sound, the second demo is a tiny "piano". Instead of controlling a continuous tone, each phototransistor is turned into a piano "key", and shadow, the inverse of light, is used to trigger a sound when a finger obscures the sensor.

Limited by the number of phototransistors, I set the frequencies to those of the five notes Do, Re, Mi, Sol and La, the pentatonic scale of classical Chinese music.

An interesting feature is that since the sensor readings are analog, I can control the loudness of the sound as my finger gets closer to or further from the sensor, allowing for some subtle effects. This makes the "instrument" somewhat closer to a real piano than, say, a computer keyboard.
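A sketch of how the key logic might work, assuming a C major pentatonic (C, D, E, G, A around middle C) and treating each key's loudness as how much darker its sensor is than an ambient baseline captured at startup; the baseline handling is my assumption:

// ofApp.h (excerpt)
const float KEY_FREQ[5] = {261.63f, 293.66f, 329.63f, 392.00f, 440.00f};  // C4 D4 E4 G4 A4
std::vector<float> ambient;   // per-sensor baseline, captured at startup
std::vector<float> keyAmp;    // per-key amplitude, 0..1
float phases[5] = {0};

// ofApp.cpp (excerpt)
void ofApp::updateKeys() {
    keyAmp.resize(lum.size());
    for (size_t i = 0; i < lum.size(); i++) {
        // the darker the sensor relative to its baseline, the louder the key
        float shadow = (ambient[i] - lum[i]) / ambient[i];
        keyAmp[i] = ofClamp(shadow, 0.0f, 1.0f);
    }
}

void ofApp::audioOut(ofSoundBuffer & buffer) {
    for (size_t i = 0; i < buffer.getNumFrames(); i++) {
        float sample = 0;
        for (int k = 0; k < 5; k++) {
            phases[k] += TWO_PI * KEY_FREQ[k] / buffer.getSampleRate();
            sample += sin(phases[k]) * keyAmp[k] * 0.2f;   // mix the five keys
        }
        buffer[i * buffer.getNumChannels()]     = sample;
        buffer[i * buffer.getNumChannels() + 1] = sample;
    }
}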

You can view a video recording of the demo below:

Steering away from the ear and back to the eye, in the third demo I created a 3D scene in which the position of the virtual light is controlled by that of the light source in the real world. In other words, a virtual scene illuminated by a real light!

For the sake of a demo, I populated the scene with simple primitives such as spheres, cubes and cones. But one can definitely think of more interesting applications of this concept: for example, a mixed-reality experience in which the flashlight the player holds illuminates not only objects in the real world, but also virtual ones on a screen.
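A sketch of the core of such a scene in openFrameworks; the mapping from the estimated 2D position onto a plane above the virtual scene, and the scene contents themselves, are illustrative:

// ofApp.h (excerpt)
ofLight light;
ofEasyCam cam;

// ofApp.cpp (excerpt)
void ofApp::draw() {
    ofEnableDepthTest();
    ofEnableLighting();
    cam.begin();

    // Place the virtual point light according to the estimated real-world position.
    glm::vec2 p = estimateLightPos(lum);
    light.setPointLight();
    light.setPosition(ofMap(p.x, 0, 1, -300, 300), 200, ofMap(p.y, 0, 1, -300, 300));
    light.enable();

    // A few simple primitives to catch the light.
    ofSetColor(200);
    ofDrawSphere(-120, 0, 0, 60);
    ofDrawBox(0, 0, 0, 100);
    ofDrawCone(120, 0, 0, 50, 120);

    light.disable();
    cam.end();
    ofDisableLighting();
    ofDisableDepthTest();
}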

You can view a video recording of the demo below:

This project was done as part of an assignment for the class How to Make (Almost) Anything, taught by Prof. Neil Gershenfeld, here at MIT.