One of the system's principal
advantages is that no user-worn "viewing
aids" for image separation
(such as LCD shutter goggles, polarized or
wavelength-multiplexing
glasses) are necessary. The system takes
care of delivering the
correct stereo information to the viewer's eyes, as
s/he moves throughout
the viewzone.
The system layout is shown
below. First, an infrared camera monitors a
viewer moving within
the viewzone. The video images are used as input
to head-tracking software,
which computes the viewer's center-head
position and reports
it to the rendering software.
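
The head-tracking algorithm itself is not specified here; as one
minimal sketch, assuming an OpenCV face detector applied to the
infrared frames (the detector choice, camera index, and pixel
coordinate convention are illustrative assumptions, not details of
this system), the center-head position could be computed as follows:

    # Illustrative head tracker: detects the largest face in each infrared
    # frame and reports its center as the viewer's center-head position.
    # The cascade detector and coordinate conventions are assumptions.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def center_head_position(frame):
        """Return (x, y) pixel coordinates of the viewer's head center,
        or None if no face is found in this frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
        return (x + w / 2.0, y + h / 2.0)

    if __name__ == "__main__":
        cam = cv2.VideoCapture(0)   # stand-in for the infrared camera
        ok, frame = cam.read()
        if ok:
            print(center_head_position(frame))
        cam.release()

Mapping the reported pixel coordinates into viewzone coordinates would
require a camera calibration step, which is not shown.
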
Upon receiving the viewer's
center-head position, the rendering
software generates the
left- and right-eye views of a 3D scene,
correctly rendered for
the viewer's head position, and displays them on
the image LCDs. The rendering
software also generates an image that
acts as a "linearly polarizing
mask" for display on the viewer-tracking
LCD. The "mask" image
divides the viewer-tracking LCD horizontally
into two regions of crossed
polarization; the location of the boundary
between these two regions
corresponds to the viewer's center-head
position within the viewzone.
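
As an illustrative sketch of the rendering software's two outputs, the
left- and right-eye viewpoints can be offset from the center-head
position by half the interocular distance, and the "mask" can be built
as a two-level image whose boundary column tracks the viewer's
horizontal head position. The interocular distance, LCD resolution,
viewzone width, and linear position-to-column mapping below are
assumed values, not system parameters:

    # Sketch of the rendering software's two outputs:
    #  (1) left/right eye viewpoints derived from the center-head position,
    #  (2) a two-region "polarizing mask" image for the viewer-tracking LCD.
    import numpy as np

    INTEROCULAR_M = 0.065        # assumed eye separation (metres)
    MASK_W, MASK_H = 640, 480    # assumed viewer-tracking LCD resolution
    VIEWZONE_WIDTH_M = 1.0       # assumed width of the viewzone (metres)

    def eye_positions(head_xyz):
        """Left- and right-eye positions, offset horizontally from the
        reported center-head position."""
        head = np.asarray(head_xyz, dtype=float)
        offset = np.array([INTEROCULAR_M / 2.0, 0.0, 0.0])
        return head - offset, head + offset   # (left_eye, right_eye)

    def polarizing_mask(head_x_m):
        """Two-level mask for the viewer-tracking LCD; the two regions
        stand for the two crossed polarization states, and the boundary
        column follows the viewer's horizontal head position."""
        boundary = int(round((head_x_m / VIEWZONE_WIDTH_M) * MASK_W))
        boundary = max(0, min(MASK_W, boundary))
        mask = np.zeros((MASK_H, MASK_W), dtype=np.uint8)
        mask[:, boundary:] = 255   # second region: the other polarization
        return mask

    left_eye, right_eye = eye_positions([0.40, 0.0, 1.2])
    # Left- and right-eye views of the 3D scene would be rendered from
    # these two viewpoints and sent to the image LCDs (renderer not shown).
    mask = polarizing_mask(0.40)
    print(left_eye, right_eye, mask.shape)
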
The output polarizer of
each image LCD is aligned with one of the
polarized regions of
the viewer-tracking LCD. The crossed-polarized
stereo images are combined
with a beamsplitter and imaged onto the
viewer-tracking LCD,
which, in turn, is imaged into the viewzone by
a lenticular telescope.
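
The correspondence between a point on the viewer-tracking LCD and a
point in the viewzone plane is set by the telescope's lateral
magnification; a rough sketch of that mapping, with an assumed
magnification value rather than the system's actual optical
parameters, is:

    # Illustrative imaging relation: a point x_lcd on the viewer-tracking
    # LCD maps to a point x_view in the viewzone plane under the
    # telescope's lateral magnification (assumed value below).
    MAGNIFICATION = -8.0   # assumed lateral magnification (inverted image)

    def lcd_to_viewzone(x_lcd_m):
        """Viewzone-plane position (metres) of an LCD-plane point (metres)."""
        return MAGNIFICATION * x_lcd_m

    def viewzone_to_lcd(x_view_m):
        """Inverse mapping: where on the LCD to place the mask boundary
        so that it is imaged onto the viewer's center-head position."""
        return x_view_m / MAGNIFICATION

    print(viewzone_to_lcd(0.40))   # -0.05 m from the LCD's optical axis
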
In effect, the system
optically projects virtual viewer-tracking
polarized glasses onto
the eyes of a moving viewer, who is free to
observe a 3D scene from
many perspectives. Thus, the system
provides a correctly rendered
autostereoscopic view of a 3D scene
anywhere in the plane
of the viewzone.