Tuesday, April 17, 2012

Three-Dimensional Thoughts

So I have some ideas about new 3D viewing systems that don't necessarily require the wearing of glasses or the development of special monitors or projectors. They can be implemented using existing hardware.

Lytro Light Field Camera
First Idea : The Lytro Light Field Camera captures the direction of light when it takes a picture, meaning that you can re-focus the picture after taking it. This is cool.

I propose that current cinema-quality 3D cameras start capturing images this way; once that is done, eye-tracking software/hardware can be used to determine the viewer's point of focus and the images can be adjusted accordingly. The result: dynamic gaze-shift compensation during 3D viewing. No longer is the viewer forced to focus on the part of the screen that the director wants them to be looking at.
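As a rough sketch of the gaze-driven refocusing step (the function name and the per-pixel depth map are my own assumptions for illustration, not anything an actual light-field camera exposes): the eye tracker reports a gaze point on screen, and the system picks the focal plane as the typical depth in a small window around that point.

```python
import numpy as np

def refocus_target(depth_map, gaze_xy, window=15):
    """Pick the focal depth for a light-field refocus from the viewer's gaze.

    depth_map: 2D array of per-pixel depths (meters), assumed recoverable
               from the light-field capture.
    gaze_xy:   (x, y) pixel coordinates reported by the eye tracker.
    window:    half-size of the neighborhood sampled around the gaze point.
    """
    x, y = gaze_xy
    h, w = depth_map.shape
    x0, x1 = max(0, x - window), min(w, x + window + 1)
    y0, y1 = max(0, y - window), min(h, y + window + 1)
    # Median is robust to the gaze point landing on a depth edge,
    # e.g. half on a foreground actor and half on the background.
    return float(np.median(depth_map[y0:y1, x0:x1]))
```

The refocused frame would then be synthesized from the light-field data at the returned depth; that synthesis step is the part the Lytro-style capture makes possible.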

But it's not just about freeing the viewer from the attentional directing of the film's director (whew); the more important effect is that it reinforces an important depth cue that our visual systems use automatically: accommodation. Accommodation is the flexing or relaxing of the ciliary muscle surrounding the lens of the eye, which causes the lens to bulge (more convex) or flatten (less convex); this focuses light onto the retina at the back of the eye. The optimal shape of the lens depends on the distance of the object being fixated, so lens curvature (and by extension ciliary contraction) at the point of optimal focus carries information about the distance of that object relative to the eye.
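The relationship the cue rests on is simple optics: accommodation demand, measured in diopters, is the reciprocal of the fixation distance in meters. A one-line helper makes the mapping concrete (the function name is my own):

```python
def accommodation_demand(distance_m: float) -> float:
    """Accommodation demand in diopters for an object at the given distance.

    A diopter is the reciprocal of distance in meters: an object at 0.5 m
    demands 2 D of accommodation, one at 2 m demands 0.5 D, and a distant
    object demands essentially none (a relaxed lens).
    """
    if distance_m <= 0:
        raise ValueError("fixation distance must be positive")
    return 1.0 / distance_m
```

This is why conventional 3D displays conflict with accommodation: every object, near or far, sits at the fixed diopter value of the physical screen.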

So, by artificially adjusting the images to match the relative distance of the object being viewed, the system would be providing depth information in a way that is readily understood by the brain and is completely missing in classical 2D monitors (even when used in the most modern 3D viewing systems).

Second Idea : Cross-eyed 3D effects are interesting, and simple enough to understand; they exploit the brain's ability to fuse two slightly different images into one depth-imbued image. This binocular disparity is the same feature of visual perception that other 3D technologies cater to.

My proposal is to create a program that takes whatever is outgoing to the monitor, duplicates it into two side-by-side images, and offsets them, creating a stereoscopic pair. Additionally, eye-tracking software/hardware would track the orientation and distance of the eyes relative to the monitor. The eye tracker would report the tilt of the viewer's interocular axis, and the splitting program would split the screen across an axis exactly perpendicular to it. The amount of offset would depend on the distance of the eyes from the screen and be adjusted dynamically.

Third Idea : If you successfully combined both of these concepts into one fluidly dynamic system, you would have one of the most immersive 3D effects imaginable o.O (you know, besides real life).


P.S. : If you happen to read this and then patent/invent a working system based on my ideas, please give me some credit/money. Thanks.

1 comment:

  1. I don't know how this would turn out when combined with motion parallax, though; how do you make that user-dynamic? In a videogame there's much more room for dynamic interaction, but in movies the motion path, and the point around which it revolves, is set by the director. That leaves a little power over attention in the director's hands, which is probably a good thing.
    Film as an art is very much about directing the attention of the viewer, so this would preserve a part of that tradition.
