General S3D shooting tips

Posted on 25 February 2013

There are some nice websites with information about shooting S3D; a good starting point is the Dashwood 3D beginner's guide quoted below.

In general, some terms:
> Interaxial is the distance between the centers of the two lenses/sensors. It is usually chosen relative to the i.o. (interocular) separation = the distance between your eyes ≈ 65mm.
> Ortho-stereo = interaxial roughly equal to the interocular distance (~65mm).
> Hypo-stereo = smaller interaxial = used for macro work or to accommodate a big theater screen. Be aware of gigantism (mouse point of view).
> Hyper-stereo = greater interaxial = greater depth effect for mountains or city skylines. Be aware of dwarfism (the Madurodam effect).
> Convergence is the angle you choose with your eyes (or cameras) towards the object of interest to merge the two images into a single image. This influences the parallax.
> Parallax is the difference between the two images, the retinal disparity that helps us determine depth AT CLOSE RANGE. At greater distances there is almost no offset between the two images, so we need other cues such as relative size.
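
To tie these terms together, here is a tiny sketch (the 65mm interocular and the ±5mm tolerance are just nominal values, not hard rules):

```python
# Rough illustration of the interaxial terms above. The 65 mm
# interocular and the +/- 5 mm tolerance are nominal values only.
INTEROCULAR_MM = 65.0

def classify_interaxial(interaxial_mm, tolerance_mm=5.0):
    """Label a stereo rig as hypo-, ortho- or hyper-stereo."""
    if interaxial_mm < INTEROCULAR_MM - tolerance_mm:
        return "hypo-stereo (watch out for gigantism)"
    if interaxial_mm > INTEROCULAR_MM + tolerance_mm:
        return "hyper-stereo (watch out for dwarfism)"
    return "ortho-stereo"

print(classify_interaxial(30))   # hypo-stereo (watch out for gigantism)
print(classify_interaxial(65))   # ortho-stereo
print(classify_interaxial(200))  # hyper-stereo (watch out for dwarfism)
```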

From http://www.dashwood3d.com/blog/beginners-guide-to-shooting-stereoscopic-3d/:
Here’s an example of how your eyes use convergence in the real world. Hold a pen about one foot in front of your face and look directly at it. You will feel your eyes both angle towards the pen in order to converge on it, creating a single image of the pen. What you may not immediately perceive is that everything behind the pen appears as a double image (diverged.) Now look at the background behind the pen and your pen will suddenly appear as two pens because your eyes are no longer converged on it. This “double-image” is retinal disparity at work and it is helping your brain determine which object is in front of the other.

In stereoscopic 3D we set the zero parallax point by changing the convergence and/or the interaxial. An object with zero parallax will be seen as ‘on your screen window’, while an object with positive parallax will be ‘behind’ or ‘inside’ your screen window. Negative parallax is the above-mentioned pen: between the viewer and the screen.

Converging can also be done during post-production by sliding the two images horizontally, but you will lose horizontal resolution! This seems to be the same as real convergence (toe-in, changing the angle between the lenses), but it is not exactly the same: because the sensors' orientation does not change, there might be lens distortion, but there will be no keystoning issues. That's why many choose a perfectly parallel setup and then set convergence in post.
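
A minimal sketch of that post-convergence idea, sometimes called horizontal image translation: slide the two views relative to each other and crop the strips that no longer overlap, which is exactly where the horizontal resolution is lost. The numpy-based function below is just an illustration of the principle, not any particular tool's implementation.

```python
import numpy as np

def converge_in_post(left, right, shift_px):
    """Horizontal image translation: shift the views relative to each
    other by shift_px pixels and crop the non-overlapping edges.
    Positive shift_px adds positive parallax (scene pushed back);
    negative shift_px pulls the scene towards the viewer. The output
    frames are shift_px narrower, i.e. the horizontal resolution loss."""
    s = abs(shift_px)
    if s == 0:
        return left, right
    if shift_px > 0:
        # left view slides left relative to the right view
        return left[:, s:], right[:, :-s]
    # left view slides right relative to the right view
    return left[:, :-s], right[:, s:]

# Example with dummy 1080p frames:
L = np.zeros((1080, 1920, 3), dtype=np.uint8)
R = np.zeros((1080, 1920, 3), dtype=np.uint8)
L2, R2 = converge_in_post(L, R, shift_px=20)
print(L2.shape, R2.shape)  # both (1080, 1900, 3): 20 pixels narrower
```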

I am wondering whether the Z10000 uses the 2304-pixel width (the 2D photo picture width) for convergence instead of a real toe-in technique. I can't find information on this, but I am pretty sure it does.

But back on topic: if an object's images are offset in the direction of the corresponding eye (i.e. the left image is offset to the left of the corresponding right image), then this is positive parallax: the object will appear to be behind the screen. If the object has a negative offset (i.e. the left image is offset to the right), it has negative parallax (in front of the screen) and will cause your eyes to cross (like with the pen in the example above) to converge on it.
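
Put as a sign convention (measuring x the same way in both views, parallax = right-eye position minus left-eye position), a hypothetical helper would look like this:

```python
def parallax_px(x_left, x_right):
    """Horizontal parallax of a matched feature in pixels; x_left and
    x_right are its x-coordinates in the left and right images."""
    return x_right - x_left

def depth_placement(x_left, x_right):
    p = parallax_px(x_left, x_right)
    if p > 0:
        return "positive parallax: behind the screen"
    if p < 0:
        return "negative parallax: in front of the screen (eyes cross)"
    return "zero parallax: on the screen plane"

print(depth_placement(x_left=950, x_right=970))  # positive parallax: behind the screen
print(depth_placement(x_left=970, x_right=950))  # negative parallax: in front of the screen (eyes cross)
```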

> Depth Bracket: this is the actual distance between the closest and the furthest object; it has to fit within your Parallax Budget. The ‘borders’ of the parallax are called the budget: your calculated maximum positive parallax plus the desired maximum negative parallax, expressed as a percentage of screen width. You calculate your maximum positive parallax by dividing the i.o. by the screen width: 65mm / screen width = maximum positive parallax. This is also called Native Pixel Parallax (NPP): 65mm / screen width * pixel width = NPP in pixels. E.g. for my 23″ display: 6.5cm / 50cm * 1920 = 250 pixels. The smaller the screen, the bigger the parallax can be.
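
The same NPP calculation as a worked example (using the 23″ / 50cm / 1920-pixel figures from above; the 6.5cm interocular is the nominal value used throughout this post):

```python
def native_pixel_parallax(screen_width_cm, horizontal_pixels, interocular_cm=6.5):
    """Maximum positive parallax in pixels: the on-screen separation
    corresponding to the interocular distance (objects at infinity)."""
    return interocular_cm / screen_width_cm * horizontal_pixels

# 23" display, roughly 50 cm wide, 1920 pixels across:
print(round(native_pixel_parallax(50, 1920)))  # ~250 pixels
```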

> 1/30th rule: interaxial = distance between the camera and the closest object / 30 (a rule of thumb; see the sketch below).
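
A one-liner, really, but spelled out so the units are explicit (millimetres in, millimetres out; the variable names are my own):

```python
def interaxial_rule_of_thumb(nearest_object_distance_mm):
    """1/30th rule: interaxial ~= distance to the closest object / 30."""
    return nearest_object_distance_mm / 30.0

# A closest object 2 m away suggests roughly an ortho-stereo interaxial:
print(round(interaxial_rule_of_thumb(2000), 1))  # ~66.7 mm
```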

> Window Violation: if you give an object negative parallax, make sure it is not touching the edges of your frame. It is not natural for something to come out of your screen while being cut off by the screen edge that is behind it. If it happens: use soft masks with color correction to put it in the shadow.
> Divergence: if you have too much positive parallax you would have to diverge your eyes (past infinity), which can become painful as it is not a natural position. See Depth Bracket above; a small check is sketched after this list.
> Disparities: you can't use different lenses / sensors / filters etc. for the two eyes, and you also can't rotate (as in stabilize) the images individually, so turn that off! Any kind of disparity between the images can break the 3D effect or cause eyestrain. This is why you want to calibrate your setup and use 3D post-production tools.
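
The divergence check mentioned above, as a minimal sketch (npp_px would come from the Native Pixel Parallax calculation earlier; the wording of the messages is my own):

```python
def check_positive_parallax(parallax_px, npp_px):
    """Flag positive parallax larger than the Native Pixel Parallax:
    beyond that value the viewer's eyes would have to diverge."""
    if parallax_px > npp_px:
        return "too deep: the eyes would have to diverge past infinity"
    return "ok: within the positive parallax budget"

# Using the ~250 px NPP of the 23" example screen:
print(check_positive_parallax(parallax_px=180, npp_px=250))  # ok
print(check_positive_parallax(parallax_px=300, npp_px=250))  # too deep
```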

> Hyperfocal distance: the focus distance at which your lens gives the deepest depth of field; focus there and everything from roughly half that distance to infinity is acceptably sharp.
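
The standard hyperfocal formula, for reference (the 0.03mm circle of confusion is a common full-frame assumption on my part, not something from this post):

```python
def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f.
    Focusing at H keeps roughly H/2 to infinity acceptably sharp."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# e.g. a 28 mm lens at f/8:
print(round(hyperfocal_mm(28, 8) / 1000, 2), "m")  # ~3.29 m
```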

> In cinema the optimal viewing distance equals the diagonal of the screen (THX recommendation). At 1920×1080 you'd want to sit at about 1.6x the diagonal in order to see all the detail but not see the individual pixels (for SD, about 3.5x the diagonal). What matters here is the resolution per degree of arc (angular resolution), i.e. the limits of the human visual system.
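
A sketch of where that 1.6x figure comes from, under the common assumption that the eye resolves about one arc-minute per pixel (that threshold and the 16:9 aspect ratio are my assumptions, not stated above):

```python
import math

def viewing_distance_multiple(horizontal_pixels, aspect=16 / 9, arcmin_per_pixel=1.0):
    """Viewing distance, as a multiple of the screen diagonal, at which
    one pixel subtends arcmin_per_pixel of arc: sit closer and the pixels
    become visible, sit farther and detail is lost."""
    width_over_diag = aspect / math.sqrt(aspect ** 2 + 1)  # screen width / diagonal
    pixel_over_diag = width_over_diag / horizontal_pixels  # pixel pitch / diagonal
    return pixel_over_diag / math.tan(math.radians(arcmin_per_pixel / 60))

print(round(viewing_distance_multiple(1920), 1))  # ~1.6x the diagonal for HD
```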

Posted in: S3d