Onsight 3D

Posted on 31 May 2012



The Onsight OS3D plugin by Nick Shaw for FCP is a free alternative to the great Dashwood Stereo3D Toolbox.

Onsight 3D screenshot

It is a bit limited, but a great start if you want to try 3D editing in FCP. As I work with 1920×1080 only, the main differences to me are that Dashwood has more output options, a global 3D output setting and a 2D bypass, so you don’t have to change each of the (hundreds of) clips in your edit when you want to switch from anaglyph (useful for checking on a budget) to side-by-side (for final checks). If you need more output options, like line-by-line interlaced or top/bottom, you’ll need to buy the Dashwood plugin anyway.

Workflow

These are my first thoughts on working with Onsight 3D. It’s easy to install and use. The basics are the same as with the Dashwood plugin: you can import different types of 3D source material, and it has an image well like Dashwood. As I work with 2x full HD source, the image well was the easiest solution for me. I can make the edit in 2D with the left eye only. Then, when I am happy, I lock the edit and take it to color and sound, as well as S3D mastering. I don’t know if there is a proper name for this yet, but I like to do all the 3D adjustments after the edit, instead of following the current trend of doing this before editing. I believe that adjusting HIT (horizontal image translation) should be done with the cuts between shots in mind, for continuity of the zero parallax point.

Adding right-eye

First, I add the Onsight plugin to the first clip and make my basic settings, like ‘right eye in well’ and ‘anaglyph output’. Then I copy the clip and choose ‘paste attributes’ to add this filter to all other clips in one step. Then I select the first clip again, drag the corresponding right-eye clip to the well, and use the arrow-down key to go to the next clip. FCP7 will open this clip automatically in the Filters tab, so all I have to do is look for the matching right-eye clip and drag it to the well. I can hear you think: why not do this before editing? Yes, I might have to do this a few times for the same master clip if I cut it up. On the other hand, I do not have to do it for all the footage I didn’t use!

If you have applied a keyframed speed correction in the 2D edit (which, by the way, is a bad thing to do in FCP), you will first need to copy the (full) right-eye clip to a disabled track on the timeline. Copy the left-eye clip and then paste the ‘speed’ settings to the right-eye clip. The timing of the keyframes is calculated from the start of the left-eye master clip to the start of the clip in your sequence, so you will only get it right if you trim the clip AFTER you have pasted these speed settings. Then, drag this clip from the sequence to the image well of the left-eye Onsight filter. If you are happy with the result, you can delete the right-eye clip from the timeline. I have to say that I have seen temporal disparity caused by FCP speed adjustments, so be careful!

Disparity correction

Now the interesting part: correcting disparity. Fortunately, using the Z10000 I don’t have to correct for keystoning, sync, etc. The only thing I found is that – for a reason I still don’t really understand – there is sometimes a sync offset that I need to correct with the sync adjust feature of Onsight, which works in one-frame steps, up to 1000 frames in either direction. The cause might be the conversion to ProRes, but it might also have something to do with the fact that I like to use Motion -> Opacity on a lot of clips. I found that clips where motion was added and later deleted are affected by this ‘bug’ for sure.

I found this easiest by looking for a clear movement that reveals the offset: you can see it when the anaglyph red or blue image is ‘following’ the main image. Then find a point where this movement briefly stops and use the convergence control to set the zero parallax point on this object or person. Then, scroll back or forward a few frames (hold the cursor above the canvas) to get to the largest offset, the point where the blue or red border of the object is at its widest. Then hold the cursor above the sync adjust slider and scroll until the offset is at its smallest. You need to get to a point where the offset is the same at any time (provided the object and camera do not move towards or away from each other). The easiest way to do this is to find a vertical movement!

One other thing I found is that the HIT (convergence adjustment) is limited to 50 × 0.10% (coarse) plus 10 × 0.01% (fine) = 5.1%. This should really be enough for HIT, especially when you use the ‘autoscale’ function of the plugin, but with quick run-and-gun shooting you might need more once in a while. It would be good to have this range in the disparity section as well.
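As a back-of-the-envelope check, here is what that 5.1% range amounts to in pixels. The slider steps are the ones described above; the 1920 px frame width is my own assumption, not a plugin constant.

```python
# Sanity check of the Onsight HIT range, converted to pixels.
coarse_pct = 50 * 0.10    # coarse slider: 50 steps of 0.10% each -> 5.0%
fine_pct = 10 * 0.01      # fine slider: 10 steps of 0.01% each -> 0.1%
total_pct = coarse_pct + fine_pct      # 5.1% maximum HIT
max_shift_px = 1920 * total_pct / 100  # roughly 98 px of horizontal shift
print(round(total_pct, 2), round(max_shift_px, 2))
```

Just under a hundred pixels of shift on a full HD frame, which explains why the range is fine for HIT but can feel tight for rough run-and-gun disparities.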

Convergence correction (HIT)

Then it comes down to convergence correction. There is no grid available in the plugin, so you might want to make your own in Photoshop and put it on the first track of the timeline. Damn you, FCP 2:12:00 limit for stills. It’s easy to calculate the distances: take the width of the screen (250 cm, for instance) divided by the IO (I use 5 cm to allow for children). Now you know how many lines you need (50 in this case). Then, divide 1920 pixels by this number and place the grid lines. It’s easiest to work with multiples of 10 px, as that is what shift-arrow seems to step in Photoshop. In my example, I chose to make 48 lines with 40 px spacing.
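The grid maths above can be sketched like this; the numbers are the example values from the text (250 cm screen, 5 cm IO, 1920 px frame), not anything fixed by the plugin.

```python
# Grid spacing calculation for a homemade parallax grid.
screen_width_cm = 250
io_cm = 5               # conservative interocular, safe for children
frame_width_px = 1920

lines = screen_width_cm / io_cm      # 50 lines would match the IO exactly
spacing_px = frame_width_px / lines  # 38.4 px: awkward to draw precisely

# Rounding the spacing up to a multiple of 10 px (the shift-arrow step in
# Photoshop) gives the 48 lines of 40 px used in the example.
spacing_rounded_px = 40
lines_rounded = frame_width_px // spacing_rounded_px  # 48 lines
print(lines, spacing_px, lines_rounded)
```

Note that rounding up the spacing makes the grid slightly conservative: a disparity of one grid line is then a touch more than the screen IO, so staying inside one line keeps you on the safe side.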

Next up is setting the zero parallax point. I like to set the parallax point on the main subject and match the previous and next shot. At the same time, I want to make sure the red and blue bits of the positive parallax do not get wider than the distance of one grid line (in this example: 40 pixels). Then I look at the negative parallax, if there is any. I found that window violations are fine if the subject is not too bright and/or the negative parallax is not wider than my grid spacing.

When I added the plugin, I made sure to set the output type to anaglyph 1. This way, I get a nice B&W image with red and blue bits that show me the horizontal disparity. If the red bit is on the left side of the brighter subject (and the blue bit is on the right side), it is positive parallax. Sometimes you’ll have to look twice: is this negative or positive parallax? You need the context of the whole image to tell, and it takes getting used to. It would be nice to have Z-depth colors, like purple for ‘negative’ and green for ‘positive’.

Checking

After all this boring math it is finally time to put the glasses on. You’ll have to go through all the clips on the timeline, one by one, to change the output type. This is where you reach for your credit card and buy the Dashwood plugin, or choose to ditch your Mac and get a PC with Edius or Vegas. But for this edit, you might want to use the arrow-down key again to finish the job.

Then there is the next disadvantage: you will need to render everything before playback, because due to the image well function of OS3D, Final Cut Pro restricts the Unlimited RT frame rate to a quarter. Because you can keep the resolution at a full 1920×1080, I found that rendering is not really needed for good checks of the S3D settings, but it is for viewings with clients.

The output options are somewhat limited. I used side-by-side to feed my passive monitor, but it would have been nice to have a top/bottom or line-by-line interlaced option like Dashwood has; as it is, I lose half the vertical AND half the horizontal resolution, ending up with 2x 960×540 pixels (OS3D is limited to 1920×1080 px).
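For what it’s worth, the resolution loss works out like this (assuming a line-interleaved passive display, which is my setup):

```python
# Why side-by-side on a passive monitor ends up at 2x 960x540:
# side-by-side packing halves the horizontal resolution per eye, and the
# passive display's line interleaving halves the vertical resolution.
frame_w, frame_h = 1920, 1080
per_eye_w = frame_w // 2  # 960: two eyes share one frame horizontally
per_eye_h = frame_h // 2  # 540: each eye only sees alternate lines
print(per_eye_w, per_eye_h)
```

A top/bottom output would instead lose vertical resolution twice over on this kind of display, so line-by-line interlaced would be the only mode that keeps the full 1920 width per eye here.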

Export

For this reason, you might want to go through all the clips again and set them to left eye only, then export to a 1920×1080 file. Then set them to right eye only and export again. Now do the color correction (you might find a nice way to re-use the 2D project with the new L-eye and R-eye sources) and mux the two files plus the sound file into a proper 2x full HD AVCHD 2.0 MVC file for 3D Blu-ray.

Posted in: S3d