Template - Overview / Building the Camera rig

The current template contains a Leia camera rig and a compositing scheme to produce 4 View renders. The rig is built with armatures / drivers and has a simple adjustment UI in virtual space. Finer settings such as resolution and camera-specific properties (e.g. depth of field) are not exposed, and adjusting them requires changing multiple parameters throughout the project. NOTE: Ideally all relevant settings would be presented to the user, and the rig would be generated when required (moving away from a template project). This will be possible once the components are moved over to scripts and packaged as an add-on.

Leia Camera Setup with armatures

There are two sliders and a gizmo exposed, which manipulate properties relevant to the Leia camera rig.

  • Camera focal length - The overall focal length of the Leia Camera Rig.

  • Baseline Scaling - Relative distance (disparity) between cameras.

  • Convergence Plane - Direct interaction for positioning the convergence plane.

Each UI slider handle’s Y-axis world-space position is remapped to a 0-1 value. The convergence plane can be positioned directly (by moving the target gizmo) when in object mode.

A combination of values from the sliders influences properties on each camera: specifically the position, focal length and horizontal shift of the camera sensor, as well as the convergence distance gizmo's size and position.
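As an illustration of how a slider can drive a camera property via a driver, the sketch below reads a slider object's world-space Y location and remaps it onto a camera's focal length. The object names (SliderFocal, Camera_a) and the assumption that the handle travels from Y = 0 to Y = 1 are hypothetical, not taken from the template.

```python
import bpy

# Hypothetical object names; substitute the rig's actual slider handle and camera.
slider = bpy.data.objects["SliderFocal"]
camera = bpy.data.objects["Camera_a"]

# Add a driver on the camera's focal length ("lens", in millimetres).
fcurve = camera.data.driver_add("lens")
driver = fcurve.driver
driver.type = 'SCRIPTED'

# Expose the slider handle's world-space Y location as a driver variable.
var = driver.variables.new()
var.name = "slider_y"
var.type = 'TRANSFORMS'
target = var.targets[0]
target.id = slider
target.transform_type = 'LOC_Y'
target.transform_space = 'WORLD_SPACE'

# Assuming the handle travels from Y = 0 to Y = 1, remap that 0-1 value
# onto the 10 mm - 200 mm focal length range used by the template.
driver.expression = "10 + slider_y * (200 - 10)"
```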

Camera Property Drivers

X - Position

Each camera’s position is shifted on the x-axis in relation to the center (or 2D) camera position. The cameras are parallel to the convergence plane, with their views orthogonal to the plane.

The local position of each camera can be determined by the following equation:

Local Position = Convergence Distance x Scaling x Index Multiplier

  • Convergence Distance - the world-space distance from the center camera to the convergence plane.

  • Scaling - Baseline scale factor; the remapped slider value (in this case 0-0.05).

  • Index Multiplier - Units from center (determined by camera order, see the table below).

Index Multiplier

  • Camera a: -1.5

  • Camera b: -0.5

  • Camera c: 0.5

  • Camera d: 1.5
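As a quick worked example of the position equation (the convergence distance and scaling values below are assumptions, not template defaults):

```python
# Assumed example values: convergence distance 5.0 m, baseline scaling 0.05.
convergence_distance = 5.0
scaling = 0.05
index_multipliers = {"a": -1.5, "b": -0.5, "c": 0.5, "d": 1.5}

for name, index in index_multipliers.items():
    local_x = convergence_distance * scaling * index
    print(f"Camera {name}: local X = {local_x:+.3f}")
# Camera a: -0.375, Camera b: -0.125, Camera c: +0.125, Camera d: +0.375
```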

Focal Length

Focal length is a standard camera property, so it only requires a mapping function to convert the 0-1 slider value into something more appropriate. In this case I chose 10 mm - 200 mm, a common range for camera focal lengths (ultra wide angle to telephoto).

All cameras in the Leia rig must share the same focal length, which can be determined by the following:

Mapped Value = b1 + (original - a1) × (b2 - b1) / (a2 - a1)

Where a1-a2 is the input (Min-Max) range and b1-b2 is the output (Min-Max) range.
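A minimal sketch of this remap applied to the focal length range (the 0-1 input range and the example slider value are assumptions):

```python
def map_range(value, a1, a2, b1, b2):
    """Linearly remap value from the input range [a1, a2] to the output range [b1, b2]."""
    return b1 + (value - a1) * (b2 - b1) / (a2 - a1)

# Remap a 0-1 slider value to the 10 mm - 200 mm focal length range.
slider_value = 0.5  # assumed example input
focal_length = map_range(slider_value, 0.0, 1.0, 10.0, 200.0)
print(focal_length)  # 105.0
```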

Shift - X

In order to create a convergence plane, each camera frame must be horizontally shifted such that the camera frustums intersect at the desired convergence distance.

In Blender the shift value is a fraction of the render frame size: a shift of 1 moves the frame by exactly one frame unit (e.g. 1920 pixels for a 1920x1080 frame). The frame unit is the larger of the two resolution dimensions.

Horizontal frame shift can be described by the following formula:

Shift = Disparity / ( 2 × tan( FOV / 2 ) × Convergence Distance )
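A sketch of the shift calculation for a single camera; the field of view, convergence distance and disparity below are example assumptions (the disparity matches camera a from the position example above):

```python
import math

# Assumed example values.
fov = math.radians(39.6)     # horizontal field of view (roughly a 50 mm full-frame lens)
convergence_distance = 5.0   # distance from the centre camera to the convergence plane
disparity = -0.375           # local X offset of camera a from the centre camera

shift_x = disparity / (2.0 * math.tan(fov / 2.0) * convergence_distance)
print(shift_x)               # about -0.104

# In Blender this value would be assigned to the camera data's shift_x property.
```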

Convergence Plane / Distance Gizmo

This object is controlled directly by the Convergence Distance Target Gizmo. It provides an indication of where the convergence plane is in virtual world space.

The following equations can be used to calculate parameters required to position and draw the plane:

Local Z distance from Center camera = Convergence Distance

X Scale = tan( FOV / 2 ) × Convergence Distance

Y Scale = ( tan( FOV / 2 ) × Convergence Distance ) / Aspect Ratio

Top-down visual cue for the convergence plane.
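A sketch that positions and scales a plane object from these equations, assuming the plane is parented to the centre camera; the object name, field of view, convergence distance and aspect ratio are assumptions for the example:

```python
import math
import bpy

# Assumed example inputs.
fov = math.radians(39.6)      # horizontal field of view of the centre camera
convergence_distance = 5.0
aspect_ratio = 16.0 / 9.0     # render width / height

# Hypothetical plane object, parented to the centre camera so its location is local.
plane = bpy.data.objects["ConvergencePlane"]

half_width = math.tan(fov / 2.0) * convergence_distance
plane.location = (0.0, 0.0, -convergence_distance)  # cameras look down their local -Z axis
plane.scale = (half_width, half_width / aspect_ratio, 1.0)
```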

4 View Compositing

NOTE: There are alternative methods to render the individual cameras which would be worth exploring, such as Multi-View rendering under stereoscopy. The current method is suboptimal as each view is rendered at the output resolution (in essence 4x the required render time). This could potentially be resolved through scripting. To mitigate some of the excess rendering, each camera scene's render samples are reduced by a factor of 4. Visually this has little bearing on the end result as the views are eventually scaled down by a factor of 4.

A render from each view / camera must be scaled, translated and overlaid to generate the 4 View Leia image. The following diagram outlines the node group for this process (a scripted sketch follows the list below).

  • Each camera is assigned to a scene and its render output is isolated

  • Renders are scaled (ideally, for optimization, this step would not exist and renders would already be at the correct resolution)

  • Images are translated

  • Views are composited into a single image.
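A minimal scripted sketch of such a node group, assuming hypothetical per-camera scene names Scene_a to Scene_d and that each Render Layers output arrives at the full output resolution (hence the 0.5 relative scale):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Assumed scene names and quadrant offsets for the quad layout.
camera_scenes = ["Scene_a", "Scene_b", "Scene_c", "Scene_d"]
offsets = [(-1, 1), (1, 1), (-1, -1), (1, -1)]
x = scene.render.resolution_x / 4
y = scene.render.resolution_y / 4

previous = None
for scene_name, (ox, oy) in zip(camera_scenes, offsets):
    # Pull in the render result of one camera scene.
    render = tree.nodes.new("CompositorNodeRLayers")
    render.scene = bpy.data.scenes[scene_name]

    # Scale each view to half size in each dimension (a quarter of the frame area).
    scale = tree.nodes.new("CompositorNodeScale")
    scale.space = 'RELATIVE'
    scale.inputs['X'].default_value = 0.5
    scale.inputs['Y'].default_value = 0.5
    tree.links.new(render.outputs['Image'], scale.inputs['Image'])

    # Translate the view into its quadrant.
    translate = tree.nodes.new("CompositorNodeTranslate")
    translate.inputs['X'].default_value = ox * x
    translate.inputs['Y'].default_value = oy * y
    tree.links.new(scale.outputs['Image'], translate.inputs['Image'])

    # Overlay the views on top of each other.
    if previous is None:
        previous = translate
    else:
        alpha_over = tree.nodes.new("CompositorNodeAlphaOver")
        tree.links.new(previous.outputs['Image'], alpha_over.inputs[1])
        tree.links.new(translate.outputs['Image'], alpha_over.inputs[2])
        previous = alpha_over

composite = tree.nodes.new("CompositorNodeComposite")
tree.links.new(previous.outputs['Image'], composite.inputs['Image'])
```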

Isolate Camera Views

Each camera has been assigned to an individual scene in order to isolate its render output.

The MainScene serves as the working scene. All models and animations should be built or imported into this scene. The main compositing setup (which outputs 4V images) is also tied to this scene. NOTE: A 2D camera is associated with the MainScene. This camera's view is from the center of the Leia rig, essentially what would be seen from a normal 2D camera.

Linking objects and settings to camera scenes

Once you are ready to export 4 View images, you will have to link your objects across all camera scenes. To do this, select the objects, press CTRL+L, and link the selection to scenes a-d.

Within the template, a Layout collection has been created which is pre-linked across the scenes. Placing objects within this collection will automatically link them for you.

Any world, render or camera settings changed in the MainScene should also be applied to each camera scene (or camera).
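Linking can also be done from a script; a sketch assuming the template's Layout collection and hypothetical per-camera scene names Scene_a to Scene_d:

```python
import bpy

layout = bpy.data.collections["Layout"]
camera_scenes = ["Scene_a", "Scene_b", "Scene_c", "Scene_d"]  # hypothetical names

for scene_name in camera_scenes:
    scene = bpy.data.scenes[scene_name]
    # Link the collection into the scene's master collection if it is not already there.
    if layout.name not in scene.collection.children:
        scene.collection.children.link(layout)
```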

Building 4 View

To build the quad view, each render image has to be translated by a specific pixel amount. If we consider the final quad image resolution, Resolution, then the relative translations are

X = Resolution.width / 4

Y = Resolution.height / 4

Each camera render can be mapped to the quad view image with the following translations:

  • Camera a: (-x, y)

  • Camera b: (x, y)

  • Camera c: (-x, -y)

  • Camera d: (x, -y)
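For example, assuming a final quad image resolution of 3840x2160 (an assumption for illustration):

```python
# Assumed final quad image resolution.
width, height = 3840, 2160

x = width / 4    # 960
y = height / 4   # 540

# Per-camera translations in pixels, matching the table above
# (positive Y moves the image up in the Blender compositor).
translations = {
    "a": (-x,  y),
    "b": ( x,  y),
    "c": (-x, -y),
    "d": ( x, -y),
}
```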


Copyright © 2023 Leia Inc. All Rights Reserved