Week 5: Tracking shots and match move

3D match moves

  • The first step was to import the sequence of frames and use Export Buffer Compression so that the footage plays back more easily in 3D Equalizer.
  • Confirming the positional movement of the camera by tracking the scene.
  • Tracking the positional movement of objects (always track edges or corners).
  • The point's inner square tracks a pattern, and the outer square is the search area: the software searches within it a few frames ahead, which helps when lights or moving objects pass over the pattern. If you split the pattern area down the centre, the width created is roughly how big the search area should be.
  • The timeline is used to play back the footage.

In 3D Equalizer, as many points as possible have to be found across the whole footage so that the software can work out depth from the way the points move towards or away from the camera. We first tracked points using the marker tracking method (forwards and backwards) in the main areas of the footage, making sure each point was properly ended, meaning that it had a start and an end frame depending on whether it was in the scene or not. We also checked that the green bar that appears while playing the footage during tracking did not turn red, because that would mean the point is moving too much and its pattern needs to be edited. Unique patterns had to be found so the software can understand where each point is in the footage. Through the Center 2D option the point can be zoomed in and out to control it more precisely. Never track glass, because it reflects light, or water, because it moves: the tracking won't be accurate. The centre area of the point should always stay in the frame. At the end, when calculated, all the points will sit in 3D space.

The white lines are every single frame that has been tracked, and every red point is a keyframe of the camera; there should not be any gaps. At this point the 3D environment is already taking shape: matchmove and layout are almost the same, since in both you recreate the objects in the scene.

The lineup controls show a horizon line that allows you to see the track in 3D space: the points all have the same size, so depending on where they are positioned they appear bigger (closer) or smaller (further away). To help visualise the points, I gave them different colours depending on the areas they covered.

At this stage, the image controls also helped: editing the contrast and the brightness of the frames in the colour control section made it easier to spot the right points to track.

Once the foreground of the scene had been tracked, the background was tracked too, to make the 3D camera more accurate and to avoid leaving any areas out of the tracking. This involved tracking points that come into the scene and then leave it again. For this part I tracked points backwards: starting with a forward track, then changing the direction in the settings and tracking backwards by ending the point first. This was useful since most of the area involved was in the last frames.

Once the tracking was done, geometry could be created in Maya with the help of the points created in the scene, which made it useful to gather as much information as possible.

After the points had been tracked, I was able to check their quality in the deviation browser. The green line and number represent the average of all of the points, and the purple lines are all of the active points used for the calculation. The deviation of the camera should be under 1 but higher than 0, otherwise it means that there is no movement. If some points had spikes and peaks through the timeline, adjusting their weight in the calculation could fix it, since it feathers the weight of the point. The goal was to make the green line smoother.
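As a rough mental model (my own summary rather than anything from 3D Equalizer's documentation), the deviation of a point on a frame can be read as its reprojection error in pixels, and the green line as the average of those errors over the active points:

    % Deviation of point i on frame f: pixel distance between the tracked 2D
    % position x_i(f) and the solved 3D point X_i reprojected through the
    % solved camera P(f).
    d_i(f) = \left\lVert \, x_i(f) - P(f)\,X_i \, \right\rVert

    % The green line is then (roughly) the average over the N active points:
    \bar{d}(f) = \frac{1}{N} \sum_{i=1}^{N} d_i(f)

Keeping that average under 1 then roughly means the solve reprojects to within a pixel of the tracks.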

The next step is to open the Parameter Adjustment window to find the curvature of the lens that was used when shooting the footage. The lens that we used is the following:

The dimensions of the camera were copied into the lens attribute editor.

The pixel aspect always needs to be 1.


Another thing that will affect the point deviation is the lens attributes of the camera. To recreate the camera curvature, add the filmback height and width from the camera in order to recreate the camera depth (see above); this is also where the recalculated values are added after we have tracked the different distortion of the lens.
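As general context (this is the standard pinhole relationship, not something specific to this shot), the filmback size and the focal length together determine the camera's field of view, which is why the filmback dimensions have to be copied across exactly:

    % Horizontal field of view from the filmback width and the focal length (both in mm)
    \mathrm{FOV}_h = 2 \arctan\!\left( \frac{\text{filmback width}}{2 \times \text{focal length}} \right)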

The first step is to select the important attributes that will need to be changed, such as the focal length of the lens. I can allow the software to adjust this by changing it from Fixed to Adjust, since the lens will also have some type of distortion.

With the settings set to Brute Force, and after clicking Adjust, the computer will recalculate the camera parameters. The difference between Adaptive and Brute Force is that Adaptive tells the software to find the best result possible.

The pixel aspect should always be set to 1.

I have used the classic lens distortion model because it is simple to use, since you only have to work on two parameters: the distortion, which is the curvature of the lens itself, and the quartic distortion, which pushes the distortion towards the outside of the lens, exaggerating it.
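As a simplified illustration (a generic radial model, not necessarily the exact equations 3D Equalizer uses internally), the two parameters can be pictured as the r² and r⁴ terms of a radial polynomial applied to each point's distance from the lens centre:

    % r_u: undistorted radius from the lens centre, r_d: distorted radius
    % k_2: "distortion" (overall curvature), k_4: "quartic distortion" (grows faster towards the edges)
    r_d = r_u \left( 1 + k_2\, r_u^{2} + k_4\, r_u^{4} \right)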

So, to recap: the footage should be thought of as a sheet, with all the points holding that sheet still, so when working on the lens distortion the points will keep the footage “still”.

I then went to the 3DE4 tab and exported the project to Maya: this creates a .mel file that I can then drag and drop into an empty Maya scene.
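Dragging and dropping the file simply sources it; a minimal sketch of doing the same thing from Maya's Script Editor, assuming a made-up path for the exported file:

    # Source the .mel file exported from 3D Equalizer into the current Maya scene.
    # The path is a placeholder - use wherever the export was actually saved.
    import maya.mel as mel

    mel.eval('source "/projects/week5/shot01_matchmove.mel"')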

Before I export the matchmove to Maya, I need to save an undistorted version of the sequence. This basically reverses the effect of the lens distortion based on the parameters that I calculated earlier. The footage has to be undistorted before it can be used in Maya, otherwise the points would slide and be positioned in the wrong places.

This is the part of the matchmove where the layout comes in. All of the locators in the Maya scene represent the 3D positions of the points from the 3DE track. The camera is also imported and is now a Maya camera with keyframes that follow the path from the 3D Equalizer file. However, there isn't any footage imported yet, so you can't see where the points are in relation to that footage. I therefore middle-dragged the camera from the left panel to create an image plane in front of the camera; it stays within the film gate as the camera moves, so I can see where the points sit in relation to the footage. Since the camera movement is now matched in 3D, any new objects in front of the camera will look as if they are on top of the original footage. To make the image move onto the next frame, I tick Use Image Sequence.
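The same image plane setup can also be scripted; a rough sketch assuming the imported camera is called trackedCamera and the undistorted frames sit at a made-up path:

    import maya.cmds as cmds

    # Create an image plane attached to the tracked camera so it stays inside the film gate.
    # The camera name and file path are assumptions - they depend on the 3DE export and project setup.
    plane_transform, plane_shape = cmds.imagePlane(
        camera='trackedCamera',
        fileName='/projects/week5/undistorted/shot01.1001.jpg',
    )

    # Tick "Use Image Sequence" so the image plane advances with the timeline.
    cmds.setAttr(plane_shape + '.useFrameExtension', 1)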

The camera movement is now matched in 3D, and, based on the points I created for each area covered in 3D Equalizer, I have put some actual geometry into the scene, aligned to the camera, so that it will look as if it sits on top of the original footage.
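To place that geometry, I could query the world position of one of the imported locators and snap a proxy plane to it; a small sketch with a hypothetical locator name:

    import maya.cmds as cmds

    # World-space position of one of the imported 3DE locators
    # (the locator name is hypothetical - it depends on how the points were named in 3D Equalizer).
    pos = cmds.xform('point_floor_01', query=True, worldSpace=True, translation=True)

    # Create a simple proxy plane and move it to that position as a starting point for the layout geometry.
    ground = cmds.polyPlane(width=10, height=10, name='ground_proxy')[0]
    cmds.xform(ground, worldSpace=True, translation=pos)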
