$\begingroup$

I'm pretty new to camera tracking and Blender (2.80). I have a "long" (1 minute) dash cam clip of driving on a road, where most trackable features stay visible for only a second or less, so a lot of trackers must be created, which is tedious. Tracking the whole footage first and solving the camera only at the end seems too risky an approach, so I wonder if I can solve just part of the footage, up to some pseudo-end frame, to make sure the trackers are good, then repeat with an incremented pseudo-end frame until I reach the end of the footage. So far I have tried setting the end frame in Blender's scene settings and in the timeline, but the solver still uses the whole footage.

To clarify the idea: I want to track only the first 2 seconds of the 1-minute footage and solve the camera for just those 2 tracked seconds, then add tracks for the next 2 seconds and solve again, now for the first 4 tracked seconds in total, and repeat until the whole footage is covered. This way I would see results faster and could start improving tracking accuracy right away.

An example frame is below, but in general the question holds for any dash cam video, or anything similar to the CGMatter tracking tutorial (first 2 seconds) at https://www.youtube.com/watch?v=6Vo-jyWlDhM, just imagine the video runs much longer.

$\endgroup$

1 Answer

$\begingroup$

It helps if you understand a bit more deeply how tracking and 3D reconstruction work.

When you track points in the video and solve the camera, you recover two different kinds of information: camera intrinsics and camera extrinsics.

Camera intrinsics describe the lens: focal length (field of view), optical center, and distortion. Intrinsics are unique to the combination of a specific sensor size and a specific lens, so all shots done with the same lens on the same camera will behave the same.

Once you have the camera intrinsics, you can calculate the extrinsics: the camera position and rotation that make the tracked points in the image correspond to coordinates in 3D space.
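To make the intrinsics/extrinsics split concrete, here is a minimal pinhole-camera sketch in Python. This is purely illustrative, not Blender's actual solver, and every number in it is made up for demonstration:

```python
import numpy as np

# Intrinsics: focal length and optical center, fixed for one camera + lens.
f = 1000.0               # focal length in pixels (invented value)
cx, cy = 960.0, 540.0    # optical center of a hypothetical 1920x1080 sensor
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1.0]])

# Extrinsics: where the camera sits for ONE frame (rotation + translation).
# These change every frame as the car moves; the intrinsics above do not.
R = np.eye(3)                    # no rotation in this toy example
t = np.array([0.0, 0.0, 5.0])    # camera 5 units away from the point

# A 3D point in world space.
X = np.array([1.0, 0.5, 0.0])

# Project into the image: x ~ K (R X + t), then divide by depth.
p = K @ (R @ X + t)
u, v = p[0] / p[2], p[1] / p[2]
print(u, v)  # u = 1160.0, v = 640.0
```

The solver works this relationship backwards: given many (u, v) observations of the same unknown 3D points across frames, it estimates K once for the whole clip, and R and t per frame.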

pff.... got that out.

So what does that mean in this case? You only need to solve a small portion of the scene to determine the camera intrinsics (lens field of view, optical center, and distortion coefficients). With that information you can solve the other segments of the shot as well.

At the very least you need 2 images that show parallax and share 8 accurate trackers to solve the scene, but it helps to have a few more trackers, and more frames, so the solver can evaluate them more precisely.

The segment to use is defined in the keyframe settings: the range of frames between keyframe A and keyframe B is the one used to solve the scene.


But setting that range doesn't mean that only that segment of the video clip will be solved. Blender will still try to solve the whole thing; setting the range just means you don't need 8 trackers on every frame.
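The 8-trackers requirement is easy to sanity-check with a small script before solving. Here is a hedged sketch in plain Python; the track lifetimes below are invented for illustration (in Blender you would read them from the clip's tracking markers instead):

```python
# Each track maps to the (first_frame, last_frame) range where it is visible.
# These ranges are made-up example data, not from a real clip.
tracks = {
    "t1": (1, 40), "t2": (1, 35), "t3": (5, 50), "t4": (1, 50),
    "t5": (10, 60), "t6": (1, 45), "t7": (2, 50), "t8": (1, 50),
    "t9": (30, 60),
}

def coverage(tracks, first, last):
    """Count how many tracks are visible on each frame of [first, last]."""
    return {
        frame: sum(1 for a, b in tracks.values() if a <= frame <= b)
        for frame in range(first, last + 1)
    }

counts = coverage(tracks, 1, 60)

# Frames covered by fewer than 8 tracks: candidates for adding trackers,
# or at least frames your solving keyframes should not both fall inside.
weak = [f for f, n in counts.items() if n < 8]
print(weak)
```

With this toy data the well-covered stretch is roughly frames 10-40, so that is where you would place keyframes A and B, while the weakly covered frames at the start and end still need more trackers before the full clip can be solved.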

And now for the grand finale:

Yes! You need to add trackers for the whole shot if you want to solve all of it. Don't be lazy or try to take a shortcut: it's only a minute of footage, a limited number of frames, and Blender is doing most of the work! But remember that not all of the tracking needs to be wonderful, just the segment between the keyframes. Bottom line: unless you add trackers, Blender will never guess any motion for you.

$\endgroup$
  • $\begingroup$ Thanks for the answer. You are correct in assuming I don't understand how Blender tracking works; basically my question arose from confusion about why Blender throws errors when I didn't have all frames tracked. Now I think the real reason for the errors was bad automatic keyframe selection, because Blender works fine with non-tracked frames in long footage once keyframes are already selected. I still don't understand exactly what keyframes are: are they used only for the camera intrinsics calculation? If so, wouldn't the camera details be updated similarly to the REFINE operation? $\endgroup$ Commented Aug 23, 2020 at 20:01
  • $\begingroup$ Also, regarding the grand finale: I understand I need to track each frame. I just didn't know I could track and solve only part of the footage (because of the errors), so I thought I needed to track the whole footage before clicking the "solve camera" button for the first time, which could waste a few days if the tracks turned out bad. I have now advanced with my task and calculated the camera intrinsics on a smaller portion of the footage using REFINE. Now I just do tracking for the remaining frames (about 60 at a time). $\endgroup$ Commented Aug 23, 2020 at 20:15
  • $\begingroup$ read also: blender.stackexchange.com/a/142908/92768 $\endgroup$
    – susu
    Commented Aug 23, 2020 at 20:48
  • $\begingroup$ and this one as well: blender.stackexchange.com/questions/42329/… $\endgroup$
    – susu
    Commented Aug 23, 2020 at 20:55
