
I'd like to use Mathematica to turn videos of my students dropping balls into a composite image of selected frames showing the path of the ball, like a multiple-exposure image.

So far, I've imported the video and used VideoExtractFrames to get a list of frames at 1/10-second intervals:

frames = VideoExtractFrames[video, Table[t, {t, 1.2, 1.7, 1/10}]]

ImageExposureCombine (which isn't made for this) isn't bad, but it isn't great either: the resulting image is a bit too blurry.
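For reference, the image below came from roughly this call (no special options):

ImageExposureCombine[frames]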
[ball drop composite produced by ImageExposureCombine]

Here is a fancy version of this idea from the Wikipedia article on compositing: [multiple-exposure composite image]

Any help with coding this or just leads would be appreciated.

Comments:

  • Needs clarity and focus. (May 23)
  • Maybe you could upload the six frames... (May 23)
  • Maybe it would be nice to have a reference video/frames for us to work on. I made this little clip from this video for example. – ydd (May 23)
  • At the time I only had student images, which are not things I usually post online. The one I did was so blurred that I don't think it will be a problem. – David Elm (May 24)

1 Answer


Final Update

We can get better results by applying ImageCorrelate[#, ball] to each frame separately and then summing the masked frames:

cor = Module[{x, mask, maskDat},
     (* distance map: small values where the frame locally matches the ball template *)
     x = ImageCorrelate[#, ball, NormalizedSquaredEuclideanDistance];
     (* threshold and dilate to get a blob around each detected ball position *)
     mask = Dilation[ColorNegate[Binarize[x, 0.23]], DiskMatrix[9]];
     (* zero out everything outside the column range the ball falls through *)
     maskDat = ImageData[mask];
     Image@MapIndexed[If[250 < #2[[2]] < 350, #1, 0] &, maskDat, {2}]
     ] & /@ frames;

(* frames . cor sums the per-frame masked images; compose that onto the first frame *)
ImageCompose[frames[[1]], (frames . cor // RemoveBackground)]

[composite result using the per-frame masks]

I have found, however, that the mask is extremely sensitive to the threshold given to Binarize (0.23 in this case), so you'll probably have to tune that threshold for each video. The issue of overlapping balls appearing white can be mitigated by lowering the DiskMatrix radius, but that runs into the problem of showing only parts of some of the balls. Here is the same code block as above, but with DiskMatrix[6] instead of DiskMatrix[9] (a quick way to explore the threshold is sketched after the image):

[result with DiskMatrix[6]: the balls no longer merge, but some are only partially shown]
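To make the threshold tuning less painful, one option is to preview a handful of Binarize thresholds side by side and pick one by eye. This is just a sketch; the choice of frame and the 0.15–0.35 range are arbitrary guesses:

(* preview candidate Binarize thresholds for one frame's correlation map *)
xPreview = ImageCorrelate[frames[[4]], ball, NormalizedSquaredEuclideanDistance];
Grid[{Table[t, {t, 0.15, 0.35, 0.05}],
   Table[ImageResize[
     Dilation[ColorNegate[Binarize[xPreview, t]], DiskMatrix[9]], 200],
    {t, 0.15, 0.35, 0.05}]}]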

Update

I decided there was too much going on in the original video I used (moving camera, basketball player moving a lot), so I recorded my own video of me dropping a ball.

I also realized it would probably be easier to use ImageCorrelate, since the falling object stays roughly constant across the frames (other than getting more spread out and blurry as it speeds up).

I extracted frames every 0.25 s from this video:

video = Import[(*downloaded video from imgur link in first paragraph*)];

frames = VideoExtractFrames[video, Range[0.5, 3.25, 0.25]];

I started by combining all the frames with ImageExposureCombine, like you did:

iec = ImageExposureCombine[frames]

[ImageExposureCombine of all the frames]

I also isolated the ball from one of the frames:

ball = ImageTake[frames[[4]], {225, 255}, {290, 320}]

[the isolated ball template]

I then used the second example in the Applications section of the ImageCorrelate documentation to find the parts of iec that look like ball:

x = ImageCorrelate[ iec, ball, NormalizedSquaredEuclideanDistance];
mask = Dilation[ColorNegate[Binarize[x, 0.27]], DiskMatrix[12]]

Note that the binarization threshold of 0.27 and the DiskMatrix radius of 12 are specific to this example and probably won't work universally; larger objects will require a larger DiskMatrix, for example. (An interactive way to tune both is sketched after the image.)

[the dilated mask]
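If you'd rather tune both parameters interactively than by trial and error, something like this works; it's only a sketch, and the slider ranges are arbitrary guesses:

(* tune the Binarize threshold and DiskMatrix radius with sliders *)
Manipulate[
 Dilation[ColorNegate[Binarize[x, t]], DiskMatrix[r]],
 {{t, 0.27, "Binarize threshold"}, 0.05, 0.5},
 {{r, 12, "DiskMatrix radius"}, 1, 30, 1}]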

There are still some erroneous points highlighted in mask as well. Since the ball is dropping straight down, you can remove these by keeping only the highlighted areas within a certain column range. I just found that range by eye; there is probably a more elegant way to do this, though (see the sketch after the image):

(* keep only mask pixels whose column index lies in the range the ball falls through *)
maskDat = ImageData[mask];
censoredMask = 
  MapIndexed[If[250 < #2[[2]] < 350, #1, 0] &, maskDat, {2}] // Image;

[the censored mask]
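A possibly more elegant alternative is to filter connected components of the mask by where their centroids sit, instead of zeroing columns by hand. This is only a sketch; it assumes the ball blobs are exactly the components whose centroid x-coordinate falls in the same 250–350 range:

(* keep only components whose centroid x-coordinate lies in the expected column range *)
censoredMask = SelectComponents[mask, 250 < #Centroid[[1]] < 350 &]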

I then multiply this mask with iec, remove the background, and compose the result onto the first frame:

res = iec*censoredMask // RemoveBackground;
ImageCompose[frames[[1]], res]

[the final composite]

This seems to remove all the blurriness except the motion blur of the ball itself. My phone camera is apparently not good at this high-speed stuff, though, so the ball becomes very dim when it's moving fast near the bottom (the bright window in the background probably doesn't help either).

Original Post

There is a lot of room for improvement here, but this might be a start.

I first downloaded this video I uploaded to imgur:

video = Import[(*directory to video*)];

I extracted frames every 0.5 s:

frames = VideoExtractFrames[video, Range[1, 7, 0.5]];

ImageCollage[frames]

[collage of the extracted frames]

I then picked a part of one of the frames containing just the basketball to get its color:

objectImg = (*image of just the ball*);

objectColor = DominantColors[objectImg, 1][[1]];

This is objectImg:

[objectImg: a crop containing just the basketball]
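If you prefer to do that crop in code rather than interactively, ImageTake works here too; the coordinates below are placeholders, not the ones I actually used:

(* hypothetical crop: {rowMin, rowMax}, {colMin, colMax} in pixel indices *)
objectImg = ImageTake[frames[[2]], {200, 260}, {300, 360}];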

I then defined a function that creates a grayscale mask of colors near the object's color and multiplies that mask with the image:

filAuto[img_] := Module[{mask},
  (* binary mask of pixels whose color is near the ball's dominant color *)
  mask = ColorDetect[img, ColorsNear[objectColor]];
  mask = ColorConvert[mask, "Grayscale"];
  (* keep only the pixels selected by the mask *)
  ImageMultiply[mask, img]
  ]

Here, for example, is filAuto applied to one of our frames:

filAuto[frames[[2]]]

[filAuto applied to frames[[2]]]

I also found that applying filAuto multiple times to an image produced better results. Here is filAuto applied 5 times to that same image:

Nest[filAuto, frames[[2]], 5]

[filAuto nested 5 times on frames[[2]]]

So I now apply the filter filAuto to each frame 5 times, remove the background, and then compose the results together:

ballSep = Nest[filAuto, #, 5] & /@ frames;
rb = RemoveBackground /@ ballSep;
tot = ImageCompose[rb[[1]], Rest@rb]

[the composed, background-removed balls]
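In case ImageCompose balks at a list of overlays in your version (I haven't checked this across versions), the same composition can be written as a fold:

(* equivalent composition using an explicit fold over the background-removed frames *)
tot = Fold[ImageCompose, rb[[1]], Rest@rb]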

And then compose tot onto the first frame:

ImageCompose[frames[[1]], tot]

[tot composed onto the first frame]

Comparing this with ImageExposureCombine, the ball is better highlighted and there is much less blur, but there are a number of issues (listed below):

ImageExposureCombine[frames]

[ImageExposureCombine of the frames, for comparison]

Issues:

  1. The ball gets chopped off sometimes (probably because the automatic ColorsNear range is too strict after being nested 5 times)
  2. At the same time, the ColorsNear range seems too loose, because we get the player's moving hand/body and the rim in the image as well. This ends up looking really bad in the final product.
  3. The camera is moving during the shot, so when we ImageCompose, the ball doesn't end up in the net. I could fix this by composing on frames[[-1]], but I kind of like seeing the shooter's hand extended in the air. This may not be an issue for you if your camera is stable in your videos; if it isn't, see the alignment sketch after this list.
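For the moving-camera problem specifically, one option is to register every frame to the first one before doing anything else. This is just a sketch; it assumes the background has enough static detail for the registration to lock onto, and ImageAlign can return a Failure object when it can't find a good transform:

(* align all later frames to the first frame to compensate for camera motion *)
aligned = ImageAlign[frames[[1]], #] & /@ Rest[frames];
alignedFrames = Prepend[aligned, frames[[1]]];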

I hope this can at least help, and maybe someone can come along and improve on this.

Comments:

  • Thanks! You put so much work in that answer. I really appreciate that! – David Elm (May 24)
