[What I have done]
I am trying to measure the performance of different ffmpeg decoders by

- timing how long each call to the function avcodec_decode_video2(..) takes in ffmpeg.c, running the ffmpeg binary like this (a sketch of the timing instrumentation I added follows this list):

      ~/bin/ffmpeg -benchmark_all -loglevel debug -threads 0 -i ~/Documents/video-input.h264 -c:v libx265 -x265-params crf=25 video-output.hevc

- and by timing how long the same function takes in ffplay.c, running the ffplay binary like this:

      ~/bin/ffplay ~/Documents/video-input.h264
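
The instrumentation itself is just a wall-clock wrapper around the decode call, roughly like this in both ffmpeg.c and ffplay.c (a sketch; av_gettime() is libavutil's microsecond clock, and the accumulator variables are my own names, not ffmpeg's):

    #include "libavutil/time.h"

    /* accumulated across the whole run, printed once at exit */
    static int64_t total_decode_us = 0;
    static int64_t decode_calls    = 0;

    int64_t t0 = av_gettime();            /* wall clock, microseconds */
    ret = avcodec_decode_video2(avctx, frame, &got_frame, &pkt);
    total_decode_us += av_gettime() - t0;
    decode_calls++;
    /* average per call = total_decode_us / decode_calls */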
My understanding is that the average time per call to that function should be the same whether I am converting the video or playing it, because in both cases I am only measuring how long it takes to decode a frame of the same video. Is this the wrong way to go about it? Please let me know if I am mistaken.

The results I am getting are strange to me: the call to that function takes about twice as long in the ffmpeg binary as in the ffplay binary. I have tried running the ffmpeg binary with -threads 0 and without it, but the results stay the same (twice as long as ffplay). Could it be that the ffplay binary simply uses more threads? When I try -threads 1, ffmpeg takes about 10 times as long as ffplay (which makes sense to me, since before it was using several threads and now it is using only one).
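
As a sanity check on the thread question, I believe the decoder context can be inspected after avcodec_open2() to see what threading was actually chosen (a sketch based on my reading of avcodec.h; with -threads 0 the thread_count field should hold the auto-detected count once the codec is open):

    /* dec_ctx is the opened AVCodecContext for the video stream */
    av_log(NULL, AV_LOG_INFO, "threads=%d type=%s\n",
           dec_ctx->thread_count,
           (dec_ctx->active_thread_type & FF_THREAD_FRAME) ? "frame" :
           (dec_ctx->active_thread_type & FF_THREAD_SLICE) ? "slice" : "none");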
Before I ask my question, please keep in mind that I am a beginner at video processing and at encoding/decoding.
[My question]
What would be an accurate way to measure how long it takes to decode a frame (using one thread)? Should I time the avcodec_decode_video2(..) call only in the ffmpeg binary, and not in the ffplay binary? Would the results be more accurate that way?
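
For context, this is the kind of minimal standalone harness I have in mind (only a sketch, written against the same API generation as avcodec_decode_video2; error handling is stripped down, decoder flushing at EOF is omitted, and the deprecated stream->codec context is used to match that era):

    #include <stdio.h>
    #include <inttypes.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/time.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;
        AVCodecContext *dec_ctx;
        AVCodec *dec;
        AVFrame *frame = av_frame_alloc();
        AVPacket pkt;
        int stream, got_frame, frames = 0;
        int64_t total_us = 0;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <input>\n", argv[0]);
            return 1;
        }

        av_register_all();
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0 ||
            avformat_find_stream_info(fmt, NULL) < 0)
            return 1;

        stream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
        if (stream < 0)
            return 1;

        dec_ctx = fmt->streams[stream]->codec;  /* deprecated, but matches this API era */
        dec_ctx->thread_count = 1;              /* force single-threaded decoding */
        if (avcodec_open2(dec_ctx, dec, NULL) < 0)
            return 1;

        while (av_read_frame(fmt, &pkt) >= 0) {
            if (pkt.stream_index == stream) {
                int64_t t0 = av_gettime();      /* wall clock, microseconds */
                avcodec_decode_video2(dec_ctx, frame, &got_frame, &pkt);
                total_us += av_gettime() - t0;
                if (got_frame)
                    frames++;
            }
            av_free_packet(&pkt);
        }

        if (frames)
            printf("avg decode time: %"PRId64" us over %d frames\n",
                   total_us / frames, frames);
        av_frame_free(&frame);
        avformat_close_input(&fmt);
        return 0;
    }

My thinking is that building this against the same FFmpeg checkout as the ffmpeg and ffplay binaries would keep the decoder code identical across all three measurements.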
I also tried enabling the -benchmark_all -loglevel debug options, but the resulting message

    bench: 64537 decode_video 0.0

does not seem very helpful if 0.0 is supposed to be a time. (I am not sure what the other number means either.)
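
Digging around, the line seems to be produced by update_benchmark() in ffmpeg.c. Paraphrasing from the copy of the source I have (this may differ between versions):

    /* The call sites around the decoder look like:
     *
     *     update_benchmark(NULL);
     *     ret = avcodec_decode_video2(ist->dec_ctx, decoded_frame, got_output, pkt);
     *     update_benchmark("decode_video %d.%d", ist->file_index, ist->st->index);
     */
    static void update_benchmark(const char *fmt, ...)
    {
        if (do_benchmark_all) {
            int64_t t = getutime();   /* user CPU time, microseconds */
            va_list va;
            char buf[1024];
            if (fmt) {
                va_start(va, fmt);
                vsnprintf(buf, sizeof(buf), fmt, va);
                va_end(va);
                av_log(NULL, AV_LOG_INFO, "bench: %8"PRIu64" %s \n",
                       t - current_time, buf);
            }
            current_time = t;
        }
    }

If I am reading that right, the trailing 0.0 is the input file index and stream index rather than a time, and 64537 would be microseconds since the previous checkpoint, but I would appreciate confirmation.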