I am trying to use an Intel integrated GPU to accelerate ffmpeg video transcoding (pulling video from a camera via RTSP and converting it to adaptive-bitrate HLS chunks).
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -vaapi_device /dev/dri/renderD128 -y -rtsp_transport tcp \
  -i rtsp://admin:[email protected]/1/1 \
  -filter_complex "[0:v]split=2[v0][v1];[v0]scale=w=1920:h=1080[v0out];[v1]scale=w=1280:h=720[v1out]" \
  -map "[v0out]" -c:v:0 h264 -crf 17 -b:v:0 5000k -maxrate:v:0 5350k -bufsize:v:0 16384k -g 48 -sc_threshold 0 -keyint_min 48 \
  -map "[v1out]" -c:v:1 h264 -crf 20 -b:v:1 2800k -maxrate:v:1 2996k -bufsize:v:1 9600k -g 48 -sc_threshold 0 -keyint_min 48 \
  -c:a copy -f hls -hls_time 2 -hls_segment_type mpegts \
  -hls_flags delete_segments+independent_segments+omit_endlist -hls_list_size 10 \
  -master_pl_name index.m3u8 -hls_segment_filename "stream_%v/chunk_%02d.ts" \
  -var_stream_map "v:0 v:1" stream_%v/index.m3u8
The hardware-acceleration arguments (-hwaccel vaapi -hwaccel_output_format vaapi -vaapi_device /dev/dri/renderD128) were added to use the GPU. This command outputs the following error:
Impossible to convert between the formats supported by the filter 'Parsed_split_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
While the following command line works well:
ffmpeg -y -threads 1 -hwaccel vaapi -hwaccel_output_format vaapi -vaapi_device /dev/dri/renderD128 -rtsp_transport tcp -i rtsp://admin:[email protected]/1/1 -codec:v h264_vaapi out.mp4
Could anyone help figure out the correct arguments for HLS adaptive-bitrate encoding? Thanks!
PS: as discussed here, using -vf 'format=nv12,hwupload' might fix this problem; however, I don't know how to apply that filter with the -filter_complex syntax.
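For what it's worth, here is a hedged sketch of how format=nv12,hwupload could be placed inside -filter_complex: decode in software, then upload each split branch to the GPU before scaling there with scale_vaapi and encoding with h264_vaapi. This is untested; it reuses the camera URL, bitrates, and HLS options from the question, assumes /dev/dri/renderD128 is the right render node, and drops -crf/-sc_threshold/-keyint_min because those are libx264 options that h264_vaapi does not accept.

```shell
# Sketch (untested, assumes a working VAAPI device at /dev/dri/renderD128):
# software decode -> split -> format=nv12,hwupload -> scale_vaapi -> h264_vaapi
ffmpeg -vaapi_device /dev/dri/renderD128 -y -rtsp_transport tcp \
  -i rtsp://admin:[email protected]/1/1 \
  -filter_complex "[0:v]split=2[v0][v1];\
[v0]format=nv12,hwupload,scale_vaapi=w=1920:h=1080[v0out];\
[v1]format=nv12,hwupload,scale_vaapi=w=1280:h=720[v1out]" \
  -map "[v0out]" -c:v:0 h264_vaapi -b:v:0 5000k -maxrate:v:0 5350k -bufsize:v:0 16384k -g 48 \
  -map "[v1out]" -c:v:1 h264_vaapi -b:v:1 2800k -maxrate:v:1 2996k -bufsize:v:1 9600k -g 48 \
  -c:a copy -f hls -hls_time 2 -hls_segment_type mpegts \
  -hls_flags delete_segments+independent_segments+omit_endlist -hls_list_size 10 \
  -master_pl_name index.m3u8 -hls_segment_filename "stream_%v/chunk_%02d.ts" \
  -var_stream_map "v:0 v:1" stream_%v/index.m3u8
```

Note the ordering inside each branch: format=nv12,hwupload must come before scale_vaapi, because scale_vaapi only accepts frames that are already in GPU memory.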
Replace scale with scale_vaapi, and replace -c:v h264 with -c:v:0 h264_vaapi. Start with a simple input like input.mp4:

ffmpeg -y -f lavfi -i testsrc=size=192x108:rate=1:duration=10 -c:v libx264 -pix_fmt yuv420p input.mp4

Also start with a simple output: two files (or one file) instead of HLS with 1000 arguments.
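Following that advice, a minimal fully-hardware sketch against the synthetic input.mp4 above, writing two plain files instead of HLS. This is untested and assumes /dev/dri/renderD128 exists and the driver supports H.264 VAAPI encoding; the output names out_hi.mp4/out_lo.mp4 and the target sizes are placeholders.

```shell
# Sketch (untested): decoded frames stay on the GPU (-hwaccel_output_format vaapi),
# each split branch is scaled there with scale_vaapi and encoded with h264_vaapi.
ffmpeg -y -hwaccel vaapi -hwaccel_output_format vaapi -vaapi_device /dev/dri/renderD128 \
  -i input.mp4 \
  -filter_complex "[0:v]split=2[v0][v1];[v0]scale_vaapi=w=192:h=108[v0out];[v1]scale_vaapi=w=96:h=54[v1out]" \
  -map "[v0out]" -c:v h264_vaapi out_hi.mp4 \
  -map "[v1out]" -c:v h264_vaapi out_lo.mp4
```

Once this works, the HLS muxer options from the question can be layered back on one at a time.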