2d
comment Using libx265 and dash, there is sound, no video
Try with synthetic audio and video. The mpd file plays fine with ffplay.
2d
comment Using libx265 and dash, there is sound, no video
ffmpeg -y -f lavfi -i testsrc=s=384x216:r=30:d=100 -f lavfi -i sine=frequency=400 -shortest -profile:v main -c:v libx264 -pix_fmt yuv420p -g 150 -sc_threshold 0 -map 1:a -c:a aac -b:a 128k -ac 1 -ar 48000 -map 0:v -filter:v:0 "scale=-2:144,fps=30" -b:v 120k -minrate:v:0 90k -maxrate:v:0 150k -bufsize:v:0 60k -init_seg_name "init-stream$RepresentationID$.$ext$" -media_seg_name "chunk-stream$RepresentationID$-$Number%05d$.$ext$" -dash_segment_type mp4 -use_template 1 -use_timeline 0 -seg_duration 10 -adaptation_sets "id=0,streams=v id=1,streams=a" -f dash -movflags +faststart dash.mpd
Jul
17
comment ffmpeg - output includes combined video stream from inputs but also exact video streams from inputs
Try removing the -map 0:v, and make sure that yadif filter has defined input and output... And that the output of yadif is the input of overlay.
Jul
17
comment How can I combine HoughlinesP coordinates for a single line in OpenCV Python?
The condition if w >= 100 and h <= 5 is for finding horizontal lines. In general, SO answers are not guaranteed to work.
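The merging step the question asks about can be sketched without OpenCV: given HoughLinesP-style segments as (x1, y1, x2, y2) tuples, group nearly-horizontal segments by their y coordinate and merge each group into one line spanning min(x) to max(x). The function name and the y_tol tolerance are illustrative, not from the original answer.

```python
def merge_horizontal_segments(segments, y_tol=5):
    """Merge roughly-horizontal segments whose y values are within y_tol."""
    groups = []  # each group: [reference y, list of segments]
    for x1, y1, x2, y2 in segments:
        if abs(y2 - y1) > y_tol:  # skip segments that are not horizontal
            continue
        y_mid = (y1 + y2) / 2
        for g in groups:
            if abs(g[0] - y_mid) <= y_tol:
                g[1].append((x1, y1, x2, y2))
                break
        else:
            groups.append([y_mid, [(x1, y1, x2, y2)]])
    merged = []
    for _, segs in groups:
        xs = [x for x1, _, x2, _ in segs for x in (x1, x2)]
        y = round(sum((s[1] + s[3]) / 2 for s in segs) / len(segs))
        merged.append((min(xs), y, max(xs), y))
    return merged
```

For example, two collinear pieces (0,10,50,10) and (60,11,100,11) merge into a single line from x=0 to x=100.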
Jul
14
comment Using libx265 and dash, there is sound, no video
For fixing the warning "Segment durations differ too much", try replacing -keyint_min 250 -g 250 with -g 150. Each segment is 300 frames (10 seconds at 30fps), and GOP size 250 is not a multiple of 300. -g 150 applies 2 GOPs in each segment (just for testing). You also missed the video bitrate, add -b:v 120k for example.
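The arithmetic behind the comment above can be checked in a few lines: a 10-second segment at 30fps holds 300 frames, and the GOP size must divide 300 evenly so every segment starts on a keyframe. 250 does not divide 300; 150 does (2 GOPs per segment).

```python
# Verify the GOP / segment-duration alignment from the comment.
seg_duration_s = 10
fps = 30
frames_per_segment = seg_duration_s * fps  # 300 frames per DASH segment

for gop in (250, 150):
    ok = frames_per_segment % gop == 0
    print(f"-g {gop}: {'aligns' if ok else 'does not align'} "
          f"({frames_per_segment / gop:g} GOPs per segment)")
```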
Jul
13
comment Trying to understand video conversion, difference
Can't you set the video bitrate when encoding? Add the -vb 144k argument: ffmpeg -i 1080p.mp4 -map_metadata -1 -pix_fmt yuv420p -c:a libfdk_aac -c:v libx264 -vf scale=-2:240 -vb 144k -ab 128k 1_240.mp4 (note: using fps=30 is usually not recommended, because it skips or duplicates frames, unless the input is exactly 60fps).
Jul
12
comment about ffmpeg option atomic_writing used in segment output
I think that we can follow the generated segment list. Executing the following command for example: ffmpeg -re -y -hide_banner -loglevel error -i audio.mp4 -ar 16000 -ac 1 -acodec pcm_s16le -f segment -segment_format s16le -segment_time 5 -vn -copyts -frame_pts true -segment_list pipe:1 "%03d.pcm", prints the output file name every 5 seconds. I guess that it is printed after the segment output file is closed. Maybe there is a better solution, but scanning the files is probably not the recommended way.
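Watching the segment list on a pipe, as the comment suggests, could be wired up roughly like this in Python. The parsing is split into its own function; the subprocess wiring is illustrative (it assumes ffmpeg is on PATH and mirrors the command in the comment), and the function names are mine, not from any API.

```python
import subprocess

def completed_segments(lines):
    """Yield segment file names as the segment muxer writes them to the list."""
    for line in lines:
        name = line.strip()
        if name:
            yield name

def watch(cmd):
    """Illustrative wiring: react as ffmpeg reports each closed segment."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for name in completed_segments(proc.stdout):
        print("segment ready:", name)
```

With -segment_list pipe:1 in the ffmpeg command, each file name should arrive on stdout shortly after that segment's output file is closed.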
Jul
12
comment about ffmpeg option atomic_writing used in segment output
In the documentation, atomic_writing is under image2 muxer. ffmpeg -h muxer=image2 shows atomic_writing option and ffmpeg -h muxer=segment doesn't. -atomic_writing true probably has no effect when used with segment muxer.
Jul
8
comment Python/FFMPEG - Converting a CbBI (Apple) PNG to a regular PNG
In case you are looking for a Python implementation, take a look at PyiPNG.
Jul
6
comment How to extract all frames of a video using GTX 550 ti?
NVIDIA GPU accelerated video decoding normally uses Nvidia NVDEC, not CUDA. The GTX 550 doesn't have the NVDEC hardware. Maybe your GPU includes Nvidia PureVideo hardware. There is not much information about using the "PureVideo" decoder with FFmpeg. In Windows you may try using the dxva2 (DirectX) hardware decoder as described here. Maybe it uses the "PureVideo" decoder (I am not sure).
Jul
3
comment Green screen on RTSP stream from USB camera using mediamtx (ffmpeg)
@Pavel I tried to encode MJPEG over RTSP, and it looks like there are limitations. Execute mediamtx.exe, then encode a synthetic pattern (for testing): ffmpeg -f lavfi -i testsrc=size=640x480:rate=30 -vcodec mjpeg -huffman 0 -force_duplicated_matrix 1 -pix_fmt yuvj420p -f rtsp rtsp://localhost:8554/camera01. Then play the network stream in VLC: rtsp://@127.0.0.1:8554/camera01 (it's working). As you can see, I had to add the arguments -huffman 0 -force_duplicated_matrix 1 -pix_fmt yuvj420p to the MJPEG encoder. I don't know if the yuvj422 MJPEG from your camera can work with RTSP without re-encoding.
Jul
2
comment To make 'trim' and 'atrim' work synchronously
Better try without re-encoding: ffmpeg -ss 00:00:05 -to 00:00:10 -i input.mp4 -c copy output.mp4. For MP4 it is faster to use the -ss before the -i, because the MP4 file stores a list of seek points, and "input seeking" is faster. There are cases in which we can't use the -ss before the -i and have to use it after. I saw lots of discussions about the subject...
Jul
2
comment To make 'trim' and 'atrim' work synchronously
You may try -filter_complex with trim, atrim, setpts, asetpts and concat. Take a look at the following post. Note that for simple cases using -ss and -t is simpler (and sometimes works without re-encoding).
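A filtergraph combining trim, atrim, setpts, asetpts and concat can be generated for a list of (start, end) ranges; a sketch (the helper name and pad labels are illustrative):

```python
def build_trim_filtergraph(ranges):
    """Build a -filter_complex graph that cuts (start, end) second ranges
    from input 0 and concatenates them with video and audio in sync."""
    parts, pads = [], []
    for i, (start, end) in enumerate(ranges):
        # trim/atrim cut the range; setpts/asetpts reset timestamps to zero
        parts.append(f"[0:v]trim=start={start}:end={end},"
                     f"setpts=PTS-STARTPTS[v{i}];")
        parts.append(f"[0:a]atrim=start={start}:end={end},"
                     f"asetpts=PTS-STARTPTS[a{i}];")
        pads.append(f"[v{i}][a{i}]")
    parts.append(f"{''.join(pads)}concat=n={len(ranges)}:v=1:a=1[v][a]")
    return "".join(parts)
```

The result would be used as ffmpeg -i input.mp4 -filter_complex "<graph>" -map "[v]" -map "[a]" output.mp4 (this always re-encodes, unlike the -ss/-to stream-copy approach).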
Jul
2
comment Green screen on RTSP stream from USB camera using mediamtx (ffmpeg)
When playing the recorded MP4 file, is it OK, or green? Try just: ffmpeg -f dshow -s 640x480 -i video="Integrated Webcam" -vcodec copy -t 10 output.mp4. You may also try: ffmpeg -f dshow -vcodec mjpeg -s 640x480 -i video="Integrated Webcam" -vcodec copy -t 10 output.mp4 (I saw that -input_format is not working with dshow, and the way to select MJPEG is adding -vcodec mjpeg before -i).
Jul
1
comment Green screen on RTSP stream from USB camera using mediamtx (ffmpeg)
Is it working with an output file instead of RTSP: ffmpeg -hwaccel_output_format qsv -fflags nobuffer -f dshow -vcodec mjpeg_qsv -s 640x480 -i video="Integrated Webcam" -vcodec copy -t 10 output.mp4 for example? Instead of -hwaccel_output_format qsv -vcodec mjpeg_qsv, try: -input_format mjpeg. Since you are using stream copy: -vcodec copy, the camera output is supposed to be an MJPEG-encoded video stream.
Jun
30
comment ffmpeg doesn't have access to qsv nor vulkan
Yes, it looks strange. In Windows, I am getting a long list of codecs (but with Impl: MFX_IMPL_TYPE_SOFTWARE, due to my old hardware). I assumed that the oneVPL dispatcher is supposed to use the MSDK backend, and that vpl-inspect lists the codecs supported by MSDK. It could be that vpl-inspect is designed for Intel Gen 11 and above (I am not sure).
Jun
30
comment ffmpeg doesn't have access to qsv nor vulkan
Do you think FFmpeg tries to use low power mode? Executing ffmpeg -h encoder=h264_qsv shows: -low_power <boolean> E..V....... enable low power mode(experimental: many limitations by mfx version, BRC modes, etc.) (default auto). You may try adding -low_power 0 to the command. You may try encoding only, with -v debug: ffmpeg -v debug -y -f lavfi -i testsrc=size=384x216:rate=1:duration=100 -c:v h264_qsv output.mp4. When I try it in Windows 10, FFmpeg 7.01 uses oneVPL and reports error code: -1313558101... FFmpeg 5.1.2 uses MSDK and it's working. Note: in Windows it is statically linked.
Jun
30
comment ffmpeg doesn't have access to qsv nor vulkan
Intel has a guide for Building FFmpeg with QSV. I actually built FFmpeg 7.01 in Ubuntu 20.04 two weeks ago. I wanted to test VAAPI, and I can't use QSV in Linux, because my CPU is Gen 3 (in general I am using Windows 10...). Note that Intel's example is ffmpeg -hwaccel qsv -c:v h264_qsv -i backgroud_1080.mp4 -c:v h264_qsv out.mp4, without -init_hw_device qsv=hw.
Jun
29
comment ffmpeg doesn't have access to qsv nor vulkan
According to the following table and this table, CPUs before GEN 11 (like the Core i7-6700) use the Media SDK backend. Try installing Media SDK. After installing MSDK, vpl-inspect shows different output.
Jun
28
comment ffmpeg doesn't have access to qsv nor vulkan
Intel made a huge mess moving from MediaSDK to oneVPL and from the vaapi driver to media-driver. It is always sketchy... I suggest you post the FFmpeg version (and whether it's a custom build), post the CPU model, and whether you have multiple GPUs. Then install libva and libva-tools, execute vainfo, and post the output. Then build and install libvpl-tools, execute vpl-inspect, and post the output.