Here's the idea I want to achieve: I'm planning to build an eye-tracker device that comes in two parts. The first part is what the subject/user wears: an eyeglass-shaped frame holding two infrared cameras and one RGB camera, all connected to a Raspberry Pi powered by a battery pack. The feeds are transmitted to the second part, a Wi-Fi receiver connected to a PC.
So there are three live video feeds: two grayscale feeds and one RGB feed, and let's assume they're all 720p at 120 fps. The question is: can I use FFmpeg to compress the feeds in real time and make the packets small enough to fit within the bandwidth that the Pi's onboard Wi-Fi 5 module (or an external Wi-Fi 6 module) can provide? I want the compression to be nearly lossless, though it doesn't have to be absolutely perfect. Is this possible? And if so, which codec should I use, and what settings would you suggest?
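For reference, here's the kind of setup I had in mind: one FFmpeg process per feed, launched from Python. This is just a rough sketch; the device paths, pixel formats, QP value, and receiver IP are placeholders, and I'm using libx264 software encoding purely to illustrate the near-lossless settings (the Pi's hardware encoder would be a different discussion).

```python
# Sketch: near-lossless real-time encodes on the Pi with FFmpeg.
# Device paths, pixel formats, and the receiver address are assumptions,
# not measured values -- adjust for the actual camera nodes and network.
import subprocess

RECEIVER = "192.168.1.10"  # hypothetical IP of the PC on the receiving end

def stream(device: str, pixel_format: str, port: int) -> subprocess.Popen:
    """Encode one camera feed with libx264 and push it out as MPEG-TS over UDP."""
    cmd = [
        "ffmpeg",
        "-f", "v4l2",                   # Video4Linux2 capture on the Pi
        "-input_format", pixel_format,  # e.g. "gray" for the IR cameras
        "-video_size", "1280x720",
        "-framerate", "120",
        "-i", device,
        "-c:v", "libx264",
        "-preset", "ultrafast",         # lowest CPU cost, worst compression ratio
        "-tune", "zerolatency",         # disable lookahead/frame buffering
        "-qp", "10",                    # near-lossless; 0 would be true lossless
        "-f", "mpegts",
        f"udp://{RECEIVER}:{port}",
    ]
    return subprocess.Popen(cmd)

# One process per feed: two grayscale IR cameras and one RGB camera (paths assumed).
procs = [
    stream("/dev/video0", "gray",    5000),  # IR camera 1
    stream("/dev/video1", "gray",    5001),  # IR camera 2
    stream("/dev/video2", "yuyv422", 5002),  # RGB camera
]
for p in procs:
    p.wait()
```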
Edit 1:
- 1 × 1280×720 @ 120 fps, 24-bit (RGB) = 2,654,208 kbps ≈ 2.65 Gbps
- 2 × 1280×720 @ 120 fps, 8-bit (grayscale) = 1,769,472 kbps ≈ 1.77 Gbps total
- Raw bandwidth needed: ~4.42 Gbps (arithmetic shown in the snippet below)
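Here's how I computed those numbers; it's plain uncompressed frame-size arithmetic, with no container or protocol overhead included:

```python
# Sanity check of the raw-bandwidth math above (decimal Gbps, i.e. 1e9 bits).
def raw_gbps(width, height, fps, bits_per_pixel, count=1):
    return count * width * height * fps * bits_per_pixel / 1e9

rgb  = raw_gbps(1280, 720, 120, 24)    # one 24-bit RGB feed
gray = raw_gbps(1280, 720, 120, 8, 2)  # two 8-bit grayscale feeds
print(f"RGB: {rgb:.2f} Gbps, grayscale: {gray:.2f} Gbps, total: {rgb + gray:.2f} Gbps")
# -> RGB: 2.65 Gbps, grayscale: 1.77 Gbps, total: 4.42 Gbps
```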
The Intel® Wi-Fi 7 BE200 can go up to 5.8 Gbps. What if I use it with a Pi 5?