I have been using UDP over WiFi to pipe data between an embedded device (an ESP32 acting as a UDP server) and my Windows machine. My objective is to send short packets (8–64 byte payloads) as frequently as possible, for real-time control and telemetry over WiFi. One of my applications is teleoperation of a robot, which requires very frequent transmission of repeated short data frames.
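For reference, here is a minimal Python sketch of the kind of client loop I'm describing. The IP, port, and 16-byte payload are placeholders, and the spike-detection helper is just something I run over receive-side timestamps (collected on the ESP32 or with a packet capture) to flag the stalls:

```python
import socket
import time

ESP32_ADDR = ("192.168.1.50", 3333)  # placeholder address for the ESP32 UDP server
PAYLOAD = b"\x00" * 16               # representative short control frame

def blast(sock, addr, count, period_s=0.005):
    """Send `count` short packets at roughly `period_s` intervals,
    recording a timestamp just after each sendto() returns."""
    stamps = []
    for _ in range(count):
        sock.sendto(PAYLOAD, addr)
        stamps.append(time.perf_counter())
        time.sleep(period_s)
    return stamps

def find_spikes(timestamps, threshold_s=0.1):
    """Indices where the gap since the previous timestamp exceeds
    threshold_s -- applied to arrival timestamps, the ~300 ms stalls
    show up as entries here."""
    return [i for i in range(1, len(timestamps))
            if timestamps[i] - timestamps[i - 1] > threshold_s]
```

Note that timing `sendto()` on the client only catches stalls that block the send call itself; the delays are easier to see in arrival timestamps on the receiving side.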
During testing I noticed that if I sent data only from my Windows client to the ESP32 server, I would see periods of high latency (~300 ms) in packet transmission a couple of times a second. I can reduce the frequency of these stalls by addressing packets directly to the server instead of using the broadcast IP, but the ~300 ms delays still occur, just less often (around once a second). I can eliminate them almost entirely, however, by repeatedly transmitting dummy data from the server back to the Windows client. The dummy data doesn't even need to be read (i.e. no recvfrom() call on the client); just the act of the server sending any data seems to cause Windows to prioritize transmissions to the server. I've observed this behavior with both C++ and Python UDP client implementations on Windows. The ESP32 itself never seems to suffer from this issue (its repeated high-frequency transmissions appear on the network at a very consistent rate), and I have reproduced the behavior on three different routers and two different Windows machines from different manufacturers, which has led me to conclude that Windows is causing the problem.
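The workaround looks roughly like this, sketched in Python (the same socket pattern works in MicroPython on the ESP32 or via the C sockets API; the 5 ms period and 1-byte payload are arbitrary placeholders, not tuned values):

```python
import socket
import threading
import time

def start_keepalive(sock, client_addr, period_s=0.005, payload=b"\x00"):
    """Fire-and-forget dummy traffic toward the client.  The client never
    has to recvfrom() these packets; merely having them on the air appears
    to stop Windows from stalling its own transmissions.  Returns the
    sender thread and an event that stops it."""
    stop = threading.Event()

    def loop():
        while not stop.is_set():
            sock.sendto(payload, client_addr)
            time.sleep(period_s)

    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t, stop
```

On the real device this loop runs on the ESP32 alongside its normal receive path, aimed at the Windows client's address.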
1. Is this (sending garbage data) a valid technique for reducing latency and packet delay variation?
2. Am I correct in suspecting that Windows is injecting these delays, and is this known Windows behavior?
3. Are there other techniques or strategies that would be better suited to my application?