If you are so close to keeling over that you can't turn on the General log to FILE, you have worse problems; they need fixing.
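In case it helps, a minimal sketch of turning the General log on to a file at runtime (standard MySQL syntax; the file path is just an example):

```sql
-- Send log output to a file instead of a table (avoids table locks)
SET GLOBAL log_output = 'FILE';
-- Example path; adjust for your system
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
-- Turn the general log on (set to 'OFF' when finished)
SET GLOBAL general_log = 'ON';
```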
I suspect, without any real knowledge, that the slow log would have a similar impact, especially with long_query_time = 0.
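That capture-everything setting looks roughly like this (note that long_query_time set via SET GLOBAL only affects new connections):

```sql
-- Capture every query in the slow log; 0 means "log all queries"
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;
-- Restore a sane threshold afterward, e.g.:
-- SET GLOBAL long_query_time = 2;
```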
5.7 has a "query rewrite" feature; some trick might be possible there. (But, again, there is some overhead, which should be benchmarked.)
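For reference, the stock Rewriter plugin in 5.7 works roughly like this (table and procedure names per the MySQL manual; the rule itself is purely illustrative):

```sql
-- One-time setup: mysql -u root -p < share/install_rewriter.sql
-- Add a rewrite rule (pattern/replacement here are only an example)
INSERT INTO query_rewrite.rewrite_rules (pattern, replacement)
    VALUES ('SELECT * FROM db1.t1 WHERE id = ?',
            'SELECT id, col1 FROM db1.t1 WHERE id = ?');
-- Load the new rule into the running server
CALL query_rewrite.flush_rewrite_rules();
```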
How long do you want to catch the queries for? Are you just looking for the source of one naughty action? Or are you trying to gather queries to build a realistic benchmark for that table? Or something else?
Do you have replication turned on? Are you interested in reads? Or writes? Or both?
How many threads are active concurrently? The benchmark you showed indicates that with 1 thread, the log has low overhead. It is the table lock on MyISAM or CSV that is killing the processing at high concurrency.
Your second graph points out that the clients should really be restricted to about 5-8 concurrent connections -- else throughput actually declines! What were max_connections and Max_used_connections for that graph?
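For anyone following along, those two values can be read with:

```sql
-- Configured connection cap
SHOW GLOBAL VARIABLES LIKE 'max_connections';
-- High-water mark of connections actually used since startup
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
```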