
I have many database schemas on a MySQL 5.6 server, and the problem is that I want to capture the queries for one schema only.

I cannot enable the query log for the entire server, because one of my schemas is highly loaded and logging would impact the whole server.

Is there any way, or any tool, through which I could log the queries for a single schema only?

I found a benchmarking graph which shows the impact on transactions/second when the query log is enabled.

[Two benchmark graphs: transactions/second with the query log enabled]

  • Can you use the slow query log instead, and then parse that log with pt-query-digest? If not, you can try tcpdump output parsed by pt-query-digest. Commented Jul 5, 2016 at 14:50
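The tcpdump approach from the comment above can be sketched roughly as follows. The port, output file name, and schema name (`myschema`) are placeholders, and the capture requires root on the database host:

```shell
# Capture raw MySQL client traffic (assumes the server listens on port 3306).
# -s 65535 keeps full packets so pt-query-digest can decode the statements.
tcpdump -s 65535 -x -nn -q -tttt -i any port 3306 > mysql.tcp.txt

# Digest the capture, keeping only events attributed to one schema
# ("myschema" is a placeholder for the schema of interest).
pt-query-digest --type tcpdump \
  --filter '($event->{db} || "") eq "myschema"' mysql.tcp.txt
```

Note that a packet capture only learns the default database when it sees a `USE` statement (or the schema in the connection handshake) cross the wire, so the filter can miss queries from connections established before the capture started.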

2 Answers


Interesting question, and a +1. I was interested in this because I can see several use cases for this functionality.

Unfortunately, for your case where you cannot switch on general logging, there is only one, rather inadequate, workaround.

That is to use the SQL_LOG_OFF variable to disable logging for a given connection. An ideal solution would have been an "SQL_LOG_ON" variable, as one has in Oracle (or its equivalent) - maybe you could try switching logging off for all but the connection(s) of interest?
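A minimal sketch of that workaround, assuming you can reach every connection that should *not* be logged:

```sql
-- Requires the SUPER privilege in MySQL 5.6.
-- Turn the general log on server-wide ...
SET GLOBAL general_log = 'ON';

-- ... then, in every session touching the busy schemas you do NOT
-- want logged, suppress general logging for that connection only:
SET SESSION sql_log_off = ON;
```

This inverts the problem: instead of logging one schema, you must remember to silence every other connection, which is why it is such an inadequate workaround.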

Furthermore, and regrettably, this requires the SUPER privilege. Again, this may not (indeed probably will not) be possible in your case.

Depending on the severity of your problem, your working hours, and the server load at given times, you may be able to make use of Percona's pt-query-digest, which can help with log analysis. Small comfort, but as usual PostgreSQL is streets ahead of MySQL (1, 2).

If you'd care to file a feature request, I'd be happy to follow up with a "me-too" if you post the link back here.


If you are so close to keeling over that you can't turn on the General log to FILE, you have worse problems; they need fixing.
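For reference, sending the general log to a file rather than a table avoids the table-lock contention visible in the benchmark graphs; a sketch (the file path is only an example):

```sql
-- Write the general log to a flat file instead of the CSV/MyISAM table.
SET GLOBAL log_output       = 'FILE';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';  -- example path
SET GLOBAL general_log      = 'ON';
```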

I suspect, without any real knowledge, that the slow log would have a similar impact, especially with long_query_time = 0.
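If you do go that route, a sketch of capturing everything in the slow log (the file path is an example):

```sql
-- 0 seconds = no threshold, so every statement is written to the slow log.
SET GLOBAL slow_query_log      = 'ON';
SET GLOBAL long_query_time     = 0;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- example path
```

The slow log records the default database per statement, so you could then filter it down to one schema offline, e.g. with `pt-query-digest --filter '($event->{db} || "") eq "myschema"' /var/log/mysql/slow.log` (schema name is a placeholder).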

5.7 has a "query rewrite" feature. Some trick might be used there. (But, again, there is some overhead, which should be benchmarked.)

How long do you want to catch the queries for? Are you just looking for the source of one naughty action? Or are you trying to gather queries to build a realistic benchmark for that table? Or something else?

Do you have replication turned on? Are you interested in reads? Or writes? Or both?

How many threads are active concurrently? The benchmark you showed indicated that for 1, the log has low overhead. It is the table lock on MyISAM or CSV that is killing the processing for high concurrency.

Your second graph points out that the clients should really be restricted to about 5-8 concurrent connections -- else throughput actually declines! What were max_connections and Max_used_connections for that graph?

  • Hi, I want to capture the queries for about 2 days, and these graphs are not related to my benchmarking... I pasted them for knowledge's sake only. The app for which I want the queries is a low resource consumer. Tcpdump will not provide the timing stats. Commented Jul 11, 2016 at 1:43
