Volume Of Records

With queries, the problem is volume: there can be a very large number of them.

A typical busy cluster will see maybe 300k user queries per day.

It used to be that the Redshift system tables would store a certain total volume of data, purging the oldest data once that permitted volume was exceeded (no details were available as to how much of that total volume went to each system table).

Recently (late 2022) a change was made so that the system tables always store seven days of data. I think this is a mistake, as I think the leader nodes of the smaller node types certainly do not have the performance to handle it, but there it is; and what it means is that at 300k user queries per day over seven days, you can be looking at over two million rows, if you take this table as-is.

I advise you not to do that. Your web-browser will not thank you.

Additionally, Redshift itself constantly issues large numbers of queries, quite a lot on the leader node, but a fair few on the worker nodes as well, all from the rdsdb user. You have to remember, Redshift is like a mobile phone; you do not have root. Your admin user is a privileged user, but is not the privileged user. The root user is rdsdb, and is controlled by AWS; but you can trust the computer. The computer is your friend.
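If you want to see this for yourself, here's a minimal sketch (it assumes stl_query and pg_user, which may or may not be the tables the pages themselves use, and note that not every type of query necessarily appears in stl_query) which counts the stored queries per user; on any normal cluster, rdsdb will dominate.

    -- sketch : count stored queries per user, to see how much is rdsdb
    select pg_user.usename,
           count(*) as number_of_queries
    from   stl_query
           join pg_user on pg_user.usesysid = stl_query.userid
    group by pg_user.usename
    order by number_of_queries desc;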

As such, by default, the SQL issued by all pages examines only the most recent hour of queries, and does not enumerate queries issued by rdsdb. Limiting to an hour massively reduces the number of rows, and you lose nothing by omitting rdsdb - just lots of noise, mainly leader node queries calling SET, and the ongoing background noise of system maintenance queries (although there are quite a few of those - it's good to see them once, at least, to get a feel for what's going on in your cluster).
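To give a concrete feel for what this kind of filtering looks like - and this is a sketch only, using stl_query and pg_user rather than whatever SQL the pages actually issue - the following keeps the most recent hour and drops rdsdb.

    -- sketch : most recent hour of queries, excluding rdsdb
    select stl_query.query,
           stl_query.starttime,
           trim( stl_query.querytxt ) as querytxt
    from   stl_query
           join pg_user on pg_user.usesysid = stl_query.userid
    where  pg_user.usename != 'rdsdb'
      and  stl_query.starttime >= dateadd( hour, -1, getdate() )
    order by stl_query.starttime desc;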