That will only tell you whether the log partition is filling up, and the choking and swapfile use already strongly suggest that is the case. You would be better off running
sudo ls -la /var/log/*/ /var/log
to see if any of the logs are too large.
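If you want the sizes sorted so the biggest offenders come out on top, something like this should also work (just a sketch, adjust the head count to taste):

sudo du -ah /var/log | sort -rh | head -n 20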
Your logrotation.log shows a couple of issues which I have previously reported. Firstly,
error: /etc/logrotate.conf:32 duplicate log entry for /var/log/auth.log
which is due to the changes made earlier this year to “fix” logrotate, where the wildcards were expanded into explicit entries and the number of rotated logs was reduced from 12 to 1; neither of these changes had any positive impact.
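For illustration, this is the sort of thing that triggers it: logrotate refuses to rotate any file that is named in more than one stanza, so once the wildcards were expanded, auth.log ended up matched twice. The stanzas below are hypothetical, not the actual emonSD config:

/var/log/auth.log /var/log/syslog {
    rotate 1
    size 1M
}

# a second stanza naming the same file triggers the
# "duplicate log entry for /var/log/auth.log" error
/var/log/auth.log {
    rotate 1
}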
Secondly,
Reading state from file: /var/log/logrotate/logrotate.status
is not required on the Oct2018 image, as the original location is now RW. The fix for this should be for the updater to replace the original RO file on earlier images with a symlink to /tmp, so that all images can revert to the original (default) path.
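As a rough sketch of what the updater could do on the older RO images (assuming the default status file is at /var/lib/logrotate/status; the exact path should be checked on the image itself):

# hypothetical updater snippet - replace the RO status file with a symlink into /tmp
if [ -f /var/lib/logrotate/status ] && [ ! -L /var/lib/logrotate/status ]; then
    sudo rm /var/lib/logrotate/status
    sudo ln -s /tmp/logrotate.status /var/lib/logrotate/status
fi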
There also seem to be two emoncms.logs; one is in an emoncms sub-folder.
The new symlink for the emonpiupdate.log is failing to rotate (logrotate doesn’t follow symlinked logs). It also seems that the logrotate command in the logrotate cron must be using the -l arg to create a logfile; that option always overwrites the previous log, and because the emonSD only rotates logs with a minimum file size of 1M, this logfile will never be rotated itself, so we cannot see previous rotations. This log should be written using output redirection in the command line instead of the -l arg.
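Something along these lines is what I mean; the “current” entry here is my guess at the form used, not a verbatim copy from the emonSD:

# current form (assumed): -l truncates and rewrites this log on every run
# 59 * * * * root /usr/sbin/logrotate -v -l /var/log/logrotate/logrotate.log /etc/logrotate.conf

# suggested form: append via redirection so earlier runs are preserved
59 * * * * root /usr/sbin/logrotate -v /etc/logrotate.conf >> /var/log/logrotate/logrotate.log 2>&1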
However, none of that points to an imminent full log partition, so we will have to wait until you can get some more info.
What it tells us is that the demandshaper logs are just unhandled stdout and stderr being captured by systemd and logged to journald. We know from experience with emonhub (and L2R) that this means the messages are probably being duplicated to syslog and daemon.log. Provided the output is minimal, it shouldn’t pose an issue; but if the output is high traffic (like emonhub), then the log partition will fill up in between hourly rotations and block logrotate from being able to function, leading to a choked system.
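You can get a feel for the volume with journalctl (assuming the unit is named demandshaper.service, adjust if not):

# rough measure of how chatty the service has been over the last hour
journalctl -u demandshaper.service --since "1 hour ago" | wc -l

# total disk space the journal is currently using
journalctl --disk-usage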
That is most likely either when the demandshaper service was last started, OR the earliest log entry once the service has been running a while and the journald logs are being trimmed so that only the last n MB are retained.
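If you want to confirm which, the journal can tell you directly (again assuming the demandshaper.service unit name):

# earliest entry journald still holds for the unit (-q suppresses the "Logs begin at" banner)
journalctl -q -u demandshaper.service -o short-iso | head -n 1

# retention is capped in /etc/systemd/journald.conf, e.g.
# [Journal]
# SystemMaxUse=50M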