Can you post a screenshot of the services' status, please?
It seems all the services at least think they are running. The daemon.log
is increasing, which implies that emonhub is running.
If you cat /var/log/daemon.log
you should see log entries from emonhub.py. As long as they are not all ERROR entries, you are OK.
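The eyeball check above can also be scripted. A minimal sketch; the path /var/log/daemon.log is the emonSD default mentioned above, and the LOG override is my addition so it can be tried on any file:

```shell
#!/bin/sh
# Rough signal-to-noise check on daemon.log: if nearly every line is an
# ERROR entry, emonhub is in trouble. LOG is overridable for testing.
LOG=${LOG:-/var/log/daemon.log}
if [ -f "$LOG" ]; then
    total=$(wc -l < "$LOG")
    errors=$(grep -c 'ERROR' "$LOG" || true)
    echo "$LOG: $errors ERROR lines out of $total total"
else
    echo "$LOG not found"
fi
```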
If you refresh the admin page, does the ‘feedwriter’ entry ever have any entries pending write?
It seems the processing might have stopped.
So try these in order and, after each one, see if the data input is restored (wait a couple of minutes):
sudo systemctl restart mosquitto.service
sudo systemctl restart emoncms_mqtt.service
sudo systemctl restart feedwriter.service
sudo systemctl restart apache2.service
As a side note:
The swap file should be no more than an emergency escape lane; it should never be used except in extreme circumstances. The fact that RAM usage is just 16% raises concern over how this is being managed. The “swappiness” needs turning down significantly to avoid the swap file being used just to relax the strain on the RAM, because the swap file is on disk, so it adds to the disk writes and SD card wear.
[edit - Just to note this is a development observation; swappiness is not a user-level setting, it is a system setting]
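For anyone wanting to act on this, a sketch of how swappiness can be inspected and lowered. The value 10 is an illustrative low setting of my choosing, not something prescribed in this thread:

```shell
#!/bin/sh
# Show the current swappiness (Raspbian defaults to 60).
cat /proc/sys/vm/swappiness
# To lower it for the running session you would use (needs root):
#   sudo sysctl vm.swappiness=10
# And to persist it across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness=10
```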
A clip from the log file
Apr 22 10:49:27 emonpi emonhub.py[990]: 2019-04-22 10:49:27,429 DEBUG MQTT Publishing: emon/emonpi/t4 0
Apr 22 10:49:27 emonpi emonhub.py[990]: 2019-04-22 10:49:27,431 DEBUG MQTT Publishing: emon/emonpi/t5 0
Apr 22 10:49:27 emonpi emonhub.py[990]: 2019-04-22 10:49:27,433 DEBUG MQTT Publishing: emon/emonpi/t6 0
Apr 22 10:49:27 emonpi emonhub.py[990]: 2019-04-22 10:49:27,435 DEBUG MQTT Publishing: emon/emonpi/pulsecount 1783
Apr 22 10:49:27 emonpi emonhub.py[990]: 2019-04-22 10:49:27,437 INFO MQTT Publishing: emonhub/rx/5/values 236,0,236,232.54,0,0,0,0,0,0,1783
Apr 22 10:49:29 emonpi emonhub.py[990]: 2019-04-22 10:49:29,102 DEBUG RFM2Pi Discarding RX frame ‘unreliable content’? 16 32 96 16 0 0 96 38 40 2 0 0 32 32 32 32 32 0 0 0 0 (-102)
pi@emonpi:~ $
This restart fixed it, almost instantly:
sudo systemctl restart emoncms_mqtt.service
LAST ENTRIES ON THE LOG FILE
2019-04-04 19:21:14.505|ERROR|feedwriter.php|Starting feedwriter script
2019-04-04 19:21:17.502|ERROR|feedwriter.php|Starting feedwriter script
2019-04-04 19:50:24.877|WARN|emoncms_mqtt.php|Not connected, retrying connection
2019-04-04 19:50:24.907|WARN|emoncms_mqtt.php|Connecting to MQTT server: Connection Accepted.: code: 0
2019-04-21 08:43:36.807|ERROR|feedwriter.php|Starting feedwriter script
2019-04-22 10:59:35.740|WARN|emoncms_mqtt.php|Not connected, retrying connection
2019-04-22 10:59:35.770|WARN|emoncms_mqtt.php|Connecting to MQTT server: Connection Accepted.: code: 0
Back in action for another 18 days?
/var/log
is 50M, based on df -h. And there are two files that were taking up 25M each, so /var/log would definitely have been full.
If you keep an eye on those files and run
pi@emonpi:~ $ sudo truncate -s 0 /var/log/syslog
pi@emonpi:~ $ sudo truncate -s 0 /var/log/daemon.log
before they get to 25M, you should be able to prevent it from stopping.
Maybe set yourself a reminder to do it in 14 days…
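Rather than a calendar reminder, the truncate step above could be automated. A sketch, assuming the two emonSD log paths mentioned earlier; the 20 MB limit and the trim-logs filename are my choices, not from this thread:

```shell
#!/bin/sh
# Truncate syslog and daemon.log once they pass a size limit, instead of
# waiting for /var/log to fill. Dropped into /etc/cron.daily/ (hypothetical
# name: trim-logs) and made executable, cron would run it once a day.
LIMIT=$((20 * 1024 * 1024))   # 20 MB, safely below the 25M danger point
for f in /var/log/syslog /var/log/daemon.log; do
    [ -f "$f" ] || continue
    if [ "$(stat -c %s "$f")" -gt "$LIMIT" ]; then
        truncate -s 0 "$f"
    fi
done
```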
Great. I am 99.9% sure the issue is with the logs filling up.
To slow down how quickly the logs fill up:
sudo nano /etc/rsyslog.conf
look for a line like this and make sure the entry daemon.none is there; if not, add it:
*.*;auth,authpriv.none,daemon.none -/var/log/syslog
then
sudo systemctl restart rsyslog.service
You should now see just the daemon.log file increasing in size, with syslog growing far more slowly. This should double the time between failures.
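To verify the change took effect and keep an eye on which logs are actually growing, something like this helps (the head count of 8 is arbitrary):

```shell
#!/bin/sh
# Confirm the daemon.none entry made it into rsyslog.conf...
grep -n 'daemon.none' /etc/rsyslog.conf || echo 'daemon.none not found'
# ...then list the biggest items in /var/log, largest first (sizes in KB).
du -k /var/log/* 2>/dev/null | sort -rn | head -8
```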
@TrystanLea @glyn.hudson - this needs looking at again. I have a suggested solution: a combination of emonSD next steps: emonhub logging (message 1) to create the log file for emonhub and emonSD next steps: emonhub logging - #13 by borpin (message 13) to rotate it. I am 2 days into a test of this setup. I'd deploy 2 additional files (1 x monit, 1 x rsyslog) for the emoncms log as well.
Just for info, I see those two log files are up to 10M in size in just a few hours.
That seems like an awful lot of logging.
emonSD-30Oct18
pi@emonpi:~ $ ls -lah /var/log
total 21M
drwxr-xr-x 11 root root 520 Apr 22 06:25 .
drwxr-xr-x 12 root root 4.0K Aug 2 2018 ..
drwxr-xr-x 2 root adm 100 Apr 4 19:21 apache2
-rw-r----- 1 root adm 58K Apr 22 14:37 auth.log
-rw-r--r-- 1 root root 6.3K Apr 4 19:21 boot.log
-rw------- 1 root utmp 0 Apr 22 09:14 btmp
-rw-r----- 1 root adm 10M Apr 22 14:37 daemon.log
-rw-r----- 1 root adm 1.2K Apr 4 19:21 debug
-rw-rw-rw- 1 root root 584 Apr 22 10:59 emoncms.log
drwxr-xr-x 2 pi root 60 Apr 4 19:20 emonpilcd
-rw-r----- 1 root adm 29K Apr 22 10:49 kern.log
drwxr-xr-x 2 pi pi 80 Apr 22 06:25 logrotate
-rw-r----- 1 root adm 26K Apr 22 10:49 messages
drwxr-xr-x 2 mosquitto mosquitto 60 Apr 4 19:21 mosquitto
-rw-rw-rw- 1 root root 0 Apr 4 19:21 mqtt_input.log
drwxr-xr-x 2 mysql adm 60 Apr 4 19:21 mysql
-rw-rw-rw- 1 root root 0 Apr 4 19:21 mysql.log
-rw-rw-rw- 1 root root 0 Apr 4 19:21 ntp_update.log
drwxr-xr-x 2 root root 40 Apr 4 19:21 openhab
drwxr-xr-x 2 root root 40 Apr 4 19:21 openhab2
drwxr-xr-x 2 redis redis 60 Apr 4 19:21 redis
-rw-rw-rw- 1 root root 0 Apr 4 19:21 service-runner.log
drwxr-xr-x 2 root root 60 Apr 4 19:21 supervisor
-rw-r----- 1 root adm 11M Apr 22 14:37 syslog
-rw-r----- 1 root adm 190 Apr 4 19:21 user.log
-rw-rw-r-- 1 root utmp 4.2K Apr 22 14:37 wtmp
pi@emonpi:~ $
Which I think rsyslog has been using, as /var/log/
has been full.
Great to hear that you’ve been able to narrow it down @IM35461 @borpin to restarting emoncms_mqtt. I’m not sure that the logs filling up is the issue, given that full logs are not triggering the same behaviour on other systems? I wonder what the difference is here.
@borpin’s suggestion to edit rsyslog.conf though is still worthwhile. You may also like to set the loglevel in emonhub.conf to warning or error. Setup > EmonHub > Edit Config
loglevel = ERROR
Do you see any errors in emoncms.log or:
journalctl -f -u emoncms_mqtt
I just feel it is under-reported as an issue, because a reboot fixes it.
Restarting the service was done once the logs had been cleared out, so it was not just the restarting of the service that fixed it.
Sadly, any errors in journalctl would have been rotated out a long time ago; plus, they are non-persistent.
Perhaps the problem does not manifest itself immediately when the logs fill up but 17-19 days later, somehow associated with swap usage as mentioned?
Tried that command and got this
pi@emonpi:~ $ journalctl -f -u emoncms_mqtt
-- Logs begin at Tue 2019-04-23 08:21:44 UTC. --
^[[B^[[B^C
pi@emonpi:~ $
Empty file I guess?
Indeed, my box is set at DEBUG:
[hub]
### loglevel must be one of DEBUG, INFO, WARNING, ERROR, and CRITICAL
loglevel = DEBUG
### Uncomment this to also send to syslog
# use_syslog = yes
#######################################################################
####################### Interfacers #######################
#######################################################################
Changed, selected save and restart.
Current log file sizes are
pi@emonpi:~ $ ls -lah /var/log
total 50M
drwxr-xr-x 11 root root 520 Apr 23 06:25 .
drwxr-xr-x 12 root root 4.0K Aug 2 2018 ..
drwxr-xr-x 2 root adm 100 Apr 4 19:21 apache2
-rw-r----- 1 root adm 80K Apr 23 11:24 auth.log
-rw-r--r-- 1 root root 6.3K Apr 4 19:21 boot.log
-rw------- 1 root utmp 0 Apr 23 11:18 btmp
-rw-r----- 1 root adm 25M Apr 23 11:24 daemon.log
-rw-r----- 1 root adm 1.2K Apr 4 19:21 debug
-rw-rw-rw- 1 root root 584 Apr 22 10:59 emoncms.log
drwxr-xr-x 2 pi root 60 Apr 4 19:20 emonpilcd
-rw-r----- 1 root adm 30K Apr 23 06:59 kern.log
drwxr-xr-x 2 pi pi 80 Apr 23 06:25 logrotate
-rw-r----- 1 root adm 28K Apr 23 11:24 messages
drwxr-xr-x 2 mosquitto mosquitto 60 Apr 4 19:21 mosquitto
-rw-rw-rw- 1 root root 0 Apr 4 19:21 mqtt_input.log
drwxr-xr-x 2 mysql adm 60 Apr 4 19:21 mysql
-rw-rw-rw- 1 root root 0 Apr 4 19:21 mysql.log
-rw-rw-rw- 1 root root 0 Apr 4 19:21 ntp_update.log
drwxr-xr-x 2 root root 40 Apr 4 19:21 openhab
drwxr-xr-x 2 root root 40 Apr 4 19:21 openhab2
drwxr-xr-x 2 redis redis 60 Apr 4 19:21 redis
-rw-rw-rw- 1 root root 0 Apr 4 19:21 service-runner.log
drwxr-xr-x 2 root root 60 Apr 4 19:21 supervisor
-rw-r----- 1 root adm 25M Apr 23 11:24 syslog
-rw-r----- 1 root adm 190 Apr 4 19:21 user.log
-rw-rw-r-- 1 root utmp 4.9K Apr 23 11:19 wtmp
pi@emonpi:~ $
Also entered the commands to truncate the logs again:
pi@emonpi:~ $ sudo truncate -s 0 /var/log/daemon.log
pi@emonpi:~ $ sudo truncate -s 0 /var/log/syslog
pi@emonpi:~ $
Time will tell…
syslog has grown to over 1M again.
Sample content…
Apr 23 18:07:33 emonpi feedwriter[24834]: Reading block 50000
Apr 23 18:07:33 emonpi feedwriter[24834]: Invoking post_bulk_save engine=2
Apr 23 18:08:33 emonpi feedwriter[24834]: Processing feed=1 len=12 :
Apr 23 18:08:33 emonpi feedwriter[24834]: Reading block 50000
Apr 23 18:08:33 emonpi feedwriter[24834]: Invoking post_bulk_save engine=5
Apr 23 18:08:33 emonpi feedwriter[24834]: Processing feed=2 len=12 :
Apr 23 18:08:33 emonpi feedwriter[24834]: Reading block 50000
Apr 23 18:08:33 emonpi feedwriter[24834]: Invoking post_bulk_save engine=5
Apr 23 18:08:33 emonpi feedwriter[24834]: Processing feed=7 len=12 :
Apr 23 18:08:33 emonpi feedwriter[24834]: Reading block 50000
Apr 23 18:08:33 emonpi feedwriter[24834]: Invoking post_bulk_save engine=2
Apr 23 18:08:33 emonpi feedwriter[24834]: Processing feed=8 len=12 :
Apr 23 18:08:33 emonpi feedwriter[24834]: Reading block 50000
Apr 23 18:08:33 emonpi feedwriter[24834]: Invoking post_bulk_save engine=2
Apr 23 18:08:33 emonpi feedwriter[24834]: Processing feed=9 len=12 :
Apr 23 18:08:33 emonpi feedwriter[24834]: Reading block 50000
Apr 23 18:08:33 emonpi feedwriter[24834]: Invoking post_bulk_save engine=2
Apr 23 18:09:01 emonpi CRON[15313]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
pi@emonpi:~ $
Thanks @IM35461, I’ve changed the above logs to only log if the logging level is set to info, currently in the emoncms master branch which will be merged to stable soon.
Why? This is tinkering around the edges on a case-by-case basis without any real plan or strategy. Don’t just arbitrarily decide to block certain logs just because it might fix this. I thought excessive logging wasn’t an issue anyway?
Not tinkering, just a cleanup of a logging implementation that was not correct. Most of the logging in the script was going to /var/log/emoncms.log as it should; a small number of prints and echoes were still in place, logging via daemon.log. They are all now going via the emoncms logger for consistency, where we can also silence them by setting the log level.
But is that logical? If it is a service, would you not expect the log messages from the service to go to the service log?
Just need to be consistent I feel.
If we did want to do that, it would be best to modify the logger implementation in emoncms here: emoncms/EmonLogger.php at master · emoncms/emoncms · GitHub. At the moment it only supports logging to the log file, but it could alternatively just print the output while adhering to the user-set log level. We could always revisit that at a later date.