Disk Space for Logs

Hi @borpin

I did have a look first.

The emonhub error was at line 257. If you look at my emonhub config, node [[5]] has been added twice over.

Great - useful to say the error is different.

Can you post the actual error please. The config looks fine.

I did not save the log file.

If you look at my config file there are 3 node [[5]] entries; this causes it to fall over.

Just delete the extra, repeated nodes - @TrystanLea something odd happening?

I had the same issue a few weeks back (the config file node definitions were completely duplicated). The day before, we had a storm and the power went off for a second or two. When I noticed some issues (feeds not appearing in my other monitoring systems) I thought I should do a controlled shutdown and reboot. (I have a few RasPies and sometimes the time etc. gets a bit out of whack if they initialise before the internet router resumes after a power failure.) Anyway, I noticed a strange ‘Updating’ message on the EmonPi LCD screen when rebooting and thought that’s strange, not seen that before.
I spent a frustrating few hours problem solving and looking here. I thought for a while it was log files filling up … eventually I found the duplicate nodes in the emonhub config file.
I had backups, so I just deleted the duplicates and all is now fine.
Sorry, I have no logs or notes, but at the time I thought it was a local issue here.
I don’t log on here a lot, so when I saw this entry I began to suspect there is a hidden issue lurking in the code, waiting for some sequence of events.


Thanks, this is what I saw with @glyn.hudson’s systems but couldn’t seem to replicate in further testing, so I thought it might be a one-off. Clearly it’s a more common issue, and it’s interesting that you saw the same thing @muzza a couple of weeks ago, prior to the recent emonhub update.

I will take a look at it in more detail.


Looks like the emonhub.conf duplicate nodes issue is a recurring one; here’s a post from 2016: Emonhub.conf duplicate node decoders after update - #24 by whitecitadel. @whitecitadel suggests:

I don’t think a full data partition would have caused the issue for @glyn.hudson, and if I try filling the /var/log, /var/tmp or /tmp partitions the script appears to work fine. Can anyone else think why this line might fail:

if ! grep "\[$var\]" $emonhub_location; then

It’s looking for the entry [nodeid] in the emonhub.conf file; if it can’t find one, it adds the node to emonhub.conf.

The answers speak specifically to using ! grep


First thought - the verbosity of the comments that explain what this script is doing blew me away :thinking:

If someone adds a comment to the file that contains the string [5], it would fail for that node…
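
For illustration only, a contrived example (not the real script; the emonhub.conf here is just a scratch file):

# hypothetical illustration: a comment mentioning "[5]" is enough to satisfy
# the original check, so node 5 would never be appended
echo "# calibration notes for node [5]" >> emonhub.conf
if ! grep "\[5\]" emonhub.conf; then
  echo "would add node 5"   # never reached - the comment line above matches
fi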

Initial suggestion … more robust would be (I think)…

  # look for an existing [[nodeid]] section header, not just any occurrence of the id
  if grep "^\[\[$var\]\]" "$emonhub_location"; then
    echo "Node $var already present"
  else
    # only append if a definition file for this node exists
    if [ -f "$path/$var" ]; then
      echo "" >> "$emonhub_location"
      cat "$path/$var" >> "$emonhub_location"
      echo "Added node $var to emonhub.conf"
    fi
  fi

Is this an old format for the nodes?

$homedir/emonhub/conf/nodes

Improvements (a sketch putting both together follows below):

  1. If the folder $homedir/emonhub/conf/nodes doesn’t exist, nothing is done anyway, so check for that first.
  2. Once the folder at $homedir/emonhub/conf/nodes has been used it should never be needed again, so remove it; next time, condition 1 causes an exit.
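
A rough sketch of both suggestions together might look like this (untested; it assumes the $homedir and $emonhub_location variables from the snippets above, and the real script may be structured differently):

nodes_dir="$homedir/emonhub/conf/nodes"

# 1. nothing to do if the nodes folder has already been removed
[ -d "$nodes_dir" ] || exit 0

for nodefile in "$nodes_dir"/*; do
  [ -f "$nodefile" ] || continue
  var=$(basename "$nodefile")
  if grep -q "^\[\[$var\]\]" "$emonhub_location"; then
    echo "Node $var already present"
  else
    echo "" >> "$emonhub_location"
    cat "$nodefile" >> "$emonhub_location"
    echo "Added node $var to emonhub.conf"
  fi
done

# 2. once merged, remove the folder so the check above exits early next time
rm -rf "$nodes_dir"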

Yes, as far as I recall the backups were making /home/pi/data quite full and then the upgrade filled up the remaining space. I had the exact same issue with duplicate nodes in the config; once they were removed, all was well.

I later scripted something to move the backups off the Pi SD card to my NAS each week and ensure the disk doesn’t creep towards full.
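
Something along these lines, perhaps (illustrative only; the NAS hostname and backup path here are made up, not the actual setup):

# weekly crontab entry: copy backups to the NAS, then prune local copies older than 30 days
0 3 * * 0  rsync -a /home/pi/data/backup/ nas:/volume1/emonpi-backups/ && find /home/pi/data/backup -type f -mtime +30 -delete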

I did have additional entries added to support my SMA inverter, but apart from that the config was as per default. I’ve not done it since, but I think the code I’m running now is at least about 12 months old; probably time to try another update (after checking the backups!)

Interesting, I’ve had a look and /var/log is indeed full; however, there is plenty of space on the disk, so there should be space in /data (/var/opt/emoncms).

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       4.0G  2.7G  1.1G  72% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   56M  433M  12% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
tmpfs           1.0M  4.0K 1020K   1% /var/lib/php/sessions
tmpfs           1.0M     0  1.0M   0% /var/tmp
tmpfs            30M   12K   30M   1% /tmp
/dev/mmcblk0p3   10G  1.7G  7.9G  18% /var/opt/emoncms
/dev/mmcblk0p1  253M   53M  200M  21% /boot
log2ram          50M   50M     0 100% /var/log
tmpfs            98M     0   98M   0% /run/user/1000

I can replicate the nodes duplicating issue each time I run an update or reboot. The update seems to run on each reboot.

Most of the space in /var/log is being taken up by logrotate:

du -h /var/log/
0       /var/log/supervisor
0       /var/log/redis
0       /var/log/private
0       /var/log/mysql
0       /var/log/mosquitto
44M     /var/log/logrotate
72K     /var/log/emonpilcd
5.7M    /var/log/emonhub
12K     /var/log/emoncms
28K     /var/log/apt
0       /var/log/apache2
50M     /var/log/

Specifically /var/log/logrotate/logrotate.log, which is 44 MB. It seems logrotate is not rotating its own log!

The system will eventually stop updating.
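
For reference, one generic way to stop logrotate’s own log growing without bound would be a stanza like this (a sketch only, assuming a standard /etc/logrotate.d layout; the emonSD image may configure logrotate differently, and this is not the fix referred to below):

sudo tee /etc/logrotate.d/logrotate-self >/dev/null <<'EOF'
/var/log/logrotate/logrotate.log {
    size 1M
    rotate 1
    copytruncate
    missingok
    notifempty
}
EOF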

Yes I raised the issue in April and put in a fix a while ago…

This will tell you the largest files:

sudo du -a /var/log/* | sort -n -r | head -n 30

On update, if /var/log/ is full the fix cannot solve the problem, as there is no space left to do the rotation.

Fantastic, I already had your update pulled in. However, since /var/log was already full it wouldn’t run the update via the web interface. After running:

sudo truncate -s 0 /var/log/logrotate/logrotate.log

The update was able to run and emonhub duplicate nodes were not added :smiley:

I wonder what the solution to avoid this catch-22 is? I saw your suggestion regarding adding the truncate command to the update script. However, the update script would not be able to run to pull in and execute the ‘truncate’ command if /var/log is already full? Maybe we need to detect if /var/log is full and temporarily use another location for emonpiupdate.log to allow the update to run?
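
A sketch of that detection idea might be something like this (hypothetical; the 1 MB threshold and /tmp fallback are illustrative, not what the update script actually does):

update_log=/var/log/emonpiupdate.log
# df reports 1K blocks by default, so 1024 is roughly 1 MB free
varlog_available=$(df --output=avail /var/log | tail -n 1)
if [ "$varlog_available" -lt 1024 ]; then
    # /var/log is nearly full, fall back to /tmp for the update log
    update_log=/tmp/emonpiupdate.log
fi
echo "Logging update to $update_log"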

You could make that the first thing the script does, before trying anything else. So run du on the /var/log folder to find the biggest file and truncate it?


Good idea, I’ve added a note to your git issue with this suggestion to discuss with @TrystanLea


That works well for me. I’ve applied this initial fix: improved emonpi_auto_add_nodes script implementation thanks to @borpin · openenergymonitor/emonhub@14c774c · GitHub

I’ve added a check for this as well.

Both merged to stable.

This seems like a good idea. Should we delete the largest file or a specific log file that may be less important? I guess we don’t know if the latter will free up enough space…

This would find the largest file:

sudo find /var/log -type f -printf "%s %p\n" | sort -n | tail -1

from https://www.cyberciti.biz/faq/linux-find-largest-file-in-directory-recursively-using-find-du/


The following near the top of the update file seems to work quite well:

# If /var/log is nearly full, find the largest log file and truncate it
varlog_available=$(df | awk '$NF == "/var/log" { print $4 }')
if [ "$varlog_available" -lt "1024" ]; then
    largest_log_file=$(sudo find /var/log -type f -printf "%s\t%p\n" | sort -n | tail -1 | cut -f 2)
    sudo truncate -s 0 "$largest_log_file"
    echo "$largest_log_file truncated to make room for update log"
fi