How do I stop emon*

Stupid question but how do I stop everything to do with emon* ?

The filesystem holding /home/pi/data has errors, so I want to stop all the emon-related processes without stopping the machine so I can run fsck.

I had a quick search but didn’t find any answers. I also looked on the emoncms admin page and there are quite a lot of buttons but none to shut down the services?

I’m not a Linux expert - I think you’ll need to identify and kill them in turn. (And I can’t ever remember seeing this asked before now.)

This could be one to go on @TrystanLea’s To-Do list?

Thanks for responding, Robert. Merry Christmas! Not the best time to ask a question :grinning:

Killing things is, I suspect, not the best way to do it. Probably some systemctl command, or a specific emon-whatever command, is what I’m hoping to identify. I’ll wait and see a bit. The system seems to be running for the moment even with the filesystem errors.

I suspect this is something that was never thought necessary. All I can suggest is you copy off the data that’s important to you while you can, and have a new SD card available ready to reload the backup onto.

Thanks again, Robert. I do keep backups but I’ve no reason to suspect the SD card at the moment. It was a power failure that took the system down and that can cause filesystem corruption. Most times ext4 will repair it automatically, but sometimes it may need some help.

Starting and stopping systems are the most basic kinds of software control. So I suspect there must be some form of words. Whatever normally happens when the pi is shut down for example.

I can understand starting, my suggestion was that stopping was not anticipated for a system that’s expected to run ‘forever’. Meaning, your desire to stop emonHub & emonCMS, independently of the OS, to work on the OS was not envisaged.

I think that’s the OS shutting down, and taking everything with it, not just emonHub & emonCMS.

That’s not quite how Linux works.

Merry Xmas to all!

The two main services you need to look at stopping are emonhub and/or feedwriter. The old commands still work, e.g. sudo service emonhub stop, or the newer form is sudo systemctl stop emonhub.service (swap “emonhub” for “feedwriter” as needed). If all your data is coming via emonhub you could stop just that, but other data sources such as an emonESP could still post to emoncms, so stopping both might be prudent. However . . .
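For anyone following along, those commands spelled out as a sketch (service names as on a stock emonSD image; adjust if yours differ):

```shell
# Stop the services that write incoming data to disk (emonSD service names)
sudo systemctl stop emonhub.service
sudo systemctl stop feedwriter.service

# Confirm both are inactive before touching the filesystem
systemctl is-active emonhub.service feedwriter.service

# When the maintenance is done, start them again
sudo systemctl start feedwriter.service emonhub.service
```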

I have a cloud server with data partitions set up in much the same way as an emonSD. When I do maintenance or back up the data partition, I simply unmount it: emonhub then continues sending to emoncms, and feedwriter keeps trying (but failing) to save to the data partition until such time as you remount it. There will be no break in the data, as all the data that has built up gets saved at that point. Within reason, of course, i.e. if you don’t take days/weeks to sort it out, or reboot.

I’m sure someone will pipe up and say this is not recommended, but I have done it so many times now that I consider it as risk-free as any “recommended” method. The benefit of this method is that ALL writing to that disk is stopped; by stopping selected services you may miss something. IIRC you will need to unmount to repair anyway.
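As a sketch, the unmount / repair / remount cycle described above would go something like this. The mount point /home/pi/data is the one from this thread, and the device node is deliberately left as a placeholder to be read off findmnt first:

```shell
# Find which device is mounted at the data directory
findmnt -no SOURCE /home/pi/data

# Unmount, check/repair, remount (substitute the device findmnt reported)
sudo umount /home/pi/data
sudo fsck -f /dev/XXX
sudo mount /home/pi/data   # works as-is if the mount is listed in /etc/fstab
```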

As far as the main root partition is concerned, there is a parameter you can add to the end of the /boot/cmdline.txt file that will trigger a file check (and repair) prior to mounting, but IIRC it doesn’t cover any other partition. It might be worth checking out that option though, in case my memory is out or things have changed recently, since this would be a “recommended” and practical solution if available. Otherwise I would just unmount the data partition and crack on, letting the OS and services run and catch up once you remount.

This advice obviously must come with some disclaimers :roll_eyes: It works for me and I see no reason for it not to work on an emonSD, but I do not run emonSD and have not studied the current image in depth, so please back up and proceed with caution if you try it. I’m quietly confident that it should be fine.


Indeed. If a repair on a mounted filesystem is attempted, fsck will complain that damage to said filesystem is likely to occur. Unmounting solves both issues: nothing is being written to the data, and the fsck prerequisite is met.

Here’s to a happy New Year. :grin:


Option 2: force fsck on reboot by passing fsck.mode=force as a kernel parameter, and optionally fsck.repair=yes to let it attempt repairs automatically.
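On a Raspberry Pi those kernel parameters are appended to the single line in /boot/cmdline.txt. Illustratively (the existing parameters shown are typical examples, not copied from any real card), with fsck.repair=yes as the companion option that lets the forced check actually fix what it finds:

```
console=tty1 root=PARTUUID=... rootfstype=ext4 rootwait fsck.mode=force fsck.repair=yes
```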

This should work with many filesystem types, including ext4/ext3/ext2, xfs, ntfs, fat32, exfat, and more. It should also be noted that systemd-fsck does not know any details about specific filesystems, and simply executes file system checkers specific to each filesystem type (/sbin/fsck.*; e.g.: /sbin/fsck.ext4).

systemd runs fsck for each filesystem that has an fsck pass number greater than 0 set in /etc/fstab (the last column), so make sure you edit your /etc/fstab if that’s not the case. The root partition should be set to 1 (first to be checked), while other partitions you want checked should be set to 2. Example:

# /etc/fstab: static file system information.

/dev/sda1  /      ext4  errors=remount-ro  0  1
/dev/sda5  /home  ext4  defaults           0  2

Thanks, Paul. This sounds like a sensible approach. When I try it, I get an error:

$ sudo umount /home/pi/data
umount: /home/pi/data: target is busy
        (In some cases useful info about processes that
         use the device is found by lsof(8) or fuser(1).)

lsof shows that the problems are mysqld and journald. I’ve put the journal there to make it permanent and I can redirect it temporarily. I know how to stop the database but will that screw emon-whatever? And will it reconnect when I restart the database?
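For the record, clearing those two holders would look roughly like this. The name mysql.service is an assumption (newer Debian-based images call it mariadb.service), and the journald step means temporarily setting Storage=volatile in /etc/systemd/journald.conf before restarting it:

```shell
# See exactly which processes keep the mount busy
sudo fuser -vm /home/pi/data

# Stop the database; clients normally reconnect once it returns
sudo systemctl stop mysql.service

# After editing /etc/systemd/journald.conf (Storage=volatile),
# restart journald so logging moves back to RAM
sudo systemctl restart systemd-journald
```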

As regards all the suggestions about /etc/fstab etc., thanks, I know about those and that’s how it’s set. What had been confusing me is that all the log messages about errors start with EXT4-fs, but now I’ve looked at /etc/fstab I see that it’s actually an ext2 partition. No idea why that should be, but it probably explains why the system didn’t fix the errors on startup. :slight_smile:

The root partition is ext4 and isn’t showing any errors.

Oh I forgot to say in my previous posts. A lot of my data comes from a couple of emonTx in the usual way, while the rest is submitted by a number of programs using the emoncms HTTP feed API. No idea whether that is relevant. Data is stored as either PHPFina or PHPTimeSeries.

No idea if it classes as ‘recommended’ or not, but the backup script stops feedwriter to do exactly this: it stops writing to disk, and the Redis buffer grows until it hits the limit or feedwriter is restarted.

By making it ext2, the ext4 journal is eliminated. That helps keep the number of data writes to the partition as low as possible, to help prevent early flash memory failure.
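To double-check what the kernel actually mounted a partition as, /proc/mounts is authoritative. The snippet below checks the root filesystem so it runs anywhere; on the Pi you would point mp at /home/pi/data instead (and sudo tune2fs -l on the device will additionally show whether has_journal is in the feature list):

```shell
# Print the filesystem type the kernel is using for a given mount point
mp="/"   # assumption: root; change to /home/pi/data on the Pi
awk -v mp="$mp" '$2 == mp {print $3}' /proc/mounts | tail -n1
```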


Hi Dave, yes, journald is easy to resolve; this isn’t an issue on emonSDs IIRC, as log2ram handles persisting the journal (if it’s still set up to do so). The mysql one is more of an issue, and I overlooked it because I don’t agree with the way OEM moves the mysql targets around, and I do something different. I guess you now have two options: either move the mysql stuff off the data partition for the duration of the repairs and cross your fingers nothing else crops up, or bite the bullet and accept you need to take this offline. If you have a USB microSD reader and another Linux box, or even just another SD card, you could write a Raspberry Pi OS image to the spare card, swap the cards in your current Pi to boot stock Raspberry Pi OS, and then carry out the repairs to the data partition of the emonSD using the reader. That would be by far the safest and easiest way.

The stuff coming through emonhub is easy to stop if needed but obviously that won’t stop the direct via emoncms api stuff, that’s why we were hoping to use feedwriter to buffer the disk writes. Although I have often just taken the server offline to avoid any “direct” api traffic when I’ve needed to, but I have all data coming via remote emonhubs so the data gets buffered there and simply catches up once I bring the server online again.

So I don’t think there is an easy way for you to do this without losing some data for the duration of the repairs AND avoid lots of fiddling with a fair chunk of risk that might not be worth the headache.

If another Linux box isn’t available, but an external CD reader/writer is, a Linux Live CD is another option. Involves a bit more work, but a Live CD can be a handy rescue tool.