Deleting old data


Running a stand-alone emoncms server on a Pi
I have had a glitch and have rubbish data in a feed. I have located the files on the server. I assume the files relating to the feed are those prefixed by the feed id which is 29 in this case. I have 29.meta, 29_0.dat through to 29_3.dat.
Can I safely delete any of these files to clear old data and if so which ones?



If you want to retain the processes, multigraphs, dashboards etc. for a particular feed, yet delete all historic data, then just delete the corresponding .dat file(s) and leave the .meta file unchanged.

The .dat file will be recreated and logging will continue; however, just check that the ownership of the new .dat file is correct, ie www-data:www-data

If it has changed to root:root, then change it back again:

sudo chown www-data:www-data 29.dat


chown changes ownership.
Permissions are changed via the chmod command.
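To illustrate the difference (the /tmp file here is just a throwaway example; the feed path is from this thread):

```shell
# chown sets the owner and group; chmod sets the permission bits.
touch /tmp/demo.dat
chmod 644 /tmp/demo.dat            # rw-r--r--
stat -c '%a' /tmp/demo.dat         # prints 644
# changing ownership to another user needs root, e.g.:
# sudo chown www-data:www-data /home/pi/data/phpfina/29.dat
```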

Of course you’re correct Bill, post amended.


Many thanks

Problem fixed



I have followed these instructions to delete old data, but the .dat file is recreated still containing the old data. The only change is that the owner and group change to ‘root’. My sequence is:
cd /home/pi/data/phpfina
rm 99.dat
I am sure I am making a silly mistake, but guidance would be appreciated.
Thanks, Tony.

Can you post the result of:
ls -la /home/pi/data/phpfina


@Paul Thanks.

The details are as follows:



$ ls -la /home/pi/data/phpfina
total 112327
drwxr-xr-x 2 www-data root       12288 Dec  4 16:57 .
drwxrwxrwx 8 pi       pi          1024 May  3  2016 ..
-rw-r--r-- 1 www-data www-data  500148 Dec  5 16:36 100.dat
-rw-r--r-- 1 www-data www-data      16 Sep  9 21:39 100.meta
-rw-r--r-- 1 www-data www-data  500144 Dec  5 16:36 101.dat
-rw-r--r-- 1 www-data www-data      16 Sep  9 21:40 101.meta
-rw-r--r-- 1 www-data www-data  500144 Dec  5 16:36 102.dat
-rw-r--r-- 1 www-data www-data      16 Sep  9 21:40 102.meta
-rw-r--r-- 1 www-data www-data  500140 Dec  5 16:36 103.dat
-rw-r--r-- 1 www-data www-data      16 Sep  9 21:43 103.meta
-rw-r--r-- 1 www-data www-data  500136 Dec  5 16:36 104.dat
-rw-r--r-- 1 www-data www-data      16 Sep  9 21:43 104.meta
-rw-r--r-- 1 www-data www-data  500136 Dec  5 16:36 105.dat
-rw-r--r-- 1 www-data www-data      16 Sep  9 21:43 105.meta

Thanks again, Tony

Edit - formatted data. BT, Moderator

Everything looks fine with your feeds Tony, it’s an ownership issue. You are trying to delete files that are owned by www-data so you need to escalate your super powers!

$ sudo rm 99.dat


Sorry it was a typo error on my part.

$ sudo rm 99.dat

Appears to delete the file for about 3 minutes.
During that time $ ls does not list it and the Emoncms feed list shows its size as 0 KB.
Then it reappears the same size as before with the owner and group changed.
Any idea what is happening?
Thanks Tony.


Could the sudo redis-cli FLUSHALL command be needed here?

Quite possibly Bill.
As the .dat file was owned by www-data:www-data and not pi, I wanted to make sure that it was actually deleted OK, which it apparently is.
Tony, can you try flushing the redis database keys as per Bill’s comment and again delete the .dat file.


I tried sudo redis-cli FLUSHALL.
I’m afraid it has made a mess of my data.
It added a second entry to all of my inputs, some of which were logged, while some were not. The follow-on being that my feeds are all over the place.
When it has settled I will try deleting a dat file again.

You will find it tricky to get a clean deletion while there is new data arriving. You should either stop the source of the data whilst removing the feed data or stop the server so that new data cannot be received.
something like…

sudo service apache2 stop
sudo redis-cli FLUSHALL
sudo rm /home/pi/data/phpfina/99.dat
sudo service apache2 start

Thanks for your help.
I have rebuilt my data and tried

sudo service apache2 stop
sudo redis-cli FLUSHALL
sudo rm /home/pi/data/phpfina/99.dat
sudo service apache2 start

It didn’t work. As before the file appeared to be deleted, but was recreated once I ran $ sudo service apache2 start.
Any other ideas?

Is that not the intention? or are you saying the feed is recreated with the original data intact?

I’ve not used this method myself so cannot vouch for it from experience, but by stopping data being posted, flushing and deleting the file directly, by rights you should have removed all trace of it.

The recreated file will be the same size as the original, as the meta file has not been changed and therefore the start time will be the same. The difference should be that all datapoints from the original start time until the recreation will be null, so the feeds page entry will look exactly the same, but if you query a past period, eg last month, there should be no data.

Why are you trying to use this method? It might be easier to create a new feed and just switch all the references from the old feed over to it, unless you have a specific need to do it this way; plus, a new feed would begin now, not at the original start time.

Thank you for your help.

What I am trying to do is to reclaim some of my data storage space as my SD card is filling up, without losing my dashboard layouts, their related multigraphs and widgets etc.
I have managed to do this in the past with the older version of emoncms using sudo rm ?.dat, but not with the current version.

Am I trying the impossible?

If you are trying to reclaim disk space by replacing a file containing data with a file of the same size full of null datapoints then yes, it would seem so.

I’m unsure how this could have previously worked, as the size of the file is strictly governed by the “starttime” in the meta file and the current time; the file is 4 bytes for every “fixed interval”, regardless of whether the datapoint is used or null.
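That size arithmetic can be sketched in shell. The meta offsets used here (interval as a little-endian uint32 at byte 8, starttime at byte 12, after 8 unused bytes) are my assumption about the phpfina layout, so check against your own files; the demo builds a throwaway meta file in /tmp rather than touching live data:

```shell
# demo meta: 8 unused bytes, interval=10 (LE uint32), starttime=1600000000 (LE uint32)
printf '\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x10\x5e\x5f' > /tmp/demo.meta

INTERVAL=$(od -An -tu4 -j8  -N4 /tmp/demo.meta | tr -d ' ')
START=$(od -An -tu4 -j12 -N4 /tmp/demo.meta | tr -d ' ')
NOW=1600000100   # pretend "now" for the demo; on a live system use NOW=$(date +%s)

# 4 bytes per interval elapsed since starttime
echo "expected .dat size: $(( 4 * ((NOW - START) / INTERVAL) )) bytes"
```

With 100 seconds elapsed at a 10 s interval, that predicts a 40-byte .dat file, which is why deleting the file never reclaims space for long.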

You may be able to create a new fixed-interval feed and use that to replace the feed you wish to reduce. For example, if you created a new phpfina feed and noted its feed id as 1234:

sudo service apache2 stop
sudo redis-cli FLUSHALL
sudo rm /home/pi/data/phpfina/99.dat
sudo cp /home/pi/data/phpfina/1234.dat /home/pi/data/phpfina/99.dat
sudo rm /home/pi/data/phpfina/99.meta
sudo cp /home/pi/data/phpfina/1234.meta /home/pi/data/phpfina/99.meta
sudo service apache2 start

I have not tried this myself so maybe create a test feed to try it on before tackling your live data, and always do a full back up before trying “backdoor” stuff like this.

It’s worth mentioning again, that replacing a feed with a totally new one using conventional methods doesn’t mean you will lose your dashboards and multigraphs etc, it just means you will need to replace all references to the old feed with references to the new feed.

When replacing a feed it can be easier to rename the old feed something like “OLD FEED TO BE DELETED !!!” so it stands out in the processlists, and just edit the lines needed until no processlist contains that feed. Then deleting the feed will make all the multigraphs and dash widgets that used it look “unconfigured”, so it’s easy to identify and edit the configs to point at the new feed too.

What would be useful here is a command-line tool that can overwrite the meta file “starttime” and reduce the file size to suit. This would allow older data to be archived and feeds to be reset etc.
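As an untested sketch of what such a tool could look like: the function below drops everything before a new start time and patches the meta file to match. It assumes the phpfina layout (interval as a little-endian uint32 at byte 8 of the meta, starttime at byte 12, 4 bytes per datapoint in the .dat file) and that the new start is later than the current one. Only run anything like this with the server stopped and a full backup taken:

```shell
# Hypothetical feed-trimming sketch -- untested against a live emoncms install.
trim_phpfina() {
    local meta=$1 dat=$2 new_start=$3
    local interval start skip
    interval=$(od -An -tu4 -j8  -N4 "$meta" | tr -d ' ')
    start=$(od -An -tu4 -j12 -N4 "$meta" | tr -d ' ')
    # align the requested new start to the feed's interval grid
    new_start=$(( start + ((new_start - start) / interval) * interval ))
    skip=$(( 4 * ((new_start - start) / interval) ))
    # drop the first $skip bytes of the data file
    tail -c +$((skip + 1)) "$dat" > "$dat.new" && mv "$dat.new" "$dat"
    # patch the new starttime into the meta (little-endian uint32 at offset 12)
    printf "$(printf '\\x%02x\\x%02x\\x%02x\\x%02x' \
        $(( new_start & 0xff )) $(( (new_start >> 8) & 0xff )) \
        $(( (new_start >> 16) & 0xff )) $(( (new_start >> 24) & 0xff )))" \
        | dd of="$meta" bs=1 seek=12 conv=notrunc 2>/dev/null
}
```

Usage would be something like `trim_phpfina 99.meta 99.dat "$(date -d '30 days ago' +%s)"`, run as root with apache2 stopped and redis flushed.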

Thank you again.
I will follow your recommendation.
My problem is I have about 300 feeds, so it will be a lot to reconfigure.

I certainly agree with your last point.

If you test the first idea of using a newly created feed to reset an existing feed and it works, you can then easily stretch the example I gave to more feeds using a small bash script (run as root), eg


OLD_FEEDS=( 1 3 44 59 62 )
NEW_FEED=1234

service apache2 stop
redis-cli FLUSHALL
for OLD_FEED in "${OLD_FEEDS[@]}"; do
    rm /home/pi/data/phpfina/$OLD_FEED.dat
    cp /home/pi/data/phpfina/$NEW_FEED.dat /home/pi/data/phpfina/$OLD_FEED.dat
    rm /home/pi/data/phpfina/$OLD_FEED.meta
    cp /home/pi/data/phpfina/$NEW_FEED.meta /home/pi/data/phpfina/$OLD_FEED.meta
done
service apache2 start

This would need a “NEW_FEED” created for each different interval used and the script run once for each interval size with the list of “OLD_FEEDS” to suit.

If all your feeds use the same interval and you are replacing them all you could simplify it further by looping through the folder content directly rather than creating an array of feedids.
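That folder-loop variant might look like the sketch below (untested, same caveats as above): every .dat/.meta pair in the data directory is overwritten from one template feed, so it assumes all feeds really do share the template's interval:

```shell
# Sketch only: reset every phpfina feed in $1 from template feed id $2.
# Run as root with apache2 stopped and redis flushed.
reset_feeds() {
    local dir=$1 template=$2
    local dat feed
    for dat in "$dir"/*.dat; do
        feed=$(basename "$dat" .dat)
        [ "$feed" = "$template" ] && continue   # don't overwrite the template itself
        cp "$dir/$template.dat"  "$dir/$feed.dat"
        cp "$dir/$template.meta" "$dir/$feed.meta"
    done
}
# eg:  reset_feeds /home/pi/data/phpfina 1234
```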

EDIT - Please backup and test before committing to any of these suggestions. The script is also untested, just written off the cuff and intended as a guide, not a complete, tried-and-tested solution.