Emoncms backup to Dropbox

Well (not tested), but I believe that the location where PHAR saves its temporary files can be changed in php.ini -
sys_temp_dir = "/tmp"
…to the writeable data partition, but that is a global change and will affect all php caching actions.
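
For illustration only (untested, and the path is just an example of somewhere on the writeable data partition), the change in php.ini would look something like this:

    ; php.ini - relocate PHP's temporary files off the tmpfs /tmp
    sys_temp_dir = "/home/pi/data/tmp"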

Probably not a good idea for a read-only system, but then again I'm not aware of what other PHP functions actually cache their data - there's not much activity in the current /tmp.

I would prefer to have @glyn.hudson's opinion before endorsing this approach.

Paul

If a maximum isn't specified, the maximum defaults to half of the available memory.
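
For context, assuming /tmp is a tmpfs mount defined in /etc/fstab (as on the low-write image), the cap can be set explicitly with the size option, for example:

    # /etc/fstab - example entry only; without size=, tmpfs defaults to half of RAM
    tmpfs  /tmp  tmpfs  nodev,nosuid,size=30M  0  0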

Speaking of PHP caching, have you considered using xcache? https://xcache.lighttpd.net/wiki/Introduction

I’m unsure how that would help.

Paul

TBH, it may not help. The web page says:

It optimizes performance by removing the compilation time of PHP scripts
by caching the compiled state of PHP scripts into the shm (RAM) and
uses the compiled version straight from the RAM. This will increase the
rate of page generation time by up to 5 times as it also optimizes many
other aspects of php scripts and reduces serverload.

Install is quick and easy - sudo apt-get install php5-xcache - so I figured it might be worth a try
since apt-get remove --purge php5-xcache gets rid of it just as easily.
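
If anyone does try it, a quick way to confirm the extension has loaded (nothing xcache-specific assumed here, just the standard PHP CLI module listing):

    php -m | grep -i xcache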

[quote=“Paul, post:20, topic:413, full:true”]
This wouldn't be a problem for non-'emonpi' users, because /tmp would normally be written to disk and not in RAM.[/quote]

I was thinking about the above comments, and that you use an HD and that I use a low-write emonPi (no HD). Is the backup script beating up the SD card as it writes to the temp_data directory (130MB), creates/writes an archive.tar (another 130MB) and then writes the compressed tar.gz (39MB)?

EDIT: with just emonPi data I am guessing I write less than 0.4 MB per day to the SD.

Even though it’s written/read in a few seconds just once every 24hrs, it’s still a lot of data being written, which undermines the ethos of having a low-write system to protect the SD card.

On reflection, this solution is probably best reserved for non-low-write users, which is what it was originally written for (emonPi users have the USB backup facility).
Even though it (just) works for you now, Jon, as your data size continues to grow it will further squeeze your RAM, until it runs into difficulty.

Paul

USB pen-drive backup then upload to the cloud sounds like a good idea.

I have not looked closely at your script, but I'm surprised at the size of the compressed backup file. What's included? The Emoncms backup module produces much smaller compressed backup files, with all the feed data, dashboards etc. included.

The feed directories, Node-RED config & a MySQL dump, which uncompressed comes to about 130MB in Jon's installation.

I’ll have a look at using tar instead of phar by moving some of the functions out of php and into bash, and see if that improves the RAM issue.
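
One attraction of tar is that it can compress straight to the .tar.gz in a single pass, so there is no intermediate archive.tar left sitting in /tmp. A rough sketch only - the feed paths, database credentials and output location here are placeholders, not necessarily what the script will end up using:

    # dump the database, then stream the archive straight through gzip
    mysqldump -u emoncms -p'yourpassword' emoncms > /tmp/emoncms.sql
    tar -czf /tmp/emoncms-backup.tar.gz /var/lib/phpfina /var/lib/phpfiwa /tmp/emoncms.sql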

Paul

Glyn - the backup from Setup > Backup > Export > Create backup is:

  • 39.2 MB for the emoncms-backup-2016-06-03.tar.gz file
  • 130.1 MB for the uncompressed files.

This does not include the Node-Red items. So the files & folders come to the same size for both the emonPi backup module and the Dropbox backup script.

What (or where) is the USB backup facility for emonPi users?

I don't know that much about it (as I don't have an emonPi), but here's the link.

Paul

Got it! The emoncms-export.sh script does a backup of the data files to the /home/pi/data directory, and that is the script I had been using. No USB involved.

…So if your SD card fails, you’ve lost your data & data backup too…?

Paul

The emoncms-export.sh script is for moving data to a newly created image (new SD card). I don't think it was meant for nightly backups.

For me, I run an rsync to a USB drive every night (sketched below). It halts various services to do the backup. And it is local, so I'm one big power glitch away from losing both data & data backup. That is why I wanted a cloud backup.
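
The shape of that nightly job is roughly this - the service names, mount point and paths are from my own setup and are shown only as an example:

    # stop the writers so the feed files are consistent, sync, then restart
    sudo service feedwriter stop
    sudo service mysql stop
    rsync --archive --delete /home/pi/data/ /media/usb-backup/emoncms/
    sudo service mysql start
    sudo service feedwriter start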

Well let’s pursue this further…
I also believe cloud (offsite) backup is the way to go, so with a few pointers from Glyn, we might just get there in the end.
I’ll have another look at this over the coming week, and update soon!

Paul

One of the things to look at is incremental backups. The first one will be heavy, but from there on it will be far less heavy and time-consuming.

It is more like synchronising between directories. It takes longer to start the real backup, as it needs to compare files, but once that's done it goes very quickly.
It also has the advantage of using less bandwidth for those with limited or slow connections.
This approach might (depending on the script doing the work) also remove the need for local space for the compressed file, as it won't take the full image, compress it and then send it - it will go file by file.

And these days sync programs often have the ability to retain x days' worth and then erase the older versions, so you have x days of backups.

This script already does this.

Paul

[quote=“bidouilleur, post:37, topic:413, full:true”]
One of the things to look at is incremental backups. The first one will be heavy, but from there on it will be far less heavy and time-consuming.[/quote]

Eric - This is the main reason I use rsync (e.g., rsync --archive --hard-links --stats --link-dest= …). The --link-dest is the key flag. It tells the rsync process that if a file on the Pi is identical to the file in yesterday's backup, then rsync will create a hard link instead of transferring the same file again. Very fast for the second through Nth backups. It even has a compression flag (-z) for smaller/faster transfers across networks.
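
To make that concrete (the dates and paths here are purely illustrative):

    # files unchanged since yesterday's snapshot become hard links, not new copies
    rsync --archive --hard-links --stats \
          --link-dest=/media/usb-backup/2016-06-02 \
          /home/pi/data/ /media/usb-backup/2016-06-03/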

I just started looking into acd-cli. “acd” is Amazon Cloud Drive. I think this (and maybe FUSE) will allow the use of rsync. I already use ACD for backup of my desktop computer user directory.
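
From a first read of the acd_cli documentation, the idea would be something like this - treat it as a sketch from memory rather than tested commands:

    acd_cli sync                  # refresh the local cache of the drive's metadata
    acd_cli mount /mnt/acd        # FUSE-mount Amazon Cloud Drive
    rsync --archive /home/pi/data/ /mnt/acd/emoncms-backup/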

I've just reworked the Dropbox-archive module with the following changes:

  • The majority of the functions are now contained in a bash script instead of php, which allows the use of ‘tar’ to compress the data instead of ‘phar’.
  • This now uses far less RAM than the initial version to complete the compression, and should work OK with a 30MB /tmp restriction.
  • It was necessary to create a new default.settings.conf file, which allows both the PHP scripts and the bash script to share a single user settings.conf file (see the sketch below).
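
The trick, in outline (the keys shown here are made up for illustration), is to keep settings.conf to plain key="value" lines:

    backup_source="/home/pi/data"
    dropbox_folder="emoncms-backup"

Bash can pull those straight in with source, and PHP can read the same file as key/value pairs, for example with parse_ini_file():

    # bash side - load the shared settings into the shell
    source settings.conf
    echo "$backup_source"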

Upgrading to new version

  1. Run a git pull, and the latest version will be installed.
  2. Copy default.settings.conf to settings.conf.
  3. Edit settings.conf with your user settings.
  4. Delete the old ‘config.php’ file, as it is no longer used (see the command summary below).
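
For reference, the upgrade steps above amount to roughly the following commands (assuming the module was cloned into a dropbox-archive directory - adjust the path to suit):

    cd dropbox-archive
    git pull
    cp default.settings.conf settings.conf
    nano settings.conf      # enter your user settings
    rm config.php           # no longer used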

New install

  1. git clone https://github.com/Paul-Reed/dropbox-archive.git
  2. Copy default.settings.conf to settings.conf.
  3. Edit settings.conf with your user settings.
  4. The first time that you run the script, it will prompt you to set up your Dropbox API; just follow the on-screen prompts. The most common cause of errors is not copying the authorization URL accurately, due to misreading characters such as O (letter) and 0 (number), or 1 (number) and I (letter).
    This only needs to be done once. When completed, run the backup.php script again, and it will create an archive and upload it to your Dropbox ‘app’ folder (see the command summary below).
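
Again for reference, a fresh install comes down to something like this (the php invocation for the first run is an assumption - use whatever you normally use to run the script):

    git clone https://github.com/Paul-Reed/dropbox-archive.git
    cd dropbox-archive
    cp default.settings.conf settings.conf
    nano settings.conf      # enter your user settings
    php backup.php          # first run walks through the Dropbox API setup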

Paul

Paul - I haven’t had time to get back to the Cloud backup yet, but I will!
