Well (not tested), but I believe the location where PHAR saves its temporary files can be changed in php.ini - sys_temp_dir = "/tmp"
…to the writeable data partition, but that is a global change and will affect all php caching actions.
Probably not a good idea for a read-only system, but then again I'm not aware of what other PHP functions actually cache their data - there's not much activity in the current /tmp.
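For anyone wanting to try it, the change above would look something like this in php.ini (the path is just an illustration - substitute wherever your writeable data partition is mounted, and remember this affects everything PHP writes to its temp dir, not just PHAR):

```ini
; php.ini - redirect PHP's temporary files (including PHAR) away from /tmp
; Path is an assumption; point it at your writeable data partition.
sys_temp_dir = "/home/pi/data/tmp"
```

The directory must exist and be writeable by the web server user before PHP will use it.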
I would prefer to have @glyn.hudson 's opinion before endorsing this approach.
It optimizes performance by removing the compilation time of PHP scripts: it caches the compiled state of PHP scripts in shm (RAM) and uses the compiled version straight from RAM. This can increase page generation speed by up to 5 times, as it also optimizes many other aspects of PHP scripts and reduces server load.
Install is quick and easy - sudo apt-get install php5-xcache - so I figured it might be worth a try, since apt-get remove --purge php5-xcache gets rid of it just as easily.
[quote=“Paul, post:20, topic:413, full:true”]
This wouldn’t be a problem for none ‘emonpi’ users, because /tmp would normally be written to disk and not in RAM.[/quote]
I was thinking about the above comments, and that you use a HD and that I use a low-write emonPi (no HD). Is the backup script beating up the SD card as it writes to the temp_data directory (130MB), creates/writes an archive.tar (another 130MB), and then writes the compressed tar.gz (39MB)?
EDIT: with just emonPi data I am guessing I write less than 0.4 MB per day to the SD.
Even though it’s written/read in a few seconds just once every 24hrs, it’s still a lot of data being written, which undermines the ethos of having a low-write system to protect the SD card.
On reflection, this solution is probably best reserved for non low-write users, which is what it was originally written for (emonpi users have the USB backup facility).
Even though it (just) works for you now, Jon, as your data size continues to grow it will further squeeze your RAM until it runs into difficulty.
USB pen-drive backup then upload to the cloud sounds like a good idea.
I have not looked closely at your script, but I'm surprised at the size of the compressed backup file. What's included? The Emoncms backup module produces much smaller compressed backup files, with all the feed data, dashboards etc. included.
Got it! The emoncms-export.sh script does a backup of the data files to the /home/pi/data directory, and that is the script I had been using. No USB involved.
The emoncms-export.sh is for moving data to a newly created image (new SD card). I don’t think it was meant for nightly backups.
For me, I run an rsync to a USB drive every night. It halts various services to do the backup, and it is local, so I'm one big power glitch away from losing both my data and my backup. That is why I wanted a cloud backup.
Well let’s pursue this further…
I also believe cloud (offsite) backup is the way to go, so with a few pointers from Glyn, we might just get there in the end.
I’ll have another look at this over the coming week, and update soon!
One of the things to look at is incremental backups. The first one will be heavy, but from then on they will be far less heavy and time-consuming.
It is more like synchronising between directories. It needs more time to start the real backup, as it has to compare files, but once that's done it goes very quickly.
It also has the advantage of using less bandwidth, for those with limited or slow connections.
This approach might also (depending on the script doing the work) remove the need for local space for the compressed file, as it won't take a full image, compress it and then send it - it will go file by file.
And these days sync programs often have the ability to retain x days of history and then erase the older versions, so you have x days of backup.
[quote=“bidouilleur, post:37, topic:413, full:true”]
One of the things to look at is incremental back ups. The first one will be heavy but from there on it will be way less heavy and time consuming[/quote]
Eric - This is the main reason I use rsync (e.g., rsync --archive --hard-links --stats --link-dest= …). The --link-dest is the key flag. It tells the rsync process that if a file on the Pi is identical to the file in yesterday's backup, rsync will create a hard link instead of transferring the same file again. Very fast for the second through Nth backup. It even has a compression flag (-z) for smaller/faster transfers across networks.
I just started looking into acd-cli. “acd” is Amazon Cloud Drive. I think this (and maybe FUSE) will allow the use of rsync. I already use ACD for backup of my desktop computer user directory.
Copy default.settings.conf to settings.conf
Edit settings.conf with your user settings
The first time that you run the script, it will prompt you to set up your Dropbox API; just follow the on-screen prompts. The most common cause of error is not copying the authorization URL accurately, due to misreading characters such as O (letter) vs 0 (number), and 1 (number) vs I (letter).
This only needs to be done once. When it's completed, run the backup.php script again, and it will create an archive and upload it to your Dropbox 'app' folder.
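Pulling those steps together, a first-time session would look roughly like this (an illustrative transcript, not runnable on its own - the file names are the ones above, and the editor is whatever you prefer):

```
cp default.settings.conf settings.conf
nano settings.conf        # fill in your user settings
php backup.php            # first run: walks you through the Dropbox API setup
php backup.php            # subsequent runs: create the archive and upload it
```

After the one-off authorization, only the last command is needed (e.g. from a nightly cron job).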