How to back up Photos from a mobile onto a Synology

I used to rely on DS Cloud to back up my complete Android mobile onto my Synology (both internal and external storage). But since Android 4.4.x, files on the external SD card may only be written under /storage/<your external sd>/Android/data/com.synology.dscloud/files. This means DS Cloud may not back up the DCIM folder located on the external SD.

The solution is to get rid of DS Cloud and use DS File or DS Photo - or a third party, but this is a less preferred option for me...


DS Photo

It has a feature to back up all photos from a mobile (Android or iOS) into one folder/album of Photo Station. But it means that Photo Station must be installed.

See documentation here (look for Upload and Download Photos).

It's really slow because it needs to create the thumbnails, etc., for Photo Station, and it even seems to stop from time to time. You have to open it to check that it is effectively running.

Pay attention not to check the option that frees space. It will delete the photos on your mobile after the upload.

NB: It also backs up movies ("You can upload photos or videos from ..."), but it does not seem to do so as long as not all photos have been uploaded. Many old videos were not yet uploaded onto my Synology although photos of the same age had already been processed. Only after the backup of thousands of photos did I finally see a long list of videos being uploaded.

DS File

It has a feature to back up all photos and videos into any subfolder of a shared drive. As far as I am concerned, I back up into a subfolder of my "home" on the Synology.

See documentation here (look for Backing up Photos and Videos).

Pay attention not to check the option that frees space. It will delete the photos and movies on your mobile after the backup.

DS File can back up photos and videos from all detected locations containing media: DCIM (external storage), Office Lens, PhotosEditor, WhatsApp Images, WhatsApp Video. But you cannot specify a custom folder yourself. It is nevertheless, in my opinion, the best option to back up all media.

Attention: I noticed that DS File consumed a lot of power during the first backup, so the system suggested putting it in deep sleep mode. But doing that stops the background backup process. Compared to DS Photo, the behavior and configuration are almost exactly the same, except that you can choose the target folder on the Synology. DS File is also much faster (as it does not have to create anything for Photo Station) and backs up all videos and photos simultaneously.

Moment

There is now a new application, named Synology Moment, which also comes with a backup feature for photos and videos, similar to Photo Station. I haven't tested it yet.

(Synology Moment is combined with Synology Drive, an application replacing Cloud Station Server)

Synology Cloud Station and Cloud Sync are slow

I was wondering why Cloud Station Backup and Cloud Sync were so slow on my NAS. It appeared that it was "normal" for Cloud Sync, but could be improved for Cloud Station Backup, especially on my LAN, by using my NAS' local IP instead of the "QuickConnect" feature.


Cloud Station

I found here that when using "QuickConnect" with Cloud Station, the traffic is routed through Synology's servers, making it incredibly slow. On my LAN, it really accelerated as soon as I replaced QuickConnect with the local IP of my NAS or with a domain name resolved by my local DNS.

Cloud Sync

I found here the reason why Cloud Sync was slower than native Sync applications. The answer given by Synology is:

Cloud Sync is performing sync, it is a heavy process, please see below detailed information from the help: Why is Cloud Sync slower than other cloud services' PC applications?

Due to the below limitations, Cloud Sync might take longer to sync files with public cloud services than the official PC applications such as Dropbox and Baidu.


  • Speed limits imposed by cloud service providers: Although no public cloud service provider has disclosed any information related to this topic, it is highly possible that public cloud servers communicate with their official PC applications through a dedicated tunnel/protocol, or impose bandwidth limitations on third-party applications that access their services on a regular basis (such as Cloud Sync).
  • No incremental update: Some cloud service providers do not release a public API for developers to track file changes. This means Cloud Sync must re-download and sync the entire file every time a file has been modified, even when you've only made partial modifications. On the other hand, cloud service providers' official PC applications might be able to re-download and sync the modified bits only, reducing sync time.
  • No local network sync: Certain public cloud service providers offer LAN sync, a technique that allows one client to obtain files from existing clients in the same local area network (LAN), thus significantly boosting sync speed. However, LAN sync accesses files in client computers without notifying them, which could possibly become a security backdoor. Therefore, this feature is not included in Cloud Sync.

Given the above limitations, the syncing performance of Cloud Sync shall continue to be enhanced, while also maintaining the safety and security of your Synology NAS.

DS currently does not control the sync speed, and does not have function to enhance it.

However, we will continue to try to improve our service and product.

Schedule a Backup of all your MySQL databases on Synology

It would be a good idea to schedule a daily backup of all the MySQL databases on your Synology. They will indeed be wiped out if, by accident, you have to reset your NAS (reinstall the firmware), e.g. if you lose a disk in your RAID 0.

That can be done using a shell script and a cron job, as described hereafter.

How to Backup

First, create a user with local read-only access on the databases. We will indeed have to leave its password in the script, which endangers security. Using a local read-only user will mitigate the risk.

  1. Go to the web administration interface (DSM) of your NAS.
  2. Install phpMyAdmin via the "Package Center" if not yet done.
  3. Open phpMyAdmin via the "Start Menu" ("Main Menu").
    1. I am using the login 'root' with the password of Syno's 'admin' user
  4. Open the "Users" tab in phpMyAdmin
  5. Click "Add User" (below the list of existing users)
  6. Enter the User Name. Ex.: backup
  7. Select "localhost" as the Host.
  8. Enter a password. Ex.: mysql
  9. Keep "none" as "Database for user"
  10. Grant the user Global privileges: Select (in Data) and Reload, Show Databases and Lock Tables (in Administration)
  11. Click "Add User" at the bottom of the page
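If you prefer the command line, the phpMyAdmin steps above boil down to a few SQL statements. This is only a sketch: the user name 'backup' and password 'mysql' are the examples from this article, and the mysql binary path is the pre-DSM 6.0 one; the snippet merely writes the SQL to a temporary file, and feeding it to the server is shown as a comment.

```shell
# The phpMyAdmin steps above correspond to this SQL. It is written to a
# temporary file here; on the NAS you would pipe it into the mysql client.
SQLFILE=$(mktemp)
cat > "$SQLFILE" <<'EOF'
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'mysql';
GRANT SELECT, RELOAD, SHOW DATABASES, LOCK TABLES ON *.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
EOF
# e.g. (as MySQL root): /usr/syno/mysql/bin/mysql -uroot -p < "$SQLFILE"
```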

Next, create a shell script in a Shared Folder of the NAS (Ex.: /volume1/backup/backupMySql.sh). Working in a Shared Folder will make it easier for you to copy/open the backups later from your PC. Don't forget to create a "Unix" file, either using the touch command in a Console or by saving the file as a "Unix shell script" within Notepad++. Notice that a script created with Notepad++ and saved on the Shared Folder will belong to the user account accessing that Shared Folder (most probably your Windows account if, like me, you simply created a user on your NAS with the same login and password). A script created with "touch" in a Console will belong to the user accessing the NAS via telnet/SSH (most probably the "root" account).
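If you go the Console route, here is a minimal sketch of preparing such a script file. A temporary folder stands in for the Shared Folder, and the script content is a placeholder:

```shell
# Sketch: create the backup script as a proper Unix file. A temporary
# folder stands in for /volume1/backup/ so this can be tried anywhere.
DIR=$(mktemp -d)
SCRIPT="$DIR/backupMySql.sh"
printf '#!/bin/sh\necho backup\n' > "$SCRIPT"   # placeholder content
chmod 755 "$SCRIPT"             # make it executable
sed -i 's/\r$//' "$SCRIPT"      # strip Windows CR characters (CRLF -> LF),
                                # useful if the file was edited from a PC
```

The CRLF fix matters because a script saved with Windows line endings fails on the NAS with cryptic "bad interpreter" errors.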

#!/bin/sh
#
DIR=/volume1/backup/sqlbackup/
DATESTAMP=$(date +%Y%m%d%H%M%S)
DB_USER=backup
DB_PASS=mysql

# create backup dir if it does not exist
mkdir -p ${DIR}

# remove backups older than $DAYS_KEEP
#DAYS_KEEP=30
#find ${DIR}* -mtime +$DAYS_KEEP -exec rm -f {} \; 2> /dev/null

# remove all backups except the $KEEP latest
KEEP=5
BACKUPS=`find ${DIR} -name "mysqldump-*.gz" | wc -l | sed 's/\ //g'`
while [ $BACKUPS -ge $KEEP ]
do
  ls -tr1 ${DIR}mysqldump-*.gz | head -n 1 | xargs rm -f
  BACKUPS=`expr $BACKUPS - 1`
done

#
# create backups securely
#umask 006

# dump all the databases in a gzip file
FILENAME=${DIR}mysqldump-${DATESTAMP}.gz
/usr/syno/mysql/bin/mysqldump --user=$DB_USER --password=$DB_PASS --opt --all-databases --flush-logs | gzip > $FILENAME

NB: Since DSM 6.0, “/usr/syno/mysql/bin/mysqldump” has been moved to “/bin/mysqldump” !!!
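The retention part of the script can be tried on its own. This sketch creates dummy archives in a temporary folder (a stand-in for /volume1/backup/sqlbackup/) and prunes them the same way the script does: the loop removes files until fewer than $KEEP remain, leaving one slot for the new dump created afterwards.

```shell
# Standalone demo of the retention loop used in the script above.
DIR=$(mktemp -d)/               # stand-in for /volume1/backup/sqlbackup/
KEEP=5
for i in 1 2 3 4 5 6 7; do      # fabricate 7 old archives
  touch "${DIR}mysqldump-2017010100000${i}.gz"
done
BACKUPS=$(find "$DIR" -name "mysqldump-*.gz" | wc -l | sed 's/ //g')
while [ "$BACKUPS" -ge "$KEEP" ]; do
  ls -tr1 "${DIR}"mysqldump-*.gz | head -n 1 | xargs rm -f   # drop the oldest
  BACKUPS=$(expr "$BACKUPS" - 1)
done
# 4 archives remain: the 5th slot is taken by the dump created right after
```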

Possibly, type the following command in a Console (telnet/SSH) to set the user 'root' as owner of the script:

chown root /volume1/backup/backupMySql.sh

IMPORTANT notice: I used to have "-u $DB_USER" (with a blank space in between) and "-p$DB_PASS" (without a blank) in my script above instead of --user and --password. But a reader (Fredy) reported that the script was not running fine for him unless he removed the blank after -u. As per the documentation of mysqldump's user parameter and password parameter, a blank is allowed after -u but not after -p. However, samples on the web usually illustrate the use of mysqldump with "-uroot". So, I decided to use the more explicit notation "--user=" and "--password=". I did test this notation with a wrong username or password, and the resulting dump is indeed empty. With the correct username and password, it works fine.

Since DSM 4.2, the Task Scheduler can be used to run the script on a daily basis.

  1. Go to the Start Menu
  2. Open the Control Panel
  3. In the "Application Settings", open the Task Scheduler
  4. Select "Create a User-Defined Script"
  5. Type a name for that "Task:"
  6. Keep the "User:" root
  7. In the "Run Command" pane, type:
    sh /volume1/backup/backupMySql.sh

Don't forget the "sh" in front of your command; otherwise, it does not work (although the sample provided by Synology omits it?!)

If you don't use the Task Scheduler, you can add a Cron Job to execute the script, e.g. every day at 0:01. Open a Console (Telnet/SSH) and type:

echo "1       0       *       *       *       root    sh /volume1/backup/backupMySql.sh" >> /etc/crontab

FYI, a cron line syntax is "mm hh dd MMM DDD user task" where:

  • mm is the minute (0..59)
  • hh is the hour (0..23)
  • dd is the day of the month (1..31)
  • MMM is the month (jan, feb, ... or 1..12)
  • DDD is the day of the week (sun, mon, ... or 0..7, where 0 and 7 both mean Sunday)

The following values can also be used:

  • * : every unit (0, 1, 2, 3, 4...)
  • 5,8 : at units 5 and 8
  • 2-5 : units 2 to 5 (2, 3, 4, 5)
  • */3 : every 3 units (0, 3, 6, 9...)
  • 10-20/3 : every 3 units from the 10th to the 20th (10, 13, 16, 19)
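A few example lines combining these fields; the path is the one used in this article, the extra schedules are purely illustrative. The sketch writes them to a temporary file so it can be tried safely; on the NAS, such lines would go into /etc/crontab.

```shell
# Example cron lines in the "mm hh dd MMM DDD user task" format,
# written to a temporary file for illustration.
CRON=$(mktemp)
cat > "$CRON" <<'EOF'
1  0   *    *  *    root  sh /volume1/backup/backupMySql.sh
30 3   *    *  sun  root  sh /volume1/backup/backupMySql.sh
0  */3 1,15 *  *    root  sh /volume1/backup/backupMySql.sh
EOF
# line 1: every day at 00:01 (the schedule used in this article)
# line 2: every Sunday at 03:30
# line 3: every 3 hours on the 1st and 15th of each month
```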

So, the script will start every day at 00:01.

Finally, you must restart the cron daemon to activate the new job. To find the process id and send a SIGHUP signal to this one, type the following command line in a Console (Telnet/SSH):

ps | grep crond | grep -v grep | awk '{print$1}' | xargs -t kill -HUP

It should display "kill -HUP xxxx" where xxxx is the pid of the cron daemon.

Added on 03/01/2013

Attention: if you copy/paste the command to restart the cron daemon from this page into the telnet console, some symbols will be wiped out (the quotes and braces around print$1); you have to type them manually.

Attention: if you upgrade the version of DSM, the changes made in the cron script are lost (at least, that's what I noticed after updating last month...). That's why I recommend using the new "Task Scheduler" feature available since DSM 4.2.

Tip to edit the cron script: you can use the package "Config File Editor" available here. Once installed, you can access it via the Main Menu. Then, you have to edit the file named "Config File Editor" and add the following line:

/etc/crontab,crontab

Once this line is added and saved, ... I don't remember how to force the change to be taken into account :/. But restarting the DiskStation is enough :D

Finally, back into the Config File Editor, select the 'crontab' entry and modify this script, save your changes and restart the cron daemon.

Tip to restart the cron daemon: you can use the package "Web Console" available here. To install it, go to the Package Center and add the following url via the Settings > Package Sources : http://missilehugger.com/708/synology-package-web-console. Once this url added, go to the tab "Other Sources" and click Install on the "Web Console" icon.

Run the Web Console via the Main Menu, log in with "admin"/"admin" (The defaults values if not changed) and type:

synoservice --restart crond

NB: Since DSM 6.0, "Web Console" does not work anymore.

Added on 01/12/2013

If you want a lazy solution to notify the Administrator about the success or failure of the backup, you can use the 'synonotify' command (See details here). A more advanced solution would be to configure the "Mail Server" and use its 'sendmail' command:  /volume1/@appstore/MailServer/sbin/sendmail...

Added on 08/01/2017

mysqldump has moved

  • DSM 6.0 : /usr/syno/mysql/bin/mysqldump
  • MariaDB 5: /volume1/@appstore/MariaDB/usr/bin/mysqldump
  • MariaDB 10: /volume1/@appstore/MariaDB10/usr/local/mariadb10/bin/mysqldump

How to Restore

To do a full restore, simply:

  1. Copy the archive (.gz) you plan to restore on your local PC
  2. Go to "phpMyAdmin" > "Import" > "Choose File" (next to Browse your computer).
  3. Select the .gz file to be restored
  4. Press "Go" at the bottom of the page (no need to change any settings)

Pay attention: this is a complete restore, i.e. all databases are dropped and recreated from scratch.

If you want to be able to restore a subset of your databases, you had better change the script to back up specific databases instead of using --all-databases.

If you want to restore only one table 'mytable' from the backup 'mysqldump-datetime.gz':

  1. First unzip the archive: gunzip mysqldump-datetime.gz
  2. Then extract the part that restores the desired table with this command: sed -n -e '/DROP TABLE.*mytable/,/UNLOCK TABLES;/p' mysqldump-datetime > mytabledump
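To see the sed extraction at work without a real backup, here is a self-contained sketch. The small dump below is fabricated for illustration (two hypothetical tables, 'other' and 'mytable'); only the statements for 'mytable' end up in mytabledump:

```shell
# Fabricated mini-dump with two tables, to demonstrate the extraction.
DUMP=$(mktemp)
cat > "$DUMP" <<'EOF'
DROP TABLE IF EXISTS `other`;
CREATE TABLE `other` (`id` int);
UNLOCK TABLES;
DROP TABLE IF EXISTS `mytable`;
CREATE TABLE `mytable` (`id` int);
LOCK TABLES `mytable` WRITE;
INSERT INTO `mytable` VALUES (1);
UNLOCK TABLES;
EOF
# Same sed command as above: keep from mytable's DROP TABLE to the next
# UNLOCK TABLES;
sed -n -e '/DROP TABLE.*mytable/,/UNLOCK TABLES;/p' "$DUMP" > mytabledump
# mytabledump now holds only mytable's statements; restore it with e.g.:
#   mysql --user=... --password=... mydatabase < mytabledump
```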

Download the script here.