Tips Synchronize with an array of disks

Months ago, I backed up a very large disk of my PC (E:\) to multiple smaller disks connected one after the other, using an e-SATA cradle (G:\).

Recently, one of those small backup disks started to experience NTFS errors and I lost its content. To avoid redoing a complete backup of E:\, I had to find which data were lost in order to back up only those.

First, I recreated the complete folder structure (i.e. without the files) of the remaining backup disks in one folder of my PC (C:\Temp\Backups). I reconnected the disks one after the other via the e-SATA cradle and executed each time the following commands in an MS-DOS console:

MkDir C:\Temp\Backups
XCopy G:\ C:\Temp\Backups /T /E

More details are available by typing "XCopy /?" in your console:

/T Creates directory structure, but does not copy files. Does not include empty directories or subdirectories. /T /E includes empty directories and subdirectories.

The result was 22,036 folders - 0 bytes.

Notice that the target folder (C:\Temp\Backups) must exist!

Next, using BeyondCompare, I did a "Folder Compare" between "C:\Temp\Backups" and "E:\".

First, I selected the option to show only the orphans, via the menu "View/Show Orphans".

Next, I added a "Filter" to "Exclude files" named "*.*", via the "glasses" icon (i.e. I excluded all the files in order to compare only the directory structure and back up the missing folders; otherwise, no folder in C:\Temp\Backups would have appeared as an orphan, since they are all empty while their equivalent folders on E:\ have content).

Finally, in BeyondCompare, I selected all the orphan folders on E:\ and used the contextual menu "Copy to Folder...". In the "Action" field of the "Copy to Folder" dialog box, I picked G:\ as the destination folder.

Et voilà.

Freewares Image Browser: FastStone

Faststone Image Viewer

FastStone is by far my favorite image browser (since ACDSee started to become a monster pointlessly full of overkill features). It's also freeware for non-commercial use.


As an Image Browser, FastStone is really light and fast while supporting many graphic formats, including obvious ones (gif, jpeg, png, pcx, tiff) but also ico, camera raw files, ...

It comes with some other small pieces of software that I use each time I transfer photos from my camera to my NAS.

  • An ultra-light and quick Image Viewer. I like this one as it opens in seconds (full screen or windowed, as configured) when double-clicking an image in Windows Explorer. Next, when moving the mouse to the screen edges, menus appear with the most common actions such as cropping, resizing, opening the image with an external tool (e.g. Paint.Net :)), ...
  • A batch Image Renamer supporting "expressions", search & replace, renaming preview, ...
  • A batch Image Converter supporting resizing, lossless JPEG rotations, color adjustments, auto-rotation based on EXIF information...

FastStone also offers Side-by-side Image Comparison, Red-Eye removal, musical slideshow with transitions, ...

Details: http://en.wikipedia.org/wiki/FastStone_Image_Viewer

Download: http://www.faststone.org

Freewares Photo Editor: Paint.net

Paint.Net

Just as Notepad++ is a must-have replacement for Notepad, Paint.net is not only THE mandatory replacement for Paint, it's also IMHO the little "free" brother of all those well-known professional editors. It is indeed freeware (Creative Commons License).


I dislike using "pro" editors as they are usually too complex for my needs, i.e. full of overkill features for an end-user like me. On the contrary, Paint.net offers a clear, professional but still simple interface with all the required powerful tools, such as the Magic Wand for selecting regions of similar color (e.g. grab clouds in order to remove them) and the Clone Stamp for copying or erasing portions of an image (e.g. rebuild a collapsed wall).

I really use Paint.net a lot to completely remove elements from photos or rebuild their background, as well as to simply make a picture's background transparent...

I also like how simply I can replace one color with another in a picture, using the Recolor tool.

Tip: I was recently trying several free tools to determine the code of colors seen in web pages. Actually, the most stupidly simple solution to get a color code is to take a screenshot (using the "Prt Scr" key), paste this screenshot into Paint.Net and grab the hex code using the Color Picker tool.

Also, to automatically edit all images with Paint.Net instead of MS Paint, I set "C:\Program Files\Paint.NET\PaintDotNet.exe" "%1" as the default value of the registry key HKEY_CLASSES_ROOT > SystemFileAssociations > image > shell > edit > command. Find attached to this post the reg file that can be used to update the registry.
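For reference, such a reg file would look as follows (a sketch reconstructed from the key and value above; adjust the path if Paint.NET is installed elsewhere):

```reg
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\SystemFileAssociations\image\shell\edit\command]
@="\"C:\\Program Files\\Paint.NET\\PaintDotNet.exe\" \"%1\""
```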

Details: http://en.wikipedia.org/wiki/Paint.NET

Download: http://www.getpaint.net

Synology Schedule a Backup of all your MySQL databases on Synology

It would be a good idea to schedule a daily backup of all the MySQL databases on your Synology. Those will indeed be wiped out if, by accident, you have to reset your NAS (reinstall the firmware), e.g. if you lose a disk in your RAID 0.

That can be done using a shell script and a cron job, as described hereafter.

How to Backup

First, create a user with local read-only access on the databases. We will indeed have to leave its password in the script, endangering security; using a local read-only user will mitigate the risk.

  1. Go to the web administration interface (DSM) of your NAS.
  2. Install phpMyAdmin via the "Package Center" if not yet done.
  3. Open phpMyAdmin via the "Start Menu" ("Main Menu").
    1. I am using the login 'root' with the password of Syno's 'admin' user
  4. Open the "Users" tab in phpMyAdmin
  5. Click "Add User" (below the list of existing users)
  6. Enter the User Name. Ex.: backup
  7. Select “localhost” as the Host.
  8. Enter a password. Ex.: mysql
  9. Keep “none” as “Database for user”
  10. Grant the user with Global privileges: Select (in Data) and Reload, Show Databases and Lock Tables (in Administration)
  11. Click “Add User” at the bottom of the page

Next, create a shell script in a Shared Folder of the NAS (e.g. /volume1/backup/backupMySql.sh). Working in a Shared Folder will make it easier for you to copy/open the backups later from your PC. Don't forget to create a "Unix" file, either using the touch command in a console or saving the file as a "Unix shell script" within Notepad++. Notice that a script created with Notepad++ and saved on the Shared Folder will belong to the user account accessing that Shared Folder (most probably your Windows account if, like me, you simply created a user on your NAS with the same login and password). A script created with "touch" in a console will belong to the user accessing the NAS via telnet/SSH (most probably the "root" account).

#!/bin/sh
#
DIR=/volume1/backup/sqlbackup/
DATESTAMP=$(date +%Y%m%d%H%M%S)
DB_USER=backup
DB_PASS=mysql

# create backup dir if it does not exist
mkdir -p ${DIR}

# remove backups older than $DAYS_KEEP
#DAYS_KEEP=30
#find ${DIR}* -mtime +$DAYS_KEEP -exec rm -f {} \; 2> /dev/null

# remove all backups except the $KEEP latest
KEEP=5
BACKUPS=`find ${DIR} -name "mysqldump-*.gz" | wc -l | sed 's/\ //g'`
while [ $BACKUPS -ge $KEEP ]
do
  ls -tr1 ${DIR}mysqldump-*.gz | head -n 1 | xargs rm -f
  BACKUPS=`expr $BACKUPS - 1`
done

#
# create backups securely
#umask 006

# dump all the databases in a gzip file
FILENAME=${DIR}mysqldump-${DATESTAMP}.gz
/usr/syno/mysql/bin/mysqldump --user=$DB_USER --password=$DB_PASS --opt --all-databases --flush-logs | gzip > $FILENAME

NB: Since DSM 6.0, “/usr/syno/mysql/bin/mysqldump” has been moved to “/bin/mysqldump” !!!
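Since the location varies with the DSM version, the script could also look the binary up instead of hard-coding the path. This is only a sketch; the candidate paths come from this post, and the fallback to a plain `mysqldump` found on the PATH is my own assumption:

```shell
#!/bin/sh
# Pick the first mysqldump that exists on this box (the path varies per DSM version).
find_tool() {
  for CANDIDATE in "$@"; do
    [ -x "$CANDIDATE" ] && { echo "$CANDIDATE"; return 0; }
  done
  return 1
}
MYSQLDUMP=$(find_tool /bin/mysqldump /usr/syno/mysql/bin/mysqldump) \
  || MYSQLDUMP=mysqldump   # fall back to whatever is on the PATH
echo "Using: $MYSQLDUMP"
```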

If needed, type the following command in a console (telnet/SSH) to set the user 'root' as owner of the script:

chown root /volume1/backup/backupMySql.sh

IMPORTANT notice: I used to have "-u $DB_USER" (with a blank in between) and "-p$DB_PASS" (without a blank) in the script above, instead of --user and --password. But a reader (Fredy) reported that the script was not running fine for him unless the blank was removed. As per the documentation of mysqldump's user parameter and password parameter, there should be a blank after -u but not after -p. However, samples on the web usually illustrate the use of mysqldump with "-uroot". So, I decided to use the more explicit notations "--user=" and "--password=". I tested this notation with a wrong username or password and the resulting dump is indeed empty; with the correct username and password, it works fine.

Since DSM 4.2, the Task Scheduler can be used to run the script on a daily basis.

  1. Go to the Start Menu
  2. Open the Control Panel
  3. In the "Application Settings", open the Task Scheduler
  4. Select "Create a User-Defined Script"
  5. Type a name for that "Task:"
  6. Keep "root" as the "User:"
  7. In the "Run Command" pane, type:
    sh /volume1/backup/backupMySql.sh

Don't forget the "sh" in front of your command; otherwise, it does not work (although the sample provided by Synology omits it ?!)

If you don't use the Task Scheduler, you can add a Cron Job to execute the script, e.g. every day at 0:01. Open a Console (Telnet/SSH) and type:

echo "1       0       *       *       *       root    sh /volume1/backup/backupMySql.sh" >> /etc/crontab

FYI, a cron line syntax is "mm hh dd MMM DDD user task" where:

  • mm is the minute (0..59)
  • hh is the hour (0..23)
  • dd is the day in the month (1..31)
  • MMM is the month (jan, feb, ... or 1..12)
  • DDD is the day in the week (sun, mon, ... or 0..7, where 0 and 7 are both Sunday)

The following values can also be used:

  • * : every unit (0, 1, 2, 3, 4...)
  • 5,8 : at units 5 and 8
  • 2-5 : units 2 to 5 (2, 3, 4, 5)
  • */3 : every 3 units (0, 3, 6, 9...)
  • 10-20/3 : every 3 units, from 10th to 20th (10, 13, 16, 19)

So, the script will start every day at 0:01.
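For illustration, here are a few alternative schedules for the same backup script using the syntax above (note that the fields in /etc/crontab should be separated by tabs, as produced by the echo command shown earlier):

```
# every day at 0:01 (the line used in this post)
1	0	*	*	*	root	sh /volume1/backup/backupMySql.sh
# every 6 hours, on the hour
0	*/6	*	*	*	root	sh /volume1/backup/backupMySql.sh
# every Sunday at 1:30
30	1	*	*	sun	root	sh /volume1/backup/backupMySql.sh
```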

Finally, you must restart the cron daemon to activate the new job. To find its process id and send it a SIGHUP signal, type the following command line in a console (Telnet/SSH):

ps | grep crond | grep -v grep | awk '{print$1}' | xargs -t kill -HUP

It should display "kill -HUP xxxx", where xxxx is the pid of the cron daemon.

Added on 03/01/2013

Attention: if you copy/paste the command to restart the cron daemon from this page into the telnet console, some symbols may be wiped out (the braces and quotes around print$1...): you have to type them manually...

Attention: if you upgrade the version of DSM, the changes made in the cron script are lost (at least, that's what I noticed after updating last month...). That's why I recommend using the new "Task Scheduler" feature available since DSM 4.2.

Tip to edit the cron script: you can use the package "Config File Editor" available here. Once installed, you can access it via the Main Menu. Then, you have to edit the file named "Config File Editor" and add the following line:

/etc/crontab,crontab

Once this line is added and saved, ... I don't remember how to force the change to be taken into account :/. But restarting the DiskStation is enough :D

Finally, back into the Config File Editor, select the 'crontab' entry and modify this script, save your changes and restart the cron daemon.

Tip to restart the cron daemon: you can use the package "Web Console" available here. To install it, go to the Package Center and add the following URL via Settings > Package Sources: http://missilehugger.com/708/synology-package-web-console. Once this URL is added, go to the tab "Other Sources" and click Install on the "Web Console" icon.

Run the Web Console via the Main Menu, log in with "admin"/"admin" (the default values if not changed) and type:

synoservice --restart crond

NB: Since DSM 6.0, "Web Console" does not work anymore.

Added on 01/12/2013

If you want a lazy solution to notify the Administrator about the success or failure of the backup, you can use the 'synonotify' command (See details here). A more advanced solution would be to configure the "Mail Server" and use its 'sendmail' command:  /volume1/@appstore/MailServer/sbin/sendmail...

Added on 08/01/2017

mysqldump has moved

  • Before DSM 6.0: /usr/syno/mysql/bin/mysqldump
  • DSM 6.0: /bin/mysqldump
  • MariaDB 5: /volume1/@appstore/MariaDB/usr/bin/mysqldump
  • MariaDB 10: /volume1/@appstore/MariaDB10/usr/local/mariadb10/bin/mysqldump

How to Restore

To do a full restore, simply:

  1. Copy the archive (.gz) you plan to restore on your local PC
  2. Go to "phpMyAdmin" > "Import" > "Choose File" (next to "Browse your computer").
  3. Select the .gz file to be restored
  4. Press "Go" at the bottom of the page (no need to change any settings)

Pay attention: this is a complete restore, i.e. all databases are dropped and recreated from scratch.

If you want to be able to restore a subset of your databases, you'd better change the script to back up specific databases instead of using --all-databases.
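A minimal sketch of that variant, reusing the variables from the script above; the database names are placeholders, and the actual mysqldump call is left commented out so you can adapt it first:

```shell
#!/bin/sh
# Variant of the backup script: one gzipped dump per database, so a single
# database can be restored later. The database names below are placeholders.
DIR=/volume1/backup/sqlbackup/
DATESTAMP=$(date +%Y%m%d%H%M%S)
DB_USER=backup
DB_PASS=mysql
for DB in wordpress phpmyadmin; do
  FILENAME=${DIR}mysqldump-${DB}-${DATESTAMP}.gz
  echo "dumping ${DB} to ${FILENAME}"
  # /usr/syno/mysql/bin/mysqldump --user=$DB_USER --password=$DB_PASS \
  #   --opt --databases $DB | gzip > $FILENAME
done
```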

If you want to restore only one table 'mytable' from the backup 'mysqldump-datetime.gz':

  1. First unzip the archive: gunzip mysqldump-datetime.gz
  2. Then extract the part that restores the desired table with this command: sed -n -e '/DROP TABLE.*mytable/,/UNLOCK TABLES;/p' mysqldump-datetime > mytabledump
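You can try the sed extraction on a miniature dump to see what it keeps (the table and file names here are made up for the demo):

```shell
#!/bin/sh
# Demo of the sed range extraction: keep only the statements that rebuild
# 'mytable', from its DROP TABLE line down to the next UNLOCK TABLES.
cat > /tmp/mysqldump-demo <<'EOF'
DROP TABLE IF EXISTS `other`;
LOCK TABLES `other` WRITE;
INSERT INTO `other` VALUES (1);
UNLOCK TABLES;
DROP TABLE IF EXISTS `mytable`;
LOCK TABLES `mytable` WRITE;
INSERT INTO `mytable` VALUES (42);
UNLOCK TABLES;
EOF
sed -n -e '/DROP TABLE.*mytable/,/UNLOCK TABLES;/p' /tmp/mysqldump-demo > /tmp/mytabledump
cat /tmp/mytabledump   # only the 4 lines concerning `mytable` remain
```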

Download the script here.

Synology Touch your scripts! Don't be lazy...

I have just created a new shell script for my NAS, but its behavior is really weird: it seems that concatenating strings actually overwrites them. E.g.:

#!/bin/bash
A=abcd
B=12
echo $A$B

results in: 12cd

Also, redirecting any output into a file results in a new folder and a new empty file, both with weird names. E.g.:

DIR=/volume1/sharedFolder/
FILENAME=${DIR}outputFile
echo Hello World > $FILENAME

results in a file named "outputFile" and a folder with a weird name (displayed as dots in Windows Explorer!). Both are located under /volume1/sharedFolder/ and visible in Windows Explorer. However, the folder does not appear when executing 'ls -la' in a console on the NAS (only the file appears), and the only way to delete this folder is to execute 'rm -R /volume1/sharedFolder' in the console.

I quickly realized that this behavior was a consequence of creating the file in a Shared Folder of the NAS using Windows Explorer's contextual menu "New" > "Text Document"... Creating an empty file with the command "touch" directly in a console on the NAS does not suffer from the same issues.

There is indeed at least one difference that I know of between the Unix and DOS file formats: the end-of-line character, which is 0x0A in Unix while it is 0x0D,0x0A in DOS (a.k.a. \n in Unix and \r\n in DOS).

I realized that this was the issue because each empty line in the script resulted in an error ": not found" at line x, meaning that the "empty lines" were actually not empty (due to the \r).
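To see the problem (and the fix) in action, here is a small reproduction one can run in a console; the /tmp paths are just for the demo:

```shell
#!/bin/sh
# Reproduce and fix the issue: a "DOS" script whose lines end with \r\n.
printf 'A=abcd\r\nB=12\r\necho $A$B\r\n' > /tmp/dos.sh
tr -d '\r' < /tmp/dos.sh > /tmp/unix.sh   # strip the carriage returns
sh /tmp/unix.sh                           # now prints abcd12 as expected
```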

Notice that the magic "touch" only works because I then use Notepad++ to edit the file. Indeed, this great freeware preserves the Unix file format, while Notepad, WordPad, etc. do not.

I used Windows Explorer's contextual menu to create the empty file because I was too lazy to create a new file with Notepad++ and then browse for the target subfolder on the NAS where I had to save it :(

Otherwise, with the "Save As" of NotePad++, I would have been able to pick the right format: "Unix Script File" :)

Synology Buffalo + DD-WRT: Disable "Cross Site Action detected"

I have so many devices and services on my LAN that I can't remember all their URLs: IP camera, VDSL modem, network printer, router, file server, backup server, VMware server, NAS, blog, etc... So, I created an administration page (PHP) on my NAS to list them with their intranet or internet addresses, depending on the variable $_SERVER['SERVER_NAME'].

Unfortunately, when clicking on the link to my router's web interface (a Buffalo WZR-HP-G300NH with DD-WRT v24SP2), it returns the message "Cross Site Action detected".

As the purpose is not for me to copy the URL manually, I looked for a way to disable this protection on the Buffalo, and here is the tip:

  1. Run a telnet or ssh console
  2. Log on as 'root' even if you changed the default username via the web interface (the default password is 'admin', but I hope you changed it).
  3. And type the following commands:
  • nvram set no_crossdetect=1
  • nvram commit

That's it.

P.S.: more information on SSH/Telnet and the CLI here.

Synology WordPress Edit WordPress sources on Synology via the shared folder "web"

Although all plugins' files can be edited directly within WordPress, you may sometimes prefer to open them with your favorite editor. You may also want to edit some of WordPress's own sources...

And while it is quite easy to find the files to be modified under the shared folder "web" (\\<SynologyHostname>\web), you will quickly notice that you cannot save any change...

This is simply because the folder 'wordpress' and its content (on Synology) belong by default to the user "nobody".

To save your changes, you will first have to change the owner:

  • Check that Telnet is enabled on your Synology (Start Menu/Control Panel => Terminal)
  • Start the command (in a MS-Dos prompt): Telnet <SynologyHostname>
  • Log in with the root account and its password (same password as the Synology "admin" account)
  • Go to the physical folder associated to the shared folder "web": cd /volume1/web/
  • Take ownership of WordPress's whole content: chown -R <YourAccount> wordpress

For sure, <YourAccount> must exist on Synology, be the same as your Windows account (i.e. same name and same password) and have privileges on the folder "web". Otherwise, create such an account (Synology's Start Menu/Control Panel => User) and don't forget to grant it Read/Write access on "web" (via the Synology user's tab "Privileges Setup").

Once your changes are saved, never forget to give the ownership back to "nobody" (*), otherwise WordPress won't be able to update its plugins, themes, etc. automatically anymore.

(*) Indeed, by default and for security reasons, the httpd daemon of Apache runs with the account "nobody". All folders/files created by httpd (a.o. during the installation of WordPress) therefore belong to "nobody", and all changes executed by httpd (e.g. file editing) are executed with the credentials of "nobody"...

Synology WordPress WordPress on Synology accessible with both a Netbios name and Domain name

When I decided to install the package WordPress on my Synology, it was intended to be used as a basic "Knowledge Management Software", as I explained here.

One of my requirements was, however, not covered out of the box by this solution: WordPress on Synology can only be configured to be accessible with one domain name.

Concretely:

  • Either with the Netbios name of the NAS, accessible from the Intranet only (my home network).
  • Or with the DNS name associated with your public IP, accessible from the Internet only (*) in my case.

(*) Indeed, my DNS name is associated with the IP of my VDSL modem (my "public" IP). And although all http requests are forwarded to my NAS when they come from the Internet, they are not when they come from my Intranet (so far, I didn't find how to enable port forwarding for this traffic and don't even know if it's possible with my modem, a Sagem Fast 3464): if I browse my DNS name from my Intranet, I get the administration page of the modem.

[EDIT] Now, when I browse my DNS name, I don't get the Administration page of my Modem anymore but a message "Your internet connection is up, please close your browser and restart it again". This is something configured by my Internet Provider in their own DNS Servers.

Fortunately, there is an easy solution:

==> Install the WordPress plugin "MultiDomain" and configure it to support "several domains": the first one being simply the Netbios name and the other one being the DNS name.

This configuration has to be done in the file "config.php", either with the WordPress Plugins Editor or with your favorite editor; the file can be accessed via the system shared folder "web" of Synology: \\<SynologyHostname>\web\wordpress\wp-content\plugins\multidomain\config.php. If you do it with your own editor, read this post about file access rights.

That solution is by far easier than any advanced manual .htaccess customization, and more effective than any other multi-site/multi-domain plugin I found ;)

[EDIT] I have finally decided to access my blog only via its fully qualified domain name, and never via its Netbios name anymore, including from my Intranet. So, I had to solve the access issue when using a domain name within my Intranet; I use the DNS Server of my Synology for that purpose.

Synology WordPress WordPress on my Synology!

This first post proudly announces that I finally decided to install the WordPress package on my Synology, a DS209+ with DSM 4.1, and to start blogging.

Nothing could have been easier than installing this package...

  • Log on your Synology as an administrator
  • Open the "Package Center" via the "Start Menu"
  • Go to the "Available" tab
  • Click "Install" on the package "WordPress".

The setup wizard will prompt you for the root password in order to create a MySQL database. By default, the Synology has no password configured to access MySQL with the 'root' account, so I decided to configure one, using phpMyAdmin:

  • Open "phpMyAdmin" via the "Start Menu"

If "phpMyAdmin" is not available in this menu:

  • Go back to the "Package Center"
  • Go to the "Available" tab
  • Click Install on the package "phpMyAdmin"

Once phpMyAdmin is open:

  • select the tab "Users"
  • for each "root" entry (one per host), click on "Edit Privileges"
  • scroll down in the "Privileges" window and change the password

Use the "root" account and its new password to complete the setup of WordPress. The only other information required to configure WordPress on your Synology is a title and a tagline for your blog :)

You can now access your blog on http://YourSynology/wordpress.

In a later post, I will explain how to easily access your blog from both intranet and internet, i.e. using either your Synology Netbios name (hostname) or your DNS name (domain name).