SynologyTips How to download playlists from YouTube using Synology

It's really easy to download playlists from YouTube with Synology's Download Station. I do this to get new "No Copyright Music" tracks locally, to be used later in my videos.

Click to Read More

I am using the "Download Station" because it simply works, unlike many free or paid programs that claim to work but usually fail (and are full of advertisements...)

In short... Assume that you want to get all the free music from the excellent YouTube channel "VLOG No Copyright":

  • Visit their "PLAYLISTS" tab in your browser (all such channels have a "PLAYLISTS" tab, as you can see in the screenshot below).
  • Right-click on the link "VIEW FULL PLAYLIST" of the desired playlist.
    • This is important: don't use any URL from the "HOME" tab or from any tile representing a playlist (with the "PLAY ALL" button). It would only download one video, not the whole playlist.
  • Select the menu "Copy link address".
    • The copied URL should look like this: "https://www.youtube.com/playlist?list=xxxxxxxxxx"

 

Now, go to your "Synology Download Station" and:

  • Click on the large blue sign "+". It opens a window to create new download tasks.
  • Go to the tab "Enter URL".
  • Paste there the URL of the playlist copied previously.
    • A subfolder with the name of the playlist will automatically be created under the destination folder, and the videos will be saved in that subfolder.
  • Select the option "Show Dialog to select..." if you want to download only some of the videos. Otherwise, you can unselect that option.
  • Click "Ok".

 

You should now see a line as illustrated here below, "waiting" to start:

 

You have to wait until the Download Station has crawled across the whole playlist and found all the files to be downloaded... Then, you will see the list of tracks:

 

During the download, you may see some failed tasks with the status "Error". It happens from time to time, but this is usually a temporary issue. Sort the list in the Download Station on the "Status" column and select all the "Error" tasks. Then, right-click the list and select the menu "Resume".

 

Personally, as soon as the "music videos" are downloaded, I extract the sound tracks and keep only those. There are many tools to do this, but one of my favorite options is to use ffmpeg directly on the Synology (you have to install that package). Simply type this command in a console, in the download path where the videos are saved:

for i in *.mp4; do ffmpeg -i "$i" -codec:a libmp3lame -q:a 0 -map a "${i%.*}.mp3"; done

To process mp4 in all subfolders in the download path, I use the command:

for d in */; do cd "$d" || continue; for i in *.mp4; do ffmpeg -i "$i" -codec:a libmp3lame -q:a 0 -map a "${i%.*}.mp3"; done; cd ..; done
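If the videos are nested deeper than one level, the same conversion can be driven by find instead of a cd loop. Here is a dry-run sketch (the demo folder and files are made up; it only prints the ffmpeg commands it would run — remove the leading "echo" to actually convert):

```shell
# Demo tree (the real run would start from the Download Station's destination folder)
mkdir -p demo/playlist1 demo/playlist2
touch demo/playlist1/a.mp4 demo/playlist2/b.mp4

# Dry run: print the ffmpeg command for every .mp4, at any depth.
find demo -type f -name '*.mp4' -exec sh -c \
  'echo ffmpeg -i "$1" -codec:a libmp3lame -q:a 0 -map a "${1%.*}.mp3"' _ {} \; > commands.txt
cat commands.txt
```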

 

If you are using Chrome, I can suggest some extensions to help with downloading playlists.

1. The extension "Copy Selected Links". Using that one, you can copy the URLs of all the playlists of a channel at once. Select the whole text under the PLAYLISTS tab (it will then appear highlighted in blue, as illustrated in the screenshot below). Next, right-click in an empty area and select the menu "Copy selected Links".

Now go into Notepad++, as we have to keep only the playlists. Paste the clipboard into a new tab of Notepad++. Press CTRL-Home to go back to the top. Press CTRL-H to open the "Replace" window. Select "Regular Expression" at the bottom of that window and click the "Replace All" button once you have entered this into the "Find what" field:  .*watch.*\n (or .*watch.*\r\n if it does not work properly)

Now, you have two options. If you have no more than 50 playlists, you can paste them directly into the Download Station via the button "+" and the tab "Enter URL", as explained previously. Otherwise, you have to save all your URLs into a .txt file (with Notepad++) and upload that file via the tab "Open a file" instead of the tab "Enter URL".
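If you prefer a shell over Notepad++, the same filtering can be sketched with grep (the sample links and IDs below are made up):

```shell
# Sample clipboard content saved to a file (made-up IDs)
cat > links.txt <<'EOF'
https://www.youtube.com/watch?v=abc123
https://www.youtube.com/playlist?list=PLxxxxxxxx
https://www.youtube.com/watch?v=def456
https://www.youtube.com/playlist?list=PLzzzzzzzz
EOF

# Equivalent of the Notepad++ regex .*watch.*\n : drop every "watch" line
grep -v 'watch' links.txt > playlists.txt
cat playlists.txt
```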

 

2. The extension "Download Station" (or this version in the Chrome Web Store) to download the playlists more easily. Same principle as above, but instead of right-clicking "View Full PlayList" to copy the playlist URL, you now have an extra menu "Synology Download Station" (or "Download with Download Station").

 

Another trick, when available in the PLAYLISTS tab of a channel: select and download the playlists "Sorted by Mood" or "Sorted by Genre". This will help when you search for a particular type of music for your video. Ex.: with the channel "Audio Library":

 

Advice: each time you add a playlist to the Download Station, check that the number of new download tasks matches the number of tracks in the playlist. Indeed, it happens that not all the videos are added into the Download Station, as it has limits: a maximum of 2048 download tasks in total (the reason why you have to clean up all completed tasks before downloading new long playlists), a maximum of 256 links per uploaded file, and a per-playlist limit of 2048 minus the number of current download tasks.
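Given the 256-links-per-file limit, a long list of URLs can be split into compliant chunks before uploading. A sketch (the file names and the list of 300 made-up URLs are only for illustration):

```shell
# A made-up list of 300 playlist URLs
seq 1 300 | sed 's|^|https://www.youtube.com/playlist?list=PL|' > all_playlists.txt

# 300 links exceed the 256-links-per-file limit: split into chunks of max 256
split -l 256 all_playlists.txt playlists_part_
wc -l playlists_part_*
```

Each resulting playlists_part_* file can then be uploaded separately via the tab "Open a file".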

Don't forget that for most "No Copyright Music", you have to mention the author in your videos or posts. Often, on YouTube, you have to click on "Show More" under the video to see the details of the license:

The License is usually clear. Ex.:

To find the videos again later on YouTube and check the type of license, it's important to keep:

  • The original name of the video when converting into mp3
  • The name of the playlist
  • The name of the channel

To do so, the easiest way is to add some metadata into the mp3 extracted with ffmpeg. Once downloaded by the "Download Station", your videos should be in subfolders named after their playlists. And the filenames usually contain the artist, the name of the song and the name of the YouTube channel. So, it's easy to create the metadata. Ex.: the playlist "Dance & Electronic Music | Vlog No Copyright Music" of Vlog has been downloaded into a subfolder named "Dance & Electronic Music _ Vlog No Copyright Music" and the videos are all named like "Artist - Title (Vlog No Copyright Music).mp4".

So, to process that subfolder, I will use the command:

for i in *.mp4; do playlist=${PWD##*/}; artist=${i% - *}; other=${i##* - }; title=${other% (Vlog*}; ffmpeg -i "$i" -codec:a libmp3lame -q:a 0 -map a -metadata artist="${artist}" -metadata title="${title}" -metadata album="Playlist: ${playlist}"  -metadata Publisher="From YouTube Channel: Vlog No Copyright" "${artist} - ${title}.mp3"; done

As you can see, I am using the "Publisher" tag to store the channel and the "album" tag for the playlist... This is a personal choice! (More details on ID3 tags and ffmpeg here)

Regarding the pattern matching used in the shell, it's not always that simple and you will have to be creative... A quick reference for pattern matching:

  • variable2=${variable1%xyz*} => variable2 is the part of variable1 before the last occurrence of xyz (the shortest suffix matching xyz* is removed).
  • variable2=${variable1%%xyz*} => variable2 is the part of variable1 before the first occurrence of xyz (the longest suffix matching xyz* is removed).
  • variable2=${variable1#*xyz} => variable2 is the part of variable1 after the first occurrence of xyz (the shortest prefix matching *xyz is removed).
  • variable2=${variable1##*xyz} => variable2 is the part of variable1 after the last occurrence of xyz (the longest prefix matching *xyz is removed).
  • in xyz, x and z may be blanks (in my case, "xyz" was " - ")
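A worked example of those expansions, using " - " as the separator as in the filenames above (the filename itself is made up, with two separators to show the difference between the short and long forms):

```shell
i="AC - DC - Title (Vlog No Copyright Music).mp4"
{
  echo "${i% - *}"    # before the LAST " - "
  echo "${i%% - *}"   # before the FIRST " - "
  echo "${i#* - }"    # after the FIRST " - "
  echo "${i##* - }"   # after the LAST " - "
} > expansions.txt
cat expansions.txt
```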

As there are many duplicates in those playlists, I have written my own script to replace such duplicates with hardlinks (on my Synology). I can't use jdupes or rmlint, as those do "binary comparisons", which no longer match after the conversion via ffmpeg. Instead, I search for duplicates based on the filename and size only (per channel, they are usually unique anyway). Here is my script, for illustration purposes. It must be stored in an ANSI file (ex.: dedup.sh) and run with: sh dedup.sh go

echo "Start looking for duplicates"

# List mp3 files as "path ~ filename", sorted by filename, keeping only duplicated names
find . -iname "*.mp3" -printf "%p ~ %f\n" | sort -f -k2 | uniq -Di -f1 > list.dup

echo "Duplicates found"

test="$1"
previous=""
previouspath=""
skip=true
while read p; do
    mp3name="${p#* ~ }"
    mp3path="${p%% ~ *}"
    if [[ "$mp3name" == "$previous" ]]; then
        mp3new="${mp3path%%.mp3}"
        # A link count of 1 means the file is not yet a hardlink
        node=$(ls -l "$mp3path" | grep -Po '^.{11}\s1\s.*$')
        if [[ "$node" == "" ]]; then
            echo " $mp3path is already a hardlink"
        else
            skip=false
            if [[ "$test" == "go" || "$test" == "test" ]]; then
                SIZE1=$(stat -c%s "$mp3path")
                SIZE2=$(stat -c%s "$previouspath")
                #Delta=$(awk "BEGIN{ printf \"%d\n\", sqrt((100 * ($SIZE2 - $SIZE1) / $SIZE1)^2)}")
                #if [[ $Delta > 1 ]]; then
                Delta=$(awk "BEGIN{ printf \"%d\n\", ($SIZE2 - $SIZE1)}")
                if [[ "$Delta" == "0" ]]; then
                    mv "$mp3path" "$mp3new.old"
                    ln "$previouspath" "$mp3path"
                    echo " $mp3path now linked to original"
                else
                    echo " $mp3path seems different from $previouspath"
                fi
            else
                echo " mv $mp3path $mp3new.old"
                echo " ln $previouspath $mp3path"
            fi
        fi
    else
        if [[ "$test" != "test" ]] || [[ "$skip" = true ]]; then
            previous=$mp3name
            previouspath=$mp3path
            echo "$mp3name has duplicate(s) (original in $mp3path)"
        else
            break
        fi
    fi
done <list.dup

 

Note: Facebook may remove the audio from your video even if it is a "No Copyright Music" track. When you get such a notification from FB, you can simply click to restore the audio if you did mention the author in your post or video.

Some YouTube channels with "No Copyright Music":

 

Synology Sudoer file not working on Synology due to dots in its name

I spent one hour investigating why I couldn't execute a command with sudo from a PHP script, although the user was authorized for that command in a sudoers file... The problem was a dot in the name of the sudoers file.

Click to Read More

My PHP script is part of a package I have created to run on my Synology (DSM 7.x). It runs under an account named like my package: MODS_Package7.x

That php script executes the following code:

$COMMAND = "sudo /usr/syno/bin/synopkg start '$PACKAGE' 2>&1";
exec($COMMAND, $output, $result);

My sudoers file was named /etc/sudoers.d/MODS_Package7.x and contained:

MODS_Package7.x ALL=(ALL) NOPASSWD: /usr/syno/bin/synopkg

It didn't work until I removed the ".", renaming the sudoers file into /etc/sudoers.d/MODS_Package7_x
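The rule can be reproduced in pure shell to check a candidate file name before dropping it into /etc/sudoers.d (check_name is a made-up helper, not a real sudo tool):

```shell
# sudo skips any file in /etc/sudoers.d whose name contains a '.' or ends in '~'
check_name() {
  case "$1" in
    *.*|*~) echo "SKIPPED by sudo: $1" ;;
    *)      echo "OK: $1" ;;
  esac
}
check_name "MODS_Package7.x"  > sudoers_check.txt
check_name "MODS_Package7_x" >> sudoers_check.txt
cat sudoers_check.txt
```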

 

How stupid, but it's indeed mentioned in the documentation:

sudo will read each file in /etc/sudoers.d, skipping file names that end in ‘~' or contain a ‘.' character to avoid causing problems with package manager or editor temporary/backup files.

The /etc/sudoers.d/README file does not exist on Synology, but can be found on other Linux distributions:


#
# As of Debian version 1.7.2p1-1, the default /etc/sudoers file created on
# installation of the package now includes the directive:
# 
#   #includedir /etc/sudoers.d
# 
# This will cause sudo to read and parse any files in the /etc/sudoers.d 
# directory that do not end in '~' or contain a '.' character.
# 
# Note that there must be at least one file in the sudoers.d directory (this
# one will do), and all files in this directory should be mode 0440.
# 
# Note also, that because sudoers contents can vary widely, no attempt is 
# made to add this directive to existing sudoers files on upgrade.  Feel free
# to add the above directive to the end of your /etc/sudoers file to enable 
# this functionality for existing installations if you wish!
#
# Finally, please note that using the visudo command is the recommended way
# to update sudoers content, since it protects against many failure modes.
# See the man page for visudo for more information.
#

 

Synology Run a command as root on Synology with any user

Synology is reinforcing the security within its DSM, making it more and more difficult to execute scripts as root from packages. Here is my trick to do so, based on the use of ssh and php.

Click to Read More

Theoretically, to run something as root, a user must be a sudoer, i.e. a user having the right to execute commands as root. It's not difficult to configure a user as a sudoer. You simply have to add that user into a file located under /etc/sudoers.d/, with a few parameters describing what he can execute and whether he has to type his password. Ex.: to allow a user named "beatificabytes" to execute the command 'shutdown' on a Synology:

  1. Create a file (the name does not matter): /etc/sudoers.d/beatificabytes 
  2. Edit that file with vi (or nano if you installed that package). Notice that you should really be careful, as any error in that file will prevent you from using sudo at all: DANGER !!! That's why it is usually highly recommended to use the command 'visudo' to edit such files (it checks the syntax before saving changes)... Unfortunately, this command is not available on Synology.
  3. To grant rights without having to type a password, type this: beatificabytes ALL=(ALL) NOPASSWD: /sbin/shutdown

The problem I have with this approach is that I have many various users (each package runs with its own user) and I don't want to define them all as sudoers.

One option would be to run all packages with the same user who is defined as a sudoer. This is possible via a privilege file to be added into the packages (/conf/privilege).

But if, like me, you want to always use the same administrator account, which is already a sudoer, and if you have php installed on your NAS and ssh enabled, then you could simply open a ssh session with that admin account, from a php script, to execute your commands.

Here is the php script I am using for such a purpose (saved into a file named 'sudo.php'):

<?php
$options = getopt("u:p:s:o:c:");
$user = $options['u'];
$password = $options['p'];
$server = $options['s'];
$port = $options['o'];
$command = $options['c'];

if (!function_exists("ssh2_connect")) die("php module ssh2.so not loaded");
if (!($con = ssh2_connect($server, $port))) {
    echo "fail: unable to establish connection to '$server:$port'\n";
} else {
    // try to authenticate with username and password
    if (!ssh2_auth_password($con, $user, $password)) {
        // mask the password in the error message, keeping only its first and last characters
        if (strlen($password) > 2) {
            $pass = str_repeat("*", strlen($password) - 2);
        } else {
            $pass = "";
        }
        $pass = substr($password, 0, 1).$pass.substr($password, -1);
        echo "fail: unable to authenticate with user '$user' and password '$pass'\n";
    } else {
        // all right, we're in!
        echo "okay: logged in...\n";

        // execute the command as root via sudo, feeding the password on stdin
        $command = "echo '$password' | sudo -S $command 2>&1 ";
        if (!($stream = ssh2_exec($con, $command))) {
            echo "fail: unable to execute command '$command'\n";
        } else {
            // collect returning data from command
            stream_set_blocking($stream, true);
            $data = "";
            while ($buf = fread($stream, 4096)) {
                $data .= $buf;
                echo $buf;
            }
            fclose($stream);
        }
    }
}
?>

And to call it, from a shell for example, assuming that you are using php7.3, type something like:

php -dextension=/volume1/@appstore/PHP7.3/usr/local/lib/php73/modules/ssh2.so sudo.php -u YourAdminAccount -p YourAdminPassword -s 127.0.0.1 -o 22 -c "whoami"

Here above:

  • 22 is the port defined for ssh on my Synology
  • I executed the command 'whoami', so the outcome will be "root"

Et voilà.

Synology Sync Plex Movies from Synology onto Unraid

I am managing my movies with Plex. It is installed both on my Synology NAS, which runs 24/7, and on my Unraid Server, which I turn on only for backup purposes.

I usually add new movies first on my Synology and copy them later onto my Unraid Server. To do so, I use rsync.

Click to Read More

ATTENTION: this is only to sync the files, not the metadata.

 

In each Plex, I have two libraries: Movies and Series TV

On Synology, each library includes two folders:

  • Movies includes /volume1/plex/Movies and /volume1/plex/new Movies
  • Series TV includes /volume1/plex/Series and /volume1/plex/new Series 

On Unraid, each library includes only one folder:

  • Movies includes /mnt/user/Movies
  • Series TV includes /mnt/user/Series

On Unraid, I have two shares, Movies and Series, to access respectively /mnt/user/Movies and /mnt/user/Series.

On the Synology NAS, I have mounted the shares of the Unraid Server as CIFS Shared Folder:

  • /<Unraid Server>/Movies on /volume1/mount/Movies
  • /<Unraid Server>/Series on /volume1/mount/Series

Each time I have a new movie or series, I copy it onto my Synology, respectively into /volume1/plex/new Movies or /volume1/plex/new Series.

All movies and series are first renamed using filebot. This guarantees that all of them are uniquely identified by title, year, resolution, season, episode, encoding, etc. According to Plex best practices, each movie is in its own folder.

Once I have a few new media, I turn on my Unraid Server and launch the following script in a SSH console (using Putty) on the Synology:

#!/bin/bash

if grep -qs '/volume1/mount/Series ' /proc/mounts
then
rsync --ignore-existing -h -v -r -P -t /volume1/plex/New\ Series/ /volume1/mount/Series/
else
echo "Cannot sync new Series as Zeus is not mounted on /mount/Series"
fi

if grep -qs '/volume1/mount/Movies ' /proc/mounts
then
rsync --ignore-existing -h -v -r -P -t /volume1/plex/New\ Movies/ /volume1/mount/Movies/
else
echo "Cannot sync new Movies as Zeus is not mounted on /mount/Movies"
fi

Next, on Synology, I move all movies and series respectively from /volume1/plex/new Movies and /volume1/plex/new Series into /volume1/plex/Movies or /volume1/plex/Series (*)

Then, to be sure I don't have the same movie twice on the Unraid Server (with distinct encodings or resolutions), I run this command in a SSH console on Unraid:

find /mnt/user/Movies -type f -iname '*.mkv' -printf '%h\n' | sort | uniq -c | awk '$1 > 1'

It does not work for the Series, as each (season) folder obviously contains several episodes...
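For the Series, a sketch that flags duplicate episodes by their SxxEyy tag instead of by folder (it assumes filebot-style names; the sample list below is made up — on the real server you would feed the pipe with find /mnt/user/Series -type f -iname '*.mkv' -printf '%f\n'):

```shell
printf '%s\n' \
  "Show A - S01E01 - Pilot.mkv" \
  "Show A - S01E01 - Pilot [720p].mkv" \
  "Show A - S01E02 - Next.mkv" \
  | grep -oE 'S[0-9]{2}E[0-9]{2}' | sort | uniq -c | awk '$1 > 1 {print $2}' > dup_episodes.txt
cat dup_episodes.txt
```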

 

This is only syncing the files! There is no easy way to sync also the metadata between the two Plex.

But voilà....

 


(*) Doing so, the fine tunings done in Plex while the movie was under /new Movies are not lost. Temporarily, the movie will appear as "deleted" in Plex. Above all, do not "Empty Trash"! Soon (depending on how many movies you moved), it will be "available" again. I did test that trick explicitly:

1. Take a new movie:

2. Open it:

3. Check that path (here under, it is under /New Movies):

4. Edit some info for testing purpose (here under, the "Original Title"):

5. Change also the poster:

6. Using Windows Explorer or the File Station, move the folder of the movie into its new location. The movie will appear very soon as unavailable:

7. Open it:

8. Wait... Soon it will become available again:

9. Check now the path (here under, it is now under /Movies):

10. As you can see, the chosen cover is still there. And when editing the details, you would see that the original title is still "DEMO MOVE FOLDER within Plex".

Synology How to Add Multiple Hosts in phpMyAdmin on Synology

On Google, one can easily find how to add servers to the list presented on the login page of phpMyAdmin. But these results don't apply if you are using the 'phpMyAdmin' package for Synology. With that package, one must edit the synology_server_choice.json file.

Click to Read More

If you are connected on your NAS via a SSH console, the file to be edited is located in /var/services/web/phpMyAdmin/synology_server_choice.json.

But you should also be able to access it from a Windows PC on \\<YourNAS>\web\phpMyAdmin\synology_server_choice.json

To add a server, simply duplicate the first entry of the JSON file, separating the entries with a comma:

[
{"verbose":"Server 1","auth_type":"cookie","host":"localhost","connect_type":"socket","socket":"\/run\/mysqld\/mysqld10.sock","compress":false,"AllowNoPassword":false},
{"verbose":"Server 2","auth_type":"cookie","host":"192.168.0.20","connect_type":"socket","socket":"\/run\/mysqld\/mysqld10.sock","compress":false,"AllowNoPassword":false},
{"verbose":"Server 3","auth_type":"cookie","host":"192.168.0.100","connect_type":"socket","socket":"\/run\/mysqld\/mysqld10.sock","compress":false,"AllowNoPassword":false}
]
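After editing, a quick sanity check that the file is still valid JSON avoids breaking the phpMyAdmin login page (python3 is assumed to be available; the snippet below recreates a minimal two-server file just for illustration):

```shell
# Recreate a minimal synology_server_choice.json for illustration
cat > synology_server_choice.json <<'EOF'
[
{"verbose":"Server 1","auth_type":"cookie","host":"localhost","connect_type":"socket","socket":"\/run\/mysqld\/mysqld10.sock","compress":false,"AllowNoPassword":false},
{"verbose":"Server 2","auth_type":"cookie","host":"192.168.0.20","connect_type":"socket","socket":"\/run\/mysqld\/mysqld10.sock","compress":false,"AllowNoPassword":false}
]
EOF

# json.tool fails with a parse error if a comma or bracket is wrong
python3 -m json.tool synology_server_choice.json > /dev/null && echo "valid JSON" > json_check.txt
cat json_check.txt
```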

Et voilà.

Synology Shrink a SHR Volume and remove disks from a Synology

I have wanted to try this for years... I finally did it successfully on a virtual Synology with an array of 5 disks in a single volume using btrfs (see here for VM and btrfs).

Click to Read More

This is a summary of my attempts. It is based on various posts found on the web:

  • https://superuser.com/questions/834100/shrink-raid-by-removing-a-disk
  • https://unix.stackexchange.com/questions/67702/how-to-reduce-volume-group-size-in-lvm#67707
  • https://blog.reboost.net/manually-removing-a-disk-from-a-synology-raid/

Notice that the xfs file system only supports "extend" and not "reduce". So, I only tried with ext4 and btrfs.

Although I used a DSM with a Video Station, a Photo Station, the Web Station and WordPress, this was not a "real case" (most of the data were still on the first sectors of the disks). So, I would really not recommend doing this on a real NAS !!!

 

Open a SSH console and enter root mode (sudo -i) to execute the commands below.

To identify which process is accessing a volume, we possibly need lsof (only if umount fails)

Install OPKG : See https://www.beatificabytes.be/use-opkg-instead-of-ipkg-on-synology/

Install lsof: /opt/bin/opkg install lsof

Layout of the Physical Volume/Volume Group/ Logical Volumes :

Find the filesystem of /volume1:

df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        2.3G 1016M  1.2G  46% /
none            2.0G     0  2.0G   0% /dev
/tmp            2.0G  536K  2.0G   1% /tmp
/run            2.0G  3.2M  2.0G   1% /run
/dev/shm        2.0G  4.0K  2.0G   1% /dev/shm
none            4.0K     0  4.0K   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
/dev/vg1000/lv   14G  3.2G  9.9G  25% /volume1

  • Show Physical Volume: pvdisplay
  • Show Volume Group: vgdisplay
  • Show Logical Volume: lvdisplay
  • Show Disks: fdisk -l

Check if one can unmount /volume1:

umount /dev/vg1000/lv

umount: /volume1: target is busy
       (In some cases useful info about processes that
        use the device is found by lsof(8) or fuser(1).)

Stop all services :

synopkg onoffall stop

/usr/syno/etc.defaults/rc.sysv/S80samba.sh stop

/usr/syno/etc.defaults/rc.sysv/S83nfsd.sh stop

/usr/syno/etc.defaults/rc.sysv/pgsql.sh stop

/usr/syno/etc.defaults/rc.sysv/synomount.sh stop

Check which daemons are still using volume1:

/opt/bin/lsof | grep volume1

COMMAND    PID  TID     USER  FD        TYPE                          DEVICE SIZE/OFF       NODE NAME
s2s_daemo 10868         root    8u      REG               0,30    11264        608 /volume1/@S2S/event.sqlite
synologan  8368         root  3u        REG               0,30       3072        654 /volume1/@database/synologan/alert.sqlite
synoindex  8570         root  mem       REG               0,28                  7510 /volume1/@appstore/PhotoStation/usr/lib/libprotobuf-lite.so (path dev=0,30)
synoindex  8570         root  mem       REG               0,28                 30143 /volume1/@appstore/VideoStation/lib/libdtvrpc.so (path dev=0,30)
lsof       8585         root  txt       REG               0,30     147560      32161 /volume1/@entware-ng/opt/bin/lsof

Kill those daemons :

initctl list | grep synoindex

synoindexcheckindexbit stop/waiting
synoindexd start/running, process 11993

killall synoindexd

initctl list | grep synologan

synologand start/running, process 8368

killall synologand

This is not working… synologand restarts immediately ☹

chmod u-x /usr/syno/sbin/synologand

killall synologand 

killall s2s_daemon

Shrink the ext2 or ext4 Filesystem :

Only do the steps here under if using ext2 or ext4 instead of btrfs

Resize the File System :

umount -d /dev/vg1000/lv

e2fsck -C 0 -f /dev/vg1000/lv

e2fsck 1.42.6 (21-Sep-2012)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create<y>? yes
Pass 4: Checking reference counts
Pass 5: Checking group summary information

1.42.6-23739: ***** FILE SYSTEM WAS MODIFIED *****
1.42.6-23739: 30742/889440 files (0.7% non-contiguous), 734048/3553280 blocks

NB. If you want e2fsck to stop displaying its progress bar: killall -USR2 e2fsck
NB. If you want e2fsck to display a progress bar: killall -USR1 e2fsck

resize2fs -p -M /dev/vg1000/lv

resize2fs 1.42.6 (21-Sep-2012)
Resizing the filesystem on /dev/vg1000/lv to 702838 (4k) blocks.
The filesystem on /dev/vg1000/lv is now 702838 blocks long.

If you see the error message here under, you possibly have another file system than ext2 or ext4 (e.g.: btrfs, ...):

resize2fs: Bad magic number in super-block while trying to open /dev/vg1000/lv
Couldn't find valid filesystem superblock.

If you have this error message although having ext2 or ext4, try: lvm lvchange --refresh /dev/vg1000/lv

Check the results :

mount /dev/vg1000/lv /volume1
df -h

Filesystem Size Used Avail Use% Mounted on
/dev/md0 2.3G 968M 1.3G 44% /
none 2.0G 0 2.0G 0% /dev
/tmp 2.0G 512K 2.0G 1% /tmp
/run 2.0G 3.0M 2.0G 1% /run
/dev/shm 2.0G 4.0K 2.0G 1% /dev/shm
none 4.0K 0 4.0K 0% /sys/fs/cgroup
cgmfs 100K 0 100K 0% /run/cgmanager/fs
/dev/vg1000/lv 2.6G 2.5G 0 100% /volume1

Resize the Logical Volume :

Resize the logical volume a bit larger than the file system (See the outcome of df -h above)

umount /dev/vg1000/lv
lvm lvreduce -L 2.7G /dev/vg1000/lv

Rounding size to boundary between physical extents: 2.70 GiB
WARNING: Reducing active logical volume to 2.70 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce volume_1? [y/n]: y
Size of logical volume vg1/volume_1 changed from 13.55 GiB (3470 extents) to 2.70 GiB (692 extents).
Logical volume volume_1 successfully resized.

NB: to get a progress, use: LV=/dev/vg1000/lv; echo `lvdisplay -v $LV | grep current | wc -l` `lvdisplay -v $LV | grep stale | wc -l` | awk '{printf ( "%3d percent Complete \n", 100-$2/$1*100) }'

Shrink a BTRFS File System :

Only do the steps here under if using btrfs

Resize the File System :

btrfs filesystem resize 2.7G /volume1

You can restore max size if required: btrfs filesystem resize max /volume1

df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/vg1000/lv  2.6G 2.5G  1.4G  71% /volume1

Resize the Logical Volume:

umount -d /dev/vg1000/lv

lvm lvreduce -L 2.7G /dev/vg1000/lv

 

Next steps :

The steps here under are both for ext2/ext4 and btrfs

Resize the Physical Volume :

Look for the device and if blocks must be moved:

pvdisplay -C

PV VG Fmt Attr PSize PFree
/dev/md2 vg1 lvm2 a-- 13.57g 10.85g

pvs -v --segments /dev/md2

    Using physical volume(s) on command line.
   Wiping cache of LVM-capable devices
PV         VG     Fmt  Attr PSize  PFree   Start SSize LV   Start Type   PE Ranges
/dev/md2   vg1000 lvm2 a--  14.00g 440.00m     0  3473 lv       0 linear /dev/md2:0-3472
/dev/md2   vg1000 lvm2 a--  14.00g 440.00m  3473   110          0 free

If there is a block trailing after the "free" part (I had none), use:

lvm pvmove --alloc anywhere /dev/md2:xxx-xxx

 

Resize the device a bit larger than the logical volume. If you don't use a sufficient size, you will get an error message:

pvresize --setphysicalvolumesize 2.7G /dev/md2

/dev/md2: cannot resize to 691 extents as 695 are allocated.
0 physical volume(s) resized / 1 physical volume(s) not resized

pvresize --setphysicalvolumesize 2.8G /dev/md2

Physical volume "/dev/md2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized

 

Resize the array to use less disks:

Reduce the array to use 3 disks (-n3)

mdadm --grow -n3 /dev/md2

mdadm: this change will reduce the size of the array.
      use --grow --array-size first to truncate array.
      e.g. mdadm --grow /dev/md2 --array-size 7114624

You first have to reduce the array size.

mdadm --grow /dev/md2 --array-size 7114624

mdadm --grow -n3 /dev/md2 --backup-file /root/mdam.md2.backup &

 

Monitor the progress of the resizing with:

cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdak5[4] sdaj5[3] sdai5[2] sdah5[1] sdag5[0]
      7114624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [======>..............]  reshape = 31.7% (1129600/3557312) finish=0.3min speed=112960K/sec

md1 : active raid1 sdak2[4] sdaj2[3] sdai2[2] sdah2[1] sdag2[0]
      2097088 blocks [12/5] [UUUUU_______]

md0 : active raid1 sdak1[4] sdaj1[3] sdag1[0] sdah1[1] sdai1[2]
      2490176 blocks [12/5] [UUUUU_______]

unused devices: <none>

This can take a lot of time; you could continue without waiting, but I did wait ;)

Finalize

pvresize /dev/md2

Physical volume "/dev/md2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized

lvextend -l 100%FREE /dev/vg1000/lv

Size of logical volume vg1/volume_1 changed from 2.70 GiB (692 extents) to 4.07 GiB (1041 extents).
Logical volume volume_1 successfully resized.

for btrfs:

btrfs filesystem resize max /volume1

for ext2/ext4:

e2fsck -f /dev/vg1000/lv

e2fsck 1.42.6 (21-Sep-2012)
Pass 1: Checking inodes, blocks, and sizes

Running additional passes to resolve blocks claimed by more than one inode...
Pass 1B: Rescanning for multiply-claimed blocks
Multiply-claimed block(s) in inode 13: 9221
Pass 1C: Scanning directories for inodes with multiply-claimed blocks
Pass 1D: Reconciling multiply-claimed blocks
(There are 1 inodes containing multiply-claimed blocks.)

File /@tmp (inode #13, mod time Sat May 23 23:00:14 2020)
has 1 multiply-claimed block(s), shared with 0 file(s):
Multiply-claimed blocks already reassigned or cloned.

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
1.42.6-23739: 30742/179520 files (0.7% non-contiguous), 685397/720896 blocks

Hope that no inodes are corrupted. Otherwise... well... accept to fix them.

resize2fs /dev/vg1000/lv

resize2fs 1.42.6 (21-Sep-2012)
Resizing the filesystem on /dev/vg1/volume_1 to 1065984 (4k) blocks.
The filesystem on /dev/vg1/volume_1 is now 1065984 blocks long.

mount /dev/vg1000/lv /volume1

Restart services:

/usr/syno/etc.defaults/rc.sysv/S80samba.sh start

/usr/syno/etc.defaults/rc.sysv/S83nfsd.sh start

/usr/syno/etc.defaults/rc.sysv/pgsql.sh start

/usr/syno/etc.defaults/rc.sysv/synomount.sh start

 

synopkg onoffall start

 

Remove the now-spare disks from the volume

mdadm --detail --scan

ARRAY /dev/md0 metadata=0.90 UUID=3b122d95:7efea8ff:3017a5a8:c86610be

ARRAY /dev/md1 metadata=0.90 UUID=bd288153:d00708bf:3017a5a8:c86610be

ARRAY /dev/md2 metadata=1.2 spares=2 name=DS3617_62:2 UUID=875ad2d6:956306b7:8c7ba96b:4287f6e6


mdadm --detail /dev/md2

/dev/md2:
       Version : 1.2
Creation Time : Sat May 23 14:00:02 2020
    Raid Level : raid5
    Array Size : 7114624 (6.79 GiB 7.29 GB)
Used Dev Size : 3557312 (3.39 GiB 3.64 GB)
  Raid Devices : 3
Total Devices : 5
   Persistence : Superblock is persistent

    Update Time : Sat May 23 18:54:30 2020
         State : clean
Active Devices : 3
Working Devices : 5
Failed Devices : 0
Spare Devices : 2

         Layout : left-symmetric
    Chunk Size : 64K

           Name : DS3617_62:2  (local to host DS3617_62)
          UUID : 875ad2d6:956306b7:8c7ba96b:4287f6e6
        Events : 56

    Number   Major   Minor   RaidDevice State
      0       8        5        0      active sync   /dev/sda5
      1       8       37        1      active sync   /dev/sdc5
      2       8       53        2      active sync   /dev/sdd5
      3       8       69        -      spare   /dev/sde5
      4       8       85        -      spare   /dev/sdf5

Here above, we see that sde5 and sdf5 are unused

 

mdadm --fail /dev/md2 /dev/sde5

mdadm: set /dev/sde5 faulty in /dev/md2

mdadm --fail /dev/md2 /dev/sdf5

mdadm: set /dev/sdf5 faulty in /dev/md2

mdadm --remove /dev/md2 /dev/sde5

mdadm: hot removed /dev/sde5 from /dev/md2

mdadm --remove /dev/md2 /dev/sdf5

mdadm: hot removed /dev/sdf5 from /dev/md2

 

Et voilà.

Synology Add Support for SHR on Synology DS3617xs

I wanted to test a procedure to shrink a SHR volume on a virtual Synology DS3617. Unfortunately, SHR is not an available option on the high-end models of Synology. But it can be enabled easily ;)

Click to Read More

DS3617xs is a model where SHR is not enabled:


To enable it:

  1. Open an SSH console and enter root mode.
  2. Edit the file /etc.defaults/synoinfo.conf
  3. Near the end, you should find: supportraidgroup="yes"
  4. Comment out that line with a #
  5. Next to it, add a new line with: support_syno_hybrid_raid="yes"
  6. Reboot your Synology

You can use "vi" to edit the file: sudo vi /etc.defaults/synoinfo.conf.

  • Press i or the Insert key to start making your modifications, then press "Esc" when you are done.
  • After that, type :x! to save and exit, or :q! to exit without saving.
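If you prefer not to use vi, the same edit can be scripted. A minimal sketch, run here against a throw-away file so nothing is touched by accident; on the real NAS, DEMO would be /etc.defaults/synoinfo.conf (take a backup first):

```shell
# DEMO is a scratch file standing in for /etc.defaults/synoinfo.conf
DEMO=$(mktemp)
printf 'supportraidgroup="yes"\n' > "$DEMO"

# comment out the old flag...
sed -i 's/^supportraidgroup="yes"/#supportraidgroup="yes"/' "$DEMO"
# ...and append the SHR flag if it is not there yet
grep -q '^support_syno_hybrid_raid=' "$DEMO" || \
  echo 'support_syno_hybrid_raid="yes"' >> "$DEMO"

cat "$DEMO"
```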

Et voilà, your DS3617xs now offers the option for SHR

Synology Backup Synology to Unraid

The easiest way to back up a Synology NAS to an Unraid server is to use Hyper Backup on the Synology and rsync on Unraid.

Click to Read More

First, enable rsync on your Unraid Server. It is preinstalled but not running as a daemon.

Create a file /boot/custom/etc/rsyncd.conf with the following content:

uid             = root
gid             = root
use chroot      = no
max connections = 4
pid file        = /var/run/rsyncd.pid
timeout         = 600

[backups]
    path = /mnt/user/backups
    comment = Backups
    read only = FALSE

Here above:

  • The name "backups" between brackets will be visible as a "backup module" from the Synology. You can create several blocks like this one.
  • The "path" (here /mnt/user/backups) must exist on your Unraid server.
  • Notice: the folder /boot should already exist, but you may have to create the subfolders custom/etc yourself.
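As a hedged preparation sketch, the folders mentioned above can be created in one go. ROOT is a scratch prefix I introduce here so the commands can be dry-run anywhere; on the real Unraid box, set ROOT= (empty) to create the folders in place:

```shell
# ROOT is a scratch prefix for a safe dry run; set ROOT= (empty) on the
# real Unraid box to create the folders at their actual locations
ROOT=${ROOT-$(mktemp -d)}
mkdir -p "$ROOT/boot/custom/etc/rc.d"   # rsyncd.conf and the init script go here
mkdir -p "$ROOT/mnt/user/backups"       # the "path" of the [backups] module
```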

 

Next, create a file /boot/custom/etc/rc.d/S20-init.rsyncd with the following content:

#!/bin/bash

if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
cat <<-EOF >> /etc/inetd.conf
rsync   stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/bin/rsync --daemon
EOF
read PID < /var/run/inetd.pid
kill -1 ${PID}
fi

cp /boot/custom/etc/rsyncd.conf /etc/rsyncd.conf
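The grep guard in that script is an idempotency trick: the inetd line is appended only if it is not already there, so re-running the script (e.g. at every boot) is safe. A minimal demonstration of the pattern on a scratch file:

```shell
# append a line to a file only if that exact line is not already present
F=$(mktemp)
add_once() { grep -qxF "$1" "$F" || echo "$1" >> "$F"; }
add_once "rsync entry"   # appended the first time
add_once "rsync entry"   # skipped the second time
wc -l < "$F"             # the line is present only once
```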

Finally, make sure the file /boot/config/go calls that script. The file already begins with #!/bin/bash; add the following line at the end of it:

bash /boot/custom/etc/rc.d/S20-init.rsyncd

 

Now, either reboot or execute: bash /boot/custom/etc/rc.d/S20-init.rsyncd

 

Now go to your Synology and open "Hyper Backup" to create a new Data Backup Task:

Select rsync as Backup Destination:

And create the backup task with "rsync-compatible server" as Server Type:

Synology GateOne not opening anymore on Synology, flagged as "unsafe" due to an expired certificate

There are many reasons why GateOne sometimes does not open correctly on Synology. One of them is the expiration of your certificate.

Click to Read More

Notice that I am using my own packaging of GateOne for Synology (to be found here). In order to run properly, GateOne needs a copy of the certificates of your Synology. My package takes care of that during the installation. But if the system certificate expires or is renewed, GateOne will be in trouble until you copy the renewed certificate into its setup folder yourself. The symptoms are:

If you open GateOne in a DSM window, you get this: "The webpage at xxx might be temporarily down"

If you open GateOne in a new window, you get this: "Your Connection is not private", ERR_CERT_DATE_INVALID

If you click on Advanced, you see that "this server could not prove it is xxx : its security certificate expired xxx days ago":

If you click on "Proceed to xxx (unsafe)", GateOne will open in a new window. But if opened in the DSM, it will display the following messages:

To solve this, if you didn't renew your certificates yet, do it! Go to "Control Panel" > "Security" > "Certificate", select your certificates one by one, open the "Add" menu and select "Renew certificate".

An alternative is to open an SSH console (see here) and execute the command: /usr/syno/sbin/syno-letsencrypt renew-all -vv

Once the certificates are renewed, execute the following in an SSH console:

  • cp /usr/syno/etc/certificate/system/default/cert.pem /var/packages/MODS_GateOne/target/ssl/cert.pem
  • cp /usr/syno/etc/certificate/system/default/privkey.pem /var/packages/MODS_GateOne/target/ssl/privkey.pem
  • /usr/syno/bin/synopkg restart MODS_GateOne
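To check whether GateOne's copy of the certificate is stale, you can compare expiry dates with openssl. A hedged sketch, demonstrated on a throw-away self-signed certificate so it runs anywhere; on the NAS, point it at /usr/syno/etc/certificate/system/default/cert.pem and at GateOne's ssl/cert.pem and compare the two dates:

```shell
# generate a demo certificate valid for 1 day, standing in for the real cert.pem
CERT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
        -subj /CN=demo -days 1 -out "$CERT" 2>/dev/null

# print the expiry date (notAfter=...); run this on both copies and compare
openssl x509 -noout -enddate -in "$CERT"
```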

Notice: the path "/var/packages/MODS_GateOne/target" is only valid for GateOne installed with my own package. The path of the official package is probably "/usr/local/gateone/ssl/".