Category: Synology

  • Synology HyperBackup Explorer & StorJ S3

StorJ is a great solution to back up a Synology NAS with HyperBackup, and probably the cheapest (even if, below 1 TB, I prefer the “native” Synology C2 Storage). But how do you browse your backup and recover only a few files? The solution: mount StorJ’s S3 Bucket on your PC with Mountain Duck (as a Virtual Disk) and use HyperBackup Explorer to browse that “local backup”!

Synology’s HyperBackup Explorer can only browse a local backup or a Synology C2 Storage. It means that, when using a third-party S3, you would have to download your full backup locally to be able to retrieve just a few files… (which costs $7/TB with StorJ).

Indeed, the actual files (images, videos, documents, …) can’t be browsed within a HyperBackup image with a simple S3 explorer (like the free “S3 Browser” or “CyberDuck”). You only find “bucket” parts…
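You can see this by yourself with any generic S3 client: listing the bucket only returns HyperBackup’s internal files. A sketch with the AWS CLI, assuming the bucket created later in this post and Storj’s hosted gateway endpoint:

# Only HyperBackup's .hbk structure (chunks, indexes, ...) shows up,
# not the original photos or documents
aws s3 ls s3://demostorj4synology/ --recursive \
    --endpoint-url https://gateway.storjshare.io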

Fortunately, with a third-party S3 like StorJ (pronounced “Storage”), you can mount the Bucket as a local Virtual Disk on your PC… and then use HyperBackup Explorer. This is great as you can also easily navigate the timeline (versions) of your files…

First, as I discovered on DrFrankenstein’s Tech Stuff, here is how to create a StorJ S3 Bucket (you can start a StorJ account with 25 GB free for 30 days):

    As I am in the EU, I go to this SignUp page:

    Confirm your subscription (Attention: the verification email arrived in Gmail’s Spam !!!)

    Now select the “PERSONAL” account

    Enter your “Name”, choose “Backup & Recovery” and click “Continue”

    Click “Start Free Trial”

    And finally “Activate Free Trial”

    Ok, now that you have an account, you have to create a “Project” with a “Bucket” where you will upload “Objects” from your Synology:

In my case, a default “My Storj Project” had been created automatically. My first step was therefore to click on “Set a Passphrase”.

I chose to type my own passphrase (it will be requested each time you connect to StorJ’s Dashboard).

Once the passphrase is entered, the next step is to “Create a Bucket”, here named “demostorj4synology”, with the default retention settings applied during upload (I will let Hyper Backup manage the versioning).

    You are now nearly ready to upload Objects…

Now, you have to prepare an API Key and get a Secret Key, which will be used to configure the connection on your Synology.

Type a “Name” for this access and select the access type “S3 Credentials”. In my case I granted “Full Access” (i.e., HyperBackup will be able to Read/Write/List/Delete).

Once the “Access” is created, you will have to download, or at least copy, the “Access Key”, the “Secret Key” and the “Endpoint” to a secure place (not to be shared, obviously)!

Voilà, you are now ready to go to your Synology and configure Hyper Backup. First step: create a new “Backup” of “Folders and Packages”.

    You have to scroll down in the next screen to select “S3 Storage”

    Now:

    • Select “Custom Server URL” as “S3 Server”
    • Copy the “endpoint” you got when creating an “Access Key” as Server address
    • Select v4 as Signature Version
    • Keep Request Style as is
    • Enter your Access Key (Here is mine as I will delete it anyway after this demo)
    • Enter your Secret Key
• If your Endpoint, Access Key and Secret Key are correct, you should see your Bucket available as the bucket name.

You can now select the Folders to be included in your backup, as well as Synology Packages (beware that some packages also include all their files: e.g., Web Station includes the shared folder “web”, and Synology Drive Server includes the shared folder “homes”).

    Configure your Backup as you wish, regarding schedule and retention:

Now that the backup is running, it’s time to look at how to mount that backup as a Virtual Disk on your PC, to be able to browse it with HyperBackup Explorer. A free solution exists, but it is not for beginners: rclone.
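For the curious, the rclone approach boils down to something like this (a sketch, not a tutorial: the remote name “storj”, the drive letter and the endpoint are examples to adapt to your own Access Key):

# ~/.config/rclone/rclone.conf : declare the Storj S3 gateway as an S3 remote
[storj]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://gateway.storjshare.io

# Then mount the bucket read-only as a drive and point HyperBackup Explorer at it
rclone mount storj:demostorj4synology S: --read-only --vfs-cache-mode full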

There are a few user-friendly tools available for those not comfortable with command prompts and scripts… The cheapest is “Mountain Duck”. It’s the one I am using. Here is how to configure it. To make it easier, in the “Preferences”, you can enable the “Storj DCS” profile.

    So, now, add a connection

In the popup window:

• Select “Storj DCS” (or “Amazon S3” if you didn’t enable that profile) for the connection
• Enter a Label, e.g. “StorJ”
• In Server, type the Endpoint of StorJ, if not yet displayed (which would be the case if you selected “Amazon S3” as profile)
• Enter your “Access Key” (uncheck the box “Anonymous login”)
• Enter your “Secret Key”
• The other parameters are just fine, but you can force the Drive Letter if you prefer…

    Just in case you used the “Amazon S3” profile, the labels are different, but the config is the very same:

    Now, you should see a Disk R: mounted on your PC, exposing the Bucket from your StorJ S3.

    Install and run HyperBackup Explorer, then “Browse Local Backup”:

    Pick the .bkpi file to open the backup image and navigate your files

Et voilà:


  • Continuously monitor your Servers and Services

    In order to monitor your Servers and Services, nothing better than phpServerMonitor running in a container on Synology.

The container phpServerMonitor of scavin will do the trick (it’s possibly not the most up-to-date, but it works fine). You will only need a MySQL database hosted on your Synology.

    phpServerMonitor looks like this:

    First, install “MariaDB 10” (the default alternative to MySQL commonly used on Synology)

    Then, open MariaDB10, enable a port (Ex: the default 3306) and apply this change

    If it’s not yet done, install phpMyAdmin (not explained here)

    Launch it with your administrator account (Select MariaDB10 as Server)

    Create a DB named, for example, “servermon”:

    On that new DB, add a new user account:

Type a user name, for example “servermon”, select a strong password (take note of it) and, for the moment, tick the option “Check all” Global privileges. Once phpServerMonitor is installed and configured, you will keep only the “Data” access rights!
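If you prefer a SSH console over phpMyAdmin, the same database and user can be created with a few SQL statements (a sketch; on Synology, the MariaDB 10 client typically lives under /usr/local/mariadb10/bin/, and the password is of course a placeholder):

/usr/local/mariadb10/bin/mysql -u root -p <<'SQL'
CREATE DATABASE servermon;
CREATE USER 'servermon'@'%' IDENTIFIED BY 'YourStrongPasswordHere';
-- "Check all" for now, as above; to be tightened once the installation is done
GRANT ALL PRIVILEGES ON servermon.* TO 'servermon'@'%';
FLUSH PRIVILEGES;
SQL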

    Launch the “Container Manager” on your Synology and in the Registry, search for scavin/phpservermonitor and download that image.

Once downloaded, configure and run it. The only configuration required (a command-line equivalent is sketched after this list) is:

• mapping a port (e.g. 4080) of the Synology onto the container’s port 80 (443 won’t work) and
• removing the environment variable PHP_MD5 !!!
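For reference, the command-line equivalent of that Container Manager configuration would look roughly like this (a sketch, run via SSH):

# Map NAS port 4080 onto the container's port 80; PHP_MD5 is simply not passed
docker run -d --name phpservermonitor \
    --restart unless-stopped \
    -p 4080:80 \
    scavin/phpservermonitor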

    Now, navigate to http://<Your Synology Address>:<Port mapped> and click “Let’s GO”

    Complete the required info in the configuration screen:

• the Application base url should be completed automatically with http://<Your Synology IP>:<Port mapped>
• type the IP of your Synology as “database host” and the port enabled in MariaDB10 as “database port”
• enter the name of the database created previously, as well as the name and the password of the user added onto that DB.

    Click on “Save Configuration” and if “MariaDB” was accessible, you should see this:

    Click once more on “Save Configuration” and here you are:

“Go to your Monitor” to add Servers and Services. More info on the official website of phpServerMonitor 😉

Haaa, don’t forget to remove the “Structure” and “Administration” rights for the user “servermon” on the DB “servermon”, and all access rights at global level (if any):
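This can be done in phpMyAdmin, or with a few SQL statements (a sketch; the “Data” group roughly corresponds to SELECT/INSERT/UPDATE/DELETE):

/usr/local/mariadb10/bin/mysql -u root -p <<'SQL'
REVOKE ALL PRIVILEGES ON servermon.* FROM 'servermon'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON servermon.* TO 'servermon'@'%';
FLUSH PRIVILEGES;
SQL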

    Et voilà.


  • Continuously track your Internet Speed

    In order to monitor your internet connection and confirm that it’s stable, day and night, nothing better than a Speed Test Tracker running in a container on Synology.

The Speedtest Tracker of Henry Whitaker will do the trick. It’s based on Ookla’s Speedtest CLI, from which it records the results on a recurring basis.

    It looks like this:

It’s quite easy to install and run in a container on DSM 7.x. First, install the “Container Manager”:

    Then, register the image of henrywhitaker3/speedtest-tracker:

Next, create a new container with that image and enable auto-start (here, this container won’t be exposed via the Web Station).

    Finally, in the next tab:

    • Map a port of your Synology (Ex.: 7080) onto the port 80 of the container (and/or possibly 443).
    • Map a shared folder of your Synology (Ex.: /docker/Ookla) onto the folder /config of the container.

    Scrolling down in that tab, add an environment variable “OOKLA_EULA_GDPR” with the value “true”.
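For reference, the command-line equivalent of this container configuration would look roughly like this (a sketch using the example port and folder above):

docker run -d --name speedtest-tracker \
    --restart unless-stopped \
    -p 7080:80 \
    -v /volume1/docker/Ookla:/config \
    -e OOKLA_EULA_GDPR=true \
    henrywhitaker3/speedtest-tracker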

If you don’t set that variable, the container will fail to run (it will stop automatically after one or two minutes) and you will find the following message in the Log of the container:

Once the configuration is done, you can run the container. Wait for a few minutes before opening the Speedtest Tracker. Once the container is up, you should see this in the Logs of the container:

You can now open the Speedtest Tracker (using http(s)://<your NAS IP>:<port configured>) and configure the scheduler to run a test on a regular basis. For that purpose, go to the “Settings” menu. Here under, the pattern “* * * * *” is used to run a test every minute (as per this site).

The default speedtest server does not support such a high pace… It will soon stop and you will see these messages in the Logs:

The following schedule can be used to run every 10 minutes: */10 * * * *

You can possibly look for other speedtest servers, using this URL: https://www.speedtest.net/api/js/servers?engine=js&https_functional=true&limit=100&search=YOURLOCATION

Use the ID of a server found via that URL in the Settings of the Speedtest Tracker…

There are many more settings and other options to configure the container, but this minimal setup runs fine… And you will soon see whether your internet connection is stable or not:

    Et voilà.


  • Renewal of LetsEncrypt certificates on Synology after a move

After exporting all my certificates from an old NAS to a new one, I realized that they were not renewed automatically anymore. When trying to renew them manually via the DSM UI (Control Panel > Security > Certificate), a zip file with a CSR file (Certificate Signing Request) and a Key file was downloaded. I had no idea how to proceed with these, so I investigated why the automatic renewal was not working as on the old NAS. The reason was the lack of a “renew.json” file on the new NAS.



Before starting, I strongly advise exporting all the certificates, one by one, using the DSM UI (Control Panel > Security > Certificate)!!!

Connected to the NAS via SSH, I first tried to renew the certificates with the command: /usr/syno/sbin/syno-letsencrypt renew-all

Looking into /var/log/messages, I noticed that syno-letsencrypt was complaining about a missing renew.json file:

    syno-letsencrypt[19750]: syno-letsencrypt.cpp:489 can not find renew.json. [No such file or directory][/usr/syno/etc/certificate/_archive/XXXXXX]

    NB.: To get more details, the verbose version of the renewal can be useful: /usr/syno/sbin/syno-letsencrypt renew-all -vv

On Synology, there is one folder /usr/syno/etc/certificate/_archive/XXXXXX per certificate, where XXXXXX is the ID of the certificate. It is supposed to contain these files: cert.pem, chain.pem, fullchain.pem, privkey.pem and renew.json. And indeed, there was no renew.json file in the folder XXXXXX.

So, on the old NAS, I looked for the folder AAAAAA containing the same certificate as XXXXXX (once imported on another NAS, a certificate gets a new unique ID). Check the file /usr/syno/etc/certificate/_archive/INFO to identify the ID of the certificate.

Once the folder AAAAAA is identified, read its renew.json file, which looks like this:

{
   "account" : "/usr/syno/etc/letsencrypt/account/BBBBBB/",
   "domains" : "<your domain>",
   "server" : "https://acme-v02.api.letsencrypt.org/directory",
   "version" : 2
}

BBBBBB is the folder containing your letsencrypt user account, stored in the file /usr/syno/etc/letsencrypt/account/BBBBBB/info.json (Note: there can be several accounts if you used different contact emails for your various certificates).

Look on the new NAS for the folder ZZZZZZ equivalent to BBBBBB (by comparing the info.json files).

Once AAAAAA and its counterpart ZZZZZZ are determined, I created a file /usr/syno/etc/certificate/_archive/XXXXXX/renew.json on the new NAS, containing:

{
   "account" : "/usr/syno/etc/letsencrypt/account/ZZZZZZ/",
   "domains" : "<your domain>",
   "server" : "https://acme-v02.api.letsencrypt.org/directory",
   "version" : 2
}

And finally, I could successfully run the renewal: /usr/syno/sbin/syno-letsencrypt renew-all -vv

To renew only one certificate (for testing purposes, it’s safer than renew-all), use the folder name XXXXXX of the certificate: /usr/syno/sbin/syno-letsencrypt renew -c XXXXXX -vv

Attached to this post, find a script created by ChatGPT to help generate the renew.json files (copy it into /usr/syno/etc/certificate/_archive/ and run it there):

    renewCertificates
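That attachment is not reproduced here, but the idea boils down to something like the following sketch (untested, and only an illustration: it assumes a single LetsEncrypt account folder ZZZZZZ, and takes the domain from the certificate’s CN with openssl):

#!/bin/sh
# Sketch: create a missing renew.json in every certificate folder of _archive
ACCOUNT="/usr/syno/etc/letsencrypt/account/ZZZZZZ/"   # adjust ZZZZZZ to your account folder
cd /usr/syno/etc/certificate/_archive/ || exit 1
for dir in */ ; do
  id="${dir%/}"
  [ -f "$id/renew.json" ] && continue   # renew.json already there: skip
  [ -f "$id/cert.pem" ] || continue     # not a certificate folder: skip
  # The certificate's CN is normally the domain it was issued for
  domain=$(openssl x509 -in "$id/cert.pem" -noout -subject | sed 's/.*CN *= *//')
  cat > "$id/renew.json" <<EOF
{
   "account" : "$ACCOUNT",
   "domains" : "$domain",
   "server" : "https://acme-v02.api.letsencrypt.org/directory",
   "version" : 2
}
EOF
  echo "Created renew.json for $id ($domain)"
done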


  • Setup DSM 7.1 in a Virtual Synology DS3622xs+ using VMWare

    Here is a step by step “How-To create a Virtual Machine to emulate a DS3622xs+ running DSM 7.1” using VMWare and a Loader from redpill


I used a tutorial from the internet to create a VM in VMWare 16 and exported it as an ovf. Using that ovf, you can easily set up your own VM with a DS3622xs+ running DSM 7.1.

    1: Download and unzip this package, containing:

• the configuration of the virtual machine (dsm.ovf and two disk.vmdk files),
• the boot file for the DS3622xs+ (synoboot.vmdk) and
• the image of DSM 7.1 for the DS3622xs+ (DSM_DS3622xs+_42661.pat)

    2: Open VMWare Workstation. If you never configured the default location where you want to create your Virtual Machines, press CTRL-P to open the “Preferences” panel. There, set this default location (I am using E:\VMWare).

    3: Go next to “Windows Explorer”, in the folder where you have unzipped the package, and double click dsm.ovf. This is going to import the VM into VMWare. Type a name for the new virtual machine (I am using DSM3622xs+ 7.1) and click “Import”.

4: Here is the outcome of the importation. You can now click on “Power on this virtual machine”. If you want, you may also first increase the Memory, the Processors or the size of the Hard Disks 2 and 3 (do not touch the first Hard Disk, which contains the boot loader).

    5: 5 sec after starting the VM, the bootloader will run:


6: Open the “Synology Assistant”, which is more efficient than the page http://find.synology.com to find your VM, and after +/- 3 minutes (depending on the performance of your PC) click on “Search”. If you didn’t wait long enough, you will see an error message. In such a case, check that you have enabled the option “Allow compatibility with devices that do not support password encryption” in the preferences (via the gear icon in the top-right corner) and/or click “Search” again. I never had to do this more than 3 times.

    7: Finally, the Assistant will find your VM. A webpage should automatically open (Otherwise, double-click on the VM). Approve the EULA and click Install.

8: Click next on “Browse” to select the image of DSM 7.1 for the DS3622xs+ and select the file “DSM_DS3622xs+_42661.pat” in the folder where you unzipped the package downloaded previously.

9: Finally, click on “Next” and confirm that the installation can overwrite the disk… (You can safely tick the “I understand…”; it’s the virtual Hard Disk that is part of the VM. No worries 😉)

    10: The installation will take a few minutes

11: Once the installation is complete, you will see in VMWare Workstation that the VM is rebooting. On the installation page, you see a countdown.

    12: You should soon see that it’s installing packages before being finally ready to “start”

13: If you are requested to log in to the VM, do it as “admin” without a password, and configure it. Next, configure your new NAS (do not accept automatic updates!!!).

I skipped the creation of a Synology account and didn’t agree to Device Analytics or to the display of this NAS in the Web Assistant.

14: It’s now time to use Hard Disks 2 and 3 to create a first Volume. Open the DSM Main Menu and start the Storage Manager (then it’s just a next, next, next journey, depending on which Disk Array you want).

15: Et voilà, you now have a brand new DS3622xs+ with DSM 7.1-42661 (DO NOT UPGRADE!!! The NAS would go into a “Recoverable” state and I have no resolution for that).


  • Setup DSM 7.0.1 in a Virtual Synology DS918+ using VMWare

    Here is a step by step “How-To create a Virtual Machine to emulate a DS918+ running DSM 7.0.1” using VMWare and a Loader from redpill

THIS DOES NOT WORK ANYMORE (?): there is an issue between the .pat file version and the loader => Try DSM 7.1 with the DS3622xs+, available here.


As for my previous packaging (DSM 6.2.2 on DS918+), I simply used various tutorials from the internet to create a VM in VMWare 15 and exported it as an ovf.

    1: Download and unzip this package, containing:

• the configuration of the virtual machine (dsm.ovf, disk.vmdk and synoboot.vmdk),
• the boot file for the DS918+ (synoboot-flat.vmdk) and
• the image of DSM 7.0.1 for the DS918+ (DSM_DS918+_42218.pat)

    2: Open VMWare Workstation. If you never configured the default location where you want to create your Virtual Machines, press CTRL-P to open the “Preferences” panel. There, set this default location (I am using E:\VMWare).

    3: Go next to “Windows Explorer”, in the folder where you have unzipped the package, and double click dsm.ovf. This is going to import the VM into VMWare. Type a name for the new virtual machine (I am using DSM918+ 7.0.1) and click “Import”.

4: Here is the outcome of the importation. You can now click on “Power on this virtual machine”. If you want, you may also first increase the Memory, the Processors or the size of the Hard Disk 2 (do not touch Hard Disk 1, which contains the boot loader).

    5: 5 sec after starting the VM, the bootloader will run:


6: Open the “Synology Assistant”, which is more efficient than the page http://find.synology.com to find your VM, and after 1 minute click on “Search”. If you didn’t wait long enough, you will see an error message. In such a case, check that you have enabled the option “Allow compatibility with devices that do not support password encryption” in the preferences (via the gear icon in the top-right corner) and/or click “Search” again. I never had to do this more than 3 times.

    7: Finally, the Assistant will find your VM. A webpage should automatically open (Otherwise, double-click on the VM). Approve the EULA and click Install.

8: Click next on “Browse” to select the image of DSM 7.0.1 for the DS918+ and select the file “DSM_DS918+_42218.pat” in the folder where you unzipped the package downloaded previously.

9: Finally, click on “Next” and confirm that the installation can overwrite the disk… it’s the virtual SCSI Hard Disk that is part of the VM. No worries 😉

    10: The installation will take a few minutes

11: Once the installation is complete, you will see in VMWare Workstation that the VM is rebooting. On the installation page, you see a countdown.

    12: You should soon see that it’s installing packages before being finally ready to “start”

    13: You will now be able to login into the VM and configure it.

    14: It’s now time to use the Hard Disk 2 to create a first Volume. Open the DSM Main Menu and start the Storage Manager.

15: Et voilà, you now have a brand new DS918+ with DSM 7.0.1-43 Update 2.

     


  • How to download playlists from YouTube using Synology

It’s really easy to download playlists from YouTube with Synology’s Download Station. I do this to get new “No Copyright Music” locally, to be used later within my videos.


I am using the “Download Station” as it simply works, unlike many free or paid tools which pretend to work but usually fail (and are full of advertisements…).

In short… Assume that you want to get all the free music from the excellent YouTube Channel “VLOG No Copyright”:

• Visit their “PLAYLISTS” tab in your Browser (all such channels have a “PLAYLISTS” tab, as you can see in the screenshot below).
• Right-click on the link “VIEW FULL PLAYLIST” of the desired playlist.
  • This is important: don’t use any URL from the “HOME” tab or from any tile representing a playlist (with the “PLAY ALL”). It would only download one video, not the whole playlist.
• Select the menu “Copy link address”.
  • The copied URL should look like this: “https://www.youtube.com/playlist?list=xxxxxxxxxx”

     

    Now, go to your “Synology Download Station” and:

• Click on the large blue “+” sign. It opens a window to create new download tasks.
• Go to the tab “Enter URL“.
• Paste there the URL of the playlist copied previously.
  • A subfolder with the name of the playlist will automatically be created under the destination folder, and the videos will be saved in that subfolder.
• Select the option “Show Dialog to select…” if you want to download only some of the videos. Otherwise you can unselect that option.
• Click “OK”.

     

    You should now see a line as illustrated here below, “waiting” to start:

     

You have to wait until the Download Station has crawled the whole playlist and found all the files to be downloaded… Then, you will see the list of tracks:

     

During the download, you may see some failed tasks with the status “Error”. It happens from time to time, but it is usually a temporary issue. Sort the list within the Download Station on the “Status” column and select all the “Error” tasks. Then, right-click the list and select the menu “Resume”.

     

Personally, as soon as the music videos are downloaded, I extract the sound tracks and keep only those. There is plenty of software to do this, but one of my favorite options is to use ffmpeg directly on the Synology (you have to install that package). Simply type this command in a console, in the download path where the videos are saved:

    for i in *.mp4; do ffmpeg -i "$i" -codec:a libmp3lame -q:a 0 -map a "${i%.*}.mp3"; done

To process the mp4 files in all subfolders of the download path, I use the command:

    for d in *; do cd "$PWD/$d/"; for i in *.mp4; do ffmpeg -i "$i" -codec:a libmp3lame -q:a 0 -map a "${i%.*}.mp3"; done; cd ..; done

     

If you are using Chrome, I can suggest some extensions to help with downloading playlists…

1. The extension “Copy Selected Links”. Using that one, you can copy the URLs of all the playlists of a Channel at once. Select the whole text under the tab PLAYLISTS (it will then appear highlighted in blue, as illustrated on the screenshot here under). Next, right-click in an empty area and select the menu “Copy selected Links”.

    Now go into Notepad++, as we have to keep only the playlists. Paste the Clipboard into a new tab of Notepad++. Press CTRL-Home to come back to the top. Press CTRL-H to open the “Replace” window. Select “Regular Expression” at the bottom of that window and click the “Replace All” button once you have entered this into the field “Find What”:  .*watch.*\n (or .*watch.*\r\n if it does not work properly)

Now, you have two options. If you have no more than 50 playlists, you can paste them directly into the Download Station via the button “+” and the tab “Enter URL”, as explained previously. Otherwise, you have to save all your URLs into a .txt file (with Notepad++) and upload that file via the tab “Open a file” instead of the tab “Enter URL”.

     

2. The extension “Download Station” (or this version in the Chrome Web Store) to more easily download the playlists. Same principle as above, but instead of right-clicking “VIEW FULL PLAYLIST” to copy the playlist URL, you now have an extra menu “Synology Download Station” (or “Download with Download Station”).

     

Another trick, when available in the PLAYLISTS tab of a Channel: select and download the playlists “Sorted by Mood” or “Sorted by Genre”. This will help when you search for a particular type of music for your video. E.g., with the Channel “Audio Library”:

     

Advice: each time you add a playlist to the Download Station, check that the number of new download tasks matches the number of tracks in the playlist. Indeed, it happens that not all the videos are added into the Download Station, as it has limits: a total of max 2048 download tasks (the reason why you have to clean up all downloaded tasks before downloading new long playlists), a limit of 256 links per “file” uploaded, and a limit per playlist of 2048 minus the number of current download tasks.

Don’t forget that for most “No Copyright Music”, you have to mention the author in your videos or posts. Often, on YouTube, you have to click on “Show More” under the video to see the details of the license:

    The License is usually clear. Ex.:

To find the videos again later on YouTube and check the type of license, it’s important to keep:

    • The original name of the video when converting into mp3
    • The name of the playlist
    • The name of the channel

To do so, the easiest is to add some metadata into the mp3 extracted with ffmpeg. Once downloaded by the “Download Station”, your videos should be in subfolders named after their playlists. And the filenames usually contain the Artist, the name of the song and the name of the YouTube Channel. So, it’s easy to create the metadata. E.g., the playlist “Dance & Electronic Music | Vlog No Copyright Music” of Vlog has been downloaded into a subfolder named “Dance & Electronic Music _ Vlog No Copyright Music”, and the videos are all named like “Artist - Title (Vlog No Copyright Music).mp4”.

    So, to process that subfolder, I will use the command:

    for i in *.mp4; do playlist=${PWD##*/}; artist=${i% - *}; other=${i##* - }; title=${other% (Vlog*}; ffmpeg -i "$i" -codec:a libmp3lame -q:a 0 -map a -metadata artist="${artist}" -metadata title="${title}" -metadata album="Playlist: ${playlist}"  -metadata Publisher="From YouTube Channel: Vlog No Copyright" "${artist} - ${title}.mp3"; done

As you can see, I am using “Publisher” to store the Channel and “album” for the playlist… This is a personal choice! (More details on ID3 tags and ffmpeg here.)

Regarding the pattern matching used in the shell, it’s not always that simple and you will have to be creative… Just about the pattern matching (a worked example follows this list):

    • variable2=${variable1%xyz*} => variable2 is the shortest left part of variable1 before a substring xyz.
    • variable2=${variable1%%xyz*} => variable2 is the longest left part of variable1 before a substring xyz.
    • variable2=${variable1#*xyz} => variable2 is the shortest right part of variable1 after a substring xyz.
    • variable2=${variable1##*xyz} => variable2 is the longest right part of variable1 after a substring xyz.
    • in xyz, x and z may be blanks (in my case, “xyz” was ” – “)
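For example, applied to a typical filename from the Vlog playlist:

i="Artist - Title (Vlog No Copyright Music).mp4"
artist=${i% - *}         # strip the shortest " - *" suffix    => "Artist"
other=${i##* - }         # strip the longest "* - " prefix     => "Title (Vlog No Copyright Music).mp4"
title=${other% (Vlog*}   # strip the shortest " (Vlog*" suffix => "Title"
echo "$artist - $title"  # => "Artist - Title"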

As there are many duplicates in those playlists, I have written my own script to replace such duplicates with hardlinks (on my Synology). I can’t use jdupes or rmlint, as those do “binary comparisons”, which doesn’t work here due to the conversion via ffmpeg. Instead, I search for duplicates based on the filename and size only (per channel, they are usually unique anyway). Here is my script, for illustration purposes. It must be stored in an ANSI file (e.g. dedup.sh) and run with: sh dedup.sh go

    echo "Start looking for duplicates"

    find . -iname "*.mp3" -printf "%p ~ %f\n" | sort -f -k2 | uniq -Di -f1 > list.dup

    echo "Duplicates found"

    test="$1"
    previous=""
    previouspath=""
    skip=true
    while read p; do
    mp3name="${p#* ~ }"
    mp3path="${p%% ~ *}"
    if [[ "$mp3name" == "$previous" ]]; then
    mp3new="${mp3path%%.mp3}"
    node=$(ls -l "$mp3path" | grep -Po '^.{11}\s1\s.*$')
    if [[ "$node" == "" ]]; then
    echo " $mp3path is already a hardlink"
    else
    skip=false
    if [[ "$test" == "go" || "$test" == "test" ]]; then
    SIZE1=$(stat -c%s "$mp3path")
    SIZE2=$(stat -c%s "$previouspath")
    #Delta=$(awk "BEGIN{ printf \"%d\n\", sqrt((100 * ($SIZE2 - $SIZE1) / $SIZE1)^2)}")
    #if [[ $Delta > 1 ]]; then
    Delta=$(awk "BEGIN{ printf \"%d\n\", ($SIZE2 - $SIZE1)}")
    if [[ "$Delta" == "0" ]]; then
    mv "$mp3path" "$mp3new.old"
    ln "$previouspath" "$mp3path"
    echo " $mp3path now linked to original"
    else
    echo " $mp3path seems different from $previouspath"
    fi
    else
    echo " mv $mp3path $mp3new.old"
    echo " ln $previouspath $mp3path"
    fi
    fi
    else
    if [[ "$test" != "test" ]] || [[ "$skip" = true ]]; then
    previous=$mp3name
    previouspath=$mp3path
    echo "$mp3name has duplicate(s) (original in $mp3path)"
    else
    break
    fi
    fi
    done <list.dup

     

Note: Facebook will possibly remove the audio from your video even if it is a “No Copyright Music”. When you get such a notification from FB, you can simply click to restore the audio if you did mention the Author in your post or video.

Some YouTube Channels with “No Copyright Music”:

     

  • Sudoer file not working on Synology due to dots in its name

I spent one hour investigating why I couldn’t execute a command with sudo from a php script, although the user was authorized for that command within a sudoers file… The problem was a dot in the name of the sudoers file.


My php script is part of a package I have created to run on my Synology (DSM 7.x). It is running under an account named like my package: MODS_Package7.x

    That php script executes the following code:

$COMMAND = "sudo /usr/syno/bin/synopkg start '$PACKAGE' 2>&1";
exec($COMMAND, $output, $result);

My sudoers file was named /etc/sudoers.d/MODS_Package7.x and contained:

    MODS_Package7.x ALL=(ALL) NOPASSWD: /usr/syno/bin/synopkg

It didn’t work until I removed the “.”, renaming the sudoers file to /etc/sudoers.d/MODS_Package7_x.
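A quick way to verify that a sudoers drop-in is actually taken into account is to list, as root, what the account may run via sudo (a sketch):

# If the drop-in is ignored (e.g. because of a '.' in its name),
# the synopkg rule will not show up in this list
sudo -l -U MODS_Package7.x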

     

How stupid, but it’s indeed mentioned in the documentation:

    sudo will read each file in /etc/sudoers.d, skipping file names that end in ‘~’ or contain a ‘.’ character to avoid causing problems with package manager or editor temporary/backup files.

The /etc/sudoers.d/README file does not exist on Synology, but it can be found on other Linux distributions:

    
    #
    # As of Debian version 1.7.2p1-1, the default /etc/sudoers file created on
    # installation of the package now includes the directive:
    # 
    #   #includedir /etc/sudoers.d
    # 
    # This will cause sudo to read and parse any files in the /etc/sudoers.d 
    # directory that do not end in '~' or contain a '.' character.
    # 
    # Note that there must be at least one file in the sudoers.d directory (this
    # one will do), and all files in this directory should be mode 0440.
    # 
    # Note also, that because sudoers contents can vary widely, no attempt is 
    # made to add this directive to existing sudoers files on upgrade.  Feel free
    # to add the above directive to the end of your /etc/sudoers file to enable 
    # this functionality for existing installations if you wish!
    #
    # Finally, please note that using the visudo command is the recommended way
    # to update sudoers content, since it protects against many failure modes.
    # See the man page for visudo for more information.
    #

     

    Loading

  • Run a command as root on Synology with any user

Synology is reinforcing the security within its DSM, making it more and more difficult to execute scripts as root from packages. Here is my trick to do so, based on the use of ssh and php.


Theoretically, to run something as root, a user must be a sudoer, i.e. a user having the right to execute commands as root. It’s not difficult to configure a user to be a sudoer. You simply have to add that user into a file located under /etc/sudoers.d/, with a few parameters to describe what he can execute and whether he has to type his password or not. E.g., to let a user named “beatificabytes” execute the command ‘shutdown’ on a Synology:

1. Create a file (the name does not matter): /etc/sudoers.d/beatificabytes
2. Edit that file with vi (or nano if you installed that package). Notice that you should really be careful, as any error in that file will prevent you from logging in anymore: DANGER!!! That’s why it is usually highly recommended to use the command ‘visudo’ to edit such files (it checks the syntax before saving changes)… Unfortunately, this command is not available on Synology.
3. To grant rights without having to type a password, type this: beatificabytes ALL=(ALL) NOPASSWD: /sbin/shutdown

The problem I have with this approach is that I have many different users (each package runs with its own user) and I don’t want to define them all as sudoers.

One option would be to run all packages with the same user, who would be defined as a sudoer. This is possible via a privilege file added into the packages (/conf/privilege).
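If I remember the DSM packaging documentation correctly, that privilege file is a small JSON document; a minimal sketch (the exact schema should be checked against the DSM Developer Guide for your DSM version, and “YourSharedAccount” is a hypothetical user name):

{
   "defaults" : { "run-as" : "package" },
   "username" : "YourSharedAccount"
}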

But if, like me, you want to always use the same administrator account (who is already a sudoer), if you have php installed on your NAS and if you have enabled ssh, then you could simply open a ssh session with that admin account, from a php script, to execute your commands.

    Here is the php script I am using for such a purpose (saved into a file named ‘sudo.php’):

<?php
// Open a ssh session on the NAS itself and run a command as root via sudo.
// Usage: php sudo.php -u <admin> -p <password> -s <server> -o <ssh port> -c <command>
$options = getopt("u:p:s:o:c:");
$user = $options['u'];
$password = $options['p'];
$server = $options['s'];
$port = $options['o'];
$command = $options['c'];

if (!function_exists("ssh2_connect")) die("php module ssh2.so not loaded");
if (!($con = ssh2_connect($server, $port))) {
    echo "fail: unable to establish connection to '$server:$port'\n";
} else {
    // try to authenticate with username and password
    if (!ssh2_auth_password($con, $user, $password)) {
        // obfuscate the password (keep only its first and last characters) before echoing it
        if (strlen($password) > 2) {
            $pass = str_repeat("*", strlen($password) - 2);
        } else {
            $pass = "";
        }
        $pass = substr($password, 0, 1).$pass.substr($password, -1);
        echo "fail: unable to authenticate with user '$user' and password '$pass'\n";
    } else {
        // alright, we're in!
        echo "okay: logged in...\n";

        // execute the command as root ('sudo -S' reads the password from stdin)
        $command = "echo '$password' | sudo -S $command 2>&1 ";
        if (!($stream = ssh2_exec($con, $command))) {
            echo "fail: unable to execute command '$command'\n";
        } else {
            // collect the data returned by the command
            stream_set_blocking($stream, true);
            $data = "";
            while ($buf = fread($stream, 4096)) {
                $data .= $buf;
                echo $buf;
            }
            fclose($stream);
        }
    }
}
?>
    

And to call it, from a shell for example, assuming that you are using php7.3, type something like:

    php -dextension=/volume1/@appstore/PHP7.3/usr/local/lib/php73/modules/ssh2.so sudo.php -u YourAdminAccount -p YourAdminPassword -s 127.0.0.1 -o 22 -c "whoami"
    

Here above:

    • 22 is the port defined for ssh on my Synology
    • I executed the command ‘whoami’. So, the outcome will be “root”

    Et voilà.


  • Sync Plex Movies from Synology onto Unraid

I am managing my movies with Plex. It is installed both on my Synology NAS, which is running 24/7, and on my Unraid Server, which I turn on only for backup purposes.

    I am usually adding new movies first on my Synology. I copy them later onto my Unraid Server. To do so, I am using rsync.


ATTENTION: this only syncs the files, not the metadata.

     

    In each Plex, I have two libraries: Movies and Series TV

    On Synology, each library includes two folders:

    • Movies includes /volume1/plex/Movies and /volume1/plex/new Movies
    • Series TV includes /volume1/plex/Series and /volume1/plex/new Series 

    On Unraid, each library includes only one folder:

    • Movies includes /mnt/user/Movies
    • Series TV includes /mnt/user/Series

On Unraid, I have two shares, Movies and Series, to access respectively /mnt/user/Movies and /mnt/user/Series.

On the Synology NAS, I have mounted the shares of the Unraid Server as CIFS Shared Folders (a command-line equivalent is sketched after this list):

    • /<Unraid Server>/Movies on /volume1/mount/Movies
    • /<Unraid Server>/Series on /volume1/mount/Series
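For reference, mounting such a share from the command line would look roughly like this (a sketch; DSM normally does this for you via File Station > Tools > Mount Remote Folder, and the credentials are placeholders):

mount -t cifs "//<Unraid Server>/Movies" /volume1/mount/Movies \
    -o username=YourUnraidUser,password=YourUnraidPassword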

Each time I have a new movie or series, I copy it onto my Synology, respectively into /volume1/plex/new Movies or /volume1/plex/new Series.

All movies and series are first renamed using filebot. This guarantees that all are uniquely identified by title, year, resolution, season, episode, encoding, etc. According to Plex best practices, each movie is in its own folder.

    Once I have a few new media, I turn on my Unraid Server and launch the following script in a SSH console (using Putty) on the Synology:

#!/bin/bash

if grep -qs '/volume1/mount/Series ' /proc/mounts
then
    rsync --ignore-existing -h -v -r -P -t /volume1/plex/New\ Series/ /volume1/mount/Series/
else
    echo "Cannot sync new Series as Zeus is not mounted on /mount/Series"
fi

if grep -qs '/volume1/mount/Movies ' /proc/mounts
then
    rsync --ignore-existing -h -v -r -P -t /volume1/plex/New\ Movies/ /volume1/mount/Movies/
else
    echo "Cannot sync new Movies as Zeus is not mounted on /mount/Movies"
fi

    Next, on Synology, I move all movies and series respectively from /volume1/plex/new Movies and /volume1/plex/new Series into /volume1/plex/Movies or /volume1/plex/Series (*)

Then, to be sure I don’t have the same movie twice on the Unraid Server (with distinct encodings or resolutions), I run this command in an SSH console on Unraid:

find /mnt/user/Movies -type f -iname '*.mkv' -printf '%h\n' | sort | uniq -c | awk '$1 > 1'

It does not work for the Series, as each folder (a season) obviously contains several episodes…

     

This only syncs the files! There is no easy way to also sync the metadata between the two Plex installations.

    But voilà….

     


(*) Doing so, the fine-tunings done in Plex while the movie was under “new Movies” are not lost. Temporarily, the movie will appear as “deleted” in Plex. Above all, do not “Empty Trash”! Soon (depending on how many movies you moved), it will be “available” again. I tested that trick explicitly:

    1. Take a new movie:

    2. Open it:

    3. Check that path (here under, it is under /New Movies):

    4. Edit some info for testing purpose (here under, the “Original Title”):

    5. Change also the poster:

6. Using Windows Explorer or the File Station, move the folder of the movie into its new location. The movie will very soon appear as unavailable:

7. Open it:

8. Wait… Soon it will become available again:

9. Check now the path (here under, it is now under /Movies):

10. As you can see, the chosen cover is still there. And editing the details, you would see that the original title is still “DEMO MOVE FOLDER within Plex”.
